Most technical presentations dive straight into complex concepts and configurations, leaving beginners drowning in jargon before understanding why any of it matters. But what if someone took a different approach? This article is adapted from an internal tech talk where Mihai Scornea, Junior Software Engineer at R Systems, tackled one of the most complex topics in modern development with an ambitious goal: explain Kubernetes in one hour without relying on syntax and technical terms.
The Universe Before the Apple Pie
Carl Sagan once said, “If you wish to make an apple pie from scratch, you must first invent the universe.” This might sound like an exaggeration, but it’s perfectly true. If you start with absolutely nothing, you don’t even have the fabric of reality to work with.
Imagine trying to explain what an apple pie is to an alien from another dimension. They don’t know what flour, sugar, or apples are. The rules of physics could be completely different from ours. You would truly have to explain our entire universe to them, and you’d have to do it in their language.
This is exactly the challenge we face when trying to explain Kubernetes to someone. We humans simply aren’t built to easily understand such complex concepts – not immediately, at least. So, to help people understand Kubernetes, I’ll start from the very beginning.
Kubernetes is a solution to a problem. Explained by itself, it doesn’t make much sense. But if we first understand the problem it fixes, then we can truly see how Kubernetes works.
The Developer’s Dilemma
Picture this common scenario: You write a program that works beautifully on your machine, then you send it to the client. But the client’s machine is vastly different from yours: it might not have the right operating system or it could be missing crucial libraries.
For example, your program might require Java 21. You developed it on Linux, and the client should be able to run it on Windows – but they need Java 21 installed, and they don’t know how to install it. That becomes a problem. This is a simple example, but some programs require dozens of things installed and configured just right before they work. The fact that the program works on the developer’s machine isn’t that useful – what matters is that the client can actually use it. Unfortunately, we can’t just ship the developer’s laptop with every program.
Enter Virtual Machines
Some very smart people asked themselves: “What if we could?” They figured out a way to simulate a computer inside another computer, and virtual machines were born. All you need is software called a hypervisor that simulates virtual hardware using physical hardware. The client just needs a powerful computer and the hypervisor installed, and now we can fully simulate the developer’s computer on the client’s computer.
Instead of developing directly on their physical machine, the developer can install a hypervisor, test software on a virtual machine, make sure it works, then save the virtual machine as a file and send it to the client. The client runs it in their own hypervisor, and it comes with every library and dependency already installed.
But virtual machines aren’t perfect. What if we could make something lighter, faster, and more modular?
The Hotel Metaphor
Let me tie these computer concepts to something more familiar. Imagine our computer is a hotel – but not a regular hotel. This is a hotel where guests can bring their own food recipes to the waiters when they go to the restaurant.
The code of a computer program is very similar to a cooking recipe. You have:
- Kitchen space and tables (RAM) – where chefs work and place ingredients
- Empty bowls (variables) – containers for storing values
- Food processing like mixing two ingredients together (CPU operations) – taking values and performing operations
- The chef’s hands (CPU registers) – temporarily holding data during operations
- Extra tools like frying pans (libraries) – additional functionality needed for recipes
- The freezer (storage/hard drives) – where data is permanently stored
- Waiters (the kernel) – who read recipes line by line and coordinate everything
In a normal computer setup, there are no rooms in this hotel. All guests hang out in the same public area, and the hotel already has frying pans and spatulas ready (libraries are pre-installed). The problem arises when guests want different versions of tools – some might want a newer frying pan than others. Programs might require different versions of dependencies, and conflicts emerge.
The Container Solution
Building a separate virtual machine for every program is like building an entire hotel for every guest. It’s expensive – you need separate staff, separate buildings, and it takes a lot of time and resources. It’s as if we had a hotel chain to manage.
Smart people analyzed this problem and realized that only the guests really fight over some things. The rest of the hotel infrastructure (disk space, CPU, RAM, and kernel) remains pretty much the same in all cases. So, they came up with a brilliant idea: What if every guest just thought they were the only guest? What if we gave them individual rooms with room service?
This is what a container runtime like Docker does. Docker builds walls around guests and acts as room service. Whatever a guest wants gets sent down to the waiters (the kernel), who instruct the chefs (hardware). The kernel does its work and returns output back to the room through the Docker engine.
The guests have no idea what’s happening outside their rooms – they all think they have their own hotel. There’s a catch: every guest must bring all the tools necessary for their recipe. The hotel no longer provides any tools at all, just access to CPU and RAM. These rooms that hold programs and their dependencies are called containers.
Guests can also be given access to the host computer’s storage system and network. They obtain access to storage through something called volumes and to the network interface through a form of port forwarding. For volumes, the host computer can “share” a folder on its file system and mount it at a location inside the container. The container thinks that folder is in its own room as it modifies files in it, when, in reality, it is modifying files on the host computer.
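As a concrete sketch of this idea, here is what running a container with a mounted volume and a forwarded port might look like with Docker (the image name and folder paths are just illustrations):

```shell
# Start an nginx container in the background:
#   -p forwards port 8080 on the host to port 80 inside the container,
#   -v mounts the host folder ./site at /usr/share/nginx/html in the container.
docker run -d \
  -p 8080:80 \
  -v "$(pwd)/site:/usr/share/nginx/html" \
  nginx

# Requests to the host's port 8080 now reach the container's port 80:
curl http://localhost:8080
```

From inside its room, the container only sees `/usr/share/nginx/html`; it has no idea the files actually live in the host’s `./site` folder.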
From Containers to Kubernetes
Containers solve the dependency problem beautifully, but new challenges emerge:
- Resource exhaustion: If you need many containers, you might fill up your host machine
- Scaling complexity: The solution is to get more computers and run containers on them, but this requires manual tracking of everything
- High availability: If you want multiple containers of the same type across machines for redundancy, you need complex routing and load balancing
- Management overhead: You could manage two or three machines manually, but what about hundreds?
This is the problem that Kubernetes solves. Kubernetes excels at coordinating many computers to run containers and managing everything about them exactly the way you want. With Kubernetes, you can have 1000 computers or more, and they’ll all work toward your goal.
Kubernetes: The Orchestration Layer
Think of Kubernetes as a sophisticated hotel management system that coordinates multiple hotels (computers) to provide seamless service to guests (containers).
Core Components of a Kubernetes Cluster
A Kubernetes cluster has many components working together to manage where containers run and how the networking between them works. Some of them form the control plane (the management office of our hotel complex), while others run on every single machine. Luckily, they all have jobs similar to people working in a hotel (or, in our case, a hotel complex with multiple buildings), so they can all be described in familiar terms:
The Container Runtime: This is the component that runs the actual containers on our machines. It behaves much like Docker: it can be told to run new containers or delete existing ones, and it will do so. It also handles the other aspects like port forwarding and mounting volumes, just like Docker. This is our room service and also our housekeeping.
The Kubelet: This is also part of the housekeeping of our hotel – or rather, the manager of the housekeeping. A kubelet runs in every single hotel, and its job is to instruct the container runtime which containers should be moving in and out of that hotel. The container runtime then runs these containers.
The kube-api-server: This is the receptionist of our hotel complex. Every single transfer of information about how the hotel manages its guests goes through the receptionist. Every other component only talks to the receptionist. Even we, when we want to make a phone call and book a room for our container, talk to this receptionist. The receptionist stores all information about the hotel in a guestbook called the etcd database.
The etcd database: This is the guestbook that our kube-api-server receptionist writes information to, and where it reads from when the other employees ask for information.
The kubectl command line interface: This is the phone line we can use to book rooms for our containers in the hotel. We can use commands like “kubectl get pods” to get a list of all the occupied rooms in the hotel. This phone line talks directly to the kube-api-server receptionist.
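A few phone calls to the receptionist might look like this (all of them talk to the kube-api-server under the hood; “my-pod” is a placeholder name):

```shell
kubectl get pods              # list the occupied rooms (pods) in the current namespace
kubectl get nodes             # list the buildings (machines) in the hotel complex
kubectl describe pod my-pod   # ask for the full record the receptionist keeps on one guest
kubectl logs my-pod           # read what a guest has been writing down (container logs)
```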
The kube-scheduler: This is like a reservation planner. When we ask the kube-api-server for a room in a hotel, the kube-scheduler is responsible for finding a suitable hotel with a room big enough for our guest. Some containers might require more resources than are available on a given computer in the cluster, so they will be scheduled on a machine that has enough. The kube-scheduler tells the kube-api-server where the containers should be placed, and the kube-api-server notes it down in the etcd database. When the kubelets check in via the kube-api-server, the kubelet responsible for the building assigned to the container will make sure to deploy it.
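The resource requirements the kube-scheduler weighs are declared in the pod specification itself. A minimal, illustrative manifest might look like this (the names, image, and amounts are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hungry-guest        # illustrative name
spec:
  containers:
    - name: app
      image: my-app:1.0     # illustrative image
      resources:
        requests:           # the scheduler only places this pod on a machine
          cpu: "500m"       # with at least half a CPU core free...
          memory: "256Mi"   # ...and 256 MiB of memory to spare
```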
The kube-controller-manager: This one performs logical operations on the data stored in the etcd database. For example, say we call the kube-api-server receptionist using the kubectl phone line and tell them we want to move a team of 11 football player containers into the hotel (equivalent to applying a replica set of 11 identical containers). The kube-api-server will note this down in the etcd database, and the kube-controller-manager will continuously compare this desired state with reality. If only 9 players are checked in, it asks the kube-api-server to book 2 more rooms until the numbers match; if a player leaves (a pod crashes), it books a replacement.
CoreDNS: This is our information desk. As you will see later, multiple pods of the same type can be placed on the same floor to make them easy to reach. CoreDNS can tell us which floor the pods we want are assigned to. Each floor has an IP address instead of a floor number. For example, we can ask “where is the football-players floor?” and it will tell us “floor 10.96.0.42”. We can then ask the elevator operator for that floor, and they will make sure we reach the rooms we need.
The kube-proxy: This is our elevator operator. It makes sure we reach the right rooms when we ask for a particular floor. In practice, when we define such a floor (a Kubernetes service), kube-proxy is responsible for creating all the networking rules necessary for us to reach a room on that floor when we only access the floor’s IP. When we access a floor, we are randomly routed to one of the pods (containers) on that floor.
The CNI plugin: Even though each floor has an IP address, each room has one too, and they are all on a network called the internal Kubernetes overlay network. The CNI plugin is responsible for assigning IP addresses to the rooms and making sure they can all reach each other by these addresses. It is the building planner.
Core Objects Deployed on Kubernetes
Pods: The smallest unit in Kubernetes – an enclosure that can house one or more containers. Unlike individual containers, pods have their own IP address in the internal Kubernetes network, and the containers within a pod share this IP address and can share storage volumes. People often use the terms pod and container interchangeably; however, a pod can contain one or more containers.
Services: Stable IP addresses tied to pods with particular labels. When you make a request to a service, it routes your request to one of the matching pods, providing built-in load balancing. These are the floors of the hotel mentioned earlier. Moreover, all hotels have glass bridges between their floors so that going to one floor gives us access to all the pods on that floor from all hotels.
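In manifest form, the floor-and-rooms relationship is expressed through labels: a service selects every pod carrying a matching label, wherever it runs. A sketch (the names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: football-players    # resolvable via CoreDNS as "football-players"
spec:
  selector:
    team: football          # routes to every pod labeled team=football
  ports:
    - port: 80              # the floor's port...
      targetPort: 8080      # ...forwarded to this port in each room (pod)
```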
Replica Sets: Ensure you always have the desired number of identical pods running. If one crashes, the replica set automatically creates a replacement. This is like booking rooms for a football team of 11 players with identical needs as mentioned earlier. Such a replica set can be seen in the image with the service, for the nginx pods.
Deployments: Control replica sets and enable zero-downtime updates through rolling updates – imagine changing an airplane’s engines mid-flight, one by one. These, combined with services, guarantee smooth version upgrades: we can replace an entire hotel floor with new versions of the pods seamlessly. Gradually scaling down one replica set while scaling up another like this is called a rolling update.
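A deployment manifest declares how many identical pods we want, and Kubernetes keeps that number true. A minimal sketch, tying together the objects above (the image and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: football-team
spec:
  replicas: 11              # always keep 11 identical players checked in
  selector:
    matchLabels:
      team: football
  template:                 # the "room requirements" stamped onto every replica
    metadata:
      labels:
        team: football      # the label the service above would select on
    spec:
      containers:
        - name: player
          image: my-app:1.0 # illustrative image
```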
The Magic of Rolling Updates
Here’s where Kubernetes truly shines. Imagine you have all your pods managed by a replica set and you want to update them. One method is to delete all pods and start an updated replica set, but this creates downtime – users get errors and complaints.
Kubernetes deployments can perform rolling updates instead. During a rolling update:
- Kubernetes slowly removes pods from the old version
- Simultaneously adds pods with the new version
- Traffic gets served by both versions during the transition
- Users might get mixed responses but never see errors
- The update completes seamlessly with zero downtime
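The steps above are controlled by the deployment’s update strategy, and the update itself can be triggered by simply changing the image version. An illustrative fragment of a deployment spec:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one old pod is taken away at a time
      maxSurge: 1         # at most one extra new pod runs during the transition
```

With that in place, pointing the deployment at a new version (e.g. “kubectl set image deployment/football-team player=my-app:2.0”, using the illustrative names from earlier) starts the gradual swap automatically.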
This is the magic trick that modern applications use to never go down, even during updates.
Why This Matters
Kubernetes abstracts away the complexity of managing distributed systems. Instead of manually configuring load balancers, tracking which containers run where, and orchestrating updates across multiple machines, you declare what you want, and Kubernetes makes it happen.
It’s the difference between manually conducting a symphony with hundreds of musicians versus having an automated system that ensures every musician plays their part perfectly, in harmony, without you needing to coordinate each individual note.
Conclusion
Understanding Kubernetes doesn’t require memorizing syntax or technical specifications. It requires understanding the problems it solves:
- Dependency conflicts → Containers provide isolation
- Manual scaling → Kubernetes automates container orchestration
- Complex deployments → Rolling updates eliminate downtime
- Resource management → Kubernetes optimally distributes workloads
Like Carl Sagan’s apple pie, once you understand the universe that Kubernetes operates in (the problems of modern software deployment), the solution becomes not just comprehensible, but elegant.
The next time someone asks you about Kubernetes, don’t start with pods and services. Start with the developer who needs their program to work everywhere, the client who can’t install dependencies, and the hotel that found a way to give every guest their own perfect room while sharing the same building.
That’s Kubernetes: the universe that makes the modern software apple pie possible.
This article is based on one of our regular internal tech talks, where team members from across our global offices share their expertise and insights with colleagues. These sessions are part of our commitment to fostering a culture of continuous learning and knowledge sharing – whether you’re a junior engineer with a fresh perspective or a senior architect with years of experience, everyone has something valuable to contribute. If you’re interested in joining a team that values both personal growth and collective expertise, explore our open roles.