
1. Kubernetes components

The Kubernetes control plane is the fleet of cooperating processes that makes a Kubernetes cluster work. Although you might only interact directly with a few of these components, it's important to understand what each one does and the role it plays. In this section of the course, you'll see how a Kubernetes cluster is constructed, part by part. This will help illustrate why a Kubernetes cluster that runs in GKE is easier to manage than one you provision yourself.

First, a cluster needs computers, and these computers are usually virtual machines. They always are in GKE, but they could be physical computers too. One computer is called the control plane, and the others are called nodes. The nodes' job is to run Pods, and the control plane's job is to coordinate the entire cluster.

Let's look at the control-plane components. Several critical Kubernetes components run on the control plane. First is the kube-apiserver, the only component that you interact with directly. Its job is to accept commands that view or change the state of the cluster, including launching Pods. You issue those commands with kubectl, a command-line tool whose job is to connect to the kube-apiserver and communicate with it using the Kubernetes API. The kube-apiserver also authenticates incoming requests, determines whether they are authorized and valid, and manages admission control. But it's not just kubectl that talks to the kube-apiserver: any query or change to the cluster's state must be addressed to it.

Next is etcd, the cluster's database. Its job is to reliably store the state of the cluster. This includes all the cluster configuration data, along with more dynamic information such as which nodes are part of the cluster, which Pods should be running, and where they should be running.
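To make this concrete: "launching a Pod" really means submitting a Pod object to the kube-apiserver, which validates it and records it in etcd. A minimal sketch of such an object follows; the names and image tag are illustrative, not from the course:

```yaml
# A minimal Pod object. Submitting it with `kubectl apply -f pod.yaml`
# sends it to the kube-apiserver, which stores the desired state in etcd.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # illustrative name
spec:
  containers:
    - name: nginx
      image: nginx:1.25  # illustrative image tag
```

Note that applying this manifest only records intent; the components described next are what turn that record into a running container.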
You'll never interact with etcd directly; instead, the kube-apiserver interacts with the database on behalf of the rest of the system.

Next is kube-scheduler, which is responsible for scheduling Pods onto nodes. Kube-scheduler evaluates the requirements of each individual Pod and selects the most suitable node. However, it doesn't do the work of actually launching Pods on nodes (that's done by another component). Instead, whenever it discovers a Pod object that doesn't yet have an assigned node, it chooses a node and writes that node's name into the Pod object. How does kube-scheduler decide where to run a Pod? It knows the state of all the nodes, and it obeys constraints you define about where a Pod can run, based on hardware, software, and policy. For example, you might specify that a certain Pod is only allowed to run on nodes with a certain amount of memory. You can also define affinity rules, which cause groups of Pods to run on the same node, or anti-affinity rules, which ensure that certain Pods do not run on the same node.

The kube-controller-manager component has a broader job: it continuously monitors the state of the cluster through the kube-apiserver. Whenever the current state of the cluster doesn't match the desired state, kube-controller-manager attempts to make changes to achieve the desired state. It's called the controller manager because many Kubernetes objects are maintained by loops of code called controllers, which handle the process of remediation. You can use certain Kubernetes controllers to manage workloads. For example, remember our problem of keeping three nginx Pods running at all times? They can be gathered into a controller object called a Deployment, which runs and scales them and brings them together under a front end. Other controllers have system-level responsibilities. For example, the node controller's job is to monitor and respond when a node goes offline.
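As a sketch, the "three nginx Pods" example could be expressed as a Deployment like the one below. The memory request and anti-affinity rule are illustrative additions to show the kinds of scheduling constraints just described; they are not part of the original example:

```yaml
# Illustrative Deployment: kube-controller-manager keeps three nginx Pods
# running, and kube-scheduler honors the placement constraints below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment          # illustrative name
spec:
  replicas: 3                     # desired state: three Pods at all times
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          resources:
            requests:
              memory: "256Mi"     # only schedule onto nodes with this much free memory
      affinity:
        podAntiAffinity:          # anti-affinity: spread replicas across nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: nginx
              topologyKey: kubernetes.io/hostname
```

If the cluster's current state drifts, say a node fails and a Pod disappears, the Deployment's controller notices the mismatch with `replicas: 3` and creates a replacement Pod, which kube-scheduler then assigns to a node.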
The cloud-controller-manager component manages controllers that interact with the underlying cloud provider. For example, if you manually launched a Kubernetes cluster on Compute Engine, cloud-controller-manager would be responsible for bringing in Google Cloud features like load balancers and storage volumes.

Now let's shift our focus to the nodes. Each node runs a Kubernetes agent called the kubelet. When the kube-apiserver wants to start a Pod on a node, it connects to that node's kubelet. The kubelet uses the container runtime to start the Pod, monitors its lifecycle, including readiness and liveness probes, and reports back to the kube-apiserver. A container runtime, as mentioned earlier in this course, is the software used to launch a container from a container image. Kubernetes offers several container runtime choices, but the Linux distribution that GKE uses for its nodes launches containers with containerd, the runtime component of Docker.

And finally, there is the kube-proxy component, which maintains network connectivity among the Pods in a cluster. In open-source Kubernetes, it does this by using the firewalling capabilities of iptables, which are built into the Linux kernel.

We've seen that the Kubernetes control plane is a complex management system. But how is GKE different from Kubernetes? From the user's perspective, it's a lot simpler. GKE manages all the control-plane components for us. It still exposes an IP address to which we send all of our Kubernetes API requests, but GKE is responsible for provisioning and managing all the control-plane infrastructure behind it, so we never need to set up a control plane of our own. Node configuration and management depend on which GKE mode you use.
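As a sketch of the readiness and liveness probes the kubelet runs, a Pod's container spec might declare HTTP checks like these; the name, path, port, and timings are illustrative:

```yaml
# Illustrative container spec: the kubelet on the node runs these probes
# and reports the results back to the kube-apiserver.
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod          # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25
      readinessProbe:       # is the container ready to receive traffic?
        httpGet:
          path: /
          port: 80
        periodSeconds: 5
      livenessProbe:        # is the container still healthy, or should it be restarted?
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 10
```

A failing readiness probe removes the Pod from service endpoints, while a failing liveness probe causes the kubelet to restart the container.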
With Autopilot mode, which is recommended, GKE manages the underlying infrastructure, including node configuration, autoscaling, auto-upgrades, baseline security configuration, and baseline networking configuration. With Standard mode, you manage the underlying infrastructure yourself, including configuring the individual nodes.

2. Let's practice!
