Kubernetes is a platform for running containers.
It takes care of starting your containerized applications, rolling out updates, maintaining service levels, scaling to meet demand, securing access, and much more.
The two core concepts in Kubernetes are:
- the API, which you use to define your applications
- the cluster, which runs your applications
A cluster is a set of individual servers that have all been configured with a container runtime like Docker, and then joined into a single logical unit with Kubernetes.
Kubernetes is a container orchestrator. Some nodes in the cluster run the Kubernetes API, whereas others run application workloads, all in containers.
Each node has a container runtime installed. Kubernetes supports multiple options, including Docker, containerd, and CRI-O.
The Kubernetes API runs in containers on Linux nodes, but the cluster can include other platforms. Joining Windows nodes to your cluster lets you run Linux and Windows apps in containers with Kubernetes.
The Kubernetes cluster is there to run your applications. You define your apps in YAML files and send those files to the Kubernetes API. Kubernetes looks at what you’re asking for in the YAML and compares it to what’s already running in the cluster. It makes any changes it needs to get to the desired state, which could be updating a configuration, removing containers, or creating new containers.
Defining the structure of the application is your job, but running and managing everything is down to Kubernetes.
Kubernetes manages more than just containers, which is what makes it a complete application platform. The cluster has a distributed database, and you can use that to store both configuration files for your applications and secrets like API keys and connection credentials.
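For instance (a hypothetical sketch, not part of the labs below), you could store configuration and credentials in the cluster like this:
# store plain configuration settings in a ConfigMap:
kubectl create configmap app-config --from-literal=LOG_LEVEL=info
# store sensitive values in a Secret:
kubectl create secret generic app-creds --from-literal=API_KEY=example-key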
Kubernetes also provides storage, so your applications can maintain data outside of containers, giving you high availability for stateful apps.
Kubernetes also manages network traffic coming into the cluster by sending it to the right containers for processing.
YAML files are properly called application manifests, because they’re a list of all the components that go into shipping the app. Those components are Kubernetes resources.
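To give you a feel for a manifest (a minimal, hypothetical example; the labs below use kubectl run instead), this defines a single Pod and sends it to the API with kubectl apply:
# apply a minimal Pod manifest straight from stdin:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello-web        # name of the Pod resource
spec:
  containers:
    - name: web          # the single container in the Pod
      image: nginx:alpine
EOF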
One point you should know: the components of Kubernetes itself need to run as Linux containers. You can’t run Kubernetes in Windows (although you can run Windows apps in containers with a multinode Kubernetes cluster), so you’ll need a Linux virtual machine (VM) if you’re working on Windows. Docker Desktop sets that up and manages it for you.
You don’t need to use Docker with Kubernetes, but it is the easiest and most flexible way to package your apps so you can run them in containers with Kubernetes.
Lab environment:
In our lab, we have a single-node cluster with Docker and k3s.
To get the cluster up and running, install Docker and k3s (a minimal Kubernetes distribution):
# install Docker:
curl -fsSL https://get.docker.com | sh
# install k3s; the --docker flag uses Docker as the container runtime,
# --disable=traefik skips the bundled ingress controller, and
# --write-kubeconfig-mode=644 makes the kubeconfig readable by non-root users:
curl -sfL https://get.k3s.io | sh -s - --docker --disable=traefik --write-kubeconfig-mode=644
There are other ways to install a cluster, but for these exercises we’ll keep it simple.
Once the installation completes, your cluster is ready.
Verify your cluster:
# list the nodes in the cluster:
linuxadmin@node1k8:~/kuberneteslearning$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1k8 Ready control-plane,master 14h v1.26.3+k3s1
kubectl is the Kubernetes command-line tool. We can use kubectl to work with local and remote clusters.
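As a sketch of how that works (the context name here is hypothetical), kubectl stores cluster connection details as contexts in a kubeconfig file, and you can switch between them:
# list the clusters kubectl knows about:
kubectl config get-contexts
# switch to another cluster:
kubectl config use-context my-remote-cluster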
Kubernetes runs containers for your application workloads, but the containers themselves are not objects you need to work with. Every container belongs to a Pod, which is a Kubernetes object for managing one or more containers, and Pods, in turn, are managed by other resources. These higher-level resources abstract away the details of the container, which powers self-healing applications and lets you use a desired-state workflow: you tell Kubernetes what you want to happen, and it decides how to make it happen.
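As a sketch of that workflow (a hypothetical example using a Deployment, one of those higher-level resources; it isn’t used in the labs below):
# declare the desired state: one Deployment managing an nginx Pod:
kubectl create deployment web --image=nginx
# Kubernetes creates and manages the Pod to match that state:
kubectl get pods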
A container is a virtualized environment that typically runs a single application component. Kubernetes wraps the container in another virtualized environment: the Pod.
A Pod is a unit of compute, which runs on a single node in the cluster. The Pod has its own virtual IP address, which is managed by Kubernetes, and Pods in the cluster can communicate with other Pods over that virtual network, even if they’re running on different nodes.
We normally run a single container in a Pod, but you can run multiple containers in one Pod, which opens up some interesting deployment options.
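As an illustration (a hypothetical manifest, not used in the labs), two containers in one Pod share the same network namespace, so a sidecar can reach the web server on localhost:
# apply a two-container Pod manifest from stdin:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx:alpine
    - name: sidecar       # polls the web container over the shared localhost
      image: busybox
      command: ['sh', '-c', 'while true; do wget -qO- http://localhost; sleep 60; done']
EOF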
Lab exercise 1-1:
Let’s run a simple Pod without a YAML file.
# run a Pod with a single container; kubectl run creates just the Pod and no other resources:
kubectl run nginx --image=nginx --restart=Never
# list all the Pods in the cluster:
linuxadmin@node1k8:~/kuberneteslearning$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 23s
# show detailed information about the Pod:
linuxadmin@node1k8:~$ kubectl describe pod nginx
Name: nginx
Namespace: default
Priority: 0
Service Account: default
Node: node1k8/10.9.0.4
Start Time: Sun, 02 Apr 2023 08:24:56 +0000
Labels: run=nginx
Annotations: <none>
Status: Running
IP: 10.42.0.14
IPs:
IP: 10.42.0.14
Containers:
nginx:
Container ID: docker://87c1e4477b10caaabf9d3a7ca1cf8a5cf2e89b2a59e7c2291f9bf92c86bbef49
Image: nginx
Image ID: docker-pullable://nginx@sha256:2ab30d6ac53580a6db8b657abf0f68d75360ff5cc1670a85acb5bd85ba1b19c0
Port: <none>
Host Port: <none>
State: Running
Started: Sun, 02 Apr 2023 08:25:03 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cjbfj (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-cjbfj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32m default-scheduler Successfully assigned default/nginx to node1k8
Normal Pulling 32m kubelet Pulling image "nginx"
Normal Pulled 32m kubelet Successfully pulled image "nginx" in 6.157814872s (6.157823772s including waiting)
Normal Created 32m kubelet Created container nginx
Normal Started 32m kubelet Started container nginx
Kubernetes doesn’t really run containers, though—it passes the responsibility for that to the container runtime installed on the node, which could be Docker or containerd or something more exotic.
That’s why the Pod is an abstraction. It’s the resource that Kubernetes manages.
Lab exercise 1-2:
# specify custom columns in the output, selecting network details:
linuxadmin@node1k8:~$ kubectl get pod nginx --output custom-columns=NAME:metadata.name,NODE_IP:status.hostIP,POD_IP:status.podIP
NAME NODE_IP POD_IP
nginx 10.9.0.4 10.42.0.14
# specify a JSONPath query in the output, selecting the ID of the first container in the Pod:
linuxadmin@node1k8:~$ kubectl get pod nginx -o jsonpath='{.status.containerStatuses[0].containerID}'
docker://87c1e4477b10caaabf9d3a7ca1cf8a5cf2e89b2a59e7c2291f9bf92c86bbef49
JSONPath is an alternative output format that supports complex queries. This query fetches the ID of the first container in the Pod. There is only one in this case, but there could be many, and the first index is zero.
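Another query in the same style fetches the Pod’s virtual IP address:
# select the Pod IP with JSONPath:
kubectl get pod nginx -o jsonpath='{.status.podIP}'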
Takeaways:
Querying the output from commands is a useful way to see the information you care about, and because you can access all the details of the resource, it’s great for automation too.
The other takeaway is a reminder that Kubernetes does not run containers—the container ID in the Pod is a reference to another system that runs containers.
Pods are allocated to one node when they’re created, and it’s that node’s responsibility to manage the Pod and its containers. It does that by working with the container runtime using a known API called the Container Runtime Interface (CRI). The CRI lets the node manage containers in the same way for all the different container runtimes.
It uses a standard API to create and delete containers and to query their state. While the Pod is running, the node works with the container runtime to ensure the Pod has all the containers it needs.
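On nodes where the runtime is driven through the CRI directly (e.g. containerd; our lab uses Docker instead), you can query the runtime with the crictl tool, for example:
# list running containers via the CRI (hypothetical for this lab's Docker setup):
sudo crictl ps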
Lab exercise 1-3:
# find the Pod’s container:
linuxadmin@node1k8:~$ docker container ls -q --filter label=io.kubernetes.container.name=nginx
ffa4fd6eefb2
# now delete that container:
linuxadmin@node1k8:~$ docker container rm -f $(docker container ls -q --filter label=io.kubernetes.container.name=nginx)
ffa4fd6eefb2
# check the Pod status:
linuxadmin@node1k8:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 1 4m50s
# and find the container again:
linuxadmin@node1k8:~$ docker container ls -q --filter label=io.kubernetes.container.name=nginx
38153f26c530
Takeaways:
Kubernetes will automatically recreate a container if it is removed or fails, based on the restartPolicy defined for the Pod. The restartPolicy defines how the container should be restarted if it fails or is terminated.
By default, the restartPolicy is set to Always, which means that Kubernetes will always try to restart the container. If a container is terminated, Kubernetes will create a new container with the same configuration to replace it.
If the restartPolicy is set to OnFailure, Kubernetes will only restart the container if it fails. If the container exits with a zero exit code, indicating a successful termination, Kubernetes will not restart the container.
If the restartPolicy is set to Never, Kubernetes will not try to restart the container if it is terminated or fails. In this case, you would need to manually create a new pod with a new container.
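A quick sketch of the difference (a hypothetical Pod, not part of the labs): with OnFailure, a container that exits successfully stays stopped:
# run a container that exits with code 0; OnFailure means no restart:
kubectl run oneshot --image=busybox --restart=OnFailure -- sh -c 'echo done'
# STATUS shows Completed and RESTARTS stays at 0:
kubectl get pod oneshot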
It’s the abstraction from containers to Pods that lets Kubernetes repair issues like this. A failed container is a temporary fault; the Pod still exists, and the Pod can be brought back up to spec with a new container.
Lab exercise 1-4:
We have deployed nginx, which is a web application.
From the node, we can ping the Pod IP where the application is running.
linuxadmin@node1k8:~$ ping 10.42.0.15
PING 10.42.0.15 (10.42.0.15) 56(84) bytes of data.
64 bytes from 10.42.0.15: icmp_seq=1 ttl=64 time=0.082 ms
64 bytes from 10.42.0.15: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 10.42.0.15: icmp_seq=3 ttl=64 time=0.104 ms
^C
We can also telnet to the default application port, 80:
linuxadmin@node1k8:~$ telnet 10.42.0.15 80
Trying 10.42.0.15...
Connected to 10.42.0.15.
Escape character is '^]'.
^CConnection closed by foreign host.
But you can’t browse to the app from another machine, because we haven’t configured Kubernetes to route network traffic to the Pod.
Port forwarding is a feature of kubectl. It starts listening for traffic on your local machine and sends it to the Pod running in the cluster.
kubectl can forward traffic from a node to a Pod, which is a quick way to communicate with a Pod from outside the cluster. You can listen on a specific port on your machine (which is the single node in your cluster) and forward traffic to the application running in the Pod.
# listen on port 8080 on your machine and send traffic to the Pod on port 80:
linuxadmin@node1k8:~$ kubectl port-forward pod/nginx 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
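While the port forward is running, you can test it from a second terminal on the same machine; you should see the nginx welcome page HTML:
# send a request through the forwarded port:
curl http://localhost:8080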
Takeaway:
There are actually two ways to access the services inside a Pod.
Kubernetes port-forward and exposing a pod externally are two different ways to access the services running inside a pod, and they have different use cases.
Port-forwarding allows you to create a secure tunnel between your local machine and the Kubernetes cluster, and forward traffic to a specific port on a pod. This means that you can access the service running inside the pod as if it were running on your local machine, without the need to expose the pod externally. Port-forwarding is useful for debugging or testing purposes, where you need to access a service running inside a pod that is not exposed to the outside world.
Exposing a pod externally, on the other hand, creates a Kubernetes service that maps a port on the cluster node to a port on the pod, making it accessible from outside the cluster. This is useful for making your application accessible to users or other services outside of the Kubernetes cluster.
In the previous example, we used port forwarding to access the service from the local machine.
Lab exercise 1-5:
Let’s expose a Pod externally in the Kubernetes cluster.
linuxadmin@node1k8:~$ kubectl expose pod nginx --type=NodePort --port=80
service/nginx exposed
linuxadmin@node1k8:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 16h
nginx NodePort 10.43.20.48 <none> 80:31654/TCP 1s
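You can verify the service using the node’s IP address and the NodePort from the output above (31654 here; your port will differ):
# request the app via the node IP and the assigned NodePort:
curl http://10.9.0.4:31654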
Steps with details:
- Identify the name of the pod that you want to access. You can use the kubectl get pods command to list all the pods in the cluster.
- Check if the pod has a service associated with it. You can use the kubectl get svc command to list all the services in the cluster. If there is no service associated with the pod, you will need to create one.
- If the pod does not have a service associated with it, create one by running the following command:
kubectl expose pod <pod-name> --type=NodePort --port=<port-number>
Replace <pod-name> with the name of the pod you want to expose and <port-number> with the port number the application in the pod is listening on.
- Once the service is created, use the kubectl describe svc <service-name> command to get the NodePort number assigned to the service. Replace <service-name> with the name of the service you just created.
- Get the IP address of any node in the cluster. You can use the kubectl get nodes -o wide command to list all the nodes along with their IP addresses. Note down the IP address of any node.
- In your web browser, enter the IP address of the node, followed by a colon and the NodePort number you obtained above.
- For example, if the IP address of the node is 192.168.1.100 and the NodePort number is 31654, you would enter http://192.168.1.100:31654 in your browser.
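To finish the walk-through with the service created in this lab (a sketch using the names from the exercise above; your NodePort will differ):
# look up the NodePort assigned to the service:
kubectl describe svc nginx | grep NodePort
# find the node's IP address:
kubectl get nodes -o wide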