Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available. In this article, we review 17 topics that are essential for every Kubernetes system admin deploying and managing Kubernetes containers.
For those who are not familiar with Kubernetes and container technology, reading Overview of Kubernetes evolution from virtual servers and Kubernetes architecture is highly recommended.
1- Setting up Kubernetes cluster
Kubernetes consists of a combination of multiple open-source components. These are developed by different parties, which makes it difficult to find, download, install, and configure all of the related packages from scratch. Fortunately, several solutions and tools have been developed to set up Kubernetes clusters effortlessly, so it is highly recommended that you use one of them to set up Kubernetes in your environment. The following tools are categorized by the type of solution they offer for building your own Kubernetes:
- Self-managed solutions that include: minikube, kubeadm, kubespray, and kops
- Enterprise solutions that include: OpenShift (https://www.openshift.com) and Tectonic (https://coreos.com/tectonic/)
- Cloud-hosted solutions that include: Google Kubernetes Engine (https://cloud.google.com/kubernetes-engine/), Amazon Elastic Container Service for Kubernetes (Amazon EKS, https://aws.amazon.com/eks/), and Azure Container Service (AKS, https://azure.microsoft.com/en-us/services/container-service/)
A self-managed solution is suitable if we just want to build a development environment or do a proof of concept quickly.
2- Using Kubernetes via CLI or RESTful API
From here you can start creating different kinds of resources on the Kubernetes system. To realize your application in a microservices structure, a good working knowledge of the Kubernetes command-line interface (CLI) is a solid first step toward understanding Kubernetes resources and how they fit together. After you deploy applications to Kubernetes, you can take advantage of its scalable and efficient container management, and also fulfill a DevOps delivery procedure for microservices.
Working with Kubernetes is quite easy, using either a CLI or API (RESTful). After you install Kubernetes master, you can run a kubectl command to see system version, etc.
kubectl is the primary command for Kubernetes clusters, and it controls the Kubernetes cluster manager. Virtually any container or Kubernetes cluster operation can be performed with a kubectl command. In addition, kubectl accepts input either through the command line's optional arguments or from a file (with the -f option); using a file is highly recommended, because it lets you maintain your Kubernetes configuration as code.
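As a minimal sketch of the file-based workflow (the file name, resource name, and image tag are illustrative), a Pod can be described in a manifest and handed to kubectl rather than assembled from command-line flags:

```yaml
# nginx-pod.yaml -- a minimal Pod manifest
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
# Create it with:  kubectl create -f nginx-pod.yaml
# Inspect it with: kubectl get pods
```

Keeping such files in version control gives you configuration-as-code: the cluster state can be reviewed, diffed, and reproduced.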
3- Linking Pods and containers
A Pod is a group of one or more containers and the smallest deployable unit in Kubernetes. Pods are always co-located and co-scheduled, and run in a shared context. Each Pod is isolated by the following Linux namespaces:
- The process ID (PID) namespace
- The network namespace
- The interprocess communication (IPC) namespace
- The Unix time sharing (UTS) namespace
In a pre-container world, these containers would have been executed on the same physical or virtual machine. It is useful to construct your own application-stack Pod (for example, a web server and a database) composed of different Docker images.
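A two-container application-stack Pod could be sketched like this (names and images are illustrative); because both containers share the Pod's network namespace, they can reach each other on localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-stack
spec:
  containers:
  - name: web               # front-end container
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: cache             # back-end container in the same Pod
    image: redis:7
    ports:
    - containerPort: 6379   # "web" can reach this at localhost:6379
```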
4- Managing Pods with ReplicaSets
A ReplicaSet is a Kubernetes API object that refers to Pod replicas. The idea is to be able to control the behavior of a set of Pods: the ReplicaSet ensures that a user-specified number of Pods is running at all times. If some Pods in the ReplicaSet crash and terminate, the system automatically recreates Pods with the original configuration on healthy nodes, keeping a certain number of processes running continuously. By changing the size of the set, users can easily scale the application out or in. Thanks to this feature, whether or not you need replicas of Pods, you can always rely on ReplicaSets for auto-recovery and scalability.
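A minimal ReplicaSet manifest might look like the following sketch (names, labels, and the replica count are illustrative; the apps/v1 API applies to recent clusters):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3               # Kubernetes keeps exactly 3 matching Pods running
  selector:
    matchLabels:
      app: nginx            # which Pods this ReplicaSet owns
  template:                 # the Pod to (re)create when the count drops
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

If a node fails or a Pod is deleted, the controller notices the count has fallen below 3 and creates a replacement from the template.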
5- Deployment API
The Deployment API was introduced in Kubernetes version 1.2 as a replacement for the replication controller. With a replication controller, rolling-update and rollback were achieved on the client side (via the kubectl command and REST API), so kubectl had to stay connected while updating a replication controller. Deployments, on the other hand, take care of rolling-update and rollback on the server side: once a request is accepted, the client can disconnect immediately. The Deployments API is therefore designed as a higher-level API for managing ReplicaSet objects. This section explores how to use the Deployments API to manage ReplicaSets.
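A Deployment looks much like a ReplicaSet but adds the server-side update machinery; in this sketch (names and image tags are illustrative), the commented kubectl lines show a rolling update and a rollback that proceed even if the client disconnects:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
# Server-side rolling update to a new image:
#   kubectl set image deployment/nginx-deploy nginx=nginx:1.26
# Roll back to the previous revision:
#   kubectl rollout undo deployment/nginx-deploy
```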
6- Working with Services
A network service is an application that receives requests and provides responses. Clients access the service over a network connection; they do not have to know the architecture of the service or how it runs. The only things clients have to verify are that the service's endpoint is accessible, after which they follow its usage policy to get the server's response. A Kubernetes Service follows the same idea: it is not necessary to understand every Pod before using its functionality. Components outside the Kubernetes system simply access the Kubernetes Service through an exposed network port to communicate with the running Pods, without being aware of the individual containers' IPs and ports. Behind a Kubernetes Service, we can also carry out a zero-downtime update of our container programs without any struggle.
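A Service routes traffic to whatever Pods currently match its label selector, which is what decouples clients from individual container IPs; a minimal sketch (names, labels, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx          # traffic goes to any Pod carrying this label
  ports:
  - port: 80            # port clients connect to on the Service
    targetPort: 80      # port the container actually listens on
```

When Pods behind the selector are replaced during an update, the Service keeps routing to whichever replicas are healthy, which is how zero-downtime updates work from the client's point of view.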
7- Working with volumes
Files in a container are ephemeral: when the container terminates, the files are gone. Docker introduced data volumes to help us persist data. However, in a multi-host container cluster, it is hard to manage volumes across all the containers and hosts for file sharing or dynamic provisioning. Kubernetes introduces the volume, which lives with a Pod across the container life cycle. It supports various types of volumes, including popular network disk solutions and storage services of different public clouds.
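The simplest volume type to sketch is emptyDir, which outlives container restarts but is deleted with the Pod; here two containers share one volume (names, images, and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-data
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data       # writer sees the volume here
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data       # reader sees the same files here
  volumes:
  - name: scratch
    emptyDir: {}             # lives as long as the Pod, across container restarts
```

For durable storage, the emptyDir entry would be swapped for a network-disk or cloud volume type, or a PersistentVolumeClaim.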
8- Using storage classes
In the cloud world, people provision storage and data volumes dynamically. While a PersistentVolumeClaim is based on an existing, static PersistentVolume provisioned by administrators, it would be very beneficial if a cloud volume could be requested dynamically when it is needed. Storage classes are designed to solve this problem. For storage classes to be available in your cluster, three conditions must be met: first, the DefaultStorageClass admission controller has to be enabled; second, the PersistentVolumeClaim needs to request a storage class; and finally, administrators have to configure a storage class in order for dynamic provisioning to work.
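The administrator-side and user-side halves could be sketched as follows (the class name, provisioner, and sizes are illustrative; the provisioner depends entirely on your cloud or storage backend):

```yaml
# Administrator: define the class once
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd   # example in-tree GCE provisioner
parameters:
  type: pd-ssd
---
# User: claim storage; the volume is provisioned on demand
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: fast            # requests dynamic provisioning from "fast"
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```

No pre-created PersistentVolume is needed: binding the claim triggers the provisioner to create one.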
9- Working with Secrets
Kubernetes Secrets manage information in key-value format, with the values encoded. A Secret can hold a password, an access key, or a token. With Secrets, users do not have to expose sensitive data in configuration files; Secrets reduce the risk of credential leaks and keep our resource configurations better organized. Currently, there are three types of Secrets:
- Generic/Opaque: the default type, which you can use in your application
- Docker registry: used to store the credentials of a private Docker registry
- TLS: used to store a CA certificate bundle for cluster administration
Kubernetes also creates built-in Secrets for the credentials used to access the API server.
Before using Secrets, keep in mind that a Secret should always be created before the Pods that depend on it, so those Pods can reference it properly. In addition, Secrets have a 1 MB size limit. That works fine for defining a small bundle of information in a single Secret, but Secrets are not designed for storing large amounts of data. For configuration data, consider using ConfigMaps; for large amounts of non-sensitive data, consider using volumes instead.
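A generic/Opaque Secret could be sketched like this (the name and credentials are illustrative; note that the values are base64-encoded, not encrypted):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=          # base64 of "admin"
  password: cGFzc3dvcmQ=      # base64 of "password"
# A dependent Pod can then read a key as an environment variable:
#   env:
#   - name: DB_USER
#     valueFrom:
#       secretKeyRef:
#         name: db-credentials
#         key: username
```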
10- Working with names
When you create any Kubernetes object, such as a Pod, Deployment, or Service, you assign a name to it. Names in Kubernetes are unique within a namespace for each resource type, which means you cannot assign the same name to two Pods in the same namespace.
Kubernetes allows us to assign a name with the following restrictions:
- Up to 253 characters
- Lowercase alphabetic and numeric characters
- May contain special characters in the middle, but only dashes (-) and dots (.)
11- Working with Namespaces
In a Kubernetes cluster, the name of a resource is a unique identifier within a Namespace. Using Kubernetes Namespaces, you can separate user spaces for different environments in the same cluster, which gives you the flexibility to create isolated environments and partition resources across different projects and teams. You can think of a Namespace as a virtual cluster. Pods, Services, and Deployments are contained in a certain Namespace, while some low-level resources, such as nodes and persistentVolumes, do not belong to any Namespace. Before we dig into the Namespace resource, let's understand kubeconfig and some of its keywords first.
- kubeconfig is the name used for the file that configures access permissions to Kubernetes clusters. By default, Kubernetes takes $HOME/.kube/config as the kubeconfig file.
- kubeconfig defines users, clusters, and contexts: it lists multiple users to define authentication, and multiple clusters to indicate the Kubernetes API servers. A context in kubeconfig is the combination of a user and a cluster: accessing a certain Kubernetes cluster with a certain kind of authentication.
- Users and clusters are sharable between contexts; however, each context can have only a single user and a single cluster definition.
- A Namespace can be attached to a context: every context can be assigned to an existing Namespace. If none is given, the context goes with the default Namespace, named default.
- The current context is the default environment for the client: we may have several contexts in kubeconfig, but only one current context. The current context and the Namespace attached to it constitute the default computing environment for users.
Now you get the idea: because Namespaces work with kubeconfig, users can easily switch their default resources by switching the current context in kubeconfig. Nevertheless, users can still create any resource in a different Namespace by specifying one explicitly.
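Putting the keywords together, a simplified kubeconfig might be sketched like this (the cluster address, user, token, and Namespace are all illustrative placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: dev-cluster
  cluster:
    server: https://dev.example.com:6443   # Kubernetes API server
users:
- name: alice
  user:
    token: REDACTED                        # authentication credential
contexts:
- name: dev                                # context = user + cluster (+ Namespace)
  context:
    cluster: dev-cluster
    user: alice
    namespace: development                 # default Namespace for this context
current-context: dev                       # the client's default environment
# Switch contexts with: kubectl config use-context dev
```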
12- Working with labels and selectors
Labels are sets of key/value pairs attached to object metadata. We can use labels to select, organize, and group objects, such as Pods, ReplicaSets, and Services. Labels are not necessarily unique; different objects can carry the same set of labels. Label selectors are used to query objects whose labels match expressions of the following types:
- Use equal (= or ==) or not-equal (!=) operators
- Use in or notin operators
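Both kinds of expression can be sketched in a selector fragment like the following (label keys and values are illustrative); equality-based selectors also have a compact command-line form:

```yaml
# Fragment of a ReplicaSet/Deployment spec:
selector:
  matchLabels:
    app: nginx                   # equality-based: equivalent to app == nginx
  matchExpressions:
  - key: tier
    operator: In                 # set-based: tier must be one of these values
    values: ["frontend", "backend"]
# Equivalent query on the command line:
#   kubectl get pods -l 'app=nginx,tier in (frontend,backend)'
```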
13- Scaling your containers
Scaling an application or service up and down based on predefined criteria is a common way to use compute resources in the most efficient way. In Kubernetes, you can scale up and down manually, or use a Horizontal Pod Autoscaler (HPA) to do it automatically.
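Both approaches could be sketched as follows (the Deployment name, replica bounds, and CPU threshold are illustrative; the autoscaling/v1 API shown here is the simplest CPU-based form):

```yaml
# Manual scaling:
#   kubectl scale deployment nginx-deploy --replicas=5
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:                      # what the HPA resizes
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deploy
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # add Pods when average CPU exceeds 50%
```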
14- Updating live containers
One of the benefits of containers is that we can easily publish new programs by running the latest image, reducing the headache of environment setup. But what about updating the program on running containers? When managing containers natively, we have to stop the running containers before booting up new ones with the latest images and the same configuration. In the Kubernetes system, there are simple and efficient methods for updating your program. One is called rolling-update, which means the Deployment can update its Pods without downtime for clients; the other is called recreate, which simply terminates all Pods and then creates a new set.
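The choice between the two methods is made per Deployment in its strategy field; a sketch of the relevant fragment (the surge/unavailability limits are illustrative):

```yaml
# Fragment of a Deployment spec:
spec:
  strategy:
    type: RollingUpdate      # or: Recreate (terminate all Pods, then start new ones)
    rollingUpdate:
      maxUnavailable: 1      # at most one Pod below the desired count during the update
      maxSurge: 1            # at most one extra Pod above the desired count
```

With RollingUpdate, old and new Pods briefly coexist behind the Service, which is what keeps clients from seeing downtime.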
15- Forwarding container ports
Here you should learn how to work with Kubernetes Services to forward container ports internally and externally. You also need to know how Kubernetes handles internal and external communications. There are four networking models in Kubernetes, as follows:
- Container-to-container communications
- Pod-to-pod communications
- Pod-to-service communications
- External-to-internal communications
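For the external-to-internal case, one common approach is a NodePort Service, sketched here (names, labels, and ports are illustrative; nodePort must fall in the cluster's configured range, 30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-external
spec:
  type: NodePort           # exposes the Service on every node's IP address
  selector:
    app: web
  ports:
  - port: 80               # port inside the cluster
    targetPort: 80         # port the container listens on
    nodePort: 30080        # externally reachable port on each node
```

Cloud-hosted clusters typically use type LoadBalancer instead, which provisions an external load balancer in front of the same mechanism.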
16- Submitting Jobs on Kubernetes
Your container applications are designed not only for daemon processes such as nginx, but also for batch Jobs that eventually exit when the task is complete. Kubernetes supports this scenario: you can submit a container as a Job, and Kubernetes will dispatch it to an appropriate node and execute it.
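A minimal Job might be sketched like this (the name, image, and command are illustrative); unlike a long-running Pod, its restartPolicy must be Never or OnFailure so the container is allowed to exit:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 1                 # the Job is done after one successful run
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never       # required for Jobs; the container exits when finished
# Check the result with: kubectl logs job/pi
```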
17- Working with configuration files
Kubernetes supports two configuration file formats, YAML and JSON. Either format can describe the same Kubernetes functionality.
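The equivalence is easy to see with a small object; here the same Namespace is written both ways (the name is illustrative, and since YAML is a superset of JSON the second form is shown as a comment):

```yaml
# YAML form (e.g. demo-ns.yaml):
apiVersion: v1
kind: Namespace
metadata:
  name: demo
# Equivalent JSON form (e.g. demo-ns.json), also accepted by kubectl -f:
# {
#   "apiVersion": "v1",
#   "kind": "Namespace",
#   "metadata": { "name": "demo" }
# }
```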
Now that you have learned about 17 essential topics for Kubernetes deployment and management, we can move on to more advanced topics in the 9 advanced topics for deploying and managing Kubernetes containers article. A good understanding of microservices, and of how to migrate from a monolithic architecture to microservices, is also crucial for doing advanced work with Kubernetes containers. Here is a good article to learn more about microservices.