Table of contents
- Introduction
- What is Kubernetes?
- Why Do You Need Containers?
- Kubernetes Features
- What Tasks Are Performed by Kubernetes?
- Why use Kubernetes?
- Kubernetes Basics
- Kubernetes Architecture
- Other Key Terminologies
- Kubernetes vs. Docker Swarm
- Installation and Setup
- Using Kubectl to Interact With Kubernetes
- Create a Pod
- Create a Deployment
- Scale a Deployment
- Expose a Service
- Using Port Forwarding
- Apply a YAML File
- Advantages of Kubernetes
- Disadvantages of Kubernetes
- Summary
Introduction
Kubernetes is the most popular orchestrator for deploying and scaling containerized systems. You can use Kubernetes to reliably build and distribute your applications in the cloud.
In this beginner's guide, you'll learn what Kubernetes can do and how to get started running your own containerized solutions.
What is Kubernetes?
Kubernetes is an open-source system that automates container deployment tasks. It was originally developed at Google but is now maintained as part of the Cloud Native Computing Foundation (CNCF).
Kubernetes has risen to prominence because it solves many of the challenges around using containers in production. It makes it easy to launch limitless container replicas, distribute them across multiple physical hosts, and set up networking so users can reach your service.
Most developers begin their container journey with Docker. While this is a comprehensive tool, it’s relatively low-level and relies on CLI commands that interact with one container at a time. Kubernetes provides much higher-level abstractions for defining applications and their infrastructure using declarative schemas you can collaborate on.
Why Do You Need Containers?
Today's internet users never accept downtime, so developers need a way to perform maintenance and updates without interrupting their services.
Containers address this. A container is an isolated environment that includes everything an application needs to run, which makes it easy for developers to edit and deploy apps. Containerization has therefore become a preferred method for packaging, deploying, and updating web apps.
Kubernetes Features
Kubernetes has a comprehensive feature set that includes a full spectrum of capabilities for running containers and associated infrastructure:
Automated rollouts, scaling, and rollbacks – Kubernetes automatically creates the specified number of replicas, distributes them onto suitable hardware, and takes action to reschedule your containers if a node goes down. You can instantly scale the number of replicas on-demand or in response to changing conditions such as CPU usage.
Service discovery, load balancing, and network ingress – Kubernetes provides a complete networking solution that covers internal service discovery and public container exposure.
Stateless and stateful applications – While Kubernetes initially focused on stateless containers, it now also has built-in objects to represent stateful applications. You can run any kind of application in Kubernetes.
Storage management – Persistent storage is abstracted by a consistent interface that works across providers, whether in the cloud, on a network share, or a local filesystem.
Declarative state – Kubernetes uses object manifests in YAML files to define the state you want to create in your cluster. Applying a manifest instructs Kubernetes to automatically transition the cluster to the target state. You don’t have to manually script the changes you want to see.
Works across environments – Kubernetes can be used in the cloud, at the edge, or on your developer workstation. Many different distributions are available to match different use cases. Major cloud providers like AWS and Google Cloud offer managed Kubernetes services, while single-node distributions such as Minikube and K3s are great for local use.
Highly extensible – Kubernetes packs in a lot of functionality, but you can add even more using extensions. You can create custom object types, controllers, and operators to support your own workloads.
With so many features available, Kubernetes is ideal for any situation where you want to deploy containers with declarative configuration.
What Tasks Are Performed by Kubernetes?
Kubernetes acts like the Linux kernel for distributed systems. It abstracts away the underlying hardware of the nodes (servers) and offers a consistent interface to applications that consume the shared pool of resources.
Why use Kubernetes?
Kubernetes helps you control resource allocation and traffic management for cloud applications and microservices, and it simplifies various aspects of service-oriented infrastructure. Kubernetes lets you control where and when containerized applications run, and helps you find the resources and tools you want to work with.
Kubernetes Basics
Now in this Kubernetes tutorial, we will learn some important basics of Kubernetes:
Cluster: A collection of hosts (servers) that aggregates their available resources, including CPU, RAM, disk, and devices, into a usable pool.
Master: The master is the collection of components that make up the control plane of Kubernetes. These components make all cluster decisions, including scheduling and responding to cluster events.
Node: A single host, which can be a physical or virtual machine. Each node runs the kubelet and kube-proxy, which make it part of the cluster.
Namespace: A logical cluster or environment. It is a widely used method for scoping access or dividing a cluster.
Kubernetes Architecture
The Kubernetes architecture consists of a master node (the control plane) and one or more worker nodes:
Master Node
The master node is the first and most vital component, responsible for the management of the Kubernetes cluster. It is the entry point for all kinds of administrative tasks. There may be more than one master node in the cluster for fault tolerance.
The master node has various components, such as the API Server, Controller Manager, Scheduler, and etcd. Let's look at each of them.
API Server: The API server acts as an entry point for all the REST commands used for controlling the cluster.
Scheduler
The scheduler assigns work to the worker nodes. It stores resource usage information for every node and is responsible for distributing the workload.
It tracks how the workload is spread across cluster nodes and places new work on nodes that have the resources available to accept it.
Etcd
etcd is a distributed key-value store that holds the cluster's configuration details and state. The other control-plane components read from and write to etcd, through the API server, to coordinate their work.
Worker/Slave nodes
Worker nodes are another essential component. They contain all the required services to manage networking between the containers, communicate with the master node, and assign resources to the scheduled containers.
Kubelet: gets the configuration of a Pod from the API server and ensures that the described containers are up and running.
Container runtime: A container runtime such as Docker or containerd runs on each worker node and executes the containers of the scheduled Pods.
Kube-proxy: Kube-proxy acts as a network proxy and load balancer on each worker node, implementing the networking rules that route traffic to services.
Pods: A pod is a group of one or more containers that run together on a node and are managed as a single unit.
Other Key Terminologies
Replication Controllers
A replication controller is an object that defines a Pod template and control parameters for scaling identical replicas of a Pod horizontally by increasing or decreasing the number of running copies.
Replica Sets
Replica sets are an iteration on the replication controller design, with more flexibility in how the controller identifies the pods it is meant to manage. They are replacing replication controllers because of their greater replica-selection capability.
Deployments
A Deployment is a common workload that can be created and managed directly. Deployments use replica sets as a building block and add lifecycle-management features such as rolling updates and rollbacks.
Stateful Sets
A stateful set is a specialized Pod controller that offers ordering and uniqueness guarantees. It is mainly used when you need fine-grained control over deployment order, stable network identities, or persistent data.
Daemon Sets
Daemon sets are another specialized form of pod controller that runs a copy of a pod on every node in the cluster. This type of controller is an effective method for deploying pods that perform maintenance or offer services for the nodes themselves.
Kubernetes vs. Docker Swarm
Here are some important differences between Kubernetes and Docker Swarm:
- Installation: Docker Swarm is simpler to set up, while Kubernetes requires more configuration.
- Scaling: Kubernetes supports auto-scaling; Docker Swarm does not.
- GUI: Kubernetes provides a web dashboard; Docker Swarm has no built-in GUI.
- Community: Kubernetes has the larger community and ecosystem among container orchestration tools.
Installation and Setup
There are many different ways to get started with Kubernetes because of the range of distributions on offer. Creating a cluster using the official distribution is relatively involved so most people use a packaged solution like Minikube, MicroK8s, K3s, or Kind.
We'll use K3s for this tutorial. It's an ultra-lightweight Kubernetes distribution that bundles all the Kubernetes components into a single binary. Unlike other options, there are no dependencies to install or heavy VMs to run. It also includes the Kubectl CLI that you'll use to issue Kubernetes commands.
Running the following command will install K3s on your machine:
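This uses the install script documented in the K3s quick-start guide:

```shell
# Download and run the official K3s install script
curl -sfL https://get.k3s.io | sh -
```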
It automatically downloads the latest available Kubernetes release and registers a system service for K3s.
After installation, run the following command to copy the auto-generated Kubectl config file into your .kube directory:
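K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml; one way to copy it into place looks like this:

```shell
# Copy the K3s-generated kubeconfig to the default location kubectl expects
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$USER" ~/.kube/config
```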
Now tell K3s to use this config file by running the following command:
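```shell
# Point kubectl (and K3s) at the copied config file
export KUBECONFIG=~/.kube/config
```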
You can add this line to your ~/.profile or ~/.bashrc file to automatically apply the change after you login.
Next run this command:
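```shell
# List the nodes in your new cluster
kubectl get nodes
```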
You should see a single node appear, named with your machine’s hostname. The node shows as Ready so your Kubernetes cluster can now be used!
Using Kubectl to Interact With Kubernetes
Now you’re familiar with the basics, you can start adding workloads to your cluster with Kubectl. Here’s a quick reference for some key commands.
List Pods
This displays the Pods in your cluster:
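```shell
kubectl get pods
```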
Specify a namespace with the -n or --namespace flag:
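For example, using the built-in kube-system namespace:

```shell
kubectl get pods --namespace kube-system
```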
Alternatively, get Pods from all your namespaces by specifying --all-namespaces:
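```shell
kubectl get pods --all-namespaces
```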
This includes Kubernetes system components.
Create a Pod
Create a Pod with the following command:
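```shell
# Start a single Pod named nginx from the nginx:latest image
kubectl run nginx --image=nginx:latest
```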
This starts a Pod called nginx that will run the nginx:latest container image.
Create a Deployment
Creating a Deployment lets you scale multiple replicas of a container:
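```shell
# Create a Deployment with three replicas of the nginx:latest image
kubectl create deployment nginx --image=nginx:latest --replicas=3
```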
You’ll see three Pods are created, each running the nginx:latest image:
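```shell
# List the Pods created by the Deployment
kubectl get pods
```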
Scale a Deployment
Now use this command to increase the replica count:
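```shell
# Scale the Deployment from three replicas up to five
kubectl scale deployment nginx --replicas=5
```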
Kubernetes has created two extra Pods to provide additional capacity:
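You can confirm this by listing the Pods again:

```shell
kubectl get pods
```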
Expose a Service
Now let’s make this NGINX server accessible.
Run the following command to create a service that’s exposed on a port of the Node running the Pods:
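```shell
# Expose the Deployment on a randomly assigned port of each Node
kubectl expose deployment nginx --type=NodePort --port=80
```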
Discover the port that’s been assigned by running this command:
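```shell
# The PORT(S) column shows the assigned NodePort, e.g. 80:30226/TCP
kubectl get services
```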
The port is 30226 in this example. Visiting localhost:30226 in your browser will show the default NGINX landing page.
You can use localhost if you've been following along with the single-node K3s cluster created in this tutorial. Otherwise, run the get nodes command and use the INTERNAL-IP that's displayed.
Using Port Forwarding
You can access a service without binding it to a Node port by using Kubectl’s integrated port-forwarding functionality. Delete your first service and create a new one without the --type flag:
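```shell
# Remove the NodePort service, then expose the Deployment again
# without --type, which defaults to a ClusterIP service
kubectl delete service nginx
kubectl expose deployment nginx --port=80
```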
This creates a ClusterIP service that can be accessed on an internal IP, within the cluster.
Retrieve the service’s details by running this command:
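```shell
# The CLUSTER-IP column shows the service's internal address
kubectl get service nginx
```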
The service can be accessed inside the cluster at 10.100.191.238:80.
You can reach this address from your local machine with the following command:
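```shell
# Forward local port 8080 to port 80 of the service
kubectl port-forward service/nginx 8080:80
```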
Visiting localhost:8080 in your browser will display the NGINX landing page. Kubectl is redirecting traffic to the service inside your cluster. You can press Ctrl+C in your terminal to stop the port forwarding session when you’re done.
Port forwarding works without services too. You can directly connect to a Pod in your deployment with this command:
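Here <pod-name> is a placeholder for one of the generated Pod names shown by kubectl get pods:

```shell
# Forward local port 8080 directly to port 80 of a single Pod
kubectl port-forward pod/<pod-name> 8080:80
```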
Visiting localhost:8080 will again display the NGINX landing page, this time without going through a service.
Apply a YAML File
Finally, let’s see how to apply a declarative YAML file to your cluster. First write a simple manifest for your Pod:
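A minimal manifest for an NGINX Pod, matching the nginx Pod used throughout this tutorial, looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
```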
Save this manifest to nginx.yaml and run kubectl apply to automatically create your Pod:
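```shell
# Create (or update) the resources described in the manifest
kubectl apply -f nginx.yaml
```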
You can repeat the command after you modify the file to apply any changes to your cluster.
Now you’re familiar with the basics of using Kubectl to interact with Kubernetes.
Advantages of Kubernetes
Easy organization of services with Pods.
Developed by Google, which brings years of valuable industry experience to the table.
Largest community among container orchestration tools.
Offers a variety of storage options, including on-premises storage, SANs, and public clouds.
Adheres to the principles of immutable infrastructure.
Kubernetes can run on bare metal on-premises, on OpenStack, and on public clouds such as Google Cloud, Azure, and AWS.
Helps you avoid vendor lock-in, since you can avoid vendor-specific APIs or services wherever Kubernetes provides an abstraction, e.g., load balancers and storage.
Containerization with Kubernetes lets you package software so that applications can be released and updated without any downtime.
Kubernetes lets you ensure containerized applications run where and when you want, and helps you find the resources and tools you want to work with.
Disadvantages of Kubernetes
The Kubernetes dashboard is not as useful as it should be.
Kubernetes can be complicated and unnecessary in environments where all development is done locally.
Security is not very effective out of the box and requires careful configuration.
Summary
Containers help an organization perform maintenance and updates without interrupting services.
Kubernetes is an example of a container management system, originally developed at Google.
A big advantage of using Kubernetes is that it can run on-premises, on OpenStack, and on public clouds such as Google Cloud, Azure, and AWS.
Kubernetes offers automated scheduling and self-healing capabilities.
Cluster, master, node, and namespace are important basic concepts in Kubernetes.
The master node and worker nodes are the key components of the Kubernetes architecture.
Replication controllers, replica sets, Deployments, stateful sets, and daemon sets are other important terms used in Kubernetes.
Docker Swarm does not support auto-scaling, while Kubernetes does.
The biggest drawback of Kubernetes is that its dashboard is not very useful or effective.