Introduction to Kubernetes and Installation
What is Kubernetes?
Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Originally developed by Google, it is now maintained by the Cloud Native Computing Foundation (CNCF).
Why Use Kubernetes?
Kubernetes provides several key benefits for managing applications in a containerized environment:
- Automated Deployment & Scaling – Easily deploy and scale applications as demand increases or decreases.
- Self-Healing – Automatically restarts failed containers, replaces unresponsive instances, and reschedules workloads as needed.
- Load Balancing – Distributes traffic efficiently to maintain application performance.
- Storage Orchestration – Manages storage needs dynamically with Persistent Volumes (PVs) and Persistent Volume Claims (PVCs).
- Multi-Cloud & Hybrid Cloud Support – Works seamlessly across on-premise, public cloud, and hybrid cloud environments.
Key Components of Kubernetes
- Control Plane (Master Node) – Controls the cluster and manages scheduling, networking, and scaling.
  - API Server – Exposes the Kubernetes API.
  - Scheduler – Assigns workloads to worker nodes.
  - Controller Manager – Ensures the desired state is maintained.
  - etcd – Stores cluster configuration data.
- Worker Nodes – Run the application workloads.
  - Kubelet – Manages the node and communicates with the control plane.
  - Container Runtime – Runs the containerized applications (e.g., Docker, containerd).
  - Kube Proxy – Manages networking and load balancing within the cluster.
How Kubernetes Works
Kubernetes organizes applications into Pods, which are the smallest deployable units. A Pod can contain one or multiple containers that share resources like storage and networking. Kubernetes ensures these Pods are running efficiently and are properly distributed across the cluster.
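To make this concrete, here is a minimal Pod manifest sketch (the name my-pod and the nginx image are illustrative choices, not requirements):

```yaml
# A minimal Pod running a single nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod           # illustrative name
spec:
  containers:
    - name: web
      image: nginx       # any container image works here
      ports:
        - containerPort: 80
```

You would apply it with kubectl apply -f pod.yaml. In practice, Pods are rarely created directly; they are usually managed indirectly through higher-level objects like Deployments, described next.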
Common Kubernetes Objects
- Pods – The basic unit of deployment in Kubernetes.
- Deployments – Manage replica sets and rolling updates.
- Services – Expose applications running in Pods to the network.
- ConfigMaps & Secrets – Manage configuration data and sensitive information.
- Persistent Volumes (PVs) & Persistent Volume Claims (PVCs) – Handle data storage.
- Ingress – Manage external access to services.
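As a sketch of how two of these objects fit together, here is a ConfigMap holding one key and a Pod that consumes it as an environment variable (the names app-config, APP_MODE, and config-demo are made up for illustration):

```yaml
# A ConfigMap with one example key/value pair.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # illustrative name
data:
  APP_MODE: "production"    # arbitrary example value
---
# A Pod that imports every key in the ConfigMap as an env var.
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: nginx
      envFrom:
        - configMapRef:
            name: app-config   # injects APP_MODE into the container
```

Secrets work the same way but are intended for sensitive values and are stored base64-encoded.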
Getting Started with Kubernetes
Prerequisites
To get started with Kubernetes, ensure you have the following:
- A basic understanding of containers and Docker.
- A Linux/macOS/Windows system with virtualization enabled.
- Installed tools: kubectl (the Kubernetes CLI) and either Minikube (for a local cluster) or access to a managed Kubernetes service from a cloud provider.
Step 1: Install Kubernetes Command-Line Tool (kubectl)
kubectl is the primary command-line tool for managing Kubernetes clusters. Install it using the following commands:
On Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client
On macOS (using Homebrew)
brew install kubectl
kubectl version --client
On Windows (using Chocolatey)
choco install kubernetes-cli
kubectl version --client
Step 2: Set Up a Kubernetes Cluster
Option 1: Local Kubernetes Cluster (Using Minikube)
Minikube is a lightweight Kubernetes distribution that runs on your local machine. It is ideal for testing and development.
Install Minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
Start Minikube Cluster
minikube start --driver=docker
This command starts a local Kubernetes cluster using Docker as the driver.
Verify Installation
kubectl get nodes
If Minikube is running successfully, you should see a single node in the output.
Option 2: Cloud-Based Kubernetes Cluster
If you want to deploy Kubernetes in the cloud, major providers offer managed Kubernetes services:
- Google Kubernetes Engine (GKE) (Google Cloud)
- Amazon Elastic Kubernetes Service (EKS) (AWS)
- Azure Kubernetes Service (AKS) (Microsoft Azure)
Each provider has a command-line tool to create a cluster quickly. For example, to create a Kubernetes cluster on GKE:
gcloud container clusters create my-cluster --num-nodes=3
Step 3: Deploy Your First Application on Kubernetes
Create a Deployment
A Kubernetes Deployment ensures that a specified number of replicas of a containerized application are running.
Create a simple deployment using Nginx:
kubectl create deployment nginx --image=nginx
Check if the deployment is running:
kubectl get deployments
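The imperative command above is convenient for a quick start, but Deployments are usually written as declarative manifests. A rough equivalent of kubectl create deployment nginx --image=nginx, with the replica count made explicit, looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1              # scale by changing this value
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx         # must match the selector above
    spec:
      containers:
        - name: nginx
          image: nginx
```

You would save this as, say, nginx-deployment.yaml and run kubectl apply -f nginx-deployment.yaml; re-applying the file after edits is how declarative updates work.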
Expose the Application
To access the application, expose it as a Kubernetes Service:
kubectl expose deployment nginx --type=LoadBalancer --port=80
Get the service details:
kubectl get services
If using Minikube, the LoadBalancer's external IP may stay pending; open the service in the browser through Minikube's built-in tunnel instead:
minikube service nginx
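Behind the scenes, kubectl expose generates a Service object. Written out as a manifest, it corresponds roughly to:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer     # use NodePort or ClusterIP for internal-only access
  selector:
    app: nginx           # routes traffic to Pods carrying this label
  ports:
    - port: 80           # port the Service listens on
      targetPort: 80     # port the container serves on
```

The selector is how the Service finds its Pods: any Pod labeled app: nginx (which the nginx Deployment's Pods are) receives traffic.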
Step 4: Managing Kubernetes Resources
Check Running Pods
kubectl get pods
View Pod Logs
kubectl logs <pod-name>
Scale an Application
To scale the deployment to 3 replicas:
kubectl scale deployment nginx --replicas=3
Verify scaling:
kubectl get pods
Delete a Deployment
kubectl delete deployment nginx
Step 5: Clean Up
If using Minikube, delete the cluster to free up resources:
minikube delete
For cloud-based clusters, delete them using provider-specific commands to avoid unnecessary charges.
Conclusion
Kubernetes simplifies container management, making it an essential tool for modern cloud-native applications. By automating deployment, scaling, and operations, Kubernetes helps developers build resilient, scalable, and efficient applications in any environment.