In this article, we explore the Ingress object in Kubernetes (K8S), and look at how it can be used with some examples. We will then walk step-by-step through setting up an NGINX Ingress controller with Azure Kubernetes Service (AKS).
What is Ingress in Kubernetes?
Ingress in K8S is an object that allows access to services within your cluster, from outside your cluster.
The official documentation on kubernetes.io describes Ingress:
An API object that manages external access to the services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination and name-based virtual hosting.
Traffic routing is defined by rules specified on the Ingress resource.
Ingress objects allow HTTP or HTTPS traffic through to your cluster services. They do not expose other ports or protocols to the wider world; for that, a service of type LoadBalancer or NodePort should be used.
A service is an interface to a logical set of Pods. Services use a ‘virtual IP address’ that is only routable within the cluster, so external clients have no way to reach these IP addresses without something like an Ingress in front of them.
Ingress, LoadBalancer, and NodePort
Ingress, LoadBalancer, and NodePort are all ways of exposing services within your K8S cluster for external consumption.
NodePort and LoadBalancer let you expose a service by specifying that value in the service’s type.
With a NodePort, K8S allocates a specific port on each node for the specified service. Any request received on that port, on any node, is simply forwarded to the service.
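As an illustration, a NodePort service definition might look like the following sketch (the service name, labels, and port numbers are all assumptions for the example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
spec:
  type: NodePort
  selector:
    app: my-app           # selects the Pods to expose
  ports:
  - port: 80              # cluster-internal service port
    targetPort: 8080      # container port on the Pods
    nodePort: 30080       # port opened on every node (default range 30000-32767)
```

A request to any node's IP on port 30080 is then forwarded to one of the matching Pods on port 8080.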
With a LoadBalancer, there needs to be an external load balancer outside of the K8S cluster to provide the public IP address. In Azure, this would be an Azure Load Balancer in front of your Azure Kubernetes Service (AKS) cluster. In AWS, this would be a Classic or Network Load Balancer in front of your Elastic Kubernetes Service (EKS) cluster, and in Google Cloud, this would be a Network Load Balancer in front of your Google Kubernetes Engine (GKE) cluster.
Each time a new service is exposed, a new load balancer needs to be created to get a public IP address. Conveniently, the load balancer provisioning happens automatically because of the way the cloud providers plug into Kubernetes, so that doesn’t have to be done separately.
Ingress is a completely independent resource to your service. As well as enabling routing rules to be consolidated in one place (the Ingress object), this has the advantage of being a separate, decoupled entity that can be created and destroyed separately from any services.
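For illustration, a minimal Ingress resource with a single routing rule might look like the following sketch (the host, service name, and port are assumptions for the example):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx          # which ingress controller should handle this resource
  rules:
  - host: app.example.com          # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service       # hypothetical backend service
            port:
              number: 80
```

This Ingress can be created or deleted at any time without touching my-service itself, which is exactly the decoupling described above.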
Ingress Controllers
To set up Ingress in K8S, you need to configure an Ingress controller. These do not come as default with the cluster and must be installed separately. An ingress controller is typically a reverse web proxy server implementation in the cluster.
There are many available Ingress controllers, all of which have different features. The official documentation lists the available Ingress controllers. A few commonly used ones include:
- AKS Application Gateway Ingress Controller is an ingress controller that configures the Azure Application Gateway.
- GKE Ingress Controller for Google Cloud
- AWS Application Load Balancer Ingress Controller
- HAProxy Ingress is an ingress controller for HAProxy.
- Istio Ingress is an Istio-based ingress controller.
- The NGINX Ingress Controller for Kubernetes works with the NGINX webserver (as a proxy).
- The Traefik Kubernetes Ingress provider is an ingress controller for the Traefik proxy.
You can have multiple ingress controllers in a cluster mapped to multiple load balancers should you wish!
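Each controller registers an IngressClass in the cluster; you can list the classes available with:

```shell
kubectl get ingressclass
```

An individual Ingress resource then selects which controller handles it via the `ingressClassName` field in its spec.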
Learnk8s has a fantastic feature comparison of all the available ingress controllers to help you make your choice. Note the limitations of the Azure application gateway ingress controller. For now, the NGINX Ingress Controller seems like a better choice…
Setting up Ingress with NGINX - Step by Step
NGINX is a widely used Ingress controller. We will run through how to set it up with Azure Kubernetes Service, deploying two simple web services and using the NGINX Ingress to route traffic between them.
Step 1 – Fire up your AKS cluster and connect to it
To do this, browse to the AKS cluster resource in the Azure Portal and click on connect. The commands needed to connect via your shell using the Azure CLI will be shown.
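The commands shown in the portal will look roughly like the following (the subscription, resource group, and cluster names are placeholders you need to fill in):

```shell
az login
az account set --subscription [SUBSCRIPTION_ID]
az aks get-credentials --resource-group [RESOURCE_GROUP] --name [CLUSTER_NAME]
```

The last command merges the cluster credentials into your local kubeconfig so that kubectl commands target your AKS cluster.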
Step 2 – Install the NGINX Ingress controller
The following command installs the controller in the ingress-nginx namespace, creating that namespace if it doesn’t already exist:
kubectl apply -f raw.githubusercontent.com/kubernetes/ingres..
Note that you can also install using Helm if you have it installed (you don’t need to run this if you have already installed using the previous command):
helm upgrade --install ingress-nginx ingress-nginx --repo kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
Step 3 – Check the Ingress controller pod is running
kubectl get pods --namespace ingress-nginx
Step 4 – Check the NGINX Ingress controller has been assigned a public IP address
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
Note the service type is LoadBalancer:
Browsing to this IP address will show you the NGINX 404 page. This is because we have not set up any routing rules for our services yet.
Step 5 – Set up a basic web app for testing our new Ingress controller
First, we need to set up a DNS record pointing to the external IP address we discovered in the previous step. Once that is set, run the following command to set up a demo app, replacing [DNS_NAME] with your record (e.g. jackwesleyroper.io).
Note that you must set up a DNS record; this step will not work with an IP address. This command comes from the NGINX documentation; we will look at declarative approaches later in this article.
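The quick start in the ingress-nginx documentation uses roughly the following commands to deploy an httpd-based demo app and expose it through the Ingress controller (replace [DNS_NAME] with your record):

```shell
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
kubectl create ingress demo --class=nginx --rule="[DNS_NAME]/*=demo:80"
```

The httpd image serves a default page reading ‘It works!’, which is what we check for in the next step.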
Step 6 – Browse to the web address
You will see ‘It works!’ displayed, confirming that the Ingress controller is correctly routing traffic to the demo app.
Step 7 – Set up two more web apps
Now we will set up two more web apps, and route traffic between them using NGINX.
We will create two YAML files using the demo apps from the official Azure documentation.
aks-helloworld-one.yaml
aks-helloworld-two.yaml
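Based on the demo manifests in the Azure documentation, aks-helloworld-one.yaml looks roughly like this (aks-helloworld-two.yaml is identical apart from the name and TITLE value):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-one
  template:
    metadata:
      labels:
        app: aks-helloworld-one
    spec:
      containers:
      - name: aks-helloworld-one
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "Welcome to Azure Kubernetes Service (AKS)"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-one
spec:
  type: ClusterIP          # internal only; the Ingress will route traffic to it
  ports:
  - port: 80
  selector:
    app: aks-helloworld-one
```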
Apply the two configuration files to set up the apps:
kubectl apply -f aks-helloworld-one.yaml --namespace ingress-nginx
kubectl apply -f aks-helloworld-two.yaml --namespace ingress-nginx
Check the new pods are running (you should see two aks-helloworld pods running):
kubectl get pods --namespace ingress-nginx
Step 8 – Set up the Ingress to route traffic between the two apps
We will set up path-based routing to direct traffic to the appropriate web apps based on the URL the user enters. EXTERNAL_IP/hello-world-one is routed to the service named aks-helloworld-one. Traffic to EXTERNAL_IP/hello-world-two is routed to the aks-helloworld-two service. Where the path is not specified by the user (EXTERNAL_IP/), the traffic is routed to aks-helloworld-one.
Create a file named hello-world-ingress.yaml.
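Based on the Azure documentation, hello-world-ingress.yaml looks roughly like this, implementing the three routing rules described above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # strips the path prefix before forwarding
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello-world-one(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80
      - path: /(.*)                    # default: unmatched paths go to app one
        pathType: ImplementationSpecific
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
```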
Create the Ingress:
kubectl apply -f hello-world-ingress.yaml --namespace ingress-nginx
Step 9 – Browse to EXTERNAL_IP/hello-world-one
You should see the first demo app. Browsing to EXTERNAL_IP/hello-world-two should show the second app, confirming the routing rules are working.
Key Takeaway
An Ingress in K8S is a robust way to expose services within your K8S cluster to the outside world, and it allows you to consolidate routing rules in one place. There are many Ingress controllers available for use; in this article, we configured an NGINX Ingress on AKS and used it to route traffic between two demo apps.