My first experience with a production Kubernetes cluster was a self-hosted minikube cluster within AWS. It had the AWS Load Balancer Controller installed, and I learnt that creating Ingress and Service (type=LoadBalancer) resources would provision an Application Load Balancer and a Network Load Balancer within AWS respectively.
In another cluster, I had the opportunity to interact with the Nginx Ingress Controller. Because of how I first interacted with Ingress resources and the AWS Load Balancer Controller, I was confused about how the Ingress resources worked with the ingress controller, and how to properly configure it to route client requests.
It has been a while since then, and my understanding of it has improved. I decided to write this guide to explain how it works, and hopefully someone who is in a similar situation will be able to benefit from this.
In this post, I explain how the Ingress Nginx Controller works, how to deploy it as a Deployment workload, and how to effectively use it to route client requests to application pods.
Prerequisites
You have:
- a basic understanding of Kubernetes, and how ingress controllers work
- familiarity with the kubectl CLI tool
- the KUBECONFIG environment variable, or .kube/config, configured to talk to your intended cluster
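Before proceeding, a quick sanity check (assuming kubectl is on your PATH) confirms that your kubeconfig points at the intended cluster:

```shell
### verify connectivity to the intended cluster
$ kubectl cluster-info
$ kubectl get nodes -o wide
```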
Set Up
- Host OS: Ubuntu 22.04.4 LTS
- Guest OS in LXC container: Ubuntu 22.04.4 LTS
- k0s version: v1.30.0+k0s.0 (deployment guide)
- Kubernetes Cluster:
- 1 k0s controller
- 3 k0s workers
Overview of Nginx Ingress Controller Helm Charts
It can be tricky to install the ingress controller manually, so the preferred way is via Helm charts. There are two main versions of the ingress controller Helm chart:
- Ingress Nginx Controller
- Open Source Version (I am using this)
- Repo: https://github.com/kubernetes/ingress-nginx (under charts/ingress-nginx)
- Docs: https://kubernetes.github.io/ingress-nginx/
- Nginx Ingress Controller
- F5 NGINX, Inc. Version
- Repo: https://github.com/nginxinc/kubernetes-ingress (under charts/nginx-ingress)
- Docs: https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-helm/
Although the two Helm charts differ subtly in 1) the Kubernetes resources they deploy, and 2) the features of the Nginx server they bundle, they offer similar functionality.
For this post, I will be using the open source Helm chart. The commands below follow the open source Ingress Nginx Controller installation quick start guide.
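If you need to customise the installation, the chart exposes values you can override. As an illustration (a sketch; verify the keys against the chart's own values.yaml), a values file that raises the replica count might look like:

```yaml
# values.yaml (illustrative; check the chart's values.yaml for authoritative keys)
controller:
  replicaCount: 2        # run two controller pods instead of one
  service:
    type: LoadBalancer   # Service type fronting the controller
```

It would then be passed to the install command via `-f values.yaml`.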
Step 1: Deploying the Ingress Nginx Controller as a Deployment Workload
Firstly, we will deploy the Ingress Nginx Controller using the default mode: as a single-pod Deployment.
### install helm chart within the ingress-nginx namespace
$ helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
Release "ingress-nginx" does not exist. Installing it now.
NAME: ingress-nginx
LAST DEPLOYED: Sat May 25 21:05:11 2024
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
(...)
### verify that pod is up and running
$ kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS (...)
ingress-nginx ingress-nginx-controller-(...) 1/1 Running (...)
Verify that the Helm chart deployed with no errors:
### output formatted for readability
$ helm list
NAME NAMESPACE REVISION
ingress-nginx ingress-nginx 1
UPDATED STATUS CHART APP VERSION
(timestamp) deployed ingress-nginx-4.10.1 1.10.1
(Optional) We can take a look at the pod details:
$ kubectl describe po -n ingress-nginx ingress-nginx-controller-cf668668c-mmv9t
Name: ingress-nginx-controller-cf668668c-mmv9t
Namespace: ingress-nginx
Priority: 0
Service Account: ingress-nginx
Node: worker1/10.92.25.210
Start Time: Sat, 25 May 2024 21:05:31 +0000
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.10.1
helm.sh/chart=ingress-nginx-4.10.1
pod-template-hash=cf668668c
Annotations: <none>
Status: Running
IP: 10.244.0.14
IPs:
IP: 10.244.0.14
Controlled By: ReplicaSet/ingress-nginx-controller-cf668668c
Containers:
controller:
Container ID: (...)
Image: (...)
Image ID: (...)
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
SeccompProfile: RuntimeDefault
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-nginx-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
--enable-metrics=false
(...)
Step 1a: Understanding the Ingress Nginx Controller Kubernetes Resources
Let’s take a look at the resources deployed within the Ingress Nginx Controller namespace, ingress-nginx:
$ kubectl get all -n ingress-nginx -o name
pod/ingress-nginx-controller-cf668668c-mmv9t
service/ingress-nginx-controller
service/ingress-nginx-controller-admission
deployment.apps/ingress-nginx-controller
replicaset.apps/ingress-nginx-controller-cf668668c
It is a typical Deployment fronted by two Services; the following blurb explains the reason for this:
[…] the first service provides an external entry point for the ingress controller and it’s responsible for traffic routing and load balancing. The second service [i.e. the admission one] is responsible for validating and modifying ingress resources before they are processed by the ingress controller.
source: https://discuss.kubernetes.io/t/why-ingress-nginx-controller-have-two-service-created/22654
If we look at the Endpoints, we will see that both Services point to the same pod (albeit using different ports):
$ kubectl get endpoints -n ingress-nginx
NAME ENDPOINTS
ingress-nginx-controller 10.244.1.14:443,10.244.1.14:80
ingress-nginx-controller-admission 10.244.1.14:8443
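The admission Service backs a ValidatingWebhookConfiguration that the chart also installs; this is how the API server knows to call the controller when an Ingress is created. We can inspect it (the resource name below is what the chart created in my setup; adjust if yours differs):

```shell
### the admission Service is wired to a validating webhook for Ingress resources
$ kubectl get validatingwebhookconfigurations
$ kubectl describe validatingwebhookconfiguration ingress-nginx-admission
```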
Now that we have an idea of the resources that make up the Ingress Nginx Controller, we can move on to deploy a sample application that will make use of it.
Step 2: Deploying an Application (Comprising a Pod, Service, and Ingress)
To have a concrete example to talk about, I will deploy a sample application of, well, Nginx (it makes a good testing container because of its default landing page).
After which, I will create an Ingress resource that defines a rule routing all traffic on the root path to the Nginx pod. The Ingress resource is created with the nginx class to indicate that the Ingress Nginx Controller should be the one responding to this creation event.
### deploy the application pod and service
$ kubectl run webserver --image=nginx --port=80 --restart=Never --expose
service/webserver created
pod/webserver created
### verify resources in default namespace
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/webserver 1/1 Running 0 26s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP
service/webserver ClusterIP 10.104.74.251 <none> 80/TCP
### create an Ingress resource that redirects traffic to the "webserver" pod running Nginx
$ kubectl create ingress webserver --class=nginx --rule="/=webserver:80"
ingress.networking.k8s.io/webserver created
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
webserver nginx * 80 14m
$ kubectl get ing webserver -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
creationTimestamp: "2024-05-25T21:25:15Z"
generation: 1
name: webserver
namespace: default
resourceVersion: "11219"
uid: 5f3b93ba-67d3-4cec-ab16-eb15af1165d5
spec:
ingressClassName: nginx
rules:
- http:
paths:
- backend:
service:
name: webserver
port:
number: 80
path: /
pathType: Exact
status:
loadBalancer: {}
Step 3: Observing the Ingress Nginx Controller Pod
Step 3a: Ingress Nginx Controller Pod Logs
If we look at the Ingress Nginx Controller pod logs now, we will notice some log entries for the Ingress that we created in the previous step:
$ kubectl logs -n ingress-nginx ingress-nginx-controller-cf668668c-mmv9t
...
(timestamp) main.go:107] "successfully validated configuration, accepting" ingress="default/webserver"
(timestamp) store.go:440] "Found valid IngressClass" ingress="default/webserver" ingressclass="nginx"
(timestamp) event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"webserver", UID:"5f3b93ba-67d3-4cec-ab16-eb15af1165d5", APIVersion:"networking.k8s.io/v1", ResourceVersion:"11219", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
(timestamp) controller.go:190] "Configuration changes detected, backend reload required"
(timestamp) controller.go:210] "Backend successfully reloaded"
(timestamp) event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-cf668668c-mmv9t", UID:"803e3c00-35cd-422c-adca-a4ae793bdf30", APIVersion:"v1", ResourceVersion:"8482", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
Essentially, what happened is that the Ingress Nginx Controller pod watches the Kubernetes API server. Once it receives an event that an Ingress resource has been created, it validates the resource's syntax, and checks that it specifies IngressClass=nginx.
If the new Ingress manifest is in order, the Nginx process updates the config file used for routing, and triggers a hot reload without bringing down the pod (see docs for more details on this process).
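One way to see the result of such a reload is to ask the Nginx binary inside the controller pod to dump its full rendered configuration; `nginx -T` validates the config and prints it (the pod name below is from my cluster):

```shell
### dump the controller's rendered Nginx configuration after the reload
$ kubectl exec -n ingress-nginx ingress-nginx-controller-cf668668c-mmv9t -- nginx -T
```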
Step 3b: Ingress Nginx Controller Pod Configuration File
The routing configuration is stored within the Ingress Nginx Controller pod’s configuration file. I have extracted the relevant high level parts here:
$ kubectl exec -it -n ingress-nginx ingress-nginx-controller-cf668668c-mmv9t -- /bin/bash -c "cat nginx.conf"
(...)
http {
(...)
server {
(...)
location = / {
set $namespace "default";
set $ingress_name "webserver";
set $service_name "webserver";
set $service_port "80";
set $location_path "/";
set $global_rate_limit_exceeding n;
rewrite_by_lua_block {
lua_ingress.rewrite(...)
}
(...)
}
}
}
Basically, if this Nginx server receives a request for the root path (i.e. with no path specified in the request), it uses the OpenResty ngx_http_lua_module to rewrite the request and pass it on to the application Service.
Interlude
Let’s take a look at what we have so far:
- There is an application (Nginx) Pod that is fronted/load balanced by a Service resource.
- An Ingress resource that routes traffic to the Service was created with the "class=nginx" attribute. This creates an event within Kubernetes.
- The Ingress Nginx Controller pod sees this event and reacts: it verifies that the Ingress manifest is syntactically correct, and that the necessary attributes have been set.
- If all checks pass, the Ingress Nginx Controller adds the routing details to its internal configuration file, and hot reloads it.
- The Ingress Nginx Controller pod acts as a reverse proxy, routing traffic it receives to the corresponding Services within the cluster. Ingress resources are used to create these routing rules.
As of now, there is no way for requests originating outside the Kubernetes cluster to reach the Ingress Nginx Controller.
request --> ??? -|-> Nginx Ingress --> Nginx Ingress --> Application
| Controller Controller Service/
| Service Pod Pod
v
(k8s cluster boundary)
Step 4: Forward Traffic From Local Port to Ingress Nginx Controller Pod
A simple solution would be to use the current node, where you issue your curl requests, as a server that receives and forwards client traffic to the Ingress Nginx Controller Service inside the Kubernetes cluster.
This setup is similar to the "Using a self-provisioned edge" approach documented here, and is an easy option for testing and/or simple setups (e.g. homelabs):
request --> this -|-> Nginx Ingress --> Nginx Ingress --> Application
node | Controller Controller Service/
(port | Service Pod Pod
8080) | (port 80)
|
v
(k8s cluster boundary)
* note: the components may reside on different underlying nodes/instances
Using the kubectl port-forward subcommand, forward all traffic received on localhost port 8080 to the Ingress Nginx Controller Service port 80:
$ kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80 &
[1] 1468
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
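Since the port-forward was started in the background (job [1] above), it can be stopped later with the shell's job control once you are done testing:

```shell
### stop the background port-forward when done
$ kill %1
```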
Step 5: Routing Client Requests to the Application
We can test that the Ingress Nginx Controller correctly routes traffic to the application Nginx Service by curl'ing 127.0.0.1:8080:
$ curl 127.0.0.1:8080
Handling connection for 8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
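Note that because the rule uses pathType: Exact on "/", only the root path matches; a request for any other path should fall through to the controller's default backend and return a 404. A quick check (the second path is a hypothetical non-matching one):

```shell
### only "/" matches the Exact rule; other paths hit the default backend
$ curl -s -o /dev/null -w "%{http_code}\n" 127.0.0.1:8080/
$ curl -s -o /dev/null -w "%{http_code}\n" 127.0.0.1:8080/does-not-exist
```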
Step 6: (Optional) Uninstall the Ingress Nginx Controller Helm Chart
$ helm uninstall ingress-nginx -n ingress-nginx
Other Deployment Environments
Once you have a basic understanding of how the Ingress Nginx Controller works, it becomes straightforward to apply that knowledge to Kubernetes clusters in other environments.
The official documentation has an entire section that describes deploying the Ingress Nginx Controller on several popular container platforms (e.g. Docker/Rancher), public cloud providers (e.g. AWS, GCE/GKE), and even on bare metal.
There is an interesting subsection that describes the various options when using a bare-metal environment, including exposing the Ingress Nginx Controller as a NodePort Service, and using MetalLB.
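For example, on bare metal one option is to switch the controller Service to a NodePort so it is reachable on every node (a sketch; in practice you would set this via the Helm chart's values rather than patching in place):

```shell
### expose the controller Service on a NodePort on every node
$ kubectl patch svc ingress-nginx-controller -n ingress-nginx \
    -p '{"spec":{"type":"NodePort"}}'
$ kubectl get svc -n ingress-nginx ingress-nginx-controller
```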
Summary
In this post, I walked through the steps to install the open source Ingress Nginx Controller inside a Kubernetes cluster as a single-replica Deployment workload. After that, I created a simple Nginx Pod and Service to simulate a custom application.
I then created an Ingress resource (including the "class=nginx" attribute) with an HTTP rule that routes traffic from the root path to the Nginx application Service. The Ingress Nginx Controller sees this event from the Kubernetes API server, and creates the corresponding routing rule within its own configuration file.
At this point, the Ingress Nginx Controller is acting as a reverse proxy for requests that it receives. To allow requests external to the cluster to be accepted into the cluster, I used the current node as a forwarder via the kubectl port-forward subcommand.
The final set up looks like this:
request --> this -|-> Nginx Ingress --> Nginx Ingress --> Application
node | Controller Controller Service/
(port | Service Pod Pod
8080) | (port 80)
|
v
(k8s cluster boundary)
* note: the components may reside on different underlying nodes/instances