How to Install and Configure the Ingress NGINX Controller on Kubernetes (Complete Sample App Included!)


The Ingress-NGINX Controller for Kubernetes is a very popular option for configuring Ingress on Kubernetes. Let’s walk through the installation process and get a load balancer set up and ready to handle traffic from the web.

What You’ll Need

Running Kubernetes Cluster

Any k8s cluster will do fine, but we will be using Vultr’s VKE (Vultr Kubernetes Engine) today. If you’d like to follow along to a tee (for FREE!), you can sign up using my link to get a $100 credit to your account: https://ascode.com/vultr. They offer a really quick (like 5 clicks) deploy to get up and running quickly.

If you want a step-by-step guide to deploying Kubernetes on AWS using Infrastructure as Code instead, check out my video on deploying AWS EKS with Terraform!

How to Deploy AWS EKS with Terraform

The Following CLI Tools

You’ll also need kubectl and helm installed locally and configured to talk to your cluster.

Install the Ingress NGINX Controller

The Ingress NGINX documentation gives us two main options to install: via kubectl manifests OR with helm charts. We’ll be using the helm chart, and I recommend this as it’s easier to manage than raw manifests.

Installing the chart only takes one line (ish) with helm. This command will upgrade the ingress-nginx helm chart if it exists and install it if it doesn’t. It will also create a namespace if needed.

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

We will be using the default installation settings, but if further configuration of the helm values is necessary, you can find the default values in the chart repo here and pass them as individual helm values or a values file.

# override a single value
helm ... --set specialSetting=awesome

# override with a values file
helm ... --values myNewValuesFile.yaml
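For example, a minimal values file might run two controller replicas instead of the default one for a bit of extra resilience. (This is just a sketch — controller.replicaCount is one of many settings; always check the values.yaml in the chart repo for your chart version.)

```yaml
# myNewValuesFile.yaml
# Run two ingress controller pods instead of one
controller:
  replicaCount: 2
```

Pass it with `helm upgrade --install ... --values myNewValuesFile.yaml` as shown above.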

Verify Installation

After installing, verify your installation is running by checking to see that pods and services come up in the ingress-nginx namespace:

❯ k get po,svc -n ingress-nginx

NAME                                            READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-controller-6858749594-nc4bk   1/1     Running   0          5m

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.96.205.48    104.238.131.98   80:30251/TCP,443:31355/TCP   5m
service/ingress-nginx-controller-admission   ClusterIP      10.111.21.141   <none>           443/TCP                      5m

Note the service with type LoadBalancer and its EXTERNAL-IP. This is the load balancer provisioned by your cloud provider (in our case Vultr) and used by Ingress NGINX. This load balancer will handle connections from the internet.
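If you want to grab that external IP programmatically (handy for scripting or the curl checks later on), a jsonpath query should do the trick. Note that some cloud providers report a hostname instead of an IP, in which case you’d read `.hostname` rather than `.ip`:

```shell
# Fetch the ingress controller's external IP into a shell variable
LB_IP=$(kubectl get svc ingress-nginx-controller \
  -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Load balancer IP: ${LB_IP}"
```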

Deploy a Sample Application (If You Don’t Have One)

Let’s run a 3-replica NGINX deployment for our sample application. If you have a running application already, you can skip this section.

Save the following to a file called ascode-example-deployment.yaml


ascode-example-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: ascode-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80

… and create a service manifest called ascode-example-service.yaml to give our deployment a stable internal endpoint.


ascode-example-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: ascode-example
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80

And finally, create the ascode-example namespace and apply the manifests to our cluster.

kubectl create namespace ascode-example
kubectl apply -f ascode-example-deployment.yaml
kubectl apply -f ascode-example-service.yaml
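If you’d like to wait until the deployment is fully rolled out before verifying, kubectl’s built-in rollout status command blocks until all replicas are available (or the timeout expires):

```shell
# Block until all 3 replicas are available, or give up after 2 minutes
kubectl rollout status deployment/nginx-deployment \
  -n ascode-example --timeout=2m
```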

This will create our entire sample application. To recap, our sample application has:

- a Deployment running 3 replicas of NGINX
- a ClusterIP Service exposing those replicas internally on port 8080
- both living in the ascode-example namespace

Never Trust Code! Always Verify!

Before moving on, check to make sure that your service and deployment work properly by port-forwarding the service to your local machine. Run kubectl port-forward service/nginx-service 8080:8080 -n ascode-example and you should see the NGINX welcome screen when you visit http://localhost:8080. Here’s what it should look like.

❯ kubectl port-forward service/nginx-service 8080:8080 -n ascode-example

Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

...

❯ curl http://localhost:8080

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

This is the DevOps engineer’s “Hello, world!” in my opinion. Congrats! You’re ready to show off your app to the world.

Expose Our App to the World with Ingress NGINX

We have a working application running inside our cluster, but we want to show it to the world. It’s time for our baby app to fly! (with ingress-nginx, naturally).

We’ll need to configure an Ingress resource, and we’ll do it the same way we created our deployment and service. Save the following to a file called ascode-example-ingress.yaml


ascode-example-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: ascode-example
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 8080

In plain English, this creates an Ingress resource that uses the nginx ingress class to route traffic on the / path to the previously created nginx-service on port 8080. And of course we’ve already verified that service returns the NGINX “Hello, world!” via kubectl port-forward.

Apply the ingress!

kubectl apply -f ascode-example-ingress.yaml
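Before hitting the load balancer, you can confirm that the Ingress resource was admitted by the controller. Once the controller has processed it, the ADDRESS column should show the load balancer’s external IP (it can take a minute to populate):

```shell
# The ADDRESS column should show the controller's external IP
kubectl get ingress nginx-ingress -n ascode-example
```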

Did it Work?

Let’s check! Find the load balancer’s IP and see if the NGINX hello world comes back!

❯ kubectl -n ingress-nginx get svc ingress-nginx-controller

NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.96.205.48   104.238.131.98   80:30251/TCP,443:31355/TCP   57m

#
# Verify the service type: LoadBalancer
# Note the load balancer external IP ( 104.238.131.98 )
#

❯ curl http://104.238.131.98/

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

That’s it! You now have a fully functional application running in Kubernetes that can be accessed from the outside world. However, we aren’t completely production-ready. There are a few things that we should improve before then:

…but that’s enough code for today. Stay tuned and we’ll cover the next steps in a future release. Feel free to drop questions and ask for help in the comments section or our Discord server.

Thank you and hope this helps! Subscribe to stay tuned, and we’ll see you in the next one!

2/17/2024 Update: Video version released!

Posted by Jedi Park