April 21, 2020

Connecting Users to Applications with Kubernetes Ingress Controllers

Containerization technologies such as Docker have been rapidly adopted by software teams, and the ensuing ability to easily package application components into reusable parts has given rise to the widespread use of microservices. These benefits come at a price, however: developers face new challenges around orchestrating containers and managing connectivity to them in a datacenter. Kubernetes (k8s), the de facto standard for teams developing cloud-native applications, directly addresses these issues. In this article, we’ll briefly review one of the most critical but perhaps most confusing aspects of Kubernetes networking: the Ingress Controller.

From Pods to Production

Let’s start with a typical scenario: a development team is tasked with writing a backend API service for external applications and users. In the early phases of development, an engineer may run a local instance of the containerized implementation on a development machine using direct docker invocations or even docker-compose. At some point, though, the team will want to deploy a version of the service to a shared development or staging cluster in the same manner as the final production configuration. Pods, the fundamental deployable units defined by k8s, and higher-level abstractions such as Deployment resources can help automate lifecycle management of containers during this step, but they don’t address how applications are accessed over the network. Kubernetes provides a dedicated resource abstraction for this purpose.
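
To make this concrete, a minimal Deployment manifest for such a backend service might look like the following sketch (the name, labels, and image here are hypothetical):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: backend-api
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: backend-api
      template:
        metadata:
          labels:
            app: backend-api
        spec:
          containers:
            - name: backend-api
              # Hypothetical image; substitute your own registry and tag
              image: registry.example.com/backend-api:1.0.0
              ports:
                - containerPort: 8080

Applying this manifest creates and manages three replicas of the container, but nothing about it makes the service reachable over the network.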

Exposing applications as k8s services

When deployed within Pods, containers are assigned IP addresses that may change over time due to a variety of lifecycle operations. This immediately poses challenges when other components need to find and establish network connections with them. The Service resource defined as part of Kubernetes networking manages these aspects automatically. Users can define Services that are associated with underlying Pods through label selectors, allowing the Pods to be reached through a user-specified service name. Where connections can be established from, and how they’re implemented, depends upon the Service type configured. The most commonly used service types offered by k8s are:

  • ClusterIP
  • NodePort
  • LoadBalancer

ClusterIP service type

ClusterIP is the default if no type is specified in the Service resource definition. When such a Service is created, k8s allocates a virtual cluster IP address that can be used to connect to the underlying Pods. The caveat, though, is that this IP address is routable only within the cluster itself. ClusterIP services are therefore typically used to expose internal application components to each other.
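
For example, a ClusterIP Service for the hypothetical backend-api Deployment sketched earlier could be defined as follows (no type is set, so ClusterIP is used):

    apiVersion: v1
    kind: Service
    metadata:
      name: backend-api
    spec:
      selector:
        app: backend-api    # matches the Pod labels from the Deployment
      ports:
        - port: 80          # port exposed on the virtual cluster IP
          targetPort: 8080  # port the containers listen on

Other workloads in the cluster can then reach the Pods at the stable service name backend-api (within the same namespace), regardless of individual Pod IP changes.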

NodePort service type

The NodePort type provides the simplest mechanism for external access to services. It opens a designated port (within a k8s-configured port range) on every node in the cluster. Underneath, a ClusterIP service is created, and clients that connect to the exposed NodePort are routed through to it. While the NodePort service type provides a way to access services from outside of the cluster (a sketch follows the list below), it has some drawbacks, including:

  • Services can only be exposed on ports from a range (30000-32767 by default)
  • One port can only be mapped to a single service
  • Clients connect through a node, so if the IP address of the underlying host / VM changes, they need to be updated accordingly
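
Continuing with the hypothetical backend-api example, a NodePort variant might look like this sketch:

    apiVersion: v1
    kind: Service
    metadata:
      name: backend-api-nodeport
    spec:
      type: NodePort
      selector:
        app: backend-api
      ports:
        - port: 80
          targetPort: 8080
          nodePort: 30080   # must fall within the configured range (30000-32767 by default)

External clients would then connect to <any-node-ip>:30080, which illustrates the drawbacks above: the port range is constrained, and clients must track node IP addresses.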

LoadBalancer service type

The LoadBalancer type is often used in cloud environments in order to automate the provisioning of external load balancers outside of the underlying k8s cluster. While this enables external network access and avoids the problem of IP addresses shifting out from underneath clients, the use of LoadBalancer Service types can quickly lead to high costs from the underlying cloud (e.g. GCE or AWS).
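
On a supported cloud provider, switching the type is all that’s required; a sketch for the same hypothetical service:

    apiVersion: v1
    kind: Service
    metadata:
      name: backend-api-lb
    spec:
      type: LoadBalancer    # the cloud provider provisions an external load balancer
      selector:
        app: backend-api
      ports:
        - port: 80
          targetPort: 8080

Because each LoadBalancer Service typically provisions its own cloud load balancer, exposing many services this way is where the costs accumulate.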

While the three Service types above are viable for some use cases, when application developers want to expose their services externally without the limitations of the NodePort and LoadBalancer types, Kubernetes networking offers a better alternative.

K8s networking and the Ingress resource abstraction

Kubernetes defines a native Ingress resource abstraction that exposes HTTP and HTTPS endpoints and routes traffic based upon rules defined by the user. The Ingress resource is a natural fit when developers and devops engineers want to expose multiple underlying services through a single external endpoint and/or load balancer. The Ingress resource definition allows them to route traffic to defined Service resources based upon, for example, host and/or prefix rules. Therefore, it complements the Service resource capabilities to provide a flexible method for enabling external access. However, defining an Ingress resource on its own doesn’t actually expose services outside Kubernetes since it simply conveys a request for networking configuration.
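
Here is a sketch of an Ingress resource routing traffic by host and path prefix to the hypothetical backend-api Service (using the networking.k8s.io/v1beta1 API current as of this writing; the host and path are illustrative):

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: backend-api-ingress
    spec:
      rules:
        - host: api.example.com
          http:
            paths:
              - path: /v1
                backend:
                  serviceName: backend-api   # the ClusterIP Service defined earlier
                  servicePort: 80

On its own, this manifest only declares the desired routing; something still has to act on it.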

The wizard behind the curtain: Ingress Controllers

Our discussion thus far has highlighted the need for Ingress resources, but it leaves open the question of how the requests they express are acted upon. The answer comes in the form of Ingress Controllers, which are responsible for consuming Ingress resource definitions and creating the corresponding routing configuration in a technology-specific manner. Typically, the specific controller installed in a k8s cluster is selected and deployed by operators (an example of tying an Ingress to a particular controller follows the list below). There are many options available, but a few illustrative examples include:

  • AWS ALB - An instance of an Ingress Controller tied to a specific public cloud, it satisfies inbound Ingress resource requests using AWS Application Load Balancers
  • NGINX - Implements Ingress resources using the NGINX open source software
  • Traefik - A leading open source Kubernetes Ingress Controller that makes setting up routes between Kubernetes services and the outside world simple and reliable
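
Which controller satisfies a given Ingress is commonly indicated with an annotation (or, in newer clusters, a dedicated IngressClass resource). For example, to ask Traefik to handle the hypothetical Ingress sketched earlier:

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: backend-api-ingress
      annotations:
        kubernetes.io/ingress.class: traefik   # request the Traefik controller
    spec:
      rules:
        - host: api.example.com
          http:
            paths:
              - path: /v1
                backend:
                  serviceName: backend-api
                  servicePort: 80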

There need not be a strict either / or decision when it comes to choosing Ingress Controllers, and k8s operators can elect to deploy multiple controllers if desired. The selection process should take into consideration the benefits specific controllers like Traefik may provide such as:

  • Let’s Encrypt support for automated certificate management
  • Traffic splitting based upon custom weight definitions
  • Flexible route definitions, including support for name- and path-based routing as well as route prioritization
  • Custom resource definitions that provide additional controller-specific enhancements (see the sketch below)
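
As one illustration of the last point, Traefik v2 provides an IngressRoute custom resource; a sketch routing the hypothetical backend-api service by host and path prefix:

    apiVersion: traefik.containo.us/v1alpha1
    kind: IngressRoute
    metadata:
      name: backend-api-route
    spec:
      entryPoints:
        - web    # Traefik entry point for HTTP traffic
      routes:
        - match: Host(`api.example.com`) && PathPrefix(`/v1`)
          kind: Rule
          services:
            - name: backend-api
              port: 80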

By integrating controllers that align with their use cases, development teams can enjoy a variety of capabilities provided by Kubernetes for external access without having to become networking experts.

Looking to understand the critical role that networking plays in modern application deployments with Kubernetes? Check out our latest white paper on Kubernetes for Cloud-Native Application Networks.
