January 11, 2022

How to Deploy Traefik Proxy Using Flux and GitOps Principles


GitOps makes configuration management seamless by creating a single source of truth for configuration, so changes are transparent, validated, and low-risk. This article and this GitHub repository will show you how Traefik Proxy and Flux can work together to help you implement GitOps principles. But before we jump in, what are these tools, and how do they help?

What is Flux?

Flux is a tool created by Weaveworks, the inventor of GitOps. It is the operator that makes GitOps happen in your cluster, ensuring that the cluster's configuration matches the one in Git. Flux provides a CLI and a set of controllers that build and apply YAML manifests to your clusters so you can automate your deployments, and it also supports Helm releases.
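
For example, once the Flux CLI is installed, you can use it to verify that a cluster meets Flux's prerequisites and, after bootstrapping, to inspect what Flux is reconciling. A couple of standard commands:

flux check --pre
flux get kustomizations
flux get sources git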

What is Traefik Proxy?

Traefik Proxy is an edge router that exposes your services. In the Kubernetes ecosystem, this is called an ingress controller. Traefik Proxy is continuously updated to work with the latest version of Kubernetes.

Traefik Proxy provides CRDs such as IngressRoute, its TCP and UDP counterparts, TraefikService (an abstraction layer running on top of Kubernetes Services), and Middleware. Middlewares are attached to routers and can tweak requests before they hit your Services. These objects let you expose Services declaratively.

Traefik Proxy also has a certificate resolver feature that automatically obtains certificates from Let's Encrypt. Traefik Proxy's middlewares are small applications that tweak requests before they reach your services; they let you, for example, add headers, apply rate limits, or act as a circuit breaker. Traefik Proxy also supports canary deployments, and middlewares for TCP ingress routes were recently added. Traefik Proxy is a leading modern reverse proxy and load balancer that makes deploying microservices easier.
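
As a small illustration of the middleware concept, here is a minimal sketch of a Middleware that adds a custom request header (the name add-demo-header and the header value are arbitrary examples, using the traefik.containo.us/v1alpha1 API that this article targets):

---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: add-demo-header
spec:
  headers:
    customRequestHeaders:
      X-Demo-Header: "demo"

A router then references the middleware by name in its IngressRoute definition.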


How to deploy Traefik Proxy in multiple clusters using Flux

We will be promoting code from staging to production using Kustomize, built into Flux. Before anything else, there are a few prerequisites you will need:

  • Two Kubernetes clusters acting as staging and production environments
  • An empty GitHub repo
  • Exported environment variables GITHUB_TOKEN, GITHUB_USER, GITHUB_REPO
  • The latest Flux CLI installed on a workstation (see the install sketch just after this list)
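
If you still need to install the Flux CLI, the official install script from the Flux documentation is one option (Homebrew users can run brew install fluxcd/tap/flux instead):

curl -s https://fluxcd.io/install.sh | sudo bash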

This article aims to show you how to maintain Kubernetes environments with GitOps principles. We will highlight a common use case with two clusters: one staging environment and one production environment.

On both clusters, we will deploy Traefik Proxy and the sample applications that will be exposed externally thanks to Traefik and IngressRoute resources.

The configuration that we are preparing is based on the following structure:

  • Base - the initial configuration that should be common to all environments (staging and production).
  • Staging - the configuration for staging environments. It is inherited from the base configuration and applies the changes related specifically to the staging environment.
  • Production - the configuration for the production environment. Again, it is inherited from the base configuration with the changes needed for the production environment.

All the changes that are created for a specific environment will be created as a patch, so we will only update the attributes that we are going to change. The remaining configuration is inherited from the base configuration. That entire process is managed using Kustomize, built into Flux.

The entire configuration will be prepared locally, without executing any imperative commands on those two clusters that have been provisioned as a prerequisite.

Once the configuration has been created locally, we will push the entire code to the Git repository.

The Git repository is our source of truth. From now on, any change we apply to a cluster will go through the Git repository.

This is a core idea of GitOps. We configure the desired state of our infrastructure using commonly-known Kubernetes manifests. We then push the desired state to a Git repository. Flux then pulls the configuration to the cluster, so the configuration can be applied.

It is very beneficial for us to store our cluster configuration in a Git repository. That way, we can track all the changes and test pull requests before merging them into the main branch. We can ensure that the code that is committed to the Git repository is running on our clusters.

Flux will run our infrastructure components (Traefik Proxy) and our applications (the sample application) on each cluster according to the code we prepared locally.

Let’s get started!

Step 1: Create the infrastructure repository

Create a Git repository to store all configuration files. In our example, we will use a GitHub repo. Use the following command to create a GitHub repo:

gh repo create flux-traefik-demo --public --description "Flux and Traefik - demo"  --clone

This command assumes you are in an empty directory that is not already inside a Git repository; it initializes the local repository (the equivalent of running git init), creates the remote repository on GitHub, and sets up the Git remote. gh is the GitHub CLI. If your current directory is already part of a Git repository, run the command from a fresh, empty directory instead.
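
If you prefer not to use the GitHub CLI, a rough equivalent, assuming you have already created an empty flux-traefik-demo repository in the GitHub UI, is:

mkdir flux-traefik-demo && cd flux-traefik-demo
git init
git remote add origin https://github.com/${GITHUB_USER}/flux-traefik-demo.git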

Step 2: Create the repository structure

The following command will create the top-level directory structure needed to keep the manifests that describe the entire infrastructure:

mkdir -pv ./apps/{base,staging,production}/traefik  ./clusters/{production,staging} ./infrastructure/{sources,crds}

  • apps contains custom manifests per cluster as well as Helm releases.
  • infrastructure contains common infrastructure tools, such as Helm repository definitions or common infrastructure components.
  • clusters contains the Flux configurations.

β”œβ”€β”€ apps
β”‚   β”œβ”€β”€ base
β”‚   β”œβ”€β”€ production
β”‚   └── staging
β”œβ”€β”€ infrastructure
β”‚   β”œβ”€β”€ crds
β”‚   └── sources
└── clusters
    β”œβ”€β”€ production
    └── staging

Step 3: Create the Traefik base configuration

A typical way of installing Traefik is to create a service account, RBAC resources, a Deployment, and a Service of type LoadBalancer to expose Traefik.

Traefik Proxy can also be installed using the official Helm chart, and Flux supports Helm releases. In this case, however, we will create all the required resources manually.

First, create your RBAC resources.

cat > ./apps/base/traefik/rbac.yaml <<EOF
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
      - middlewaretcps
      - ingressroutes
      - traefikservices
      - ingressroutetcps
      - ingressrouteudps
      - tlsoptions
      - tlsstores
      - serverstransports
    verbs:
      - get
      - list
      - watch

---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: traefik-ingress-controller
  namespace: traefik

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: traefik
EOF

Then create your Traefik Proxy deployment resource.

cat > ./apps/base/traefik/traefik.yaml << EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  labels:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: traefik
      app.kubernetes.io/instance: traefik
  template:
    metadata:
      labels:
        app.kubernetes.io/name: traefik
        app.kubernetes.io/instance: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - name: traefik
          image: traefik:2.5.4
          args:
            - "--entryPoints.web.address=:8000/tcp"
            - "--entryPoints.websecure.address=:8443/tcp"
            - "--entryPoints.traefik.address=:9000/tcp"
            - "--api=true"
            - "--api.dashboard=true"
            - "--ping=true"
            - "--providers.kubernetescrd"
            - "--providers.kubernetescrd.allowCrossNamespace=true"
          readinessProbe:
            httpGet:
              path: /ping
              port: 9000
            failureThreshold: 1
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 2

          livenessProbe:
            httpGet:
              path: /ping
              port: 9000
            failureThreshold: 3
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 2

          resources:
            limits:
              cpu: 1000m
              memory: 1000Mi
            requests:
              cpu: 100m
              memory: 50Mi

          ports:
            - name: web
              containerPort: 8000
              protocol: TCP

            - name: websecure
              containerPort: 8443
              protocol: TCP

            - name: traefik
              containerPort: 9000
              protocol: TCP

          volumeMounts:
            - mountPath: /data
              name: storage-volume
      volumes:
        - name: storage-volume
          emptyDir: {}
EOF

Next, create a load balancer type service that exposes Traefik Proxy.

cat > ./apps/base/traefik/svc.yaml << EOF
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
  labels:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
spec:
  selector:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: 80
      name: web
      targetPort: web
      protocol: TCP
    - port: 443
      name: websecure
      targetPort: websecure
      protocol: TCP
EOF

Create a Kustomization file that lists all the resources that have been created.

cat > ./apps/base/traefik/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - rbac.yaml
  - traefik.yaml
  - svc.yaml
EOF

Note: What is Kustomize? Kustomize is a standalone tool that allows you to customize Kubernetes objects through a Kustomization file. Check the Kubernetes documentation for an overview of Kustomize.
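
As a sanity check, you can render the final manifests locally before anything reaches a cluster; Kustomize is built into kubectl (v1.14+), so once the base is in place you can run:

kubectl kustomize ./apps/base/traefik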

Step 4: Customize Traefik Proxy configurations for your production cluster

Create a namespace for your Traefik resources.

cat > ./apps/production/traefik/namespace.yaml << EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: traefik-production
EOF

Create a patch for the Traefik Proxy deployment on your production cluster. You already created a base configuration that will be adapted to meet the requirements of each specific environment. A patch updates that base configuration by applying only the lines that have been modified, so it contains far fewer lines than the base. Kustomize performs this merge in the background when Flux applies the configuration to the cluster.

As previously mentioned, the patch contains only those attributes that change for that specific environment. Patching is a Kustomize feature: when Flux deploys the code, the patch is merged with the inherited base configuration to produce the final Kubernetes manifests. In this Traefik configuration, we use the certificate resolver feature to issue TLS certificates; Traefik will connect to Let's Encrypt and obtain a certificate automatically. Make sure you update the email address accordingly. There is a lot more you can configure with Let's Encrypt, so I encourage you to visit the official documentation to learn more.

cat > ./apps/production/traefik/traefik-patch.yaml << EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
spec:
  template:
    spec:
      containers:
        - name: traefik
          args:
            - "--entryPoints.web.address=:8000/tcp"
            - "--entryPoints.websecure.address=:8443/tcp"
            - "--entryPoints.traefik.address=:9000/tcp"
            - "--api=true"
            - "--api.dashboard=true"
            - "--ping=true"
            - "--providers.kubernetescrd"
            - "--providers.kubernetescrd.allowCrossNamespace=true"
            - "--certificatesresolvers.myresolver.acme.storage=/data/acme.json"
            - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
            - "--certificatesresolvers.myresolver.acme.email=jakub.hajek+webinar@traefik.io"
EOF

Then, create a Kustomization file that adds the resources.

cat > ./apps/production/traefik/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: traefik-production
resources:
  - namespace.yaml
  - ../../base/traefik

patchesStrategicMerge:
  - traefik-patch.yaml
EOF

Step 5: Customize Traefik Proxy configurations for your staging cluster

Create a namespace for your Traefik Proxy deployment on a staging cluster.

cat > ./apps/staging/traefik/namespace.yaml << EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: traefik-staging
EOF

Create a Traefik Proxy patch for your staging cluster.

cat > ./apps/staging/traefik/traefik-patch.yaml << EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
spec:
  template:
    spec:
      containers:
        - name: traefik
          args:
            - "--entryPoints.web.address=:8000/tcp"
            - "--entryPoints.websecure.address=:8443/tcp"
            - "--entryPoints.traefik.address=:9000/tcp"
            - "--api=true"
            - "--api.dashboard=true"
            - "--ping=true"
            - "--providers.kubernetescrd"
            - "--providers.kubernetescrd.allowCrossNamespace=true"
            - "--certificatesresolvers.myresolver.acme.storage=/data/acme.json"
            - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
            - "--certificatesresolvers.myresolver.acme.email=jakub.hajek+webinar@traefik.io"
            - "--certificatesresolvers.myresolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
EOF

Create a Kustomization resource for deploying your Traefik Proxy-related resources.

cat > ./apps/staging/traefik/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: traefik-staging
resources:
  - namespace.yaml
  - ../../base/traefik

patchesStrategicMerge:
  - traefik-patch.yaml
EOF

Step 6: Create the Traefik Proxy CRDs

Traefik Proxy requires its Custom Resource Definitions to be deployed on each cluster. The following command creates a common resource (including Traefik's CRDs) that will be deployed on each cluster.

cat > ./infrastructure/crds/traefik-crds.yaml << EOF
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: traefik-crds
  namespace: flux-system
spec:
  interval: 30m
  url: https://github.com/traefik/traefik-helm-chart.git
  ref:
    tag: v10.3.0
  ignore: |
    # exclude all
    /*
    # path to crds
    !/traefik/crds/
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: traefik-api-crds
  namespace: flux-system
spec:
  interval: 15m
  prune: false
  sourceRef:
    kind: GitRepository
    name: traefik-crds
    namespace: flux-system
  healthChecks:
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: ingressroutes.traefik.containo.us
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: ingressroutetcps.traefik.containo.us
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: ingressrouteudps.traefik.containo.us
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: middlewares.traefik.containo.us
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: middlewaretcps.traefik.containo.us
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: serverstransports.traefik.containo.us
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: tlsoptions.traefik.containo.us
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: tlsstores.traefik.containo.us
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: traefikservices.traefik.containo.us
EOF

Create a Kustomization file for your Traefik Proxy CRDs.

cat > ./infrastructure/crds/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: flux-system
resources:
  - traefik-crds.yaml
EOF

Create a Kustomization file that deploys the files contained in the crds directory.

cat > ./infrastructure/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- crds
EOF

Step 7: Create the initial Flux configuration for your production cluster

cat > ./clusters/production/apps.yaml << EOF
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: infrastructure
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/production
  prune: true
  wait: true
EOF
cat > ./clusters/production/infrastructure.yaml << EOF
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure
  prune: true
  wait: true
EOF

Step 8: Create the initial Flux configuration for your staging cluster

cat > ./clusters/staging/apps.yaml << EOF
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: infrastructure
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/staging
  prune: true
  wait: true
EOF
cat > ./clusters/staging/infrastructure.yaml << EOF
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure
  prune: true
  wait: true
EOF

Step 9: Create the initial commit

Once all the initial configurations have been created, you can commit and push the code to your repository; a sample command sequence follows the directory tree below. The next step is to bootstrap the clusters with the Flux CLI. At this point, your repository structure should look like this:

β”œβ”€β”€ apps
β”‚   β”œβ”€β”€ base
β”‚   β”‚   └── traefik
β”‚   β”‚       β”œβ”€β”€ kustomization.yaml
β”‚   β”‚       β”œβ”€β”€ rbac.yaml
β”‚   β”‚       β”œβ”€β”€ svc.yaml
β”‚   β”‚       └── traefik.yaml
β”‚   β”œβ”€β”€ production
β”‚   β”‚   └── traefik
β”‚   β”‚       β”œβ”€β”€ kustomization.yaml
β”‚   β”‚       β”œβ”€β”€ namespace.yaml
β”‚   β”‚       └── traefik-patch.yaml
β”‚   └── staging
β”‚       └── traefik
β”‚           β”œβ”€β”€ kustomization.yaml
β”‚           β”œβ”€β”€ namespace.yaml
β”‚           └── traefik-patch.yaml
β”œβ”€β”€ clusters
β”‚   β”œβ”€β”€ production
β”‚   β”‚   β”œβ”€β”€ apps.yaml
β”‚   β”‚   └── infrastructure.yaml
β”‚   └── staging
β”‚       β”œβ”€β”€ apps.yaml
β”‚       └── infrastructure.yaml
└── infrastructure
    β”œβ”€β”€ crds
    β”‚   β”œβ”€β”€ kustomization.yaml
    β”‚   └── traefik-crds.yaml
    β”œβ”€β”€ kustomization.yaml
    └── sources
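
A minimal commit-and-push sequence, assuming your default branch is named main (the bootstrap commands below use --branch=main), looks like this:

git add -A
git commit -m "Add initial cluster configuration"
git push origin main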

Step 10: Bootstrap your clusters

Once you have created the configuration files, you can bootstrap Flux on both clusters. Before running the bootstrap command, make sure to export the following environment variables.

export GITHUB_TOKEN=<your-personal-access-token>
export GITHUB_USER=<your-github-username>
export GITHUB_REPO=flux-traefik-demo

These variables are necessary for Flux to connect to your GitHub account using a personal access token (PAT). The PAT needs the appropriate permissions (admin rights on the repository) to create the deploy keys. GITHUB_USER refers to your username, and GITHUB_REPO refers to the repository you created earlier.

Change the Kubernetes context to the staging cluster and then execute the command that will bootstrap Flux on a staging cluster.

flux bootstrap github \
--branch=main \
--context=t1.aws.traefiklabs.tech \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--path=clusters/staging \
--components-extra=image-reflector-controller,image-automation-controller  \
--personal

Run the following command to display the status of the provisioned Flux resources. You can also manually explore the other resources declared in the Kubernetes manifests you created.

flux get all

Once the staging cluster has been correctly configured, switch the Kubernetes context to the production cluster and execute the following command.

flux bootstrap github \
--branch=main \
--context=t2.aws.traefiklabs.tech \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--path=clusters/production \
--components-extra=image-reflector-controller,image-automation-controller  \
--personal

Explore the resources that have been configured on your cluster. You can also interact with the Flux CLI to explore the status of all your Flux resources.
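
For instance, a couple of kubectl commands, using the namespaces created earlier, can confirm that Traefik Proxy is running and exposed through its load balancer on the production cluster:

kubectl get pods -n traefik-production
kubectl get svc traefik -n traefik-production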

Step 11: Create a base deployment for a whoami application

Create a base configuration for the whoami application. The whoami application is a simple demo application, a kind of echo server we use for testing purposes, that displays information about the requests it receives.

mkdir -pv ./apps/{base,staging,production}/whoami
cat > ./apps/base/whoami/deployment.yaml << EOF
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoamiv1
  labels:
    name: whoamiv1
spec:
  replicas: 1
  selector:
    matchLabels:
      task: whoamiv1
  template:
    metadata:
      labels:
        task: whoamiv1
    spec:
      containers:
        - name: whoamiv1
          image: traefik/traefikee-webapp-demo:v2
          args:
            - -ascii
            - -name=FOO
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /ping
              port: 80
            failureThreshold: 1
            initialDelaySeconds: 2
            periodSeconds: 3
            successThreshold: 1
            timeoutSeconds: 2
          resources:
            requests:
              cpu: 10m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: whoamiv1
spec:
  ports:
    - name: http
      port: 80
  selector:
    task: whoamiv1
EOF

Create an IngressRoute object that exposes the whoami application. IngressRoute is one of Traefik's CRDs; it defines routers that expose applications externally. You can learn more in the official Traefik Proxy documentation on Kubernetes IngressRoute.

cat > ./apps/base/whoami/ingressroute.yaml << EOF
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(\`fix.me\`)
      services:
        - kind: Service
          name: whoamiv1
          port: 80
  tls:
    certResolver: myresolver
EOF
cat > ./apps/base/whoami/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - ingressroute.yaml
EOF

Step 12: Create a custom release for your staging environment

Create a namespace for the whoami application to be deployed.

cat > ./apps/staging/whoami/namespace.yaml <<EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: whoami-staging
EOF

Create a patch for the whoami application. The changes we are adding here are strictly related to the staging environment. It is a tiny change: we only bump the replica count and change the displayed name, but you can add more environment-specific updates, such as mounting different secrets or adding staging environment variables, if you choose.

cat > ./apps/staging/whoami/whoami-patch.yaml << EOF
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoamiv1
spec:
  replicas: 4
  template:
    spec:
      containers:
        - name: whoamiv1
          args:
            - -ascii
            - -name=STAGING
EOF

Create a patch that updates the Host rule used to access the whoami application.

cat > ./apps/staging/whoami/ingressroute-patch.yaml <<EOF
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  routes:
   - kind: Rule
     match: Host(\`whoami.t1.demo.traefiklabs.tech\`)
     services:
        - kind: Service
          name: whoamiv1
          port: 80
EOF

Create a Kustomization configuration to deploy the whoami application on your staging cluster.

cat > ./apps/staging/whoami/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: whoami-staging
resources:
  - namespace.yaml
  - ../../base/whoami

patchesStrategicMerge:
  - whoami-patch.yaml
  - ingressroute-patch.yaml
EOF

Step 13: Create a custom release for your production environment

Create a namespace for the whoami application to be deployed to production.

cat > ./apps/production/whoami/namespace.yaml << EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: whoami-production
EOF

Create a patch for the whoami application. It is a tiny change that shows the difference between the staging and production examples. In a real use case, you would add more production-specific changes, such as mounting a production ConfigMap, secrets, or environment variables.

cat > ./apps/production/whoami/whoami-patch.yaml << EOF
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoamiv1
spec:
  replicas: 8
  template:
    spec:
      containers:
        - name: whoamiv1
          args:
            - -ascii
            - -name=PRODUCTION
EOF

Create a patch that updates the Host rule for the whoami application. The patch only changes the value of the Host matching rule.

cat > ./apps/production/whoami/ingressroute-patch.yaml << EOF
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  routes:
   - kind: Rule
     match: Host(\`whoami.t2.demo.traefiklabs.tech\`)
     services:
        - kind: Service
          name: whoamiv1
          port: 80
EOF

Create a Kustomization configuration to deploy the whoami application to your production cluster.

cat > ./apps/production/whoami/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: whoami-production
resources:
  - namespace.yaml
  - ../../base/whoami

patchesStrategicMerge:
  - whoami-patch.yaml
  - ingressroute-patch.yaml
EOF
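
As before, these manifests only exist locally until you push them. Commit and push, and then, once Flux has reconciled the changes and DNS for the hosts points at the load balancers, you can verify the staging release with a quick request (the hostname matches the staging IngressRoute patch above; -k skips certificate verification, since the staging resolver uses Let's Encrypt's untrusted staging CA):

git add -A
git commit -m "Add whoami application"
git push origin main

curl -k https://whoami.t1.demo.traefiklabs.tech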

And there you have it. If you have followed these steps, you have successfully deployed Traefik Proxy with Flux using a GitOps approach, making it easier to maintain multiple configurations in Kubernetes.

Let’s summarize what we have already achieved. We started with two built-from-scratch clusters where there were no resources created. Then we created our directory structure and all the necessary Kubernetes manifests. We:

  • Deployed Traefik Proxy with the required resources (service account, RBAC)
  • Deployed a test application
  • Created the IngressRoute to reach the test application

We created a base configuration and then used Kustomize to create overlays for our staging and production environments. We used a feature called patchesStrategicMerge to update our configuration for these environments accordingly. Again, the base configuration was inherited, and only the changes were applied and merged to produce the final manifests.

Our entire configuration was done locally, without any imperative interactions with our clusters. Once we were happy with the configuration, we pushed the changes to the repo and then started to bootstrap Flux on our two clusters.

Once Flux was correctly started on each cluster, it was able to pull the configuration from the repository and apply it to the appropriate cluster.

Every change we want to make from now on has to go through the repository: you edit or update the files, commit them to the repo, and Flux takes care of deploying them to the clusters. You can also ask Flux to sync immediately, as shown below.
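
By default, Flux polls the repository on the interval configured in the Kustomization resources (10 minutes in our manifests). A minimal sketch for triggering an immediate sync with the Flux CLI:

flux reconcile source git flux-system
flux reconcile kustomization apps --with-source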

What are the next steps?

I definitely encourage you to look at Traefik Proxy's more advanced load-balancing techniques; check out this webinar to learn more. There is a lot more to explore around Flux and designing scalable infrastructures. To participate in our community, get involved in our forum, where you can ask questions, post answers, and engage in discussions. I hope to see you there!
