Fyodor

Aug 19, 2024

Deployment to Kubernetes with ArgoCD


In my journey to broaden my DevOps skills, I took inspiration from @kubeden and created a project centered around ArgoCD.

Provided below are the Objectives of the project.

Objectives

1. Deploy a K8S cluster and a container registry on DigitalOcean with Terraform

DigitalOcean is a cloud service provider, just like AWS. The difference is that it is more affordable and less complex (since it offers fewer services at the moment).

My terraform configuration is organized into three files:

Terraform/
|-- main.tf
|-- container_registry_cluster.tf
|-- variables.tf
  • main.tf: Configures the DigitalOcean provider and the access token (the same concept as access keys in AWS).
  • container_registry_cluster.tf: Defines the K8S cluster and container registry resources.
  • variables.tf: Stores sensitive information like the DigitalOcean API token.

Note! You need to allow K8S to access the container registry in order to pull images from it later on. This can be done via the following command:

doctl registry kubernetes-manifest registry.digitalocean.com/$CONTAINER_REG_NAME | kubectl apply -f -
// main.tf
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

provider "digitalocean" {
  token = var.do_token
}

data "digitalocean_ssh_key" "terraform" {
  name = "ssh_main_key"
}
// container_registry_cluster.tf
resource "digitalocean_container_registry" "container_reg" {
    name = "container-registry-test191202"
    subscription_tier_slug = "starter"
    region = "ams3"
}

resource "digitalocean_kubernetes_cluster" "cluster_test" {
    name = "test-cluster" 
    region = "ams3"
    version = "1.30.2-do.0"

    node_pool {
      name = "worker-pool"
      size = "s-2vcpu-2gb"
      node_count = 1
    }
}
// variables.tf
variable "do_token" {
  type = string
  description = "DO personal access token"
  default = "ENTER_YOUR_PERSONAL_TOKEN_HERE"
}
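
With these three files in place, provisioning is the standard Terraform workflow, run from the Terraform/ directory:

terraform init
terraform plan
terraform apply

// init downloads the DigitalOcean provider, plan previews the changes, and apply creates the cluster and registry.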

Note! Anyone with the token will be able to access your account! Exposing the token like this is bad practice; I only did it for simplicity and ease of use, and my TF files will not be committed to GitHub.
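
A safer alternative, shown here only as a sketch, is to remove the default value and pass the token through an environment variable, which Terraform picks up automatically for any variable prefixed with TF_VAR_:

export TF_VAR_do_token="YOUR_PERSONAL_TOKEN"
terraform apply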

2. Deploy Nginx ingress controller and Certificate Manager on your cluster

Deploying Nginx Ingress Controller:

Nginx Ingress Controller allows you to manage HTTP and HTTPS routing for your services within the cluster. By using a single Load Balancer, you can route traffic to different services based on rules defined in the ingress.

Without an Ingress, you would otherwise need a separate Load Balancer for each service that clients access. This is inefficient and more costly.

Deploying the Nginx Ingress Controller is quite simple with the use of Helm:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
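
Once the chart is installed, DigitalOcean provisions a Load Balancer for the controller's Service. You can verify it and grab the external IP that your domain's DNS should point at:

kubectl get svc -n ingress-nginx ingress-nginx-controller

// The EXTERNAL-IP column is populated once the Load Balancer is ready.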

We also need to configure the ingress rules, based on which Nginx will forward traffic within the K8S cluster.

// node-express-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node-express-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - djingle.xyz
    secretName: nodeapp-tls
  ingressClassName: nginx
  rules:
  - host: djingle.xyz
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: node-express-app-service
            port:
              number: 80

This configuration allows Nginx to route traffic based on the djingle.xyz domain to the node-express-app-service.
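
To apply the rule (the manifest is saved as node-express-ingress.yaml, per the comment above):

kubectl apply -f node-express-ingress.yaml
kubectl get ingress node-express-ingress

// The ADDRESS column should eventually show the Load Balancer IP.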

Deploying Cert Manager

Cert Manager makes our life easier by automating the management and issuance of TLS certificates. This is important for securing our application with HTTPS.

To deploy Cert Manager, use the following commands:

helm repo add jetstack https://charts.jetstack.io --force-update

helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.15.3 \
  --set crds.enabled=true

After deployment, we need to configure the Issuer and the Certificate.

Issuer:

All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request. An Issuer defines how certificates should be generated by specifying the certificate authority (CA) and how to verify ownership of a domain (e.g., through ACME for Let's Encrypt as we did in our case).

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: [email protected]
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-issuer-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
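
Assuming the manifest is saved as cluster-issuer.yaml (a filename of my choosing), apply it and confirm the issuer becomes Ready before requesting certificates:

kubectl apply -f cluster-issuer.yaml
kubectl get clusterissuer letsencrypt-issuer

// The READY column should show True.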

Certificate:

A Certificate resource requests a TLS certificate from a specified Issuer and stores the resulting certificate in a Kubernetes Secret. It specifies the domain names to secure and the Issuer to use.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: node-app-cert
spec:
  secretName: nodeapp-tls
  dnsNames:
    - *INSERT_DNS_NAME*  # e.g. djingle.xyz, matching the host in the Ingress
  issuerRef:
    name: letsencrypt-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: ClusterIssuer
    # This is optional since cert-manager will default to this value, however
    # if you are using an external issuer, change this to that issuer group.
    # group: cert-manager.io
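
After applying the Certificate (again assuming a filename, node-app-cert.yaml), cert-manager completes the ACME HTTP-01 challenge and stores the signed certificate in the nodeapp-tls secret. Progress can be watched with:

kubectl apply -f node-app-cert.yaml
kubectl get certificate node-app-cert

// READY turns True once the challenge succeeds and the secret is populated.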

3. Deploy ArgoCD to your new cluster

Before I get on with the deployment, here are a few words on ArgoCD and what is so fascinating about it.

A traditional CI/CD pipeline typically follows these steps: first, code is committed to the repository, triggering automated tests. Once the tests pass, a Docker image is built and the corresponding YAML deployment manifest is updated. Finally, Jenkins (or a similar CI/CD tool) accesses the Kubernetes cluster and applies the new changes using kubectl apply. (This is a simplified overview of a common CI/CD workflow.)

This is not practical for a number of reasons:

  1. Security is compromised, since we hand an external agent (Jenkins) the access credentials to the cluster.

  2. Even then, once the new manifest is applied, Jenkins has no way of knowing whether everything is still running without discrepancies. The whole cluster might be down, and Jenkins would not know a thing.

One of the cool things about ArgoCD is that it is deployed directly in the cluster. Contrary to Jenkins, it has visibility over all the components and their status.

Install ArgoCD with these commands:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

!! Note that this only deploys ArgoCD within the cluster !! It is not yet fully working, as we still need to configure it, which will be done later on in step 7.
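
You can confirm the core components came up before moving on:

kubectl get pods -n argocd

// Expect argocd-server, argocd-repo-server, argocd-application-controller and the other components in Running state.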

4. Create a simple hello-world nodeJS express app

Before we create the application, we first need to initialize npm, and then install express.

NPM stands for 'Node Package Manager'. Think of it as a tool we use to install different dependencies (libraries) in order to make our life easier.

To initialize npm and install express as a dependency, we need to run the following commands:

npm init -y
npm install express --save

Creating the nodeJS application is quite simple, as it consists of a few lines of code.

// app.js
// Note: the ES module 'import' syntax requires "type": "module" in package.json;
// with the default CommonJS setup, use const express = require('express') instead.
import express from 'express'

const app = express()

app.get("/", async (req, res) => {
    res.send("Hello World!")
})

app.listen(8080, () => {
    console.log('Server is running on port 8080!')
})
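
You can sanity-check the app locally before containerizing it:

node app.js
curl http://localhost:8080/

// Expected response: Hello World!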

5. Push the Express App to GitHub and Create a Workflow to Automatically Build and Push to your Container Registry

In this case, I will use GitHub Actions for the workflow. GitHub Actions runs sets of commands (workflows) in response to repository events, such as a commit being pushed.

To configure such actions, we need to create the following directory in the root of the repo -> .github/workflows

Inside the newly created directory, I have created a 'build.yaml' file which contains the following operations:

// build.yaml
name: Build And Push Docker Container

on:
  push:
    branches:
      - main 

jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - 
        name: Checkout the repo
        uses: actions/checkout@v4
      - 
        name: Build image
        run: docker build -t node-express.js .
      - 
        name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
      -
        name: Log in to DO Container Registry
        run: doctl registry login --expiry-seconds 600
      -
        name: Tag image
        run: docker tag node-express.js registry.digitalocean.com/${{ vars.CONTAINER_REGISTRY_NAME }}/node-express.js
      -
        name: Push image
        run: docker push registry.digitalocean.com/${{ vars.CONTAINER_REGISTRY_NAME }}/node-express.js:latest

All this does is build an image and then push it to the DigitalOcean Container Registry.

We need to provide the access token, which in this case is done via a repository secret (DIGITALOCEAN_ACCESS_TOKEN); the registry name is supplied through a repository variable (CONTAINER_REGISTRY_NAME).
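
The secret and the variable can be created in the repository settings, or, as a sketch, with the GitHub CLI:

gh secret set DIGITALOCEAN_ACCESS_TOKEN
gh variable set CONTAINER_REGISTRY_NAME --body "container-registry-test191202"

// gh secret set prompts for the value; the registry name matches the one created with Terraform.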

6. Create a different repo for K8S files and add Kustomize configuration for your Express App

For reference, my K8S repo for this project can be accessed HERE
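
The repo contents are not reproduced here, but to give an idea, a minimal Kustomize setup for this app could look like the sketch below (hypothetical file names; the dev/ path matters, because the ArgoCD Application in step 7 points at it):

// Hypothetical layout: base/ holds the Deployment and Service manifests,
// dev/ is the overlay that ArgoCD watches.
cat <<'EOF' > dev/kustomization.yaml
resources:
  - ../base
images:
  - name: registry.digitalocean.com/CONTAINER_REGISTRY_NAME/node-express.js
    newTag: latest
EOF

// Render the manifests locally to check the output before committing:
kubectl kustomize dev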

7. Deploy the app to ArgoCD - The star of the show

The final step is to deploy the app using ArgoCD. The application.yaml manifest instructs ArgoCD to monitor a specific Git repository for changes.

That's it! From this point on, ArgoCD listens to the repo for any changes and configures the cluster, effectively syncing it to the state defined in the GitHub repository.

// application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-argo-application
  namespace: argocd
spec:
  project: default

  source:
    repoURL: https://github.com/Fedd911/k8S-manifest-argoCD 
    targetRevision: HEAD 
    path: dev
  destination:
    server: https://kubernetes.default.svc
    namespace: default

  syncPolicy:
    automated:
      selfHeal: true
      prune: true

Key configurations in the manifest:

  • repoURL -> The target repository which ArgoCD watches in order to sync the cluster.
  • destination -> The target cluster. In this case we can use the in-cluster address (https://kubernetes.default.svc) since ArgoCD runs inside the cluster it manages.
  • syncPolicy.automated -> selfHeal reverts manual changes made directly to the cluster, and prune deletes resources that have been removed from the repo.
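
To register the application with ArgoCD, apply the manifest (saved as application.yaml, per the comment above):

kubectl apply -f application.yaml
kubectl get applications -n argocd

// The SYNC STATUS column moves to Synced once ArgoCD has reconciled the repo.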

ArgoCD UI

ArgoCD has a cool UI which can be accessed locally by port-forwarding the service to a local port.

In order to execute that, we need to run the following command:

kubectl port-forward svc/argocd-server -n argocd 9090:443

Since we are forwarding to local port 9090, we need to access the app via that port: https://localhost:9090/ (argocd-server serves a self-signed certificate by default, so the browser will show a warning you can accept).

Once presented with the login window, you need to use the following credentials:

Username: admin (by default)

The password is stored locally in a secret on the cluster.

In order to access it, we need to run the following command:

kubectl get secret argocd-initial-admin-secret -n argocd -o yaml

// This extracts the secret in a .yaml format.

Then the password needs to be decoded.

example password: cGFzc3dvcmQ=

// Take the value of the password and decode it:
	
echo cGFzc3dvcmQ= | base64 --decode

The output is the password which should be used to log in to the ArgoCD UI.
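
The extract-and-decode steps can also be combined into a single command:

kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 --decode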

If your cluster is unhealthy or the application is down, you will be able to see it directly in the WebUI.

Conclusion

I had a lot of fun working on this project. It gave me hands-on experience with ArgoCD, Cert Manager, and Nginx Ingress Controller.

The project itself is quite simple. Although it barely scratches the surface, it is more than sufficient to demonstrate the power of GitOps and the benefits of working with these tools.

Useful Resources

ArgoCD Tutorial for Beginners | HERE

Nginx Ingress Controller and Cert Manager setup 2024 | HERE

Getting started with Kustomize | HERE
