Tag: ci cd

  • Mastering Prow: A Guide to Developing Your Own Plugin for Kubernetes CI/CD Workflow

    Continuous Integration and Continuous Delivery (CI/CD) pipelines are essential components of modern software development, especially in the world of Kubernetes and containerized applications. To facilitate these pipelines, many organizations use Prow, a CI/CD system built specifically for Kubernetes. While Prow offers a rich set of features out of the box, you may need to develop your own plugins to tailor the system to your organization’s requirements. In this guide, we’ll explore the world of Prow plugin development and show you how to get started.

    Prerequisites

    Before diving into Prow plugin development, ensure you have the following prerequisites:

    • Basic Knowledge of Kubernetes and CI/CD Concepts: Familiarity with Kubernetes concepts such as Pods, Deployments, and Services, as well as understanding CI/CD principles, will be beneficial for understanding Prow plugin development.
    • Access to a Kubernetes Cluster: You’ll need access to a Kubernetes cluster for testing your plugins. If you don’t have one already, you can set up a local cluster using tools like Minikube or use a cloud provider’s managed Kubernetes service.
    • Prow Setup: Install and configure Prow in your Kubernetes cluster. For a walkthrough, see Velotio Technologies – Getting Started with Prow: A Kubernetes-Native CI/CD Framework.
    • Development Environment Setup: Ensure you have Git, Go, and Docker installed on your local machine for developing and testing Prow plugins. You’ll also need to configure your environment to interact with your organization’s Prow setup.

    The Need for Custom Prow Plugins

    While Prow provides a wide range of built-in plugins, your organization’s Kubernetes workflow may have specific requirements that aren’t covered by these defaults. This is where developing custom Prow plugins comes into play. Custom plugins allow you to extend Prow’s functionality to cater to your needs. Whether automating workflows, integrating with other tools, or enforcing custom policies, developing your own Prow plugins gives you the power to tailor your CI/CD pipeline precisely.

    Getting Started with Prow Plugin Development

    Developing a custom Prow plugin may seem daunting, but with the right approach and tools, it can be a rewarding experience. Here’s a step-by-step guide to get you started:

    1. Set Up Your Development Environment

    Before diving into plugin development, you need to set up your development environment. You will need Git, Go, and access to a Kubernetes cluster for testing your plugins. Ensure you have the necessary permissions to make changes to your organization’s Prow setup.

    2. Choose a Plugin Type

    Prow supports various plugin types, including postsubmits, presubmits, triggers, and utilities. Choose the type that best fits your use case.

    • Postsubmits: These plugins are executed after the code is merged and are often used for tasks like publishing artifacts or creating release notes.
    • Presubmits: Presubmit plugins run before code is merged, typically used for running tests and ensuring code quality.
    • Triggers: Trigger plugins allow you to trigger custom jobs based on specific events or criteria.
    • Utilities: Utility plugins offer reusable functions and utilities for other plugins.
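    For context, presubmit and postsubmit jobs are declared in Prow’s job configuration. A minimal presubmit entry might look like the following sketch (the org/repo, job name, and image are placeholders, not part of the original setup):

```yaml
presubmits:
  my-org/my-repo:            # placeholder org/repo
  - name: unit-tests         # hypothetical job name
    always_run: true         # run on every PR
    decorate: true           # use Prow's pod utilities (clone, logs, etc.)
    spec:
      containers:
      - image: golang:1.21   # any image with your toolchain
        command: ["go", "test", "./..."]
```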

    3. Create Your Plugin

    Once you’ve chosen a plugin type, it’s time to create it. Below is an example of a simple Prow plugin written in Go, named comment-plugin.go. It will create a comment on a pull request each time an event is received.

    This code sets up a basic HTTP server that listens for GitHub events and handles them by creating a comment using the GitHub API. Customize this code to fit your specific use case.

    package main
    
    import (
        "encoding/json"
        "flag"
        "net/http"
        "os"
        "strconv"
        "time"
    
        "github.com/sirupsen/logrus"
        "k8s.io/test-infra/pkg/flagutil"
        "k8s.io/test-infra/prow/config"
        "k8s.io/test-infra/prow/config/secret"
        prowflagutil "k8s.io/test-infra/prow/flagutil"
        configflagutil "k8s.io/test-infra/prow/flagutil/config"
        "k8s.io/test-infra/prow/github"
        "k8s.io/test-infra/prow/interrupts"
        "k8s.io/test-infra/prow/logrusutil"
        "k8s.io/test-infra/prow/pjutil"
        "k8s.io/test-infra/prow/pluginhelp"
        "k8s.io/test-infra/prow/pluginhelp/externalplugins"
    )
    
    const pluginName = "comment-plugin"
    
    type options struct {
        port int
    
        config                 configflagutil.ConfigOptions
        dryRun                 bool
        github                 prowflagutil.GitHubOptions
        instrumentationOptions prowflagutil.InstrumentationOptions
    
        webhookSecretFile string
    }
    
    type server struct {
        tokenGenerator func() []byte
        botUser        *github.UserData
        email          string
        ghc            github.Client
        log            *logrus.Entry
        repos          []github.Repo
    }
    
    func helpProvider(_ []config.OrgRepo) (*pluginhelp.PluginHelp, error) {
        pluginHelp := &pluginhelp.PluginHelp{
           Description: `The sample plugin`,
        }
        return pluginHelp, nil
    }
    
    func (o *options) Validate() error {
        return nil
    }
    
    func gatherOptions() options {
        o := options{config: configflagutil.ConfigOptions{ConfigPath: "./config.yaml"}}
        fs := flag.NewFlagSet(os.Args[0], flag.ExitOnError)
        fs.IntVar(&o.port, "port", 8888, "Port to listen on.")
        fs.BoolVar(&o.dryRun, "dry-run", false, "Dry run for testing. Uses API tokens but does not mutate.")
        fs.StringVar(&o.webhookSecretFile, "hmac-secret-file", "/etc/hmac", "Path to the file containing GitHub HMAC secret.")
        for _, group := range []flagutil.OptionGroup{&o.github} {
           group.AddFlags(fs)
        }
        fs.Parse(os.Args[1:])
        return o
    }
    
    func main() {
        o := gatherOptions()
        if err := o.Validate(); err != nil {
           logrus.Fatalf("Invalid options: %v", err)
        }
    
        logrusutil.ComponentInit()
        log := logrus.StandardLogger().WithField("plugin", pluginName)
    
        if err := secret.Add(o.webhookSecretFile); err != nil {
           logrus.WithError(err).Fatal("Error starting secrets agent.")
        }
    
        gitHubClient, err := o.github.GitHubClient(o.dryRun)
        if err != nil {
           logrus.WithError(err).Fatal("Error getting GitHub client.")
        }
    
        email, err := gitHubClient.Email()
        if err != nil {
           log.WithError(err).Fatal("Error getting bot e-mail.")
        }
    
        botUser, err := gitHubClient.BotUser()
        if err != nil {
           logrus.WithError(err).Fatal("Error getting bot name.")
        }
        repos, err := gitHubClient.GetRepos(botUser.Login, true)
        if err != nil {
           log.WithError(err).Fatal("Error listing bot repositories.")
        }
        serv := &server{
           tokenGenerator: secret.GetTokenGenerator(o.webhookSecretFile),
           botUser:        botUser,
           email:          email,
           ghc:            gitHubClient,
           log:            log,
           repos:          repos,
        }
    
        health := pjutil.NewHealthOnPort(o.instrumentationOptions.HealthPort)
        health.ServeReady()
    
        mux := http.NewServeMux()
        mux.Handle("/", serv)
        externalplugins.ServeExternalPluginHelp(mux, log, helpProvider)
        logrus.Info("starting server " + strconv.Itoa(o.port))
        httpServer := &http.Server{Addr: ":" + strconv.Itoa(o.port), Handler: mux}
        defer interrupts.WaitForGracefulShutdown()
        interrupts.ListenAndServe(httpServer, 5*time.Second)
    }
    
    func (s *server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        logrus.Info("inside http server")
        // ValidateWebhook checks the HMAC signature before we trust the payload.
        _, _, payload, ok, _ := github.ValidateWebhook(w, r, s.tokenGenerator)
        if !ok {
            return
        }
        logrus.Info(string(payload))
        w.Write([]byte("Event received. Have a nice day."))
        if err := s.handleEvent(payload); err != nil {
            logrus.WithError(err).Error("Error handling event.")
        }
    }
    
    func (s *server) handleEvent(payload []byte) error {
        logrus.Info("inside handler")
        var pr github.PullRequestEvent
        if err := json.Unmarshal(payload, &pr); err != nil {
           return err
        }
        logrus.Info(pr.Number)
        if err := s.ghc.CreateComment(pr.PullRequest.Base.Repo.Owner.Login, pr.PullRequest.Base.Repo.Name, pr.Number, "comment from comment-plugin"); err != nil {
           return err
        }
        return nil
    }
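    Before deploying, you may want to exercise the handler locally. github.ValidateWebhook verifies the X-Hub-Signature header, an HMAC-SHA1 of the request body keyed with your webhook secret. A small standalone helper like the one below (the secret and payload values are made up for illustration) lets you generate a valid signature for a test request:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// sign returns the X-Hub-Signature header value GitHub would send
// for the given payload, keyed with the webhook HMAC secret.
func sign(secret, payload []byte) string {
	mac := hmac.New(sha1.New, secret)
	mac.Write(payload)
	return "sha1=" + hex.EncodeToString(mac.Sum(nil))
}

func main() {
	payload := []byte(`{"action":"opened","number":1}`)
	// Pass this value in the X-Hub-Signature header of a test POST.
	fmt.Println(sign([]byte("supersecret"), payload))
}
```

    You can then POST the same payload to the plugin with curl, setting the X-GitHub-Event: pull_request header and the computed X-Hub-Signature header, and watch the plugin logs.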

    4. Deploy Your Plugin

    To deploy your custom Prow plugin, you will need to create a Docker image and deploy it into your Prow cluster.

    FROM golang as app-builder
    WORKDIR /app
    RUN apt-get update && apt-get install -y git
    COPY . .
    RUN CGO_ENABLED=0 go build -o main
    
    FROM alpine:3.9
    RUN apk add ca-certificates git
    COPY --from=app-builder /app/main /app/custom-plugin
    ENTRYPOINT ["/app/custom-plugin"]

    docker build -t jainbhavya65/custom-plugin:v1 .

    docker push jainbhavya65/custom-plugin:v1

    Deploy the Docker image using Kubernetes deployment:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: comment-plugin
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: comment-plugin
      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
        type: RollingUpdate
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: comment-plugin
        spec:
          containers:
          - args:
            - --github-token-path=/etc/github/oauth
            - --hmac-secret-file=/etc/hmac-token/hmac
            - --port=80
            image: <IMAGE>
            imagePullPolicy: Always
            name: comment-plugin
            ports:
            - containerPort: 80
              protocol: TCP
            volumeMounts:
            - mountPath: /etc/github
              name: oauth
              readOnly: true
            - mountPath: /etc/hmac-token
              name: hmac
              readOnly: true
          volumes:
          - name: oauth
            secret:
              defaultMode: 420
              secretName: oauth-token
          - name: hmac
            secret:
              defaultMode: 420
              secretName: hmac-token

    Create a service for deployment:
    apiVersion: v1
    kind: Service
    metadata:
      name: comment-plugin
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: comment-plugin
      sessionAffinity: None
      type: ClusterIP

    After creating the deployment and service, integrate it into your organization’s Prow configuration. This involves updating your Prow plugin.yaml files to include your plugin and specify when it should run.

    external_plugins:
    - name: comment-plugin
      # No endpoint specified implies "http://{{name}}", which works because
      # the plugin is deployed in the same cluster as Prow. If the plugin runs
      # elsewhere, set the endpoint explicitly.
      events:
      # Only pull_request and issue_comment events are sent to our plugin.
      - pull_request
      - issue_comment

    Conclusion

    Mastering Prow plugin development opens up a world of possibilities for tailoring your Kubernetes CI/CD workflow to meet your organization’s needs. While the initial learning curve may be steep, the benefits of custom plugins in terms of automation, efficiency, and control are well worth the effort.

    Remember that the key to successful Prow plugin development lies in clear documentation, thorough testing, and collaboration with your team to ensure that your custom plugins enhance your CI/CD pipeline’s functionality and reliability. As Kubernetes and containerized applications continue to evolve, Prow will remain a valuable tool for managing your CI/CD processes, and your custom plugins will be the secret sauce that sets your workflow apart from the rest.

  • Ensure Continuous Delivery On Kubernetes With GitOps’ Argo CD

    What is GitOps?

    GitOps is a Continuous Deployment model for cloud-native applications. In GitOps, the Git repositories that contain the declarative descriptions of the infrastructure are considered the single source of truth for the desired state of the system, and we need an automated way to ensure that the deployed state of the system always matches the state defined in the Git repository. All changes (deployment/upgrade/rollback) to the environment are triggered by changes (commits) made to the Git repository.

    The artifacts that we run on any environment always have a corresponding code for them on some Git repositories. Can we say the same thing for our infrastructure code?

    Infrastructure as code tools, completely declarative orchestration tools like Kubernetes allow us to represent the entire state of our system in a declarative way. GitOps intends to make use of this ability and make infrastructure-related operations more developer-centric.

    Role of Infrastructure as Code (IaC) in GitOps

    The ability to represent the infrastructure as code is at the core of GitOps. But just having versioned controlled infrastructure as code doesn’t mean GitOps, we also need to have a mechanism in place to keep (try to keep) our deployed state in sync with the state we define in the Git repository.

    Infrastructure as Code is necessary but not sufficient to achieve GitOps

    GitOps does pull-based deployments

    Most of the deployment pipelines we see today push changes to the deployed environment. For example, if we need to upgrade our application to a newer version, we update its Docker image tag in some repository, which triggers a deployment pipeline that updates the deployed application. Here, the changes were pushed to the environment. In GitOps, we just update the image tag in the Git repository for that environment, and the changes are pulled into the environment to match the updated state in Git. The magic of keeping the deployed state in sync with the state defined in Git is achieved with the help of operators/agents. The operator is a control loop that identifies differences between the deployed state and the desired state and makes sure they are the same.
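    The control-loop idea can be sketched in a few lines of Go. This is a toy illustration, not Argo CD code: desired state would come from Git and actual state from the cluster, while here both are plain maps of app name to image tag:

```go
package main

import "fmt"

// reconcile compares desired state (from Git) with actual state (from the
// cluster) and returns the actions needed to bring them back in sync.
func reconcile(desired, actual map[string]string) []string {
	var actions []string
	for app, want := range desired {
		// Apply anything that is missing or running the wrong version.
		if got, ok := actual[app]; !ok || got != want {
			actions = append(actions, fmt.Sprintf("apply %s@%s", app, want))
		}
	}
	for app := range actual {
		// Prune anything that no longer exists in Git.
		if _, ok := desired[app]; !ok {
			actions = append(actions, "delete "+app)
		}
	}
	return actions
}

func main() {
	desired := map[string]string{"message-app": "v2"}
	actual := map[string]string{"message-app": "v1", "retired-app": "v1"}
	for _, a := range reconcile(desired, actual) {
		fmt.Println(a)
	}
}
```

    A real operator runs this comparison continuously, which is what makes the deployment pull-based: no pipeline ever needs credentials to push into the cluster.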

    Key benefits of GitOps:

    1. All the changes are verifiable and auditable as they make their way into the system through Git repositories.
    2. Easy and consistent replication of the environment as Git repository is the single source of truth. This makes disaster recovery much quicker and simpler.
    3. More developer-centric experience for operating infrastructure. Also a smaller learning curve for deploying dev environments.
    4. Consistent rollback of application as well as infrastructure state.

    Introduction to Argo CD

    Argo CD is a continuous delivery tool that works on the principles of GitOps and is built specifically for Kubernetes. The product was developed and open-sourced by Intuit and is currently a part of CNCF.

    Key components of Argo CD:

    1. API Server: Just like K8s, Argo CD also has an API server that exposes APIs that other systems can interact with. The API server is responsible for managing the application, repository and cluster credentials, enforcing authentication and authorization, etc.
    2. Repository server: The repository server keeps a local cache of the Git repository, which holds the K8s manifest files for the application. This service is called by other services to get the K8s manifests.  
    3. Application controller: The application controller continuously watches the deployed state of the application and compares it with the desired state, reports to the API server whenever the two are out of sync, and can also take corrective action when automated sync is enabled. It is also responsible for executing user-defined hooks for various lifecycle events of the application.

    Key objects/resources in Argo CD:

    1. Application: Argo CD allows us to represent the instance of the application which we want to deploy in an environment by creating Kubernetes objects of a custom resource definition(CRD) named Application. In the specification of Application type objects, we specify the source (repository) of our application’s K8s manifest files, the K8s server where we want to deploy those manifests, namespace, and other information.
    2. AppProject: Just like Application, Argo CD provides another CRD named AppProject. AppProjects are used to logically group related applications.
    3. Repo Credentials: In the case of private repositories, we need to provide access credentials. For credentials, Argo CD uses the K8s secrets and config map. First, we create objects of secret types and then we update a special-purpose configuration map named argocd-cm with the repository URL and the secret which contains the credentials.
    4. Cluster Credentials: Along with Git repository credentials, we also need to provide the K8s cluster credentials. These credentials are also managed using K8s secret, we are required to add the label argocd.argoproj.io/secret-type: cluster to these secrets.
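    As an illustration of point 4, a cluster credential can be registered with a secret along the lines of the sketch below. The cluster name, server URL, and token here are placeholders, not values from the demo:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: prod-cluster            # placeholder name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: prod-cluster
  server: https://1.2.3.4       # placeholder API server URL
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "insecure": false
      }
    }
```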

    Demo:

    Enough of theory, let’s try out the things we discussed above. For the demo, I have created a simple app named message-app. This app reads a message set in the environment variable named MESSAGE. We will populate the values of this environment variable using a K8s config map. I have kept the K8s manifest files for the app in a separate repository. We have the application and the K8s manifest files ready. Now we are all set to install Argo CD and deploy our application.

    Installing Argo CD:

    For installing Argo CD, we first need to create a namespace named argocd.

    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

    Applying the files from the argo-cd repo directly is fine for demo purposes, but in actual environments, you should copy the files into your own repository before applying them.

    We can see that this command has created the core components and CRDs we discussed earlier in the blog. There are some additional resources as well but we can ignore them for the time being.

    Accessing the Argo CD GUI

    Now that Argo CD is running in our cluster, we can use its GUI, which gives us a graphical representation of our K8s objects. It allows us to view events, pod logs, and other configurations.

    By default, the GUI service is not exposed outside the cluster. Let us update its service type to LoadBalancer so that we can access it from outside.

    kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

    After this, we can use the external IP of the argocd-server service and access the GUI. 

    The initial username is admin and the password is the name of the api-server pod. The password can be obtained by listing the pods in the argocd namespace or directly by this command.

    kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2 

    Deploy the app:

    Now let’s go ahead and create our application for the staging environment for our message app.

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: message-app-staging
      namespace: argocd
      labels:
        environment: staging
      finalizers:
        - resources-finalizer.argocd.argoproj.io
    spec:
      project: default
    
      # Source of the application manifests
      source:
        repoURL: https://github.com/akash-gautam/message-app-manifests.git
        targetRevision: HEAD
        path: manifests/staging
    
      # Destination cluster and namespace to deploy the application
      destination:
        server: https://kubernetes.default.svc
        namespace: staging
    
      syncPolicy:
        automated:
          prune: false
          selfHeal: false

    In the application spec, we have specified the repository, where our manifest files are stored and also the path of the files in the repository. 

    We want to deploy our app in the same k8s cluster where ArgoCD is running so we have specified the local k8s service URL in the destination. We want the resources to be deployed in the staging namespace, so we have set it accordingly.

    In the sync policy, we have enabled automated sync. We have kept the project as default. 

    Adding the resources-finalizer.argocd.argoproj.io ensures that all the resources created for the application are deleted when the Application is deleted. This is fine for our demo setup but might not always be desirable in real-life scenarios.

    Our git repos are public so we don’t need to create secrets for git repo credentials.

    We are deploying in the same cluster where Argo CD itself is running. As this is a demo setup, we can use the admin user created by Argo CD, so we don’t need to create secrets for cluster credentials either.

    Now let’s go ahead and create the application and see the magic happen.

    kubectl apply -f message-app-staging.yaml

    As soon as the application is created, we can see it on the GUI. 

    By clicking on the application, we can see all the Kubernetes objects created for it.

    It also shows the objects which are indirectly created by the objects we create. In the above image, we can see the replica set and endpoint object which are created as a result of creating the deployment and service respectively.

    We can also click on the individual objects and see their configuration. For pods, we can see events and logs as well.

    As our app is deployed now, we can grab the public IP of the message-app service and access it in the browser.

    We can see that our app is deployed and accessible.

    Updating the app

    For updating our application, all we need to do is commit our changes to the GitHub repository. We know the message-app just displays the message we pass to it via a ConfigMap, so let’s update the message and push it to the repository.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: message-configmap
      labels:
        app: message-app
    data:
      MESSAGE: "This too shall pass" #Put the message you want to display here.

    Once the commit is done, Argo CD will start to sync again.

    Once the sync is done, we will restart our message app pod, so that it picks up the latest values in the config map. Then we need to refresh the browser to see updated values.

    As we discussed earlier, for making any changes to the environment, we just need to update the repo which is being used as the source for the environment and then the changes will get pulled in the environment. 

    We can follow exactly the same approach to deploy the application to the production environment. We just need to create a new Application object and set the manifest path and deployment namespace accordingly.

    Conclusion: 

    It’s still early days for GitOps, but it has already been successfully implemented at scale by many organizations. As GitOps tools mature along with the ever-growing adoption of Kubernetes, I think many organizations will consider adopting GitOps soon. GitOps is not limited to Kubernetes, but the completely declarative nature of Kubernetes makes GitOps simpler to achieve. Argo CD is a deployment tool tailored for Kubernetes that lets us do deployments in a Kubernetes-native way while following the principles of GitOps. I hope this blog helped you understand the how, what, and why of GitOps and gave you some insights into Argo CD.

  • Helm 3: A More Secured and Simpler Kubernetes Package Manager

    What is Helm?

    Helm helps you manage Kubernetes applications. Helm Charts help developers and operators easily define, install, and upgrade even the most complex Kubernetes application.

    Below are the three big concepts regarding Helm.

    1. Chart – A chart is a Helm package. It contains all resource definitions necessary to run an application, tool or service inside the Kubernetes cluster.

    2. Repository – A repository is a place where charts can be collected and shared.

    3. Release – A release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times in the same cluster, and each time it is installed, a new release is created.

    Registry – Helm Registry stores Helm charts in a hierarchy storage structure and provides a function to orchestrate charts from the existing charts. To deploy and configure registry, refer to this.

    Why Helm?

    1. It helps find and use popular software packaged as Kubernetes charts
    2. Shares your own applications as Kubernetes charts
    3. Manages releases of Helm packages
    4. Creates reproducible builds of your Kubernetes applications

    Changes since Helm2

    Helm3 includes following major changes:

    1. Client-only architecture

    Helm 2 is a client-server architecture with a client, called Helm, and a server, called Tiller. The client interacts with Tiller and the chart repository. Tiller interacts with the Kubernetes API server. It renders Helm template files into Kubernetes manifest files, which it then uses for operations on the Kubernetes cluster through the Kubernetes API.

    Helm 3 has a client-only architecture with the client still called Helm. It operates similarly to the Helm 2 client, but the client interacts directly with the Kubernetes API server. The in-cluster server Tiller is removed in Helm 3.

     

    2. No need to initialize Helm

    Initializing Helm is obsolete in version 3. i.e. Helm init was removed and you don’t need to install Tiller in the cluster and set up a Helm state before using Helm. A Helm state is created automatically, whenever required.

    3. Chart dependency updated

    In Helm 2, chart dependencies are declared in requirements.yaml, as shown in the following example:

    dependencies:
    - name: mysql
      version: "1.3.2"
      repository: "https://example.com/charts/mysql"

    Chart dependencies are consolidated in Helm 3, hence moving the dependency definitions to Chart.yaml.
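    In Helm 3, the same dependency is declared directly in Chart.yaml. A sketch (the chart name below is a placeholder):

```yaml
apiVersion: v2
name: myapp          # placeholder chart name
version: 0.1.0
dependencies:
- name: mysql
  version: "1.3.2"
  repository: "https://example.com/charts/mysql"
```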

    4. Chart value validation

    In Helm 3, values passed to a chart during any Helm commands can be validated against a JSON schema. This validation is beneficial to help chart consumers avoid setting incorrect values and help improve chart usability. To enable consumers to avoid setting incorrect values, add a schema file named values.schema.json in the chart folder.

    Following commands call the validation:

    • helm install
    • helm upgrade
    • helm template
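    As a sketch, a values.schema.json for the mysql chart built later in this post might constrain the image and service values like this (the exact constraints are up to the chart author):

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "image": {
      "type": "object",
      "properties": {
        "repository": { "type": "string" },
        "tag": { "type": "string" }
      },
      "required": ["repository"]
    },
    "service": {
      "type": "object",
      "properties": {
        "port": { "type": "integer", "minimum": 1, "maximum": 65535 }
      }
    }
  }
}
```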

    5. Helm test framework updates

    Helm 3 includes following updates to the test framework (helm test):

    • Users can define tests as job resources
    • The test-failure hook was removed
    • The test-success hook was renamed to test, but the alias remains for test-success
    • You can dump logs from test pods with the --logs flag

    Helm 3 is more than just removing Tiller; it has a lot of new capabilities. Still, there is little or no difference from a CLI or usage point of view in Helm 3 when compared with Helm 2.

    Prerequisites

    1. A running Kubernetes cluster.
    2. The Kubernetes cluster API endpoint should be reachable from the machine you are running Helm commands.

    Installing Helm 

    1. Download binary from here.
    2. Unpack it (tar -zxvf helm-v3.0.0-linux-amd64.tgz)
    3. Find the Helm binary and move it to its desired destination (mv linux-amd64/helm /usr/local/bin/helm)

    From there, you should be able to run the client command: ‘helm help’. 

    Note: We will be using Helm version 3.0.0

    Deploy a sample Helm Chart

    Use below command to create new chart named mysql in a new directory

    $ helm create mysql

    After running above command, Helm creates a directory with the following layout:

    velotiotech:~/work/mysql$ tree
    .
    ├── charts
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── NOTES.txt
    │   ├── serviceaccount.yaml
    │   ├── service.yaml
    │   └── tests
    │       └── test-connection.yaml
    └── values.yaml
    
    3 directories, 9 files

    It creates a Chart.yaml file containing global variables for the chart such as version and description.

    velotiotech:~/work/mysql$ cat Chart.yaml 
    apiVersion: v2
    name: mysql
    description: A Helm chart for Kubernetes
    
    # A chart can be either an 'application' or a 'library' chart.
    #
    # Application charts are a collection of templates that can be packaged into versioned archives
    # to be deployed.
    #
    # Library charts provide useful utilities or functions for the chart developer. They're included as
    # a dependency of application charts to inject those utilities and functions into the rendering
    # pipeline. Library charts do not define any templates and therefore cannot be deployed.
    type: application
    
    # This is the chart version. This version number should be incremented each time you make changes
    # to the chart and its templates, including the app version.
    version: 0.1.0
    
    # This is the version number of the application being deployed. This version number should be
    # incremented each time you make changes to the application.
    appVersion: 1.16.0

    Then comes templates directory. There you put all the *.yaml files for Kubernetes. Helm uses Go template markup language to customize *.yaml files. Helm creates three default file types: deployment, service, ingress. All the files in this directory are skeletons that are filled with the variables from the values.yaml when you deploy your Helm chart. File _helpers.tpl contains your custom helper functions for variable calculation.
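    For reference, the mysql.name and mysql.fullname helpers used by the templates below are defined in _helpers.tpl. A simplified sketch of what helm create generates (the real file handles overrides and edge cases more carefully):

```
{{/* Simplified from the _helpers.tpl that `helm create` generates. */}}
{{- define "mysql.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{- define "mysql.fullname" -}}
{{- printf "%s-%s" .Release.Name (include "mysql.name" .) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```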

    By default, Helm creates an nginx deployment. We will customize it to create a Helm chart that deploys MySQL on a Kubernetes cluster. Update deployment.yaml in the templates directory as follows.

    velotiotech:~/work/mysql$ cat templates/deployment.yaml 
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ include "mysql.fullname" . }}
    spec:
      selector:
        matchLabels:
          app: {{ include "mysql.name" . }}
      template:
        metadata:
          labels:
            app: {{ include "mysql.name" . }}
        spec:
          containers:
          - name: {{ .Chart.Name }}
            image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
            imagePullPolicy: {{ .Values.image.pullPolicy }}
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: {{ .Values.mysql_root_password }}
            ports:
            - containerPort: {{ .Values.service.port }}
              name: mysql
            volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
          volumes:
          - name: mysql-persistent-storage
            persistentVolumeClaim:
              claimName: {{ .Values.persistentVolumeClaim }}

    Also, let’s create the PVC used in the deployment by adding the below file to the templates directory.

    velotiotech:~/work/mysql$ cat templates/persistentVolumeClaim.yml 
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: {{ .Values.persistentVolumeClaim }}
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

    Helm runs each file in the templates directory through the Go template rendering engine. Let’s create service.yaml for connecting to the MySQL instance.

    velotiotech:~/work/mysql$ cat templates/service.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: {{ include "mysql.fullname" . }}
    spec:
      ports:
      - port: {{ .Values.service.port }}
      selector:
        app: {{ include "mysql.name" . }}
      clusterIP: None

    Update values.yaml to populate the above chart’s templates.

    velotiotech:~/work/mysql$ cat values.yaml 
    # Default values for mysql.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
    
    image:
      repository: mysql
      tag: 5.6
      pullPolicy: IfNotPresent
    
    nameOverride: ""
    fullnameOverride: ""
    
    serviceAccount:
      # Specifies whether a service account should be created
      create: false
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name:
    
    mysql_root_password: password 
    
    service:
      port: 3306
    
    persistentVolumeClaim: mysql-data-disk
    
    resources: {}
      # We usually recommend not to specify default resources and to leave this as a conscious
      # choice for the user. This also increases chances charts run on environments with little
      # resources, such as Minikube. If you do want to specify resources, uncomment the following
      # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
      # limits:
      #   cpu: 100m
      #   memory: 128Mi
      # requests:
      #   cpu: 100m
      #   memory: 128Mi

    After adding the above files, the directory structure will look like this:

    velotiotech:~/work/mysql$ tree
    .
    ├── charts
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── NOTES.txt
    │   ├── persistentVolumeClaim.yml
    │   ├── serviceaccount.yaml
    │   ├── service.yaml
    │   └── tests
    │       └── test-connection.yaml
    └── values.yaml
    
    3 directories, 9 files

    To render the chart templates locally and display the output, so you can verify everything is correct, run:

    $ helm template mysql

    Execute the following helm install command to deploy our MySQL chart to the Kubernetes cluster.

    $ helm install mysql-release ./mysql

    velotiotech:~/work$ helm install mysql-release ./mysql
    NAME: mysql-release
    LAST DEPLOYED: Mon Nov 25 14:48:38 2019
    NAMESPACE: mysql-chart
    STATUS: deployed
    REVISION: 1
    NOTES:
    1. Use below command to connect to mysql:
       kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql-release -ppassword
    
    2. Try creating database in mysql using command:
       create database test;

    Now the chart is installed. Note that installing a Helm chart creates a new release object. The release above is named mysql-release.

    To keep track of a release’s state, or to re-read configuration information, you can use helm status:

    $ helm status mysql-release

    Additionally, to package the chart for distribution, use the command below. It takes the path to the chart directory (which must contain a Chart.yaml file) and archives that directory:

    $ helm package <chart_directory>

    This command creates an archive like mysql-0.1.0.tgz, with which you can share your chart with others. For instance, you can upload this file to the Helm repository.

    You can also delete the sample release using the delete command. For example:

    $ helm delete mysql-release

    Upgrade a release

    Helm provides a way to perform an install or an upgrade as a single command: helm upgrade with the --install flag. Helm checks whether the release is already installed; if not, it runs an install, and if it is, the existing release is upgraded.

    $ helm upgrade --install <release name> --values <values file> <chart directory>

  • How to Avoid Screwing Up CI/CD: Best Practices for DevOps Team

    Basic Fundamentals (one-line definitions):

    CI/CD stands for continuous integration, continuous delivery, and/or continuous deployment.

    Continuous Integration: 

    Continuous integration is a practice where a developer’s changes are merged back to the main branch as soon as possible, to avoid facing integration challenges.

    Continuous Delivery:

    Continuous delivery is the ability to get all types of changes deployed to production or delivered to the customer in a safe, quick, and sustainable way.

    An oversimplified CI/CD pipeline

    Why CI/CD?

    • Avoid integration hell

    In most modern application development scenarios, multiple developers work on different features simultaneously. If all of their source code is merged on the same day, the result can be a tedious, manual process of resolving conflicts between branches, along with a lot of rework.

    Continuous integration (CI) is the process of merging code changes frequently (daily, or even multiple times a day) to a shared branch (aka the master or trunk branch). The CI process makes it easier and quicker to identify bugs, saving a lot of developer time and effort.

    • Faster time to market

    Less time is spent on solving integration problems and reworking, allowing faster time to market for products.

    • Have a better and more reliable code

    The changes are small and thus easier to test. Each change goes through a rigorous cycle of unit tests, integration/regression tests, and performance tests before being pushed to prod, ensuring better-quality code.

    • Lower costs 

    As we have a faster time to market and fewer integration problems,  a lot of developer time and development cycles are saved, leading to a lower cost of development.

    Enough theory, let’s dive into “How do I get started?”

    Basic Overview of CI/CD

    Decide on your branching strategy

    A good branching strategy should have the following characteristics:

    • Defines a clear development process from initial commit to production deployment
    • Enables parallel development
    • Optimizes developer productivity
    • Enables faster time to market for products and services
    • Facilitates integration with all DevOps practices and tools, such as different version control systems

    Types of branching strategies (please refer to the references for more details):

    • Git flow – Ideal when handling multiple versions of the production code and for enterprise customers who have to adhere to release plans and workflows 
    • Trunk-based development – Ideal for simpler workflows and if automated testing is available, leading to a faster development time
    • Other branching strategies that you can read about are Github flow, Gitlab flow, and Forking flow.

    Build or compile your code 

    The next step is to build/compile your code, and if it is interpreted code, go ahead and package it.

    Build best practices :

    • Build Once – Build a single artifact and promote it through all environments; building a separate artifact per environment is inadvisable.
    • Exact versions of third-party dependencies should be used.
    • Libraries used for debugging, etc., should be removed from the product package.
    • Have a feedback loop so that the team is made aware of the status of the build step.
    • Make sure your builds are versioned correctly using semver 2.0 (https://semver.org/).
    • Commit early, commit often.

    Select tool for stitching the pipeline together

    • You can choose from GitHub Actions, Jenkins, CircleCI, GitLab, etc.
    • Tool selection will not affect the quality of your CI/CD pipeline, but it does affect maintenance effort: self-hosted services like Jenkins deployed on-prem require more upkeep than managed CI/CD services.

    Tools and strategy for SAST

    Instead of just DevOps, we should think of DevSecOps. To make the code more secure and reliable, we can introduce a step for SAST (static application security testing).

    SAST, or static analysis, is a testing procedure that analyzes source code to find security vulnerabilities. SAST scans the application code before it is compiled. It’s also known as white-box testing, and it helps shift towards a security-first mindset, as the code is scanned right at the start of the SDLC.

    Problems SAST solves:

    • SAST tools give developers real-time feedback as they code, helping them fix issues before they pass the code to the next phase of the SDLC. 
    • This prevents security-related issues from being considered an afterthought. 
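    If you use GitLab CI, for example, a SAST scan can be enabled by simply including GitLab’s built-in template; the jobs it defines attach to the test stage (the stage list below is illustrative):

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test   # SAST jobs run in the test stage by default
```

    Other CI tools offer comparable integrations with scanners such as SonarQube or Semgrep.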

    Deployment strategies

    How will you deploy your code with zero downtime so that the customer has the best experience? Try and implement one of the strategies below automatically via CI/CD. This will help in keeping the blast radius to the minimum in case something goes wrong. 

    • Ramped (also known as rolling update or incremental): The new version is slowly rolled out to replace the older version of the product.
    • Blue/Green: The new version is released alongside the older version, then the traffic is switched to the newer version.
    • Canary: The new version is released to a selected group of users before doing a full rollout. This can also be achieved with feature flagging. For more information, read about tools like LaunchDarkly (https://launchdarkly.com/) and Unleash (https://github.com/Unleash/unleash).
    • A/B testing: The new version is released to a subset of users under specific conditions.
    • Shadow: The new version receives real-world traffic alongside the older version and doesn’t impact the response.
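    As a concrete example of the ramped strategy, Kubernetes Deployments roll out incrementally by default, and the pace can be tuned (the name and numbers below are illustrative):

```yaml
# Illustrative Deployment fragment; selector and Pod template omitted for brevity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod above the desired count during rollout
      maxUnavailable: 0  # never drop below the desired count, keeping zero downtime
```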

    Config and Secret Management

    According to the 12-factor app methodology, application config should be exposed to the application via environment variables. However, it does not restrict where those configurations are stored and sourced from.

    A few things to keep in mind while storing configs.

    • Versioning of configs always helps, but storing secrets in VCS is strongly discouraged.
    • For an enterprise, it is beneficial to use a cloud-agnostic solution.

    Solution:

    • Store your configuration secrets outside of the version control system.
    • You can use AWS Secrets Manager, Vault, or even S3 (e.g., S3 with KMS encryption) for storing your configs. Other services are available as well, so choose the one that suits your use case best.

    Automate versioning and release notes generation

    All releases should be tagged in the version control system. Versions can be bumped automatically by scanning the git commit history for keywords.

    There are many modules available for release notes generation. Try and automate these as well as a part of your CI/CD process. If this is done, you can successfully eliminate human intervention from the release process.
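    As a minimal sketch (not tied to any particular CI tool), the next version can be derived from the latest commit message, assuming the convention that messages starting with "major:" or "minor:" select those bumps and everything else is a patch:

```shell
#!/bin/sh
# Decide the next semver from the current version and a commit message.
# Convention (assumed): "major:"/"minor:" prefixes select the bump type.
next_version() {
  version=$1
  message=$2
  major=${version%%.*}
  rest=${version#*.}
  minor=${rest%%.*}
  patch=${rest#*.}
  case $message in
    major:*) major=$((major + 1)); minor=0; patch=0 ;;
    minor:*) minor=$((minor + 1)); patch=0 ;;
    *)       patch=$((patch + 1)) ;;
  esac
  echo "$major.$minor.$patch"
}

next_version "1.4.2" "minor: add metrics endpoint"   # prints 1.5.0
```

    In a real pipeline, the message would come from something like `git log -1 --pretty=%s`, and the resulting version would be pushed back as a git tag.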

    Example from GitHub actions workflow :

    - name: Automated Version Bump
      id: version-bump
      uses: 'phips28/gh-action-bump-version@v9.0.16'
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      with:
        commit-message: 'CI: Bump version to v{{version}}'

    Have a rollback strategy

    If a regression, performance, or smoke test fails after deployment to an environment, feedback should be raised and the version rolled back automatically as part of the CI/CD process. This keeps the environment up and reduces MTTR (mean time to recovery) and MTTD (mean time to detection) in case of a production outage caused by a code deployment.

    GitOps tools like argocd and flux make it easy to do things like this, but even if you are not using any of the GitOps tools, this can be easily managed using scripts or whatever tool you are using for deployment.
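    Even without a GitOps tool, the pattern is simple to script. A sketch, where the deploy, smoke-test, and rollback commands are placeholders you would swap for your real tooling (e.g., kubectl or helm invocations):

```shell
#!/bin/sh
# Run a deploy, verify it with a smoke test, and roll back on failure.
deploy_with_rollback() {
  deploy_cmd=$1
  smoke_cmd=$2
  rollback_cmd=$3

  if ! $deploy_cmd; then
    echo "deploy failed, nothing to roll back" >&2
    return 1
  fi
  if ! $smoke_cmd; then
    echo "smoke test failed, rolling back" >&2
    $rollback_cmd
    return 1
  fi
  echo "deploy verified"
}

# Example with stub commands (replace with your real deploy tooling):
deploy_with_rollback "true" "true" "echo rollback-not-needed"   # prints "deploy verified"
```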

    Include db changes as a part of your CI/CD

    Databases are often created manually and frequently evolve through manual changes, informal processes, and even testing in production. Manual changes often lack documentation and are harder to review, test, and coordinate with software releases. This makes the system more fragile with a higher risk of failure.

    The correct way to do this is to include the database in source control and CI/CD pipeline. This lets the team document each change, follow the code review process, test it thoroughly before release, make rollbacks easier, and coordinate with software releases. 

    For a more enterprise or structured solution, we could use a tool such as Liquibase, Alembic, or Flyway.

    How it should ideally be done:

    • We can have a migration-based strategy where each DB change adds a migration script that is executed as a part of CI/CD.
    • Keep in mind that the CI/CD process should be the same across all environments. Also, the amount of data on prod and other environments may vary drastically, so use batching and limits so you don’t exhaust the database server’s memory.
    • As far as possible, DB migrations should be backward compatible, which makes rollbacks easier. This is why some companies only allow additive changes in DB migration scripts.
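    A minimal sketch of the migration-based approach is below; the file layout and the commented-out psql call are assumptions, and dedicated tools like Flyway or Liquibase do this (plus checksums and locking) for you:

```shell
#!/bin/sh
# Apply numbered migration files in order, recording what has already run.
# Layout assumed: migrations/001_create_users.sql, migrations/002_add_index.sql, ...
run_migrations() {
  dir=$1
  applied_log=$2            # file listing already-applied migration names
  touch "$applied_log"
  for f in "$dir"/*.sql; do
    [ -e "$f" ] || continue             # no migrations present
    name=$(basename "$f")
    if grep -qx "$name" "$applied_log"; then
      continue                          # already applied, skip
    fi
    echo "applying $name"
    # psql "$DATABASE_URL" -f "$f"      # real execution step (client assumed)
    echo "$name" >> "$applied_log"
  done
}
```

    Running this twice in a pipeline is safe: the second run finds every migration in the log and applies nothing, which is exactly the idempotence you want from a CI/CD step.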

    Real-world scenarios

    • Gated approach 

    It is not always possible to have a fully automated CI/CD pipeline because the team may have just started the development of a product and might not have automated testing yet.

    So, in cases like these, we add manual gates that can be approved by the responsible teams. For example, we deploy to the development environment, wait for testers to test the code and approve the manual gate, and then the pipeline can proceed.

    Most tools support this kind of manual approval step. Make sure you are not holding any build resources while waiting at the gate; otherwise, you will end up blocking agents needed by other pipelines.

    Example:

    https://www.jenkins.io/doc/pipeline/steps/pipeline-input-step/#input-wait-for-interactive-input

    def LABEL_ID = "yourappname-${UUID.randomUUID().toString()}"
    def BRANCH_NAME = "<Your branch name>"
    def GIT_URL = "<Your git url>"
    // Start Agent
    node(LABEL_ID) {
        stage('Checkout') {
            doCheckout(BRANCH_NAME, GIT_URL)
        }
        stage('Build') {
            ...
        }
        stage('Tests') {
            ...
        }    
    }
    // Kill Agent
    // Input Step
    timeout(time: 15, unit: "MINUTES") {
        input message: 'Do you want to approve the deploy in production?', ok: 'Yes'
    }
    // Start Agent Again
    node(LABEL_ID) {
        doCheckout(BRANCH_NAME, GIT_URL) 
        stage('Deploy') {
            ...
        }
    }
    def doCheckout(branchName, gitUrl){
        checkout([$class: 'GitSCM',
            branches: [[name: branchName]],
            doGenerateSubmoduleConfigurations: false,
            extensions:[[$class: 'CloneOption', noTags: true, reference: '', shallow: true]],
            userRemoteConfigs: [[credentialsId: '<Your credentials id>', url: gitUrl]]])
    }

    Observability of releases 

    Whenever we are debugging the root cause of issues in production, we might need the information below. As the system gets more complex, with multiple upstream and downstream dependencies, it becomes imperative to have this information in one place for efficient debugging and support by the operations team.

    • When was the last deployment? What version was deployed?
    • The deployment history: which version was deployed when, along with the code changes that went in.

    Below are the two approaches organizations generally follow to achieve this:

    • Have a release workflow that is tracked using a Change request or Service request on Jira or any other tracking tool.
    • For GitOps applications using tools like Argo CD and Flux, all this information is available as part of the version control system and can be derived from there.

    DORA metrics 

    The DevOps maturity of a team is mainly measured by the four metrics defined below, and CI/CD helps improve all of them. Teams and organizations should aim for Elite status on the DORA metrics.

    • Deployment Frequency: How often an org successfully releases to production
    • Lead Time for Changes: The amount of time a commit takes to get into prod
    • Change Failure Rate: The percentage of deployments causing a failure in prod
    • Time to Restore Service: How long an org takes to recover from a failure in prod

    Conclusion 

    CI/CD forms an integral part of DevOps and SRE practices, and if done correctly, it can have a huge impact on a team’s and organization’s productivity.

    So, try and implement the above principles and get one step closer to having a highly productive team and a better product.

  • Create CI/CD Pipeline in GitLab in under 10 mins

    Why Choose GitLab Over Other CI Tools?

    With so many tools available in the market, like CircleCI, GitHub Actions, Travis CI, etc., what makes GitLab CI so special? The easiest way to decide if GitLab CI is right for you is to take a look at the following use case:

    GitLab knocks it out of the park when it comes to code collaboration and version control. Monitoring the entire code repository along with all branches becomes manageable, whereas with other popular tools like Jenkins, you typically only monitor the branches you have configured jobs for. If your development teams are spread across multiple locations globally, GitLab serves a good purpose. Regarding price, Jenkins is free, while you need a subscription to use all of GitLab’s features.

    In GitLab, every branch can contain the .gitlab-ci.yml file, which makes it easy to modify workflows. For example, if you want to run unit tests on branch A and functional tests on branch B, you can simply modify the YAML configuration, and the runner will take care of running the jobs for you. Here is a comprehensive list of pros and cons of GitLab to help you make a better decision.

    Intro

    GitLab is an open-source collaboration platform that provides powerful features beyond hosting a code repository. You can track issues, host packages and registries, maintain Wikis, set up continuous integration (CI) and continuous deployment (CD) pipelines, and more.

    In this tutorial, you will configure a pipeline with three stages: build, deploy, test. The pipeline will run for each commit pushed to the repository.

    GitLab and CI/CD

    As we are all aware, a fully fledged CI/CD pipeline primarily includes the following stages:

    • Build
    • Test
    • Deploy

    Here is a pictorial representation of how GitLab covers CI and CD:

    Source: gitlab.com

    Let’s take a look at an example of an automation testing pipeline. Here, CI empowers test automation and CD automates the release process to various environments. The below image perfectly demonstrates the entire flow.

    Source: xenonstack.com

    Let’s create the basic 3-stage pipeline

    Step 1: Create a project > Create a blank project

    Visit gitlab.com and create your account if you don’t have one already. Once done, click “New Project,” and on the following screen, click “Create Blank Project.” Name it My First Project, leave other settings to default for now, and click Create.
    Alternatively, if you already have your codebase in GitLab, proceed to Step 2.

    Step 2: Create a GitLab YAML

    To create a pipeline in GitLab, we need to define it in a YAML file. This YAML file should reside in the root directory of your project and must be named .gitlab-ci.yml. GitLab provides a set of predefined keywords that are used to define a pipeline.

    In order to design a basic pipeline, let’s understand the structure of a pipeline. If you are already familiar with the basic structure given below, you may want to jump below to the advanced pipeline outline for various environments.

    The hierarchy in GitLab is Pipeline > Stages > Jobs, as shown below. The source (SRC) is often a git commit or a cron schedule, which triggers the pipeline on a defined branch.

    Now, let’s understand the commonly used keywords to design a pipeline:

    1. stages: This is used to define stages in the pipeline.
    2. variables: Here you can define the environment variables that can be accessed in all the jobs.
    3. before_script: This is a list of commands to be executed before each job. For example: creating specific directories, logging, etc.
    4. artifacts: If your job creates any artifacts, you can mention the path to find them here.
    5. after_script: This is a list of commands to be executed after each job. For example: cleanup.
    6. tags: This is a tag/label to identify the runner or a GitLab agent to assign your jobs to. If the tags are not specified, the jobs run on shared runners.
    7. needs: If you want your jobs to be executed in a certain order or you want a particular job to be executed before the current job, then you can set this value to the specific job name.
    8. only/except: These keywords are used to control when the job should be added to the pipeline. Use ‘only’ to define when a job should be added, whereas ‘except’ is used to define when a job should not be added. Alternatively, the ‘rules’ keyword is also used to add/exclude jobs based on conditions.

    You can find more keywords here.
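    For instance, the newer rules keyword can express the same branch filters as only/except; the job name and branch below are illustrative:

```yaml
deploy-docs:
  stage: deploy
  script:
    - echo "publish docs"
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # add the job only for commits on main
      when: on_success
    - when: never                         # otherwise, leave it out of the pipeline
```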

    Let’s create a sample YAML file.

    stages:
        - build
        - deploy
        - test
    
    variables:
      RAILS_ENV: "test"
      NODE_ENV: "test"
      GIT_STRATEGY: "clone"
      CHROME_VERSION: "103"
      DOCKER_VERSION: "20.10.14"
    
    build-job:
      stage: build
      script:
        - echo "Check node version and build your binary or docker image."
        - node -v
        - bash buildScript.sh
    
    deploy-code:
      stage: deploy
      needs: ["build-job"]
      script:
        - echo "Deploy your code "
        - cd to/your/desired/folder
        - bash deployScript.sh
    
    test-code:
      stage: test
      needs: ["deploy-code"]
      script:
        - echo "Run your tests here."
        - cd to/your/desired/folder
        - npm run test

    As you can see, if you have your scripts in a bash file, you can run them from here providing the correct path. 

    Once your YAML is ready, commit the file. 

    Step 3: Check Pipeline Status

    Navigate to CI/CD > Pipelines from the left navigation bar. You can check the status of the pipeline on this page.

    Here, you can check the commit ID, branch, the user who triggered the pipeline, stages, and their status.

    If you click on the status, you will get a detailed view of pipeline execution.

    If you click on a job under any stage, you can check console logs in detail.

    If you have any artifacts created in your pipeline jobs, you can find them by clicking on the 3 dots for the pipeline instance.

    Advanced Pipeline Outline

    For an advanced pipeline spanning various environments, you can refer to the YAML below. Simply remove the echo statements and replace them with your own commands.

    image: your-repo:tag

    variables:
      DOCKER_DRIVER: overlay2
      DOCKER_TLS_CERTDIR: ""
      DOCKER_HOST: tcp://localhost:2375
      SAST_DISABLE_DIND: "true"
      DS_DISABLE_DIND: "false"
      GOCACHE: "$CI_PROJECT_DIR/.cache"

    cache: # cache libraries etc. between pipeline runs to reduce pipeline duration
      key: ${CI_PROJECT_NAME}
      paths:
        - cache-path/

    #include: # You can include other projects here.
    #  - project: "some/other/important/project"
    #    ref: main
    #    file: "src/project.yml"

    default:
      tags:
        - your-common-instance-tag

    stages:
      - build
      - test
      - deploy_dev
      - dev_tests
      - deploy_qa
      - qa_tests
      - rollback_qa
      - prod_gate
      - deploy_prod
      - rollback_prod
      - cleanup

    build:
      stage: build
      services:
        - docker:19.03.0-dind
      before_script:
        - echo "Run your pre-build commands here"
        - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
      script:
        - docker build -t $CI_REGISTRY/repo:$DOCKER_IMAGE_TAG --build-arg GITLAB_USER=$GITLAB_USER --build-arg GITLAB_PASSWORD=$GITLAB_PASSWORD -f ./Dockerfile .
        - docker push $CI_REGISTRY/repo:$DOCKER_IMAGE_TAG
        - echo "Run your builds here"

    unit_test:
      stage: test
      image: your-repo:tag
      script:
        - echo "Run your unit tests here"

    linting:
      stage: test
      image: your-repo:tag
      script:
        - echo "Run your linting tests here"

    sast:
      stage: test
      image: your-repo:tag
      script:
        - echo "Run your static application security testing here"

    deploy_dev:
      stage: deploy_dev
      image: your-repo:tag
      before_script:
        - source file.sh
        - export VARIABLE="$VALUE"
        - echo "deploy on dev"
      script:
        - echo "deploy on dev"
      after_script:
        # if deployment fails, run rollback on dev
        - echo "Things to do after deployment is run"
      only:
        - master # Depends on your branching strategy

    integration_test_dev:
      stage: dev_tests
      image: your-repo:tag
      script:
        - echo "run tests on dev"
      only:
        - master
      allow_failure: true # In case failures are allowed

    deploy_qa:
      stage: deploy_qa
      image: your-repo:tag
      before_script:
        - source file.sh
        - export VARIABLE="$VALUE"
        - echo "deploy on qa"
      script:
        - echo "deploy on qa"
      after_script:
        # if deployment fails, run rollback on qa
        - echo "Things to do after deployment script is complete"
      only:
        - master
      needs: ["integration_test_dev", "deploy_dev"]
      allow_failure: false

    integration_test_qa:
      stage: qa_tests
      image: your-repo:tag
      script:
        - echo "run tests on qa"
      only:
        - master
      allow_failure: true # in case you want to allow failures

    rollback_qa:
      stage: rollback_qa
      image: your-repo:tag
      before_script:
        - echo "Things to roll back after qa integration failure"
      script:
        - echo "Steps to roll back"
      after_script:
        - echo "Things to do after rollback"
      only:
        - master
      needs: ["deploy_qa"]
      when: on_failure # This will run in case the qa deploy job fails
      allow_failure: false

    prod_gate: # manual gate for prod approval
      stage: prod_gate
      before_script:
        - echo "your commands here"
      only:
        - master
      needs:
        - deploy_qa
      when: manual

    deploy_prod:
      stage: deploy_prod
      image: your-repo:tag
      tags:
        - some-tag
      before_script:
        - source file.sh
        - echo "your commands here"
      script:
        - echo "your commands here"
      after_script:
        # if deployment fails
        - echo "your commands here"
      only:
        - master
      needs: ["deploy_qa"]
      allow_failure: false

    rollback_prod: # This job should run only when prod deployment fails
      stage: rollback_prod
      image: your-repo:tag
      before_script:
        - export VARIABLE="$VALUE"
        - echo "your commands here"
      script:
        - echo "your commands here"
      only:
        - master
      needs: ["deploy_prod"]
      allow_failure: false
      when: on_failure

    cleanup:
      stage: cleanup
      script:
        - echo "run cleanup"
        - rm -rf .cache/
      when: always

    Conclusion

    If you have worked with Jenkins, you know the pain points of working with Groovy code. In comparison, GitLab CI makes it easy to design, understand, and maintain pipeline code.

    Here are some pros and cons of using GitLab CI that will help you decide if this is the right tool for you!