Category: Industry

  • How to deploy GitHub Actions Self-Hosted Runners on Kubernetes

    GitHub Actions jobs run on GitHub-hosted cloud runners by default; however, sometimes we want to run jobs in our own customized/private environment where we have full control. That is where a self-hosted runner comes in. 

    If you want a basic understanding of running self-hosted runners on a Kubernetes cluster, this blog is for you. 

    We’ll be focusing on running GitHub Actions on a self-hosted runner on Kubernetes. 

    An example use case would be to create an automation in GitHub Actions to execute MySQL queries on MySQL Database running in a private network (i.e., MySQL DB, which is not accessible publicly).

    A self-hosted runner normally requires provisioning and configuring a virtual machine instance; here, we run it on Kubernetes instead. The actions-runner-controller makes it possible to run self-hosted runners on a Kubernetes cluster.

    This blog aims to try out self-hosted runners on Kubernetes and covers:

    1. Deploying a MySQL database on minikube, accessible only within the Kubernetes cluster.
    2. Deploying self-hosted action runners on minikube.
    3. Running a GitHub Action on minikube to execute MySQL queries on the MySQL database.

    Steps for completing this tutorial:

    Create a GitHub repository

    1. Create a private repository on GitHub. I am creating it with the name velotio/action-runner-poc.

    Setup a Kubernetes cluster using minikube

    1. Install Docker.
    2. Install Minikube.
    3. Install Helm.
    4. Install kubectl.

    Install cert-manager on a Kubernetes cluster

    • By default, actions-runner-controller uses cert-manager for certificate management of its admission webhook, so we have to make sure cert-manager is installed on Kubernetes before we install actions-runner-controller. 
    • Run the below helm commands to install cert-manager on minikube.
    • Verify the installation using “kubectl --namespace cert-manager get all”. If everything is okay, you will see the cert-manager resources listed.
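    A typical cert-manager installation uses the official Jetstack Helm chart; the sketch below is one common invocation (chart version pinning is omitted here, and the installCRDs flag may be named crds.enabled in newer chart versions):

```shell
# Add the official cert-manager Helm repository and install the chart
# into a dedicated cert-manager namespace, including its CRDs.
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
```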

    Setting Up Authentication for Self-Hosted Runners

    There are two ways for actions-runner-controller to authenticate with the GitHub API (only one can be configured at a time):

    1. Using a GitHub App (not supported for enterprise-level runners due to lack of support from GitHub)
    2. Using a PAT (personal access token)

    To keep this blog simple, we are going with PAT.

    To authenticate actions-runner-controller with the GitHub API, we can use a PAT, which actions-runner-controller uses to register self-hosted runners.

    • Go to account > Settings > Developer settings > Personal access tokens. Click on “Generate new token”. Under scopes, select “Full control of private repositories”.
    • Click on the “Generate token” button.
    • Copy the generated token and run the below commands to create a Kubernetes secret, which will be used by the actions-runner-controller deployment.
    export GITHUB_TOKEN=XXXxxxXXXxxxxXYAVNa 

    kubectl create ns actions-runner-system

    Create secret

    kubectl create secret generic controller-manager -n actions-runner-system \
      --from-literal=github_token=${GITHUB_TOKEN}

    Install action runner controller on the Kubernetes cluster

    • Run the below helm commands:
    helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller
    helm repo update
    helm upgrade --install --namespace actions-runner-system \
      --create-namespace --wait actions-runner-controller \
      actions-runner-controller/actions-runner-controller \
      --set syncPeriod=1m

    • Verify that the actions-runner-controller installed properly using the below command:
    kubectl --namespace actions-runner-system get all

     

    Create a Repository Runner

    • Create a RunnerDeployment Kubernetes object, which will create self-hosted runners named k8s-action-runner for the GitHub repository velotio/action-runner-poc.
    • Update the repository name from “velotio/action-runner-poc” to “<Your-repo-name>”.
    • To create the RunnerDeployment object, create the file runner.yaml as follows:
    apiVersion: actions.summerwind.dev/v1alpha1
    kind: RunnerDeployment
    metadata:
     name: k8s-action-runner
     namespace: actions-runner-system
    spec:
     replicas: 2
     template:
       spec:
         repository: velotio/action-runner-poc

    • To create, run this command:
    kubectl create -f runner.yaml

    Check that the pod is running using the below command:

    kubectl get pod -n actions-runner-system | grep -i "k8s-action-runner"

    • If everything goes well, you should see two action runners on Kubernetes, and the same runners registered on GitHub. Check under Settings > Actions > Runners of your repository.
    • Check the pods with kubectl get po -n actions-runner-system.

    Install a MySQL Database on the Kubernetes cluster

    • Create PV and PVC for MySQL Database. 
    • Create mysql-pv.yaml with the below content.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: mysql-pv-volume
     labels:
       type: local
    spec:
     capacity:
       storage: 2Gi
     accessModes:
       - ReadWriteOnce
     hostPath:
       path: "/mnt/data"
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
     name: mysql-pv-claim
    spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 2Gi

    • Create mysql namespace
    kubectl create ns mysql

    • Now apply mysql-pv.yaml to create PV and PVC 
    kubectl create -f mysql-pv.yaml -n mysql

    Create the file mysql-svc-deploy.yaml with the below content.

    Here, we have used MYSQL_ROOT_PASSWORD as “password”.

    apiVersion: v1
    kind: Service
    metadata:
     name: mysql
    spec:
     ports:
       - port: 3306
     selector:
       app: mysql
     clusterIP: None
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: mysql
    spec:
     selector:
       matchLabels:
         app: mysql
     strategy:
       type: Recreate
     template:
       metadata:
         labels:
           app: mysql
       spec:
         containers:
           - image: mysql:5.6
             name: mysql
             env:
                 # Use secret in real usage
               - name: MYSQL_ROOT_PASSWORD
                 value: password
             ports:
               - containerPort: 3306
                 name: mysql
             volumeMounts:
               - name: mysql-persistent-storage
                 mountPath: /var/lib/mysql
         volumes:
           - name: mysql-persistent-storage
             persistentVolumeClaim:
               claimName: mysql-pv-claim

    • Create the service and deployment
    kubectl create -f mysql-svc-deploy.yaml -n mysql

    • Verify that the MySQL database is running
    kubectl get po -n mysql
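    As an optional sanity check (assuming the mysql service in the mysql namespace and the root password set above), you can connect from a throwaway client pod:

```shell
# Launch a temporary MySQL client pod, run a test query against the service,
# and remove the pod automatically when the command exits.
kubectl run mysql-client --rm -it --restart=Never --image=mysql:5.6 -n mysql \
  -- mysql -h mysql -uroot -ppassword -e "SELECT 1;"
```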

    Create a GitHub repository secret to store MySQL password

    We will use the MySQL password in the GitHub Actions workflow file, and as a good practice, we should not keep it in plain text. So we will store the MySQL password in GitHub secrets and use this secret in our workflow file.

    • Create a secret in the GitHub repository, name it “MYSQL_PASS”, and in the value, enter “password”. 
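    If you prefer the command line over the web UI, the same secret can be created with the GitHub CLI (assuming gh is installed and authenticated; replace the repository name with your own):

```shell
# Store the MySQL root password as a repository secret named MYSQL_PASS.
gh secret set MYSQL_PASS --body "password" --repo velotio/action-runner-poc
```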

    Create a GitHub workflow file

    • GitHub workflows are written in YAML syntax. Each workflow lives in a separate YAML file stored in the .github/workflows/ directory. So, create a .github/workflows/ directory in your repository and add the file .github/workflows/mysql_workflow.yaml as follows.
    ---
    name: Example 1
    on:
     push:
       branches: [ main ]
    jobs:
     build:
       name: Build-job
       runs-on: self-hosted
       steps:
       - name: Checkout
         uses: actions/checkout@v2
     
       - name: MySQLQuery
         env:
           PASS: ${{ secrets.MYSQL_PASS }}
         run: |
           docker run -v ${GITHUB_WORKSPACE}:/var/lib/docker --rm mysql:5.6 sh -c "mysql -u root -p$PASS -hmysql.mysql.svc.cluster.local </var/lib/docker/test.sql"

    • If you look at the docker run command in the mysql_workflow.yaml file, it refers to a .sql file, i.e., test.sql. So, create a test.sql file in your repository as follows:
    use mysql;
    CREATE TABLE IF NOT EXISTS Persons (
       PersonID int,
       LastName varchar(255),
       FirstName varchar(255),
       Address varchar(255),
       City varchar(255)
    );
     
    SHOW TABLES;

    • In test.sql, we run MySQL queries such as creating a table.
    • Push the changes to your repository’s main branch.
    • If everything is fine, you will see the GitHub Action executing in a self-hosted runner pod. You can check it under the “Actions” tab of your repository.
    • You can check the workflow logs to see the output of the SHOW TABLES command used in the test.sql file and verify that the Persons table was created.


  • How to Setup HashiCorp Vault HA Cluster with Integrated Storage (Raft)

    As businesses move their data to the public cloud, one of the most pressing issues is how to keep it safe from unauthorized access.

    Using a tool like HashiCorp Vault gives you greater control over your sensitive credentials and fulfills cloud security regulations.

    In this blog, we’ll walk you through HashiCorp Vault High Availability Setup.

    HashiCorp Vault

    HashiCorp Vault is an open-source tool that provides a secure, reliable way to store and distribute sensitive information like API keys, access tokens, passwords, etc. Vault provides high-level policy management, secret leasing, audit logging, and automatic revocation to protect this information using a UI, CLI, or HTTP API.

    High Availability

    Vault can run in a High Availability mode to protect against outages by running multiple Vault servers. When running in HA mode, Vault servers have two additional states, i.e., active and standby. Within a Vault cluster, only a single instance will be active, handling all requests, and all standby instances redirect requests to the active instance.

    Integrated Storage Raft

    The Integrated Storage backend is used to maintain Vault’s data. Unlike other storage backends, Integrated Storage does not operate from a single source of data. Instead, all the nodes in a Vault cluster will have a replicated copy of Vault’s data. Data gets replicated across all the nodes via the Raft Consensus Algorithm.

    Raft is officially supported by Hashicorp.

    Architecture

    Prerequisites

    This setup requires Vault, sudo access on the machines, and the below configuration to create the cluster.

    • Install Vault v1.6.3+ent or later on all nodes in the Vault cluster 

    In this example, we have 3 CentOS VMs provisioned using VMware. 

    Setup

    1. Verify the Vault version on all the nodes using the below command (in this case, we have 3 nodes: node1, node2, and node3):

    vault --version

    2. Configure SSL certificates

    Note: Vault should always be used with TLS in production to provide secure communication between clients and the Vault server. It requires a certificate file and key file on each Vault host.

    We can generate the SSL certs for the Vault cluster on the master node and copy them to the other nodes in the cluster.

    Refer to: https://developer.hashicorp.com/vault/tutorials/secrets-management/pki-engine#scenario-introduction for generating SSL certs.

    • Copy tls.crt, tls.key, and tls_ca.pem to /etc/vault.d/ssl/. 
    • Change ownership to `vault`:
    [user@node1 ~]$ cd /etc/vault.d/ssl/           
    [user@node1 ssl]$ sudo chown vault. tls*

    • Copy tls* from /etc/vault.d/ssl to the other nodes.

    3. Configure the enterprise license. Copy the license to all nodes:

    cp /root/vault.hclic /etc/vault.d/vault.hclic
    chown root:vault /etc/vault.d/vault.hclic
    chmod 0640 /etc/vault.d/vault.hclic

    4. Create the storage directory for raft storage on all nodes:

    sudo mkdir --parents /opt/raft
    sudo chown --recursive vault:vault /opt/raft

    5. Set firewall rules on all nodes:

    sudo firewall-cmd --permanent --add-port=8200/tcp
    sudo firewall-cmd --permanent --add-port=8201/tcp
    sudo firewall-cmd --reload

    6. Create vault configuration file on all nodes:

    ### Node 1 ###
    [user@node1 vault.d]$ cat vault.hcl
    storage "raft" {
        path = "/opt/raft"
        node_id = "node1"
        retry_join 
        {
            leader_api_addr = "https://node2.int.us-west-1-dev.central.example.com:8200"
            leader_ca_cert_file = "/etc/vault.d/ssl/tls_ca.pem"
            leader_client_cert_file = "/etc/vault.d/ssl/tls.crt"
            leader_client_key_file = "/etc/vault.d/ssl/tls.key"
        }
        retry_join 
        {
            leader_api_addr = "https://node3.int.us-west-1-dev.central.example.com:8200"
            leader_ca_cert_file = "/etc/vault.d/ssl/tls_ca.pem"
            leader_client_cert_file = "/etc/vault.d/ssl/tls.crt"
            leader_client_key_file = "/etc/vault.d/ssl/tls.key"
        }
    }
    
    listener "tcp" {
       address = "0.0.0.0:8200"
       tls_disable = false
       tls_cert_file = "/etc/vault.d/ssl/tls.crt"
       tls_key_file = "/etc/vault.d/ssl/tls.key"
       tls_client_ca_file = "/etc/vault.d/ssl/tls_ca.pem"
       tls_cipher_suites = "TLS_TEST_128_GCM_SHA256,
                            TLS_TEST_128_GCM_SHA256,
                            TLS_TEST20_POLY1305,
                            TLS_TEST_256_GCM_SHA384,
                            TLS_TEST20_POLY1305,
                            TLS_TEST_256_GCM_SHA384"
    }
    api_addr = "https://node1.int.us-west-1-dev.central.example.com:8200"
    cluster_addr = "https://node1.int.us-west-1-dev.central.example.com:8201"
    disable_mlock = true
    ui = true
    log_level = "trace"
    disable_cache = true
    cluster_name = "POC"
    
    # Enterprise license_path
    # This will be required for enterprise as of v1.8
    license_path = "/etc/vault.d/vault.hclic"

    ### Node 2 ###
    [user@node2 vault.d]$ cat vault.hcl
    storage "raft" {
        path = "/opt/raft"
        node_id = "node2"
        retry_join 
        {
            leader_api_addr = "https://node1.int.us-west-1-dev.central.example.com:8200"
            leader_ca_cert_file = "/etc/vault.d/ssl/tls_ca.pem"
            leader_client_cert_file = "/etc/vault.d/ssl/tls.crt"
            leader_client_key_file = "/etc/vault.d/ssl/tls.key"
        }
        retry_join 
        {
            leader_api_addr = "https://node3.int.us-west-1-dev.central.example.com:8200"
            leader_ca_cert_file = "/etc/vault.d/ssl/tls_ca.pem"
            leader_client_cert_file = "/etc/vault.d/ssl/tls.crt"
            leader_client_key_file = "/etc/vault.d/ssl/tls.key"
        } 
    }
    
    listener "tcp" {
       address = "0.0.0.0:8200"
       tls_disable = false
       tls_cert_file = "/etc/vault.d/ssl/tls.crt"
       tls_key_file = "/etc/vault.d/ssl/tls.key"
       tls_client_ca_file = "/etc/vault.d/ssl/tls_ca.pem"
       tls_cipher_suites = "TLS_TEST_128_GCM_SHA256,
                            TLS_TEST_128_GCM_SHA256,
                            TLS_TEST20_POLY1305,
                            TLS_TEST_256_GCM_SHA384,
                            TLS_TEST20_POLY1305,
                            TLS_TEST_256_GCM_SHA384"
    }
    api_addr = "https://node2.int.us-west-1-dev.central.example.com:8200"
    cluster_addr = "https://node2.int.us-west-1-dev.central.example.com:8201"
    disable_mlock = true
    ui = true
    log_level = "trace"
    disable_cache = true
    cluster_name = "POC"
    
    # Enterprise license_path
    # This will be required for enterprise as of v1.8
    license_path = "/etc/vault.d/vault.hclic"

    ### Node 3 ###
    [user@node3 ~]$ cat /etc/vault.d/vault.hcl
    storage "raft" {
        path = "/opt/raft"
        node_id = "node3"
        retry_join 
        {
            leader_api_addr = "https://node1.int.us-west-1-dev.central.example.com:8200"
            leader_ca_cert_file = "/etc/vault.d/ssl/tls_ca.pem"
            leader_client_cert_file = "/etc/vault.d/ssl/tls.crt"
            leader_client_key_file = "/etc/vault.d/ssl/tls.key"
        }
        retry_join 
        {
            leader_api_addr = "https://node2.int.us-west-1-dev.central.example.com:8200"
            leader_ca_cert_file = "/etc/vault.d/ssl/tls_ca.pem"
            leader_client_cert_file = "/etc/vault.d/ssl/tls.crt"
            leader_client_key_file = "/etc/vault.d/ssl/tls.key"
        }
    }
    
    listener "tcp" {
       address = "0.0.0.0:8200"
       tls_disable = false
       tls_cert_file = "/etc/vault.d/ssl/tls.crt"
       tls_key_file = "/etc/vault.d/ssl/tls.key"
       tls_client_ca_file = "/etc/vault.d/ssl/tls_ca.pem"
       tls_cipher_suites = "TLS_TEST_128_GCM_SHA256,
                            TLS_TEST_128_GCM_SHA256,
                            TLS_TEST20_POLY1305,
                            TLS_TEST_256_GCM_SHA384,
                            TLS_TEST20_POLY1305,
                            TLS_TEST_256_GCM_SHA384"
    }
    api_addr = "https://node3.int.us-west-1-dev.central.example.com:8200"
    cluster_addr = "https://node3.int.us-west-1-dev.central.example.com:8201"
    disable_mlock = true
    ui = true
    log_level = "trace"
    disable_cache = true
    cluster_name = "POC"
    
    # Enterprise license_path
    # This will be required for enterprise as of v1.8
    license_path = "/etc/vault.d/vault.hclic"

    7. Set environment variables on all nodes:

    export VAULT_ADDR=https://$(hostname):8200
    export VAULT_CACERT=/etc/vault.d/ssl/tls_ca.pem
    export CA_CERT=`cat /etc/vault.d/ssl/tls_ca.pem`

    8. Start Vault as a service on all nodes:

    If you are interested, you can view the systemd unit file; then enable and start the service: 

    cat /etc/systemd/system/vault.service
    systemctl enable vault.service
    systemctl start vault.service
    systemctl status vault.service
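    For reference, a minimal vault.service unit typically looks like the sketch below (paths and hardening options vary by installation; the official packages ship a more complete unit):

```ini
[Unit]
Description=HashiCorp Vault
After=network-online.target
Wants=network-online.target

[Service]
User=vault
Group=vault
ExecStart=/usr/bin/vault server -config=/etc/vault.d/vault.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```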

    9. Check Vault status on all nodes:

    vault status

    10. Initialize Vault with the following command on node1 only. Store the unseal keys securely.

    [user@node1 vault.d]$ vault operator init -key-shares=1 -key-threshold=1
    Unseal Key 1: HPY/g5OiT8ivD6L4Bqfjx9L1We2MVb4WZAqKZk6zFf8=
    Initial Root Token: hvs.j4qTq1IZP9nscILMtN2p9GE0
    Vault initialized with 1 key shares and a key threshold of 1.
    Please securely distribute the key shares printed above. 
    When the Vault is re-sealed, restarted, or stopped, you must supply at least 1 of these keys to unseal it
    before it can start servicing requests.
    Vault does not store the generated root key. 
    Without at least 1 keys to reconstruct the root key, Vault will remain permanently sealed!
    It is possible to generate new unseal keys, provided you have a
    quorum of existing unseal keys shares. See "vault operator rekey" for more information.

    11. Set the Vault token environment variable so the vault CLI can authenticate to the server. Use the following command, replacing <initial-root-token> with the value generated in the previous step.

    export VAULT_TOKEN=<initial-root-token>
    echo "export VAULT_TOKEN=$VAULT_TOKEN" >> /root/.bash_profile
    ### Repeat this step for the other 2 servers.

    12. Unseal Vault1 using the unseal key generated in step 10. Notice the Unseal Progress value change as you present each key. After meeting the key threshold, the Sealed value should change from true to false.

    [user@node1 vault.d]$ vault operator unseal HPY/g5OiT8ivD6L4Bqfjx9L1We2MVb4WZAqKZk6zFf8=
    Key                         Value
    ---                         -----
    Seal Type                   shamir
    Initialized                 true
    Sealed                      false
    Total Shares                1
    Threshold                   1
    Version                     1.11.0
    Build Date                  2022-06-17T15:48:44Z
    Storage Type                raft
    Cluster Name                POC
    Cluster ID                  109658fe-36bd-7d28-bf92-f095c77e860c
    HA Enabled                  true
    HA Cluster                  https://node1.int.us-west-1-dev.central.example.com:8201
    HA Mode                     active
    Active Since                2022-06-29T12:50:46.992698336Z
    Raft Committed Index        36
    Raft Applied Index          36

    13. Unseal Vault2 (Use the same unseal key generated in step 10 for Vault1):

    [user@node2 vault.d]$ vault operator unseal HPY/g5OiT8ivD6L4Bqfjx9L1We2MVb4WZAqKZk6zFf8=
    Key                Value
    ---                -----
    Seal Type          shamir
    Initialized        true
    Sealed             true
    Total Shares       1
    Threshold          1
    Unseal Progress    0/1
    Unseal Nonce       n/a
    Version            1.11.0
    Build Date         2022-06-17T15:48:44Z
    Storage Type       raft
    HA Enabled         true
    
    [user@node2 vault.d]$ vault status
    Key                   Value
    ---                   -----
    Seal Type             shamir
    Initialized           true
    Sealed                true
    Total Shares          1
    Threshold             1
    Version               1.11.0
    Build Date            2022-06-17T15:48:44Z
    Storage Type          raft
    Cluster Name          POC
    Cluster ID            109658fe-36bd-7d28-bf92-f095c77e860c
    HA Enabled            true
    HA Cluster            https://node1.int.us-west-1-dev.central.example.com:8201
    HA Mode               standby
    Active Node Address   https://node1.int.us-west-1-dev.central.example.com:8200
    Raft Committed Index  37
    Raft Applied Index    37

    14. Unseal Vault3 (Use the same unseal key generated in step 10 for Vault1):

    [user@node3 ~]$ vault operator unseal HPY/g5OiT8ivD6L4Bqfjx9L1We2MVb4WZAqKZk6zFf8=
    Key                Value
    ---                -----
    Seal Type          shamir
    Initialized        true
    Sealed             true
    Total Shares       1
    Threshold          1
    Unseal Progress    0/1
    Unseal Nonce       n/a
    Version            1.11.0
    Build Date         2022-06-17T15:48:44Z
    Storage Type       raft
    HA Enabled         true
    
    [user@node3 ~]$ vault status
    Key                       Value
    ---                       -----
    Seal Type                 shamir
    Initialized               true
    Sealed                    false
    Total Shares              1
    Threshold                 1
    Version                   1.11.0
    Build Date                2022-06-17T15:48:44Z
    Storage Type              raft
    Cluster Name              POC
    Cluster ID                109658fe-36bd-7d28-bf92-f095c77e860c
    HA Enabled                true
    HA Cluster                https://node1.int.us-west-1-dev.central.example.com:8201
    HA Mode                   standby
    Active Node Address       https://node1.int.us-west-1-dev.central.example.com:8200
    Raft Committed Index      39
    Raft Applied Index        39

    15. Check the cluster’s raft status with the following command:

    [user@node3 ~]$ vault operator raft list-peers
    Node      Address                                            State       Voter
    ----      -------                                            -----       -----
    node1    node1.int.us-west-1-dev.central.example.com:8201    leader      true
    node2    node2.int.us-west-1-dev.central.example.com:8201    follower    true
    node3    node3.int.us-west-1-dev.central.example.com:8201    follower    true

    16. Currently, node1 is the active node. We can experiment to see what happens if node1 steps down from its active node duty.

    In the terminal where VAULT_ADDR is set to: https://node1.int.us-west-1-dev.central.example.com, execute the step-down command.

    $ vault operator step-down # equivalent of stopping the node or stopping the systemctl service
    Success! Stepped down: https://node2.int.us-west-1-dev.central.example.com:8200

    In the terminal, where VAULT_ADDR is set to https://node2.int.us-west-1-dev.central.example.com:8200, examine the raft peer set.

    [user@node1 ~]$ vault operator raft list-peers
    Node      Address                                            State       Voter
    ----      -------                                            -----       -----
    node1    node1.int.us-west-1-dev.central.example.com:8201    follower    true
    node2    node2.int.us-west-1-dev.central.example.com:8201    leader      true
    node3    node3.int.us-west-1-dev.central.example.com:8201    follower    true

    Conclusion 

    Vault servers are now operational in High Availability mode. We can test this by writing a secret from either the active or a standby Vault instance and seeing it succeed, which exercises request forwarding. We can also shut down the active Vault instance (sudo systemctl stop vault) to simulate a system failure and watch a standby instance assume leadership.
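    For example, the request-forwarding test could look like the following sketch (this assumes a KV v2 secrets engine and an authenticated token; run the write against a standby node and watch it succeed via forwarding):

```shell
# Enable a KV v2 secrets engine and write/read a test secret.
vault secrets enable -path=secret kv-v2
vault kv put secret/ha-test message="hello from the cluster"
vault kv get secret/ha-test
```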

  • Benefits of Intelligent Revenue Cycle Management (iRCM) in Healthcare

    Highlights

    Intelligent Revenue Cycle Management (iRCM) is driving a dramatic revolution in the healthcare industry. This innovative approach incorporates advanced technologies like Artificial Intelligence (AI), Machine Learning (ML), and other critical tools to transform revenue cycles, with AI and ML at the forefront of that transformation. Beyond the commonly employed AI applications for eligibility/benefits verification and payment estimations, there is a broader scope for AI in RCM.

    A research study by Change Healthcare found that approximately two-thirds of healthcare facilities and health systems are leveraging AI in their revenue cycle management (RCM) processes. Among these organizations, 72% use AI for eligibility/benefits verification, and 64% employ it for payment estimations (likely due to the No Surprises Act). However, the role of AI in RCM extends beyond these areas.

    The 2022 State of Revenue Integrity survey by the National Association of Healthcare Revenue Integrity highlights other AI-enabled functions in RCM, including charge description master (CDM) maintenance, charge capture, denials management, payer contract management, physician credentialing, and claim auditing, among others.

    Shedding Light on the Power of AI and Advanced Technologies in iRCM

    In addition to AI, technologies such as natural language processing (NLP), robotic process automation (RPA), and advanced analytics improve Intelligent Revenue Cycle Management (iRCM). NLP enables the extraction of essential insights from unstructured data sources, hence assisting with exact coding, documentation, and compliance. RPA automates repetitive and rule-based processes, liberating resources and lowering administrative burdens. Advanced analytics provides data-driven insights, empowering healthcare organizations to make informed decisions, identify areas for improvement, and drive continuous optimization.

    The adoption of artificial intelligence and other technologies into iRCM represents a paradigm shift in healthcare financial management. It opens new avenues for healthcare organizations to increase revenue capture, save costs, streamline processes, and improve patient satisfaction.

    iRCM: Unleashing the Advantages of Advanced Technologies for A Healthcare System’s Financial Success

    iRCM leverages modern technologies to enhance financial success in healthcare. iRCM streamlines and optimizes the revenue cycle processes, leading to improved revenue capture, reduced costs, and enhanced financial outcomes for healthcare organizations. Here’s how Intelligent Revenue Cycle Management unleashes the advantages of advanced technologies like AI, ML, and RPA for healthcare’s financial success:

    AI in iRCM:
    • Claims Processing: AI algorithms can analyze and handle enormous amounts of claims data, automate claim validation, and detect trends or abnormalities that may result in denials or mistakes. This can save manual work, enhance accuracy, and shorten the claims processing cycle.
    • Prioritization and Workflow Optimization: Artificial intelligence can intelligently prioritize jobs based on urgency, complexity, or revenue impact. It has the ability to automate work and optimize workflow management, ensuring that high-priority matters are addressed quickly and efficiently.
    • Intelligent Coding and Documentation: AI techniques like Natural Language Processing (NLP) are vital in analyzing medical records, extracting relevant information, identifying medical concepts, and assisting in coding. AI models trained on large datasets provide intelligent code recommendations, streamlining workflows in revenue cycle management for healthcare organizations.
    ML in iRCM:
    • Predictive Analytics: ML algorithms can analyze historical data to identify trends, patterns, and correlations related to revenue cycle performance. By leveraging this information, organizations can predict potential issues, optimize processes, and make data-driven decisions to improve financial outcomes.
    • Denial Management: ML can help learn from past denials and identify patterns contributing to claim rejections. This enables proactive measures to be taken, such as improving the documentation or addressing common billing errors, to minimize future denials.
    RPA in iRCM:
    • Automated Data Entry and Extraction: RPA can automate repetitive manual tasks, like data entry from paper documents or extracting information from records. This reduces errors, saves time, and improves efficiency in revenue cycle processes.
    • Claim Status and Follow-up: RPA bots can automatically retrieve claim status from payer portals, update the system with real-time information, and initiate follow-up actions based on predefined rules. This streamlines the claims management process and enhances revenue capture.
    • Eligibility Verification: RPA can automate the verification of patient insurance eligibility by extracting relevant information from various sources and validating it against payer databases. This ensures accurate billing and minimizes claim denials.

    Revolutionize Your Revenue Cycle: Embrace Intelligent Revenue Cycle Management Powered by Advanced Technologies

    iRCM (Intelligent Revenue Cycle Management) represents a cutting-edge approach to optimizing financial success in the healthcare industry. By harnessing the power of advanced technologies like AI, ML, and RPA, iRCM revolutionizes revenue cycle management processes. R Systems, a renowned provider of intelligent Revenue Cycle Management Services, offers comprehensive solutions integrating AI algorithms for intelligent coding and documentation, ML models for predictive analytics, and RPA for streamlined automation.

    With R Systems’ innovative approach, healthcare organizations can unlock the full potential of their revenue cycle, ensuring accurate coding, minimizing claim denials, and maximizing revenue capture. By embracing R Systems’ expertise and leveraging intelligent technologies, healthcare providers can experience efficient operations, improved financial outcomes, and elevated standards of patient care in their revenue cycle management.

  • Best Practices for Kafka Security

    Overview

    We will cover the security concepts of Kafka and walk through the implementation of encryption, authentication, and authorization for the Kafka cluster.

    This article will explain how to configure SASL_SSL security for your Kafka cluster and how to protect data in transit. SASL (Simple Authentication and Security Layer) over SSL is a communication type in which clients authenticate using mechanisms like PLAIN, SCRAM, etc., and the server uses SSL certificates to establish secure communication. We will use the SCRAM authentication mechanism here, which helps establish mutual authentication between the client and server. We’ll also discuss authorization and ACLs, which are important for securing your cluster.
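    As a preview, a client configured for SASL_SSL with SCRAM typically carries properties like the fragment below (the file path, username, and passwords are placeholders for illustration):

```properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="client-user" \
  password="client-password";
ssl.truststore.location=/etc/kafka/ssl/kafka.client.truststore.jks
ssl.truststore.password=changeit
```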

    Prerequisites

A running Kafka cluster and a basic understanding of security components.

    Need for Kafka Security

The primary reason is to prevent unlawful internet activities such as misuse, modification, disruption, and disclosure of data. To secure a Kafka cluster, we need to understand three terms:

    • Authentication – A security method by which the server verifies that a client is who it claims to be before granting access.
    • Authorization – Implemented alongside authentication, it lets the server decide what an identified client is allowed to do; basically, it grants only the limited access that is sufficient for the client.
    • Encryption – The process of transforming data so that it is distorted and unreadable without a decryption key. Encryption ensures that no other client can intercept and steal or read data.

    Here is the quick start guide by Apache Kafka, so check it out if you still need to set up Kafka.

    https://kafka.apache.org/quickstart

    We’ll not cover the theoretical aspects here, but you can find a ton of sources on how these three components work internally. For now, we’ll focus on the implementation part and how Kafka revolves around security.

    This image illustrates SSL communication between the Kafka client and server.

    We are going to implement the steps in the below order:

    • Create a Certificate Authority
    • Create a Truststore & Keystore

Certificate Authority – A trusted entity that issues SSL certificates. A CA acts as an independent, trusted third party, issuing certificates for use by others; it validates the credentials of the person or organization requesting a certificate before issuing one.

    Truststore – A truststore contains certificates from other parties with which you want to communicate or certificate authorities that you trust to identify other parties. In simple words, a list of CAs that can validate the certificate signed by the trusted CA.

    KeyStore – A KeyStore contains private keys and certificates with their corresponding public keys. Keystores can have one or more CA certificates depending upon what’s needed.

For the Kafka server, we need a server certificate, and here the KeyStore comes into the picture, since it stores the server certificate. The server certificate should be signed by a Certificate Authority (CA): the KeyStore generates a signing request for the server certificate, and in response, the CA sends back a signed certificate, which is stored in the KeyStore.

    We will create our own certificate authority for demonstration purposes. If you don’t want to create a private certificate authority, there are many certificate providers you can go with, like IdenTrust and GoDaddy. Since we are creating one, we need to tell our Kafka client to trust our private certificate authority using the Trust Store.

    This block diagram shows you how all the components communicate with each other and their role to generate the final certificate.

    So, let’s create our Certificate Authority. Run the below command in your terminal:

"openssl req -new -x509 -keyout <private_key_name> -out <public_certificate_name> -days <validity_days>"

It will ask for a passphrase; keep it safe for future use. After the command succeeds, we should have two files, named private_key_name and public_certificate_name.
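As a concrete, non-interactive sketch (the filenames, subject, and passphrase below are illustrative, not part of the article's setup):

```shell
# Create a self-signed CA (private key + public certificate) without prompts.
# ca-key, ca-cert, the CN, and the passphrase are all example values.
openssl req -new -x509 \
  -keyout ca-key -out ca-cert -days 365 \
  -subj "/CN=demo-kafka-ca" \
  -passout pass:ca-secret

# Inspect the subject of the resulting CA certificate.
openssl x509 -in ca-cert -noout -subject
```

The `-subj` and `-passout` flags simply replace the interactive prompts so the command can run in a script.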

    Now, let’s create a KeyStore and trust store for brokers; we need both because brokers also interact internally with each other. Let’s understand with the help of an example: Broker A wants to connect with Broker B, so Broker A acts as a client and Broker B as a server. We are using the SASL_SSL protocol, so A needs SASL credentials, and B needs a certificate for authentication. The reverse is also possible where Broker B wants to connect with Broker A, so we need both a KeyStore and a trust store for authentication.

    Now let’s create a trust store. Execute the below command in the terminal, and it should ask for the password. Save the password for future use:

    “keytool -keystore <truststore_name.jks> -alias <alias name of the entry to process> -import -file <public_certificate_name>”

Here, we are using the .jks extension for the file, which stands for Java KeyStore. You can also use Public-Key Cryptography Standards #12 (pkcs12) instead of .jks, but that’s totally up to you. public_certificate_name is the same certificate we generated when creating the CA.

    For the KeyStore configuration, run the below command and store the password:

"keytool -genkey -keystore <keystore_name.jks> -validity <number_of_days> -storepass <store_password> -alias <alias_name> -keyalg <key_algorithm_name> -ext SAN=DNS:localhost"

This action creates the KeyStore file in the current working directory. The “First and Last Name” prompt requires you to enter a fully qualified domain name, because some certificate authorities, such as VeriSign, expect this property to be one. Not all CAs require it, but I recommend using a fully qualified domain name for portability. All other information should be valid; if the information cannot be verified, a certificate authority such as VeriSign will not sign the CSR generated for that record. I’m using localhost as the domain name here, as seen in the command above.

The KeyStore now has an entry under alias_name containing the private key and the information needed to generate a CSR. Next, let’s create a certificate signing request, which will be used to get a signed certificate from the Certificate Authority.

    Execute the below command in your terminal:

    “keytool -keystore <keystore_name.jks> -alias <alias_name> -certreq -file <file_name.csr>”

    So, we have generated a signing certificate request using a KeyStore (the KeyStore name and alias name should be the same). It should ask for the KeyStore password, so enter the same one used while creating the KeyStore.

Now, execute the below command. It will ask for a password; enter the CA passphrase, and you will have a signed certificate:

    “openssl x509 -req -CA <public_certificate_name> -CAkey <private_key_name> -in <csr file> -out <signed_file_name> -CAcreateserial”
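The same request-and-sign flow can be exercised end to end with openssl alone, which is handy for verifying your CA works before involving keytool (all filenames, subjects, and the passphrase are illustrative):

```shell
# 0. Create a demo CA (same idea as earlier; example values).
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 \
  -subj "/CN=demo-kafka-ca" -passout pass:ca-secret

# 1. Generate a server key and a certificate signing request (CSR).
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server-key -out server.csr \
  -subj "/CN=localhost"

# 2. Sign the CSR with the CA, producing the signed server certificate.
openssl x509 -req -CA ca-cert -CAkey ca-key \
  -in server.csr -out server-signed.crt \
  -days 365 -CAcreateserial \
  -passin pass:ca-secret

# 3. Verify that the signed certificate chains back to the CA.
openssl verify -CAfile ca-cert server-signed.crt
```

If step 3 reports the certificate as OK, the same CA files can be used to sign the CSR produced by keytool in the article's flow.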

Finally, we need to add the CA’s public certificate and the signed certificate to the KeyStore. Run the below command to add the CA certificate to the KeyStore.

    “keytool -keystore <keystore_name.jks> -alias <public_certificate_name> -import -file <public_certificate_name>”

    Now, let’s run the below command; it will add the signed certificate to the KeyStore.

    “keytool -keystore <keystore_name.jks> -alias <alias_name> -import -file <signed_file_name>”

As of now, we have generated all the security files for the broker. For inter-broker communication, we are using SASL_SSL (see security.inter.broker.protocol in server.properties). Now we need to create a broker username and password using the SCRAM mechanism. For more details, click here.

    Run the below command:

"kafka-configs.sh --zookeeper <host:port> --entity-type users --entity-name <username> --alter --add-config 'SCRAM-SHA-512=[password=<password>]'"

(On newer Kafka versions, where ZooKeeper has been removed, use --bootstrap-server <host:port> instead of --zookeeper.)

    NOTE: Credentials for inter-broker communication must be created before Kafka brokers are started.

    Now, we need to configure the Kafka broker property file, so update the file as given below:

    listeners=SASL_SSL://localhost:9092
    advertised.listeners=SASL_SSL://localhost:9092
    ssl.truststore.location={path/to/truststore_name.jks}
    ssl.truststore.password={truststore_password}
    ssl.keystore.location={/path/to/keystore_name.jks}
    ssl.keystore.password={keystore_password}
    security.inter.broker.protocol=SASL_SSL
    ssl.client.auth=none
    ssl.protocol=TLSv1.2
    sasl.enabled.mechanisms=SCRAM-SHA-512
    sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="{username}" password="{password}";
    super.users=User:{username}

NOTE: If you are using an external JAAS config file, then remove the ScramLoginModule line above and set this environment variable before starting the broker: export KAFKA_OPTS="-Djava.security.auth.login.config={path/to/broker.conf}"
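For reference, such an external JAAS file (broker.conf) might look like the sketch below; the admin/admin-secret credentials are placeholders, not values from this article:

```
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret";
};
```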

Now, if we start Kafka, the broker should come up on port 9092 without any failure. If you have multiple brokers, the same config file can be replicated among them, but each broker must use a different port.
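For example, a second broker on the same machine might differ only in its id and listener ports (values here are illustrative):

```properties
broker.id=1
listeners=SASL_SSL://localhost:9093
advertised.listeners=SASL_SSL://localhost:9093
# All other SSL/SASL settings can stay identical to the first broker.
```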

    Producers and consumers need a username and a password to access the broker, so let’s create their credentials and update respective configurations.

    Create a producer user and update producer.properties inside the bin directory, so execute the below command in your terminal.

"bin/kafka-configs.sh --zookeeper <host:port> --entity-type users --entity-name <producer_name> --alter --add-config 'SCRAM-SHA-512=[password=<password>]'"

We need a trust store file for our clients (producer and consumer); since we already know how to create a trust store, this is a small task for you. It is suggested that producers and consumers have separate trust stores, because when we move Kafka to production, there could be multiple producers and consumers on different machines.

    security.protocol=SASL_SSL
    ssl.protocol=TLSv1.2
    ssl.truststore.location={path/to/client.truststore.jks}
    ssl.truststore.password={password}
    sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="{producer_name}" password="{password}";

    The below command creates a consumer user, so now let’s update consumer.properties inside the bin directory:

"bin/kafka-configs.sh --zookeeper <host:port> --entity-type users --entity-name <consumer_name> --alter --add-config 'SCRAM-SHA-512=[password=<password>]'"

    security.protocol=SASL_SSL
    ssl.protocol=TLSv1.2
    ssl.truststore.location={path/to/client.truststore.jks}
    ssl.truststore.password={password}
    sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="{consumer_name}" password="{password}";

    As of now, we have implemented encryption and authentication for Kafka brokers. To verify that our producer and consumer are working properly with SCRAM credentials, run the console producer and consumer on some topics.

    Authorization is not implemented yet. Kafka uses access control lists (ACLs) to specify which users can perform which actions on specific resources or groups of resources. Each ACL has a principal, a permission type, an operation, a resource type, and a name.

The default authorizer is AclAuthorizer, provided by Kafka; Confluent also provides the Confluent Server Authorizer, which is quite different from AclAuthorizer. An authorizer is a server plugin used by Kafka to authorize actions. Specifically, the authorizer controls whether operations should be authorized based on the principal and the resource being accessed.

Format of ACLs – Principal P is [Allowed/Denied] Operation O from Host H on any Resource R matching ResourcePattern RP. For example: “User:producer_name is Allowed Operation Write from Host * on Topic topic_name”.

    Execute the below command to create an ACL with writing permission for the producer:

"bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<host:port> --add --allow-principal User:<producer_name> --operation WRITE --topic <topic_name>"

The above command creates an ACL granting the WRITE operation to producer_name on topic_name.

    Now, execute the below command to create an ACL with reading permission for the consumer:

"bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<host:port> --add --allow-principal User:<consumer_name> --operation READ --topic <topic_name>"

Now we need to grant the consumer access to its consumer group; the below command allows the consumer to read as part of the given consumer group.

"bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<host:port> --add --allow-principal User:<consumer_name> --operation READ --group <consumer_group_name>"

Now, we need to add some configuration in two files: server.properties (for the broker) and consumer.properties.

    # Authorizer class
    authorizer.class.name=kafka.security.authorizer.AclAuthorizer

    The above line indicates that AclAuthorizer class is used for authorization.

    # consumer group id
    group.id=<consumer_group_name>

A consumer group ID is mandatory: if we do not specify a group, the consumer will not be able to access data from topics, so the group ID must be provided before starting a consumer.

Let’s test the producer and consumer one by one: run the console producer, and run the console consumer in another terminal; both should run without errors.

    console-producer
    console-consumer

    Voila!! Your Kafka is secured.

    Summary

In a nutshell, we have secured our Kafka cluster using the SASL_SSL mechanism and learned how to create ACLs that grant different permissions to different users.

    Apache Kafka is the wild west without security. By default, there is no encryption, authentication, or access control list. Any client can communicate with the Kafka broker using the PLAINTEXT port. Access using this port should be restricted to trusted clients only. You can use network segmentation and/or authentication ACLs to restrict access to trusted IP addresses in these cases. If none of these are used, the cluster is wide open and available to anyone. A basic knowledge of Kafka authentication, authorization, encryption, and audit trails is required to safely move a system into production.

  • Discover the Benefits of Android Clean Architecture

All architectures share one common goal: managing the complexity of our application. We may not need to worry about it on a smaller project, but it becomes a lifesaver on larger ones. The purpose of Clean Architecture is to keep code complexity down by isolating business rules from implementation details.

    We must first understand a few things to implement the Clean Architecture in an Android project.

    • Entities: Encapsulate enterprise-wide critical business rules. An entity can be an object with methods or data structures and functions.
    • Use cases: Orchestrate the flow of data to and from the entities.
    • Controllers, gateways, presenters: A set of adapters that convert data from the use cases and entities format to the most convenient way to pass the data to the upper level (typically the UI).
    • UI, external interfaces, DB, web, devices: The outermost layer of the architecture, generally composed of frameworks such as database and web frameworks.

Here is one rule of thumb we need to follow. First, look at the direction of the arrows in the diagram: entities do not depend on use cases, use cases do not depend on controllers, and so on. An inner layer must never rely on a layer outside it; the dependencies between the layers must always point inwards.

    Advantages of Clean Architecture:

    • Strict architecture—hard to make mistakes
    • Business logic is encapsulated, easy to use, and tested
    • Enforcement of dependencies through encapsulation
    • Allows for parallel development
    • Highly scalable
    • Easy to understand and maintain
    • Testing is facilitated

    Let’s understand this using the small case study of the Android project, which gives more practical knowledge rather than theoretical.

    A pragmatic approach

A typical Android project needs to separate the concerns between the UI, the business logic, and the data model, so taking “the theory” into account, we decided to split the project into three modules:

    • Domain Layer: contains the definitions of the business logic of the app, the data models, the abstract definition of repositories, and the definition of the use cases.
    Domain Module
    • Data Layer: This layer provides the abstract definition of all the data sources. Any application can reuse this without modifications. It contains repositories and data sources implementations, the database definition and its DAOs, the network APIs definitions, some mappers to convert network API models to database models, and vice versa.
    Data Module
    • Presentation layer: This is the layer that mainly interacts with the UI. It’s Android-specific and contains fragments, view models, adapters, activities, composables, and so on. It also includes a service locator to manage dependencies.
    Presentation Module
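In a Gradle project, this three-module split would typically be declared in settings.gradle; the module names below are assumed to mirror the layers and are not taken from the article:

```groovy
// settings.gradle – hypothetical module layout for the three layers
include ':app'     // presentation layer (Android-specific)
include ':data'    // data layer (repositories, Room, network)
include ':domain'  // domain layer (pure Kotlin, no Android dependencies)
```

Keeping the domain module free of Android dependencies is what lets its business logic be unit-tested on the JVM without an emulator.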

    Marvel’s comic characters App

    To elaborate on all the above concepts related to Clean Architecture, we are creating an app that lists Marvel’s comic characters using Marvel’s developer API. The app shows a list of Marvel characters, and clicking on each character will show details of that character. Users can also bookmark their favorite characters. It seems like nothing complicated, right?

    Before proceeding further into the sample, it’s good to have an idea of the following frameworks because the example is wholly based on them.

    • Jetpack Compose – Android’s recommended modern toolkit for building native UI.
    • Retrofit 2 – A type-safe HTTP client for Android for Network calls.
    • ViewModel – A class responsible for preparing and managing the data for an activity or a fragment.
    • Kotlin – Kotlin is a cross-platform, statically typed, general-purpose programming language with type inference.

To get the characters list, we have used Marvel’s developer API, which returns the list of Marvel characters.

    http://gateway.marvel.com/v1/public/characters

    The domain layer

    In the domain layer, we define the data model, the use cases, and the abstract definition of the character repository. The API returns a list of characters, with some info like name, description, and image links.

    data class CharacterEntity(
        val id: Long,
        val name: String,
        val description: String,
        val imageUrl: String,
        val bookmarkStatus: Boolean
    )

    interface MarvelDataRepository {
        suspend fun getCharacters(dataSource: DataSource): Flow<List<CharacterEntity>>
        suspend fun getCharacter(characterId: Long): Flow<CharacterEntity>
        suspend fun toggleCharacterBookmarkStatus(characterId: Long): Boolean
        suspend fun getComics(dataSource: DataSource, characterId: Long): Flow<List<ComicsEntity>>
    }

    class GetCharactersUseCase(
        private val marvelDataRepository: MarvelDataRepository,
        private val ioDispatcher: CoroutineDispatcher = Dispatchers.IO
    ) {
        operator fun invoke(forceRefresh: Boolean = false): Flow<List<CharacterEntity>> {
            return flow {
                emitAll(
                    marvelDataRepository.getCharacters(
                        if (forceRefresh) {
                            DataSource.Network
                        } else {
                            DataSource.Cache
                        }
                    )
                )
            }
                .flowOn(ioDispatcher)
        }
    }

    The data layer

    As we said before, the data layer must implement the abstract definition of the domain layer, so we need to put the repository’s concrete implementation in this layer. To do so, we can define two data sources, a “local” data source to provide persistence and a “remote” data source to fetch the data from the API.

    class MarvelDataRepositoryImpl(
        private val marvelRemoteService: MarvelRemoteService,
        private val charactersDao: CharactersDao,
        private val comicsDao: ComicsDao,
        private val ioDispatcher: CoroutineDispatcher = Dispatchers.IO
    ) : MarvelDataRepository {
    
        override suspend fun getCharacters(dataSource: DataSource): Flow<List<CharacterEntity>> =
            flow {
                emitAll(
                    when (dataSource) {
                        is DataSource.Cache -> getCharactersCache().map { list ->
                            if (list.isEmpty()) {
                                getCharactersNetwork()
                            } else {
                                list.toDomain()
                            }
                        }
                            .flowOn(ioDispatcher)
    
                        is DataSource.Network -> flowOf(getCharactersNetwork())
                            .flowOn(ioDispatcher)
                    }
                )
            }
    
        private suspend fun getCharactersNetwork(): List<CharacterEntity> =
            marvelRemoteService.getCharacters().body()?.data?.results?.let { remoteData ->
                if (remoteData.isNotEmpty()) {
                    charactersDao.upsert(remoteData.toCache())
                }
                remoteData.toDomain()
            } ?: emptyList()
    
        private fun getCharactersCache(): Flow<List<CharacterCache>> =
            charactersDao.getCharacters()
    
        override suspend fun getCharacter(characterId: Long): Flow<CharacterEntity> =
            charactersDao.getCharacterFlow(id = characterId).map {
                it.toDomain()
            }
    
        override suspend fun toggleCharacterBookmarkStatus(characterId: Long): Boolean {
    
            val status = charactersDao.getCharacter(characterId)?.bookmarkStatus?.not() ?: false
    
            return charactersDao.toggleCharacterBookmarkStatus(id = characterId, status = status) > 0
        }
    
        override suspend fun getComics(
            dataSource: DataSource,
            characterId: Long
        ): Flow<List<ComicsEntity>> = flow {
            emitAll(
                when (dataSource) {
                    is DataSource.Cache -> getComicsCache(characterId = characterId).map { list ->
                        if (list.isEmpty()) {
                            getComicsNetwork(characterId = characterId)
                        } else {
                            list.toDomain()
                        }
                    }
                    is DataSource.Network -> flowOf(getComicsNetwork(characterId = characterId))
                        .flowOn(ioDispatcher)
                }
            )
        }
    
        private suspend fun getComicsNetwork(characterId: Long): List<ComicsEntity> =
            marvelRemoteService.getComics(characterId = characterId)
                .body()?.data?.results?.let { remoteData ->
                    if (remoteData.isNotEmpty()) {
                        comicsDao.upsert(remoteData.toCache(characterId = characterId))
                    }
                    remoteData.toDomain()
                } ?: emptyList()
    
        private fun getComicsCache(characterId: Long): Flow<List<ComicsCache>> =
            comicsDao.getComics(characterId = characterId)
    }

Since we defined a data source to manage persistence, in this layer we also need to define the database; we are using the Room database. In addition, it’s good practice to create some mappers to map the API response to the corresponding database entity.

    fun List<Characters>.toCache() = map { character -> character.toCache() }
    
    fun Characters.toCache() = CharacterCache(
        id = id ?: 0,
        name = name ?: "",
        description = description ?: "",
        imageUrl = thumbnail?.let {
            "${it.path}.${it.extension}"
        } ?: ""
    )
    
    fun List<Characters>.toDomain() = map { character -> character.toDomain() }
    
    fun Characters.toDomain() = CharacterEntity(
        id = id ?: 0,
        name = name ?: "",
        description = description ?: "",
        imageUrl = thumbnail?.let {
            "${it.path}.${it.extension}"
        } ?: "",
        bookmarkStatus = false
    )

    @Entity
    data class CharacterCache(
        @PrimaryKey
        val id: Long,
        val name: String,
        val description: String,
        val imageUrl: String,
        val bookmarkStatus: Boolean = false
    ) : BaseCache

    The presentation layer

In this layer, we need UI components like fragments, activities, or composables to display the list of characters; here, we can use the widely used MVVM approach. The view model takes the use cases in its constructor and invokes the corresponding use case according to user actions (get a character, characters & comics, etc.).

    Each use case will invoke the appropriate method in the repository.

    class CharactersListViewModel(
        private val getCharacters: GetCharactersUseCase,
        private val toggleCharacterBookmarkStatus: ToggleCharacterBookmarkStatus
    ) : ViewModel() {
    
        private val _characters = MutableStateFlow<UiState<List<CharacterViewState>>>(UiState.Loading())
        val characters: StateFlow<UiState<List<CharacterViewState>>> = _characters
    
        init {
            _characters.value = UiState.Loading()
            getAllCharacters()
        }
    
        private fun getAllCharacters(forceRefresh: Boolean = false) {
            getCharacters(forceRefresh)
                .catch { error ->
                    error.printStackTrace()
                    when (error) {
                        is UnknownHostException, is ConnectException, is SocketTimeoutException -> _characters.value =
                            UiState.NoInternetError(error)
                        else -> _characters.value = UiState.ApiError(error)
                    }
                }.map { list ->
                    _characters.value = UiState.Loaded(list.toViewState())
                }.launchIn(viewModelScope)
        }
    
        fun refresh(showLoader: Boolean = false) {
            if (showLoader) {
                _characters.value = UiState.Loading()
            }
            getAllCharacters(forceRefresh = true)
        }
    
        fun bookmarkCharacter(characterId: Long) {
            viewModelScope.launch {
                toggleCharacterBookmarkStatus(characterId = characterId)
            }
        }
    }

    /*
    * Scaffold(Layout) for Characters list page
    * */
    
    
    @SuppressLint("UnusedMaterialScaffoldPaddingParameter")
    @Composable
    fun CharactersListScaffold(
        showComics: (Long) -> Unit,
        closeAction: () -> Unit,
        modifier: Modifier = Modifier,
        charactersListViewModel: CharactersListViewModel = getViewModel()
    ) {
        Scaffold(
            modifier = modifier,
            topBar = {
                TopAppBar(
                    title = {
                        Text(text = stringResource(id = R.string.characters))
                    },
                    navigationIcon = {
                        IconButton(onClick = closeAction) {
                            Icon(
                                imageVector = Icons.Filled.Close,
                                contentDescription = stringResource(id = R.string.close_icon)
                            )
                        }
                    }
                )
            }
        ) {
            val state = charactersListViewModel.characters.collectAsState()
    
            when (state.value) {
    
                is UiState.Loading -> {
                    Loader()
                }
    
                is UiState.Loaded -> {
                    state.value.data?.let { characters ->
                        val isRefreshing = remember { mutableStateOf(false) }
                        SwipeRefresh(
                            state = rememberSwipeRefreshState(isRefreshing = isRefreshing.value),
                            onRefresh = {
                                isRefreshing.value = true
                                charactersListViewModel.refresh()
                            }
                        ) {
                            isRefreshing.value = false
    
                            if (characters.isNotEmpty()) {
    
                                LazyVerticalGrid(
                                    columns = GridCells.Fixed(2),
                                    modifier = Modifier
                                        .padding(5.dp)
                                        .fillMaxSize()
                                ) {
                                    items(characters) { state ->
                                        CharacterTile(
                                            state = state,
                                            characterSelectAction = {
                                                showComics(state.id)
                                            },
                                            bookmarkAction = {
                                                charactersListViewModel.bookmarkCharacter(state.id)
                                            },
                                            modifier = Modifier
                                                .padding(5.dp)
                                                .fillMaxHeight(fraction = 0.35f)
                                        )
                                    }
                                }
    
                            } else {
                                Info(
                                    messageResource = R.string.no_characters_available,
                                    iconResource = R.drawable.ic_no_data
                                )
                            }
                        }
                    }
                }
    
                is UiState.ApiError -> {
                    Info(
                        messageResource = R.string.api_error,
                        iconResource = R.drawable.ic_something_went_wrong
                    )
                }
    
                is UiState.NoInternetError -> {
                    Info(
                        messageResource = R.string.no_internet,
                        iconResource = R.drawable.ic_no_connection,
                        isInfoOnly = false,
                        buttonAction = {
                            charactersListViewModel.refresh(showLoader = true)
                        }
                    )
                }
            }
        }
    }
    
    @Preview
    @Composable
    private fun CharactersListScaffoldPreview() {
        MarvelComicTheme {
            CharactersListScaffold(showComics = {}, closeAction = {})
        }
    }

Let’s see what the communication between the layers looks like.

    Source: Clean Architecture Tutorial for Android

As you can see, each layer communicates only with the closest one, keeping inner layers independent of outer layers. This way, we can quickly test each module separately, and the separation of concerns helps developers collaborate on the different modules of the project.

    Thank you so much!

  • Building A Containerized Microservice in Golang: A Step-by-step Guide

With the evolving architectural design of web applications, microservices have become a successful trend in architecting the application landscape. Along with advancements in application architecture, transport protocols such as REST and gRPC are improving in efficiency and speed. Containerizing microservice applications also helps greatly with agile development and high-speed delivery.

    In this blog, I will try to showcase how simple it is to build a cloud-native application on the microservices architecture using Go.

    We will break the solution into multiple steps. We will learn how to:

1) Build a set of containerized microservices, each with a very specific set of independent tasks, related only to a specific logical component.

    2) Use go-kit as the framework for developing and structuring the components of each service.

3) Build APIs that use HTTP (REST) and Protobuf (gRPC) as the transport mechanisms, use PostgreSQL for the databases, and finally deploy everything on the Azure stack for API management and CI/CD.

    Note: Deployment, setting up the CI-CD and API-Management on Azure or any other cloud is not in the scope of the current blog.

    Prerequisites:

    • A beginner’s level of understanding of web services, Rest APIs and gRPC
    • GoLand/ VS Code
    • Properly installed and configured Go. If not, check it out here
    • Set up a new project directory under the GOPATH
    • Understanding of the standard Golang project. For reference, visit here
    • PostgreSQL client installed
    • Go kit

    What are we going to do?

    We will develop a simple web application working on the following problem statement:

    • A global publishing company that publishes books and journals wants to develop a service to watermark its documents. A document (book or journal) has a title, an author, and a watermark property
    • The watermark operation can be in Started, InProgress and Finished status
    • The specific set of users should be able to do the watermark on a document
    • Once the watermark is done, the document can never be re-marked

    Example of a document:

    {content: "book", title: "The Dark Code", author: "Bruce Wayne", topic: "Science"}

    For a detailed understanding of the requirement, please refer to this.

    Architecture:

    In this project, we will have 3 microservices: Authentication Service, Database Service and the Watermark Service. We have a PostgreSQL database server and an API-Gateway.

    Authentication Service:

    The application is supposed to have a role-based and user-based access control mechanism. This service will authenticate the user according to their specific role and return only an HTTP status code: 200 when the user is authorized, and 401 for unauthorized users.

    APIs:

    • /user/access, Method: GET, Secured: True, payload: user: <name>
      It will take the user name as input, and the auth service will return the roles and privileges assigned to it
    • /authenticate, Method: GET, Secured: True, payload: user: <name>, operation: <op>
      It will authenticate the user for the passed operation if it is accessible for their role
    • /healthz, Method: GET, Secured: True
      It will return the status of the service

    Database Service:

    We will need databases for our application to store the users, their roles, and the access privileges attached to those roles. The documents will also be stored in the database, initially without a watermark; it is a requirement that no document has a watermark at creation time. A document is created successfully only when the data inputs are valid and the database service returns a success status.

    We will be using two separate databases, one for each service that consumes them. This design is not strictly necessary; we follow it to respect the “Single Database per Service” rule of microservice architecture.

    APIs:

    • /get, Method: GET, Secured: True, payload: filters: []filter{"field-name": "value"}
      It will return the list of documents matching the passed filters
    • /update, Method: POST, Secured: True, payload: "Title": <id>, document: {"field": "value", …}
      It will update the document for the given title-id
    • /add, Method: POST, Secured: True, payload: document: {"field": "value", …}
      It will add the document and return the title-ID
    • /remove, Method: POST, Secured: True, payload: title: <id>
      It will remove the document entry for the passed title-id
    • /healthz, Method: GET, Secured: True
      It will return the status of the service

    Watermark Service:

    This is the main service, which performs the API calls to watermark a given document. Every time a user wants to watermark a document, they pass the TicketID in the watermark API request along with the appropriate Mark. The service internally calls the database Update API with the provided request and returns the status of the watermark process: initially “Started”, then after some time “InProgress”, and finally “Finished” if the call was valid, or “Error” if the request was not.

    APIs:

    • /get, Method: GET, Secured: True, payload: filters: []filter{"field-name": "value"}
      It will return the list of documents matching the passed filters
    • /status, Method: GET, Secured: True, payload: "Ticket": <id>
      It will return the watermark status of the document for the passed ticket-id
    • /addDocument, Method: POST, Secured: True, payload: document: {"field": "value", …}
      It will add the document and return the title-ID
    • /watermark, Method: POST, Secured: True, payload: title: <id>, mark: "string"
      It is the main watermark operation API, which accepts the mark string
    • /healthz, Method: GET, Secured: True
      It will return the status of the service

    Operations and Flow:

    The Watermark Service APIs are the only ones used by the user/actor to request a watermark or add a document. The Authentication and Database Service APIs are private and are called internally by the other services. The only URL accessible to the user is the API Gateway URL.

    1. The user accesses the API Gateway URL with the required user name, the ticket-id, and the mark to apply to the document
    2. The user does not need to know about the authentication or database services
    3. Once the user makes a request, the API Gateway accepts it and validates the request along with its payload
    4. A forwarding rule, which routes a specific request to a specific service, should be defined in the gateway. Once validated, the request is forwarded to a service according to that rule
    5. We will define a forwarding rule so that any watermark request is first forwarded to the authentication service, which authenticates the request, checks that the user is authorized, and returns the appropriate status code
    6. The authorization service looks up the requesting user, their roles, and their permissions in the user database and responds accordingly
    7. Once the request has been authorized, it is forwarded back to the actual watermark service
    8. The watermark service then performs the appropriate operation: putting the watermark on the document, adding a new document entry, or any other request
    9. The Get, Watermark, or AddDocument operation in the watermark service is performed by calling the database CRUD APIs, and the result is forwarded to the user
    10. If the request is AddDocument, the service returns the “TicketID”; if it is a watermark request, it returns the status of the operation

    Note:

    Each user has specific roles, based on which their access controls are identified. For the sake of simplicity, the roles are based on the type of document only, not on the specific name of the book or journal.

    Getting Started:

    Let’s start by creating a folder for our application in the $GOPATH. This will be the root folder containing our set of services.

    Project Layout:

    The project will follow the standard Golang project layout. If you want the full working code, please refer here

    • api: Stores the versions of the APIs swagger files and also the proto and pb files for the gRPC protobuf interface.
    • cmd: This will contain the entry point (main.go) files for all the services and also any other container images if any
    • docs: This will contain the documentation for the project
    • config: All the sample files or any specific configuration files should be stored here
    • deploy: This directory will contain the deployment files used to deploy the application
    • internal: This is the conventional internal package recognized by the Go compiler. It contains all the packages that need to stay private, importable only by its child directories and immediate parent directory. The packages in this directory are shared across the project
    • pkg: This directory will have the complete executing code of all the services in separate packages.
    • tests: It will have all the integration and E2E tests
    • vendor: This directory stores all the third-party dependencies locally so that the version doesn’t mismatch later

    We are going to use the Go kit framework for developing the set of services. The official Go kit examples of services are very good, though the documentation is not that great.

    Watermark Service:

    1. Under the Go kit framework, a service should always be represented by an interface.

    Create a package named watermark in the pkg folder. Create a new service.go file in that package. This file is the blueprint of our service.

    package watermark
    
    import (
    	"context"
    
    	"github.com/velotiotech/watermark-service/internal"
    )
    
    type Service interface {
    	// Get the list of all documents
    	Get(ctx context.Context, filters ...internal.Filter) ([]internal.Document, error)
    	Status(ctx context.Context, ticketID string) (internal.Status, error)
    	Watermark(ctx context.Context, ticketID, mark string) (int, error)
    	AddDocument(ctx context.Context, doc *internal.Document) (string, error)
    	ServiceStatus(ctx context.Context) (int, error)
    }

    2. As per the functions defined in the interface, we will need five endpoints to handle requests for the above methods. If you are wondering why we use the context package, please refer here. Contexts let microservices handle multiple concurrent requests by propagating deadlines and cancellation; we don’t lean on that heavily in this blog, but accepting a ctx as the first argument is the idiomatic way to write service methods.

    3. Implementing our service:

    package watermark
    
    import (
    	"context"
    	"net/http"
    	"os"
    
    	"github.com/velotiotech/watermark-service/internal"
    
    	"github.com/go-kit/kit/log"
    	"github.com/lithammer/shortuuid/v3"
    )
    
    type watermarkService struct{}
    
    func NewService() Service { return &watermarkService{} }
    
    func (w *watermarkService) Get(_ context.Context, filters ...internal.Filter) ([]internal.Document, error) {
    	// query the database using the filters and return the list of documents
    	// return error if the filter (key) is invalid and also return error if no item found
    	doc := internal.Document{
    		Content: "book",
    		Title:   "Harry Potter and Half Blood Prince",
    		Author:  "J.K. Rowling",
    		Topic:   "Fiction and Magic",
    	}
    	return []internal.Document{doc}, nil
    }
    
    func (w *watermarkService) Status(_ context.Context, ticketID string) (internal.Status, error) {
    	// query database using the ticketID and return the document info
    	// return err if the ticketID is invalid or no Document exists for that ticketID
    	return internal.InProgress, nil
    }
    
    func (w *watermarkService) Watermark(_ context.Context, ticketID, mark string) (int, error) {
    	// update the database entry with watermark field as non empty
    	// first check if the watermark status is not already in InProgress, Started or Finished state
    	// If yes, then return invalid request
    	// return error if no item found using the ticketID
    	return http.StatusOK, nil
    }
    
    func (w *watermarkService) AddDocument(_ context.Context, doc *internal.Document) (string, error) {
    	// add the document entry in the database by calling the database service
    	// return error if the doc is invalid and/or the database invalid entry error
    	newTicketID := shortuuid.New()
    	return newTicketID, nil
    }
    
    func (w *watermarkService) ServiceStatus(_ context.Context) (int, error) {
    	logger.Log("Checking the Service health...")
    	return http.StatusOK, nil
    }
    
    var logger log.Logger
    
    func init() {
    	logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
    	logger = log.With(logger, "ts", log.DefaultTimestampUTC)
    }

    We have defined a new type, watermarkService, an empty struct that implements the service interface defined above. This struct implementation is hidden from the rest of the world.

    NewService() acts as the constructor of our “object”. It is the only function available outside this package for instantiating the service.

    4. Now we will create the endpoints package, which contains two files. The first stores all the request and response types. The second, endpoints.go, holds the actual implementation: parsing the requests and calling the appropriate service function.

    – Create a file named reqJSONMap.go. In this file, we define all the request and response structs, such as GetRequest, GetResponse, StatusRequest, StatusResponse, etc. Add to these structs the fields we want as input in a request or as output in a response.

    package endpoints
    
    import "github.com/velotiotech/watermark-service/internal"
    
    type GetRequest struct {
    	Filters []internal.Filter `json:"filters,omitempty"`
    }
    
    type GetResponse struct {
    	Documents []internal.Document `json:"documents"`
    	Err       string              `json:"err,omitempty"`
    }
    
    type StatusRequest struct {
    	TicketID string `json:"ticketID"`
    }
    
    type StatusResponse struct {
    	Status internal.Status `json:"status"`
    	Err    string          `json:"err,omitempty"`
    }
    
    type WatermarkRequest struct {
    	TicketID string `json:"ticketID"`
    	Mark     string `json:"mark"`
    }
    
    type WatermarkResponse struct {
    	Code int    `json:"code"`
    	Err  string `json:"err"`
    }
    
    type AddDocumentRequest struct {
    	Document *internal.Document `json:"document"`
    }
    
    type AddDocumentResponse struct {
    	TicketID string `json:"ticketID"`
    	Err      string `json:"err,omitempty"`
    }
    
    type ServiceStatusRequest struct{}
    
    type ServiceStatusResponse struct {
    	Code int    `json:"status"`
    	Err  string `json:"err,omitempty"`
    }

    – Create a file named endpoints.go. This file will contain the actual calling of the service implemented functions.

    package endpoints
    
    import (
    	"context"
    	"errors"
    	"os"
    
	"github.com/velotiotech/watermark-service/internal"
	"github.com/velotiotech/watermark-service/pkg/watermark"
    
    	"github.com/go-kit/kit/endpoint"
    	"github.com/go-kit/kit/log"
    )
    
    type Set struct {
    	GetEndpoint           endpoint.Endpoint
    	AddDocumentEndpoint   endpoint.Endpoint
    	StatusEndpoint        endpoint.Endpoint
    	ServiceStatusEndpoint endpoint.Endpoint
    	WatermarkEndpoint     endpoint.Endpoint
    }
    
    func NewEndpointSet(svc watermark.Service) Set {
    	return Set{
    		GetEndpoint:           MakeGetEndpoint(svc),
    		AddDocumentEndpoint:   MakeAddDocumentEndpoint(svc),
    		StatusEndpoint:        MakeStatusEndpoint(svc),
    		ServiceStatusEndpoint: MakeServiceStatusEndpoint(svc),
    		WatermarkEndpoint:     MakeWatermarkEndpoint(svc),
    	}
    }
    
    func MakeGetEndpoint(svc watermark.Service) endpoint.Endpoint {
    	return func(ctx context.Context, request interface{}) (interface{}, error) {
    		req := request.(GetRequest)
    		docs, err := svc.Get(ctx, req.Filters...)
    		if err != nil {
    			return GetResponse{docs, err.Error()}, nil
    		}
    		return GetResponse{docs, ""}, nil
    	}
    }
    
    func MakeStatusEndpoint(svc watermark.Service) endpoint.Endpoint {
    	return func(ctx context.Context, request interface{}) (interface{}, error) {
    		req := request.(StatusRequest)
    		status, err := svc.Status(ctx, req.TicketID)
    		if err != nil {
    			return StatusResponse{Status: status, Err: err.Error()}, nil
    		}
    		return StatusResponse{Status: status, Err: ""}, nil
    	}
    }
    
    func MakeAddDocumentEndpoint(svc watermark.Service) endpoint.Endpoint {
    	return func(ctx context.Context, request interface{}) (interface{}, error) {
    		req := request.(AddDocumentRequest)
    		ticketID, err := svc.AddDocument(ctx, req.Document)
    		if err != nil {
    			return AddDocumentResponse{TicketID: ticketID, Err: err.Error()}, nil
    		}
    		return AddDocumentResponse{TicketID: ticketID, Err: ""}, nil
    	}
    }
    
    func MakeWatermarkEndpoint(svc watermark.Service) endpoint.Endpoint {
    	return func(ctx context.Context, request interface{}) (interface{}, error) {
    		req := request.(WatermarkRequest)
    		code, err := svc.Watermark(ctx, req.TicketID, req.Mark)
    		if err != nil {
    			return WatermarkResponse{Code: code, Err: err.Error()}, nil
    		}
    		return WatermarkResponse{Code: code, Err: ""}, nil
    	}
    }
    
    func MakeServiceStatusEndpoint(svc watermark.Service) endpoint.Endpoint {
    	return func(ctx context.Context, request interface{}) (interface{}, error) {
    		_ = request.(ServiceStatusRequest)
    		code, err := svc.ServiceStatus(ctx)
    		if err != nil {
    			return ServiceStatusResponse{Code: code, Err: err.Error()}, nil
    		}
    		return ServiceStatusResponse{Code: code, Err: ""}, nil
    	}
    }
    
    func (s *Set) Get(ctx context.Context, filters ...internal.Filter) ([]internal.Document, error) {
    	resp, err := s.GetEndpoint(ctx, GetRequest{Filters: filters})
    	if err != nil {
    		return []internal.Document{}, err
    	}
    	getResp := resp.(GetResponse)
    	if getResp.Err != "" {
    		return []internal.Document{}, errors.New(getResp.Err)
    	}
    	return getResp.Documents, nil
    }
    
func (s *Set) ServiceStatus(ctx context.Context) (int, error) {
	resp, err := s.ServiceStatusEndpoint(ctx, ServiceStatusRequest{})
	// Check the error before the type assertion: resp may be nil on error.
	if err != nil {
		return 0, err
	}
	svcStatusResp := resp.(ServiceStatusResponse)
	if svcStatusResp.Err != "" {
		return svcStatusResp.Code, errors.New(svcStatusResp.Err)
	}
	return svcStatusResp.Code, nil
}
    
    func (s *Set) AddDocument(ctx context.Context, doc *internal.Document) (string, error) {
    	resp, err := s.AddDocumentEndpoint(ctx, AddDocumentRequest{Document: doc})
    	if err != nil {
    		return "", err
    	}
    	adResp := resp.(AddDocumentResponse)
    	if adResp.Err != "" {
    		return "", errors.New(adResp.Err)
    	}
    	return adResp.TicketID, nil
    }
    
    func (s *Set) Status(ctx context.Context, ticketID string) (internal.Status, error) {
    	resp, err := s.StatusEndpoint(ctx, StatusRequest{TicketID: ticketID})
    	if err != nil {
    		return internal.Failed, err
    	}
    	stsResp := resp.(StatusResponse)
    	if stsResp.Err != "" {
    		return internal.Failed, errors.New(stsResp.Err)
    	}
    	return stsResp.Status, nil
    }
    
func (s *Set) Watermark(ctx context.Context, ticketID, mark string) (int, error) {
	resp, err := s.WatermarkEndpoint(ctx, WatermarkRequest{TicketID: ticketID, Mark: mark})
	// Check the error before the type assertion: resp may be nil on error.
	if err != nil {
		return 0, err
	}
	wmResp := resp.(WatermarkResponse)
	if wmResp.Err != "" {
		return wmResp.Code, errors.New(wmResp.Err)
	}
	return wmResp.Code, nil
}
    
    var logger log.Logger
    
    func init() {
    	logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
    	logger = log.With(logger, "ts", log.DefaultTimestampUTC)
    }

    In this file, we have the struct Set, which is the collection of all the endpoints, along with a constructor for it. The internal constructor functions, such as MakeGetEndpoint() and MakeStatusEndpoint(), return objects that implement Go kit’s generic endpoint.Endpoint interface.

    In order to expose the Get, Status, Watermark, ServiceStatus, and AddDocument APIs, we need to create endpoints for each of them. These functions handle the incoming requests and call the specific service methods.

    5. Adding the transport methods to expose the services. Our services will support HTTP, exposed as REST APIs, as well as gRPC with Protobuf.

    Create a separate transport package in the watermark directory. This package will hold all the handlers, decoders, and encoders for each transport mechanism.

    6. Create a file http.go: This file will have the transport functions and handlers for HTTP with a separate path as the API routes.

    package transport
    
    import (
    	"context"
    	"encoding/json"
    	"net/http"
    	"os"
    
    	"github.com/velotiotech/watermark-service/internal/util"
    	"github.com/velotiotech/watermark-service/pkg/watermark/endpoints"
    
    	"github.com/go-kit/kit/log"
    	httptransport "github.com/go-kit/kit/transport/http"
    )
    
    func NewHTTPHandler(ep endpoints.Set) http.Handler {
    	m := http.NewServeMux()
    
    	m.Handle("/healthz", httptransport.NewServer(
    		ep.ServiceStatusEndpoint,
    		decodeHTTPServiceStatusRequest,
    		encodeResponse,
    	))
    	m.Handle("/status", httptransport.NewServer(
    		ep.StatusEndpoint,
    		decodeHTTPStatusRequest,
    		encodeResponse,
    	))
    	m.Handle("/addDocument", httptransport.NewServer(
    		ep.AddDocumentEndpoint,
    		decodeHTTPAddDocumentRequest,
    		encodeResponse,
    	))
    	m.Handle("/get", httptransport.NewServer(
    		ep.GetEndpoint,
    		decodeHTTPGetRequest,
    		encodeResponse,
    	))
    	m.Handle("/watermark", httptransport.NewServer(
    		ep.WatermarkEndpoint,
    		decodeHTTPWatermarkRequest,
    		encodeResponse,
    	))
    
    	return m
    }
    
    func decodeHTTPGetRequest(_ context.Context, r *http.Request) (interface{}, error) {
    	var req endpoints.GetRequest
    	if r.ContentLength == 0 {
    		logger.Log("Get request with no body")
    		return req, nil
    	}
    	err := json.NewDecoder(r.Body).Decode(&req)
    	if err != nil {
    		return nil, err
    	}
    	return req, nil
    }
    
    func decodeHTTPStatusRequest(ctx context.Context, r *http.Request) (interface{}, error) {
    	var req endpoints.StatusRequest
    	err := json.NewDecoder(r.Body).Decode(&req)
    	if err != nil {
    		return nil, err
    	}
    	return req, nil
    }
    
    func decodeHTTPWatermarkRequest(_ context.Context, r *http.Request) (interface{}, error) {
    	var req endpoints.WatermarkRequest
    	err := json.NewDecoder(r.Body).Decode(&req)
    	if err != nil {
    		return nil, err
    	}
    	return req, nil
    }
    
    func decodeHTTPAddDocumentRequest(_ context.Context, r *http.Request) (interface{}, error) {
    	var req endpoints.AddDocumentRequest
    	err := json.NewDecoder(r.Body).Decode(&req)
    	if err != nil {
    		return nil, err
    	}
    	return req, nil
    }
    
    func decodeHTTPServiceStatusRequest(_ context.Context, _ *http.Request) (interface{}, error) {
    	var req endpoints.ServiceStatusRequest
    	return req, nil
    }
    
    func encodeResponse(ctx context.Context, w http.ResponseWriter, response interface{}) error {
    	if e, ok := response.(error); ok && e != nil {
    		encodeError(ctx, e, w)
    		return nil
    	}
    	return json.NewEncoder(w).Encode(response)
    }
    
    func encodeError(_ context.Context, err error, w http.ResponseWriter) {
    	w.Header().Set("Content-Type", "application/json; charset=utf-8")
    	switch err {
    	case util.ErrUnknown:
    		w.WriteHeader(http.StatusNotFound)
    	case util.ErrInvalidArgument:
    		w.WriteHeader(http.StatusBadRequest)
    	default:
    		w.WriteHeader(http.StatusInternalServerError)
    	}
    	json.NewEncoder(w).Encode(map[string]interface{}{
    		"error": err.Error(),
    	})
    }
    
    var logger log.Logger
    
    func init() {
    	logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
    	logger = log.With(logger, "ts", log.DefaultTimestampUTC)
    }

    This file maps the JSON payloads to their requests and responses. It contains the HTTP handler constructor, which registers each API route with its specific handler function (endpoint), plus the decoders and encoders for the requests and responses. The decoders and encoders exist simply to translate requests and responses into the desired form for processing. In our case, we convert them to and from the appropriate request and response structs using the JSON encoder and decoder.

    We have the generic encoder for the response output, which is a simple JSON encoder.

    7. Create another file in the same transport package named grpc.go. As the name suggests, it maps the Protobuf payloads to their requests and responses. We create a gRPC handler constructor, which builds the set of gRPC servers and registers each endpoint with the decoder for its request and the encoder for its response.

    package transport
    
    import (
    	"context"
    
    	"github.com/velotiotech/watermark-service/api/v1/pb/watermark"
    
    	"github.com/velotiotech/watermark-service/internal"
    	"github.com/velotiotech/watermark-service/pkg/watermark/endpoints"
    
    	grpctransport "github.com/go-kit/kit/transport/grpc"
    )
    
    type grpcServer struct {
    	get           grpctransport.Handler
    	status        grpctransport.Handler
    	addDocument   grpctransport.Handler
    	watermark     grpctransport.Handler
    	serviceStatus grpctransport.Handler
    }
    
func NewGRPCServer(ep endpoints.Set) watermark.WatermarkServer {
	// The third argument of grpctransport.NewServer encodes the endpoint
	// response back into the protobuf reply, so it must be an encoder.
	return &grpcServer{
		get: grpctransport.NewServer(
			ep.GetEndpoint,
			decodeGRPCGetRequest,
			encodeGRPCGetResponse,
		),
		status: grpctransport.NewServer(
			ep.StatusEndpoint,
			decodeGRPCStatusRequest,
			encodeGRPCStatusResponse,
		),
		addDocument: grpctransport.NewServer(
			ep.AddDocumentEndpoint,
			decodeGRPCAddDocumentRequest,
			encodeGRPCAddDocumentResponse,
		),
		watermark: grpctransport.NewServer(
			ep.WatermarkEndpoint,
			decodeGRPCWatermarkRequest,
			encodeGRPCWatermarkResponse,
		),
		serviceStatus: grpctransport.NewServer(
			ep.ServiceStatusEndpoint,
			decodeGRPCServiceStatusRequest,
			encodeGRPCServiceStatusResponse,
		),
	}
}
    
    func (g *grpcServer) Get(ctx context.Context, r *watermark.GetRequest) (*watermark.GetReply, error) {
    	_, rep, err := g.get.ServeGRPC(ctx, r)
    	if err != nil {
    		return nil, err
    	}
    	return rep.(*watermark.GetReply), nil
    }
    
func (g *grpcServer) ServiceStatus(ctx context.Context, r *watermark.ServiceStatusRequest) (*watermark.ServiceStatusReply, error) {
	_, rep, err := g.serviceStatus.ServeGRPC(ctx, r)
	if err != nil {
		return nil, err
	}
	return rep.(*watermark.ServiceStatusReply), nil
}
    
    func (g *grpcServer) AddDocument(ctx context.Context, r *watermark.AddDocumentRequest) (*watermark.AddDocumentReply, error) {
    	_, rep, err := g.addDocument.ServeGRPC(ctx, r)
    	if err != nil {
    		return nil, err
    	}
    	return rep.(*watermark.AddDocumentReply), nil
    }
    
    func (g *grpcServer) Status(ctx context.Context, r *watermark.StatusRequest) (*watermark.StatusReply, error) {
    	_, rep, err := g.status.ServeGRPC(ctx, r)
    	if err != nil {
    		return nil, err
    	}
    	return rep.(*watermark.StatusReply), nil
    }
    
    func (g *grpcServer) Watermark(ctx context.Context, r *watermark.WatermarkRequest) (*watermark.WatermarkReply, error) {
    	_, rep, err := g.watermark.ServeGRPC(ctx, r)
    	if err != nil {
    		return nil, err
    	}
    	return rep.(*watermark.WatermarkReply), nil
    }
    
    func decodeGRPCGetRequest(_ context.Context, grpcReq interface{}) (interface{}, error) {
    	req := grpcReq.(*watermark.GetRequest)
    	var filters []internal.Filter
    	for _, f := range req.Filters {
    		filters = append(filters, internal.Filter{Key: f.Key, Value: f.Value})
    	}
    	return endpoints.GetRequest{Filters: filters}, nil
    }
    
    func decodeGRPCStatusRequest(_ context.Context, grpcReq interface{}) (interface{}, error) {
    	req := grpcReq.(*watermark.StatusRequest)
    	return endpoints.StatusRequest{TicketID: req.TicketID}, nil
    }
    
    func decodeGRPCWatermarkRequest(_ context.Context, grpcReq interface{}) (interface{}, error) {
    	req := grpcReq.(*watermark.WatermarkRequest)
    	return endpoints.WatermarkRequest{TicketID: req.TicketID, Mark: req.Mark}, nil
    }
    
    func decodeGRPCAddDocumentRequest(_ context.Context, grpcReq interface{}) (interface{}, error) {
    	req := grpcReq.(*watermark.AddDocumentRequest)
    	doc := &internal.Document{
    		Content:   req.Document.Content,
    		Title:     req.Document.Title,
    		Author:    req.Document.Author,
    		Topic:     req.Document.Topic,
    		Watermark: req.Document.Watermark,
    	}
    	return endpoints.AddDocumentRequest{Document: doc}, nil
    }
    
    func decodeGRPCServiceStatusRequest(_ context.Context, grpcReq interface{}) (interface{}, error) {
    	return endpoints.ServiceStatusRequest{}, nil
    }
    
// The server-side response encoders below convert the endpoint response
// structs into the generated protobuf reply types.

func encodeGRPCGetResponse(_ context.Context, response interface{}) (interface{}, error) {
	resp := response.(endpoints.GetResponse)
	var docs []*watermark.Document
	for _, d := range resp.Documents {
		docs = append(docs, &watermark.Document{
			Content:   d.Content,
			Title:     d.Title,
			Author:    d.Author,
			Topic:     d.Topic,
			Watermark: d.Watermark,
		})
	}
	return &watermark.GetReply{Documents: docs, Err: resp.Err}, nil
}

func encodeGRPCStatusResponse(_ context.Context, response interface{}) (interface{}, error) {
	resp := response.(endpoints.StatusResponse)
	return &watermark.StatusReply{Status: watermark.StatusReply_Status(resp.Status), Err: resp.Err}, nil
}

func encodeGRPCWatermarkResponse(_ context.Context, response interface{}) (interface{}, error) {
	resp := response.(endpoints.WatermarkResponse)
	return &watermark.WatermarkReply{Code: int64(resp.Code), Err: resp.Err}, nil
}

func encodeGRPCAddDocumentResponse(_ context.Context, response interface{}) (interface{}, error) {
	resp := response.(endpoints.AddDocumentResponse)
	return &watermark.AddDocumentReply{TicketID: resp.TicketID, Err: resp.Err}, nil
}

func encodeGRPCServiceStatusResponse(_ context.Context, response interface{}) (interface{}, error) {
	resp := response.(endpoints.ServiceStatusResponse)
	return &watermark.ServiceStatusReply{Code: int64(resp.Code), Err: resp.Err}, nil
}

    – Before moving on to the implementation, we have to create a proto file that defines our service interface and the request/response structs, so that the protobuf (.pb) files can be generated and used as the interface over which services communicate.

    – Create a pb package in the api/v1 package path. Create a new file, watermarksvc.proto. First, we define our service interface, which represents the remote functions to be called by the client. Refer to this for the syntax and a deeper understanding of Protobuf.

    We mirror the Go service interface as a service definition in the proto file. We also recreate the request and response structs in the proto file so that they can be understood by the RPCs defined in the service.

    syntax = "proto3";
    
    package pb;
    
    // Matches the import path used in grpc.go.
    option go_package = "github.com/velotiotech/watermark-service/api/v1/pb/watermark";
    
    service Watermark {
        rpc Get (GetRequest) returns (GetReply) {}
    
        rpc Watermark (WatermarkRequest) returns (WatermarkReply) {}
    
        rpc Status (StatusRequest) returns (StatusReply) {}
    
        rpc AddDocument (AddDocumentRequest) returns (AddDocumentReply) {}
    
        rpc ServiceStatus (ServiceStatusRequest) returns (ServiceStatusReply) {}
    }
    
    message Document {
        string content = 1;
        string title = 2;
        string author = 3;
        string topic = 4;
        string watermark = 5;
    }
    
    message GetRequest {
        message Filters {
            string key = 1;
            string value = 2;
        }
        repeated Filters filters = 1;
    }
    
    message GetReply {
        repeated Document documents = 1;
        string Err = 2;
    }
    
    message StatusRequest {
        string ticketID = 1;
    }
    
    message StatusReply {
        enum Status {
            PENDING = 0;
            STARTED = 1;
            IN_PROGRESS = 2;
            FINISHED = 3;
            FAILED = 4;
        }
        Status status = 1;
        string Err = 2;
    }
    
    message WatermarkRequest {
        string ticketID = 1;
        string mark = 2;
    }
    
    message WatermarkReply {
        int64 code = 1;
        string err = 2;
    }
    
    message AddDocumentRequest {
        Document document = 1;
    }
    
    message AddDocumentReply {
        string ticketID = 1;
        string err = 2;
    }
    
    message ServiceStatusRequest {}
    
    message ServiceStatusReply {
        int64 code = 1;
        string err = 2;
    }

    Note: Creating the proto files and generating the pb files using protoc is outside the scope of this blog. We have assumed that you already know how to create a proto file and generate pb files from it. If not, please refer to the protobuf and protoc-gen-go documentation.

    I have also created a script to generate the pb files; it just needs the name (with path) of the proto file.

    #!/usr/bin/env sh
    
    # Install proto3 from source
    #  brew install autoconf automake libtool
    #  git clone https://github.com/google/protobuf
    #  ./autogen.sh ; ./configure ; make ; make install
    #
    # Update protoc Go bindings via
    #  go get -u github.com/golang/protobuf/{proto,protoc-gen-go}
    #
    # See also
    #  https://github.com/grpc/grpc-go/tree/master/examples
    
    REPO_ROOT="${REPO_ROOT:-$(cd "$(dirname "$0")/../.." && pwd)}"
    PB_PATH="${REPO_ROOT}/api/v1/pb"
    PROTO_FILE=${1:-"watermarksvc.proto"}
    
    
    echo "Generating pb files for ${PROTO_FILE} service"
    protoc -I="${PB_PATH}"  "${PB_PATH}/${PROTO_FILE}" --go_out=plugins=grpc:"${PB_PATH}"

    8. Now, once the pb file is generated in the api/v1/pb/watermark package, we will create a new struct grpcServer, grouping the gRPC handlers for all the endpoints. This struct should implement pb.WatermarkServer, the server interface generated from the proto definition.

    To implement this interface, we define functions such as func (g *grpcServer) Get(ctx context.Context, r *pb.GetRequest) (*pb.GetReply, error). Each function takes the request param, runs the corresponding handler's ServeGRPC() method, and returns the response. The same pattern applies to the rest of the RPCs.

    These functions are the actual remote procedures called by clients.

    We also need decode and encode functions that map between the protobuf structs and the endpoint request/response structs. For example, decodeGRPCGetRequest asserts grpcReq to *pb.GetRequest and uses its fields to fill a new endpoints.GetRequest{}. Responses go the other way: an encode function maps the endpoint response struct back onto the protobuf reply. Similar functions are implemented for the other requests and responses.

    package transport
    
    import (
    	"context"
    
    	"github.com/velotiotech/watermark-service/api/v1/pb/watermark"
    
    	"github.com/velotiotech/watermark-service/internal"
    	"github.com/velotiotech/watermark-service/pkg/watermark/endpoints"
    
    	grpctransport "github.com/go-kit/kit/transport/grpc"
    )
    
    type grpcServer struct {
    	get           grpctransport.Handler
    	status        grpctransport.Handler
    	addDocument   grpctransport.Handler
    	watermark     grpctransport.Handler
    	serviceStatus grpctransport.Handler
    }
    
    func NewGRPCServer(ep endpoints.Set) watermark.WatermarkServer {
    	return &grpcServer{
    		get: grpctransport.NewServer(
    			ep.GetEndpoint,
    			decodeGRPCGetRequest,
    			encodeGRPCGetResponse,
    		),
    		status: grpctransport.NewServer(
    			ep.StatusEndpoint,
    			decodeGRPCStatusRequest,
    			encodeGRPCStatusResponse,
    		),
    		addDocument: grpctransport.NewServer(
    			ep.AddDocumentEndpoint,
    			decodeGRPCAddDocumentRequest,
    			encodeGRPCAddDocumentResponse,
    		),
    		watermark: grpctransport.NewServer(
    			ep.WatermarkEndpoint,
    			decodeGRPCWatermarkRequest,
    			encodeGRPCWatermarkResponse,
    		),
    		serviceStatus: grpctransport.NewServer(
    			ep.ServiceStatusEndpoint,
    			decodeGRPCServiceStatusRequest,
    			encodeGRPCServiceStatusResponse,
    		),
    	}
    }
    
    func (g *grpcServer) Get(ctx context.Context, r *watermark.GetRequest) (*watermark.GetReply, error) {
    	_, rep, err := g.get.ServeGRPC(ctx, r)
    	if err != nil {
    		return nil, err
    	}
    	return rep.(*watermark.GetReply), nil
    }
    
    func (g *grpcServer) ServiceStatus(ctx context.Context, r *watermark.ServiceStatusRequest) (*watermark.ServiceStatusReply, error) {
    	// Dispatch to the serviceStatus handler (not get).
    	_, rep, err := g.serviceStatus.ServeGRPC(ctx, r)
    	if err != nil {
    		return nil, err
    	}
    	return rep.(*watermark.ServiceStatusReply), nil
    }
    
    func (g *grpcServer) AddDocument(ctx context.Context, r *watermark.AddDocumentRequest) (*watermark.AddDocumentReply, error) {
    	_, rep, err := g.addDocument.ServeGRPC(ctx, r)
    	if err != nil {
    		return nil, err
    	}
    	return rep.(*watermark.AddDocumentReply), nil
    }
    
    func (g *grpcServer) Status(ctx context.Context, r *watermark.StatusRequest) (*watermark.StatusReply, error) {
    	_, rep, err := g.status.ServeGRPC(ctx, r)
    	if err != nil {
    		return nil, err
    	}
    	return rep.(*watermark.StatusReply), nil
    }
    
    func (g *grpcServer) Watermark(ctx context.Context, r *watermark.WatermarkRequest) (*watermark.WatermarkReply, error) {
    	_, rep, err := g.watermark.ServeGRPC(ctx, r)
    	if err != nil {
    		return nil, err
    	}
    	return rep.(*watermark.WatermarkReply), nil
    }
    
    // The decode functions map the protobuf request structs received over the
    // wire onto the endpoint request structs.
    func decodeGRPCGetRequest(_ context.Context, grpcReq interface{}) (interface{}, error) {
    	req := grpcReq.(*watermark.GetRequest)
    	var filters []internal.Filter
    	for _, f := range req.Filters {
    		filters = append(filters, internal.Filter{Key: f.Key, Value: f.Value})
    	}
    	return endpoints.GetRequest{Filters: filters}, nil
    }
    
    func decodeGRPCStatusRequest(_ context.Context, grpcReq interface{}) (interface{}, error) {
    	req := grpcReq.(*watermark.StatusRequest)
    	return endpoints.StatusRequest{TicketID: req.TicketID}, nil
    }
    
    func decodeGRPCWatermarkRequest(_ context.Context, grpcReq interface{}) (interface{}, error) {
    	req := grpcReq.(*watermark.WatermarkRequest)
    	return endpoints.WatermarkRequest{TicketID: req.TicketID, Mark: req.Mark}, nil
    }
    
    func decodeGRPCAddDocumentRequest(_ context.Context, grpcReq interface{}) (interface{}, error) {
    	req := grpcReq.(*watermark.AddDocumentRequest)
    	doc := &internal.Document{
    		Content:   req.Document.Content,
    		Title:     req.Document.Title,
    		Author:    req.Document.Author,
    		Topic:     req.Document.Topic,
    		Watermark: req.Document.Watermark,
    	}
    	return endpoints.AddDocumentRequest{Document: doc}, nil
    }
    
    func decodeGRPCServiceStatusRequest(_ context.Context, grpcReq interface{}) (interface{}, error) {
    	return endpoints.ServiceStatusRequest{}, nil
    }
    
    // The encode functions map the endpoint response structs back onto the
    // protobuf reply structs that are sent over the wire.
    func encodeGRPCGetResponse(_ context.Context, response interface{}) (interface{}, error) {
    	resp := response.(endpoints.GetResponse)
    	var docs []*watermark.Document
    	for _, d := range resp.Documents {
    		docs = append(docs, &watermark.Document{
    			Content:   d.Content,
    			Title:     d.Title,
    			Author:    d.Author,
    			Topic:     d.Topic,
    			Watermark: d.Watermark,
    		})
    	}
    	return &watermark.GetReply{Documents: docs, Err: resp.Err}, nil
    }
    
    func encodeGRPCStatusResponse(_ context.Context, response interface{}) (interface{}, error) {
    	resp := response.(endpoints.StatusResponse)
    	return &watermark.StatusReply{Status: watermark.StatusReply_Status(resp.Status), Err: resp.Err}, nil
    }
    
    func encodeGRPCWatermarkResponse(_ context.Context, response interface{}) (interface{}, error) {
    	resp := response.(endpoints.WatermarkResponse)
    	return &watermark.WatermarkReply{Code: int64(resp.Code), Err: resp.Err}, nil
    }
    
    func encodeGRPCAddDocumentResponse(_ context.Context, response interface{}) (interface{}, error) {
    	resp := response.(endpoints.AddDocumentResponse)
    	return &watermark.AddDocumentReply{TicketID: resp.TicketID, Err: resp.Err}, nil
    }
    
    func encodeGRPCServiceStatusResponse(_ context.Context, response interface{}) (interface{}, error) {
    	resp := response.(endpoints.ServiceStatusResponse)
    	return &watermark.ServiceStatusReply{Code: int64(resp.Code), Err: resp.Err}, nil
    }

    9. Finally, we just have to create the entry-point files (main) in cmd for each service. We have already mapped the HTTP routes to the endpoints (which call the service functions) and mapped the proto service server to the endpoints via the ServeGRPC() handlers; now we call the HTTP and gRPC server constructors here and start the servers.

    Create a package watermark in the cmd directory and add a file watermark.go, which will hold the code to start and stop the HTTP and gRPC servers for the service.

    package main
    
    import (
    	"fmt"
    	"net"
    	"net/http"
    	"os"
    	"os/signal"
    	"syscall"
    
    	pb "github.com/velotiotech/watermark-service/api/v1/pb/watermark"
    	"github.com/velotiotech/watermark-service/pkg/watermark"
    	"github.com/velotiotech/watermark-service/pkg/watermark/endpoints"
    	"github.com/velotiotech/watermark-service/pkg/watermark/transport"
    
    	"github.com/go-kit/kit/log"
    	kitgrpc "github.com/go-kit/kit/transport/grpc"
    	"github.com/oklog/oklog/pkg/group"
    	"google.golang.org/grpc"
    )
    
    const (
    	defaultHTTPPort = "8081"
    	defaultGRPCPort = "8082"
    )
    
    func main() {
    	var (
    		logger   log.Logger
    		httpAddr = net.JoinHostPort("localhost", envString("HTTP_PORT", defaultHTTPPort))
    		grpcAddr = net.JoinHostPort("localhost", envString("GRPC_PORT", defaultGRPCPort))
    	)
    
    	logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
    	logger = log.With(logger, "ts", log.DefaultTimestampUTC)
    
    	var (
    		service     = watermark.NewService()
    		eps         = endpoints.NewEndpointSet(service)
    		httpHandler = transport.NewHTTPHandler(eps)
    		grpcServer  = transport.NewGRPCServer(eps)
    	)
    
    	var g group.Group
    	{
    		// The HTTP listener mounts the Go kit HTTP handler we created.
    		httpListener, err := net.Listen("tcp", httpAddr)
    		if err != nil {
    			logger.Log("transport", "HTTP", "during", "Listen", "err", err)
    			os.Exit(1)
    		}
    		g.Add(func() error {
    			logger.Log("transport", "HTTP", "addr", httpAddr)
    			return http.Serve(httpListener, httpHandler)
    		}, func(error) {
    			httpListener.Close()
    		})
    	}
    	{
    		// The gRPC listener mounts the Go kit gRPC server we created.
    		grpcListener, err := net.Listen("tcp", grpcAddr)
    		if err != nil {
    			logger.Log("transport", "gRPC", "during", "Listen", "err", err)
    			os.Exit(1)
    		}
    		g.Add(func() error {
    			logger.Log("transport", "gRPC", "addr", grpcAddr)
    			// We add the Go kit gRPC interceptor to our gRPC server; it propagates
    			// request-scoped metadata that middlewares (e.g., tracing) rely on.
    			baseServer := grpc.NewServer(grpc.UnaryInterceptor(kitgrpc.Interceptor))
    			pb.RegisterWatermarkServer(baseServer, grpcServer)
    			return baseServer.Serve(grpcListener)
    		}, func(error) {
    			grpcListener.Close()
    		})
    	}
    	{
    		// This function just sits and waits for ctrl-C.
    		cancelInterrupt := make(chan struct{})
    		g.Add(func() error {
    			c := make(chan os.Signal, 1)
    			signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
    			select {
    			case sig := <-c:
    				return fmt.Errorf("received signal %s", sig)
    			case <-cancelInterrupt:
    				return nil
    			}
    		}, func(error) {
    			close(cancelInterrupt)
    		})
    	}
    	logger.Log("exit", g.Run())
    }
    
    func envString(env, fallback string) string {
    	e := os.Getenv(env)
    	if e == "" {
    		return fallback
    	}
    	return e
    }

    Let’s walk through the above code. First, we use fixed default ports for the servers to listen on: 8081 for the HTTP server and 8082 for the gRPC server. Then, in the stubs below, we create the service, the endpoints of the service backend, and the HTTP and gRPC servers.

    service = watermark.NewService()
    eps = endpoints.NewEndpointSet(service)
    grpcServer = transport.NewGRPCServer(eps)
    httpHandler = transport.NewHTTPHandler(eps)

    Now the next step is interesting. We create a variable of type group.Group from the oklog package. If you are new to this term, please refer here. Group helps you elegantly manage a group of goroutines. We create three goroutines: one for the HTTP server, one for the gRPC server, and the last one for watching for cancel interrupts. Just like this:

    g.Add(func() error {
        logger.Log("transport", "HTTP", "addr", httpAddr)
        return http.Serve(httpListener, httpHandler)
    }, func(error) {
        httpListener.Close()
    })

    Similarly, we will start a gRPC server and a cancel interrupt watcher.
    Great!! We are done here. Now, let’s run the service.

    go run ./cmd/watermark/watermark.go

    The server has started locally. Now, just open Postman or run curl against one of the endpoints. See below:
    We hit the HTTP server to check the service status:

    ~ curl http://localhost:8081/healthz
    {"status":200}

    We have successfully created a service and exercised its endpoints.

    Further:

    I always like to round out a project with the supporting pieces that surround the code: a proper README, .gitignore, .dockerignore, Makefile, Dockerfiles, golangci-lint config files, CI/CD config files, etc.

    I have created a separate Dockerfile for each of the three services in path /images/.

    I have created a multi-stage Dockerfile that builds the binary of the service and then runs it. We copy the appropriate code directories into the image, build the binary in the first stage, then create a new image in the same file and copy the binary into it from the previous stage. The Dockerfiles for the other services are created similarly.

    In the Dockerfile, we have given the CMD as go run watermark. This command is the entry point of the container.
    I have also created a Makefile which has two main targets: build-image and build-push. The first builds the image, and the second pushes it.
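    To make the multi-stage description above concrete, here is a minimal sketch of what such a Dockerfile could look like. It is illustrative only: the Go version, base images, and paths are assumptions, and this sketch runs the compiled binary directly rather than `go run`.

    ```dockerfile
    # Stage 1: build the service binary.
    FROM golang:1.17 AS builder
    WORKDIR /src
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    RUN CGO_ENABLED=0 go build -o /bin/watermark ./cmd/watermark

    # Stage 2: copy only the binary into a small runtime image.
    FROM alpine:3.15
    COPY --from=builder /bin/watermark /usr/local/bin/watermark
    EXPOSE 8081 8082
    CMD ["watermark"]
    ```

    The point of the two stages is that the final image carries only the binary, not the Go toolchain or the source tree.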

    Note: I am keeping this blog concise, as it is difficult to cover everything here. The code in the repo I shared at the beginning covers most of the important concepts around the services, and I am still committing improvements and features.

    Let’s see how we can deploy:

    We will see how to deploy all these services on a container orchestration platform such as Kubernetes. This assumes you have at least a beginner’s understanding of Kubernetes.

    In the deploy dir, create a sample deployment having three containers: auth, watermark, and database. Since the entry-point command for each container is already defined in its Dockerfile, we don’t need to pass any args or cmd in the deployment.

    We also need a Service to route external request traffic to the pods, either behind a LoadBalancer or a NodePort type service. To keep things running for now, we can create a NodePort service to expose the watermark-service.
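    As a sketch of the deployment and NodePort service described above (the names, images, labels, and ports here are illustrative assumptions, not the repo's actual manifests):

    ```yaml
    # Illustrative Deployment with the three containers described above.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: watermark-project
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: watermark-project
      template:
        metadata:
          labels:
            app: watermark-project
        spec:
          containers:
            - name: watermark
              image: example/watermark-service:latest
              ports:
                - containerPort: 8081
            - name: auth
              image: example/auth-service:latest
            - name: database
              image: example/database-service:latest
    ---
    # NodePort service exposing the watermark HTTP port outside the cluster.
    apiVersion: v1
    kind: Service
    metadata:
      name: watermark-service
    spec:
      type: NodePort
      selector:
        app: watermark-project
      ports:
        - port: 8081
          targetPort: 8081
          nodePort: 30081
    ```

    With this applied, the service would be reachable on port 30081 of any cluster node.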

    Another important and very interesting part is deploying the API Gateway. This requires at least some knowledge of a cloud provider’s stack. I have used the Azure stack to deploy an API gateway using the resource called “API Management” in the Azure portal. Refer to the rules config files for the Azure APIM API gateway:

    Further, only a proper CI/CD setup remains, which is one of the most essential parts of a project after development.
    I would definitely like to discuss all the above deployment-related topics in more detail, but that is out of the scope of this blog. Maybe I will cover them in another post.

    Wrapping up:

    We have learned how to build a complete project with three microservices in Golang using one of the best distributed-system development frameworks: Go kit. We have also used PostgreSQL through GORM, an ORM used heavily in the Go community.
    We did not stop at development; we also covered, at a high level, the deployment side of the project’s lifecycle: what to deploy, how, and where.

    We created one microservice completely from scratch. Go kit makes it very simple to express the relationships between endpoints, service implementations, and the communication/transport mechanisms. Now, go and try to create the other services from the problem statement.

  • Building Your First AWS Serverless Application? Here’s Everything You Need to Know

    A serverless architecture is a way to build and run applications, services, or microservices without the need to manage infrastructure. Your application still runs on servers, but all the server management is done by AWS: we no longer need to provision, scale, or maintain servers to run our applications, databases, and storage systems. This lets developers focus on building applications rather than operating the infrastructure underneath them.

    Why Serverless

    1. More focus on development rather than managing servers.
    2. Cost Effective.
    3. Application which scales automatically.
    4. Quick application setup.

    Services for Serverless

    Multiple cloud vendors provide services for implementing a serverless architecture, though here we will explore the services from AWS. The following are the services we can use, depending on the application’s requirements.

    1. Lambda: It is used to write business logic, schedulers, and functions.
    2. S3: It is mostly used for storing objects, but it can also host web apps. You can host a static website on S3.
    3. API Gateway: It is used for creating, publishing, maintaining, monitoring, and securing REST and WebSocket APIs at any scale.
    4. Cognito: It provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a username and password or through third parties such as Facebook, Amazon, or Google.
    5. DynamoDB: It is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.

    Three-tier Serverless Architecture

    So, let’s take a use case in which you want to develop a three-tier serverless application. The three-tier architecture is a popular pattern for user-facing applications. The tiers that comprise the architecture are the presentation tier, the logic tier, and the data tier. The presentation tier is the component users directly interact with: a web page or mobile app UI. The logic tier contains the code that translates user actions at the presentation tier into the functionality that drives the application’s behaviour. The data tier consists of your storage media (databases, file systems, object stores) holding the data relevant to the application. The figure shows a simple three-tier application.

     Figure: Simple Three-Tier Architectural Pattern

    Presentation Tier

    The presentation tier represents the view part of the application. Here you can use S3 to host a static website. On a static website, individual web pages contain static content, and they may also contain client-side scripting.

    The following is a quick procedure to configure an Amazon S3 bucket for static website hosting in the S3 console.

    To configure an S3 bucket for static website hosting

    1. Log in to the AWS Management Console and open the Amazon S3 console.

    2. In the Bucket name list, choose the name of the bucket that you want to enable static website hosting for.

    3. Choose Properties.

    4. Choose Static Website Hosting.

    Once you enable your bucket for static website hosting, browsers can access all of your content through the Amazon S3 website endpoint for your bucket.

    5. Choose Use this bucket to host a website.

    A. For Index Document, type the name of your index document, which is typically named index.html. When you configure an S3 bucket for website hosting, you must specify an index document, which S3 returns when requests are made to the root domain or any of the subfolders.

    B. (Optional) For 4XX errors, you can provide your own custom error document that gives additional guidance to your users. Type the name of the file that contains the custom error document; if an error occurs, S3 returns that document.

    C. (Optional) If you want to set advanced redirection rules, you have to use XML to describe them in the Edit redirection rules text box.
    E.g.

    <RoutingRules>
        <RoutingRule>
            <Condition>
                <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
            </Condition>
            <Redirect>
                <HostName>mywebsite.com</HostName>
                <ReplaceKeyPrefixWith>notfound/</ReplaceKeyPrefixWith>
            </Redirect>
        </RoutingRule>
    </RoutingRules>

    6. Choose Save.

    7. Add a bucket policy to the website bucket that grants everyone access to the objects in the bucket. When you configure an S3 bucket as a website, you must make the objects you want to serve publicly readable. To do so, you write a bucket policy that grants everyone the s3:GetObject permission. The following bucket policy grants everyone access to the objects in the example-bucket bucket.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": [
                    "s3:GetObject"
                ],
                "Resource": [
                    "arn:aws:s3:::example-bucket/*"
                ]
            }
        ]
    }

    Note: If you choose Disable Website Hosting, S3 removes the website configuration from the bucket, so the bucket is no longer accessible from the website endpoint; it remains available at the REST endpoint.
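    The public-read policy from step 7 can also be built programmatically for any bucket. The sketch below only constructs the policy document; the bucket name is a placeholder, and the commented-out boto3 call (`put_bucket_policy`, which requires AWS credentials) is how you would apply it.

    ```python
    import json

    def public_read_policy(bucket_name):
        """Build a bucket policy granting everyone s3:GetObject on the bucket's objects."""
        return {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "PublicReadGetObject",
                    "Effect": "Allow",
                    "Principal": "*",
                    "Action": ["s3:GetObject"],
                    "Resource": ["arn:aws:s3:::%s/*" % bucket_name],
                }
            ],
        }

    policy = public_read_policy("example-bucket")
    print(json.dumps(policy, indent=4))
    # To apply it with boto3 (needs AWS credentials):
    #   boto3.client("s3").put_bucket_policy(
    #       Bucket="example-bucket", Policy=json.dumps(policy))
    ```

    Generating the document this way keeps the bucket ARN and the policy JSON in sync when you script the setup.
    
    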

    Logic Tier

    The logic tier represents the brains of the application. This is where the two core serverless services, API Gateway and Lambda, come in to form your logic tier, and it is what makes this pattern so powerful. Together, these two services let you build a serverless production application that is highly scalable, available, and secure. Your application could effectively use many servers, yet by leveraging this pattern you do not have to manage a single one. In addition, by using these managed services together, you get the following benefits:

    1. No operating system to choose, secure, or manage.
    2. No servers to right-size or monitor.
    3. No cost risk from over-provisioning.
    4. No performance risk from under-provisioning.

    API Gateway

    API Gateway is a fully managed service for defining, deploying, and maintaining APIs. Anyone can integrate with your APIs using standard HTTPS requests. Beyond that, it has specific features and qualities that make it a strong front for your logic tier.

    Integration with Lambda

    API Gateway gives your application a simple way to use AWS Lambda directly over HTTPS. API Gateway forms the bridge that connects your presentation tier with the functions you write in Lambda. After you define the client/server relationship using your API, the contents of the client’s HTTPS requests are passed to the Lambda function for execution. The contents include request metadata, request headers, and the request body.

    API Performance Across the Globe

    Each deployment of API Gateway includes an Amazon CloudFront distribution under the covers. Amazon CloudFront is a content delivery web service that uses Amazon’s global network of edge locations as connection points for clients integrating with your API. This helps drive down the total response-time latency of your API. Through its use of multiple edge locations across the world, Amazon CloudFront also gives you capabilities to combat distributed denial of service (DDoS) attack scenarios.

    You can improve the performance of specific API requests by using API Gateway to store responses in an optional in-memory cache. This not only provides performance benefits for repeated API requests but also reduces backend executions, which can reduce your overall cost.

    Let’s dive into each step

    1. Create a Lambda Function
    Log in to the AWS Console, head over to the Lambda service, and click “Create a function”.

    A. Choose first option “Author from scratch”
    B. Enter Function Name
    C. Select Runtime e.g. Python 2.7
    D. Click on “Create Function”

    Once your function is ready, you can see that a basic function has been generated in the language you chose.
    E.g.

    import json
    
    def lambda_handler(event, context):
        # TODO implement
        return {
            'statusCode': 200,
            'body': json.dumps('Hello from Lambda!')
        }

    2. Testing Lambda Function

    Click on the “Test” button at the top right corner, where we need to configure a test event. As we are not sending any events, just give the event a name, keep the “Hello World” template as it is, and click “Create”.

    Now, when you hit the “Test” button again, it runs the function we created earlier and returns the configured value.
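    You can also exercise the same generated handler locally before touching the console’s test screen; since the stub ignores its event and context arguments, empty placeholders suffice. (The handler below is the console-generated stub repeated for a self-contained example.)

    ```python
    import json

    def lambda_handler(event, context):
        # The same stub the Lambda console generates.
        return {
            'statusCode': 200,
            'body': json.dumps('Hello from Lambda!')
        }

    # Invoke the handler directly, just as the "Test" button does.
    result = lambda_handler({}, None)
    print(result['statusCode'])  # 200
    print(json.loads(result['body']))  # Hello from Lambda!
    ```

    This is a handy habit for real functions too: keeping the handler a plain Python function makes it trivially unit-testable.
    
    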

    Create & Configure API Gateway connecting to Lambda

    We are done with creating the Lambda function, but how do we invoke it from the outside world? We need an endpoint, right?

    Go to API Gateway, click on “Get Started”, and agree to create an Example API; however, we will not use that API — instead we will create a “New API”. Give it a name, keeping the “Endpoint Type” regional for now.

    Create the API, and you will land on the “Resources” page of the created API. Go through the following steps:

    A. Click on “Actions”, then click on “Create Method”. Select the GET method for our function, then click the tick mark on the right side of “GET” to set it up.
    B. Choose “Lambda Function” as the integration type.
    C. Choose the region where we created the function earlier.
    D. Write the name of the Lambda function we created.
    E. Save the method; it will ask you to confirm “Add Permission to Lambda Function”. Agree to that, and it’s done.
    F. Now we can test our setup. Click on “Test” to run the API. It should return the same response text we saw on the Lambda test screen.

    Now, to get the endpoint, we need to deploy the API. On the “Actions” dropdown, click “Deploy API” under “API Actions”. Fill in the details of the deployment and hit “Deploy”.

    After that, we will get our HTTPS endpoint.
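    The invoke URL that API Gateway hands back follows a predictable format; a small helper makes the pieces explicit. The API id, region, and stage below are placeholders, not values from a real deployment.

    ```python
    def invoke_url(api_id, region, stage):
        """Build the standard invoke URL for an API Gateway REST API stage."""
        return "https://%s.execute-api.%s.amazonaws.com/%s" % (api_id, region, stage)

    print(invoke_url("abc123def4", "us-east-1", "prod"))
    # https://abc123def4.execute-api.us-east-1.amazonaws.com/prod
    ```

    Requests to that base URL (plus your resource path) are what reach the GET method we wired to the Lambda function.
    
    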

    On the above screen, you can see settings like caching, throttling, and logging, which can be configured. Save the changes and browse the invoke URL; it returns the same response we were earlier getting from Lambda. With that, the logic tier of our serverless application is done.

    Data Tier

    By using Lambda as your logic tier, you have a number of storage options for your data tier. These options fall into two broad categories: Amazon VPC-hosted data stores and IAM-enabled data stores. Lambda can integrate securely with both.

    Amazon VPC Hosted Data Stores

    1. Amazon RDS
    2. Amazon ElastiCache
    3. Amazon Redshift

    IAM-Enabled Data Stores

    1. Amazon DynamoDB
    2. Amazon S3
    3. Amazon Elasticsearch Service

    You can use any of these for storage, but DynamoDB is one of the best suited for serverless applications.

    Why DynamoDB?

    1. It is a NoSQL database that is fully managed by AWS.
    2. It provides fast and predictable performance with seamless scalability.
    3. DynamoDB lets you offload the administrative burden of operating and scaling a distributed system.
    4. It offers encryption at rest, which eliminates the operational burden and complexity involved in protecting sensitive data.
    5. You can scale your tables’ throughput capacity up or down without downtime or performance degradation.
    6. It provides on-demand backups as well as point-in-time recovery for your DynamoDB tables.
    7. DynamoDB can automatically delete expired items from a table, helping you reduce storage usage and the cost of storing data that is no longer relevant.

    Following is a sample Python script for DynamoDB, which you can use with Lambda.

    from __future__ import print_function # Python 2/3 compatibility
    import boto3
    import json
    import decimal
    from boto3.dynamodb.conditions import Key, Attr
    from botocore.exceptions import ClientError
    
    # Helper class to convert a DynamoDB item to JSON.
    class DecimalEncoder(json.JSONEncoder):
        def default(self, o):
            if isinstance(o, decimal.Decimal):
                if o % 1 > 0:
                    return float(o)
                else:
                    return int(o)
            return super(DecimalEncoder, self).default(o)
    
    # NOTE: endpoint_url points at a local DynamoDB instance (DynamoDB Local);
    # remove it to use the real DynamoDB service from within Lambda.
    dynamodb = boto3.resource("dynamodb", region_name='us-west-2', endpoint_url="http://localhost:8000")
    
    table = dynamodb.Table('Movies')
    
    title = "The Big New Movie"
    year = 2015
    
    try:
        response = table.get_item(
            Key={
                'year': year,
                'title': title
            }
        )
    except ClientError as e:
        print(e.response['Error']['Message'])
    else:
        item = response['Item']
        print("GetItem succeeded:")
        print(json.dumps(item, indent=4, cls=DecimalEncoder))
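    The DecimalEncoder helper above can be sanity-checked locally without any DynamoDB connection, since it only concerns JSON serialization of the Decimal values DynamoDB returns for numbers. (The class is repeated below so the snippet is self-contained.)

    ```python
    import decimal
    import json

    class DecimalEncoder(json.JSONEncoder):
        # Same helper as above: whole Decimals become ints, fractional ones floats.
        def default(self, o):
            if isinstance(o, decimal.Decimal):
                if o % 1 > 0:
                    return float(o)
                return int(o)
            return super(DecimalEncoder, self).default(o)

    item = {"year": decimal.Decimal("2015"), "rating": decimal.Decimal("4.5")}
    print(json.dumps(item, cls=DecimalEncoder))  # {"year": 2015, "rating": 4.5}
    ```

    Without the encoder, json.dumps would raise a TypeError on the Decimal values.
    
    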

    Note: To run the above script successfully, you need to attach a policy to your Lambda execution role — in this case, a policy allowing the DynamoDB operations and, if you want to store your logs, CloudWatch Logs access. The following is a policy you can attach to your role for the DB executions.

    {
    	"Version": "2012-10-17",
    	"Statement": [{
    			"Effect": "Allow",
    			"Action": [
    				"dynamodb:BatchGetItem",
    				"dynamodb:GetItem",
    				"dynamodb:Query",
    				"dynamodb:Scan",
    				"dynamodb:BatchWriteItem",
    				"dynamodb:PutItem",
    				"dynamodb:UpdateItem"
    			],
    			"Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/SampleTable"
    		},
    		{
    			"Effect": "Allow",
    			"Action": [
    				"logs:CreateLogStream",
    				"logs:PutLogEvents"
    			],
    			"Resource": "arn:aws:logs:eu-west-1:123456789012:*"
    		},
    		{
    			"Effect": "Allow",
    			"Action": "logs:CreateLogGroup",
    			"Resource": "*"
    		}
    	]
    }

    Sample Architecture Patterns

    You can implement the following popular architecture patterns using API Gateway and Lambda as your logic tier, Amazon S3 for the presentation tier, and DynamoDB as your data tier. For each example, we will only use AWS services that do not require users to manage their own infrastructure.

    Mobile Backend

    1. Presentation Tier: A mobile application running on each user’s smartphone.

    2. Logic Tier: API Gateway & Lambda. The logic tier is globally distributed by the Amazon CloudFront distribution created as part of each API Gateway API. A set of Lambda functions can be specific to user/device identity management and authentication, managed by Amazon Cognito, which provides integration with IAM for temporary user access credentials as well as with popular third-party identity providers. Other Lambda functions can define the core business logic for your mobile back end.

    3. Data Tier: The various data storage services can be leveraged as needed; options are given above in data tier.

    Amazon S3 Hosted Website

    1. Presentation Tier: Static website content hosted on S3, distributed by Amazon CloudFront. Hosting static website content on S3 is a cost-effective alternative to hosting content on server-based infrastructure. However, for a website to offer rich features, the static content often must integrate with a dynamic back end.

    2. Logic Tier: API Gateway & Lambda. Static web content hosted in S3 can directly integrate with API Gateway, which can be made CORS-compliant.

    3. Data Tier: The various data storage services can be leveraged based on your requirement.

    Serverless Costing

    At the top of the AWS invoice, we can see the total cost of the AWS services. The bill shown was for 2.1 million API requests and all of the infrastructure required to support them.

    Following is the list of services with their costing.

    Note: You can estimate your own costs with the AWS calculators using the following links:

    1. https://calculator.s3.amazonaws.com/index.html
    2. AWS Pricing Calculator

    Conclusion

    The three-tier architecture pattern encourages the best practice of creating application components that are easy to maintain, develop, decouple, and scale. The serverless services you choose will vary based on your application's requirements as development evolves.

  • Learn How to Quickly Setup Istio Using GKE and its Applications

    In this blog, we will try to understand Istio and its YAML configurations. You will also learn why Istio is great for managing traffic and how to set it up using Google Kubernetes Engine (GKE). I’ve also shed some light on deploying Istio in various environments and applications like intelligent routing, traffic shifting, injecting delays, and testing the resiliency of your application.

    What is Istio?

    Istio’s website describes it as “An open platform to connect, manage, and secure microservices”.

    As a network of microservices (known as a ‘service mesh’) grows in size and complexity, it can become tougher to understand and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring, and often more complex operational requirements such as A/B testing, canary releases, rate limiting, access control, and end-to-end authentication. Istio claims to provide a complete end-to-end solution to these problems.

    Why Istio?

    • Provides automatic load balancing for various protocols like HTTP, gRPC, WebSocket, and TCP traffic. It means you can cater to the needs of web services and also frameworks like Tensorflow (it uses gRPC).
    • To control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions.
    • To gain an understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues.

    Let’s explore the architecture of Istio.

    Istio’s service mesh is split logically into two components:

    1. Data plane – a set of intelligent proxies (Envoy) deployed as sidecars alongside each microservice; they control all communication between microservices.
    2. Control plane – manages and configures the proxies to route traffic. It also enforces policies.

    Envoy – Istio uses an extended version of Envoy (an L7 proxy and communication bus designed for large, modern service-oriented architectures), written in C++. It manages inbound and outbound traffic for the service mesh.

    Enough theory; now let us set up Istio to see things in action. A notable point is that Istio is pretty fast: its control plane is written in Go, and it adds very little overhead to your system.

    Setup Istio on GKE

    You can set up Istio either via the command line or via the UI. We have used the command-line installation for this blog.

    Sample Book Review Application

    Following this link, you can easily deploy the sample Bookinfo application.

    The Bookinfo application is broken into four separate microservices:

    • productpage. The productpage microservice calls the details and reviews microservices to populate the page.
    • details. The details microservice contains book information.
    • reviews. The reviews microservice contains book reviews. It also calls the ratings microservice.
    • ratings. The ratings microservice contains book ranking information that accompanies a book review.

    There are 3 versions of the reviews microservice:

    • Version v1 doesn’t call the ratings service.
    • Version v2 calls the ratings service and displays each rating as 1 to 5 black stars.
    • Version v3 calls the ratings service and displays each rating as 1 to 5 red stars.

    The end-to-end architecture of the application is shown below.

    If everything goes well, you will have a web app like this (served at http://GATEWAY_URL/productpage):

    Let’s take a case where 50% of traffic is routed to v1 and the remaining 50% to v3.

    This is what the config file looks like (/path/to/istio-0.2.12/samples/bookinfo/kube/route-rule-reviews-50-v3.yaml):

    apiVersion: config.istio.io/v1alpha2
    kind: RouteRule
    metadata:
      name: reviews-default
    spec:
      destination:
        name: reviews
      precedence: 1
      route:
      - labels:
          version: v1
        weight: 50
      - labels:
          version: v3
        weight: 50

    Let’s try to understand the config file above.

    Istio provides a simple Domain-specific language (DSL) to control how API calls and layer-4 traffic flow across various services in the application deployment.

    In the above configuration, we are adding a “Route Rule”, which routes the traffic coming to a destination. The destination is the name of the service to which the traffic is being routed. The route labels identify the specific service instances that will receive the traffic.

    In this Kubernetes deployment of Istio, the route labels “version: v1” and “version: v3” indicate that only pods carrying those labels will receive traffic, 50% each.

    Multiple route rules can be applied to the same destination. When there is more than one rule for a given destination, the order of evaluation is specified by setting the precedence field of each rule.

    The precedence field is an optional integer value, 0 by default. Rules with higher precedence values are evaluated first. If there is more than one rule with the same precedence value the order of evaluation is undefined.

    When is precedence useful? Whenever the routing for a particular service is purely weight-based, it can be specified in a single rule. When other criteria, such as request headers, are also used to route traffic, more than one rule is needed, and precedence determines the order in which they are evaluated.

    Once a rule is found that applies to the incoming request, it will be executed and the rule-evaluation process will terminate. That’s why it’s very important to carefully consider the priorities of each rule when there is more than one.

    In short, precedence determines which rule wins when more than one rule matches the same request; within a single rule, the weights on the route labels (here “version: v1” and “version: v3”) determine how traffic is split.
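    The evaluation order described above can be sketched in a few lines of plain Python. This is only a conceptual illustration, not Istio code; the rule and request shapes are hypothetical:

    ```python
    def pick_rule(rules, request):
        # Rules with higher precedence are evaluated first (default is 0);
        # the first rule whose match condition applies wins, and evaluation stops.
        for rule in sorted(rules, key=lambda r: r.get("precedence", 0), reverse=True):
            match = rule.get("match")
            if match is None or match(request):
                return rule
        return None

    rules = [
        {"name": "reviews-default", "precedence": 1},
        {"name": "reviews-test-user", "precedence": 2,
         "match": lambda req: "user=velotio" in req.get("cookie", "")},
    ]

    print(pick_rule(rules, {"cookie": "user=velotio"})["name"])  # reviews-test-user
    print(pick_rule(rules, {"cookie": ""})["name"])              # reviews-default
    ```

    A request matching the higher-precedence rule never reaches the default rule, which is exactly why precedence matters when rules overlap.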

    Intelligent Routing Using Istio

    We will demonstrate an example aimed at getting more control over routing the traffic coming to our app. Before reading ahead, make sure that you have installed Istio and the book review application.

    First, we will set a default version for all microservices.

    > kubectl create -f samples/bookinfo/kube/route-rule-all-v1.yaml

    Then wait a few seconds for the rules to propagate to all pods before attempting to access the application. This sets the default route to the v1 version (which doesn’t call the ratings service). Now we want a specific user, say velotio, to see the v2 version. We write a YAML file (test-velotio.yaml):

    apiVersion: config.istio.io/v1alpha2
    kind: RouteRule
    metadata:
      name: test-velotio
      namespace: default
      ...
    spec:
      destination:
        name: reviews
      match:
        request:
          headers:
            cookie:
              regex: ^(.*?;)?(user=velotio)(;.*)?$
      precedence: 2
      route:
      - labels:
          version: v2

    We then apply this rule:

    > kubectl create -f path/to/test-velotio.yaml

    Now if any other user logs in, they won’t see any ratings (they will see the v1 version), but when the “velotio” user logs in, they will see the v2 version!

    This is how we can intelligently do content-based routing. We used Istio to send 100% of the traffic to the v1 version of each of the Bookinfo services, and then set a rule to selectively send traffic to version v2 of the reviews service based on a header (i.e., a user cookie) in the request.

    Traffic Shifting

    Now Let’s take a case in which we have to shift traffic from an old service to a new service.

    We can use Istio to gradually transfer traffic from one microservice to another; for example, we can move 10%, 20%, 25% ... up to 100% of the traffic. For the simplicity of this blog, we will move traffic from reviews:v1 to reviews:v3 in two steps: 40%, then 100%.

    First, we set the default version v1.

    > kubectl create -f samples/bookinfo/kube/route-rule-all-v1.yaml

    We write a YAML file, route-rule-reviews-40-v3.yaml:

    apiVersion: config.istio.io/v1alpha2
    kind: RouteRule
    metadata:
      name: reviews-default
      namespace: default
    spec:
      destination:
        name: reviews
      precedence: 1
      route:
      - labels:
          version: v1
        weight: 60
      - labels:
          version: v3
        weight: 40

    Then we apply a new rule.

    > kubectl create -f path/to/route-rule-reviews-40-v3.yaml

    Now refresh the productpage in your browser, and you should see red-colored star ratings approximately 40% of the time. Once that is stable, we transfer all of the traffic to v3.

    > istioctl replace -f samples/bookinfo/kube/route-rule-reviews-v3.yaml
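    Conceptually, weight-based routing boils down to a weighted random choice over the labeled versions. Here is a plain-Python sketch of that idea (illustrative only, not how Envoy actually implements it):

    ```python
    import random

    def pick_version(routes, rand=random.random):
        # routes: list of (label, weight) pairs; a random point in
        # [0, total) falls into exactly one version's weight band.
        total = sum(weight for _, weight in routes)
        point = rand() * total
        cumulative = 0
        for label, weight in routes:
            cumulative += weight
            if point < cumulative:
                return label
        return routes[-1][0]

    routes = [("v1", 60), ("v3", 40)]
    # Pin the random draw to show both outcomes deterministically.
    print(pick_version(routes, rand=lambda: 0.30))  # v1
    print(pick_version(routes, rand=lambda: 0.75))  # v3
    ```

    Shifting traffic is then just a matter of changing the weights, which is what the successive route-rule files do.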

    Inject Delays and Test the Resiliency of Your Application

    Here we will check fault injection using HTTP delay. To test our Bookinfo application microservices for resiliency, we will inject a 7s delay between the reviews:v2 and ratings microservices, for user “Jason”. Since the reviews:v2 service has a 10s timeout for its calls to the ratings service, we expect the end-to-end flow to continue without any errors.

    > istioctl create -f samples/bookinfo/kube/route-rule-ratings-test-delay.yaml

    Now we check if the rule was applied correctly,

    > istioctl get routerule ratings-test-delay -o yaml

    Now we allow several seconds for the rule to propagate to all pods. Log in as user “Jason”. If the application’s front page is set up to correctly handle delays, we expect it to load within approximately 7 seconds.

    Conclusion

    In this blog, we only explored the routing capabilities of Istio. We found that Istio gives us a good amount of control over routing, fault injection, etc. in microservices. Istio has a lot more to offer, like load balancing and security. We encourage you to toy around with Istio and tell us about your experiences.

    Happy Coding!

  • The 7 Most Useful Design Patterns in ES6 (and how you can implement them)

    After spending a couple of years in JavaScript development, I’ve realized how incredibly important design patterns are in modern JavaScript (ES6). And I’d love to share my experience and knowledge on the subject, hoping you’ll make this a critical part of your development process as well.

    Note: All the examples covered in this post are implemented with ES6 features, but you can also integrate the design patterns with ES5.

    At Velotio, we always follow best practices to achieve highly maintainable and more robust code. And we are strong believers of using design patterns as one of the best ways to write clean code. 

    In the post below, I’ve listed the most useful design patterns I’ve implemented so far and how you can implement them too:

    1. Module

    The module pattern simply allows you to keep units of code cleanly separated and organized. 

    Modules promote encapsulation, which means the variables and functions are kept private inside the module body and can’t be overwritten.

    Creating a module in ES6 is quite simple.

    // Addition module
    export const sum = (num1, num2) => num1 + num2;

    // usage
    import { sum } from 'modules/sum';
    const result = sum(20, 30); // 50

    ES6 also allows us to export the module as default. The following example gives you a better understanding of this.

    // All the variables and functions which are not exported are private within the module and cannot be used outside. Only the exported members are public and can be used by importing them.
    
    // Here the businessList is private member to city module
    const businessList = new WeakMap();
     
    // Here City uses the businessList member as it’s in same module
    class City {
     constructor() {
       businessList.set(this, ['Pizza Hut', 'Dominos', 'Street Pizza']);
     }
     
     // public method to access the private ‘businessList’
     getBusinessList() {
       return businessList.get(this);
     }
    
    // public method to add business to ‘businessList’
     addBusiness(business) {
       businessList.get(this).push(business);
     }
    }
     
    // export the City class as default module
    export default City;

    // usage
    import City from 'modules/city';
    const city = new City();
    city.getBusinessList();

    There is a great article written on the features of ES6 modules here.

    2. Factory

    Imagine creating a notification management application that currently only supports notifications through email, so most of the code lives inside the EmailNotification class. Now there is a new requirement for push notifications. To implement PushNotification, you have to do a lot of work because your application is tightly coupled with EmailNotification, and you will repeat the same work for every future notification type.

    To solve this complexity, we will delegate the object creation to another object called factory.

    class PushNotification {
     constructor(sendTo, message) {
       this.sendTo = sendTo;
       this.message = message;
     }
    }
     
    class EmailNotification {
     constructor(sendTo, cc, emailContent) {
       this.sendTo = sendTo;
       this.cc = cc;
       this.emailContent = emailContent;
     }
    }
     
    // Notification Factory
     
    class NotificationFactory {
     createNotification(type, props) {
       switch (type) {
         case 'email':
           return new EmailNotification(props.sendTo, props.cc, props.emailContent);
         case 'push':
           return new PushNotification(props.sendTo, props.message);
       }
     }
    }
     
    // usage
    const factory = new NotificationFactory();
     
    // create email notification
    const emailNotification = factory.createNotification('email', {
     sendTo: 'receiver@domain.com',
     cc: 'test@domain.com',
     emailContent: 'This is the email content to be delivered.!',
    });
     
    // create push notification
    const pushNotification = factory.createNotification('push', {
     sendTo: 'receiver-device-id',
     message: 'The push notification message',
    });

    3. Observer

    (Also known as the publish/subscribe pattern.)

    An observer pattern maintains the list of subscribers so that whenever an event occurs, it will notify them. An observer can also remove the subscriber if the subscriber no longer wishes to be notified.

    On YouTube, for example, the channels we’re subscribed to notify us whenever a new video is uploaded.

    // Publisher
    class Video {
     constructor(observable, name, content) {
       this.observable = observable;
       this.name = name;
       this.content = content;
       // publish the ‘video-uploaded’ event
       this.observable.publish('video-uploaded', {
         name,
         content,
       });
     }
    }
    // Subscriber
    class User {
     constructor(observable) {
       this.observable = observable;
       this.interestedVideos = [];
       // subscribe with the event name and the callback function
       this.observable.subscribe('video-uploaded', this.addVideo.bind(this));
     }
     
     addVideo(video) {
       this.interestedVideos.push(video);
     }
    }
    // Observer 
    class Observable {
     constructor() {
       this.handlers = [];
     }
     
     subscribe(event, handler) {
       this.handlers[event] = this.handlers[event] || [];
       this.handlers[event].push(handler);
     }
     
     publish(event, eventData) {
       const eventHandlers = this.handlers[event];
     
       if (eventHandlers) {
         for (var i = 0, l = eventHandlers.length; i < l; ++i) {
           eventHandlers[i].call({}, eventData);
         }
       }
     }
    }
    // usage
    const observable = new Observable();
    const user = new User(observable);
    const video = new Video(observable, 'ES6 Design Patterns', 'video-content');

    4. Mediator

    The mediator pattern provides a unified interface through which different components of an application can communicate with each other.

    If a system appears to have too many direct relationships between components, it may be time to have a central point of control that components communicate through instead. 

    The mediator promotes loose coupling. 

    A real-world analogy could be a traffic light that controls which vehicles go and stop, since all the coordination comes from the traffic light.

    Let’s create a chatroom (mediator) through which the participants can register themselves. The chatroom is responsible for handling the routing when the participants chat with each other. 

    // each participant is represented by a Participant object
    class Participant {
     constructor(name) {
       this.name = name;
       this.chatroom = null;
     }
     
     // send a message via the mediator; omit ‘to’ to broadcast
     send(message, to) {
       this.chatroom.send(message, this, to);
     }
     
     receive(message, from) {
       console.log(`${from.name} to ${this.name}: ${message}`);
     }
    }
     
    // Mediator
    class Chatroom {
     constructor() {
       this.participants = {};
     }
     
     register(participant) {
       this.participants[participant.name] = participant;
       participant.chatroom = this;
     }
     
     send(message, from, to) {
       if (to) {
         // single message
         to.receive(message, from);
       } else {
         // broadcast message to everyone
          for (const key in this.participants) {
           if (this.participants[key] !== from) {
             this.participants[key].receive(message, from);
           }
         }
       }
     }
    }
     
    // usage
    // Create two participants  
     const john = new Participant('John');
     const snow = new Participant('Snow');
    // Register the participants to Chatroom
     var chatroom = new Chatroom();
     chatroom.register(john);
     chatroom.register(snow);
    // Participants now chat with each other
     john.send('Hey, Snow!');
     john.send('Are you there?');
     snow.send('Hey man', john);
     snow.send('Yes, I heard that!');

    5. Command

    In the command pattern, an operation is wrapped as a command object and passed to the invoker object. The invoker object passes the command to the corresponding object, which executes the command.
    
    The command pattern decouples the objects executing the commands from the objects issuing them, by encapsulating actions as objects. The invoker maintains a stack of commands: whenever a command is executed, it is pushed onto the stack. To undo a command, it pops the command from the stack and performs the reverse action.

    You can consider a calculator that performs addition, subtraction, multiplication, and division, where each operation is encapsulated by a command object.

    // The list of operations can be performed
    const addNumbers = (num1, num2) => num1 + num2;
    const subNumbers = (num1, num2) => num1 - num2;
    const multiplyNumbers = (num1, num2) => num1 * num2;
    const divideNumbers = (num1, num2) => num1 / num2;
     
    // CalculatorCommand class initialize with execute function, undo function // and the value 
    class CalculatorCommand {
     constructor(execute, undo, value) {
       this.execute = execute;
       this.undo = undo;
       this.value = value;
     }
    }
    // Here we are creating the command objects
    const DoAddition = value => new CalculatorCommand(addNumbers, subNumbers, value);
    const DoSubtraction = value => new CalculatorCommand(subNumbers, addNumbers, value);
    const DoMultiplication = value => new CalculatorCommand(multiplyNumbers, divideNumbers, value);
    const DoDivision = value => new CalculatorCommand(divideNumbers, multiplyNumbers, value);
     
    // AdvancedCalculator which maintains the list of commands to execute and // undo the executed command
    class AdvancedCalculator {
     constructor() {
       this.current = 0;
       this.commands = [];
     }
     
     execute(command) {
       this.current = command.execute(this.current, command.value);
       this.commands.push(command);
     }
     
     undo() {
       let command = this.commands.pop();
       this.current = command.undo(this.current, command.value);
     }
     
     getCurrentValue() {
       return this.current;
     }
    }
    
    // usage
    const advCal = new AdvancedCalculator();
     
    // invoke commands
    advCal.execute(new DoAddition(50)); //50
    advCal.execute(new DoSubtraction(25)); //25
    advCal.execute(new DoMultiplication(4)); //100
    advCal.execute(new DoDivision(2)); //50
     
    // undo commands
    advCal.undo();
    advCal.getCurrentValue(); //100

    6. Facade

    The facade pattern is used when we want to expose a higher level of abstraction and hide the complexity of a large codebase behind it.

    A great example of this pattern can be found in common DOM manipulation libraries like jQuery, which simplify element selection and event binding.

    // JavaScript:
    /* handle click event  */
    document.getElementById('counter').addEventListener('click', () => {
     counter++;
    });
     
    // jQuery:
    /* handle click event */
    $('#counter').on('click', () => {
     counter++;
    });

    Though it seems simple on the surface, there is an entire complex logic implemented when performing the operation.

    The following Account Creation example gives you clarity about the facade pattern: 

    // Here AccountManager is responsible to create new account of type 
    // Savings or Current with the unique account number
    let currentAccountNumber = 0;
    
    class AccountManager {
     createAccount(type, details) {
       const accountNumber = AccountManager.getUniqueAccountNumber();
       let account;
       if (type === 'current') {
          account = new CurrentAccounts();
       } else {
         account = new SavingsAccount();
       }
       return account.addAccount({ accountNumber, details });
     }
     
     static getUniqueAccountNumber() {
       return ++currentAccountNumber;
     }
    }
    
    
    // class Accounts maintains the list of all accounts created
    class Accounts {
     constructor() {
       this.accounts = [];
     }
     
     addAccount(account) {
       this.accounts.push(account);
        return this.successMessage(account);
     }
     
     getAccount(accountNumber) {
       return this.accounts.find(account => account.accountNumber === accountNumber);
     }
     
     successMessage(account) {}
    }
    
    // CurrentAccounts extends the implementation of Accounts for providing more specific success messages on successful account creation
    class CurrentAccounts extends Accounts {
     constructor() {
       super();
       if (CurrentAccounts.exists) {
         return CurrentAccounts.instance;
       }
       CurrentAccounts.instance = this;
       CurrentAccounts.exists = true;
       return this;
     }
     
     successMessage({ accountNumber, details }) {
       return `Current Account created with ${details}. ${accountNumber} is your account number.`;
     }
    }
     
    // Same here, SavingsAccount extends the implementation of Accounts for providing more specific success messages on successful account creation
    class SavingsAccount extends Accounts {
     constructor() {
       super();
       if (SavingsAccount.exists) {
         return SavingsAccount.instance;
       }
       SavingsAccount.instance = this;
       SavingsAccount.exists = true;
       return this;
     }
     
     successMessage({ accountNumber, details }) {
       return `Savings Account created with ${details}. ${accountNumber} is your account number.`;
     }
    }
     
    // usage
    // Here we are hiding the complexities of creating account
    const accountManager = new AccountManager();
     
    const currentAccount = accountManager.createAccount('current', { name: 'John Snow', address: 'pune' });
     
    const savingsAccount = accountManager.createAccount('savings', { name: 'Petter Kim', address: 'mumbai' });

    7. Adapter

    The adapter pattern converts the interface of a class to another expected interface, making two incompatible interfaces work together. 

    With the adapter pattern, you might need to show data from a 3rd-party library as a bar chart, but the data format of the 3rd-party API and the format the bar chart expects are different. Below, you’ll find an adapter that converts the 3rd-party API response to Highcharts’ bar representation:

    // API response from the 3rd party library
    const response = [{
       symbol: 'SIC DIVISION',
       exchange: 'Agricultural services',
       volume: 42232,
    }];
     
    // Required format:
    // [{
    //    category: 'Agricultural services',
    //    name: 'SIC DIVISION',
    //    y: 42232,
    // }]
     
    // Map each API key to the key Highcharts expects
    const mapping = {
     symbol: 'name',
     exchange: 'category',
     volume: 'y',
    };
     
    const highchartsAdapter = (response, mapping) => {
     return response.map(item => {
       const normalized = {};
     
       // Normalize each response's item key, according to the mapping
       Object.keys(item).forEach(key => (normalized[mapping[key]] = item[key]));
       return normalized;
     });
    };
     
    highchartsAdapter(response, mapping);

    Conclusion

    This has been a brief introduction to design patterns in modern JavaScript (ES6). The subject is massive, but hopefully this article has shown you the benefits of using these patterns when writing code.

    Related Articles

    1. Cleaner, Efficient Code with Hooks and Functional Programming

    2. Building a Progressive Web Application in React [With Live Code Examples]

  • Implementing GraphQL with Flutter: Everything you need to know

    Thinking about using GraphQL but unsure where to start? 

    This is a concise tutorial based on our experience using GraphQL. You will learn how to use GraphQL in a Flutter app, including how to create a query, a mutation, and a subscription using the graphql_flutter plugin. Once you’ve mastered the fundamentals, you can move on to designing your own workflow.

    Key topics and takeaways:

    * GraphQL

    * What is graphql_flutter?

    * Setting up graphql_flutter and GraphQLProvider

    * Queries

    * Mutations

    * Subscriptions

    GraphQL

    Tired of calling multiple endpoints to populate data for a single screen? Wish you had more control over the data returned by an endpoint? Do your calls return more data than you need, instead of only the necessary fields?

    Follow along to learn how to do this with GraphQL. GraphQL’s goal was to change the way data is supplied from the backend, and it allows you to specify the data structure you want.

    Let’s imagine that we have the table model in our database that looks like this:

    Movie {
      title
      genre
      rating
      year
    }

    These fields represent the properties of the Movie Model:

    • title is the name of the movie,
    • genre describes what kind of movie it is,
    • rating represents viewers’ interest,
    • year states when it was released.

    We can get movies like this using REST:

    /GET localhost:8080/movies

    [
     {
       "title": "The Godfather",
       "genre":  "Drama",
       "rating": 9.2,
       "year": 1972
     }
    ]

    As you can see, REST returns all of the properties of each movie, whether or not we need them. In our frontend, we may just need the title and genre properties, yet all of them were returned.

    We can avoid redundancy by using GraphQL. We can specify the properties we wish to be returned using GraphQL, for example:

    query movies {
      Movie {
        title
        genre
      }
    }

    We’re informing the server that we only require the movie table’s title and genre properties. It provides us with exactly what we require:

    {
     "data": [
       {
         "title": "The Godfather",
         "genre": "Drama"
       }
     ]
    }
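    Conceptually, the server is projecting each record onto just the requested fields. A tiny plain-Python illustration of that idea (not actual GraphQL server code; the movie record is the sample from above):

    ```python
    def select_fields(records, fields):
        # Keep only the requested fields from each record, the way a
        # GraphQL query shapes its response.
        return [{field: record[field] for field in fields if field in record}
                for record in records]

    movies = [{"title": "The Godfather", "genre": "Drama",
               "rating": 9.2, "year": 1972}]
    print(select_fields(movies, ["title", "genre"]))
    # [{'title': 'The Godfather', 'genre': 'Drama'}]
    ```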

    GraphQL is a backend technology, whereas Flutter is a frontend SDK for developing mobile apps. When we use a mobile app, the data it displays comes from a backend.

    It’s simple to create a Flutter app that retrieves data from a GraphQL backend. Simply make an HTTP request from the Flutter app, then use the returned data to set up and display the UI.

    What is graphql_flutter?

    The new graphql_flutter plugin includes APIs and widgets that make it simple to retrieve and use data from a GraphQL backend.

    graphql_flutter, as the name suggests, is a GraphQL client for Flutter. It exports widgets and providers for retrieving data from GraphQL backends, such as:

    • HttpLink — This is used to specify the backend’s endpoint or URL.
    • GraphQLClient — This class is used to retrieve a query or mutation from a GraphQL endpoint as well as to connect to a GraphQL server.
    • GraphQLCache — We use this class to cache our queries and mutations. It accepts a store option specifying which store to use for caching.
    • GraphQLProvider — This widget encapsulates the graphql flutter widgets, allowing them to perform queries and mutations. This widget is given to the GraphQL client to use. All widgets in this provider’s tree have access to this client.
    • Query — This widget is used to perform a backend GraphQL query.
    • Mutation — This widget is used to modify a GraphQL backend.
    • Subscription — This widget allows you to create a subscription.

    Setting up graphql_flutter and GraphQLProvider

    Create a Flutter project:

    flutter create flutter_graphql
    cd flutter_graphql

    Next, install the graphql_flutter package:

    flutter pub add graphql_flutter

    The command above installs the graphql_flutter package, adding it to the dependencies section of your pubspec.yaml file:

    dependencies:
      graphql_flutter: ^5.0.0

    To use the widgets, we must import the package as follows:

    import 'package:graphql_flutter/graphql_flutter.dart';

    Before we can start making GraphQL queries and mutations, we must first wrap our root widget in GraphQLProvider. A GraphQLClient instance, wrapped in a ValueNotifier, must be provided to the GraphQLProvider’s client property.

    GraphQLProvider(
      client: client,
      child: ...,
    )

    The GraphQLClient includes the GraphQL server URL as well as a caching mechanism.

    final httpLink = HttpLink("http://10.0.2.2:8000/");

    ValueNotifier<GraphQLClient> client = ValueNotifier(
      GraphQLClient(
        cache: GraphQLCache(),
        link: httpLink,
      ),
    );

    HttpLink holds the URL of the GraphQL server. The HttpLink instance is passed to the GraphQLClient via its link property, which points the client at the GraphQL endpoint.

    The cache passed to GraphQLClient specifies the cache mechanism to be used; here, queries and mutations are persisted to an in-memory store.

    A GraphQLClient instance is passed to a ValueNotifier. A ValueNotifier holds a single value and notifies its listeners whenever that value changes; graphql_flutter uses it to notify its widgets when the data from the GraphQL endpoint changes, keeping the UI responsive.

    We’ll now encase our MaterialApp widget in GraphQLProvider:

    void main() {
      runApp(MyApp());
    }

    class MyApp extends StatelessWidget {
      // This widget is the root of your application.
      @override
      Widget build(BuildContext context) {
        return GraphQLProvider(
          client: client,
          child: MaterialApp(
            title: 'GraphQL Demo',
            theme: ThemeData(primarySwatch: Colors.blue),
            home: MyHomePage(title: 'GraphQL Demo'),
          ),
        );
      }
    }

    Queries

    We’ll use the Query widget to create a query with the graphql_flutter package.

    class MyHomePage extends StatelessWidget {
      @override
      Widget build(BuildContext context) {
        return Query(
          options: QueryOptions(
            document: gql(readCounters),
            variables: {
              'counterId': 23,
            },
            pollInterval: Duration(seconds: 10),
          ),
          builder: (QueryResult result,
              {VoidCallback? refetch, FetchMore? fetchMore}) {
            if (result.hasException) {
              return Text(result.exception.toString());
            }

            if (result.isLoading) {
              return Text('Loading');
            }

            // result.data can hold either a Map or a List
            List counters = result.data!['counter'];

            return ListView.builder(
              itemCount: counters.length,
              itemBuilder: (context, index) {
                return Text(counters[index]['name']);
              },
            );
          },
        );
      }
    }

    The Query widget encloses the ListView widget, which will display the list of counters to be retrieved from our GraphQL server. As a result, the Query widget must wrap the widget where the data fetched by the Query widget is to be displayed.

    The Query widget cannot be the tree’s topmost widget. It can be placed wherever you want as long as the widget that will use its data is underneath or wrapped by it.

    In addition, two properties have been passed to the Query widget: options and builder.

    options

    options: QueryOptions(
      document: gql(readCounters),
      variables: {
        'counterId': 23,
      },
      pollInterval: Duration(seconds: 10),
    ),

    The options property is where the query configuration is passed to the Query widget. It is a QueryOptions instance, and the QueryOptions class exposes the properties we use to configure the Query widget.

    The query string, i.e., the query to be executed by the Query widget, is passed in via the document property. We passed in the readCounters string here:

    // Note: \$ is escaped so Dart string interpolation does not consume it.
    final String readCounters = """
      query readCounters(\$counterId: Int!) {
        counter(id: \$counterId) {
          name
          id
        }
      }
    """;

    The variables attribute is used to send query variables to the Query widget. Here, 'counterId': 23 is supplied; it is substituted for $counterId in the readCounters query string.

    The pollInterval specifies how often the Query widget polls or refreshes the query data. The timer is set to 10 seconds, so the Query widget will perform HTTP requests to refresh the query data every 10 seconds.

    builder

    The builder property is a function that the Query widget calls after it sends an HTTP request to the GraphQL server endpoint. The Query widget invokes the builder with the result of the query, a callback to re-fetch the data, and a fetchMore callback used for pagination, i.e., to fetch additional data.

    The builder function returns widgets that are listed below the Query widget. The result argument is a QueryResult instance. The QueryResult class has properties that can be used to determine the query’s current state and the data returned by the Query widget.

    • If the query encounters an error, QueryResult.hasException is set.
    • If the query is still in progress, QueryResult.isLoading is set. We can use this property to show our users a UI progress bar to let them know that something is on its way.
    • The data returned by the GraphQL endpoint is stored in QueryResult.data.

    Mutations

    Let’s look at how to make mutation queries with the Mutation widget in graphql_flutter.

    The Mutation widget is used as follows:

    Mutation(
      options: MutationOptions(
        document: gql(addCounter),
        update: (GraphQLDataProxy cache, QueryResult result) {
          return cache;
        },
        onCompleted: (dynamic resultData) {
          print(resultData);
        },
      ),
      builder: (
        RunMutation runMutation,
        QueryResult result,
      ) {
        return FlatButton(
          onPressed: () => runMutation({
            'counterId': 21,
          }),
          child: Text('Add Counter'),
        );
      },
    );

    The Mutation widget, like the Query widget, accepts some properties.

    • options is a MutationOptions class instance. This is the location of the mutation string and other configurations.
    • The mutation string is set using a document. An addCounter mutation has been passed to the document in this case. The Mutation widget will handle it.
    • update is called when the cache needs to be updated. It receives the current cache (cache) and the result of the mutation. Anything returned by update becomes the cache’s new value; here, we refresh the cache based on the result.
    • onCompleted is called once the mutation on the GraphQL endpoint has finished; it receives the mutation’s result data.
    • builder returns the widget tree rendered beneath the Mutation widget. This function is invoked with a RunMutation instance, runMutation, and a QueryResult instance, result.
    • The Mutation widget’s mutation is executed by calling runMutation; each call triggers the mutation. The mutation variables are passed as parameters to runMutation, which here is invoked with the counterId variable set to 21.

    When the Mutation’s mutation is finished, the builder is called, and the Mutation rebuilds its tree. runMutation and the mutation result are passed to the builder function.
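    The addCounter mutation string referenced above is not shown in the snippet. A plausible shape, assuming the server exposes an addCounter field (the field and argument names here are hypothetical), would be:

    ```dart
    // Hypothetical mutation string; the addCounter field and its id argument
    // are assumptions about the server's schema, not part of the original post.
    final String addCounter = """
      mutation addCounter(\$counterId: Int!) {
        addCounter(id: \$counterId) {
          name
          id
        }
      }
    """;
    ```

    The 'counterId' key passed to runMutation is matched against the \$counterId variable declared in this string.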

    Subscriptions

    Subscriptions in GraphQL are similar to an event system that listens on a WebSocket and calls a function whenever an event is emitted into the stream.

    The client connects to the GraphQL server via a WebSocket. The event is passed to the WebSocket whenever the server emits an event from its end. So this is happening in real-time.

    The graphql_flutter plugin in Flutter uses WebSockets and Dart streams to open and receive real-time updates from the server.

    Let’s look at how we can use our Flutter app’s Subscription widget to create a real-time connection. We’ll start by creating our subscription string:

    final counterSubscription = '''
      subscription counterAdded {
        counterAdded {
          name
          id
        }
      }
    ''';

    When we add a new counter to our GraphQL server, this subscription will notify us in real-time.

    Subscription(
      options: SubscriptionOptions(
        document: gql(counterSubscription),
      ),
      builder: (result) {
        if (result.hasException) {
          return Text("Error occurred: " + result.exception.toString());
        }

        if (result.isLoading) {
          return Center(
            child: const CircularProgressIndicator(),
          );
        }

        return ResultAccumulator.appendUniqueEntries(
          latest: result.data,
          builder: (context, {results}) => ...,
        );
      },
    ),

    The Subscription widget has several properties, as we can see:

    • options holds the Subscription widget’s configuration.
    • document holds the subscription string.
    • builder returns the Subscription widget’s widget tree.

    The builder function is called with the subscription result, which has the following properties:

    • If the Subscription widget encounters an error while listening for updates from the GraphQL server, result.hasException is set.
    • While the connection is being established or the first event is still pending, result.isLoading is set.

    The provided helper widget ResultAccumulator is used to collect subscription results, according to graphql_flutter’s pub.dev page.
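    To make the elided builder body concrete, a sketch of rendering the accumulated results might look like the following. The result shape follows the counterAdded subscription above, so the field access is an assumption about the server’s payload:

    ```dart
    // Hypothetical rendering of the accumulated subscription results.
    // Each entry is one subscription payload; the 'counterAdded' key and its
    // 'name' field are assumptions based on the subscription string above.
    ResultAccumulator.appendUniqueEntries(
      latest: result.data,
      builder: (context, {results}) => ListView(
        children: [
          for (final entry in results ?? [])
            Text(entry['counterAdded']['name']),
        ],
      ),
    )
    ```

    Each new event emitted by the server appends a unique entry, and the ListView rebuilds with the growing history of counters.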

    Conclusion

    This blog intends to help you understand what makes GraphQL so powerful, how to use it in Flutter, and how to take advantage of the reactive nature of graphql_flutter. You can now take the first steps in building your applications with GraphQL!