Category: Blogs

  • Setting up S3 & CloudFront to Deliver Static Assets Across the Web

    If you have a web application, you probably have static content: files like images, videos, and music. One of the simpler ways to serve that content on the internet is an Amazon S3 bucket. S3 is very easy to set up and use.

    Problems with only using S3 to serve your resources

    But serving content directly from S3 has a few limitations. You will need to:

    • Either keep the bucket public, which is not at all recommended
    • Or create pre-signed URLs to access the private resources. If your application has tons of resources to load, pre-signing each and every one before serving it on the UI adds a lot of latency.

    For these reasons, we will also use AWS’s CloudFront.

    Why use CloudFront with S3?

    Amazon CloudFront, a CDN, is designed to work seamlessly with S3 to serve your S3 content faster. Using CloudFront to serve S3 content also gives you a lot more flexibility and control.

    It has the following advantages:

    • CloudFront provides authentication, so there’s no need to generate a pre-signed URL for each resource.
    • Improved latency, which results in a better end-user experience.
    • CloudFront provides caching, which can reduce running costs, since cached content is not served from S3 every time.
    • You can also attach an SSL certificate for a custom domain to a CloudFront distribution.

    Setting up S3 & CloudFront

    Creating an S3 bucket

    1. Navigate to S3 from the AWS console and click on Create Bucket. Enter a unique bucket name and select the AWS Region.

    2. Make sure the Block Public Access settings for the bucket are set to “Block All Public Access,” as recommended; we don’t need public access to the bucket.

    3. Review other options and create a bucket. Once a bucket is created, you can see it on the S3 dashboard. Open the bucket to view its details, and next, let’s add some assets.

    4. Click on upload and add/drag all the files or folders you want to upload. 

    5. Review the settings and upload. You can see the status once the upload succeeds. Go to the bucket details and open the uploaded asset to see its details.

    If you try to copy the object URL and open it in the browser, you will get an Access Denied error, as we have blocked direct public access.

    We will use CloudFront to serve the S3 assets in the next step. CloudFront will restrict access to your S3 bucket to CloudFront endpoints, making your content and application more secure and performant.

    Creating a CloudFront distribution

    1. Navigate to CloudFront from the AWS console and click on Create Distribution. For the Origin domain, select the bucket from which we want to serve the static assets.

    2. Next, we need to use a CloudFront origin access identity (OAI) to access the S3 bucket. This lets us access private S3 content via CloudFront. To enable it, under S3 bucket access, select “Yes use OAI.” Select an existing origin access identity or create a new one.
    You can also choose to update the S3 bucket policy to allow read access to the OAI if it is not already configured.
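    If you prefer to manage the bucket policy yourself, the statement that grants the OAI read access looks roughly like the following (the OAI ID and bucket name are placeholders, not values from this walkthrough):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXXXXXXXXXXXXX"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}
```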

    3. Review all the settings and create the distribution. You can see the domain name once it is successfully created.

    4. The basic setup is done. If you try to access the asset we uploaded via the CloudFront domain in your browser, it should be served. You can access assets at {cloudfront domain name}/{s3 asset},
    e.g., https://d1g71lhh75winl.cloudfront.net/sample.jpeg

    We have successfully served the assets via CloudFront, but note that all the assets are publicly accessible and not secured. In the next section, we will see how to secure your CloudFront assets.

    Restricting public access

    Previously, while configuring CloudFront, we set Restrict Viewer access to No, which enabled us to access the assets publicly.

    Let’s see how to configure CloudFront to enable signed URLs for assets that should have restricted access. We will be using Trusted key groups, which is the AWS recommended way for restricting access.

    Creating key group

    To create a key pair for a trusted key group, perform the following steps:

    1. Creating the public–private key pair.

    The commands below generate an RSA key pair and store the private key and public key in the private_key.pem and public_key.pem files respectively.

    openssl genrsa -out private_key.pem 2048
    openssl rsa -pubout -in private_key.pem -out public_key.pem

    Note: The above steps use OpenSSL as an example to create a key pair. There are other ways to create an RSA key pair as well.
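    For example, if you would rather stay in Python than shell out to OpenSSL, the same style of key pair can be generated with the cryptography package (using this particular package is an assumption of the sketch, not a requirement of CloudFront):

```python
# Sketch: generating the key pair in Python with the `cryptography`
# package instead of the openssl CLI.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# PKCS#1 PEM, the same format `openssl genrsa` writes.
private_pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.TraditionalOpenSSL,
    serialization.NoEncryption(),
)
# SubjectPublicKeyInfo PEM, the same format `openssl rsa -pubout` writes,
# which is what CloudFront expects when you upload the public key.
public_pem = key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

with open("private_key.pem", "wb") as f:
    f.write(private_pem)
with open("public_key.pem", "wb") as f:
    f.write(public_pem)
```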

    2. Uploading the Public Key to CloudFront.

    To upload, open the CloudFront console and navigate to Public Key. Choose Create Public Key. Add a name, and copy and paste the contents of the public_key.pem file under Key. Once done, click Create Public Key.

    3. Adding the public key to a Key Group.

    To do this, navigate to Key Groups. Add a name and select the public key we created. Once done, click Create Key Group.

    Adding key group signer to distribution

    1. Navigate to CloudFront and choose the distribution whose files you want to protect with signed URLs or signed cookies.
    2. Navigate to the Behaviors tab. Select the cache behavior, and then choose Edit.
    3. For Restrict Viewer Access (Use Signed URLs or Signed Cookies), choose Yes and choose Trusted Key Groups.
    4. For Trusted Key Groups, select the key group, and then choose Add.
    5. Once done, review and Save Changes.

    Cheers, you have successfully restricted public access to the assets. If you try to open any asset URL in the browser, you will now get an access denied response.

    To access the assets, you can create either signed URLs or signed cookies using the private key.

    Setting cookies and accessing CloudFront private URLs

    You need to create and set cookies on the domain to access your content. Once the cookies are set, the browser will send them along with every request.

    The cookies to be set are:

    • CloudFront-Policy: Your policy statement in JSON format, with whitespace removed, then base64 encoded.
    • CloudFront-Signature: A hashed version of the JSON policy statement, signed using the private key and base64 encoded.
    • CloudFront-Key-Pair-Id: The ID of a CloudFront public key, e.g., K4EGX7PEAN4EN. The public key ID tells CloudFront which public key to use to validate the signature.

    Please note that the cookie names are case-sensitive. Make sure the cookies are marked HttpOnly and Secure.
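    One detail worth knowing: CloudFront uses a modified, URL-safe base64 alphabet for the policy and signature values. After standard base64 encoding, + becomes -, = becomes _, and / becomes ~. A minimal pure-Python sketch (the policy resource and expiry are placeholder values):

```python
# Sketch: CloudFront's URL-safe base64 variant applied to a policy statement.
import base64
import json

def cloudfront_b64(data: bytes) -> str:
    """Standard base64, then + -> -, = -> _, / -> ~ per CloudFront's rules."""
    encoded = base64.b64encode(data).decode("ascii")
    return encoded.replace("+", "-").replace("=", "_").replace("/", "~")

# A canned-style policy statement with whitespace removed (placeholder values).
policy = {
    "Statement": [
        {
            "Resource": "https://d1g71lhh75winl.cloudfront.net/*",
            "Condition": {"DateLessThan": {"AWS:EpochTime": 1893456000}},
        }
    ]
}
policy_json = json.dumps(policy, separators=(",", ":"))
cookie_policy_value = cloudfront_b64(policy_json.encode("utf8"))
```

    The botocore code later in this post does the same transformation internally.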

    Set-Cookie: 
    CloudFront-Policy=base64 encoded version of the policy statement; 
    Domain=optional domain name; 
    Path=/optional directory path; 
    Secure; 
    HttpOnly
    
    
    Set-Cookie: 
    CloudFront-Signature=hashed and signed version of the policy statement; 
    Domain=optional domain name; 
    Path=/optional directory path; 
    Secure; 
    HttpOnly
    
    Set-Cookie: 
    CloudFront-Key-Pair-Id=public key ID for the CloudFront public key whose corresponding private key you're using to generate the signature; 
    Domain=optional domain name; 
    Path=/optional directory path; 
    Secure; 
    HttpOnly

    Cookies can be created in any language you work in, with help from the AWS SDK. For this blog, we will create the cookies in Python using the botocore module.

    import datetime
    import functools
    
    import rsa
    from botocore.signers import CloudFrontSigner
    
    # In the format "{protocol}://{domain}/{resource}", e.g. "https://d1g71lhh75winl.cloudfront.net/*"
    CLOUDFRONT_RESOURCE = "https://d1g71lhh75winl.cloudfront.net/*"
    # The ID of the CloudFront public key, e.g. "K4EGX7PEAN4EN"
    CLOUDFRONT_PUBLIC_KEY_ID = "K4EGX7PEAN4EN"
    # Contents of the private_key.pem file associated with the public key
    CLOUDFRONT_PRIVATE_KEY = open("private_key.pem", "rb").read()
    # Expiry datetime for the cookies
    EXPIRES_AT = datetime.datetime.now() + datetime.timedelta(hours=1)
    
    # load the private key
    key = rsa.PrivateKey.load_pkcs1(CLOUDFRONT_PRIVATE_KEY)
    # create a signer function that signs messages with the private key
    rsa_signer = functools.partial(rsa.sign, priv_key=key, hash_method="SHA-1")
    # create a botocore CloudFrontSigner object
    signer = CloudFrontSigner(CLOUDFRONT_PUBLIC_KEY_ID, rsa_signer)
    
    # build the CloudFront policy
    policy = signer.build_policy(CLOUDFRONT_RESOURCE, EXPIRES_AT).encode("utf8")
    # _url_b64encode is a private botocore helper that applies
    # CloudFront's URL-safe base64 variant
    CLOUDFRONT_POLICY = signer._url_b64encode(policy).decode("utf8")
    
    # create the CloudFront signature by signing the policy
    signature = rsa_signer(policy)
    CLOUDFRONT_SIGNATURE = signer._url_b64encode(signature).decode("utf8")
    
    # set these cookies on the response
    COOKIES = {
        "CloudFront-Policy": CLOUDFRONT_POLICY,
        "CloudFront-Signature": CLOUDFRONT_SIGNATURE,
        "CloudFront-Key-Pair-Id": CLOUDFRONT_PUBLIC_KEY_ID,
    }
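    To turn a mapping like COOKIES into the three Set-Cookie headers shown earlier, Python's standard library can help. A sketch with placeholder values standing in for the real policy, signature, and key-pair ID:

```python
# Sketch: rendering Set-Cookie header strings with the stdlib.
# The cookie values and domain below are placeholders.
from http.cookies import SimpleCookie

cookie_values = {
    "CloudFront-Policy": "base64-policy-placeholder",
    "CloudFront-Signature": "signature-placeholder",
    "CloudFront-Key-Pair-Id": "K4EGX7PEAN4EN",
}

jar = SimpleCookie()
headers = []
for name, value in cookie_values.items():
    jar[name] = value
    jar[name]["domain"] = "example.com"   # optional domain
    jar[name]["path"] = "/"
    jar[name]["secure"] = True
    jar[name]["httponly"] = True
    # OutputString() renders "name=value; Domain=...; Path=/; Secure; HttpOnly"
    headers.append(("Set-Cookie", jar[name].OutputString()))
```

    How you attach these headers to a response depends on your web framework.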

    For more details, you can follow the official AWS docs.

    Once you set cookies using the above guide, you should be able to access the asset.

    This is how you can effectively use CloudFront along with S3 to securely serve your content.

  • Taking Amazon’s Elastic Kubernetes Service for a Spin

    With the introduction of Elastic Kubernetes Service at AWS re:Invent last year, AWS finally threw its hat into the booming space of managed Kubernetes services. In this blog post, we will learn the basic concepts of EKS, launch an EKS cluster, and deploy a multi-tier application on it.

    What is Elastic Kubernetes Service (EKS)?

    Kubernetes works on a master-slave architecture, where the master is also referred to as the control plane. If the master goes down, it brings the entire cluster down, so ensuring high availability of the master is absolutely critical: it can be a single point of failure. Keeping the master highly available, and managing all the worker nodes along with it, is a cumbersome task in itself, so most organizations prefer a managed Kubernetes cluster that lets them focus on the most important task, running their applications, rather than managing the cluster. Other cloud providers like Google Cloud and Azure already had managed Kubernetes services, named GKE and AKS respectively. With EKS, Amazon has now rolled out its own managed Kubernetes service to provide a seamless way to run Kubernetes workloads.

    Key EKS concepts:

    EKS takes full advantage of the fact that it runs on AWS: instead of building Kubernetes-specific features from scratch, it reuses and plugs in existing AWS services to achieve Kubernetes-specific functionality. Here is a brief overview:

    IAM-integration: Amazon EKS integrates IAM authentication with Kubernetes RBAC (the role-based access control system native to Kubernetes) with the help of the Heptio Authenticator, a tool that uses AWS IAM credentials to authenticate to a Kubernetes cluster. We can directly attach an RBAC role to an IAM entity, which saves the pain of managing another set of credentials at the cluster level.

    Container Interface: AWS has developed an open source CNI plugin that takes advantage of the fact that multiple network interfaces can be attached to a single EC2 instance, and that these interfaces can have multiple secondary private IPs associated with them. These secondary IPs give pods running on EKS real IP addresses from the VPC CIDR pool, which improves latency for inter-pod communication, as the traffic flows without any overlay.

    ELB Support: We can use any of the AWS ELB offerings (Classic, Network, Application) to route traffic to our services running on the worker nodes.

    Auto scaling:  The number of worker nodes in the cluster can grow and shrink using the EC2 auto scaling service.

    Route 53: With the help of the ExternalDNS project and AWS Route 53, we can manage the DNS entries for the load balancers that get created when we create an Ingress object, or a Service of type LoadBalancer, in our EKS cluster. This way the DNS names are always in sync with the load balancers and need no separate attention.

    Shared responsibility for the cluster: Responsibility for an EKS cluster is shared between AWS and the customer. AWS takes care of the most critical part, managing the control plane (the API server and the etcd database), while customers manage the worker nodes. Amazon EKS automatically runs Kubernetes with three masters across three Availability Zones to protect against a single point of failure. Control plane nodes are monitored and replaced if they fail, and are patched and updated automatically. This ensures high availability of the cluster and makes it extremely simple to migrate existing workloads to EKS.

    Prerequisites for launching an EKS cluster:

    1.  IAM role to be assumed by the cluster: Create an IAM role that allows EKS to manage a cluster on your behalf. Choose EKS as the service which will assume this role and add AWS managed policies ‘AmazonEKSClusterPolicy’ and ‘AmazonEKSServicePolicy’ to it.

    2.  VPC for the cluster: We need to create the VPC where our cluster is going to reside, with subnets, internet gateways and other components configured. We can use an existing VPC if we wish, or create one using the CloudFormation script provided by AWS here or the Terraform script available here. The scripts take the CIDR blocks of the VPC and three subnets as arguments.

    Launching an EKS cluster:

    1.  Using the web console: With the prerequisites in place, we can now go to the EKS console and launch an EKS cluster. When launching, we need to provide a name for the EKS cluster, choose the Kubernetes version to use, provide the IAM role we created in step one, and choose a VPC. Once we choose a VPC, we also need to select the subnets where we want our worker nodes to be launched (by default all the subnets in the VPC are selected), as well as a security group, which is applied to the elastic network interfaces (ENIs) that EKS creates to allow the control plane to communicate with the worker nodes.

    NOTE: A couple of things to note here: the subnets must be in at least two different Availability Zones, and the security group we provided is later updated when we create the worker node cluster, so it is better not to use this security group with any other entity, or to be completely sure of the changes happening to it.

    2. Using awscli:

    aws eks create-cluster --name eks-blog-cluster --role-arn arn:aws:iam::XXXXXXXXXXXX:role/eks-service-role \
    --resources-vpc-config subnetIds=subnet-0b8da2094908e1b23,subnet-01a46af43b2c5e16c,securityGroupIds=sg-03fa0c02886c183d4

    {
        "cluster": {
            "status": "CREATING",
            "name": "eks-blog-cluster",
            "certificateAuthority": {},
            "roleArn": "arn:aws:iam::XXXXXXXXXXXX:role/eks-service-role",
            "resourcesVpcConfig": {
                "subnetIds": [
                    "subnet-0b8da2094908e1b23",
                    "subnet-01a46af43b2c5e16c"
                ],
                "vpcId": "vpc-0364b5ed9f85e7ce1",
                "securityGroupIds": [
                    "sg-03fa0c02886c183d4"
                ]
            },
            "version": "1.10",
            "arn": "arn:aws:eks:us-east-1:XXXXXXXXXXXX:cluster/eks-blog-cluster",
            "createdAt": 1535269577.147
        }
    }

    In the response, we see that the cluster is in the CREATING state. It will take a few minutes before it is available. We can check the status using the command below:

    aws eks describe-cluster --name=eks-blog-cluster

    Configure kubectl for EKS:

    In Kubernetes, we interact with the control plane by making requests to the API server, most commonly via the kubectl command line utility. As our cluster is ready, we now need to install kubectl.

    1.  Install the kubectl binary

    curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

    Give executable permission to the binary.

    chmod +x ./kubectl

    Move the kubectl binary to a folder in your system’s $PATH.

    sudo cp ./kubectl /bin/kubectl

    As discussed earlier EKS uses AWS IAM Authenticator for Kubernetes to allow IAM authentication for your Kubernetes cluster. So we need to download and install the same.

    2.  Install aws-iam-authenticator

    curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator

    Give executable permission to the binary

    chmod +x ./aws-iam-authenticator

    Move the aws-iam-authenticator binary to a folder in your system’s $PATH.

    sudo cp ./aws-iam-authenticator /bin/aws-iam-authenticator

    3.  Create the kubeconfig file

    First create the directory.

    mkdir -p ~/.kube

    Open a config file in the folder created above.

    vi ~/.kube/config-eks-blog-cluster

    Paste the below code in the file

    apiVersion: v1
    clusters:
    - cluster:
        server: https://DBFE36D09896EECAB426959C35FFCC47.sk1.us-east-1.eks.amazonaws.com
        certificate-authority-data: "...................."
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: aws
      name: aws
    current-context: aws
    kind: Config
    preferences: {}
    users:
    - name: aws
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1
          command: aws-iam-authenticator
          args:
            - "token"
            - "-i"
            - "eks-blog-cluster"

    Replace the values of server and certificate-authority-data with the values for your cluster and certificate, and update the cluster name in the args section. You can get these values from the web console, or by using the command:

    aws eks describe-cluster --name=eks-blog-cluster

    Save and exit.

    Add that file path to your KUBECONFIG environment variable so that kubectl knows where to look for your cluster configuration.

    export KUBECONFIG=$KUBECONFIG:~/.kube/config-eks-blog-cluster

    To verify that kubectl is now properly configured:

    kubectl get all
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   50m

    Launch and configure worker nodes:

    Now we need to launch worker nodes before we can start deploying apps. We can create the worker node cluster using the CloudFormation script provided by AWS, which is available here, or the Terraform script available here. The scripts take the following parameters:

    • ClusterName: Name of the Amazon EKS cluster we created earlier.
    • ClusterControlPlaneSecurityGroup: Id of the security group we used in EKS cluster.
    • NodeGroupName: Name for the worker node auto scaling group.
    • NodeAutoScalingGroupMinSize: Minimum number of worker nodes that you always want in your cluster.
    • NodeAutoScalingGroupMaxSize: Maximum number of worker nodes that you want in your cluster.
    • NodeInstanceType: Type of worker node you wish to launch.
    • NodeImageId: AWS provides an Amazon EKS-optimized AMI to be used for worker nodes. Currently EKS is available in only two AWS regions, Oregon and N. Virginia, and the AMI IDs are ami-02415125ccd555295 and ami-048486555686d18a0 respectively.
    • KeyName: Name of the key you will use to ssh into the worker node.
    • VpcId: Id of the VPC that we created earlier.
    • Subnets: Subnets from the VPC we created earlier.

    To enable worker nodes to join your cluster, we need to download, edit and apply the AWS authenticator config map.

    Download the config map:

    curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/aws-auth-cm.yaml

    Open it in an editor

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: <ARN of instance role (not instance profile)>
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes

    Edit the value of rolearn with the ARN of the role of your worker nodes. This value is available in the output of the scripts that you ran. Save the change and then apply it:

    kubectl apply -f aws-auth-cm.yaml

    Now you can check if the nodes have joined the cluster or not.

    kubectl get nodes
    NAME                         STATUS   ROLES    AGE   VERSION
    ip-10-0-2-171.ec2.internal   Ready    <none>   12s   v1.10.3
    ip-10-0-3-58.ec2.internal    Ready    <none>   14s   v1.10.3

    Deploying an application:

    As our cluster is now completely ready, we can start deploying applications on it. We will deploy a simple books API application that connects to a MongoDB database and allows users to store, list and delete book information.

    1. MongoDB Deployment YAML

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: mongodb
    spec:
      template:
        metadata:
          labels:
            app: mongodb
        spec:
          containers:
          - name: mongodb
            image: mongo
            ports:
            - name: mongodbport
              containerPort: 27017
              protocol: TCP

    2. Test Application Deployment YAML

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: test-app
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: test-app
        spec:
          containers:
          - name: test-app
            image: akash125/pyapp
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 3000

    3. MongoDB Service YAML

    apiVersion: v1
    kind: Service
    metadata:
      name: mongodb-service
    spec:
      ports:
      - port: 27017
        targetPort: 27017
        protocol: TCP
        name: mongodbport
      selector:
        app: mongodb

    4. Test Application Service YAML

    apiVersion: v1
    kind: Service
    metadata:
      name: test-service
    spec:
      type: LoadBalancer
      ports:
      - name: test-service
        port: 80
        protocol: TCP
        targetPort: 3000
      selector:
        app: test-app

    Services

    $ kubectl create -f mongodb-service.yaml
    $ kubectl create -f testapp-service.yaml

    Deployments

    $ kubectl create -f mongodb-deployment.yaml
    $ kubectl create -f testapp-deployment.yaml

    $ kubectl get services
    NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
    kubernetes        ClusterIP      172.20.0.1      <none>          443/TCP        12m
    mongodb-service   ClusterIP      172.20.55.194   <none>          27017/TCP      4m
    test-service      LoadBalancer   172.20.188.77   a7ee4f4c3b0ea   80:31427/TCP   3m

    In the EXTERNAL-IP column of test-service, we see the DNS name of a load balancer. We can now access the application from outside the cluster using this DNS name.

    To Store Data:

    curl -X POST -d '{"name":"A Game of Thrones (A Song of Ice and Fire)", "author":"George R.R. Martin","price":343}' http://a7ee4f4c3b0ea11e8b0f912f36098e4d-672471149.us-east-1.elb.amazonaws.com/books
    {"id":"5b8fab49fa142b000108d6aa","name":"A Game of Thrones (A Song of Ice and Fire)","author":"George R.R. Martin","price":343}

    To Get Data:

    curl -X GET http://a7ee4f4c3b0ea11e8b0f912f36098e4d-672471149.us-east-1.elb.amazonaws.com/books
    [{"id":"5b8fab49fa142b000108d6aa","name":"A Game of Thrones (A Song of Ice and Fire)","author":"George R.R. Martin","price":343}]

    We can also put the URL used in the curl command above directly into our browser, and we will get the same response.

    Now our application is deployed on EKS and can be accessed by the users.

    Comparison between GKE, ECS and EKS:

    Cluster creation: Creating a GKE or ECS cluster is much simpler than creating an EKS cluster, with GKE being the simplest of the three.

    Cost: With both GKE and ECS, we pay only for the infrastructure that is visible to us (servers, volumes, ELBs, etc.); there is no cost for the master nodes or other cluster-management services. With EKS, there is a charge of $0.20 per hour for the control plane.
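    As a back-of-the-envelope check, the control-plane charge quoted above works out to roughly $146 per cluster per month (pricing may have changed since this was written):

```python
# Rough monthly cost of the EKS control plane at the hourly rate quoted above.
hourly_rate = 0.20                # USD per hour per cluster control plane
hours_per_month = 24 * 365 / 12   # ~730 hours in an average month
monthly_cost = hourly_rate * hours_per_month
print(round(monthly_cost))        # ~146 USD per month
```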

    Add-ons: GKE provides the option of using Calico as the network plugin which helps in defining network policies for controlling inter pod communication (by default all pods in k8s can communicate with each other).

    Serverless: An ECS cluster can be created using Fargate, the Containers-as-a-Service (CaaS) offering from AWS. EKS is also expected to support Fargate very soon.

    In terms of availability and scalability, all three services are on par with each other.

    Conclusion:

    In this blog post we learned the basic concepts of EKS, launched our own EKS cluster, and deployed an application on it. EKS is a much-awaited service from AWS, especially for folks who were already running their Kubernetes workloads on AWS, as they can now easily migrate to EKS and have a fully managed Kubernetes control plane. EKS is expected to be adopted by many organisations in the near future.

  • How to Make Asynchronous Calls in Redux Without Middlewares

    Redux has greatly helped in reducing the complexities of state management. Its one-way data flow is easier to reason about, and it also provides a powerful mechanism for including middlewares, which can be chained together to do our bidding. One of the most common use cases for middleware is making async calls in the application. Middlewares like redux-thunk, redux-saga and redux-observable are a few examples. All of these come with their own learning curve and are best suited to different scenarios.

    But what if our use-case is simple enough and we don’t want to have the added complexities that implementing a middleware brings? Can we somehow implement the most common use-case of making async API calls using only redux and javascript?

    The answer is yes! This blog will explain how to implement async action calls in Redux without the use of any middlewares.

    So let us first start by making a simple React project using create-react-app:

    npx create-react-app async-redux-without-middlewares
    cd async-redux-without-middlewares
    npm start

    We will also be using react-redux in addition to redux to make our life a little easier, and to mock the APIs we will use https://jsonplaceholder.typicode.com/

    To avoid overcomplicating things, we will implement just two API calls.

    Create a new file called api.js. This is the file in which we will keep the fetch calls to the endpoints.

    export const getPostsById = id => fetch(`https://jsonplaceholder.typicode.com/Posts/${id}`);
     
    export const getPostsBulk = () => fetch("https://jsonplaceholder.typicode.com/posts");

    Each API call has three base actions associated with it: REQUEST, SUCCESS and FAIL. Each of our APIs will be in one of these three states at any given time, and depending on the state we can decide how to render our UI. For example, in the REQUEST state we can show a loader, and in the FAIL state we can show a custom UI telling the user that something has gone wrong.

    So we create REQUEST, SUCCESS and FAIL constants for each API call we will make. In our case, the constants.js file looks something like this:

    export const GET_POSTS_BY_ID_REQUEST = "getpostsbyidrequest";
    export const GET_POSTS_BY_ID_SUCCESS = "getpostsbyidsuccess";
    export const GET_POSTS_BY_ID_FAIL = "getpostsbyidfail";
     
    export const GET_POSTS_BULK_REQUEST = "getpostsbulkrequest";
    export const GET_POSTS_BULK_SUCCESS = "getpostsbulksuccess";
    export const GET_POSTS_BULK_FAIL = "getpostsbulkfail";

    The store.js file and the initialState of our application is as follows:

    import { createStore } from 'redux'
    import reducer from './reducers';
     
    const initialState = {
        byId: {
            isLoading: null,
            error: null,
            data: null
        },
        byBulk: {
            isLoading: null,
            error: null,
            data: null
        }
    };
     
    const store = createStore(reducer, initialState, window.__REDUX_DEVTOOLS_EXTENSION__ && window.__REDUX_DEVTOOLS_EXTENSION__());
     
    export default store;

    As can be seen from the above code, each API's data lives in its own object inside the state object. The isLoading key tells us whether the API is in the REQUEST state.

    Now that we have our store defined, let us see how we will manipulate the state through the different phases an API call can be in. Below is our reducers.js file.

    import {
        GET_POSTS_BY_ID_REQUEST,
        GET_POSTS_BY_ID_SUCCESS,
        GET_POSTS_BY_ID_FAIL,
     
        GET_POSTS_BULK_REQUEST,
        GET_POSTS_BULK_SUCCESS,
        GET_POSTS_BULK_FAIL
     
    } from "./constants";
     
    const reducer = (state, action) => {
        switch (action.type) {
            case GET_POSTS_BY_ID_REQUEST:
                return {
                    ...state,
                    byId: {
                        isLoading: true,
                        error: null,
                        data: null
                    }
                }
            case GET_POSTS_BY_ID_SUCCESS:
                return {
                    ...state,
                    byId: {
                        isLoading: false,
                        error: false,
                        data: action.payload
                    }
                }
            case GET_POSTS_BY_ID_FAIL:
                return {
                    ...state,
                    byId: {
                        isLoading: false,
                        error: action.payload,
                        data: false
                    }
                }
     
            case GET_POSTS_BULK_REQUEST:
                return {
                    ...state,
                    byBulk: {
                        isLoading: true,
                        error: null,
                        data: null
                    }
                }
            case GET_POSTS_BULK_SUCCESS:
                return {
                    ...state,
                    byBulk: {
                        isLoading: false,
                        error: false,
                        data: action.payload
                    }
                }
            case GET_POSTS_BULK_FAIL:
                return {
                    ...state,
                    byBulk: {
                        isLoading: false,
                        error: action.payload,
                        data: false
                    }
                }
            default: return state;
        }
    }
     
    export default reducer;

    By giving each individual API call its own variable to denote the loading phase, we can now easily implement things like multiple loaders on the same screen, depending on which API call is in which phase.

    Now, to actually implement the async behaviour in the actions, we just need a normal JavaScript function that takes dispatch as its first argument. We pass dispatch to the function because it needs to dispatch actions to the store: normally a component has access to dispatch, but since we want an external function to take over dispatching, we have to hand dispatch to it.

    const getPostById = async (dispatch, id) => {
        dispatch({ type: GET_POSTS_BY_ID_REQUEST });
     
        try {
            const response = await getPostsById(id);
            const res = await response.json();
            dispatch({ type: GET_POSTS_BY_ID_SUCCESS, payload: res });
        } catch (e) {
            dispatch({ type: GET_POSTS_BY_ID_FAIL, payload: e });
        }
    };

    And a function that provides dispatch to the above function’s scope:

    export const getPostByIdFunc = dispatch => {
        return id => getPostById(dispatch, id);
    }

    So now our complete actions.js file looks like this:

    import {
        GET_POSTS_BY_ID_REQUEST,
        GET_POSTS_BY_ID_SUCCESS,
        GET_POSTS_BY_ID_FAIL,
     
        GET_POSTS_BULK_REQUEST,
        GET_POSTS_BULK_SUCCESS,
        GET_POSTS_BULK_FAIL
     
    } from "./constants";
     
    import {
        getPostsById,
        getPostsBulk
    } from "./api";
     
    const getPostById = async (dispatch, id) => {
        dispatch({ type: GET_POSTS_BY_ID_REQUEST });
     
        try {
            const response = await getPostsById(id);
            const res = await response.json();
            dispatch({ type: GET_POSTS_BY_ID_SUCCESS, payload: res });
        } catch (e) {
            dispatch({ type: GET_POSTS_BY_ID_FAIL, payload: e });
        }
    };
     
    const getPostBulk = async dispatch => {
        dispatch({ type: GET_POSTS_BULK_REQUEST });
     
        try {
            const response = await getPostsBulk();
            const res = await response.json();
            dispatch({ type: GET_POSTS_BULK_SUCCESS, payload: res });
        } catch (e) {
            dispatch({ type: GET_POSTS_BULK_FAIL, payload: e });
        }
    };
     
    export const getPostByIdFunc = dispatch => {
        return id => getPostById(dispatch, id);
    }
     
    export const getPostsBulkFunc = dispatch => {
        return () => getPostBulk(dispatch);
    }

    Once this is done, all that is left to do is to pass these functions in mapDispatchToProps of our connected component.

    const mapDispatchToProps = dispatch => {
      return {
        getPostById: getPostByIdFunc(dispatch),
        getPostBulk: getPostsBulkFunc(dispatch)
      }
    };

    Our App.js file looks like the one below:

    import React, { Component } from 'react';
    import './App.css';
     
    import { connect } from 'react-redux';
    import { getPostByIdFunc, getPostsBulkFunc } from './actions';
     
    class App extends Component {
      render() {
        console.log(this.props);
        return (
          <div className="App">
            <button onClick={() => {
              this.props.getPostById(1);
            }}>By Id</button>
            <button onClick={() => {
              this.props.getPostBulk();
            }}>In bulk</button>
          </div>
        );
      }
    }
     
    const mapStateToProps = state => {
      return {
        state
      };
    }
     
    const mapDispatchToProps = dispatch => {
      return {
        getPostById: getPostByIdFunc(dispatch),
        getPostBulk: getPostsBulkFunc(dispatch)
      }
    };

    This is how we make async calls in Redux without middleware. It is a much simpler approach than adopting a middleware and the learning curve associated with it. If this approach covers all your use cases, then by all means use it.

    Conclusion

    This type of approach really shines when you have to build a simple application, like a demo of sorts, where API calls are the only side effects you need. In larger and more complicated applications there are a few inconveniences with this approach. First, we have to pass dispatch around, which seems kind of yucky. Second, we have to remember which calls require dispatch and which do not.

    The full code can be found here.

  • The Ultimate Beginner’s Guide to Jupyter Notebooks

    Jupyter Notebooks offer a great way to write and iterate on your Python code. They are a powerful tool for developing data science projects in an interactive way. A Jupyter Notebook showcases source code and its corresponding output in a single place, combining narrative text, visualizations, and other rich media. The intuitive workflow promotes iterative and rapid development, making notebooks the first choice for data scientists. Creating Jupyter Notebooks is completely free, as they fall under Project Jupyter, which is completely open source.

    Project Jupyter is the successor to an earlier project, IPython Notebook, which was first published as a prototype in 2010. Jupyter Notebook is built on top of IPython, an interactive tool for executing Python code in the terminal using the REPL (Read-Eval-Print Loop) model. The IPython kernel executes the Python code and communicates with the Jupyter Notebook front-end interface. By extending IPython, Jupyter Notebooks also provide additional features, such as storing your code and output and supporting Markdown.
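    To make the REPL model concrete, here is a toy sketch of a Read-Eval-Print Loop in Python. It is purely illustrative and is not how IPython is actually implemented:

```python
# Toy Read-Eval-Print Loop: reads each input string, evaluates it,
# prints the result, and loops. A sketch of the model, not IPython itself.
def repl(lines):
    outputs = []
    for line in lines:           # Read
        result = eval(line)      # Eval (expressions only, for simplicity)
        print(result)            # Print
        outputs.append(result)
    return outputs               # ...the for loop itself is the "Loop"

repl(["1 + 1", "6 * 7"])
```

    A real kernel additionally handles statements, errors, and rich display, but the underlying cycle is the same.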

    Although Jupyter Notebooks support using various programming languages, we will focus on Python and its application in this article.

    Getting Started with Jupyter Notebooks!

    Installation

    Prerequisites

    As you may have surmised from the abstract above, you need to have Python installed on your machine. Either Python 2.7 or Python 3.3+ will do.

    Install Using Anaconda

    The simplest way to get started with Jupyter Notebooks is by installing it using Anaconda. Anaconda installs both Python3 and Jupyter and also includes quite a lot of packages commonly used in the data science and machine learning community. You can follow the latest guidelines from here.

    Install Using Pip

    If, for some reason, you decide not to use Anaconda, you can install Jupyter manually using Python’s pip package manager:

    pip install jupyter

    Launching First Notebook

    Open your terminal, navigate to the directory where you would like to store your notebooks, and launch Jupyter Notebook. Typing the command below will instantiate a local server at http://localhost:8888/tree.

    jupyter notebook

    A new window with the Jupyter Notebook interface will open in your internet browser. As you might have already noticed Jupyter starts up a local Python server to serve web apps in your browser, where you can access the Dashboard and work with the Jupyter Notebooks. The Jupyter Notebooks are platform independent which makes it easier to collaborate and share with others.

    The list of all files is displayed under the Files tab, while running processes can be viewed under the Running tab. The third tab, Clusters, comes from IPython parallel, IPython’s parallel computing framework, which lets you control multiple kernel engines.

    Let’s start by making a new notebook. We can easily do this by clicking on the New drop-down list in the top-right corner of the dashboard. You have the option to create a Python 3 notebook, as well as a regular text file, a folder, or a terminal. Please select the Python 3 notebook option.

    Your Jupyter Notebook will open in a new tab as shown in below image.

    Each notebook is opened in a new tab so that you can work with multiple notebooks simultaneously. If you go back to the dashboard tab, you will see the new file Untitled.ipynb with a green icon to its left, which indicates that your new notebook is running.

     

    Why a .ipynb file?

    .ipynb is the standard file format for storing Jupyter Notebooks, hence the file name Untitled.ipynb. Let’s begin by first understanding what an .ipynb file is and what it might contain. Each .ipynb file is a text file that describes the content of your notebook in a JSON format. The content of each cell, whether it is text, code or image attachments that have been converted into strings, along with some additional metadata is stored in the .ipynb file. You can also edit the metadata by selecting “Edit > Edit Notebook Metadata” from the menu options in the notebook.

    You can also view the content of your notebook files by selecting “Edit” from the controls on the dashboard, though there’s no reason to do so unless you really want to edit the file manually.

    Understanding the Notebook Interface

    Now that you have created a notebook, let’s have a look at the various menu options and functions that are readily available. Take some time to scroll through the list of commands that opens up when you click on the keyboard icon (or press Ctrl + Shift + P).

    There are two terms you should learn about: cells and kernels. They are key both to understanding Jupyter and to what makes it more than just a content-writing tool. Fortunately, these concepts are not difficult to understand.

    • A kernel is a program that interprets and executes the user’s code. The Jupyter Notebook App has an inbuilt kernel for Python code, but there are also kernels available for other programming languages.
    • A cell is a container that holds executable code or formatted text.

    Cells

    Cells form the body of a notebook. If you look at the screenshot above for a new notebook (Untitled.ipynb), the text box with the green border is an empty cell. There are 4 types of cells:

    • Code – This is where you type your code and when executed the kernel will display its output below the cell.
    • Markdown – This is where you type your text formatted using Markdown and the output is displayed in place when it is run.
    • Raw NBConvert – Content in these cells is left unmodified and is only processed when the notebook is converted to another format (like HTML or PDF) by the nbconvert command-line tool.
    • Heading – This is where you add Headings to separate sections and make your notebook look tidy and neat. This has now been merged into the Markdown option itself. Adding a ‘#’ at the beginning ensures that whatever you type after that will be taken as a heading.

    Let’s test out how the cells work with a basic “hello world” example. Type print("Hello World!") in the cell and press Ctrl + Enter or click on the Run button in the toolbar at the top.

    print("Hello World!")

    Hello World!

    When you run the cell, its output is displayed below, and the label to its left changes from In [ ] to In [1]. While the cell is still running, the label shows In [*].

    Additionally, it is important to note that the output of a code cell comes from any print statements in the cell, as well as the value of the last line, whether that is a variable, a function call, or some other expression.
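    For example, a cell like the following (the values are hypothetical) produces both the printed line and, as its Out[ ] value, the result of the final expression:

```python
# In a notebook cell, print() output appears below the cell, and the value
# of the last expression is additionally displayed as the cell's Out[ ] value.
x = 5
print("x is", x)   # shown via print
x * 2              # last expression: Jupyter would display this as Out[n]
```

    Note that in a plain Python script the value of that final expression is simply discarded; the automatic display of the last expression is Jupyter/IPython behaviour.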

    Markdown

    Markdown is a lightweight markup language for formatting plain text. Its syntax has a one-to-one correspondence with HTML tags. As this article was written in a Jupyter Notebook, all of the narrative text and images you see are written in Markdown. Let’s go through the basics with the following example.

    # This is a level 1 heading 
    ### This is a level 3 heading
    This is how you write some plain text that would form a paragraph.
    You can make text bold by enclosing it in "**" or "__", and italic by enclosing it in "*" or "_". 
    Paragraphs are separated by an empty line.
    * We can include lists.
      * And also indent them.
    
    1. Getting Numbered lists is
    2. Also easy.
    
    [To include hyperlinks enclose the text with square braces and then add the link url in round braces](http://www.example.com)
    
    Inline code uses single backticks: `foo()`, and code blocks use triple backticks:
    
    ``` 
    foo()
    ```
    
    Or can be indented by 4 spaces: 
    
        foo()
        
    And finally, adding images is easy: ![Online Image](https://www.example.com/image.jpg) or ![Local Image](img/image.jpg) or ![Image Attachment](attachment:image.jpg)

    We have 3 different ways to attach images

    • Link the URL of an image from the web.
    • Use relative path of an image present locally
    • Add an attachment to the notebook by using the “Edit > Insert Image” option; this method converts the image into a string and stores it inside your notebook.

    Note that adding an image as an attachment will make the .ipynb file much larger because it is stored inside the notebook in a string format.

    There are a lot more features available in Markdown. To learn more about markdown, you can refer to the official guide from the creator, John Gruber, on his website.

    Kernels

    Every notebook runs on top of a kernel. Whenever you execute a code cell, its content is executed within the kernel, and any output is returned to the cell for display. The kernel’s state applies to the document as a whole, not to individual cells, and it persists over time.

    For example, if you declare a variable or import some libraries in a cell, they will be accessible in other cells. Now let’s understand this with the help of an example. First we’ll import a Python package and then define a function.

    import os, binascii
    def sum(x,y):
      return x+y

    Once the cell above is executed, we can reference os, binascii and sum in any other cell.

    rand_hex_string = binascii.b2a_hex(os.urandom(15)) 
    print(rand_hex_string)
    x = 1
    y = 2
    z = sum(x,y)
    print('Sum of %d and %d is %d' % (x, y, z))

    The output should look something like this:

    c84766ca4a3ce52c3602bbf02ad1f7
    Sum of 1 and 2 is 3

    The execution flow of a notebook is generally top-to-bottom, but it’s common to go back and make changes. The execution order shown to the left of each cell, such as In [2], lets you know whether any of your cells have stale output. Additionally, there are several options in the Kernel menu that often come in very handy:

    • Restart: restarts the kernel, thus clearing all the variables etc that were defined.
    • Restart & Clear Output: same as above but will also wipe the output displayed below your code cells.
    • Restart & Run All: same as above but will also run all your cells in order from top-to-bottom.
    • Interrupt: If your kernel is ever stuck on a computation and you wish to stop it, you can choose the Interrupt option.

    Naming Your Notebooks

    It is always a best practice to give a meaningful name to your notebooks. You can rename your notebooks from the notebook app itself by double-clicking on the existing name at the top left corner. You can also use the dashboard or the file browser to rename the notebook file. We’ll head back to the dashboard to rename the file we created earlier, which will have the default notebook file name Untitled.ipynb.

    Now that you are back on the dashboard, you can simply select your notebook and click “Rename” in the dashboard controls.

    Jupyter notebook - Rename

    Shutting Down your Notebooks

    We can shutdown a running notebook by selecting “File > Close and Halt” from the notebook menu. However, we can also shutdown the kernel either by selecting the notebook in the dashboard and clicking “Shutdown” or by going to “Kernel > Shutdown” from within the notebook app (see images below).

    Shutdown the kernel from Notebook App:

     

    Shutdown the kernel from Dashboard:

     

     

    Sharing Your Notebooks

    When we talk about sharing a notebook, there are two things that might come to mind. In most cases, we want to share the end result of the work, i.e. a non-interactive, pre-rendered version of the notebook, much like this article; in other cases, we might want to share the code and collaborate on notebooks with the aid of a version control system such as Git, which is also possible.

    Before You Start Sharing

    The state of a shared notebook, including the output of any code cells, is maintained when it is exported to a file. Hence, to ensure that the notebook is share-ready, we should follow the steps below before sharing.

    1. Click “Cell > All Output > Clear”
    2. Click “Kernel > Restart & Run All”
    3. After the code cells have finished executing, validate the output. 

    This ensures that your notebooks don’t have a stale state or contain intermediary output.

    Exporting Your Notebooks

    Jupyter has built-in support for exporting to HTML, Markdown and PDF as well as several other formats, which you can find from the menu under “File > Download as” . It is a very convenient way to share the results with others. But if sharing exported files isn’t suitable for you, there are some other popular methods of sharing the notebooks directly on the web.

    • GitHub
    • Home to over 2 million notebooks, GitHub is the most popular place for sharing Jupyter projects with the world. GitHub has integrated support for rendering .ipynb files directly, both in repositories and in gists on its website.
    • You can follow the GitHub guides to get started on your own.
    • Nbviewer
    • NBViewer is one of the most prominent notebook renderers on the web.
    • It also renders notebooks from GitHub and other code storage platforms and provides a shareable URL. nbviewer.jupyter.org provides a free rendering service as part of Project Jupyter.

    Data Analysis in a Jupyter Notebook

    Now that we’ve looked at what a Jupyter Notebook is, it’s time to look at how they’re used in practice, which should give you a clearer understanding of why they are so popular. As we walk through the sample analysis, you will be able to see how the flow of a notebook makes the task intuitive to work through ourselves, as well as for others to understand when we share it with them. We also hope to learn some of the more advanced features of Jupyter notebooks along the way. So let’s get started, shall we?

    Analyzing the Revenue and Profit Trends of Fortune 500 US companies from 1955-2013

    So, let’s say you’ve been tasked with finding out how the revenues and profits of the largest companies in the US changed historically over the past 60 years. We shall begin by gathering the data to analyze.

    Gathering the DataSet

    The data set that we will be using to analyze the revenue and profit trends of fortune 500 companies has been sourced from Fortune 500 Archives and Top Foreign Stocks. For your ease we have compiled the data from both the sources and created a CSV for you.

    Importing the Required Dependencies

    Let’s start off with a code cell specifically for imports and initial setup, so that if we need to add or change anything at a later point in time, we can simply edit and re-run the cell without having to change the other cells. We can start by importing Pandas to work with our data, Matplotlib to plot the charts and Seaborn to make our charts prettier.

    import pandas as pd
    import matplotlib.pyplot as plt
    import seaborn as sns
    import sys

    Set the design styles for the charts

    sns.set(style="darkgrid")

    Load the Input Data to be Analyzed

    As we plan on using pandas to aid in our analysis, let’s begin by importing our input data set into the most widely used pandas data-structure, DataFrame.

    df = pd.read_csv('../data/fortune500_1955_2013.csv')

    Now that we are done loading our input dataset, let’s see what it looks like!

    df.head()

    Looking good. Each row corresponds to a single company per year and all the columns we need are present.

    Exploring the Dataset

    Next, let’s begin by exploring our data set. We will primarily look into the number of records imported and the data types for each of the different columns that were imported.

    We have 500 data points per year, and the data set has records between 1955 and 2012, so the total number of records in the dataset looks good!

    Now, let’s move on to the individual data types of each column.

    df.columns = ['year', 'rank', 'company', 'revenue', 'profit']
    len(df)

    df.dtypes

    As we can see from the output of the above command, the data types of the revenue and profit columns are shown as object, whereas the expected data type is float. This indicates that there may be some non-numeric values in those columns.

    So let’s first look at the details of imported values for revenue.

    non_numeric_revenues = df.revenue.str.contains('[^0-9.-]')
    df.loc[non_numeric_revenues].head()

    print("Number of Non-numeric revenue values: ", len(df.loc[non_numeric_revenues]))

    Number of Non-numeric revenue values:	1

    print("List of distinct Non-numeric revenue values: ", set(df.revenue[non_numeric_revenues]))

    List of distinct Non-numeric revenue values:	{'N.A.'}

    The number of non-numeric revenue values is tiny compared to the total size of our data set, so it is easiest to simply remove those rows.

    df = df.loc[~non_numeric_revenues]
    df.revenue = df.revenue.apply(pd.to_numeric)
    df.dtypes

    Now that the data type issue for column revenue is resolved, let’s move on to values in column profit.

    non_numeric_profits = df.profit.str.contains('[^0-9.-]')
    df.loc[non_numeric_profits].head()

    print("Number of Non-numeric profit values: ", len(df.loc[non_numeric_profits]))

    Number of Non-numeric profit values:	374

    print("List of distinct Non-numeric profit values: ", set(df.profit[non_numeric_profits]))

    List of distinct Non-numeric profit values:	{'N.A.'}

    The number of non-numeric profit values is around 1.5% of our data set: a small percentage, but not completely inconsequential. Let’s take a quick look at how these values are distributed; if the rows having N.A. values are uniformly distributed over the years, then it would be safe to just remove them.

    bin_sizes, _, _ = plt.hist(df.year[non_numeric_profits], bins=range(1955, 2013))

    As observed from the histogram above, the number of invalid values in a single year is mostly fewer than 25; removing these values would account for less than 4% of the data, as there are 500 data points per year. Also, other than a surge around 1990, most years have fewer than 10 missing values. Let’s assume that this is acceptable for us and move ahead with removing these rows.

    df = df.loc[~non_numeric_profits]
    df.profit = df.profit.apply(pd.to_numeric)

    We should validate if that worked!

    df.dtypes

    Hurray! Our dataset has been cleaned up.

    Time to Plot the graphs

    Let’s begin by defining a function to plot the graph, set the title, and add labels for the x-axis and y-axis.

    # function to plot the graphs for average revenues or profits of the fortune 500 companies against year
    def plot(x, y, ax, title, y_label):
        ax.set_title(title)
        ax.set_ylabel(y_label)
        ax.plot(x, y)
        ax.margins(x=0, y=0)
        
    # function to plot the graphs with superimposed standard deviation    
    def plot_with_std(x, y, stds, ax, title, y_label):
        ax.fill_between(x, y - stds, y + stds, alpha=0.2)
        plot(x, y, ax, title, y_label)

    Let’s plot the average profit by year and average revenue by year using Matplotlib.

    group_by_year = df.loc[:, ['year', 'revenue', 'profit']].groupby('year')
    avgs = group_by_year.mean()
    x = avgs.index
    y = avgs.profit
    
    fig, ax = plt.subplots()
    plot(x, y, ax, 'Increase in mean Fortune 500 company profits from 1955 to 2013', 'Profit (millions)')

    y2 = avgs.revenue
    fig, ax = plt.subplots()
    plot(x, y2, ax, 'Increase in mean Fortune 500 company revenues from 1955 to 2013', 'Revenue (millions)')

    Woah! The chart of profits has some huge ups and downs. These seem to correspond to the early 1990s recession, the dot-com bubble in the early 2000s, and the Great Recession in 2008.

    The revenues, on the other hand, grow constantly and are comparatively stable. That stability helps explain how the average profits recovered so quickly after the staggering drops caused by the recessions.

    Let’s also take a look at how the average profits and revenues compare to their standard deviations.

    fig, (ax1, ax2) = plt.subplots(ncols=2)
    title = 'Increase in mean and std Fortune 500 company %s from 1955 to 2013'
    stds1 = group_by_year.std().profit.values
    stds2 = group_by_year.std().revenue.values
    plot_with_std(x, y.values, stds1, ax1, title % 'profits', 'Profit (millions)')
    plot_with_std(x, y2.values, stds2, ax2, title % 'revenues', 'Revenue (millions)')
    fig.set_size_inches(14, 4)
    fig.tight_layout()

     

    That’s astonishing, the standard deviations are huge. Some companies are making billions while some others are losing as much, and the risk certainly has increased along with rising profits and revenues over the years. Although we could keep on playing around with our data set and plot plenty more charts to analyze, it is time to bring this article to a close.

    Conclusion

    As part of this article we have seen various features of the Jupyter notebooks, from basics like installation, creating, and running code cells to more advanced features like plotting graphs. The power of Jupyter Notebooks to promote a productive working experience and provide an ease of use is evident from the above example, and I do hope that you feel confident to begin using Jupyter Notebooks in your own work and start exploring more advanced features. You can read more about data analytics using Pandas here.

    If you’d like to further explore and want to look at more examples, Jupyter has put together A Gallery of Interesting Jupyter Notebooks that you may find helpful and the Nbviewer homepage provides a lot of examples for further references. Find the entire code here on Github.

  • Continuous Deployment with Azure Kubernetes Service, Azure Container Registry & Jenkins

    Introduction

    Containerization has taken the application development world by storm. Kubernetes has become the standard way of deploying new containerized distributed applications; used by the largest enterprises in a wide range of industries for mission-critical tasks, it has become one of the biggest open-source success stories.

    Although Google Cloud has been providing Kubernetes as a service since November 2014 (it started as a beta project), Microsoft with AKS (Azure Kubernetes Service) and Amazon with EKS (Elastic Kubernetes Service) only jumped onto the scene in the second half of 2017.

    Wrapper tools were available prior to these services, for example kops on AWS and Azure Container Service on Azure. These tools would help a user create a Kubernetes cluster, but management and maintenance (like monitoring and upgrades) required significant effort.

    Azure Container Registry:

    With container demand growing, there is an ever-present need in the market for storing and protecting container images. Microsoft provides a geo-replicated private registry as a service, named Azure Container Registry.

    Azure Container Registry is a registry offering from Microsoft for hosting container images privately. It integrates well with orchestrators like Azure Container Service, including Docker Swarm, DC/OS, and the new Azure Kubernetes service. Moreover, ACR  provides capabilities such as Azure Active Directory-based authentication, webhook support, and delete operations.

    The coolest feature provided is geo-replication. It creates multiple copies of your image and distributes them across the globe, so that a container, when spawned, pulls the image from the nearest replica.

    Although Microsoft has good documentation on how to set up ACR  in your Azure Subscription, we did encounter some issues and hence decided to write a blog on the precautions and steps required to configure the Registry in the correct manner.

    Note: We tried this using a free trial account. You can set it up by referring to the following link.

    Prerequisites:

    • Make sure you have resource groups created in the supported region.
      Supported Regions: eastus, westeurope, centralus, canadacentral, canadaeast
    • If you are using Azure CLI for operations please make sure you use the version: 2.0.23 or 2.0.25 (This was the latest version at the time of writing this blog)

    Steps to install Azure CLI 2.0.23 or 2.0.25 (ubuntu 16.04 workstation):

    echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ wheezy main" | sudo tee /etc/apt/sources.list.d/azure-cli.list
    sudo apt-key adv --keyserver packages.microsoft.com --recv-keys 52E16F86FEE04B979B07E28DB02C46DF417A0893
    sudo apt-get install apt-transport-https
    sudo apt-get update && sudo apt-get install azure-cli
    
    Install a specific version:
    
    sudo apt install azure-cli=2.0.23-1
    sudo apt install azure-cli=2.0.25-1

    Steps for Container Registry Setup:

    • Login to your Azure Account:
    az login --username <USERNAME> --password <PASSWORD>

    • Create a resource group:
    az group create --name <RESOURCE-GROUP-NAME>  --location eastus
    Example : az group create --name acr-rg  --location eastus

    • Create a Container Registry:
    az acr create --resource-group <RESOURCE-GROUP-NAME> --name <CONTAINER-REGISTRY-NAME> --sku Basic --admin-enabled true
    Example : az acr create --resource-group acr-rg --name testacr --sku Basic --admin-enabled true

    Note: SKU defines the storage available for the registry for type Basic the storage available is 10GB, 1 WebHook and the billing amount is 11 Rs/day.

    For detailed information on the different SKU available visit the following link

    • Login to the registry :
    az acr login --name <CONTAINER-REGISTRY-NAME>
    Example :az acr login --name testacr

    • Sample Dockerfile for a Node application:
    FROM node:carbon
    # Create app directory
    WORKDIR /usr/src/app
    # Install app dependencies
    COPY package*.json ./
    RUN npm install
    # Bundle app source
    COPY . .
    EXPOSE 8080
    CMD [ "npm", "start" ]

    • Build the docker image :
    docker build -t <image-name>:<tag> .
    Example : docker build -t base:node8 .

    • Get the login server value for your ACR :
    az acr list --resource-group acr-rg --query "[].{acrLoginServer:loginServer}" --output table
    Output  :testacr.azurecr.io

    • Tag the image with the Login Server Value:
      Note: Get the image ID from docker images command

    Example:

    docker tag image-id testacr.azurecr.io/base:node8

    • Push the image to the Azure Container Registry:

    Example:

    docker push testacr.azurecr.io/base:node8

    Microsoft does provide a GUI option to create the ACR.

    • List Images in the Registry:

    Example:

    az acr repository list --name testacr --output table

    • List tags for the Images:

    Example:

    az acr repository show-tags --name testacr --repository <name> --output table

    • How to use the ACR image in Kubernetes deployment: Use the login Server Name + the image name

    Example :

    containers:
    - name: demo
      image: testacr.azurecr.io/base:node8

    Azure Kubernetes Service

    Microsoft released the public preview of Managed Kubernetes for Azure Container Service (AKS) on October 24, 2017. This service simplifies the deployment, management, and operations of Kubernetes. It features an Azure-hosted control plane, automated upgrades, self-healing, and easy scaling.

    Similar to Google’s GKE and Amazon’s EKS, this new service gives you access to the worker nodes only; the master is managed by the cloud provider. For more information visit the following link.

    Let’s now get our hands dirty and deploy an AKS infrastructure to play with:

    • Enable AKS preview for your Azure Subscription: At the time of writing this blog, AKS is in preview mode, it requires a feature flag on your subscription.
    az provider register -n Microsoft.ContainerService

    • Kubernetes cluster creation command: Note: A new, separate resource group should be created for the Kubernetes service. Since the service is in preview, it is available only in certain regions.

    Make sure you create the resource group in one of the following regions:

    eastus, westeurope, centralus, canadacentral, canadaeast
    az group create --name <RESOURCE-GROUP> --location eastus
    Example: az group create --name aks-rg --location eastus
    az aks create --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME> --node-count 2 --generate-ssh-keys
    Example: az aks create --resource-group aks-rg --name akscluster --node-count 2 --generate-ssh-keys
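    Since the preview was limited to those five regions, a quick client-side check before running az group create can avoid a failed deployment. This is a sketch in plain shell (the function name is an assumption, not an az feature):

    ```shell
    # Preview regions listed above; validate a region locally before calling
    # az group create (pure shell, no az required).
    AKS_PREVIEW_REGIONS="eastus westeurope centralus canadacentral canadaeast"

    region_supported() {
      case " ${AKS_PREVIEW_REGIONS} " in
        *" $1 "*) echo "supported" ;;
        *)        echo "unsupported" ;;
      esac
    }

    region_supported eastus      # prints "supported"
    region_supported westus2     # prints "unsupported"
    ```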

    Example with different arguments :

    Create a Kubernetes cluster with a specific version.

    az aks create -g MyResourceGroup -n MyManagedCluster --kubernetes-version 1.8.1

    Create a Kubernetes cluster with a larger node pool.

    az aks create -g MyResourceGroup -n MyManagedCluster --node-count 7

    Install the kubectl CLI:

    To connect to the Kubernetes cluster from a client machine, the kubectl command-line client is required.

    sudo az aks install-cli

    Note: If you’re using Azure Cloud Shell, kubectl is already installed. If you want to install it locally, run the above command.

    • To configure kubectl to connect to your Kubernetes cluster :
    az aks get-credentials --resource-group=<RESOURCE-GROUP-NAME> --name=<CLUSTER-NAME>

    Example:

    az aks get-credentials --resource-group=aks-rg --name=akscluster

    • Verify the connection to the cluster :
    kubectl get nodes -o wide 

    • For all the command line features available for Azure check the link: https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest

    We encountered a few issues while setting up the AKS cluster at the time of writing this blog. Here they are, along with the workaround/fix:

    az aks create --resource-group aks-rg --name akscluster  --node-count 2 --generate-ssh-keys

    Error: Operation failed with status: ‘Bad Request’.

    Details: Resource provider registrations for Microsoft.Compute, Microsoft.Storage, and Microsoft.Network are needed, so we need to enable them.

    Fix: If you are using a trial account, click on Subscriptions and check whether the following providers are registered:

    • Microsoft.Compute
    • Microsoft.Storage
    • Microsoft.Network
    • Microsoft.ContainerRegistry
    • Microsoft.ContainerService

    Error: We encountered the following open issues at the time of writing this blog.

    1. Issue-1
    2. Issue-2
    3. Issue-3

    Jenkins setup for CI/CD with ACR, AKS

    Microsoft provides a solution template which will install the latest stable Jenkins version on a Linux (Ubuntu 14.04 LTS) VM along with tools and plugins configured to work with Azure. This includes:

    • git for source control
    • Azure Credentials plugin for connecting securely
    • Azure VM Agents plugin for elastic build, test and continuous integration
    • Azure Storage plugin for storing artifacts
    • Azure CLI to deploy apps using scripts

    Refer to the link below to bring up the instance.

    Pipeline plan for Spinning up a Nodejs Application using ACR – AKS – Jenkins

    What the pipeline accomplishes :

    Stage 1:

    The code gets pushed to GitHub. The Jenkins job is triggered automatically, and the Dockerfile is checked out from GitHub.

    Stage 2:

    Docker builds an image from the Dockerfile, and the image is tagged with the build number. Additionally, the latest tag is attached to the image for the containers to use.

    Stage 3:

    We have default deployment and service YAML files stored on the Jenkins server. Jenkins makes a copy of the default YAML files, makes the necessary changes according to the build, and puts them in a separate folder.

    Stage 4:

    kubectl was configured on the Jenkins server when AKS was initially set up. The YAML files are fed to the kubectl utility, which in turn creates the pods and services.
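    Stage 3 above can be sketched as a small shell step. The file names and the BUILD_TAG placeholder are assumptions for illustration; in a real job, Jenkins exports BUILD_NUMBER automatically:

    ```shell
    # Sketch of stage 3: copy the default deployment YAML and substitute the
    # image tag with the Jenkins build number.
    BUILD_NUMBER=42   # set here for illustration; Jenkins provides this

    # Stand-in for the default YAML stored on the Jenkins server.
    cat > default-deployment.yaml <<'EOF'
    image: testacr.azurecr.io/demo:BUILD_TAG
    EOF

    mkdir -p "build-${BUILD_NUMBER}"
    sed "s/BUILD_TAG/${BUILD_NUMBER}/" default-deployment.yaml \
      > "build-${BUILD_NUMBER}/deployment.yaml"
    cat "build-${BUILD_NUMBER}/deployment.yaml"
    ```

    The templated files in the per-build folder are then handed to kubectl, as described in stage 4.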

    Sample Jenkins pipeline code :

    node {
      // Checkout stage
      stage('Checkout the Dockerfile from GitHub') {
        git branch: 'docker-file', credentialsId: 'git_credentials', url: 'https://gitlab.com/demo.git'
      }
      // Build and push to ACR
      stage('Build the Image and Push to Azure Container Registry') {
        app = docker.build('testacr.azurecr.io/demo')
        withDockerRegistry([credentialsId: 'acr_credentials', url: 'https://testacr.azurecr.io']) {
          app.push("${env.BUILD_NUMBER}")
          app.push('latest')
        }
      }
      stage('Build the Kubernetes YAML Files for New App') {
        // <The code here will differ depending on the YAMLs used for the application>
      }
      stage('Deploying the App on Azure Kubernetes Service') {
        app = docker.image('testacr.azurecr.io/demo:latest')
        withDockerRegistry([credentialsId: 'acr_credentials', url: 'https://testacr.azurecr.io']) {
          app.pull()
          sh "kubectl create -f ."
        }
      }
    }

    What we achieved:

    • We managed to create a private Docker registry on Azure using the ACR feature with az-cli 2.0.25.
    • We were able to spin up a private Kubernetes cluster on Azure with 2 nodes.
    • We set up Jenkins using a pre-cooked template that had all the plugins necessary for communication with ACR and AKS.
    • We orchestrated a continuous deployment pipeline in Jenkins that uses Docker features.
  • Extending Kubernetes APIs with Custom Resource Definitions (CRDs)

    Introduction:

    Custom resource definitions (CRDs) are a powerful feature introduced in Kubernetes 1.7 that enables users to add their own custom objects to a Kubernetes cluster and use them like any other native Kubernetes object. In this blog post, we will see how we can add a custom resource to a Kubernetes cluster using the command line as well as the Golang client library, thereby also learning how to interact with a Kubernetes cluster programmatically.

    What is a Custom Resource Definition (CRD)?

    In the Kubernetes API, every resource is an endpoint that stores API objects of a certain kind. For example, the built-in service resource contains a collection of service objects. The standard Kubernetes distribution ships with many built-in API objects/resources. CRDs come into the picture when we want to introduce our own object into the Kubernetes cluster to fulfill our requirements. Once we create a CRD in Kubernetes, we can use it like any other native Kubernetes object, thus leveraging all the features of Kubernetes like its CLI, security, API services, RBAC, etc.

    The custom resource created is also stored in the etcd cluster with proper replication and lifecycle management. CRD allows us to use all the functionalities provided by a Kubernetes cluster for our custom objects and saves us the overhead of implementing them on our own.

    How to register a CRD using command line interface (CLI)

    Step-1: Create a CRD definition file sslconfig-crd.yaml

    apiVersion: "apiextensions.k8s.io/v1beta1"
    kind: "CustomResourceDefinition"
    metadata:
      name: "sslconfigs.blog.velotio.com"
    spec:
      group: "blog.velotio.com"
      version: "v1alpha1"
      scope: "Namespaced"
      names:
        plural: "sslconfigs"
        singular: "sslconfig"
        kind: "SslConfig"
      validation:
        openAPIV3Schema:
          required: ["spec"]
          properties:
            spec:
              required: ["cert","key","domain"]
              properties:
                cert:
                  type: "string"
                  minimum: 1
                key:
                  type: "string"
                  minimum: 1
                domain:
                  type: "string"
                  minimum: 1 

    Here we are creating a custom resource definition for an object of kind SslConfig. This object allows us to store the SSL configuration information for a domain. As we can see under the validation section, specifying the cert, key, and domain is mandatory for creating objects of this kind; along with these, we can store other information, like the provider of the certificate. The name metadata that we specify must be spec.names.plural + "." + spec.group.

    An API group (blog.velotio.com here) is a collection of API objects that are logically related to each other. We have also specified a version for our custom objects (spec.version); as the definition of the object is expected to change/evolve in the future, it is better to start with alpha so that users of the object know that the definition might change later. In the scope, we have specified Namespaced; by default, a custom resource is cluster scoped.
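    Since metadata.name must equal spec.names.plural + "." + spec.group exactly, a quick sanity check before applying the manifest can catch mismatches early. This is an illustrative shell snippet using the values from this post, not a kubectl feature:

    ```shell
    # metadata.name of a CRD must be spec.names.plural + "." + spec.group.
    PLURAL="sslconfigs"
    GROUP="blog.velotio.com"
    CRD_NAME="sslconfigs.blog.velotio.com"

    if [ "$CRD_NAME" = "${PLURAL}.${GROUP}" ]; then
      echo "CRD name is consistent"
    else
      echo "CRD name mismatch" >&2
    fi
    ```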

    # kubectl create -f sslconfig-crd.yaml
    # kubectl get crd
    NAME                          AGE
    sslconfigs.blog.velotio.com   5s

    Step-2:  Create objects using the definition we created above

    apiVersion: "blog.velotio.com/v1alpha1"
    kind: "SslConfig"
    metadata:
      name: "sslconfig-velotio.com"
    spec:
      cert: "my cert file"
      key: "my private key"
      domain: "*.velotio.com"
      provider: "digicert"

    # kubectl create -f crd-obj.yaml
    # kubectl get sslconfig
    NAME                    AGE
    sslconfig-velotio.com   12s

    Along with the mandatory fields cert, key, and domain, we have also stored information about the provider (certifying authority) of the cert.
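    Because the OpenAPI validation above rejects objects missing cert, key, or domain, a simple client-side pre-check of a manifest can fail fast before kubectl create runs. This is a sketch; detecting fields with grep is naive and only meant to illustrate the required-fields rule:

    ```shell
    # Recreate the object manifest from this post and check the required
    # spec fields before submitting it to the cluster.
    cat > crd-obj.yaml <<'EOF'
    apiVersion: "blog.velotio.com/v1alpha1"
    kind: "SslConfig"
    metadata:
      name: "sslconfig-velotio.com"
    spec:
      cert: "my cert file"
      key: "my private key"
      domain: "*.velotio.com"
      provider: "digicert"
    EOF

    missing=0
    for field in cert key domain; do
      grep -q "^  ${field}:" crd-obj.yaml || { echo "missing: $field"; missing=1; }
    done
    [ "$missing" -eq 0 ] && echo "all required fields present"
    ```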

    How to register a CRD programmatically using client-go

    The client-go project provides packages with which we can easily create a Go client and access the Kubernetes cluster. To create a client, we first need to create a connection with the API server.
    How we connect to the API server depends on whether we will be accessing it from within the cluster (our code running in the Kubernetes cluster itself) or from outside the cluster (locally).

    If the code is running outside the cluster, then we need to provide either the path of the config file or the URL of the Kubernetes proxy server running on the cluster.

    kubeconfig := filepath.Join(
        os.Getenv("HOME"), ".kube", "config",
    )
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }

    OR

    var (
        // Set during build
        version string

        proxyURL = flag.String("proxy", "",
            `If specified, it is assumed that a kubectl proxy server is running on the
            given url and creates a proxy client. In case it is not given InCluster
            kubernetes setup will be used`)
    )

    if *proxyURL != "" {
        config, err = clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
            &clientcmd.ClientConfigLoadingRules{},
            &clientcmd.ConfigOverrides{
                ClusterInfo: clientcmdapi.Cluster{
                    Server: *proxyURL,
                },
            }).ClientConfig()
        if err != nil {
            glog.Fatalf("error creating client configuration: %v", err)
        }
    }

    When the code is to be run as a part of the cluster then we can simply use

    import "k8s.io/client-go/rest"
    ...
    config, err := rest.InClusterConfig()

    Once the connection is established, we can use it to create a clientset. For accessing Kubernetes objects, the clientset from the client-go project is generally used, but for CRD-related operations we need to use the clientset from the apiextensions-apiserver project:

    apiextension "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"

    kubeClient, err := apiextension.NewForConfig(config)
    if err != nil {
        glog.Fatalf("Failed to create client: %v.", err)
    }

    Now we can use the client to make the API call which will create the CRD for us.

    package v1alpha1
    
    import (
    	"reflect"
    
    	apiextensionv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
    	apiextension "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )
    
    const (
    	CRDPlural   string = "sslconfigs"
    	CRDGroup    string = "blog.velotio.com"
    	CRDVersion  string = "v1alpha1"
    	FullCRDName string = CRDPlural + "." + CRDGroup
    )
    
    func CreateCRD(clientset apiextension.Interface) error {
    	crd := &apiextensionv1beta1.CustomResourceDefinition{
    		ObjectMeta: meta_v1.ObjectMeta{Name: FullCRDName},
    		Spec: apiextensionv1beta1.CustomResourceDefinitionSpec{
    			Group:   CRDGroup,
    			Version: CRDVersion,
    			Scope:   apiextensionv1beta1.NamespaceScoped,
    			Names: apiextensionv1beta1.CustomResourceDefinitionNames{
    				Plural: CRDPlural,
    				Kind:   reflect.TypeOf(SslConfig{}).Name(),
    			},
    		},
    	}
    
    	_, err := clientset.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd)
    	if err != nil && apierrors.IsAlreadyExists(err) {
    		return nil
    	}
    	return err
    }

    In the CreateCRD function, we first create the definition of our custom object and then pass it to the Create method, which creates it in our cluster. Just like when we created our definition using the CLI, here we also set parameters like version, group, kind, etc.

    Once our definition is ready, we can create objects of its type just like we did earlier using the CLI. First, we need to define our object.

    package v1alpha1
    
    import meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    
    type SslConfig struct {
    	meta_v1.TypeMeta   `json:",inline"`
    	meta_v1.ObjectMeta `json:"metadata"`
    	Spec               SslConfigSpec   `json:"spec"`
    	Status             SslConfigStatus `json:"status,omitempty"`
    }
    type SslConfigSpec struct {
    	Cert   string `json:"cert"`
    	Key    string `json:"key"`
    	Domain string `json:"domain"`
    }
    
    type SslConfigStatus struct {
    	State   string `json:"state,omitempty"`
    	Message string `json:"message,omitempty"`
    }
    
    type SslConfigList struct {
    	meta_v1.TypeMeta `json:",inline"`
    	meta_v1.ListMeta `json:"metadata"`
    	Items            []SslConfig `json:"items"`
    }

    Kubernetes API conventions suggest that each object must have two nested object fields that govern its configuration: the object spec and the object status. Objects must also have metadata associated with them. The custom objects that we define here comply with these standards. It is also recommended to create a list type for every type, so we have also created an SslConfigList struct.

    Now we need to write a function which will create a custom client which is aware of the new resource that we have created.

    package v1alpha1
    
    import (
    	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/runtime"
    	"k8s.io/apimachinery/pkg/runtime/schema"
    	"k8s.io/apimachinery/pkg/runtime/serializer"
    	"k8s.io/client-go/rest"
    )
    
    var SchemeGroupVersion = schema.GroupVersion{Group: CRDGroup, Version: CRDVersion}
    
    func addKnownTypes(scheme *runtime.Scheme) error {
    	scheme.AddKnownTypes(SchemeGroupVersion,
    		&SslConfig{},
    		&SslConfigList{},
    	)
    	meta_v1.AddToGroupVersion(scheme, SchemeGroupVersion)
    	return nil
    }
    
    func NewClient(cfg *rest.Config) (*SslConfigV1Alpha1Client, error) {
    	scheme := runtime.NewScheme()
    	SchemeBuilder := runtime.NewSchemeBuilder(addKnownTypes)
    	if err := SchemeBuilder.AddToScheme(scheme); err != nil {
    		return nil, err
    	}
    	config := *cfg
    	config.GroupVersion = &SchemeGroupVersion
    	config.APIPath = "/apis"
    	config.ContentType = runtime.ContentTypeJSON
    	config.NegotiatedSerializer = serializer.DirectCodecFactory{CodecFactory: serializer.NewCodecFactory(scheme)}
    	client, err := rest.RESTClientFor(&config)
    	if err != nil {
    		return nil, err
    	}
    	return &SslConfigV1Alpha1Client{restClient: client}, nil
    }

    Building the custom client library

    Once we have registered our custom resource definition with the Kubernetes cluster, we can create objects of its type using the Kubernetes CLI as we did earlier. But to write controllers for these objects, or to develop custom functionality around them, we also need to build a client library through which we can access them from the Go API. For native Kubernetes objects, this type of library is provided for each object.

    package v1alpha1
    
    import (
    	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/rest"
    )
    
    func (c *SslConfigV1Alpha1Client) SslConfigs(namespace string) SslConfigInterface {
    	return &sslConfigclient{
    		client: c.restClient,
    		ns:     namespace,
    	}
    }
    
    type SslConfigV1Alpha1Client struct {
    	restClient rest.Interface
    }
    
    type SslConfigInterface interface {
    	Create(obj *SslConfig) (*SslConfig, error)
    	Update(obj *SslConfig) (*SslConfig, error)
    	Delete(name string, options *meta_v1.DeleteOptions) error
    	Get(name string) (*SslConfig, error)
    }
    
    type sslConfigclient struct {
    	client rest.Interface
    	ns     string
    }
    
    func (c *sslConfigclient) Create(obj *SslConfig) (*SslConfig, error) {
    	result := &SslConfig{}
    	err := c.client.Post().
    		Namespace(c.ns).Resource("sslconfigs").
    		Body(obj).Do().Into(result)
    	return result, err
    }
    
    func (c *sslConfigclient) Update(obj *SslConfig) (*SslConfig, error) {
    	result := &SslConfig{}
    	err := c.client.Put().
    		Namespace(c.ns).Resource("sslconfigs").
    		Body(obj).Do().Into(result)
    	return result, err
    }
    
    func (c *sslConfigclient) Delete(name string, options *meta_v1.DeleteOptions) error {
    	return c.client.Delete().
    		Namespace(c.ns).Resource("sslconfigs").
    		Name(name).Body(options).Do().
    		Error()
    }
    
    func (c *sslConfigclient) Get(name string) (*SslConfig, error) {
    	result := &SslConfig{}
    	err := c.client.Get().
    		Namespace(c.ns).Resource("sslconfigs").
    		Name(name).Do().Into(result)
    	return result, err
    }

    We can add more methods like Watch, UpdateStatus, etc.; their implementation will be similar to the methods we have defined above. To look at the methods available for various Kubernetes objects like Pod, Node, etc., we can refer to the v1 package.

    Putting all things together

    Now, in our main function, we will put everything together.

    package main
    
    import (
    	"flag"
    	"fmt"
    	"time"
    
    	"blog.velotio.com/crd-blog/v1alpha1"
    	"github.com/golang/glog"
    	apiextension "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )
    
    var (
    	// Set during build
    	version string
    
    	proxyURL = flag.String("proxy", "",
    		`If specified, it is assumed that a kubectl proxy server is running on the
    		given url and creates a proxy client. In case it is not given InCluster
    		kubernetes setup will be used`)
    )
    
    func main() {
    
    	flag.Parse()
    	var err error
    
    	var config *rest.Config
    	if *proxyURL != "" {
    		config, err = clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
    			&clientcmd.ClientConfigLoadingRules{},
    			&clientcmd.ConfigOverrides{
    				ClusterInfo: clientcmdapi.Cluster{
    					Server: *proxyURL,
    				},
    			}).ClientConfig()
    		if err != nil {
    			glog.Fatalf("error creating client configuration: %v", err)
    		}
    	} else {
    		if config, err = rest.InClusterConfig(); err != nil {
    			glog.Fatalf("error creating client configuration: %v", err)
    		}
    	}
    
    	kubeClient, err := apiextension.NewForConfig(config)
    	if err != nil {
    		glog.Fatalf("Failed to create client: %v", err)
    	}
    	// Create the CRD
    	err = v1alpha1.CreateCRD(kubeClient)
    	if err != nil {
    		glog.Fatalf("Failed to create crd: %v", err)
    	}
    
    	// Wait for the CRD to be created before we use it.
    	time.Sleep(5 * time.Second)
    
    	// Create a new clientset which include our CRD schema
    	crdclient, err := v1alpha1.NewClient(config)
    	if err != nil {
    		panic(err)
    	}
    
    	// Create a new SslConfig object
    
    	SslConfig := &v1alpha1.SslConfig{
    		ObjectMeta: meta_v1.ObjectMeta{
    			Name:   "sslconfigobj",
    			Labels: map[string]string{"mylabel": "test"},
    		},
    		Spec: v1alpha1.SslConfigSpec{
    			Cert:   "my-cert",
    			Key:    "my-key",
    			Domain: "*.velotio.com",
    		},
    		Status: v1alpha1.SslConfigStatus{
    			State:   "created",
    			Message: "Created, not processed yet",
    		},
    	}
    	// Create the SslConfig object we create above in the k8s cluster
    	resp, err := crdclient.SslConfigs("default").Create(SslConfig)
    	if err != nil {
    		fmt.Printf("error while creating object: %v\n", err)
    	} else {
    		fmt.Printf("object created: %v\n", resp)
    	}
    
    	obj, err := crdclient.SslConfigs("default").Get(SslConfig.ObjectMeta.Name)
    	if err != nil {
    		glog.Infof("error while getting the object %v\n", err)
    	}
    	fmt.Printf("SslConfig Objects Found:\n%v\n", obj)
    	select {}
    }

    Now, if we run our code, our custom resource definition will get created in the Kubernetes cluster, along with an object of its type, just like with the CLI. The Docker image akash125/crdblog is built from the code discussed above; it can be pulled directly from Docker Hub and run in a Kubernetes cluster. After the image runs successfully, the CRD definition that we discussed above will be created in the cluster along with an object of its type. We can verify this using the CLI the way we did earlier; we can also check the logs of the pod running the Docker image. The complete code is available here.

    Conclusion and future work

    We learned how to create a custom resource definition and objects using the Kubernetes command line interface as well as the Golang client. We also learned how to access a Kubernetes cluster programmatically, with which we can build some really cool stuff on Kubernetes. We can now also create custom controllers for our resources, which continuously watch the cluster for various lifecycle events of our objects and take the desired action accordingly. To read more about CRDs, refer to the following links:

  • Tutorial: Developing Complex Plugins for Jenkins

    Introduction

    Recently, I needed to develop a complex Jenkins plugin for a customer in the containers & DevOps space. In this process, I realized that there is a lack of good documentation on Jenkins plugin development and that good information is very hard to find. That’s why I decided to write this blog to share my knowledge of Jenkins plugin development.

    Topics covered in this Blog

    1. Setting up the development environment
    2. Jenkins plugin architecture: Plugin classes and understanding of the source code.
    3. Complex tasks: Tasks like the integration of REST API in the plugin and exposing environment variables through source code.
    4. Plugin debugging and deployment

    So let’s start, shall we?

    1. Setting up the development environment

    I have used Ubuntu 16.04 for this environment, but the steps remain identical for other flavors. The only difference will be in the commands used for each operating system.

    Let me give you a brief list of the requirements:

    1. Compatible JDK: Jenkins plugin development is done in Java, so a compatible JDK is what you need first. JDK 6 and above are supported as per the Jenkins documentation.
    2. Maven: Installation guide. I know many of us don’t like to use Maven, as it downloads stuff over the Internet at runtime, but it’s required. Check this to understand why using Maven is a good idea.
    3. Jenkins: Check this Installation Guide. Obviously, you will need a Jenkins setup – it can be local or hosted on a server/VM.
    4. IDE for development: An IDE like NetBeans, Eclipse, or IntelliJ IDEA is preferred. I have used NetBeans 8.1 for this project.

    Before going forward, please ensure that you have the above prerequisites installed on your system. Jenkins does have official documentation for setting up the environment – Check this. If you would like to use an IDE besides Netbeans, the above document covers that too.

    Let’s start with the creation of your project. I will explain with Maven commands and with use of the IDE as well.

    First, let’s start with the approach of using commands.

    It may be helpful to add the following to your ~/.m2/settings.xml (Windows users will find it at %USERPROFILE%\.m2\settings.xml):

    <settings>
     <pluginGroups>
       <pluginGroup>org.jenkins-ci.tools</pluginGroup>
     </pluginGroups>
    
    <profiles>
       <!-- Give access to Jenkins plugins -->
       <profile>
         <id>jenkins</id>
         <activation>
           <activeByDefault>true</activeByDefault> <!-- change this to false, if you don't like to have it on per default -->
         </activation>
    
         <repositories>
           <repository>
             <id>repo.jenkins-ci.org</id>
             <url>http://repo.jenkins-ci.org/public/</url>
           </repository>
         </repositories>
         
         <pluginRepositories>
           <pluginRepository>
             <id>repo.jenkins-ci.org</id>
             <url>http://repo.jenkins-ci.org/public/</url>
           </pluginRepository>
         </pluginRepositories>
       </profile>
     </profiles>
     
     <mirrors>
       <mirror>
         <id>repo.jenkins-ci.org</id>
         <url>http://repo.jenkins-ci.org/public/</url>
         <mirrorOf>m.g.o-public</mirrorOf>
       </mirror>
     </mirrors>
    </settings>

    This basically lets you use short names in commands e.g. instead of org.jenkins-ci.tools:maven-hpi-plugin:1.61:create, you can use hpi:create. hpi is the packaging style used to deploy the plugins.

    Create the plugin

    $ mvn -U org.jenkins-ci.tools:maven-hpi-plugin:create


    This will ask you a few questions, like the groupId (Maven jargon for the package name) and the artifactId (Maven jargon for the project name), and then create a skeleton plugin from which you can start. This command should create the sample HelloWorldBuilder plugin.

    Command Explanation:

    • -U: Maven needs to update the relevant Maven plugins (check plugin updates).
    • hpi: this prefix specifies that the Jenkins HPI Plugin is being invoked, a plugin that supports the development of Jenkins plugins.
    • create is the goal which creates the directory layout and the POM for your new Jenkins plugin and it adds it to the module list.

    The source code tree will look like this:

    your-project-name
      pom.xml
      src
        main
          java
            package folder (usually consists of groupId and artifactId)
              HelloWorldBuilder.java
          resources
            package folder/HelloWorldBuilder/jelly files

    Run “mvn package”, which compiles all sources, runs the tests, and creates a package – when used by the HPI plugin, it will create an *.hpi file.

    Building the Plugin:

    Run mvn install in the directory where pom.xml resides. This is similar to the mvn package command, but at the end it will create your plugin’s .hpi file, which you can deploy. Simply copy the created .hpi file into the /plugins folder of your Jenkins setup. Restart Jenkins and you should see the plugin on Jenkins.
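    The manual deploy step can be sketched like this; the JENKINS_HOME path and plugin name below are placeholders for illustration (adjust to your setup):

    ```shell
    # Simulate deploying the built .hpi: copy it into Jenkins' plugins folder.
    JENKINS_HOME="demo-jenkins-home"      # placeholder; often /var/lib/jenkins
    mkdir -p target "${JENKINS_HOME}/plugins"
    touch target/my-plugin.hpi            # stand-in for the mvn install artifact
    cp target/*.hpi "${JENKINS_HOME}/plugins/"
    ls "${JENKINS_HOME}/plugins/"
    ```

    After the copy, a Jenkins restart picks up the new plugin.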

    Now let’s see how this can be done with IDE.

    With Netbeans IDE:

    I have used NetBeans for development (Download). Check the JDK version: the latest version, 8.2, works with JDK 8. Once you install NetBeans, install the NetBeans plugin for Jenkins/Stapler development.

    You can now create the plugin via New Project » Maven » Jenkins Plugin.

    This is the same as “mvn -U org.jenkins-ci.tools:maven-hpi-plugin:create” command which should create the simple “HelloWorldBuilder” application.

    NetBeans comes with Maven built in, so even if you don’t have Maven installed on your system, this should work. But you may face an error accessing the Jenkins repo. Remember, we added some configuration settings in settings.xml in the very first step. If you have already added that, you shouldn’t face any problem; if you haven’t, you can add it in NetBeans’ Maven settings.xml, which you can find at: netbeans_installation_path/java/maven/conf/settings.xml

    Now you have your “HelloWorldBuilder” application ready. This is shown as a TODO plugin in NetBeans. Simply run it (F6). This creates the Jenkins instance and runs it on port 8080. If you already have a local Jenkins setup, you need to stop it first; otherwise this will give you an exception. Go to localhost:8080/jenkins and create a simple job. In “Add Build Step,” you should see the “Say Hello World” plugin already there.

    Next, let’s look at how it got there and walk through the source code.

    2. Jenkins plugin architecture and understanding

    Now that we have our sample HelloWorldBuilder plugin ready,  let’s see its components.

    As you may know, a Jenkins plugin has two parts: the Build Step and the Post-Build Step. This sample application is designed to work as a Build Step, which is why you see the “Say Hello World” plugin in the Build Step. I am going to cover the Build Step itself.

    Do you want to develop a Post-Build plugin? Don’t worry, as the two don’t differ much. The difference is only in the classes we extend: for the Build Step, we extend hudson.tasks.Builder, and for Post-Build, hudson.tasks.Recorder; the Descriptor class for the Build Step is BuildStepDescriptor<Builder>, and for Post-Build, BuildStepDescriptor<Publisher>.

    We will go through these classes in detail below:

    hudson.tasks.Builder Class:

    In brief, this simply tells Jenkins that you are writing a Build Step plugin. A full explanation is here. You will see the “perform” method once you extend this class.

    @Override
    public boolean perform(AbstractBuild build, Launcher launcher, BuildListener listener)

    Note that we are not implementing the SimpleBuildStep interface, which is there in the HelloWorldBuilder source code. The perform method for that interface is a bit different from the one given above. My explanation is based on this perform method.

    The perform method is called when you run your build. Through the parameters passed in, you have full control over the configured build, and you can log to the Jenkins console using the listener object. What you should do here is access the values set by the user in the UI and perform the plugin’s activity. Note that this method returns a boolean: true means the build succeeded, and false means the build failed.

    Understanding the Descriptor Class:  

    You will notice a static inner class in your main class named DescriptorImpl. This class is used for handling the configuration of your plugin. When you click the “Configure” link in Jenkins, it calls this class and loads the configured data.

    You can perform validations here, save the global configuration, and do many other things. We will see these in detail as and when required. Now there is an overridden method:

    @Override
    public String getDisplayName() {
        return "Say Hello World";
    }

    That’s why we see “Say Hello World” in the Build Step list. You can rename it to describe what your plugin does.

    @Override
    public boolean configure(StaplerRequest req, JSONObject formData) throws FormException {
        // To persist global configuration information,
        // set that to properties and call save().
        useFrench = formData.getBoolean("useFrench");
        // ^Can also use req.bindJSON(this, formData);
        // (easier when there are many fields; need set* methods for this, like setUseFrench)
        save();
        return super.configure(req, formData);
    }

    This method saves your configuration. You can also read global data here, like the “useFrench” attribute, which can be set from the Jenkins global configuration. If you would like to expose a global parameter, place it in the global.jelly file.

    Understanding Action class and jelly files:

    To understand the main Action class and its purpose, let’s first understand the jelly files.

    There are two main jelly files: config.jelly and global.jelly. The global.jelly file is used to set global parameters, while config.jelly is used to configure local parameters. Jenkins uses these jelly files to render the parameters or fields on the UI, so anything you write in config.jelly shows up as configurable on the job’s configuration page.

    <f:entry title="Name" field="name">
        <f:textbox />
    </f:entry>

    This is what our HelloWorldBuilder application contains. It simply renders a textbox for entering a name.

    Jelly has its own syntax and supports HTML and JavaScript as well. It offers radio buttons, checkboxes, dropdown lists, and so on.

    How does Jenkins manage to pull the data set by the user? This is where our Action class comes into the picture. If you look at the structure of the sample application, it has a private field named “name” and a constructor.

    @DataBoundConstructor
    public HelloWorldBuilder(String name) {
        this.name = name;
    }

    The @DataBoundConstructor annotation tells Jenkins to bind the values of the jelly fields. Notice there is a field named “name” in the jelly file, and the same name is used here to receive the data. Whatever name you set in the field attribute of the jelly file must be used here, as the two are tightly coupled.

    Also, add getters for these fields so that Jenkins can access the values.

    @Override
    public DescriptorImpl getDescriptor() {
        return (DescriptorImpl) super.getDescriptor();
    }

    This method gives you the instance of the Descriptor class, so if you want to access the Descriptor’s methods or properties from your Action class, you can use it.

    3. Complex tasks:

    We now have a good idea of how a Jenkins plugin is structured and how it works. Now let’s start with some complex stuff.

    On the internet, there are examples of rendering a selection box (drop-down) with static data. What if you want to load it dynamically? I came up with the solution below. We will use Amazon’s publicly available REST API to get the offers and load that data into the selection box.

    Here, the objective is to load the data into the selection box. The REST API response looks like this:

    {
      "offers" : {
        "AmazonChimeDialin" : {
          "offerCode" : "AmazonChimeDialin",
          "versionIndexUrl" : "/offers/v1.0/aws/AmazonChimeDialin/index.json",
          "currentVersionUrl" : "/offers/v1.0/aws/AmazonChimeDialin/current/index.json",
          "currentRegionIndexUrl" : "/offers/v1.0/aws/AmazonChimeDialin/current/region_index.json"
        },
        "mobileanalytics" : {
          "offerCode" : "mobileanalytics",
          "versionIndexUrl" : "/offers/v1.0/aws/mobileanalytics/index.json",
          "currentVersionUrl" : "/offers/v1.0/aws/mobileanalytics/current/index.json",
          "currentRegionIndexUrl" : "/offers/v1.0/aws/mobileanalytics/current/region_index.json"
        }
      }
    }

    I took all these offers, created a dictionary, and rendered it on the UI. The user thus sees the list of offer codes and can choose any one of them.
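    The transformation from the offers response into selectable options can be sketched in plain JavaScript (an illustrative sketch only; in the plugin this happens in Java on the backend, and the sample data below is trimmed from the response above):

    ```javascript
    // Hypothetical sketch: turn the AWS offers response into { value, label }
    // pairs suitable for populating a selection box.
    const response = {
      offers: {
        AmazonChimeDialin: { offerCode: "AmazonChimeDialin" },
        mobileanalytics: { offerCode: "mobileanalytics" }
      }
    };

    function toOfferOptions(body) {
      // one entry per offer, keyed by the offer name
      return Object.keys(body.offers).map(function (key) {
        return { value: key, label: body.offers[key].offerCode };
      });
    }

    const options = toOfferOptions(response);
    // options[0].value === "AmazonChimeDialin"
    ```

    Each resulting pair maps directly onto one entry of the selection box.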

    Let’s understand how to create the selection box and load the data into it.

    <f:entry title="select Offer From Amazon" field="getOffer">
        <f:select id="offer-${editorId}" onfocus="getOffers(this.id)"/>
    </f:entry>

    This is the code that generates the selection box on the configuration page. The “getOffer” field here means there is a field with the same name in the Action class.

    When you create any selection box, Jenkins needs a doFill{fieldname}Items method in your Descriptor class. As we have seen, the Descriptor is the configuration class; Jenkins tries to load the data from this method when you open the job’s configuration. So in this case, a “doFillGetOfferItems” method is required.

    After this, the selection box should appear on the configuration page of your plugin.

    Since we need dynamic behavior here, we will perform an action and then load the data.

    As an example, we will click a button and load the data into the selection box.

    <f:validateButton title="Get Amazon Offers" progress="Fetching Offers..." method="getAmazonOffers"/>

    Above is the code to create a button. In the method attribute, specify the backend method, which should be present in your Descriptor class. When you click this button, the “getAmazonOffers” method is called on the backend, and it fetches the data from the API.

    Now, when we click on the selection box, we need to show its contents. As I said earlier, Jelly supports HTML and JavaScript, so for dynamic behavior simply use JavaScript. In the selection box jelly code above, I used the onfocus attribute, which points to the getOffers() JavaScript function.

    Now you need to define this function inside a script tag, like this:

    <script>
    function getOffers() {
    }
    </script>

    Here we get the data from the backend and load it into the selection box. To do this, we need to understand some Jenkins objects.

    1. Descriptor: As you now know, this object points to the configuration (Descriptor) class. So from jelly, at any point, you can call a method of your Descriptor class.
    2. Instance: This is the object currently being configured on the configuration page (null if it is a newly added instance). Using it, you can call methods of your Action class, such as the getters for your fields.

    Now, how do we use these objects? First, you need to bind them.

    <st:bind var="backend" value="${descriptor}"/>

    Here you bind the descriptor object to the “backend” variable, which is then ready for use anywhere in config.jelly. Similarly, for the instance: <st:bind var="instance" value="${instance}"/>.

    To make calls, use backend.{backend method name}(), and it will call your backend method.

    But if you are calling it from JavaScript, you need to use the @JavaScriptMethod annotation on the method being called.

    We can now get the REST data from the backend function in JavaScript, and to load the data into the element, we can use the JavaScript document object.

    E.g. var selection = document.getElementById("element-id"); This part is plain JavaScript.

    So, after clicking the “Get Amazon Offers” button and then clicking the selection box, it should load the data.

    Multiple Plugin Instances: If the plugin can be added as a build step multiple times, the user can configure several instances of it while setting up a job. If you do only what we have done so far, the data will fail to load in the second instance, because an element with the same id already exists on the UI and JavaScript cannot tell the two apart when inserting the data. We need a mechanism to generate different ids for the same fields.

    I came up with the following approach: get an index from the backend while the fields are being configured and add it as a suffix to the id attribute.

    @JavaScriptMethod
    public synchronized String createEditorId() {
        return String.valueOf(lastEditorId++);
    }

    This method simply returns an incremented id each time it is called. You already know how to call backend methods from Jelly:

    <j:set var="editorId" value="${descriptor.createEditorId()}" />

    In this manner, we store the id in the “editorId” variable, which can then be used when creating the fields.

    (Check the selection box creation code above; I appended this variable to the id attribute.)

    Now create as many instances as you want on the configuration page; it should work fine.
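    The id-suffixing scheme can be illustrated in plain JavaScript (the real counter lives in the Java Descriptor shown above; this sketch only demonstrates the idea that each instance gets a distinct DOM id):

    ```javascript
    // Illustration of the unique-id scheme: each editor instance gets a
    // fresh numeric suffix, so repeated fields never share a DOM id.
    let lastEditorId = 0;
    function createEditorId() {
      return String(lastEditorId++);
    }

    const firstInstance = "offer-" + createEditorId();  // "offer-0"
    const secondInstance = "offer-" + createEditorId(); // "offer-1"
    ```

    With distinct ids, document.getElementById() in getOffers() always targets the right instance’s selection box.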

    Exposing Environment Variables:

    Environment variables are needed quite often in Jenkins. Your plugin may require the support of some environment variables or the use of the built-in environment variables provided by Jenkins.

    First, you need to create the EnvVars object.

    EnvVars envVars = new EnvVars();
    // Assign it to the build environment.
    envVars = build.getEnvironment(listener);
    // Put the values which you want to expose as environment variables.
    envVars.put("offer", getOffer);

    If you print this, you will get all the default Jenkins environment variables as well as the ones you have exposed. You can even use third-party plugins like the “Parameterized Trigger Plugin” to export the current build’s environment variables to different jobs, and you can read the value of any environment variable the same way.

    4. Plugin Debugging and Deployment:

    You now have an idea of how to write a Jenkins plugin; next, we will see how to debug issues and deploy the plugin. If you are using an IDE, debugging is the same as for any Java program: set breakpoints and run the project.

    If you want to perform validation on a field, your configuration class needs a doCheck{fieldname} method that returns a FormValidation object. In this example, we validate the “name” field from our “HelloWorldBuilder” sample.

    public FormValidation doCheckName(@QueryParameter String value)
            throws IOException, ServletException {
        if (value.length() == 0)
            return FormValidation.error("Please set a name");
        if (value.length() < 4)
            return FormValidation.warning("Isn't the name too short?");
        return FormValidation.ok();
    }

    Plugin deployment:  

    We have now created the plugin; how do we deploy it? We created it using the NetBeans IDE, and as I said earlier, to deploy it on your local Jenkins setup you run mvn install and copy the generated .hpi file into the plugins/ folder of your Jenkins home directory.

    But what if you want to publish it to the Jenkins plugin site? Well, it’s a pretty long process, and thankfully Jenkins has good documentation for it.

    In short, you need a jenkins-ci.org account, and your plugin source code must live in a public Git repo. You raise an issue on their JIRA to get space in their Git repo, at which point they fork your repo; finally, you release the plugin using Maven. The documentation mentioned above explains exactly what needs to be done.

    Conclusion:

    We went through the basics of Jenkins plugin development, such as the classes and configuration, as well as some complex tasks.

    Jenkins plugin development is not difficult, but I feel the sparse documentation is what makes the task challenging. I have tried to capture my understanding from developing the plugin; however, it is advisable to create a plugin only if the required functionality does not already exist.

    Below are some important links on plugin development:

    1. Jenkins post build plugin development: a very good blog covering things like setting up the environment, the plugin classes, and developing a Post Build action.
    2. Basic guide to using jelly: covers how to use jelly files in Jenkins and the attributes of jelly.

    You can check the code of the sample application discussed in this blog here. I hope this helps you to build interesting Jenkins plugins. Happy Coding!!

  • MQTT Protocol Overview – Everything You Need To Know

    MQTT is an open protocol for asynchronous message queuing that has been developed and matured over several years. It is a machine-to-machine protocol, widely used with embedded devices, and major cloud platforms, including Microsoft’s, offer broad MQTT support. Here, we will give an overview of the MQTT protocol and its details.

    MQTT Protocol:

    MQTT is a very simple publish/subscribe protocol. It allows you to send messages on topics (channels), routed through a centralized message broker.

    The MQTT module of the API takes care of the publish/subscribe mechanism, along with additional features like authentication, retaining messages, and re-sending duplicate messages to unreachable clients.

    There are three parts to the MQTT architecture:

    • MQTT Broker – All messages passed between the clients and the server are sent via the broker.
    • MQTT Server – The API acts as an MQTT server, responsible for publishing the data to the clients.
    • MQTT Client – Any third-party client that wishes to subscribe to data published by the API is considered an MQTT client.

    Both the MQTT client and the MQTT server need to connect to the broker in order to publish or subscribe to messages.

    MQTT Communication Program

    To get a better idea of MQTT, suppose our API is sending sensor data.
    The API gathers the sensor data through the Monitoring module, and the MQTT module publishes the data on different channels. On successful connection of an external client to the MQTT module of the API, the client receives sensor data on the subscribed channel.

    Below diagram shows the flow of data from the API Module to the External clients.

    MQTT Broker – EMQTT:

    EMQTT (Erlang MQTT Broker) is a massively scalable and clusterable MQTT V3.1/V3.1.1 broker, written in Erlang/OTP.

    Main responsibilities of a broker are:

    • Receive all messages
    • Filter messages
    • Determine which clients are interested
    • Publish messages to all the subscribed clients

    All messages published are passed through the broker. The broker generates the Client ID and Message ID, maintains the message queue, and publishes the message.

    There are several brokers that can be used; the default EMQTT broker is developed in Erlang.

    MQTT Topics:

    A topic is a UTF-8 string that the broker uses to filter messages for each connected client. A topic may consist of one or more topic levels, with a forward slash (the topic level separator) between levels.

     

    When API starts, the Monitoring API will monitor the sensor data and publish it in a combination of topics. The third party client can subscribe to any of those topics, based on the requirement.

    The topics are framed in such a way that it provides options for the user to subscribe at level 1, level 2, level 3, level 4, or individual sensor level data.

    While subscribing to each level of sensor data, the client needs to specify the hierarchy of the IDs. For example, to subscribe to level 4 sensor data, the client needs to specify level 1 id/level 2 id/level 3 id/level 4 id.

    The user can subscribe to any type of sensor by specifying the sensor role as the last part of the topic.

    If the user doesn’t specify the role, the client will be subscribed to all types of sensors on that particular level.

    The user can also specify the sensor id that they wish to subscribe to. In that case, they need to specify the whole hierarchy of the sensor, starting from project id and ending with sensor id.

    Following is the list of topics exposed by API on startup.

     

    Features supported by MQTT:

    1. Authentication:

    EMQTT provides authentication for every user who intends to publish or subscribe to particular data. The user id and password are stored in the API database, in a separate collection called ‘mqtt’.

    While connecting to the EMQTT broker, we provide the username and password, and the MQTT broker validates the credentials against the values present in the database.

    2. Access Control:

    EMQTT determines which user is allowed to access which topics. This information is stored in MongoDB in the ‘mqtt_acl’ collection.

    By default, all users are allowed to access all topics: ‘#’ is specified as the allowed topic for all users, for both publish and subscribe.

    3. QoS:

    The Quality of Service (QoS) level defines the delivery guarantee for messages between the sending and receiving parties. There are 3 QoS levels in MQTT:

    • At most once (0) – The message is delivered at most once, or not at all.
    • At least once (1) – The message is always delivered at least once.
    • Exactly once (2) – The message is always delivered exactly once.

    4. Last Will Message:

    MQTT uses the Last Will & Testament (LWT) mechanism to notify other clients of an ungraceful disconnection. When connecting to a broker, each client specifies its last will message, which is a normal MQTT message with a QoS, topic, retained flag, and payload. This message is stored by the broker until it detects that the client has disconnected ungracefully.

    5. Retain Message:

    MQTT also offers message retention. It is enabled by setting the retain flag to true; the broker then retains the last message and its QoS for the topic. When a client subscribes to a topic, the broker matches the topic against its retained messages, and the client immediately receives the retained message if there is a match. The broker stores only one retained message per topic.
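    The retained-message behavior can be sketched with a toy in-memory store (a simplified illustration of the semantics, not how a real broker is implemented; the topic and payloads are made up):

    ```javascript
    // Toy model of message retention: the broker keeps only the latest
    // retained message per topic and hands it to new subscribers immediately.
    const retained = new Map();

    function publish(topic, payload, retainFlag) {
      if (retainFlag) retained.set(topic, payload); // replaces any previous one
    }

    function subscribe(topic) {
      // a new subscriber immediately receives the retained message, if any
      return retained.has(topic) ? retained.get(topic) : null;
    }

    publish("floor1/zone1/AL", "412 lux", true);
    publish("floor1/zone1/AL", "397 lux", true); // overwrites the first
    const msg = subscribe("floor1/zone1/AL");    // "397 lux"
    ```

    Note how the second publish replaces the first: one retained message per topic, as described above.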

    6. Duplicate Message:

    If a publisher doesn’t receive an acknowledgement for a published packet, it resends the packet with the DUP flag set to true. A duplicate message contains the same Message ID as the original message.

    7. Session:

    In general, when a client connects to a broker for the first time, it needs to create subscriptions for all topics on which it wants to receive data. If no persistent session is maintained and the client loses its connection to the broker, the user has to resubscribe to all the topics after reconnecting. For clients with limited resources, subscribing to all topics again would be very tedious, so brokers provide a persistent session mechanism in which they save all information relevant to the client. The ‘clientId’ provided by the client is used as the session identifier when the client establishes a connection with the broker.

    Features not-supported by MQTT:

    1. Not RESTful:

    MQTT does not allow a client to expose RESTful API endpoints. The only way to communicate is through the publish/subscribe mechanism.

    2. Obtaining Subscription List:

    The MQTT broker does not expose the client IDs and the topics the clients have subscribed to. Hence, the API needs to publish all data to all possible combinations of topics, which can lead to network congestion for large volumes of data.

    MQTT Wildcards:

    MQTT clients can subscribe to one or more topics, but each subscription matches a single topic. We can use the following two wildcards to create a topic filter that matches many topics.

    1. Plus sign(+):

    This is a single-level wildcard, used to match exactly one topic level. We can use this wildcard when we want to subscribe at a whole topic level.

    Example: Suppose we want to subscribe to all floor-level ‘AL’ (ambient light) sensors. We can use the plus (+) single-level wildcard instead of a specific zone level, with the following topic:

    <project_id>/<building_id>/<floor_id>/+/AL

    2. Hash Sign(#):

    This is a multi-level wildcard, and it can be used only at the end of a topic. All messages whose topics match everything to the left of the ‘#’ wildcard are received.

    Example: If we want to receive all the messages related to all sensors on floor 1, we can use the hash sign (#) multi-level wildcard after the floor level and the slash (/). We can use the following topic:

    <level1_id>/<level2_id>/<level3_id>/#
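    The matching rules for ‘+’ and ‘#’ can be sketched as a small function (a simplified illustration of the semantics, not a broker implementation; it ignores edge cases such as ‘$’-prefixed system topics, and the topic names are made up):

    ```javascript
    // Match an MQTT topic against a filter containing + and # wildcards.
    function topicMatches(filter, topic) {
      const f = filter.split("/");
      const t = topic.split("/");
      for (let i = 0; i < f.length; i++) {
        if (f[i] === "#") return true;                   // # matches the rest of the topic
        if (i >= t.length) return false;                 // topic ran out of levels
        if (f[i] !== "+" && f[i] !== t[i]) return false; // + matches any single level
      }
      return f.length === t.length;                      // no topic levels left over
    }

    topicMatches("p1/b1/f1/+/AL", "p1/b1/f1/z7/AL"); // true: + matches the zone level
    topicMatches("p1/b1/f1/#", "p1/b1/f1/z1/AL");    // true: # matches all deeper levels
    topicMatches("p1/b1/f1/+/AL", "p1/b1/f1/z7/TS"); // false: last level differs
    ```

    The same logic explains why a ‘#’ subscription at the floor level receives every sensor message beneath that floor.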

    MQTT Test tools:

    Following are some popular open source testing tools for MQTT.

    1. MQTT Lens
    2. MQTT SPY
    3. MQTT FX

    Difference between MQTT & AMQP:

    MQTT is designed for lightweight devices like embedded systems, where bandwidth is costly and minimum overhead is required. MQTT uses a byte stream to exchange data and control everything; its optimized 2-byte fixed header is preferred for IoT.

    AMQP is designed with more advanced features and uses more system resources. It provides richer messaging capabilities: topic-based publish/subscribe messaging, reliable queuing, transactions, flexible routing, and security.

    Difference between MQTT & HTTP:

    MQTT is data-centric, whereas HTTP is document-centric. HTTP is a request-response protocol for client-server communication, whereas MQTT uses a publish-subscribe mechanism. The publish/subscribe model lets clients exist independently of one another and enhances the reliability of the whole system: even if a client drops off the network, the system keeps running.

    Compared to HTTP, which composes lengthy headers and messages, MQTT is lightweight, with a very short message header and a minimum packet size of 2 bytes.

    MQTT Protocol ensures high delivery guarantees compared to HTTP.

    There are 3 levels of Quality of Service:

    • At most once: the message is delivered with best effort.
    • At least once: the message is guaranteed to be delivered at least once, but it may be delivered again.
    • Exactly once: the message is guaranteed to be delivered one and only one time.

    Last Will & Testament and retained messages are options MQTT provides to users. With Last Will & Testament, in case of an unexpected disconnection of a client, all subscribed clients get a message from the broker. Newly subscribed clients get immediate status updates via retained messages.

    HTTP Protocol has none of these abilities.

    Conclusion:

    MQTT is a one-of-its-kind message queuing protocol, best suited for embedded hardware devices. On the software level, it supports all major operating systems and platforms. It has proven itself as an ISO standard for IoT platforms because of its pragmatic security and message reliability.

  • Building A Scalable API Testing Framework With Jest And SuperTest

    Focus on API testing

    Before starting off, below listed are the reasons why API testing should be encouraged:

    • Identifies bugs before they reach the UI
    • Effective testing at a lower level over high-level broad-stack testing
    • Reduces future efforts to fix defects
    • Time-saving

    Well, QA practices are becoming more automation-centric with evolving requirements, but identifying the appropriate approach is the primary and most essential step. This implies choosing a framework or tool to develop a test setup that is:

    • Scalable 
    • Modular
    • Maintainable
    • Able to provide maximum test coverage
    • Extensible
    • Able to generate test reports
    • Easy to integrate with source control tool and CI pipeline

    To attain the goal, why not develop your own asset rather than relying on ready-made tools like Postman or JMeter? Let’s have a look at why you might choose ‘writing your own code’ over the API testing tools available in the market:

    1. Customizable
    2. Saves you from the trap of limitations of a ready-made tool
    3. Freedom to add configurations and libraries as required and not really depend on the specific supported plugins of the tool
    4. No limit on the usage and no question of cost
    5. Take Postman, for example. If you go with Newman (the CLI of Postman), the effort grows with changing requirements: adding a new test requires editing in Postman, saving it to the collection, exporting it again, and running the entire collection.json through Newman. Isn’t it tedious to repeat the same process every time?

    We can overcome such annoyance and meet our purpose using a self-built Jest framework using SuperTest. Come on, let’s dive in!


    Why Jest?

    Jest is pretty impressive. 

    • High performance
    • Easy and minimal setup
    • Provides in-built assertion library and mocking support
    • Several in-built testing features without any additional configuration
    • Snapshot testing
    • Brilliant test coverage
    • Allows interactive watch mode (jest --watch or jest --watchAll)

    Hold on. Before moving forward, let’s quickly visit Jest configurations, Jest CLI commands, Jest Globals and Javascript async/await for better understanding of the coming content.

    Ready, set, go!

    Create a node project jest-supertest locally and run npm init. In the workspace, we will install Jest, jest-stare for generating custom test reports, and jest-serial-runner to disable parallel execution (since our tests might be interdependent), saving them as dev dependencies.

    npm install jest jest-stare jest-serial-runner --save-dev

    Add the following to the scripts block in our package.json:

    
    "scripts": {
        "test": "NODE_TLS_REJECT_UNAUTHORIZED=0 jest --reporters default jest-stare --coverage --detectOpenHandles --runInBand --testTimeout=60000",
        "test:watch": "jest --verbose --watchAll"
      }

    The npm run test command will invoke the test script with the following:

    • NODE_TLS_REJECT_UNAUTHORIZED=0: skips SSL certificate validation
    • jest: runs the framework with the configurations defined under the jest block
    • --reporters default jest-stare: uses the default reporter together with jest-stare
    • --coverage: collects test coverage
    • --detectOpenHandles: for debugging open handles
    • --runInBand: serial execution of Jest tests
    • --forceExit: to shut down cleanly
    • --testTimeout=60000: custom timeout (the default is 5000 milliseconds)

    Jest configurations:

    [Note: This is customizable as per requirements]

    "jest": {
        "verbose": true,
        "testSequencer": "/home/abc/jest-supertest/testSequencer.js",
        "coverageDirectory": "/home/abc/jest-supertest/coverage/my_reports/",
        "coverageReporters": ["html","text"],
        "coverageThreshold": {
          "global": {
            "branches": 100,
            "functions": 100,
            "lines": 100,
            "statements": 100
          }
        }
      }

    testSequencer: invokes testSequencer.js in the workspace to customize the order in which our test files run

    touch testSequencer.js

    Below code in testSequencer.js will run our test files in alphabetical order.

    const Sequencer = require('@jest/test-sequencer').default;
    
    class CustomSequencer extends Sequencer {
      sort(tests) {
        // Test structure information
        // https://github.com/facebook/jest/blob/6b8b1404a1d9254e7d5d90a8934087a9c9899dab/packages/jest-runner/src/types.ts#L17-L21
        const copyTests = Array.from(tests);
        return copyTests.sort((testA, testB) => (testA.path > testB.path ? 1 : -1));
      }
    }
    
    module.exports = CustomSequencer;
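    To see what the comparator above does, here is the same sort applied to plain objects (the file names are the two test files we create later in this post):

    ```javascript
    // The sequencer's comparator orders test files by path, alphabetically.
    const tests = [
      { path: "__test__/putAndDelete.test.js" },
      { path: "__test__/postAndGet.test.js" }
    ];

    const ordered = Array.from(tests).sort((a, b) => (a.path > b.path ? 1 : -1));
    // postAndGet.test.js now sorts before putAndDelete.test.js
    ```

    This is why postAndGet.test.js (creating the user) always runs before putAndDelete.test.js (updating and deleting it).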

    • verbose: to display individual test results
    • coverageDirectory: creates a custom directory for coverage reports
    • coverageReporters: format of reports generated
    • coverageThreshold: minimum and maximum threshold enforcements for coverage results

    Testing endpoints with SuperTest

    SuperTest is a superagent-driven Node library for extensively testing RESTful web services. It hits the HTTP server to send requests (GET, POST, PATCH, PUT, DELETE) and fetch responses.

    Install SuperTest and save it as a dependency.

    npm install supertest --save-dev

    "devDependencies": {
        "jest": "^25.5.4",
        "jest-serial-runner": "^1.1.0",
        "jest-stare": "^2.0.1",
        "supertest": "^4.0.2"
      }

    All the required dependencies are installed and our package.json looks like:

    {
      "name": "supertestjest",
      "version": "1.0.0",
      "description": "",
      "main": "index.js",
      "jest": {
        "verbose": true,
        "testSequencer": "/home/abc/jest-supertest/testSequencer.js",
        "coverageDirectory": "/home/abc/jest-supertest/coverage/my_reports/",
        "coverageReporters": ["html","text"],
        "coverageThreshold": {
          "global": {
            "branches": 100,
            "functions": 100,
            "lines": 100,
            "statements": 100
          }
        }
      },
      "scripts": {
        "test": "NODE_TLS_REJECT_UNAUTHORIZED=0 jest --reporters default jest-stare --coverage --detectOpenHandles --runInBand --testTimeout=60000",
        "test:watch": "jest --verbose --watchAll"
      },
      "author": "",
      "license": "ISC",
      "devDependencies": {
        "jest": "^25.5.4",
        "jest-serial-runner": "^1.1.0",
        "jest-stare": "^2.0.1",
        "supertest": "^4.0.2"
      }
    }

    Now we are ready to create our Jest tests with some defined conventions:

    • A describe block assembles multiple tests (its)
    • A test block (commonly aliased as it) holds a single test
    • expect() performs assertions

    Jest recognizes the test files in the __test__/ folder:

    • with the .test.js extension
    • with the .spec.js extension

    Here is a reference app for API tests.

    Let’s write commonTests.js, which will be required by every test file. It hits the app through SuperTest, logs in (if required), and saves the authorization token. The aliases are exported from here to be used in all the tests.

    [Note: commonTests.js, be created or not, will vary as per the test requirements]

    touch commonTests.js

    var supertest = require('supertest'); //require supertest
    const request = supertest('https://reqres.in/'); //supertest hits the HTTP server (your app)
    
    /*
    This piece of code is for getting the authorization token after login to your app.
    let token;
    test("Login to the application", function(){
        return request.post(``).then((response)=>{
            token = response.body.token  //to save the login token for further requests
        })
    }); 
    */
    
    module.exports = 
    {
        request
            //, token     -- export if token is generated
    }

    Moving forward, let’s write tests for POST, GET, PUT, and DELETE requests to get a basic understanding of the setup. We create two test files, which also lets us see whether the sequencer works.

    mkdir __test__/
    touch __test__/postAndGet.test.js __test__/putAndDelete.test.js

    As mentioned above and sticking to Jest protocols, we have our tests written.

    postAndGet.test.js test file:

    • requires commonTests.js into ‘request’ alias
    • POST requests to api/users endpoint, calls supertest.post() 
    • GET requests to api/users endpoint, calls supertest.get()
    • uses file system to write globals and read those across all the tests
    • validates the response returned from hitting the HTTP endpoints

const request = require('../commonTests');
const fs = require('fs');
let userID;

//Create a new user
describe("POST request", () => {
  let userDetails;

  beforeEach(function () {
    console.log("Input user details!");
    userDetails = {
      "name": "morpheus",
      "job": "leader"
    }; //new user details to be created
  });

  afterEach(function () {
    console.log("User is created with ID : ", userID);
  });

  it("Create user data", done => {
    request.request.post(`api/users`) //post() of supertest
      //.set('Authorization', `Token ${request.token}`) //Authorization token
      .send(userDetails) //request body
      .expect(201) //response status should be 201
      .then((res) => {
        expect(res.body).toBeDefined(); //test if the response body is defined
        //expect(res.body.status).toBe("success")
        userID = res.body.id;
        let jsonContent = JSON.stringify({userId: res.body.id}); //create a json
        //write the user id into a global json file to be used by later tests
        fs.writeFile("data.json", jsonContent, 'utf8', function (err) {
          if (err) {
            return done(err);
          }
          console.log("POST response body : ", res.body);
          done();
        });
      })
      .catch(done); //fail the test instead of timing out on errors
  });
});

//GET all users
describe("GET all user details", () => {
  beforeEach(function () {
    console.log("GET all users details");
  });

  afterEach(function () {
    console.log("All users' details are retrieved");
  });

  test("GET user output", async () => {
    const response = await request.request.get(`api/users`) //get() of supertest
      //.set('Authorization', `Token ${request.token}`)
      .expect(200);
    console.log("GET RESPONSE : ", response.body);
  });
});

    putAndDelete.test.js file:

• requires commonTests.js into the ‘request’ alias
• reads data.json into the ‘data’ alias, which our previous test wrote to the file system to hold global variables
• sends PUT requests to the api/users/${data.userId} endpoint via supertest.put()
• sends DELETE requests to the api/users/${data.userId} endpoint via supertest.delete()
• validates the responses returned by the endpoints
• removes data.json (similar to unsetting global variables) after all the tests are done
const request = require('../commonTests');
const fs = require('fs'); //file system
const data = require('../data.json'); //data.json containing the global variables

//Update user data
describe("PUT user details", () => {
  let newDetails;

  beforeEach(function () {
    console.log("Input updated user's details");
    newDetails = {
      "name": "morpheus",
      "job": "zion resident"
    }; //details to be updated
  });

  afterEach(function () {
    console.log("user details are updated");
  });

  test("Update user now", async () => {
    console.log("User to be updated : ", data.userId);

    const response = await request.request.put(`api/users/${data.userId}`) //call put() of supertest
      //.set('Authorization', `Token ${request.token}`)
      .send(newDetails)
      .expect(200);
    expect(response.body.updatedAt).toBeDefined();
    console.log("UPDATED RESPONSE : ", response.body);
  });
});

//DELETE the user
describe("DELETE user details", () => {
  beforeAll(function () {
    console.log("To delete user : ", data.userId);
  });

  test("Delete request", async () => {
    const response = await request.request.delete(`api/users/${data.userId}`) //invoke delete() of supertest
      .expect(204);
    console.log("DELETE RESPONSE : ", response.body);
  });

  afterAll(function () {
    console.log("user is deleted!!");
    fs.unlinkSync('data.json'); //remove data.json after all tests are run
  });
});

And we are done setting up a decent framework; running it is just one command away!

    npm test

Once complete, the test results are immediately visible on the terminal.

An HTML report of the test results is also generated as index.html under jest-stare/.

And test coverage details are created under coverage/my_reports/ in the workspace.

    Similarly, other HTTP methods can also be tested, like OPTIONS – supertest.options() which allows dealing with CORS, PATCH – supertest.patch(), HEAD – supertest.head() and many more.

    Wasn’t it a convenient and successful journey?

    Conclusion

So, wrapping it up with a note that API testing needs attention. As QAs, let’s abide by the concept of the testing pyramid, which is really a tester’s mindset: combat issues at the lower levels to avoid chaos at the upper levels, i.e., the UI.

    Testing Pyramid

    I hope you had a good read. Kindly spread the word. Happy coding!

  • Test Automation in React Native apps using Appium and WebdriverIO

React Native provides a native mobile app development experience without sacrificing user experience or visual performance. And when it comes to mobile app UI testing, Appium is a great way to test native React Native apps out of the box. Being able to create native apps from the same code, and to do it using JavaScript, has made React Native popular. Apart from this, businesses are attracted by the fact that they can save a lot of money by using this application development framework.

    In this blog, we are going to cover how to add automated tests for React native apps using Appium & WebdriverIO with a Node.js framework. 

    What are React Native Apps

React Native is an open-source framework for building Android and iOS apps using React and native platform capabilities. With React Native, you can use JavaScript to access your platform’s APIs and define the look and behavior of your UI using React components: plenty of reusable, nestable code. In Android and iOS app development, the “view” is the basic building block of a UI: a small rectangular element on the screen that can display text or images, or respond to user input. Even the smallest visual detail of an app, such as a line of text or a button, is a kind of view. Some views can contain other views.

    What is Appium

Appium is an open-source tool for automating native, mobile web, and hybrid apps on iOS, Android, and Windows desktop platforms. Native apps are those written using the iOS and Android SDKs. Mobile web apps are accessed using a mobile browser (Appium supports Safari on iOS and Chrome or the built-in ‘Browser’ on Android). Hybrid apps have a wrapper around a “web view”: a native control that allows you to interact with web content. Projects like Apache Cordova make it easy to build applications using web technologies that are then bundled into a native wrapper, creating a hybrid app.

Importantly, Appium is “cross-platform”: it lets you write tests against multiple platforms (iOS, Android) using the same API. This enables code reuse between iOS, Android, and Windows test suites. It drives iOS and Android applications using the WebDriver protocol.

    Fig:- Appium Architecture

    What is WebDriverIO

WebdriverIO is a next-gen browser and mobile automation testing framework for Node.js. It allows you to automate any application written with modern web frameworks, such as React, Angular, Polymer, or Vue.js, whether on mobile devices or in browsers.

WebdriverIO is a widely used test automation framework in JavaScript. It has various features: it supports many reporters and services, multiple test frameworks, and the WDIO CLI test runner.

    The following are examples of supported services:

    • Appium Service
    • Devtools Service
    • Firefox Profile Service
    • Selenium Standalone Service
    • Shared Store Service
    • Static Server Service
    • ChromeDriver Service
    • Report Portal Service
    • Docker Service

The following test frameworks are supported:

    • Mocha
    • Jasmine
• Cucumber

Fig:- WebdriverIO Architecture

    Key features of Appium & WebdriverIO

    Appium

    • Does not require application source code or library
    • Provides a strong and active community
    • Has multi-platform support, i.e., it can run the same test cases on multiple platforms
    • Allows the parallel execution of test scripts
    • In Appium, a small change does not require reinstallation of the application
• Supports various languages like C#, Python, Java, Ruby, PHP, JavaScript with Node.js, and many others that have a Selenium client library

    WebdriverIO 

    • Extendable
    • Compatible
    • Feature-rich 
    • Supports modern web and mobile frameworks
• Runs automation tests for web applications as well as native mobile apps
    • Simple and easy syntax
    • Integrates tests to third-party tools such as Appium
    • ‘Wdio setup wizard’ makes the setup simple and easy
    • Integrated test runner

    Installation & Configuration

    $ mkdir Demo_Appium_Project

    • Create a sample Appium Project
    $ npm init
    $ package name: (demo_appium_project) demo_appium_test
    $ version: (1.0.0) 1.0.0
    $ description: demo_appium_practice
    $ entry point: (index.js) index.js
    $ test command: "./node_modules/.bin/wdio wdio.conf.js"
    $ git repository: 
    $ keywords: 
    $ author: Pushkar
    $ license: (ISC) ISC

    This will also create a package.json file for test settings and project dependencies.

    • Install node packages
    $ npm install

• Install Appium through npm or as a standalone app
$ npm install -g appium or npm install --save appium

• Install WebdriverIO
$ npm install -g webdriverio or npm install --save-dev webdriverio @wdio/cli

• Install the Chai assertion library
$ npm install -g chai or npm install --save chai

Make sure you have the following versions installed:

$ node --version - v14.17.0
$ npm --version - 7.17.0
$ appium --version - 1.21.0
$ java --version - java 16.0.1
$ allure --version - 2.14.0

    WebdriverIO Configuration 

A WebdriverIO configuration file must be created to apply the configuration during the tests. Generate it with the command below in the project root:

    $ npx wdio config

Answer the following series of questions to install the required dependencies:

    $ Where is your automation backend located? - On my local machine
    $ Which framework do you want to use? - mocha	
    $ Do you want to use a compiler? No!
    $ Where are your test specs located? - ./test/specs/**/*.js
    $ Do you want WebdriverIO to autogenerate some test files? - Yes
    $ Do you want to use page objects (https://martinfowler.com/bliki/PageObject.html)? - No
    $ Which reporter do you want to use? - Allure
    $ Do you want to add a service to your test setup? - No
    $ What is the base url? - http://localhost

    This is how wdio.conf.js looks:

    exports.config = {
     port: 4724,
     path: '/wd/hub/',
     runner: 'local',
     specs: ['./test/specs/*.js'],
     maxInstances: 1,
     capabilities: [
       {
         platformName: 'Android',
         platformVersion: '11',
         appPackage: 'com.facebook.katana',
         appActivity: 'com.facebook.katana.LoginActivity',
         automationName: 'UiAutomator2'
       }
     ],
     services: [
       [
         'appium',
         {
           args: {
             relaxedSecurity: true
            },
           command: 'appium'
         }
       ]
     ],
     logLevel: 'debug',
     bail: 0,
     baseUrl: 'http://localhost',
     waitforTimeout: 10000,
     connectionRetryTimeout: 90000,
     connectionRetryCount: 3,
     framework: 'mocha',
     reporters: [
       [
         'allure',
         {
           outputDir: 'allure-results',
           disableWebdriverStepsReporting: true,
           disableWebdriverScreenshotsReporting: false
         }
       ]
     ],
     mochaOpts: {
       ui: 'bdd',
       timeout: 60000
     },
     afterTest: function(test, context, { error, result, duration, passed, retries }) {
       if (!passed) {
           browser.takeScreenshot();
       }
     }
    }

    For iOS Automation, just add the following capabilities in wdio.conf.js & the Appium Configuration: 

    {
      "platformName": "IOS",
      "platformVersion": "14.5",
      "app": "/Your_PATH/wdioNativeDemoApp.app",
      "deviceName": "iPhone 12 Pro Max"
    }

    Launch the iOS Simulator from Xcode

Install Appium Doctor for iOS by using the following command:

    npm install -g appium-doctor

    Fig:- Appium Doctor Installed

    This is how package.json will look:

    {
     "name": "demo_appium_test",
     "version": "1.0.0",
     "description": "demo_appium_practice",
     "main": "index.js",
     "scripts": {
       "test": "./node_modules/.bin/wdio wdio.conf.js"
     },
     "author": "Pushkar",
     "license": "ISC",
     "dependencies": {
       "@wdio/sync": "^7.7.4",
       "appium": "^1.21.0",
       "chai": "^4.3.4",
       "webdriverio": "^7.7.4"
     },
     "devDependencies": {
       "@wdio/allure-reporter": "^7.7.3",
       "@wdio/appium-service": "^7.7.3",
       "@wdio/cli": "^7.7.4",
       "@wdio/local-runner": "^7.7.4",
       "@wdio/mocha-framework": "^7.7.4",
       "@wdio/selenium-standalone-service": "^7.7.4"
     }
    }

Steps to follow if an npm legacy peer deps problem occurs:

npm install --save --legacy-peer-deps
npm config set legacy-peer-deps true
npm i --legacy-peer-deps
npm cache clean --force

    This is how the folder structure will look in Appium with the WebDriverIO Framework:

    Fig:- Appium Framework Outline

    Step-by-Step Configuration of Android Emulator using Android Studio

    Fig:- Android Studio Launch

     

    Fig:- Android Studio AVD Manager

     

    Fig:- Create Virtual Device

     

    Fig:- Choose a device Definition

     

    Fig:- Select system image

    Fig:- License Agreement

     

    Fig:- Component Installer

     

    Fig:- System Image Download

     

    Fig:- Configuration Verification

    Fig:- Virtual Device Listing

    ‍Appium Desktop Configuration

    Fig:- Appium Desktop Launch

Setup of ANDROID_HOME, ANDROID_SDK_ROOT & JAVA_HOME

    Follow these steps for setting up ANDROID_HOME: 

vi ~/.bash_profile
Add the following:
export ANDROID_HOME=/Users/pushkar/Library/Android/sdk
export PATH=$PATH:$ANDROID_HOME/platform-tools
export PATH=$PATH:$ANDROID_HOME/tools
export PATH=$PATH:$ANDROID_HOME/tools/bin
export PATH=$PATH:$ANDROID_HOME/emulator
Save ~/.bash_profile and reload it:
source ~/.bash_profile
echo $ANDROID_HOME
/Users/pushkar/Library/Android/sdk

    Follow these steps for setting up ANDROID_SDK_ROOT:

vi ~/.bash_profile
Add the following:
export ANDROID_HOME=/Users/pushkar/Library/Android/sdk
export ANDROID_SDK_ROOT=/Users/pushkar/Library/Android/sdk
export ANDROID_AVD_HOME=/Users/pushkar/.android/avd
Save ~/.bash_profile and reload it:
source ~/.bash_profile
echo $ANDROID_SDK_ROOT
/Users/pushkar/Library/Android/sdk

    Follow these steps for setting up JAVA_HOME:

java --version
vi ~/.bash_profile
Add the following:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-16.0.1.jdk/Contents/Home
Save ~/.bash_profile and reload it:
source ~/.bash_profile
echo $JAVA_HOME
/Library/Java/JavaVirtualMachines/jdk-16.0.1.jdk/Contents/Home

    Fig:- Environment Variables in Appium

     

    Fig:- Appium Server Starts 

     

    Fig:- Appium Start Inspector Session

    Fig:- Inspector Session Configurations

Note – Make sure you install the app from the Google Play Store.

    Fig:- Android Emulator Launch  

     

    Fig: – Android Emulator with Facebook React Native Mobile App

     

    Fig:- Success of Appium with Emulator

     

    Fig:- Locating Elements using Appium Inspector

    How to write E2E React Native Mobile App Tests 

    Fig:- Test Suite Structure of Mocha

    ‍Here is an example of how to write E2E test in Appium:

    Positive Testing Scenario – Validate Login & Nav Bar

    1. Open Facebook React Native App 
    2. Enter valid email and password
    3. Click on Login
    4. Users should be able to login into Facebook 

    Negative Testing Scenario – Invalid Login

    1. Open Facebook React Native App
    2. Enter invalid email and password 
    3. Click on login 
    4. Users should not be able to login after receiving an “Incorrect Password” message popup

    Negative Testing Scenario – Invalid Element

    1. Open Facebook React Native App 
    2. Enter invalid email and  password 
    3. Click on login 
    4. Provide invalid element to capture message

Make sure the test script is under the test/specs folder:

var expect = require('chai').expect

beforeEach(() => {
 driver.launchApp()
})

afterEach(() => {
 driver.closeApp()
})

describe('Verify Login Scenarios on Facebook React Native Mobile App', () => {
 it('User should be able to login using valid credentials to Facebook Mobile App', () => {
   $(`~Username`).waitForDisplayed({ timeout: 20000 })
   $(`~Username`).setValue('Valid-Email')
   $(`~Password`).waitForDisplayed({ timeout: 20000 })
   $(`~Password`).setValue('Valid-Password')
   $('~Log In').click()
   browser.pause(10000)
 })

 it('User should not be able to login with invalid credentials to Facebook Mobile App', () => {
   $(`~Username`).waitForDisplayed({ timeout: 20000 })
   $(`~Username`).setValue('Invalid-Email')
   $(`~Password`).waitForDisplayed({ timeout: 20000 })
   $(`~Password`).setValue('Invalid-Password')
   $('~Log In').click()
   $(
       '//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"]'
     )
     .waitForDisplayed({ timeout: 11000 })
   const status = $(
     '//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"]'
   ).getText()
   expect(status).to.equal(
     `You Can't Use This Feature Right Now`
   )
 })

 it('Test Case should Fail Because of Invalid Element', () => {
   $(`~Username`).waitForDisplayed({ timeout: 20000 })
   $(`~Username`).setValue('Invalid-Email')
   $(`~Password`).waitForDisplayed({ timeout: 20000 })
   $(`~Password`).setValue('Invalid-Password')
   $('~Log In').click()
   //intentionally invalid selector (missing closing bracket) so this test fails
   $(
       '//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"'
     )
     .waitForDisplayed({ timeout: 11000 })
   const status = $(
     '//android.widget.TextView[@resource-id="com.facebook.katana"'
   ).getText()
   expect(status).to.equal(
     `You Can't Use This Feature Right Now`
   )
 })

})

    How to Run Mobile Tests Scripts  

    $ npm test 
This will create a results folder containing an .xml report.

    Reporting

    The following are examples of the supported reporters:

    • Allure Reporter
    • Concise Reporter
    • Dot Reporter
    • JUnit Reporter
    • Spec Reporter
    • Sumologic Reporter
    • Report Portal Reporter
    • Video Reporter
    • HTML Reporter
    • JSON Reporter
    • Mochawesome Reporter
    • Timeline Reporter
    • CucumberJS JSON Reporter

    Here, we are using Allure Reporting. Allure Reporting in WebdriverIO is a plugin to create Allure Test Reports.

    The easiest way is to keep @wdio/allure-reporter as a devDependency in your package.json with

    $ npm install @wdio/allure-reporter --save-dev

Reporter options can be specified in the wdio.conf.js configuration file:

    reporters: [
       [
         'allure',
         {
           outputDir: 'allure-results',
           disableWebdriverStepsReporting: true,
           disableWebdriverScreenshotsReporting: false
         }
       ]
     ]

    To convert Allure .xml report to .html report, run the following command: 

    $ allure generate && allure open
The Allure HTML report should open in the browser.

    This is what Allure Reports look like: 

    Fig:- Allure Report Overview 

     

    Fig:- Allure Categories

     

    Fig:- Allure Suites

     

    Fig: – Allure Graphs

     

    Fig:- Allure Timeline

     

    Fig:- Allure Behaviors

     

    Fig:- Allure Packages

Limitations of Appium & WebdriverIO

    Appium 

    • Android versions lower than 4.2 are not supported for testing
    • Limited support for hybrid app testing
    • Doesn’t support image comparison.

    WebdriverIO

    • It has a custom implementation
• It can be used for automating AngularJS apps, but it is not as tailored to them as Protractor.

    Conclusion

In the QA and developer ecosystem, using Appium to test React Native applications is common. Appium makes it easy to run test cases on both Android and iOS platforms while working with React Native. The WebDriver protocol, which Appium shares with Selenium, acts as the bridge between the test scripts and the mobile platforms. Appium is a solid framework for automated UI testing. This article shows that the framework is capable of running test cases quickly and reliably. Most importantly, it can test both Android and iOS apps built with the React Native framework from a single codebase.
