Category: Blogs

  • Machine Learning for your Infrastructure: Anomaly Detection with Elastic + X-Pack

    Introduction

The world continues to go through digital transformation at an accelerating pace. Modern applications and infrastructure continue to expand, and operational complexity continues to grow. According to a recent ManageEngine Application Performance Monitoring Survey:

    • 28 percent use ad-hoc scripts to detect issues in over 50 percent of their applications.
    • 32 percent learn about application performance issues from end users.
    • 59 percent trust monitoring tools to identify most performance deviations.

Most enterprises and web-scale companies already have instrumentation and monitoring capabilities built around an Elasticsearch cluster. They collect large amounts of data but struggle to use it effectively. This data can be used to improve performance and uptime, perform root cause analysis, and predict incidents.

    IT Operations & Machine Learning

Here is the main question: how do we make sense of the huge piles of collected data? The first step is to understand the correlations between the time series data. But correlation alone is not enough, since correlation does not imply causation. We need a practical and scalable approach to understand the cause-effect relationships between data sources and events across a complex infrastructure of VMs, containers, networks, micro-services, regions, etc.

It’s very likely that a problem in one component causes something to go wrong in another component. In such cases, historical operational data can be used to identify the root cause by investigating a series of intermediate causes and effects. Machine learning is particularly useful for such problems where we need to identify “what changed”, since machine learning algorithms can analyze the existing data to understand the patterns, making it easier to recognize the cause. This is known as unsupervised learning, where the algorithm learns from experience and identifies similar patterns when they come along again.

Let’s see how you can set up Elastic + X-Pack to enable anomaly detection for your infrastructure & applications.

    Anomaly Detection using Elastic’s machine learning with X-Pack

    Step I: Setup

    1. Setup Elasticsearch: 

The Elastic documentation recommends the Oracle JDK version 1.8.0_131. Check whether the required Java version is installed on your system; it should be at least Java 8. Install or upgrade if necessary.
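• Check the installed Java version (it should report 1.8 or higher)
$ java -version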

    • Download elasticsearch tarball and untar it
    $ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.1.tar.gz
    $ tar -xzvf elasticsearch-5.5.1.tar.gz

    • It will then create a folder named elasticsearch-5.5.1. Go into the folder.
    $ cd elasticsearch-5.5.1

    • Install X-Pack into Elasticsearch
    $ ./bin/elasticsearch-plugin install x-pack

    • Start elasticsearch
    $ bin/elasticsearch
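• Optionally, verify that Elasticsearch is up on its default port. With X-Pack installed, security is enabled, so pass the built-in credentials (the same elastic/changeme pair used later for Kibana); the response should be a small JSON document with the cluster and version details.
$ curl -u elastic:changeme http://localhost:9200/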

    2. Setup Kibana

    Kibana is an open source analytics and visualization platform designed to work with Elasticsearch.

    • Download kibana tarball and untar it
    $ wget https://artifacts.elastic.co/downloads/kibana/kibana-5.5.1-linux-x86_64.tar.gz
    $ tar -xzf kibana-5.5.1-linux-x86_64.tar.gz

    • It will then create a folder named kibana-5.5.1. Go into the directory.
    $ cd kibana-5.5.1-linux-x86_64

    • Install X-Pack into Kibana
    $ ./bin/kibana-plugin install x-pack

    • Running kibana
    $ ./bin/kibana

    • Navigate to Kibana at http://localhost:5601/
    • Log in as the built-in user elastic and password changeme.
    • You will see the below screen:
    Kibana: X-Pack Welcome Page

     

    3. Metricbeat:

    Metricbeat helps in monitoring servers and the services they host by collecting metrics from the operating system and services. We will use it to get CPU utilization metrics of our local system in this blog.

    • Download Metric Beat’s tarball and untar it
    $ wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-5.5.1-linux-x86_64.tar.gz
    $ tar -xvzf metricbeat-5.5.1-linux-x86_64.tar.gz

    • It will create a folder metricbeat-5.5.1-linux-x86_64. Go to the folder
    $ cd metricbeat-5.5.1-linux-x86_64

• By default, Metricbeat is configured to send the collected data to Elasticsearch running on localhost. If your Elasticsearch is hosted on another server, change the IP and the authentication credentials in the metricbeat.yml file.
     Metricbeat Config

     

• Metricbeat provides the following stats:
  – System load
  – CPU stats
  – IO stats
  – Per filesystem stats
  – Per CPU core stats
  – File system summary stats
  – Memory stats
  – Network stats
  – Per process stats
    • Start Metricbeat as daemon process
    $ sudo ./metricbeat -e -c metricbeat.yml &

Now, all the setup is done. Next, in Step II, we will prepare the time series data, and then create machine learning jobs in Step III.

    Step II: Time Series data

• Real-time data: Metricbeat provides us the real-time series data which will be used for unsupervised learning. Follow the steps below to define the index pattern metricbeat-* in Kibana so you can search against this pattern in Elasticsearch:
      – Go to Management -> Index Patterns  
      – Provide Index name or pattern as metricbeat-*
      – Select Time filter field name as @timestamp
      – Click Create

You will not be able to create the index pattern if Elasticsearch does not contain any Metricbeat data. Make sure Metricbeat is running and its output is configured to point to your Elasticsearch.
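You can quickly confirm that Metricbeat data has arrived with a request like the one below (assuming a local Elasticsearch with the default elastic/changeme credentials); it should list one or more metricbeat-* indices:

$ curl -u elastic:changeme 'http://localhost:9200/_cat/indices/metricbeat-*?v'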

• Saved historic data: To quickly see how machine learning detects anomalies, you can also use the sample data provided by Elastic. Download the sample data by clicking here.
• Unzip the files in a folder: tar -zxvf server_metrics.tar.gz
• Download this script. It will be used to upload the sample data to Elasticsearch.
• Provide execute permissions to the file: chmod +x upload_server-metrics.sh
• Run the script (you can verify the upload as shown below).
• Just as we created an index pattern for the Metricbeat data, create an index pattern server-metrics* for the sample data.
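Once the script finishes, a quick way to confirm the upload (again assuming a local cluster with the default credentials) is to count the documents matching the new index pattern:

$ curl -u elastic:changeme 'http://localhost:9200/server-metrics*/_count'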

    Step III: Creating Machine Learning jobs

There are two scenarios in which data is considered anomalous. First, when the behavior of a key indicator changes over time relative to its own previous behavior. Second, when the behavior of an entity within a population deviates from the other entities in the population for a single key indicator.

    To detect these anomalies, there are three types of jobs we can create:

1. Single metric job: This job detects Scenario 1 anomalies over only one key performance indicator.
2. Multi-metric job: A multi-metric job also detects Scenario 1 anomalies, but in this type of job we can track more than one performance indicator, such as CPU utilization along with memory utilization.
3. Advanced job: This kind of job is created to detect Scenario 2 anomalies.

For simplicity, we are creating the following single metric jobs:

1. Tracking CPU utilization: using Metricbeat data
2. Tracking total requests made on the server: using the sample server data

Follow the steps below to create the single metric jobs:

    Job1: Tracking CPU Utilization

    Job2: Tracking total requests made on server

    • Go to http://localhost:5601/
    • Go to Machine learning tab on the left panel of Kibana.
    • Click on Create new job
    • Click Create single metric job
• Select the index pattern we created in Step II, i.e., metricbeat-* for Job 1 and server-metrics* for Job 2 respectively
• Configure the jobs by providing the following values:
1. Aggregation: Select the aggregation function that will be applied to the particular field of data we are analyzing.
2. Field: A drop-down that shows all the fields available for the selected index pattern.
3. Bucket span: The time interval for the analysis. The aggregation function will be applied to the selected field over every interval specified here.
• If your data contains many empty buckets (i.e., the data is sparse) and you don’t want that to be considered anomalous, check the checkbox named sparse data (if it appears).
• Click on the Use full <index pattern> data button to use all available data for analysis.
    Metricbeats Description
    Server Description
    • Click on play symbol
    • Provide job name and description
    • Click on Create Job

After the job is created, the available data will be analyzed. Click on View Results and you will see a chart showing the actual value along with the upper and lower bounds of the predicted value. If the actual value lies outside this range, it is considered anomalous. The color of the circles represents the severity level.

Here we get a wide range of predicted values since the model has just started learning. As we get more data, the predictions will get better.
You can see here that the predictions are pretty good, since there is a lot of data from which to learn the pattern.
• Click on the Machine Learning tab in the left panel. The jobs we created will be listed here.
• You will see a list of actions for every job you have created.
• Since Metricbeat stores data every minute for Job 1, we can feed the data to the job in real time. Click on the play button to start the datafeed. As more and more data comes in, the predictions will improve.
• You can see the details of the anomalies by clicking Anomaly Viewer.
    Anomaly in metricbeats data
    Server metrics anomalies  
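If you prefer the API over the UI, the X-Pack machine learning endpoints can also be queried directly; for example, the request below lists the configured jobs and their data counts (a sketch assuming a local cluster with the default elastic/changeme credentials):

$ curl -u elastic:changeme 'http://localhost:9200/_xpack/ml/anomaly_detectors?pretty'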

We have seen how machine learning can be used to learn patterns across different statistics and detect anomalies. After identifying anomalies, we need to find the context of those events, for example, to understand what other factors are contributing to the problem. In such cases, we can troubleshoot further by creating multi-metric jobs.

  • Idiot-proof Coding with Node.js and Express.js

Node.js has become the most popular framework for web development, surpassing Ruby on Rails and Django in popularity. The growing popularity of full-stack development, along with the performance benefits of asynchronous programming, has driven Node’s rise. Express.js is a minimalistic, unopinionated, and the most popular web framework built for Node, and it has become the de-facto framework for many projects.
Note — This article is about building a RESTful API server with Express.js. I won’t be delving into a templating library like Handlebars to manage the views.

A quick search on Google will lead you to a ton of articles agreeing with what I just said, which could validate the theory. Your next step would be to go through a couple of videos about Express.js on YouTube, try hello world with a boilerplate template, choose a few recommended middleware for Express (Helmet, Multer, etc.), pick an ORM (Mongoose if you are using Mongo, or Sequelize if you are using a relational DB), and start building the APIs. Wow, that was so fast!

The problem starts to appear after a few weeks, when your code gets larger and more complex and you realise that there is no standard coding practice followed across the client and the server code, refactoring or updating the code breaks something else, versioning of the APIs becomes difficult, and callbacks have made your life hell (you are smart if you are using Promises, but have you heard of async-await?).

Do you think your code is not so idiot-proof anymore? Don’t worry! You aren’t the only one who feels this way.

    Let me break the suspense and list down the technologies and libraries used in our idiot-proof code before you get restless.

    1. Node 8.11.3: This is the latest LTS release from Node. We are using all the ES6 features along with async-await. We have the latest version of ExpressJs (4.16.3).
2. Typescript: It adds an optional static typing interface to JavaScript and also gives us familiar constructs like classes (ES6 also provides class as a construct), which makes it easier to maintain a large codebase.
    3. Swagger: It provides a specification to easily design, develop, test and document RESTful interfaces. Swagger also provides many open source tools like codegen and editor that makes it easy to design the app.
    4. TSLint: It performs static code analysis on Typescript for maintainability, readability and functionality errors.
5. Prettier: It is an opinionated code formatter which maintains a consistent style throughout the project. It only takes care of styling, like indentation (2 or 4 spaces) and whether arguments remain on the same line or move to the next line when the line length exceeds 80 characters.
    6. Husky: It allows you to add git hooks (pre-commit, pre-push) which can trigger TSLint, Prettier or Unit tests to automatically format the code and to prevent the push if the lint or the tests fail.

    Before you move to the next section I would recommend going through the links to ensure that you have a sound understanding of these tools.

    Now I’ll talk about some of the challenges we faced in some of our older projects and how we addressed these issues in the newer projects with the tools/technologies listed above.

    Formal API definition

A problem that everyone can relate to is the lack of formal documentation in a project. Swagger addresses part of this problem with its OpenAPI specification, which defines a standard for designing REST APIs that can be discovered by both machines and humans. As a practice, we first design the APIs in Swagger before writing the code. This has three benefits:

• It helps us to focus only on the design without having to worry about the code, scaffolding, naming conventions, etc. Our API designs are consistent with the implementation because of this focused approach.
    • We can leverage tools like swagger-express-mw to internally wire the routes in the API doc to the controller, validate request and response object from their definitions etc.
    • Collaboration between teams becomes very easy, simple and standardised because of the Swagger specification.

    Code Consistency

We wanted our code to look consistent across the stack (UI and backend), and we use ESLint to enforce this consistency.
Example –
Node traditionally used “require” and the UI-based frameworks used “import”-based syntax to load modules. We decided to follow the ES6 style across the project, and these rules are defined with ESLint.

Note — We have made slight adjustments to the TSLint config for the backend and the frontend to make it easy for the developers. For example, we allow up to 120 characters per line in React, as some of our DOM-related code gets lengthy very easily.

    Code Formatting

This is as important as maintaining code consistency in the project. It’s easy to read code which follows a consistent format for indentation, spaces, line breaks, etc. Prettier does a great job at this. We have also integrated Prettier with TypeScript to highlight formatting errors along with linting errors. IDEs like VS Code also have a Prettier plugin which supports features like auto-format to make this easy.

    Strict Typing

TypeScript can be leveraged best only if the application follows strict typing. We try to enforce it as much as possible, with exceptions made in some cases (mostly when a third-party library doesn’t have a type definition). This has the following benefits:

    • Static code analysis works better when your code is strongly typed. We discover about 80–90% of the issues before compilation itself using the plugins mentioned above.
• Refactoring and enhancements become very simple with TypeScript. We first update the interface or the function definition and then follow the errors thrown by the TypeScript compiler to refactor the code.

    Git Hooks

Husky’s “pre-push” hook runs TSLint to ensure that we don’t push code with linting issues. If you follow TDD (in the way it’s supposed to be done), then you can also run unit tests before pushing the code (a minimal sketch of such a hook is shown after this list). We decided to go with pre-push hooks because:
– Not everyone has CI from the very first day. With a git hook, we at least have some code quality checks from day one.
– Running lint and unit tests on the dev’s system leaves your CI with more resources to run integration and other complex tests which are not possible in a local environment.
– You force the developer to fix issues at the earliest, which results in better code quality, faster code merges, and faster releases.
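The effect of such a hook boils down to a few shell commands. This is a rough sketch, assuming your package.json defines lint and test scripts; Husky normally wires this up for you from its own configuration rather than you editing the hook file by hand:

#!/bin/sh
# Sketch of a pre-push hook: abort the push if linting or unit tests fail.
# Assumes "npm run lint" runs TSLint and "npm run test" runs the unit tests.
npm run lint || exit 1
npm run test || exit 1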

    Async-await

We were using Promises in our project for all the asynchronous operations. Promises would often lead to a long chain of then/catch blocks, which is not very comfortable to read and often resulted in bugs when the chain got very long (it goes without saying that Promises are much better than the callback pattern). Async-await provides a very clean syntax for writing asynchronous operations that reads just like sequential code. We have seen a drastic improvement in code quality, fewer bugs, and better readability after moving to async-await.

    Hope this article gave you some insights into tools and libraries that you can use to build a scalable ExpressJS app.

  • Cloud Native Applications — The Why, The What & The How

Cloud-native is an approach to building and running applications that can leverage the advantages of the cloud computing model — on-demand computing power and a pay-as-you-go pricing model. These applications are built and deployed at a rapid cadence to the cloud platform and offer organizations greater agility, resilience, and portability across clouds.

This blog explains the importance and the benefits of Cloud Native applications, and how to go about building them.

    CLOUD NATIVE – The Why?

Early technology adopters like FANG (Facebook, Amazon, Netflix & Google) have some common themes when it comes to shipping software. They have invested heavily in building capabilities that enable them to release new features regularly (weekly, daily, or in some cases even hourly). They have achieved this rapid release cadence while supporting safe and reliable operation of their applications, in turn allowing them to respond more effectively to their customers’ needs.

They have achieved this level of agility by moving beyond ad-hoc automation and by adopting cloud native practices that deliver these predictable capabilities. DevOps, Continuous Delivery, microservices & containers form the 4 main tenets of Cloud Native patterns. All of them have the same overarching goal of making application development and operations teams more efficient through automation.

At this point though, these techniques have mostly been proven at the aforementioned software-driven companies. Smaller, more agile companies are also realising the value here. However, as per Joe Beda (creator of Kubernetes & CTO at Heptio), there are very few examples of this philosophy being applied outside these technology-centric companies.

    Any team/company shipping products should seriously consider adopting Cloud Native practices if they want to ship software faster while reducing risk and in turn delighting their customers.

    CLOUD NATIVE – The What?

Cloud Native practices comprise 4 main tenets.

     

    Cloud native — main tenets
    • DevOps is the collaboration between software developers and IT operations with the goal of automating the process of software delivery & infrastructure changes.
• Continuous Delivery enables applications to be released quickly, reliably & frequently, with less risk.
    • Micro-services is an architectural approach to building an application as a collection of small independent services that run on their own and communicate over HTTP APIs.
• Containers provide light-weight virtualization by dynamically dividing a single server into one or more isolated containers. Containers offer both efficiency & speed compared to standard Virtual Machines (VMs). Containers provide the ability to manage and migrate the application dependencies along with the application, while abstracting away the OS and the underlying cloud platform in many cases.

    The benefits that can be reaped by adopting these methodologies include:

1. Self-managing infrastructure through automation: The Cloud Native practice goes beyond ad-hoc automation built on top of virtualization platforms; instead it focuses on orchestration, management and automation of the entire infrastructure right up to the application tier.
2. Reliable infrastructure & applications: Cloud Native practice makes it much easier to handle churn, replace failed components and recover from unexpected events & failures.
3. Deeper insights into complex applications: Cloud Native tooling provides visualization for health management, monitoring and notifications, with audit logs making applications easy to audit & debug.
4. Security: Enables developers to build security into applications from the start rather than as an afterthought.
5. More efficient use of resources: Containers are lighter in weight than full VMs. Deploying applications in containers leads to increased resource utilization.

Software teams have grown in size, and the number of applications and tools that a company needs to build has grown 10x over the last few years. Microservices break large, complex applications into smaller pieces so that they can be developed, tested and managed independently. This enables a single microservice to be updated or rolled back without affecting other parts of the application. Also, software teams nowadays are distributed, and microservices enable each team to own a small piece, with service contracts acting as the communication layer.

    CLOUD NATIVE – The How?

Now, let’s look at the various building blocks of the cloud native stack that help achieve the goals described above. Here, we have grouped tools & solutions as per the problem they solve. We start with the infrastructure layer at the bottom, then the tools used to provision the infrastructure, following which we have the container runtime environment; above that we have tools to manage clusters of container environments, and at the very top we have the tools and frameworks used to develop the applications.

    1. Infrastructure: At the very bottom, we have the infrastructure layer which provides the compute, storage, network & operating system usually provided by the Cloud (AWS, GCP, Azure, Openstack, VMware).

    2. Provisioning: The provisioning layer consists of automation tools that help in provisioning the infrastructure, managing images and deploying the application. Chef, Puppet & Ansible are the DevOps tools that give the ability to manage their configuration & environments. Spinnaker, Terraform, Cloudformation provide workflows to provision the infrastructure. Twistlock, Clair provide the ability to harden container images.

    3. Runtime: The Runtime provides the environment in which the application runs. It consists of the Container Engines where the application runs along with the associated storage & networking. containerd & rkt are the most widely used Container engines. Flannel, OpenContrail provide the necessary overlay networking for containers to interact with each other and the outside world while Datera, Portworx, AppOrbit etc. provide the necessary persistent storage enabling easy movement of containers across clouds.

4. Orchestration and Management: Tools like Kubernetes, Docker Swarm and Apache Mesos abstract the management of container clusters, allowing easy scheduling & orchestration of containers across multiple hosts. etcd and Consul provide service registries for discovery, while AVI and Envoy provide proxy, load balancing and related services.

5. Application Definition & Development: We can build microservices for applications across multiple languages — Python, Spring/Java, Ruby, Node. Packer, Habitat & Bitnami provide image management for the application to run across all infrastructure — container or otherwise.
Jenkins, TravisCI, CircleCI and other build automation servers provide the capability to set up continuous integration and delivery pipelines.

    6. Monitoring, Logging & Auditing: One of the key features of managing Cloud Native Infrastructure is the ability to monitor & audit the applications & underlying infrastructure.

    All modern monitoring platforms like Datadog, Newrelic, AppDynamic support monitoring of containers & microservices.

Splunk, Elasticsearch & fluentd help in log aggregation, while OpenTracing and Zipkin help in debugging applications.

7. Culture: Adopting cloud native practices needs a cultural change where teams no longer work in independent silos. End-to-end automation of software delivery pipelines is only possible when there is increased collaboration between development and IT operations teams with a shared responsibility.

    When we put all the pieces together we get the complete Cloud Native Landscape as shown below.

    Cloud Native Landscape

I hope this post gives an idea of why Cloud Native is important and what the main benefits are. As you may have noticed in the above infographic, there are several projects, tools & companies trying to solve similar problems. The next questions on your mind most likely will be: How do I get started? Which tools are right for me? And so on. I will cover these topics and more in my following blog posts. Stay tuned!

    Please let us know what you think by adding comments to this blog or reaching out to chirag_jog or Velotio on Twitter.

    Learn more about what we do at Velotio here and how Velotio can get you started on your cloud native journey here.

    References:

  • Eliminate Render-blocking Resources using React and Webpack

    In the previous blog, we learned how a browser downloads many scripts and useful resources to render a webpage. But not all of them are necessary to show a page’s content. Because of this, the page rendering is delayed. However, most of them will be needed as the user navigates through the website’s various pages.

    In this article, we’ll learn to identify such resources and classify them as critical and non-critical. Once identified, we’ll inline the critical resources and defer the non-critical resources.

    For this blog, we’ll use the following tools:

    • Google Lighthouse and other Chrome DevTools to identify render-blocking resources.
    • Webpack and CRACO to fix it.

    Demo Configuration

    For the demo, I have added the JavaScript below to the <head></head> of index.html as a render-blocking JS resource. This script loads two more CSS resources on the page.

<script src="https://use.fontawesome.com/3ec06e3d93.js"></script>

    Other configurations are as follows:

    • Create React App v4.0
    • Formik and Yup for handling form validations
    • Font Awesome and Bootstrap
    • Lazy loading and code splitting using Suspense, React lazy, and dynamic import
    • CRACO
    • html-critical-webpack-plugin
    • ngrok and serve for serving build

    Render-Blocking Resources

    A render-blocking resource typically refers to a script or link that prevents a browser from rendering the processed content.

    Lighthouse will flag the below as render-blocking resources:

• A <script> tag in <head> that doesn’t have a defer or async attribute.
• A <link rel="stylesheet"> tag that doesn’t have a media attribute matching the user’s device or a disabled attribute hinting to the browser not to download it if unnecessary.
• A <link rel="import"> that doesn’t have an async attribute.

    Identifying Render-Blocking Resources

    To reduce the impact of render-blocking resources, find out what’s critical for loading and what’s not.

    To do that, we’re going to use the Coverage Tab in Chrome DevTools. Follow the steps below:

    1. Open the Chrome DevTools (press F12)

2. Go to the Sources tab and press Ctrl+Shift+P (Cmd+Shift+P on macOS) to open the Command Menu

The screenshot below was taken on macOS.

    3. Search for Show Coverage and select it, which will show the Coverage tab below. Expand the tab.

    4. Click on the reload button on the Coverage tab to reload the page and start instrumenting the coverage of all the resources loading on the current page.

    5. After capturing the coverage, the resources loaded on the page will get listed (refer to the screenshot below). This will show you the code being used vs. the code loaded on the page.

    The list will display coverage in 2 colors:

    a. Green (critical) – The code needed for the first paint

    b. Red (non-critical) – The code not needed for the first paint.

    After checking each file and the generated index.html after the build, I found three primary non-critical files –

a. 5.20aa2d7b.chunk.css – 98% non-critical code

b. https://use.fontawesome.com/3ec06e3d93.js – 69.8% non-critical code. This script loads the CSS below –

    1. font-awesome-css.min.css – 100% non-critical code

    2. https://use.fontawesome.com/3ec06e3d93.css – 100% non-critical code

    c. main.6f8298b5.chunk.css – 58.6% non-critical code

The above resources satisfy the conditions of a render-blocking resource and hence are flagged by the Lighthouse Performance report as an opportunity to eliminate render-blocking resources (refer to the screenshot). You can reduce the page size by only shipping the code that you need.

    Solution

Once you’ve identified the critical and non-critical code, it is time to extract the critical part as an inline resource in index.html and defer the non-critical part using the webpack plugin configuration.

    For Inlining and Preloading CSS: 

Use html-critical-webpack-plugin to inline the critical CSS into index.html. This will generate a <style> tag in the <head> with the critical CSS stripped out of the main CSS chunk, and it will preload the main file.

    const path = require('path');
    const { whenProd } = require('@craco/craco');
    const HtmlCriticalWebpackPlugin = require('html-critical-webpack-plugin');
    
    module.exports = {
      webpack: {
        configure: (webpackConfig) => {
          return {
            ...webpackConfig,
            plugins: [
              ...webpackConfig.plugins,
              ...whenProd(
                () => [
                  new HtmlCriticalWebpackPlugin({
                    base: path.resolve(__dirname, 'build'),
                    src: 'index.html',
                    dest: 'index.html',
                    inline: true,
                    minify: true,
                    extract: true,
                    width: 320,
                    height: 565,
                    penthouse: {
                      blockJSRequests: false,
                    },
                  }),
                ],
                []
              ),
            ],
          };
        },
      },
    };

    Once done, create a build and deploy. Here’s a screenshot of the improved opportunities:

    To use CRACO, refer to its README file.

    NOTE: If you’re planning to use the critters-webpack-plugin please check these issues first: Could not find HTML asset and Incompatible with html-webpack-plugin v4.

    For Deferring Routes/Pages:

    Use lazy-loading and code-splitting techniques along with webpack’s magic comments as below to preload or prefetch a route/page according to your use case.

    import { Suspense, lazy } from 'react';
    import { Redirect, Route, Switch } from 'react-router-dom';
    import Loader from '../../components/Loader';
    
    import './style.scss';
    
    const Login = lazy(() =>
      import(
        /* webpackChunkName: "login" */ /* webpackPreload: true */ '../../containers/Login'
      )
    );
    const Signup = lazy(() =>
      import(
        /* webpackChunkName: "signup" */ /* webpackPrefetch: true */ '../../containers/Signup'
      )
    );
    
    const AuthLayout = () => {
      return (
        <Suspense fallback={<Loader />}>
          <Switch>
            <Route path="/auth/login" component={Login} />
            <Route path="/auth/signup" component={Signup} />
            <Redirect from="/auth" to="/auth/login" />
          </Switch>
        </Suspense>
      );
    };
    
    export default AuthLayout;

    The magic comments enable webpack to add correct attributes to defer the scripts according to the use-case.

    For Deferring External Scripts:

    For those who are using a version of webpack lower than 5, use script-ext-html-webpack-plugin or resource-hints-webpack-plugin.

    I would recommend following the simple way given below to defer an external script.

    // Add defer/async attribute to external render-blocking script
    <script async defer src="https://use.fontawesome.com/3ec06e3d93.js"></script>

The defer and async attributes can be specified on an external script. If both are present, async takes precedence, and older browsers that do not support async will fall back to the defer behaviour.

    If you want to know more about the async/defer, read the further reading section.

    Along with defer/async, we can also use media attributes to load CSS conditionally.

It’s also suggested to load fonts locally instead of using the full CDN in case we don’t need all the font-face rules added by font providers.

    Now, let’s create and deploy the build once more and check the results.
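To reproduce the audit locally, you can rebuild the production bundle and serve it with the serve package mentioned in the demo configuration (a rough sketch; the listen port is up to you):

$ npm run build
$ npx serve -s build -l 3000

Then run Lighthouse against the served URL (or expose it via ngrok) and re-check the opportunities.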

The opportunity to eliminate render-blocking resources no longer shows up in the list.

    We have finally achieved our goal!

    Final Thoughts

    The above configuration is a basic one. You can read the libraries’ docs for more complex implementation.

    Let me know if this helps you eliminate render-blocking resources from your app.

    If you want to check out the full implementation, here’s the link to the repo. I have created two branches—one with the problem and another with the solution. Read the further reading section for more details on the topics.

    Hope this helps.

    Happy Coding!

    Further Reading

  • Installing Redis Cluster with Persistent Storage on Mesosphere DC/OS

In the first part of this blog, we saw how to install a standalone Redis service on DC/OS with persistent storage using RexRay and AWS EBS volumes.

A single server is a single point of failure in any system, so to ensure high availability of the Redis database, we can deploy a master-slave cluster of Redis servers. In this blog, we will see how to set up such a 6-node (3 master, 3 slave) Redis cluster and persist data using RexRay and AWS EBS volumes. After that, we will see how to import existing data into this cluster.

    Redis Cluster

Redis Cluster is a form of replicated Redis servers in a multi-master architecture. All the data is sharded into 16384 slots, where every master node is assigned a subset of slots (generally evenly sharded) and each master is replicated by its slaves. It provides more resilience and scaling for production-grade deployments where heavy workloads are expected. Applications can connect to any node in cluster mode, and the request will be redirected to the respective master node.

     Source:  Octo

         

Objective: To create a Redis cluster with a number of services in a DC/OS environment with persistent storage, and import the existing Redis dump.rdb data into the cluster.

Prerequisites:

• Make sure the RexRay component is running and in a healthy state on the DC/OS cluster.

    Steps:

• As per the Redis documentation, a minimal cluster should have at least 3 master and 3 slave nodes, making a total of 6 Redis services.
• All services will use a similar JSON configuration, with changes only to the service name, the external volume name, and the port mappings.
• We will deploy one Redis service for each Redis cluster node, and once all services are running, we will form the cluster among them.
• We will use the host network for the Redis node containers; for that, we will restrict each Redis node to run on a particular DC/OS node. This helps us troubleshoot the cluster (fixed IPs, so we can restart a Redis node at any time without data loss).
• Using the host network adds a prerequisite: the number of DC/OS nodes must be >= the number of Redis nodes.
1. First create the Redis node services on DC/OS:
2. Click on the Add button in the Services tab of the DC/OS UI
• Click on JSON configuration
• Add the JSON config below for the Redis service, changing the values written in BLOCK letters with # as prefix and suffix.
• #NODENAME# – Name of the Redis node (Ex. redis-node-1)
• #NODEHOSTIP# – IP of the DC/OS node on which this Redis node will run. This IP must be unique for each Redis node. (Ex. 10.2.12.23)
• #VOLUMENAME# – Name of the persistent volume; give a name that identifies the volume on AWS EBS (Ex. <dcos-cluster-name>-redis-node-<node-number>)
• #NODEVIP# – VIP for the Redis node. It must be ‘Redis’ for the first Redis node; for the others it can be the same as NODENAME (Ex. redis-node-2)
    {
       "id": "/#NODENAME#",
       "backoffFactor": 1.15,
       "backoffSeconds": 1,
       "constraints": [
         [
           "hostname",
           "CLUSTER",
           "#NODEHOSTIP#"
         ]
       ],
       "container": {
         "type": "DOCKER",
         "volumes": [
           {
             "external": {
               "name": "#VOLUMENAME#",
               "provider": "dvdi",
               "options": {
                 "dvdi/driver": "rexray"
               }
             },
             "mode": "RW",
             "containerPath": "/data"
           }
         ],
         "docker": {
           "image": "parvezkazi13/redis:latest",
           "forcePullImage": false,
           "privileged": false,
           "parameters": []
         }
       },
       "cpus": 0.5,
       "disk": 0,
       "fetch": [],
       "healthChecks": [],
       "instances": 1,
       "maxLaunchDelaySeconds": 3600,
       "mem": 4096,
       "gpus": 0,
       "networks": [
         {
           "mode": "host"
         }
       ],
       "portDefinitions": [
         {
           "labels": {
             "VIP_0": "/#NODEVIP#:6379"
           },
           "name": "#NODEVIP#",
           "protocol": "tcp",
           "port": 6379
         }
       ],
       "requirePorts": true,
       "upgradeStrategy": {
         "maximumOverCapacity": 0,
         "minimumHealthCapacity": 0.5
       },
       "killSelection": "YOUNGEST_FIRST",
       "unreachableStrategy": {
         "inactiveAfterSeconds": 300,
         "expungeAfterSeconds": 600
       }
     }

• After updating the highlighted fields, copy the above JSON into the JSON configuration box and click the ‘Review & Run’ button in the right corner. This will start the service with the above configuration.
• Once the above service is up and running, repeat steps 2 to 4 for each Redis node with the respective values for the highlighted fields.
• So if we go with a 6-node cluster, at the end we will have 6 Redis nodes up and running, like:

Note: Since we are using external volumes for persistent storage, we cannot scale our services, i.e., each service can have at most one instance. If we try to scale, we will get the error below:

    2. Form the Redis cluster between Redis node services:

• To create or manage the Redis cluster, first deploy the redis-cluster-util container on DC/OS using the JSON config below:
    {
     "id": "/infrastructure/redis-cluster-util",
     "backoffFactor": 1.15,
     "backoffSeconds": 1,
     "constraints": [],
     "container": {
       "type": "DOCKER",
       "volumes": [
         {
           "containerPath": "/backup",
           "hostPath": "backups",
           "mode": "RW"
         }
       ],
       "docker": {
         "image": "parvezkazi13/redis-util",
         "forcePullImage": true,
         "privileged": false,
         "parameters": []
       }
     },
     "cpus": 0.25,
     "disk": 0,
     "fetch": [],
     "instances": 1,
     "maxLaunchDelaySeconds": 3600,
     "mem": 4096,
     "gpus": 0,
     "networks": [
       {
         "mode": "host"
       }
     ],
     "portDefinitions": [],
     "requirePorts": true,
     "upgradeStrategy": {
       "maximumOverCapacity": 0,
       "minimumHealthCapacity": 0.5
     },
     "killSelection": "YOUNGEST_FIRST",
     "unreachableStrategy": {
       "inactiveAfterSeconds": 300,
       "expungeAfterSeconds": 600
     },
     "healthChecks": []
    }

This will run the service as:

• Get the IP addresses of all Redis nodes to form the cluster, as a Redis cluster cannot be created with node hostnames/DNS. This is an open issue.

Since we are using the host network, we need the DC/OS node IPs on which the Redis nodes are running.

Get all the Redis node IPs using:

NODE_BASE_NAME=redis-node
dcos task $NODE_BASE_NAME | grep -E "$NODE_BASE_NAME-[0-9]" | awk '{print $2":6379"}' | paste -s -d' '

Here redis-node is the prefix used for all Redis node service names.

Note the output of this command; we will use it in the further steps.

• Get the node where the redis-cluster-util container is running and SSH into that DC/OS node using:
    dcos node ssh --master-proxy --private-ip $(dcos task | grep "redis-cluster-util" | awk '{print $2}')

• Now find the Docker container ID of redis-cluster-util and exec into it using:
    docker exec -it $(docker ps -qf ancestor="parvezkazi13/redis-util") bash  

• Now we are inside the redis-cluster-util container. Run the command below to form the Redis cluster.
    redis-trib.rb create --replicas 1 <Space separated IP address:PORT pair of all Redis nodes>

• Here, use the Redis node IP addresses retrieved earlier.
    redis-trib.rb create --replicas 1 10.0.1.90:6379 10.0.0.19:6379 10.0.9.203:6379 10.0.9.79:6379 10.0.3.199:6379 10.0.9.104:6379

• Parameters:
• The option --replicas 1 means that we want a slave for every master created.
• The other arguments are the list of addresses (host:port) of the instances we want to use to create the new cluster.
• Output:
• Select ‘yes’ when it prompts you to accept the slot configuration shown.
• Run the command below to check the status of the newly created cluster
    redis-trib.rb check <Any redis node host:PORT>
    Ex:
    redis-trib.rb check 10.0.1.90:6379

    • Parameters:
    • host:port of any node from the cluster.
    • Output:
• If all is OK, it will show OK with the status; otherwise it will show ERR with the error message.

    3. Import existing dump.rdb to Redis cluster

    • At this point, all the Redis nodes should be empty and each one should have an ID and some assigned slots:

Before reusing an existing data dump, we have to reshard all slots to one instance. We specify the number of slots to move (all, so 16384), the ID of the node we move them to (here Node 1 – 10.0.1.90:6379), and where we take these slots from (all other nodes).

    redis-trib.rb reshard 10.0.1.90:6379  

    Parameters:

    host:port of any node from the cluster.

    Output:

It will prompt for the number of slots to move – here all of them, i.e. 16384.

Receiving node ID – here the ID of node 10.0.1.90:6379 (redis-node-1).

Source node IDs – here all, as we want to move all slots to one node.

When prompted to proceed, type ‘yes’.

    • Now check again node 10.0.1.90:6379  
    redis-trib.rb check 10.0.1.90:6379  

    Parameters: host:port of any node from the cluster.

    Output: it will show all (16384) slots moved to node 10.0.1.90:6379

• The next step is importing our existing Redis dump data.

Now copy the existing dump.rdb to our redis-cluster-util container using the steps below:

– Copy the existing dump.rdb to the DC/OS node on which the redis-cluster-util container is running. You can use scp from any other public server to the DC/OS node.

– Now that we have dump.rdb on our DC/OS node, copy it to the redis-cluster-util container using the command below:

    docker cp dump.rdb $(docker ps -qf ancestor="parvezkazi13/redis-util"):/data

Now that we have dump.rdb inside the redis-cluster-util container, we can import it into our Redis cluster. Exec into the redis-cluster-util container using:

    docker exec -it $(docker ps -qf ancestor="parvezkazi13/redis-util") bash

This attaches to the already running redis-cluster-util container and starts a bash shell inside it.

Run the command below to import dump.rdb into the Redis cluster:

    rdb --command protocol /data/dump.rdb | redis-cli --pipe -h 10.0.1.90 -p 6379

    Parameters:

    Path to dump.rdb

    host:port of any node from the cluster.

    Output:

    If successful, you’ll see something like:

All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 4341259

    as well as this in the Redis server logs:

95086:M 01 Mar 21:53:42.071 * 10000 changes in 60 seconds. Saving...
95086:M 01 Mar 21:53:42.072 * Background saving started by pid 98223
98223:C 01 Mar 21:53:44.277 * DB saved on disk

WARNING:
Just as an Oracle DB instance can have multiple databases, Redis saves keys in keyspaces (databases).
When Redis is in cluster mode, it does not accept dumps that contain more than one keyspace. As per the documentation:

“Redis Cluster does not support multiple databases like the stand alone version of Redis. There is just database 0 and the SELECT command is not allowed.”

So while importing such a multi-keyspace Redis dump, the server fails to start with the issue below:

    23049:M 16 Mar 17:21:17.772 * DB loaded from disk: 5.222 seconds
    23049:M 16 Mar 17:21:17.772 # You can't have keys in a DB different than DB 0 when in Cluster mode. Exiting.
Solution / Workaround:

There is a redis-cli command, “MOVE”, to move keys from one keyspace to another.

You can also run the command below to move all the keys from keyspace 1 to keyspace 0:

    redis-cli -h "$HOST" -p "$PORT" -n 1 --raw keys "*" |  xargs -I{} redis-cli -h "$HOST" -p "$PORT" -n 1 move {} 0

• Verify the import status using the command below (inside the redis-cluster-util container):
    redis-cli -h 10.0.1.90 -p 6379 info keyspace

It will run the Redis INFO command on node 10.0.1.90:6379 and fetch the keyspace information, like below:

    # Keyspace
    db0:keys=33283,expires=0,avg_ttl=0

• Now reshard all the slots evenly across all master instances

    The reshard command will again list the existing nodes, their IDs and the assigned slots.

    redis-trib.rb reshard 10.0.1.90:6379

    Parameters:

    host:port of any node from the cluster.

    Output:

It will prompt for the number of slots to move – here 16384 / 3 masters = 5461.

Receiving node ID – here the ID of master node 2.

Source node IDs – the ID of the first instance, which currently has all the slots (master 1).

When prompted to proceed, type ‘yes’.

Repeat the above step and, for the receiving node ID, give the ID of master node 3.

• After the above step, all 3 masters will have an equal number of slots, and the imported keys will be distributed among the master nodes.
• Put keys into the cluster for verification
    redis-cli -h 10.0.1.90 -p 6379 set foo bar
    OK
    redis-cli -h 10.0.1.90 -p 6379 set foo bar
    (error) MOVED 4813 10.0.9.203:6379

The error above shows that the server stored this key on instance 10.0.9.203:6379, so the client was redirected. To follow the redirection, use the flag “-c”, which enables cluster mode, like:

    redis-cli -h 10.0.1.90 -p 6379 -c set foo bar
    OK

    Redis Entrypoint

The application entry point for the Redis cluster mostly depends on how your Redis client handles cluster support. Generally, connecting to one of the master nodes should do the job.

Use the host:port below in your applications:

    redis.marathon.l4lb.thisdcos.directory:6379
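For a quick connectivity check (for example, from inside the redis-cluster-util container), you can ping the VIP in cluster mode; this is a sketch that assumes the VIP above resolves on your DC/OS network:

redis-cli -c -h redis.marathon.l4lb.thisdcos.directory -p 6379 ping

A PONG reply confirms that the entry point is reachable.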

    Automation of Redis Cluster Creation

We have an automation script in place to deploy 6 Redis node services and form a cluster between them.

    Script location: Github

• It deploys 6 Marathon apps for the 6 Redis nodes. All are deployed on different DC/OS nodes, with CLUSTER_NAME as a prefix to the volume name.
• Once all nodes are up and running, it deploys the redis-cluster-util app, which will be used to form the Redis cluster.
• Then it prints the Redis nodes and their IP addresses and prompts the user to proceed with cluster creation.
• If the user chooses to proceed, it runs the redis-cluster-util app and creates the cluster using the collected IP addresses. The util container will prompt for some input that the user has to select.

    Conclusion

We learned about Redis cluster deployment on DC/OS with persistent storage using RexRay. We also learned how RexRay automatically manages volumes on AWS EBS and how to integrate them into DC/OS apps/services. We saw how to use the redis-cluster-util container to manage the Redis cluster for different purposes, like forming the cluster, resharding, and importing existing dump.rdb data. Finally, we looked at automating the whole cluster setup using the DC/OS CLI and bash.

    Reference

  • Mesosphere DC/OS Masterclass : Tips and Tricks to Make Life Easier

DC/OS is an open-source operating system and distributed system for the data center, built on the Apache Mesos distributed systems kernel. As a distributed system, it is a cluster of master nodes and private/public agent nodes, where each node also has a host operating system which manages the underlying machine.

    It enables the management of multiple machines as if they were a single computer. It automates resource management, schedules process placement, facilitates inter-process communication, and simplifies the installation and management of distributed services. Its included web interface and available command-line interface (CLI) facilitate remote management and monitoring of the cluster and its services.

• Distributed System: DC/OS is a distributed system with a group of private and public nodes which are coordinated by master nodes.
• Cluster Manager: DC/OS is responsible for running tasks on agent nodes and providing the required resources to them. DC/OS uses Apache Mesos to provide cluster management functionality.
• Container Platform: All DC/OS tasks are containerized. DC/OS uses two different container runtimes, Docker and Mesos, so containers can be started from Docker images, or they can be native executables (binaries or scripts) which are containerized at runtime by Mesos.
• Operating System: As the name suggests, DC/OS is an operating system which abstracts cluster hardware and software resources and provides common services to applications.

    Unlike Linux, DC/OS is not a host operating system. DC/OS spans multiple machines, but relies on each machine to have its own host operating system and host kernel.

The high-level architecture of DC/OS can be seen below:

    For the detailed architecture and components of DC/OS, please click here.

    Adoption and usage of Mesosphere DC/OS:

    Mesosphere customers include :

    • 30% of the Fortune 50 U.S. Companies
    • 5 of the top 10 North American Banks
    • 7 of the top 12 Worldwide Telcos
    • 5 of the top 10 Highest Valued Startups

    Some companies using DC/OS are :

    • Cisco
    • Yelp
    • Tommy Hilfiger
    • Uber
    • Netflix
    • Verizon
    • Cerner
    • NIO

    Installing and using DC/OS

A guide to installing DC/OS can be found here. After installing DC/OS on any platform, install the DC/OS CLI by following the documentation found here.

Using the DC/OS CLI, we can manage cluster nodes, manage Marathon tasks and services, and install/remove packages from the Universe. It also provides great support for automation, as each CLI command can output JSON.

NOTE: The tasks below were executed and tested with the following tools:

    • DC/OS 1.11 Open Source
    • DC/OS cli 0.6.0
    • jq:1.5-1-a5b5cbe

    DC/OS commands and scripts

    Setup DC/OS cli with DC/OS cluster

    dcos cluster setup <CLUSTER URL>

    Example :

    dcos cluster setup http://dcos-cluster.com

The above command will give you a link for OAuth authentication and prompt for an auth token. You can authenticate yourself with a Google, GitHub, or Microsoft account. Paste the token generated after authentication into the CLI prompt (provided OAuth is enabled).

    DC/OS authentication token

dcos config show core.dcos_acs_token

    DC/OS cluster url

    dcos config show core.dcos_url

    DC/OS cluster name

    dcos config show cluster.name

    Access Mesos UI

    <DC/OS_CLUSTER_URL>/mesos

    Example:

    http://dcos-cluster.com/mesos

    Access Marathon UI

    <DC/OS_CLUSTER_URL>/service/marathon

    Example:

    http://dcos-cluster.com/service/marathon

    Access any DC/OS service, like Marathon, Kafka, Elastic, Spark etc.[DC/OS Services]

    <DC/OS_CLUSTER_URL>/service/<SERVICE_NAME>

    Example:

    http://dcos-cluster.com/service/marathon
    http://dcos-cluster.com/service/kafka

    Access DC/OS slaves info in json using Mesos API [Mesos Endpoints]

    curl -H "Authorization: Bearer $(dcos config show 
    core.dcos_acs_token)" $(dcos config show 
    core.dcos_url)/mesos/slaves | jq

    Access DC/OS slaves info in json using DC/OS cli

    dcos node --json

Note: The DC/OS CLI command ‘dcos node --json’ is equivalent to querying the Mesos slaves endpoint (/mesos/slaves)

    Access DC/OS private slaves info using DC/OS cli

    dcos node --json | jq '.[] | select(.type | contains("agent")) | select(.attributes.public_ip == null) | "Private Agent : " + .hostname ' -r

    Access DC/OS public slaves info using DC/OS cli

    dcos node --json | jq '.[] | select(.type | contains("agent")) | select(.attributes.public_ip != null) | "Public Agent : " + .hostname ' -r

    Access DC/OS private and public slaves info using DC/OS cli

    dcos node --json | jq '.[] | select(.type | contains("agent")) | if (.attributes.public_ip != null) then "Public Agent : " else "Private Agent : " end + " - " + .hostname ' -r | sort

    Get public IP of all public agents

    #!/bin/bash
    
    for id in $(dcos node --json | jq --raw-output '.[] | select(.attributes.public_ip == "true") | .id'); 
    do 
          dcos node ssh --option StrictHostKeyChecking=no --option LogLevel=quiet --master-proxy --mesos-id=$id "curl -s ifconfig.co"
    done 2>/dev/null

Note: ‘dcos node ssh’ requires your private key to be available to SSH. Make sure you add your private key as an SSH identity using:

    ssh-add </path/to/private/key/file/.pem>

    Get public IP of master leader

    dcos node ssh --option StrictHostKeyChecking=no --option LogLevel=quiet --master-proxy --leader "curl -s ifconfig.co" 2>/dev/null

    Get all master nodes and their private ip

    dcos node --json | jq '.[] | select(.type | contains("master"))
    | .ip + " = " + .type' -r

    Get list of all users who have access to DC/OS cluster

    curl -s -H "Authorization: Bearer $(dcos config show core.dcos_acs_token)"
    "$(dcos config show core.dcos_url)/acs/api/v1/users" | jq ‘.array[].uid’ -r

    Add users to cluster using Mesosphere script (Run this on master)

    Users to add are given in list.txt, each user on new line

    for i in `cat list.txt`; do echo $i;
    sudo -i dcos-shell /opt/mesosphere/bin/dcos_add_user.py $i; done

    Add users to cluster using DC/OS API

    #!/bin/bash
    
# Usage: dcosAddUsers.sh (users to add are listed in users.list, one user per line)
    for i in `cat users.list`; 
    do 
      echo $i
      curl -X PUT -H "Authorization: Bearer $(dcos config show core.dcos_acs_token)" "$(dcos config show core.dcos_url)/acs/api/v1/users/$i" -d "{}"
    done

    Delete users from DC/OS cluster organization

    #!/bin/bash
    
# Usage: dcosDeleteUsers.sh (users to delete are listed in users.list, one user per line)
    
    for i in `cat users.list`; 
    do 
      echo $i
      curl -X DELETE -H "Authorization: Bearer $(dcos config show core.dcos_acs_token)" "$(dcos config show core.dcos_url)/acs/api/v1/users/$i" -d "{}"
    done

    Offers/resources from individual DC/OS agent

In recent versions of many DC/OS services, a scheduler endpoint at

    http://yourcluster.com/service/<service-name>/v1/debug/offers

will display an HTML table containing a summary of recently-evaluated offers. This table’s contents are currently very similar to what can be found in the logs, but in a slightly more accessible format. Alternatively, we can look at the scheduler’s logs in stdout. An offer is a set of resources, all from one individual DC/OS agent.

    <DC/OS_CLUSTER_URL>/service/<service_name>/v1/debug/offers

    Example:

    http://dcos-cluster.com/service/kafka/v1/debug/offers
    http://dcos-cluster.com/service/elastic/v1/debug/offers

    Save JSON configs of all running Marathon apps

    #!/bin/bash
    
    # Save marathon configs in json format for all marathon apps
    # Usage : saveMarathonConfig.sh
    
    for service in `dcos marathon app list --quiet | tr -d "/" | sort`; do
      dcos marathon app show $service | jq '. | del(.tasks, .version, .versionInfo, .tasksHealthy, .tasksRunning, .tasksStaged, .tasksUnhealthy, .deployments, .executor, .lastTaskFailure, .args, .ports, .residency, .secrets, .storeUrls, .uris, .user)' >& $service.json
    done

    Get report of Marathon apps with details like container type, Docker image, tag or service version used by Marathon app.

    #!/bin/bash
    
    TMP_CSV_FILE=$(mktemp /tmp/dcos-config.XXXXXX.csv)
    TMP_CSV_FILE_SORT="${TMP_CSV_FILE}_sort"
    #dcos marathon app list --json | jq '.[] | if (.container.docker.image != null ) then .id + ",Docker Application," + .container.docker.image else .id + ",DCOS Service," + .labels.DCOS_PACKAGE_VERSION end' -r > $TMP_CSV_FILE
    dcos marathon app list --json | jq '.[] | .id + if (.container.type == "DOCKER") then ",Docker Container," + .container.docker.image else ",Mesos Container," + if(.labels.DCOS_PACKAGE_VERSION !=null) then .labels.DCOS_PACKAGE_NAME+":"+.labels.DCOS_PACKAGE_VERSION  else "[ CMD ]" end end' -r > $TMP_CSV_FILE
    sed -i "s|^/||g" $TMP_CSV_FILE
    sort -t "," -k2,2 -k3,3 -k1,1 $TMP_CSV_FILE > ${TMP_CSV_FILE_SORT}
    cnt=1
    printf '%.0s=' {1..150}
    printf "n  %-5s%-35s%-23s%-40s%-20sn" "No" "Application Name" "Container Type" "Docker Image" "Tag / Version"
    printf '%.0s=' {1..150}
    while IFS=, read -r app typ image; 
    do
            tag=`echo $image | awk -F':' -v im="$image" '{tag=(im=="[ CMD ]")?"NA":($2=="")?"latest":$2; print tag}'`
            image=`echo $image | awk -F':' '{print $1}'`
            printf "n  %-5s%-35s%-23s%-40s%-20s" "$cnt" "$app" "$typ" "$image" "$tag"
            cnt=$((cnt + 1))
            sleep 0.3
    done < $TMP_CSV_FILE_SORT
    printf "n"
    printf '%.0s=' {1..150}
    printf "n"

    Get DC/OS nodes with more information like node type, node ip, attributes, number of running tasks, free memory, free cpu etc.

    #!/bin/bash
    
    printf "n  %-15s %-18s%-18s%-10s%-15s%-10sn" "Node Type" "Node IP" "Attribute" "Tasks" "Mem Free (MB)" "CPU Free"
    printf '%.0s=' {1..90}
    printf "n"
    TAB=`echo -e "t"`
    dcos node --json | jq '.[] | if (.type | contains("leader")) then "Master (leader)" elif ((.type | contains("agent")) and .attributes.public_ip != null) then "Public Agent" elif ((.type | contains("agent")) and .attributes.public_ip == null) then "Private Agent" else empty end + "t"+ if(.type |contains("master")) then .ip else .hostname end + "t" +  (if (.attributes | length !=0) then (.attributes | to_entries[] | join(" = ")) else "NA" end) + "t" + if(.type |contains("agent")) then (.TASK_RUNNING|tostring) + "t" + ((.resources.mem - .used_resources.mem)| tostring) + "tt" +  ((.resources.cpus - .used_resources.cpus)| tostring)  else "ttNAtNAttNA"  end' -r | sort -t"$TAB" -k1,1d -k3,3d -k2,2d
    printf '%.0s=' {1..90}
    printf "n"

    Framework Cleaner

Uninstall a framework and clean up any reserved resources left behind after the framework is deleted/uninstalled. (Applicable to DC/OS 1.9 or older; on 1.10 and later, the uninstall CLI alone is sufficient.)

SERVICE_NAME=<service-name>
    dcos package uninstall $SERVICE_NAME
    dcos node ssh --option StrictHostKeyChecking=no --master-proxy
    --leader "docker run mesosphere/janitor /janitor.py -r
    ${SERVICE_NAME}-role -p ${SERVICE_NAME}-principal -z dcos-service-${SERVICE_NAME}"

    Get DC/OS apps and their placement constraints

    dcos marathon app list --json | jq '.[] |
    if (.constraints != null) then .id, .constraints else empty end'

    Run shell command on all slaves

    #!/bin/bash
    
    # Run any shell command on all slave nodes (private and public)
    
    # Usage : dcosRunOnAllSlaves.sh <CMD= any shell command to run, Ex: ulimit -a >
    CMD=$1
    for i in `dcos node | egrep -v "TYPE|master" | awk '{print $1}'`; do 
       echo -e "n###> Running command [ $CMD ] on $i"
       dcos node ssh --option StrictHostKeyChecking=no --option LogLevel=quiet --master-proxy --private-ip=$i "$CMD"
       echo -e "======================================n"
    done

    Run shell command on master leader

CMD=<shell command, Ex: ulimit -a>
dcos node ssh --option StrictHostKeyChecking=no --option LogLevel=quiet --master-proxy --leader "$CMD"

    Run shell command on all master nodes

    #!/bin/bash
    
    # Run any shell command on all master nodes
    
# Usage : dcosRunOnAllMasters.sh <CMD= any shell command to run, Ex: ulimit -a >
    CMD=$1
    for i in `dcos node | egrep -v "TYPE|agent" | awk '{print $2}'` 
    do 
      echo -e "n###> Running command [ $CMD ] on $i"
      dcos node ssh --option StrictHostKeyChecking=no --option LogLevel=quiet --master-proxy --private-ip=$i "$CMD"
     echo -e "======================================n"
    done

    Add node attributes to dcos nodes and run apps on nodes with required attributes using placement constraints

    #!/bin/bash
    
    #1. SSH on node 
    #2. Create or edit file /var/lib/dcos/mesos-slave-common
    #3. Add contents as :
    #    MESOS_ATTRIBUTES=<key>:<value>
    #    Example:
    #    MESOS_ATTRIBUTES=TYPE:DB;DB_TYPE:MONGO;
    #4. Stop dcos-mesos-slave service
    #    systemctl stop dcos-mesos-slave
    #5. Remove link for latest slave metadata
    #    rm -f /var/lib/mesos/slave/meta/slaves/latest
    #6. Start dcos-mesos-slave service
    #    systemctl start dcos-mesos-slave
    #7. Wait for some time, node will be in HEALTHY state again.
    #8. Add app placement constraint with field = key and value = value
    #9. Verify attributes, run on any node
    #    curl -s http://leader.mesos:5050/state | jq '.slaves[]| .hostname ,.attributes'
    #    OR Check DCOS cluster UI
    #    Nodes => Select any Node => Details Tab
    
    tmpScript=$(mktemp "/tmp/addDcosNodeAttributes-XXXXXXXX")
    
# key:value paired attributes, separated by ;
    ATTRIBUTES=NODE_TYPE:GPU_NODE
    
    cat <<EOF > ${tmpScript}
    echo "MESOS_ATTRIBUTES=${ATTRIBUTES}" | sudo tee /var/lib/dcos/mesos-slave-common
    sudo systemctl stop dcos-mesos-slave
    sudo rm -f /var/lib/mesos/slave/meta/slaves/latest
    sudo systemctl start dcos-mesos-slave
    EOF
    
# Add the private ip of nodes on which you want to add attributes, one ip per line.
    for i in `cat nodes.txt`; do 
        echo $i
        dcos node ssh --master-proxy --option StrictHostKeyChecking=no --private-ip $i <$tmpScript
        sleep 10
    done

    Install DC/OS Datadog metrics plugin on all DC/OS nodes

    #!/bin/bash
    
    # Usage : bash installDCOSDataDogMetricsPlugin.sh <Datadog API KEY>
    
    DDAPI=$1
    
    if [[ -z $DDAPI ]]; then
        echo "[Datadog Plugin] Need datadog API key as parameter."
        echo "[Datadog Plugin] Usage : bash installDCOSDataDogMetricsPlugin.sh <Datadog API KEY>."
    fi
    tmpScriptMaster=$(mktemp "/tmp/installDatadogPlugin-XXXXXXXX")
    tmpScriptAgent=$(mktemp "/tmp/installDatadogPlugin-XXXXXXXX")
    
    declare agent=$tmpScriptAgent
    declare master=$tmpScriptMaster
    
    for role in "agent" "master"
    do
    cat <<EOF > ${!role}
    curl -s -o /opt/mesosphere/bin/dcos-metrics-datadog -L https://downloads.mesosphere.io/dcos-metrics/plugins/datadog
    chmod +x /opt/mesosphere/bin/dcos-metrics-datadog
    echo "[Datadog Plugin] Downloaded dcos datadog metrics plugin."
    export DD_API_KEY=$DDAPI
    export AGENT_ROLE=$role
    sudo curl -s -o /etc/systemd/system/dcos-metrics-datadog.service https://downloads.mesosphere.io/dcos-metrics/plugins/datadog.service
    echo "[Datadog Plugin] Downloaded dcos-metrics-datadog.service."
    sudo sed -i "s/--dcos-role master/--dcos-role \$AGENT_ROLE/g;s/--datadog-key .*/--datadog-key \$DD_API_KEY/g" /etc/systemd/system/dcos-metrics-datadog.service
    echo "[Datadog Plugin] Updated dcos-metrics-datadog.service with DD API Key and agent role."
    sudo systemctl daemon-reload
    sudo systemctl start dcos-metrics-datadog.service
    echo "[Datadog Plugin] dcos-metrics-datadog.service is started !"
    servStatus=\$(sudo systemctl is-failed dcos-metrics-datadog.service)
    echo "[Datadog Plugin] dcos-metrics-datadog.service status : \${servStatus}"
    #sudo systemctl status dcos-metrics-datadog.service | head -3
    #sudo journalctl -u dcos-metrics-datadog
    EOF
    done
    
    echo "[Datadog Plugin] Temp script for master saved at : $tmpScriptMaster"
    echo "[Datadog Plugin] Temp script for agent saved at : $tmpScriptAgent"
    
    for i in `dcos node | egrep -v "TYPE|master" | awk '{print $1}'` 
    do 
        echo -e "\n###> Node - $i"
        dcos node ssh --option LogLevel=quiet --option StrictHostKeyChecking=no --master-proxy --private-ip=$i < $tmpScriptAgent
        echo -e "======================================================="
    done
    
    for i in `dcos node | egrep -v "TYPE|agent" | awk '{print $2}'` 
    do 
        echo -e "\n###> Master Node - $i"
        dcos node ssh --option LogLevel=quiet --option StrictHostKeyChecking=no --master-proxy --private-ip=$i < $tmpScriptMaster
        echo -e "======================================================="
    done
    
    # Check status of dcos-metrics-datadog.service on all nodes.
    #for i in `dcos node | egrep -v "TYPE|master" | awk '{print $1}'` ; do  echo -e "\n###> $i"; dcos node ssh --option StrictHostKeyChecking=no --option LogLevel=quiet --master-proxy --private-ip=$i "sudo systemctl is-failed dcos-metrics-datadog.service"; echo -e "======================================\n"; done

    Get app / node metrics fetched by dcos-metrics component using metrics API

• Get the DC/OS node id [dcos node]
• Get node metrics (CPU, memory, local filesystems, networks, etc.): <dcos_cluster_url>/system/v1/agent/<agent_id>/metrics/v0/node
• Get the ids of all containers running on that agent: <dcos_cluster_url>/system/v1/agent/<agent_id>/metrics/v0/containers
• Get resource allocation and usage for a given container id: <dcos_cluster_url>/system/v1/agent/<agent_id>/metrics/v0/containers/<container_id>
• Get application-level metrics from the container (shipped in StatsD format using the listener available at STATSD_UDP_HOST and STATSD_UDP_PORT): <dcos_cluster_url>/system/v1/agent/<agent_id>/metrics/v0/containers/<container_id>/app

    Get app / node metrics fetched by dcos-metrics component using dcos cli

    • Summary of container metrics for a specific task
    dcos task metrics summary <task-id>

    • All metrics in details for a specific task
    dcos task metrics details <task-id>

    • Summary of Node metrics for a specific node
dcos node metrics summary <mesos-node-id>

    • All Node metrics in details for a specific node
    dcos node metrics details <mesos-node-id>

NOTE – All the above commands have a '--json' flag so they can be used programmatically.

    Launch / run command inside container for a task

    DC/OS task exec cli only supports Mesos containers, this script supports both Mesos and Docker containers.

    #!/bin/bash
    
    echo "DCOS Task Exec 2.0"
    if [ "$#" -eq 0 ]; then
            echo "Need task name or id as input. Exiting."
            exit 1
    fi
    taskName=$1
    taskCmd=${2:-bash}
    TMP_TASKLIST_JSON=/tmp/dcostasklist.json
    dcos task --json > $TMP_TASKLIST_JSON
    taskExist=`cat /tmp/dcostasklist.json | jq --arg tname $taskName '.[] | if(.name == $tname ) then .name else empty end' -r | wc -l`
    if [[ $taskExist -eq 0 ]]; then 
            echo "No task with name $taskName exists."
            echo "Do you mean ?"
            dcos task | grep $taskName | awk '{print $1}'
            exit 1
    fi
    taskType=`cat $TMP_TASKLIST_JSON | jq --arg tname $taskName '[.[] | select(.name == $tname)][0] | .container.type' -r`
    TaskId=`cat $TMP_TASKLIST_JSON | jq --arg tname $taskName '[.[] | select(.name == $tname)][0] | .id' -r`
    if [[ $taskExist -ne 1 ]]; then
            echo -e "More than one instances. Please select task ID for executing command.n"
            #allTaskIds=$(dcos task $taskName | tee /dev/tty | grep -v "NAME" | awk '{print $5}' | paste -s -d",")
            echo ""
            read TaskId
    fi
    if [[ $taskType !=  "DOCKER" ]]; then
            echo "Task [ $taskName ] is of type MESOS Container."
            execCmd="dcos task exec --interactive --tty $TaskId $taskCmd"
            echo "Running [$execCmd]"
            $execCmd
    else
            echo "Task [ $taskName ] is of type DOCKER Container."
            taskNodeIP=`dcos task $TaskId | awk 'FNR == 2 {print $2}'`
            echo "Task [ $taskName ] with task Id [ $TaskId ] is running on node [ $taskNodeIP ]."
taskContID=`dcos node ssh --option LogLevel=quiet --option StrictHostKeyChecking=no --private-ip=$taskNodeIP --master-proxy "docker ps -q --filter \"label=MESOS_TASK_ID=$TaskId\"" 2> /dev/null`
taskContID=`echo $taskContID | tr -d '\r'`
            echo "Task Docker Container ID : [ $taskContID ]"
            echo "Running [ docker exec -it $taskContID $taskCmd ]"
            dcos node ssh --option StrictHostKeyChecking=no --option LogLevel=quiet --private-ip=$taskNodeIP --master-proxy "docker exec -it $taskContID $taskCmd" 2>/dev/null
    fi

    Get DC/OS tasks by node

    #!/bin/bash 
    
    function tasksByNodeAPI
    {
        echo "DC/OS Tasks By Node"
        if [ "$#" -eq 0 ]; then
            echo "Need node ip as input. Exiting."
            exit 1
        fi
        nodeIp=$1
        mesosId=`dcos node | grep $nodeIp | awk '{print $3}'`
        if [ -z "mesosId" ]; then
            echo "No node found with ip $nodeIp. Exiting."
            exit 1
        fi
curl -s -H "Authorization: Bearer $(dcos config show core.dcos_acs_token)" "$(dcos config show core.dcos_url)/mesos/tasks?limit=10000" | jq --arg mesosId $mesosId '.tasks[] | select (.slave_id == $mesosId and .state == "TASK_RUNNING") | .name + "\t\t\t" + .id'  -r
    }
    
    function tasksByNodeCLI
    {
            echo "DC/OS Tasks By Node"
            if [ "$#" -eq 0 ]; then
                    echo "Need node ip as input. Exiting."
                    exit 1
            fi
            nodeIp=$1
            dcos task | egrep "HOST|$nodeIp"
}

# Example invocation: pass the node's private IP as the first argument
tasksByNodeCLI "$1"

    Get cluster metadata – cluster Public IP and cluster ID

    curl -s -H "Authorization: Bearer $(dcos config show core.dcos_acs_token)"           
    $(dcos config show core.dcos_url)/metadata 

    Sample Output:

    {
    "PUBLIC_IPV4": "123.456.789.012",
    "CLUSTER_ID": "abcde-abcde-abcde-abcde-abcde-abcde"
    }

    Get DC/OS metadata – DC/OS version

    curl -s -H "Authorization: Bearer $(dcos config show core.dcos_acs_token)"
$(dcos config show core.dcos_url)/dcos-metadata/dcos-version.json

    Sample Output:

    {
    "version": "1.11.0",
    "dcos-image-commit": "b6d6ad4722600877fde2860122f870031d109da3",
    "bootstrap-id": "a0654657903fb68dff60f6e522a7f241c1bfbf0f"
    }

    Get Mesos version

    curl -s -H "Authorization: Bearer $(dcos config show core.dcos_acs_token)"
    $(dcos config show core.dcos_url)/mesos/version

    Sample Output:

    {
    "build_date": "2018-02-27 21:31:27",
    "build_time": 1519767087.0,
    "build_user": "",
    "git_sha": "0ba40f86759307cefab1c8702724debe87007bb0",
    "version": "1.5.0"
    }

    Access DC/OS cluster exhibitor UI (Exhibitor supervises ZooKeeper and provides a management web interface)

    <CLUSTER_URL>/exhibitor

    Access DC/OS cluster data from cluster zookeeper using Zookeeper Python client – Run inside any node / container

    from kazoo.client import KazooClient
    
    zk = KazooClient(hosts='leader.mesos:2181', read_only=True)
    zk.start()
    
    clusterId = ""
    # Here we can give znode path to retrieve its decoded data,
    # for ex to get cluster-id, use
    # data, stat = zk.get("/cluster-id")
    # clusterId = data.decode("utf-8")
    
    # Get cluster Id
    if zk.exists("/cluster-id"):
        data, stat = zk.get("/cluster-id")
        clusterId = data.decode("utf-8")
    
    zk.stop()
    
    print (clusterId)

    Access dcos cluster data from cluster zookeeper using exhibitor rest API

    # Get znode data using endpoint :
    # /exhibitor/exhibitor/v1/explorer/node-data?key=/path/to/node
    # Example : Get znode data for path = /cluster-id
    curl -s -H "Authorization: Bearer $(dcos config show core.dcos_acs_token)"
    $(dcos config show core.dcos_url)/exhibitor/exhibitor/v1/explorer/node-data?key=/cluster-id

    Sample Output:

    {
    "bytes": "3333-XXXXXX",
    "str": "abcde-abcde-abcde-abcde-abcde-",
    "stat": "XXXXXX"
    }

    Get cluster name using Mesos API

    curl -s -H "Authorization: Bearer $(dcos config show core.dcos_acs_token)"
    $(dcos config show core.dcos_url)/mesos/state-summary | jq .cluster -r

    Mark Mesos node as decommissioned

Sometimes the instances running as DC/OS nodes get terminated and cannot come back online; AWS EC2 instances, for example, cannot be started again once terminated. When Mesos detects that a node has stopped, it puts the node in the UNREACHABLE state because it does not know whether the node is temporarily down and will come back online or is gone for good. In such cases, we can explicitly tell Mesos to put a node in the GONE state if we know it will not come back.

    dcos node decommission <mesos-agent-id>

    Conclusion

We learned about Mesosphere DC/OS, its functionality, and its roles. We also learned how to set up and use the DC/OS CLI, how to use HTTP authentication to access the DC/OS APIs, and how to use the CLI for automating tasks.

We went through different API endpoints such as Mesos, Marathon, DC/OS metrics, Exhibitor, and DC/OS cluster organization. Finally, we looked at different tricks and scripts to automate DC/OS, like fetching node details, task exec, the Docker report, and DC/OS API HTTP authentication.

  • Enable Real-time Functionality in Your App with GraphQL and Pusher

    The most recognized solution for real-time problems is WebSockets (WS), where there is a persistent connection between the client and the server, and either can start sending data at any time. One of the latest implementations of WS is GraphQL subscriptions.

With GraphQL subscriptions, you can easily add real-time functionality to your application. There is an easy, standard way to implement a subscription in a GraphQL app: the client makes a subscription query to the server, which specifies the event and the data shape. With this query, the client establishes a long-lived connection with the server on which it listens for specific events. Just as GraphQL solves the over-fetching problem of REST APIs, subscriptions extend that solution to real-time data.

    In this post, we will learn how to bring real-time functionality to your app by implementing GraphQL subscriptions with Pusher to manage Pub/Sub capabilities. The goal is to configure a Pusher channel and implement two subscriptions to be exposed by your GraphQL server. We will be implementing this in a Node.js runtime environment.

    Why Pusher?

    Why are we doing this using Pusher? 

    • Pusher, being a hosted real-time services provider, relieves us from managing our own real-time infrastructure, which is a highly complex problem.
    • Pusher provides an easy and consistent API.
    • Pusher also provides an entire set of tools to monitor and debug your realtime events.
    • Events can be triggered by and consumed easily from different applications written in different frameworks.

    Project Setup

    We will start with a repository that contains a codebase for a simple GraphQL backend in Node.js, which is a minimal representation of a blog post application. The entities included are:

1. Link – Represents a URL and a short description of the Link
2. User – A Link belongs to a User
3. Vote – Represents a user's vote for a Link

    In this application, a User can sign up and add or vote a Link in the application, and other users can upvote the Link. The database schema is built using Prisma and SQLite for quick bootstrapping. In the backend, we will use graphql-yoga as the GraphQL server implementation. To test our GraphQL backend, we will use the graphql-playground by Prisma, as a client, which will perform all queries and mutations on the server.

    To set up the application:

    1. Clone the repository here
    2. Install all dependencies using 
    npm install

3. Set up a database using prisma-cli with the following commands
    npx prisma migrate save --experimental
#! Select 'yes' for the prompt to add an SQLite db after this command and enter a name for the migration.
    npx prisma migrate up --experimental
    npx prisma generate

    Note: Migrations are experimental features of the Prisma ORM, but you can ignore them because you can have a different backend setup for DB interactions. The purpose of using Prisma here is to quickly set up the project and dive into subscriptions.

    A new directory, named Prisma, will be created containing the schema and database in SQLite. Now, you have your database and app set up and ready to use.

    To start the Node.js application, execute the command:

    npm start

    Navigate to http://localhost:4000 to see the graphql-playground where we will execute our queries and mutations.

    Our next task is to add a GraphQL subscription to our server to allow clients to listen to the following events:

    • A new Link is created
    • A Link is upvoted

    To add subscriptions, we will need an npm package called graphql-pusher-subscriptions to help us interact with the Pusher service from within the GraphQL resolvers. The module will trigger events and listen to events for a channel from the Pusher service.

    Before that, let’s first create a channel in Pusher. To configure a Pusher channel, head to their website at Pusher to create an account. Then, go to your dashboard and create a channels application. Choose a name, the cluster closest to your location, and frontend tech as React and backend tech as Node.js.

You will receive a getting-started code snippet along with your app credentials (app ID, key, secret, and cluster).
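The exact starter snippet shown on the Pusher dashboard isn't reproduced here, but for a Node.js backend it is typically along the lines of the sketch below (all values are placeholders for the credentials of the app you just created):

// Minimal Pusher server-side starter sketch (placeholder credentials)
const Pusher = require('pusher');

const pusher = new Pusher({
  appId: 'YOUR_APP_ID',
  key: 'YOUR_APP_KEY',
  secret: 'YOUR_APP_SECRET',
  cluster: 'YOUR_CLUSTER',
  encrypted: true
});

// Trigger a test event on a channel to verify the setup
pusher.trigger('my-channel', 'my-event', { message: 'hello world' });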

Now, we add the graphql-pusher-subscriptions package. This package will take the Pusher channel configuration and give you an API to trigger and listen to events published on the channel.

    Now, we import the package in the src/index.js file.

    const { PusherChannel } = require('graphql-pusher-subscriptions');

The PusherChannel class provided by the module accepts a configuration for the channel, so we instantiate the class with the Pusher config from the app we just created and keep a reference to the resulting Pub/Sub object.

    const pubsub = new PusherChannel({
      appId: '1046878',
      key: '3c84229419ed7b47e5b0',
      secret: 'e86868a98a2f052981a6',
      cluster: 'ap2',
      encrypted: true,
      channel: 'graphql-subscription'
    });

    Now, we add “pubsub” to the context so that it is available to all the resolvers. The channel field tells the client which channel to subscribe to. Here we have the channel “graphql-subscription”.

    const server = new GraphQLServer({
      typeDefs: './src/schema.graphql',
      resolvers,
      context: request => {
        return {
          ...request,
          prisma,
          pubsub
        }
      },
    })

    The above part enables us to access the methods we need to implement our subscriptions from inside our resolvers via context.pubsub.

    Subscribing to Link-created Event

    The first step to add a subscription is to extend the GraphQL schema definition.

    type Subscription {
      newLink: Link
    }

    Next, we implement the resolver for the “newLink” subscription type field. It is important to note that resolvers for subscriptions are different from queries and mutations in minor ways.

    1. They return an AsyncIterator instead of data, which is then used by a GraphQL server to publish the event payload to the subscriber client.

2. The subscription resolver is provided as the value of a subscribe field inside an object. The object should also contain another field named "resolve" that returns the payload data from the data emitted by the AsyncIterator.

To add the resolvers for the subscription, we start by adding a new file called Subscription.js.

Inside the project directory, add the file as src/resolvers/Subscription.js

    Now, in the new file created, add the following code, which will be the subscription resolver for the “newLink” type we created in GraphQL schema.

    function newLinkSubscribe(parent, args, context, info) {
      return context.pubsub.asyncIterator("NEW_LINK")
    }
    
    const newLink = {
      subscribe: newLinkSubscribe,
      resolve: payload => {
        return payload
      },
    }
    
    module.exports = {
      newLink,
    }

In the code above, the subscription resolver function, newLinkSubscribe, is added as the value of the subscribe property, just as described before. The context provides a reference to the Pub/Sub object, which lets us call asyncIterator() with "NEW_LINK" as a parameter. The returned iterator listens for events published under that name and resolves the subscription with their payloads.

    Adding Subscriptions to Your Resolvers

The final step of our subscription implementation is to publish the event from inside a resolver. We add the following call to pubsub.publish() inside the post resolver function in the Mutation.js file.

async function post(parent, args, context, info) {
  const userId = getUserId(context)
  const newLink = await context.prisma.link.create({
    data: {
      url: args.url,
      description: args.description,
      postedBy: { connect: { id: userId } },
    }
  })
  context.pubsub.publish("NEW_LINK", newLink)
  return newLink
}

    In the code above, we can see that we pass the same string “NEW_LINK” to the publish method as we did in the newLinkSubscribe function in the subscription function before. The “NEW_LINK” is the event name, and it will publish events to the Pusher service, and the same name will be used on the subscription resolver to bind to the particular event name. We also add the newLink as a second argument, which contains the data part for the event that will be published. The context.pubsub.publish function will be triggered before returning the newLink data.

    Now, we will update the main resolver object, which is given to the GraphQL server.

    First, import the subscription module inside of the index.js file.

    const Subscription = require('./resolvers/Subscription') 
    const resolvers = {
      Query,
      Mutation,
      Subscription,
      User,
      Link,
    }

    Now, with all code in place, we start testing our real time API. We will use multiple instances/tabs of GraphQL playground concurrently.

    Testing Subscriptions

    If your server is already running, then kill it with CTRL+C and restart with this command:

    npm start

    Next, open the browser and navigate to http://localhost:4000 to see the GraphQL playground. We will use one tab of the playground to perform the mutation to trigger the event to Pusher and invoke the subscriber.

    We will now start to execute the queries to add some entities in the application.

    First, let’s create a user in the application by using the signup mutation. We send the following mutation to the server to create a new User entity.

    mutation {
        signup(
        name: "Alice"
        email: "alice@prisma.io"
        password: "graphql"
      ) {
        token
        user {
      id
        }
      }
    }

    You will see a response in the playground that contains the authentication token for the user. Copy the token, and open another tab in the playground. Inside that new tab, open the HTTP_HEADERS section in the bottom and add the Authorization header.

    Replace the __TOKEN__  placeholder from the below snippet with the copied token from above.

    {
      "Authorization": "Bearer __TOKEN__"
    }

Now, all the queries and mutations executed from that tab will carry the authentication token. With this in place, we send the following mutation to our GraphQL server.

    mutation {
    post(
        url: "http://velotio.com"
        description: "An awesome GraphQL blog"
      ) {
        id
      }
    }

The mutation above creates a Link entity inside the application. Now that we have created an entity, we move on to testing the subscription part. In another tab, we will send the subscription query and create a persistent WebSocket connection to the server. Before firing off a subscription query, let us first understand its syntax. It starts with the keyword subscription followed by the subscription name. The subscription is defined in the GraphQL schema and determines the data shape we can resolve to. Here, we subscribe to the newLink subscription, and the data it resolves to is that of a Link entity. That means we can ask for any specific part of the Link entity; here, we request attributes like id, url, description, and nested attributes of the postedBy field.

    subscription {
      newLink {
          id
          url
          description
          postedBy {
            id
            name
            email
          }
      }
    }

    The response of this operation is different from that of a mutation or query. You see a loading spinner, which indicates that it is waiting for an event to happen. This means the GraphQL client (playground) has established a connection with the server and is listening for response data.

    Before triggering a subscription, we will also keep an eye on the Pusher channel for events triggered to verify that our Pusher service is integrated successfully.

To do this, we go to the Pusher dashboard, navigate to the channels app we created, and click on the debug console. The debug console shows us the events triggered in real time.

    Now that the Pusher dashboard is visible, we will trigger the subscription event by running the following mutation inside a new Playground tab.

    mutation {
      post(
        url: "www.velotio.com"
        description: "Graphql remote schema stitching"
      ) {
        id
      }
    }

Now, we observe the Playground tab where the subscription was running.

We can see that the newly created Link is visible in the response section, the subscription continues to listen, and the event has reached the Pusher service.

    You will observe an event on the Pusher console that is the same event and data as sent by your post mutation.

     

    We have achieved our first goal, i.e., we have integrated the Pusher channel and implemented a subscription for a Link creation event.

    To achieve our second goal, i.e., to listen to Vote events, we repeat the same steps as we did for the Link subscription.

We add a subscription resolver for newVote in the Subscription.js file and update the Subscription type in the GraphQL schema with a corresponding newVote field. To trigger a different event, we use "NEW_VOTE" as the event name and add the publish call inside the resolver for the vote mutation.

    function newVoteSubscribe(parent, args, context, info) {
      return context.pubsub.asyncIterator("NEW_VOTE")
    }
    
    const newVote = {
      subscribe: newVoteSubscribe,
      resolve: payload => {
        return payload
      },
    }

    Update the export statement to add the newVote resolver.

    module.exports = {
      newLink,
      newVote,
    }

    Update the Vote mutation to add the publish call before returning the newVote data. Notice that the first parameter, “NEW_VOTE”, is being passed so that the listener can bind to the new event with that name.

async function vote(parent, args, context, info) {
  const userId = getUserId(context)
  const newVote = await context.prisma.vote.create({
    data: {
      user: { connect: { id: userId } },
      link: { connect: { id: Number(args.linkId) } },
    }
  })
  context.pubsub.publish("NEW_VOTE", newVote)
  return newVote
}

    Now, restart the server and complete the signup process with setting HTTP_HEADERS as we did before. Add the following subscription to a new Playground tab.

    subscription {
      newVote {
        id
        link {
          url
          description
        }
        user {
          name
          email
        }
      }
    }

    In another Playground tab, send the following Vote mutation to the server to trigger the event, but do not forget to verify the Authorization header. The below mutation will add the Vote of the user to the Link.  Replace the “__LINK_ID__” with the linkId generated in the previous post mutation.

    mutation {
      vote(linkId: "__LINK_ID__") {
        link {
          url
          description
        }
        user {
          name
          email
        }
      }
    }

Observe the event data in the response tab of the vote subscription. You can also check the triggered event on the Pusher dashboard.

    The final codebase is available on a branch named with-subscription.

    Conclusion

By following the steps above, we saw how easy it is to add real-time features to GraphQL apps with subscriptions. Establishing a connection with the server is no hassle, and it is very similar to how we implement queries and mutations. Unlike the mainstream approach, where one has to build and manage the event handlers, GraphQL subscriptions come with these features built in for the client and the server. We also saw how a managed real-time service like Pusher can be used for Pub/Sub events. GraphQL and Pusher can prove to be a solid combination for a reliable real-time system.

    Related Articles

    1. Build and Deploy a Real-Time React App Using AWS Amplify and GraphQL

    2. Scalable Real-time Communication With Pusher

  • A Step Towards Simplified Querying in NodeJS

Recently, I came across a question on StackOverflow about querying data in a relationship table using Sequelize, and it took me back to when I faced the same situation, so I decided to write a blog about a better alternative: Objection.js. When we choose ORMs without looking at the use case we are tackling, we usually end up with a mess.

The question on StackOverflow was about converting the below query into a Sequelize query.

    SELECT a.* 
    FROM employees a, emp_dept_details b 
    WHERE b.Dept_Id=2 AND a.Emp_No = b.Emp_Id

(Pardon the naming in the query; it was asked by a novice programmer and I wanted to keep it as is.)

Seems pretty straightforward, right? The Sequelize solution looks like this:

    Employee.findAll({ 
      include: [{ 
        model: EmployeeDeptDetails, 
        where: { 
          Emp_Id: Sequelize.col('employees.Emp_No'), 
          Dept_Id: 2 
        } 
      }] 
    });

If you look at this, it's a rather complex solution for a simple query, and the complexity grows with added relationships. Also, for simple queries like this, the Sequelize documentation is not sufficient. Now, if you ask me how it can be done in a better way with Objection.js, below is the same query in Objection.

    Employee.query()
  .joinRelation('employeeDeptDetails')
  .where({ 'employeeDeptDetails.Dept_Id': 2 })

    Note: It’s assumed that relationship is defined (in model classes) in both examples.

Now you can see the difference. This is just one example I came across; there are others on the internet for better understanding. So, are you ready to dive into Objection.js?

But before we dive in, I want to mention that whenever we look online for a Node.js ORM, we always find some people saying "don't use an ORM, just write plain SQL", and they have a point. If your app is small enough that you can write a bunch of query helper functions and carry out all the needed functionality, then don't go with the ORM approach; just use plain SQL.

But when your app has an ample number of tables with relationships between them that need to be defined, and multi-join queries need to be written, that is where the power of an ORM comes in.

So when we search for the ORMs (for relational DBs) available in the NodeJS arena, we usually get the list below:

    1. Sequelize

    2. Objection.js

    3. typeORM

    There are others, I have just mentioned more popular ones.

Well, I have personally used both Sequelize and Objection.js, as they are the most popular ORMs available today. So if you are deciding which ORM to use for your next project, or got frustrated with the relationship-query complexity of `Sequelize`, then you have landed in the right place.

I am going to be honest here: the fact that I am currently using Objection.js doesn't make it the de facto or best ORM for NodeJS. If you don't love writing SQL-resembling queries and prefer a fully abstracted query syntax, then I think `Sequelize` is the right option for you (though you might struggle with relationship queries as I did and land on Objection.js later on). But if you want your queries to resemble SQL, then read on.

    What Makes Objection So Special?

1. Objection under the hood uses Knex.js, a powerful SQL query builder

2. Lets you create models for tables with ES6 / ES7 classes and define the relationships between them

    3. Make queries with async / await

    4. Add validation to your models using JSON schema

    5. Perform graph inserts and upserts

    to name a few.

    The Learning Curve

I have relied exclusively on the documentation. The Knex.js and Objection.js documentation is great, and there are simple examples on the Objection GitHub (one of which I am going to use below for explanation). So whether you have previously worked with a NodeJS ORM or you are a newbie, this will help you get started without any struggle.

So let's get started with some of the important topics while I explain the advantages over other ORMs and their usage along the way.

For setup (package installation, configuration, etc.) and the full code, you can check out GitHub.

    Creating and Managing DB Schema

Migrations are a good pattern for managing changes to your database schema. Objection.js uses Knex.js migrations for this purpose.

So what is a migration? Migrations are changes to a database's schema specified within your ORM, so we will be defining the tables and columns of our database directly in JavaScript rather than using SQL.

    One of the best features of Knex is its robust migration support. To create a new migration simply use the knex cli:

knex migrate:make migration_name

    After running this command you’ll notice that a new file is created within your migrations directory. This file will include a current timestamp as well as the name that you gave to your migration. The file will look like this:

    exports.up = function(knex, Promise) {
    
    };
    
    exports.down = function(knex, Promise) {
    
    };

As you can notice, the first is `exports.up`, which specifies the commands that should be run to make the database change you'd like to make, e.g., creating database tables, adding or removing a column from a table, changing indexes, etc.

The second function within your migration file is `exports.down`. This function's goal is to do the opposite of what `exports.up` did. If `exports.up` created a table, then `exports.down` will drop that table. The reason to include `exports.down` is so that you can quickly undo a migration should you need to.

    For example:

    exports.up = knex => {
      return knex.schema
        .createTable('persons', table => {
          table.increments('id').primary();
          table
    	.integer('parentId')
    	.unsigned()
    	.references('id')
    	.inTable('persons')
    	.onDelete('SET NULL')
    	.index();
          table.string('firstName');
          table.string('lastName');
          table.integer('age');
          table.json('address');
        })
    };
    
    exports.down = knex => {
      return knex.schema
        .dropTableIfExists('persons');
    }; 

    It’s that simple to create the migration. Now you can run your migration like below.

    $ knex migrate:latest

You can also pass the `--env` flag or set `NODE_ENV` to select an alternative environment:

    $ knex migrate:latest --env production

    To rollback the last batch of migrations:

    $ knex migrate:rollback

    Models

Models are wrappers around database tables; they help encapsulate the business logic for those tables.

Objection.js allows you to create models using ES classes.

Before diving into the example, you need to adjust your mental model a little: an Objection.js Model does not create any table in the DB. The only things Models are used for are adding validations and relationship mappings.

    For example:

    const { Model } = require('objection');
    const Animal = require('./Animal');
    
    class Person extends Model {
      // Table name is the only required property.
      static get tableName() {
        return 'persons';
      }
    
      // Optional JSON schema. This is not the database schema. Nothing is generated
      // based on this. This is only used for validation. Whenever a model instance
      // is created it is checked against this schema. http://json-schema.org/.
      static get jsonSchema() {
        return {
          type: 'object',
          required: ['firstName', 'lastName'],
    
          properties: {
    	id: { type: 'integer' },
    	parentId: { type: ['integer', 'null'] },
    	firstName: { type: 'string', minLength: 1, maxLength: 255 },
    	lastName: { type: 'string', minLength: 1, maxLength: 255 },
    	age: { type: 'number' },
    	address: {
    	  type: 'object',
    	  properties: {
    	    street: { type: 'string' },
    	    city: { type: 'string' },
    	    zipCode: { type: 'string' }
    	  }
    	}
          }
        };
      }
    
      // This object defines the relations to other models.
      static get relationMappings() {
        return {
          pets: {
    	relation: Model.HasManyRelation,
    	// The related model. This can be either a Model subclass constructor or an
    	// absolute file path to a module that exports one.
    	modelClass: Animal,
    	join: {
    	  from: 'persons.id',
    	  to: 'animals.ownerId'
    	}
          }
        };	
      }
    }
    
    module.exports = Person;

• Now let's break it down: the static getter `tableName` returns the table name.
• The second static getter method defines the validations for each field, and this is optional. We can specify the required properties, the type of each field (i.e. number, string, object, etc.), and other validations, as you can see in the example.
• The third static getter is `relationMappings`, which defines this model's relationships to other models. In this case, the key of the outer object, `pets`, is how we will refer to the child class. The join property, in addition to the relation type, defines how the models are related to one another. The from and to properties of the join object define the database columns through which the models are associated. The modelClass passed to the relation mappings is the class of the related model.

So here `Person` has a `HasManyRelation` with the `Animal` model class, and the join is performed on the persons table's `id` column and the animals table's `ownerId` column. So one person can have multiple pets.
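For reference, the Animal model imported at the top of this example is not shown in the original; a minimal sketch of it, assuming an animals table with an ownerId column as used in the join above, could look like this:

const { Model } = require('objection');

class Animal extends Model {
  // The 'animals' table referenced by Person's relationMappings
  static get tableName() {
    return 'animals';
  }
}

module.exports = Animal;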

    Queries

    Let’s start with simple SELECT queries:

    SELECT * FROM persons;

    Can be done like:

    const persons = await Person.query();

    Little advanced or should I say typical select query:

    SELECT * FROM persons where firstName = 'Ben' ORDER BY age;

    Can be done like:

    const persons = await Person.query()
      .where({ firstName: 'Ben' })
      .orderBy('age');

You can see how closely Objection queries resemble the actual SQL, so it's always easy to translate a SQL query into an Objection.js one, which is quite difficult with other ORMs.

    INSERT Queries:

    INSERT INTO persons (firstName) VALUES ('Ben');

    Can be done like:

    await Person.query().insert({ firstName: 'Ben' });

    UPDATE Queries:

    UPDATE persons set firstName = 'Brayn' where id = 1;

    Can be done like:

    await Person.query().patch({ firstName: 'Brayn' }).where({ id: 1 });

    DELETE Queries:

    DELETE from persons where id = 1;

    Can be done like:

    await Person.query().delete().where({ id: 1 });

    Relationship Queries:

Suppose we want to fetch all the pets of the person whose first name is Ben. Assuming we first load that Person instance, the related query looks like this:

const person = await Person.query()
  .where({ firstName: 'Ben' })
  .first();

const pets = await person.$relatedQuery('pets');
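As a side note (this is not part of the original example), Objection can also fetch a person together with their pets in a single call using eager loading; recent Objection.js versions use withGraphFetched for this (older releases use eager):

// Sketch: load Ben along with his pets in one query chain
const benWithPets = await Person.query()
  .where({ firstName: 'Ben' })
  .withGraphFetched('pets')
  .first();

console.log(benWithPets.pets); // array of related Animal instances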

Now suppose you want to insert a person along with their pets. In this case, we can use graph inserts.

    const personWithPets = {
      firstName: 'Matt',
      lastName: 'Damon',
      age: 43,
    
        pets: [
        {
          name: 'Doggo',
          species: 'dog'
        },
        {
          name: 'Kat',
          species: 'cat'
        }
      ]
    };
    
// wrap the `insertGraph` call in a transaction since it creates multiple queries.
    const insertedGraph = await transaction(Person.knex(), trx => {
      return (
        Person.query(trx).insertGraph(personWithPets)
      );
    });

Here we can see the power of Objection queries; if you compare these with the equivalent queries in other ORMs, you will see the difference for yourself.

    Plugin Availability

objection-password: This plugin adds automatic password hashing to your Objection.js models. This makes it super easy to secure passwords and other sensitive data.
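As a rough sketch of how such a mixin-style plugin is applied (this follows the plugin's documented usage from memory; treat the exact API as an assumption and check its README):

// Hypothetical usage sketch for objection-password
const { Model } = require('objection');
const Password = require('objection-password')();

class User extends Password(Model) {
  static get tableName() {
    return 'users';
  }
}

module.exports = User;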

    objection-graphql: Automatic GraphQL API generator for objection.js models.

    Verdict

I am having a fun time working with Objection and Knex! If you ask me to choose between Sequelize and Objection.js, I would definitely go with Objection.js to avoid all the relationship-query pain. It's worth noting that Objection.js is unlike other ORMs: it's just a wrapper over the Knex.js query builder, so it's like using a query builder with additional features.

  • Your Quintessential Guide to AWS Athena

    Introduction

    Serverless has become a new trend today and is here to stay for sure! Now when you think of wireless internet, you know that it still has some wires but you don’t need to worry about them as you don’t have to maintain them. Similarly, serverless has servers but you don’t have to keep worrying about handling or maintaining them. All you need to do is focus on your code and you’re good to go.

    It has some more benefits, such as:

    • Zero administration: You can deploy code without provisioning anything beforehand, or managing anything later. There is no concept of a fleet, an instance, or even an operating system.
    • Auto-scaling: It lets your service providers manage the scaling challenges. You don’t need to fire alerts or write scripts to scale up and down. It handles quick bursts of traffic and weekend lulls the same way.
    • Pay-per-use: The function-as-a-service compute and managed services are charged based on usage rather than pre-provisioned capacity. You can have complete resource utilization without paying a cent for idle time. The results? 90% cost-savings over a cloud VM, and the satisfaction of knowing that you never pay for resources you don’t use.

    What is AWS Athena?

    AWS Athena is a similar serverless service. It is more of an interactive query service than a code deployment service.

Using Athena, one can directly query the data stored in S3 buckets using standard ANSI SQL.

    As mentioned earlier, it works on the principle of serverless, that is, there is no infrastructure to manage, and you pay only for the queries that you run.

    Athena is easy to use. You can simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Athena, there’s no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets.

    It is based on Facebook’s PrestoDB and can be used to query structured and semi-structured data.

    Some Exciting Features of Athena are:

    • Serverless. No ETL – Not having to set up and manage any servers or data warehouses.
    • Only pay for the data that is scanned.
    • You can ensure better performance by compressing, partitioning, and converting your data into columnar formats.
    • Can also handle complex analysis, including large joins, window functions, and arrays.
    • Athena automatically executes queries in parallel.
• You only need to provide a path to the S3 folder; when new files are added, they are automatically reflected in the table.
• Supports:
  • CSV, JSON, Parquet, ORC, and Avro data formats
  • Complex joins and data types
  • View creation
• Does not support:
  • User-defined functions and stored procedures
  • Hive or Presto transactions
  • LZO compression (Snappy is supported)

    Pricing of Athena

    • AWS Athena is priced $5 for each TB of data scanned.
    • Queries are rounded up to the nearest MB, with a 10 MB minimum.
    • Users pay for stored data at regular S3 rates.
    • Amazon advises users to use compressed data files, have data in columnar formats, and routinely delete old results sets to keep charges low. Partitioning data in tables can speed up queries and reduce query bills.
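For example, under these rules a query that scans 200 MB of data costs roughly 200 / 1,048,576 TB × $5 ≈ $0.00095, while a query that scans only 2 MB is still billed for the 10 MB minimum.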

    Athena vs. Redshift Spectrum

• AWS also has Redshift as a data warehouse service, and we can use Redshift Spectrum to query S3 data, so why should you use Athena?

    Advantages of Redshift Spectrum:

    • Allows creation of Redshift tables. You’re able to join Redshift tables with Redshift spectrum tables efficiently.

If you do not need those things, then you should consider Athena as well. Athena differs from Redshift Spectrum in a few ways:

• Billing: this is a major difference, and depending on your use case you may find one much cheaper than the other.
• Performance: Athena is slightly faster.
• SQL syntax and features: Athena is derived from Presto and is a bit different from Redshift, which has its roots in Postgres.
• Connectivity: it's easy enough to connect to Athena using the API, JDBC, or ODBC, but many more products offer "standard out of the box" connections to Redshift.
• Athena has GIS functions and lambdas.

So in a nutshell, if you already run Redshift, you would probably go for Redshift Spectrum; if not, you can opt for Athena for querying the data. In some cases, you can use both in tandem.

    Example

Here are sample queries to create a database with three tables (basic_details, contact_details, and bill_details) over CSV files uploaded to S3:

Basic_details (the DDL for this table is not shown in the original; the reconstruction below is an assumption that follows the same pattern as the other two tables and matches the columns the sample query expects, namely id, first_name, and gender):

CREATE EXTERNAL TABLE `basic_details`(
  `id` int COMMENT '', 
  `first_name` string COMMENT '', 
  `gender` string COMMENT '')
ROW FORMAT DELIMITED 
  FIELDS TERMINATED BY ',' 
STORED AS INPUTFORMAT 
  'org.apache.hadoop.mapred.TextInputFormat' 
OUTPUTFORMAT 
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  's3://athena-blog/basic-details'
TBLPROPERTIES (
  'has_encrypted_data'='false', 
  'skip.header.line.count'='1')
    Bill_details:

CREATE EXTERNAL TABLE `bill_details`(
      `id` int COMMENT '', 
      `amount_paid` string COMMENT '', 
      `amount_due` string COMMENT '')
    ROW FORMAT DELIMITED 
      FIELDS TERMINATED BY ',' 
    STORED AS INPUTFORMAT 
      'org.apache.hadoop.mapred.TextInputFormat' 
    OUTPUTFORMAT 
      'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
    LOCATION
      's3://athena-blog/bill-details'
    TBLPROPERTIES (
      'has_encrypted_data'='false', 
      'skip.header.line.count'='1')

    Contact_details:

    CREATE EXTERNAL TABLE `contact_details`(
      `id` int COMMENT '', 
      `street` string COMMENT '', 
      `city` string COMMENT '', 
      `state` string COMMENT '', 
      `country` string COMMENT '', 
      `zip` string COMMENT '')
    ROW FORMAT DELIMITED 
      FIELDS TERMINATED BY ',' 
    STORED AS INPUTFORMAT 
      'org.apache.hadoop.mapred.TextInputFormat' 
    OUTPUTFORMAT 
      'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
    LOCATION
      's3://athena-blog/contact-details'
    TBLPROPERTIES (
      'has_encrypted_data'='false', 
      'skip.header.line.count'='1')

Sample Query – First names of male customers from Minnesota with amount_due > $100

    WITH basic AS 
        (SELECT id,
             first_name
        FROM basic_details
        WHERE lower(gender) = 'male' ), bill AS 
        (SELECT id
    FROM bill_details
        WHERE CAST(amount_due AS INTEGER) > 100 ), contact AS 
        (SELECT contact_details.id
        FROM contact_details
        JOIN bill
            ON contact_details.id = bill.id
        WHERE state= 'Minnesota' )
    SELECT basic.first_name
    FROM basic
    JOIN contact
        ON basic.id = contact.id 

    Output:

    Some Other Sample Queries:

    1. Searching for Values in JSON

    WITH dataset AS (
      SELECT * FROM (VALUES
        (JSON '{"name": "Bob Smith", "org": "legal", "projects": ["project1"]}'),
        (JSON '{"name": "Susan Smith", "org": "engineering", "projects": ["project1", "project2", "project3"]}'),
        (JSON '{"name": "Jane Smith", "org": "finance", "projects": ["project1", "project2"]}')
      ) AS t (users)
    )
    SELECT json_extract_scalar(users, '$.name') AS user
    FROM dataset
    WHERE json_array_contains(json_extract(users, '$.projects'), 'project2')

    Output:

    2. Extracting properties

    WITH dataset AS (
      SELECT '{"name": "Susan Smith",
               "org": "engineering",
               "projects": [{"name":"project1", "completed":false},
               {"name":"project2", "completed":true}]}'
        AS blob
    )
    SELECT
      json_extract(blob, '$.name') AS name,
      json_extract(blob, '$.projects') AS projects
    FROM dataset

    Output:

    3. Converting JSON to Athena Data Types

    WITH dataset AS (
      SELECT
        CAST(JSON '"HELLO ATHENA"' AS VARCHAR) AS hello_msg,
        CAST(JSON '12345' AS INTEGER) AS some_int,
        CAST(JSON '{"a":1,"b":2}' AS MAP(VARCHAR, INTEGER)) AS some_map
    )
    SELECT * FROM dataset

    Output:

    Conclusion

Hence, we can say that AWS Athena gives us an efficient way to query our raw data in different formats in S3 object storage, without spinning up dedicated infrastructure and at minimal cost.

    Need help with setting up AWS Athena for your organization? Connect with the experts at Velotio!

  • API Testing Using Postman and Newman

In the last few years, we have seen an exponential increase in the development and use of APIs. We are in the era of API-first companies like Stripe, Twilio, and Mailgun, where the entire product or service is exposed via REST APIs. Web applications today are also powered by REST-based web services. APIs encapsulate critical business logic with high SLAs; hence it is important to test APIs as part of the continuous integration process to reduce errors, improve predictability, and catch nasty bugs.

In the context of API development, Postman is a great REST client for testing APIs. But Postman is not just a REST client; it also contains a full-featured testing sandbox that lets you write and execute JavaScript-based tests for your API.

Postman comes with a nifty CLI tool – Newman. Newman is Postman's Collection Runner engine: it sends API requests, receives the responses, and then runs your tests against them. Newman lets developers easily integrate Postman into continuous integration systems like Jenkins. Some of the important features of Postman & Newman include:

    1. Ability to test any API and see the response instantly.
    2. Ability to create test suites or collections using a collection of API endpoints.
    3. Ability to collaborate with team members on these collections.
    4. Ability to easily export/import collections as JSON files.

We are going to look at all these features; some are intuitive and some not so much unless you've been using Postman for a while.

    Setting up Your Postman

You can install Postman either as a Chrome extension or as a native application.

You can then look it up in your installed apps and open it. You can choose to sign up and create an account if you want; this is important especially for saving your API collections and accessing them anytime on any machine. However, for this article, we can skip this. There's a button for that towards the bottom when you first launch the app.

    Postman Collections

    A Postman collection is, in simple words, a collection of tests. It is essentially a test suite of related tests. These can be scenario-based or sequence/workflow-based tests.

    There’s a Collections tab on the top left of Postman, with an example Postman Echo collection. You can open and go through it.

    Select an API request from the Postman Echo collection and click on the Tests tab. Check the first line:

    tests["response code is 200"] = responseCode.code === 200;

    The above line is a simple test that checks whether the response code for the API is 200. This is the pattern for writing assertions/tests in Postman (using JavaScript), and this is how you will write tests for the APIs that need to be tested. You can open the other API requests in the Postman Echo collection to get a sense of how requests are made.
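    As a quick illustration of the same pattern, here are a couple more assertions built from Postman's other sandbox globals (the test names and the "url" property are just illustrative):

    tests["Body contains the url property"] = responseBody.has("url");
    tests["Response time is under 500ms"] = responseTime < 500;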

    Adding a Collection

    To make your own collection, click on the 'Add Collection' button on the top left of Postman and call it "Test API".

    You will be prompted to give details about the collection. I've added the name Github API and given it a description.

    Clicking on Create should add the collection to the left pane, above, or below the example “POSTMAN Echo” collection.

    If you need a hierarchy to keep multiple related APIs organised inside a collection, APIs can be grouped into folders within that collection. Folders are a great way of separating the different parts of your API workflow. Folders can be added through the "3 dot" button beside the collection name:

    E.g., name the folder "Get Calls" and give it a description once again.

    Now that we have the folder, the next task is to add an API call that is related to the TEST_API_COLLECTION to that folder. That API call is to https://api.github.com/.

    If you still have one of the TEST_API_COLLECTION collections open, you can close it the same way you close tabs in a browser, or just click on the plus button to add a new tab on the right pane where we make requests.

    Type in or paste in https://api.github.com/ and press Send to see the response.

    Once you get the response, click on the arrow next to the Save button on the far right and select Save As; a pop-up will be displayed asking where to save the API call.

    Give it a name; it can be the request URL or something like "GET Github Basic". Add a description, then choose the collection and folder, in this case TEST_API_COLLECTION > GET CALLS, and click on Save. The API call will be added to the Github Root API folder on the left pane.

    Whenever you click on this request from the collection, it will open in the center pane.

    Write the Tests

    We've seen that the GET Github Basic request has a JSON response, which is usually the case for most APIs. This response has properties such as current_user_url, emails_url, followers_url and following_url, to pick a few. The current_user_url has a value of https://api.github.com/user. Let's add a test for this URL. Click on 'GET Github Basic' and then click on the Tests tab in the section just below where the URL is entered.

    You will notice that the right pane offers some snippets, which Postman provides so that you don't have to write a lot of code. Let's add the "Response Body: JSON value check" snippet. Clicking on it produces the following code:

    var jsonData = JSON.parse(responseBody);
    tests["Your test name"] = jsonData.value === 100;

    From these two lines, it is apparent that Postman stores the response in a global object called responseBody, and we can use this object to access the response and assert values in tests as required.

    Postman also has another global object called tests, which you can use to name your tests and equate each one to a boolean expression. If the boolean expression evaluates to true, the test passes.

    tests['some random test'] = x === y

    If you click on Send to make the request, you will see one of the tests failing.

    Let's create a test that is relevant to our use case.

    var jsonData = JSON.parse(responseBody);
    var usersURL = "https://api.github.com/user"
    tests["Gets the correct users url"] = jsonData.current_user_url === usersURL;

    Clicking on ‘Send‘, you’ll see the test passing.

    Let's modify the test further to check some of the other properties we are interested in.

    Ideally, the things to be tested in an API response are:

    • Response code (assert the correct response code for every request)
    • Response time (check that the API responds within an acceptable time range and is not delayed)
    • Response body is not empty/null

    The following assertions cover these checks:

    tests["Status code is 200"] = responseCode.code === 200;
    tests["Response time is less than 200ms"] = responseTime < 200;
    tests["Response time is acceptable"] = _.inRange(responseTime, 0, 500);
    tests["Body is not empty"] = (responseBody !== null && responseBody.length !== 0);

    Newman CLI

    Once you've set up all your collections and written tests for them, it can be tedious to go through them one by one, clicking Send to see whether a given collection's tests pass. This is where Newman comes in: Newman is a command-line collection runner for Postman.

    All you need to do is export your collection and the environment variables, then use Newman to run the tests from your terminal.

    NOTE: Make sure you’ve clicked on ‘Save’ to save your collection first before exporting.

    Using Newman

    So the first step is to export your collection and environment variables. Click on the menu icon for the Github API collection and select Export.

    Select version 2, and click on “Export”

    Save the JSON file in a location you can access with your terminal. I created a local directory/folder called “postman” and saved it there.

    Install the Newman CLI globally, then navigate to where you saved the collection.

    npm install -g newman 
    cd postman

    Using Newman is quite straightforward, and the documentation is extensive. We will use the CLI here, but you can also require Newman as a Node.js module and drive the run from a script, along the lines of the sketch below.
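    A minimal sketch of the Node.js route (the file and collection names here are just placeholders):

    // run-tests.js: drive Newman programmatically instead of via the CLI
    const newman = require('newman');

    newman.run({
        collection: require('./TEST_API_COLLECTION.postman_collection.json'),
        reporters: 'cli'
    }, function (err, summary) {
        if (err) { throw err; }
        // summary.run.failures holds one entry per failed assertion
        console.log('Run complete, failed assertions:', summary.run.failures.length);
    });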

    Once you are in the directory, run newman run <collection_name.json>, replacing collection_name with the name you used to save the collection.

    newman run TEST_API_COLLECTION.postman_collection.json     

    Newman CLI Options

    Newman provides a rich set of options to customize a run. A list of options can be retrieved by running it with the -h flag.

    
    $ newman run -h
    Options:

    Utility:
    -h, --help                      Output usage information
    -v, --version                   Output the version number

    Basic setup:
    --folder [folderName]           Specify a single folder to run from a collection
    -e, --environment [file|URL]    Specify a Postman environment as a JSON [file]
    -d, --data [file]               Specify a data file to use, either JSON or CSV
    -g, --global [file]             Specify a Postman globals file as JSON [file]
    -n, --iteration-count [number]  Define the number of iterations to run

    Request options:
    --delay-request [number]        Specify a delay (in ms) between requests
    --timeout-request [number]      Specify a request timeout (in ms) for a request

    Misc.:
    --bail                          Stop the runner when a test case fails
    --silent                        Disable terminal output
    --no-color                      Disable colored output
    -k, --insecure                  Disable strict SSL
    -x, --suppress-exit-code        Continue running tests even after a failure, but exit with code=0
    --ignore-redirects              Disable automatic following of 3XX responses

    Let's try out some of the options.

    Iterations

    Let's use the -n option to set the number of times the collection is run.

    $ newman run mycollection.json -n 10 # runs the collection 10 times

    To provide a different set of data, i.e. variables, for each iteration, you can use the -d option to specify a JSON or CSV file. For example, the data file shown below will run 2 iterations, with each iteration using its own set of variables.

    [{
      "url": "http://127.0.0.1:5000",
      "user_id": "1",
      "id": "1",
      "token_id": "123123"
    },{
      "url": "http://postman-echo.com",
      "user_id": "2",
      "id": "2",
      "token_id": "899899"
    }]

    $ newman run mycollection.json -d data.json

    Alternatively, the CSV file for the same set of variables would look like this:

    url,user_id,id,token_id
    http://127.0.0.1:5000,1,1,123123
    http://postman-echo.com,2,2,899899
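    To sketch how these per-iteration values are consumed (none of this is part of the example collection), the keys can be referenced in the request URL using the usual {{...}} placeholders, and read inside a test script through the data object that Newman populates for each iteration:

    // Request URL in Postman:  {{url}}/get?user_id={{user_id}}
    // Test script for that request:
    tests["Iteration has a token"] = data.token_id && data.token_id.length > 0;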

    Environment Variables

    Each environment is a set of key-value pairs, with the key used as the variable name. Environment configurations can be used to separate settings specific to your execution environments, e.g. dev, test and production.

    To run against a particular execution environment, you can use the -e option to specify an environment JSON file. For example, an environment file such as the one shown below will make its variables available to all tests during execution.

    postman_dev_env.json

    {
      "id": "b5c617ad-7aaf-6cdf-25c8-fc0711f8941b",
      "name": "dev env",
      "values": [
        {
          "enabled": true,
          "key": "env",
          "value": "dev.example.com",
          "type": "text"
        }
      ],
      "timestamp": 1507210123364,
      "_postman_variable_scope": "environment",
      "_postman_exported_at": "2017-10-05T13:28:45.041Z",
      "_postman_exported_using": "Postman/5.2.1"
    }
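    Assuming the file above is saved as postman_dev_env.json alongside the collection (the file name is just an example), it can be passed to Newman with -e:

    $ newman run TEST_API_COLLECTION.postman_collection.json -e postman_dev_env.json

    Inside requests, the variable is referenced as {{env}}, and inside test scripts it is available through the environment global, e.g.:

    tests["Running against the dev host"] = environment.env === "dev.example.com";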

    Bail Flag

    Newman, by default, exits with a status code of 0 if everything runs well, i.e. without any exceptions. Continuous integration tools respond to these exit codes and pass or fail a build accordingly. You can use the --bail flag to tell Newman to halt on a test case error with a status code of 1, which can then be picked up by a CI tool or build system.

    $ newman run PostmanCollection.json -e environment.json --bail

    Conclusion

    Postman and Newman can be used for a wide range of test cases, from creating usage scenarios to building suites and packs of API test cases. Furthermore, Newman and Postman integrate very well with CI/CD tools such as Jenkins, Travis, etc.