Category: Type

  • How To Implement Chaos Engineering For Microservices Using Istio

    “Embrace failures. Chaos and failures are your friends, not enemies.” A microservice ecosystem is going to fail at some point. The question is not whether you will fail, but whether you will notice when you do. The difference is between an outage that takes all of your services down and affects every user, and a fault that affects only a few users and can be fixed on your own schedule.

    Chaos Engineering is the practice of intentionally introducing faults and failures into your microservice architecture to test the resilience and stability of your system. Istio is a great tool for doing so. Let’s have a look at how Istio makes it easy.

    For more information on how to set up Istio and what virtual services and gateways are, please have a look at the following blog: how to set up Istio on GKE.

    Fault Injection With Istio

    Fault injection is a testing method that introduces errors into your microservice architecture to ensure it can withstand error conditions. Istio lets you inject errors at the HTTP layer instead of delaying packets or killing pods at the network layer. This way, you can generate various types of HTTP error codes and test how your services react under those conditions.

    Generating HTTP 503 Error

    Here we see two pods running two different versions of the recommendation service, deployed by following the recommended tutorial while installing the sample application.

    Currently, the traffic on the recommendation service is automatically load balanced between those two pods.

    kubectl get pods -l app=recommendation
    NAME                                  READY     STATUS    RESTARTS   AGE
    recommendation-v1-798bf87d96-d9d95   2/2       Running   0          1h
    recommendation-v2-7bc4f7f696-d9j2m   2/2       Running   0          1h

    Now let’s apply a fault injection using a virtual service that will return HTTP 503 error codes for 30% of the traffic reaching the above pods.
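    A virtual service along the following lines injects the abort fault. This is a sketch: the host name assumes the recommendation service from the sample application, and the file name matches the `recommendation-fault.yaml` deleted later in this section.

```yaml
# recommendation-fault.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - fault:
      abort:
        percentage:
          value: 30.0
        httpStatus: 503
    route:
    - destination:
        host: recommendation
```

    Apply it with kubectl apply -f recommendation-fault.yaml.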

    To test whether it is working, curl the customer service microservice endpoint and check the output.

    You will find the 503 error on approximately 30% of the requests reaching the recommendation service.

    To restore normal operation, please delete the above virtual service using:

    kubectl delete -f recommendation-fault.yaml

    Delay

    The most common failure we see in production is not a service that is down, but a service that is slow. Sometimes your application doesn’t respond in time and creates chaos across the whole ecosystem. To simulate that behavior as a chaos experiment, you can create another virtual service that injects network latency. Let’s have a look.
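    A delay fault along these lines would inject the latency. This is a sketch: the 7-second delay on half the requests is an assumed illustration, not a value from the article.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - fault:
      delay:
        percentage:
          value: 50.0
        fixedDelay: 7s
    route:
    - destination:
        host: recommendation
```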

    Now, if you hit the endpoint URL of the above service in a loop, you will see delays in some of the requests.

    Retry‍

    For some production services, we expect a request to be retried N times instead of failing instantly; only if none of the attempts succeed should the request be considered failed.

    For that, you can configure retries on those services as follows:
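    A sketch of such a virtual service; the 3 attempts match the behavior described below, while the per-try timeout is an assumed value:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
    retries:
      attempts: 3
      perTryTimeout: 2s
```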

    Now any request coming to the recommendation service will be attempted 3 times before being considered failed.

    Timeout‍

    In the real world, most application failures are due to timeouts. They can be caused by heavy load on the application or by other latency in serving the request. Your application should have proper timeouts defined before declaring any request as “failed”. You can use Istio to simulate the timeout mechanism and give your application a limited amount of time to respond before giving up.

    Here, the virtual service waits only one second before failing and giving up:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: recommendation
    spec:
      hosts:
      - recommendation
      http:
      - route:
        - destination:
            host: recommendation
        timeout: 1.000s

    Conclusion‍

    Istio lets you inject faults at the HTTP layer for your application, improving its resilience and stability. But the application must handle the failures and take an appropriate course of action. Chaos Engineering is only effective when your application is built to tolerate failures; there is no point in testing for chaos if you already know your application will break.

  • An Introduction to Asynchronous Programming in Python

    Introduction

    Asynchronous programming is a type of parallel programming in which a unit of work is allowed to run separately from the primary application thread. When the work is complete, it notifies the main thread about completion or failure of the worker thread. There are numerous benefits to using it, such as improved application performance and enhanced responsiveness.

    Asynchronous programming has been gaining a lot of attention in the past few years, and for good reason. Although it can be more difficult than the traditional linear style, it is also much more efficient.

    For example, instead of waiting for an HTTP request to finish before continuing execution, with Python async coroutines you can submit the request and do other work that’s waiting in a queue while waiting for the HTTP request to finish.

    Asynchronicity seems to be a big reason why Node.js is so popular for server-side programming. Much of the code we write, especially in heavy IO applications like websites, depends on external resources. This could be anything from a remote database call to POSTing to a REST service. As soon as you ask for any of these resources, your code is waiting around with nothing to do. With asynchronous programming, you allow your code to handle other tasks while waiting for these other resources to respond.

    How Does Python Do Multiple Things At Once?

    1. Multiple Processes

    The most obvious way is to use multiple processes. From the terminal, you can start your script two, three, four…ten times, and all the scripts will run independently, at the same time. The operating system underneath takes care of sharing your CPU resources among all those instances. Alternatively, you can use the multiprocessing library, which supports spawning processes, as shown in the example below.

    from multiprocessing import Process
    
    
    def print_func(continent='Asia'):
        print('The name of continent is : ', continent)
    
    if __name__ == "__main__":  # confirms that the code is under main function
        names = ['America', 'Europe', 'Africa']
        procs = []
        proc = Process(target=print_func)  # instantiating without any argument
        procs.append(proc)
        proc.start()
    
        # instantiating process with arguments
        for name in names:
            # print(name)
            proc = Process(target=print_func, args=(name,))
            procs.append(proc)
            proc.start()
    
        # complete the processes
        for proc in procs:
            proc.join()

    Output:

    The name of continent is :  Asia
    The name of continent is :  America
    The name of continent is :  Europe
    The name of continent is :  Africa

    2. Multiple Threads

    The next way to run multiple things at once is to use threads. A thread is a line of execution, pretty much like a process, but you can have multiple threads within one process, and they all share access to common resources. Because of this, it is harder to write correct threaded code. The operating system again does the heavy lifting of sharing the CPU, but the global interpreter lock (GIL) allows only one thread to run Python code at a given time, even when you have multiple threads running. So, in CPython, the GIL prevents multi-core concurrency: you are effectively running on a single core even though your machine may have two, four, or more.

    import threading
     
    def print_cube(num):
        """
        function to print cube of given num
        """
        print("Cube: {}".format(num * num * num))
     
    def print_square(num):
        """
        function to print square of given num
        """
        print("Square: {}".format(num * num))
     
    if __name__ == "__main__":
        # creating thread
        t1 = threading.Thread(target=print_square, args=(10,))
        t2 = threading.Thread(target=print_cube, args=(10,))
     
        # starting thread 1
        t1.start()
        # starting thread 2
        t2.start()
     
        # wait until thread 1 is completely executed
        t1.join()
        # wait until thread 2 is completely executed
        t2.join()
     
        # both threads completely executed
        print("Done!")

    Output:

    Square: 100
    Cube: 1000
    Done!

    3. Coroutines using yield:

    Coroutines are a generalization of subroutines. They are used for cooperative multitasking, where a process voluntarily yields (gives away) control periodically, or when idle, so that multiple applications can run simultaneously. Coroutines are similar to generators, but with a few extra methods and a slight change in how we use the yield statement: generators produce data for iteration, while coroutines can also consume data.

    def print_name(prefix):
        print("Searching prefix:{}".format(prefix))
        try:
            while True:
                # yield is used to create the coroutine
                name = (yield)
                if prefix in name:
                    print(name)
        except GeneratorExit:
            print("Closing coroutine!!")
     
    corou = print_name("Dear")
    corou.__next__()          # advance execution to the first yield
    corou.send("James")       # no output: "Dear" is not in "James"
    corou.send("Dear James")  # prints "Dear James"
    corou.close()

    Output:

    Searching prefix:Dear
    Dear James
    Closing coroutine!!

    4. Asynchronous Programming

    The fourth way is asynchronous programming, where the OS is not participating. As far as the OS is concerned, you have one process with a single thread inside it, yet you are able to do multiple things at once. So, what’s the trick?

    The answer is asyncio

    asyncio is the concurrency module introduced in Python 3.4. It is designed to use coroutines and futures to simplify asynchronous code and make it almost as readable as synchronous code, as there are no callbacks.

    asyncio uses different constructs: event loops, coroutines, and futures.

    • An event loop manages and distributes the execution of different tasks. It registers them and handles distributing the flow of control between them.
    • Coroutines (covered above) are special functions that work similarly to Python generators; on await, they release the flow of control back to the event loop. A coroutine needs to be scheduled to run on the event loop; once scheduled, coroutines are wrapped in Tasks, which are a type of Future.
    • Futures represent the result of a task that may or may not have been executed. This result may be an exception.

    Using asyncio, you can structure your code so subtasks are defined as coroutines and schedule them as you please, including simultaneously. Coroutines contain yield points, where a context switch can happen if other tasks are pending; if no other task is pending, no switch occurs.

    A context switch in asyncio represents the event loop yielding the flow of control from one coroutine to the next.
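    For instance, here is a minimal sketch of that hand-off, using asyncio.sleep as the waiting period (the worker names and delays are illustrative, not from the article):

```python
import asyncio

log = []

async def worker(name, delay):
    log.append(f"{name} started")
    await asyncio.sleep(delay)  # yield point: control goes back to the event loop
    log.append(f"{name} finished")

async def main():
    # schedule both workers; they interleave on a single thread
    await asyncio.gather(worker("a", 0.2), worker("b", 0.1))

asyncio.run(main())  # Python 3.7+
print(log)  # ['a started', 'b started', 'b finished', 'a finished']
```

    While worker a is sleeping, the event loop hands control to worker b; neither blocks the other.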

    In the example, we run 3 async tasks that query Reddit separately, then extract and print the JSON. We leverage aiohttp, an HTTP client library, which ensures that even the HTTP requests run asynchronously.

    import signal  
    import sys  
    import asyncio  
    import aiohttp  
    import json
    
    loop = asyncio.get_event_loop()  
    client = aiohttp.ClientSession(loop=loop)
    
    async def get_json(client, url):  
        async with client.get(url) as response:
            assert response.status == 200
            return await response.read()
    
    async def get_reddit_top(subreddit, client):  
        data1 = await get_json(client, 'https://www.reddit.com/r/' + subreddit + '/top.json?sort=top&t=day&limit=5')
    
        j = json.loads(data1.decode('utf-8'))
        for i in j['data']['children']:
            score = i['data']['score']
            title = i['data']['title']
            link = i['data']['url']
            print(str(score) + ': ' + title + ' (' + link + ')')
    
        print('DONE:', subreddit + '\n')
    
    def signal_handler(signal, frame):  
        loop.stop()
        client.close()
        sys.exit(0)
    
    signal.signal(signal.SIGINT, signal_handler)
    
    asyncio.ensure_future(get_reddit_top('python', client))  
    asyncio.ensure_future(get_reddit_top('programming', client))  
    asyncio.ensure_future(get_reddit_top('compsci', client))  
    loop.run_forever()

    Output:

    50: Undershoot: Parsing theory in 1965 (http://jeffreykegler.github.io/Ocean-of-Awareness-blog/individual/2018/07/knuth_1965_2.html)
    12: Question about best-prefix/failure function/primal match table in kmp algorithm (https://www.reddit.com/r/compsci/comments/8xd3m2/question_about_bestprefixfailure_functionprimal/)
    1: Question regarding calculating the probability of failure of a RAID system (https://www.reddit.com/r/compsci/comments/8xbkk2/question_regarding_calculating_the_probability_of/)
    DONE: compsci
    
    336: /r/thanosdidnothingwrong -- banning people with python (https://clips.twitch.tv/AstutePluckyCocoaLitty)
    175: PythonRobotics: Python sample codes for robotics algorithms (https://atsushisakai.github.io/PythonRobotics/)
    23: Python and Flask Tutorial in VS Code (https://code.visualstudio.com/docs/python/tutorial-flask)
    17: Started a new blog on Celery - what would you like to read about? (https://www.python-celery.com)
    14: A Simple Anomaly Detection Algorithm in Python (https://medium.com/@mathmare_/pyng-a-simple-anomaly-detection-algorithm-2f355d7dc054)
    DONE: python
    
    1360: git bundle (https://dev.to/gabeguz/git-bundle-2l5o)
    1191: Which hashing algorithm is best for uniqueness and speed? Ian Boyd's answer (top voted) is one of the best comments I've seen on Stackexchange. (https://softwareengineering.stackexchange.com/questions/49550/which-hashing-algorithm-is-best-for-uniqueness-and-speed)
    430: ARM launchesFactscampaign against RISC-V (https://riscv-basics.com/)
    244: Choice of search engine on Android nuked byAnonymous Coward” (2009) (https://android.googlesource.com/platform/packages/apps/GlobalSearch/+/592150ac00086400415afe936d96f04d3be3ba0c)
    209: Exploiting freely accessible WhatsApp data orWhy does WhatsApp web know my phones battery level?” (https://medium.com/@juan_cortes/exploiting-freely-accessible-whatsapp-data-or-why-does-whatsapp-know-my-battery-level-ddac224041b4)
    DONE: programming

    Using Redis and Redis Queue (RQ):

    Using asyncio and aiohttp may not always be an option, especially if you are using an older version of Python. Also, there will be scenarios where you want to distribute your tasks across different servers. In that case, you can leverage RQ (Redis Queue), a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis, a key/value data store.

    In the example below, we queue a simple function, count_words_at_url, using RQ.

    from mymodule import count_words_at_url
    from redis import Redis
    from rq import Queue
    
    
    q = Queue(connection=Redis())
    job = q.enqueue(count_words_at_url, 'http://nvie.com')
    
    
    ****** mymodule.py ******
    
    import requests
    
    def count_words_at_url(url):
        """Just an example function that's called async."""
        resp = requests.get(url)
        word_count = len(resp.text.split())
        print(word_count)
        return word_count
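    To process the queued job, a worker must be running. Assuming Redis is running locally on its default port, a worker can be started from the project directory with the standard RQ command, which listens on the default queue:

```
$ rq worker
```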

    Output:

    15:10:45 RQ worker 'rq:worker:EMPID18030.9865' started, version 0.11.0
    15:10:45 *** Listening on default...
    15:10:45 Cleaning registries for queue: default
    15:10:50 default: mymodule.count_words_at_url('http://nvie.com') (a2b7451e-731f-4f31-9232-2b7e3549051f)
    322
    15:10:51 default: Job OK (a2b7451e-731f-4f31-9232-2b7e3549051f)
    15:10:51 Result is kept for 500 seconds

    Conclusion:

    Let’s take the classic example of a chess exhibition, where one of the best chess players competes against a lot of people. If there are 24 games with 24 opponents and the chess master plays all of them synchronously, it will take at least 12 hours (assuming the average game takes 30 moves, the chess master thinks for 5 seconds per move, and the opponent takes approximately 55 seconds). Playing asynchronously, however, the chess master can make a move and leave the opponent thinking while going to the next board and making a move there. This way, a move can be made on all 24 games in 2 minutes, and all of them can be won in just one hour.

    So, this is what’s meant when people talk about asynchronous code being really fast. It’s this kind of fast. The chess master doesn’t play chess faster; the time is simply better optimized and not wasted on waiting around. This is how it works.

    In this analogy, the chess master is our CPU, and the idea is to make sure that the CPU waits the least amount of time possible: it’s about always finding something for it to do.

    A practical definition of async is that it’s a style of concurrent programming in which tasks release the CPU during waiting periods so that other tasks can use it. In Python, there are several ways to achieve concurrency; based on your requirements, code flow, data manipulation, architecture design, and use case, you can select any of these methods.

  • Hear from Sema’s Founder & CEO, Matt Van Itallie on the Golden Rules of Code Reviews

    We had a wonderful time talking to Matt about Sema, a much-awaited online code review and developer portfolio tool, and how Velotio helped in building it. He also talks about the basics of writing more meaningful code reviews and his team’s experience of working with Velotio. 

    Spradha: Talk us through your entrepreneurial journey, and what made you build Sema?

    Matt: I learned to code from my parents, who were both programmers, and then worked on the organizational side of technology. I decided to start Sema because of the challenges I saw with lower code quality and companies not managing the engineers’ careers the right way. And so, I wanted to build a company to help solve that. 

    At Sema, we are a 50-person team with engineers from all over the globe. Being a global team is one of my favorite things at Sema because I get to learn so much from people with so many different backgrounds. We have been around since 2017. 

    We are building products to help improve code quality and to help engineers improve their careers and their knowledge. We think that code is a craft. It’s not a competition. And so, to build tools for engineers to improve the code and skills must begin with treating code as a craft. 

    Spradha: What advice do you have for developers when it comes to code reviews?

    Matt: Code reviews are a great way to improve the quality of code and help developers improve their skills. Anyone can benefit from doing code reviews, as well as, of course, receiving code reviews. Even junior developers reviewing the code of seniors can provide meaningful feedback. They can sometimes teach the seniors while having meaningful learning moments as a reviewer themselves. 

    And so, code reviews are incredibly important. There are six tips I would share. 

    • Treat code reviews as part of coding and prioritize it

    It might be obvious, but developers and the engineering team, and the entire company should treat code reviews as part of coding, not as something to do when devs have time. The reason is that an incomplete code review is a blocker for another engineer. So, the faster we can get high-quality code reviews, the faster other engineers can progress. 

    • Always remember – it’s a human being on the other end of the review

    Code reviews should be clear, concise, and communicated the right way. It’s also important to deliver the message with empathy. You can always consider reading your code review out loud and asking yourself, “Is this something I would want to be said to me?” If not, change the tone or content.

    • Give clear recommendations and suggestions

    Never tell someone that the code needs to be fixed without giving suggestions or recommendations on what to fix or how to fix it.

    • Always assume good intent

    Code may not be written how you would write it. Let’s say that more clearly: code is rarely written the same way by two different people. After all, code is a craft, not a task on an assembly line. Tap into a sense of curiosity and appreciation while reviewing – curiosity to understand what the coder had in mind and gratitude for what they did or tried to do.

    • Clarify the action and the level of importance

    If you are making an optional suggestion, for example, a “nit” that isn’t necessary before the code is approved for production, say so clearly.

    • Don’t forget that code feedback – and all feedback – includes praise.

    It goes without saying that a benefit of doing code reviews is to make the code better and fix issues. But that’s only half of it. On the flip side, code reviews present an excellent opportunity to appreciate your colleagues’ work. If someone has written particularly elegant or maintainable code or has made a great decision about using a library, let them know!

    It’s always the right time to share positive feedback.

    Spradha: How has Velotio helped you build the code review tool at Sema?

    Matt: We’ve been working with Velotio for over 18 months. We have several amazing colleagues from Velotio, including software developers, DevOps engineers, and product managers. 

    Our Velotio colleagues have been instrumental in building our new product, a free code review assistant and developer portfolio tool. It includes a Chrome extension that makes code reviews clearer, more robust, and reusable. The information is also available in dashboards for further exploration.

    Developers can now create portfolios of their work that go far beyond a traditional developer portfolio. A portfolio is based on what other reviewers have said about your code and what you have said as a reviewer on other people’s code. It allows you to really tell a clear story about what you have worked on and how you have grown.

    Spradha: How has your experience been working with Velotio?

    Matt: We have had an extraordinary experience working with the Velotio team at Sema. We consider our colleagues from Velotio as core team members and leaders of our organization. I have been so impressed with the quality, the knowledge, the energy, and the commitment that Velotio colleagues have shown. And we would not have been able to achieve as much as we have without their contribution.

    I can think of three crucial moments in particular when talking about the impact our Velotio colleagues have made. First, one of their engineers played a major role in designing and building the Chrome extension that we use. 

    Secondly, a DevOps engineer from Velotio has radically improved the setup, reliability, and ease of use of our DevOps systems. 

    And third, a product manager from Velotio has been an extraordinary project leader for a critical feature of the Sema tool, a library of over 20,000 best practices that coders can insert into code reviews, which saves time and helps make the code reviews more robust. 

    We literally would not be where we are if it was not for the great work of our colleagues from Velotio. 

    Spradha: How can people learn more about Sema?

    Matt: You can visit our website at www.semasoftware.com. And for those interested in using our tool to help with code reviews, whether it’s a commercial project or an open-source project, you can sign up for free. 

  • Create CI/CD Pipeline in GitLab in under 10 mins

    Why Choose GitLab Over Other CI Tools?

    With so many tools available in the market, like CircleCI, GitHub Actions, Travis CI, etc., what makes GitLab CI so special? The easiest way to decide whether GitLab CI is right for you is to take a look at the following use case:

    GitLab knocks it out of the park when it comes to code collaboration and version control. Monitoring the entire code repository, along with all branches, becomes manageable, whereas with other popular tools like Jenkins, you can only monitor some branches. If your development teams are spread across multiple locations globally, GitLab serves you well. Regarding price, while Jenkins is free, you need a subscription to use all of GitLab’s features.

    In GitLab, every branch can contain a .gitlab-ci.yml file, which makes it easy to modify workflows. For example, if you want to run unit tests on branch A and functional tests on branch B, you can simply modify the YAML configuration for CI/CD, and the runner will take care of running the jobs for you. Here is a comprehensive list of the pros and cons of GitLab to help you make a better decision.

    Intro

    GitLab is an open-source collaboration platform that provides powerful features beyond hosting a code repository. You can track issues, host packages and registries, maintain Wikis, set up continuous integration (CI) and continuous deployment (CD) pipelines, and more.

    In this tutorial, you will configure a pipeline with three stages: build, deploy, test. The pipeline will run for each commit pushed to the repository.

    GitLab and CI/CD

    As we all are aware, a fully-fledged CI/CD pipeline primarily includes the following stages:

    • Build
    • Test
    • Deploy

    Here is a pictorial representation of how GitLab covers CI and CD:

    Source: gitlab.com

    Let’s take a look at an example of an automation testing pipeline. Here, CI empowers test automation and CD automates the release process to various environments. The below image perfectly demonstrates the entire flow.

    Source: xenonstack.com

    Let’s create the basic 3-stage pipeline

    Step 1: Create a project > Create a blank project

    Visit gitlab.com and create your account if you don’t have one already. Once done, click “New Project,” and on the following screen, click “Create Blank Project.” Name it My First Project, leave other settings to default for now, and click Create.
    Alternatively, if you already have your codebase in GitLab, proceed to Step 2.

    Step 2: Create a GitLab YAML

    To create a pipeline in GitLab, we need to define it in a YAML file. This YAML file should reside in the root directory of your project and must be named .gitlab-ci.yml. GitLab provides a set of predefined keywords that are used to define a pipeline.

    In order to design a basic pipeline, let’s understand the structure of a pipeline. If you are already familiar with the basic structure given below, you may want to skip ahead to the advanced pipeline outline for various environments.

    The hierarchy in GitLab is Pipeline > Stages > Jobs, as shown below. The source (SRC) is often a git commit or a cron job, which triggers the pipeline on a defined branch.

    Now, let’s understand the commonly used keywords to design a pipeline:

    1. stages: This is used to define stages in the pipeline.
    2. variables: Here you can define the environment variables that can be accessed in all the jobs.
    3. before_script: This is a list of commands to be executed before each job. For example: creating specific directories, logging, etc.
    4. artifacts: If your job creates any artifacts, you can mention the path to find them here.
    5. after_script: This is a list of commands to be executed after each job. For example: cleanup.
    6. tags: This is a tag/label to identify the runner or a GitLab agent to assign your jobs to. If the tags are not specified, the jobs run on shared runners.
    7. needs: If you want your jobs to be executed in a certain order or you want a particular job to be executed before the current job, then you can set this value to the specific job name.
    8. only/except: These keywords are used to control when the job should be added to the pipeline. Use ‘only’ to define when a job should be added, whereas ‘except’ is used to define when a job should not be added. Alternatively, the ‘rules’ keyword is also used to add/exclude jobs based on conditions.

    You can find more keywords here.

    Let’s create a sample YAML file.

    stages:
      - build
      - deploy
      - test
    
    variables:
      RAILS_ENV: "test"
      NODE_ENV: "test"
      GIT_STRATEGY: "clone"
      CHROME_VERSION: "103"
      DOCKER_VERSION: "20.10.14"
    
    build-job:
      stage: build
      script:
        - echo "Check node version and build your binary or docker image."
        - node -v
        - bash buildScript.sh
    
    deploy-code:
      stage: deploy
      needs: ["build-job"]
      script:
        - echo "Deploy your code "
        - cd to/your/desired/folder
        - bash deployScript.sh
    
    test-code:
      stage: test
      needs: ["deploy-code"]
      script:
        - echo "Run your tests here."
        - cd to/your/desired/folder
        - npm run test

    As you can see, if you have your scripts in a bash file, you can run them from the pipeline by providing the correct path.

    Once your YAML is ready, commit the file. 

    Step 3: Check Pipeline Status

    Navigate to CI/CD > Pipelines from the left navigation bar. You can check the status of the pipeline on this page.

    Here, you can check the commit ID, branch, the user who triggered the pipeline, stages, and their status.

    If you click on the status, you will get a detailed view of pipeline execution.

    If you click on a job under any stage, you can check console logs in detail.

    If you have any artifacts created in your pipeline jobs, you can find them by clicking on the 3 dots for the pipeline instance.

    Advanced Pipeline Outline

    For an advanced pipeline that consists of various environments, you can refer to the below YAML. Simply remove the echo statements and replace them with your set of commands.

    image: your-repo:tag

    variables:
      DOCKER_DRIVER: overlay2
      DOCKER_TLS_CERTDIR: ""
      DOCKER_HOST: tcp://localhost:2375
      SAST_DISABLE_DIND: "true"
      DS_DISABLE_DIND: "false"
      GOCACHE: "$CI_PROJECT_DIR/.cache"

    # This section caches libraries etc. between pipeline runs, reducing the time a pipeline takes.
    cache:
      key: ${CI_PROJECT_NAME}
      paths:
        - cache-path/

    # include:  # You can include pipeline definitions from other projects here.
    #   - project: "some/other/important/project"
    #     ref: main
    #     file: "src/project.yml"

    default:
      tags:
        - your-common-instance-tag

    stages:
      - build
      - test
      - deploy_dev
      - dev_tests
      - deploy_qa
      - qa_tests
      - rollback_qa
      - prod_gate
      - deploy_prod
      - rollback_prod
      - cleanup

    build:
      stage: build
      services:
        - docker:19.03.0-dind
      before_script:
        - echo "Run your pre-build commands here"
        - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
      script:
        - docker build -t $CI_REGISTRY/repo:$DOCKER_IMAGE_TAG --build-arg GITLAB_USER=$GITLAB_USER --build-arg GITLAB_PASSWORD=$GITLAB_PASSWORD -f ./Dockerfile .
        - docker push $CI_REGISTRY/repo:$DOCKER_IMAGE_TAG
        - echo "Run your builds here"

    unit_test:
      stage: test
      image: your-repo:tag
      script:
        - echo "Run your unit tests here"

    linting:
      stage: test
      image: your-repo:tag
      script:
        - echo "Run your linting tests here"

    sast:
      stage: test
      image: your-repo:tag
      script:
        - echo "Run your static application security testing here"

    deploy_dev:
      stage: deploy_dev
      image: your-repo:tag
      before_script:
        - source file.sh
        - export VARIABLE="$VALUE"
        - echo "deploy on dev"
      script:
        - echo "deploy on dev"
      after_script:
        # If deployment fails, run a rollback on dev.
        - echo "Things to do after deployment is run"
      only:
        - master # Depends on your branching strategy

    integration_test_dev:
      stage: dev_tests
      image: your-repo:tag
      script:
        - echo "run tests on dev"
      only:
        - master
      allow_failure: true # In case failures are allowed

    deploy_qa:
      stage: deploy_qa
      image: your-repo:tag
      before_script:
        - source file.sh
        - export VARIABLE="$VALUE"
        - echo "deploy on qa"
      script:
        - echo "deploy on qa"
      after_script:
        # If deployment fails, run a rollback on qa.
        - echo "Things to do after deployment script is complete"
      only:
        - master
      needs: ["integration_test_dev", "deploy_dev"]
      allow_failure: false

    integration_test_qa:
      stage: qa_tests
      image: your-repo:tag
      script:
        - echo "run tests on qa"
      only:
        - master
      allow_failure: true # In case you want to allow failures

    rollback_qa:
      stage: rollback_qa
      image: your-repo:tag
      before_script:
        - echo "Things to roll back after qa integration failure"
      script:
        - echo "Steps to roll back"
      after_script:
        - echo "Things to do after rollback"
      only:
        - master
      needs: ["deploy_qa"]
      when: on_failure # This will run in case the qa deploy job fails
      allow_failure: false

    prod_gate: # This is a manual gate for prod approval.
      stage: prod_gate
      script:
        - echo "your commands here"
      only:
        - master
      needs:
        - deploy_qa
      when: manual

    deploy_prod:
      stage: deploy_prod
      image: your-repo:tag
      tags:
        - some-tag
      before_script:
        - source file.sh
        - echo "your commands here"
      script:
        - echo "your commands here"
      after_script:
        # If deployment fails:
        - echo "your commands here"
      only:
        - master
      needs: ["deploy_qa"]
      allow_failure: false

    rollback_prod: # This job should run only when the prod deployment fails.
      stage: rollback_prod
      image: your-repo:tag
      before_script:
        - export VARIABLE="$VALUE"
        - echo "your commands here"
      script:
        - echo "your commands here"
      only:
        - master
      needs: ["deploy_prod"]
      allow_failure: false
      when: on_failure

    cleanup:
      stage: cleanup
      script:
        - echo "run cleanup"
        - rm -rf .cache/
      when: always
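
    A quick note on the outline above: newer GitLab releases deprecate `only:`/`except:` in favor of `rules:`. If your instance supports it, the `only: [master]` condition used throughout can be expressed as follows (a sketch using GitLab's predefined `CI_COMMIT_BRANCH` variable):

```yaml
deploy_dev:
  stage: deploy_dev
  script:
    - echo "deploy on dev"
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
```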

    Conclusion

    If you have worked with Jenkins, you know the pain of maintaining Groovy pipeline code. GitLab CI, in contrast, makes it easy to design, understand, and maintain pipeline code. 

    Here are some pros and cons of using GitLab CI that will help you decide if this is the right tool for you!

  • Flutter vs React Native: A Detailed Comparison

    Flutter and React Native are two of the most popular cross-platform development frameworks on the market. Both of these technologies enable you to develop applications for iOS and Android with a single codebase. However, they’re not entirely interchangeable.

    Flutter allows developers to create Material Design-like applications with ease. React Native, on the other hand, has an active community of open source contributors, which means that it can be easily modified to meet almost any standard.

    In this blog, we have compared both of these technologies based on popularity, performance, learning curve, community support, and developer mindshare to help you decide which one you can use for your next project.

    But before digging into the comparison, let’s have a brief look at both these technologies:

    ‍About React Native

    React Native has gained the attention of many developers for its ease of use with JavaScript code. Facebook developed the framework to solve cross-platform application development using React and introduced React Native at its first React.js conference in 2015.

    React Native enables developers to create high-end mobile apps with the help of JavaScript. This eventually comes in handy for speeding up the process of developing mobile apps. The framework also makes use of the impressive features of JavaScript while maintaining excellent performance. React Native is highly feature-rich and allows you to create dynamic animations and gestures which are usually unavailable in the native platform.

    React Native has been adopted by many companies as their preferred technology. 

    For example:

    • Facebook
    • Instagram
    • Skype
    • Shopify
    • Tesla
    • Salesforce

    About Flutter

    Flutter is an open-source mobile development kit that makes it easy for developers to build high-quality applications for Android and iOS. It has a library with widgets to create the user interface of the application independent of the platform on which it is supported.

    Flutter has extended the reach of mobile app development by enabling developers to build apps on any platform without being restrained by mobile development limitations. The framework started as an internal project at Google back in 2015, with its first stable release in 2018.

    Since its inception, Google aimed to provide a simplistic, usable programming language for building sophisticated apps and wanted to carry out Dart’s goal of replacing JavaScript as the next-generation web programming language.

    Let’s see what all apps are built using Flutter:

    • Google Ads
    • eBay
    • Alibaba
    • BMW
    • Philips Hue

    React Native vs. Flutter – An overall comparison

    Design Capabilities

    React Native is based on React.js, one of the most popular JavaScript libraries for building user interfaces. It is often used with Redux, which provides a solid basis for creating predictable web applications.

    Flutter, on the other hand, is Google’s new mobile UI framework. Flutter uses Dart language to write code, compiled to native code for iOS and Android apps.

    Both React Native and Flutter can be used to create applications with beautiful graphics and smooth animations.

    React Native

    In the React Native framework, UI elements look native to both iOS and Android platforms. These UI elements make it easier for developers to build apps because they only have to write them once. In addition, many of these components also render natively on each platform. The user experiences an interface that feels more natural and seamless while maintaining the capability to customize the app’s look and feel.

    The framework allows developers to use JavaScript or a combination of HTML/CSS/JavaScript for cross-platform development. While React Native allows you to build native apps, it does not mean that your app will look the same on both iOS and Android.

    Flutter

    Flutter is a toolkit for creating high-performance, high-fidelity mobile apps for iOS and Android. Flutter works with existing code, is used by developers and organizations worldwide, and is free and open source. By default, Flutter offers a standard, platform-neutral style.

    Flutter has its own widgets library, which includes Material Design Components and Cupertino. 

    The Material package contains widgets that look like they belong on Android devices. The Cupertino package contains widgets that look like they belong on iOS devices. By default, a Flutter application uses Material widgets. If you want to use Cupertino widgets, then import the Cupertino library and change your app’s theme to CupertinoTheme.

    Community

    Flutter and React Native have a very active community of developers. Both frameworks have extensive support and documentation and an active GitHub repository, which means they are constantly being maintained and updated.

    With the Flutter community, we can even find exciting tools such as Flutter Inspector or Flutter WebView Plugin. In the case of React Native, Facebook has been investing heavily in this framework. Besides the fact that the development process is entirely open-source, Facebook has created various tools to make the developer’s life easier.

    Also, the more updates and versions come out, the more interest and appreciation the developer community shows. Let’s see how both frameworks stack up when it comes to community engagement.

    For React Native

    The Facebook community is the most significant contributor to the React Native framework, followed by the community members themselves.

    React Native has garnered over 1,162 contributors on GitHub since its launch in 2015. The number of commits (or changes) to the framework has increased over time. It increased from 1,183 commits in 2016 to 1,722 commits in 2017.

    This increase indicates that more and more developers are interested in improving React Native.

    Moreover, there are over 19.8k live projects where developers share their experiences to resolve existing issues. The official React Native website offers tutorials for beginners who want to get started quickly with developing applications for Android and iOS while also providing advanced users with the necessary documentation.

    Also, there are a few other platforms where you can ask your question to the community, meet other React Native developers, and gain new contacts:

    Reddit: https://www.reddit.com/r/reactnative/

    Stack Overflow: http://stackoverflow.com/questions/tagged/react-native

    Meetup: https://www.meetup.com/topics/react-native/

    Facebook: https://www.facebook.com/groups/reactnativecommunity/

    For Flutter

    The Flutter community is smaller than React Native's. The main reason is that Flutter is relatively new and not yet widely used in production apps. But it's not hard to see that its popularity is growing day by day. Flutter has excellent documentation with examples, articles, and tutorials that you can find online. It also maintains direct contact with its users through channels such as Stack Overflow and Google Groups.

    The community of Flutter is growing at a steady pace with around 662 contributors. The total count of projects being forked by the community is 13.7k, where anybody can seek help for development purposes.

    Here are some platforms to connect with other developers in the Flutter community:

    GitHub: https://github.com/flutter

    Google Groups: https://groups.google.com/g/flutter-announce

    Stack Overflow: https://stackoverflow.com/tags/flutter

    Reddit: https://www.reddit.com/r/FlutterDev/

    Discord: https://discord.com/invite/N7Yshp4

    Slack: https://fluttercommunity.slack.com/

    Learning curve

    Flutter's learning curve is steeper than React Native's. However, you can learn both frameworks within a reasonable time frame. So, let's discuss what would be required to learn React Native and Flutter.

    React Native

    React Native is written in JavaScript, so anyone who knows how to write JS can use the framework. It is, however, different from building web applications, so if you are a mobile developer, getting the hang of things might take some time.

    However, React Native is relatively easy to learn for newbies. For starters, it offers a variety of resources, both online and offline. On the React website, users can find the documentation, guides, FAQs, and learning resources.

    Flutter

    Flutter has a somewhat steeper learning curve than React Native. You need to know some basic concepts of native Android or iOS development, and experience in Java or Kotlin for Android, or Objective-C or Swift for iOS, helps. Dart can be a challenge if you're accustomed to languages without type casts and generics. However, once you've learned how to use it, it can speed up your development process.

    Flutter provides great documentation of its APIs that you can refer to. Since the framework is still new, some information might not be updated yet.

    Team size

    Team size is a central factor in choosing between React Native and Flutter. To set a realistic expectation of the cost, you need to consider the type of application you will develop.

    React Native

    Technically, React Native's core library can be implemented by a single developer, though that developer would have to build all the native modules on their own, which is no easy task. In practice, the required team size for React Native depends on the complexity of the mobile app you want to build. If you plan to create a simple mobile app, such as a mobile-only website, one developer will be enough. If your project requires complex UI and animations, you will need more skilled and experienced developers.

    Flutter

    Team size is also an important factor for Flutter app development. The number of people on your team will depend on the requirements and the type of app you need to develop.

    Flutter makes it easy to use existing code that you might already have, or share code with other apps that you might already be building. You can even use Java or Kotlin if you prefer (though Dart is preferred).

    UI component

    When developing a cross-platform app, keep in mind that not all platforms behave identically. You will need to choose a library that keeps the app's core elements consistent across platforms, and the framework needs to expose an API so that we can access the native modules.

    React Native

    There are two aspects to implementing React Native in your app development. The first one is writing the apps in JavaScript. This is the easiest part since it’s somewhat similar to writing web apps. The second aspect is the integration of third-party modules that are not part of the core framework.

    The reason third-party modules are required is that React Native does not support all native functionalities. For instance, if you want to implement a native alert box, you need to import a module wrapping UIAlertController from Apple's SDK.

    This makes the React Native framework somewhat dependent on third-party libraries. There are lots of third-party libraries for React Native, and you can use them in your project to add native app features that are not available in React Native itself, most commonly maps, camera, sharing, and other native functionality.

    Flutter

    Flutter offers rich GUI components called widgets. A widget can be anything from simple text fields, buttons, switches, sliders, etc., to complex layouts that include multiple pages with split views, navigation bars, tab bars, etc., that are present in modern mobile apps.

    The Flutter toolkit is cross-platform and it has its own widgets, but it still needs third-party libraries to create applications. It also depends on the Android SDK and the iOS SDK for compilation and deployment. Developers can use any third-party library they want as long as it does not have any restrictions on open source licensing. Developers are also allowed to create their own libraries for Flutter app development.

    Testing Framework and Support

    React Native and Flutter have been used to develop many high-quality mobile applications. Of course, in any technology, a well-developed testing framework is essential.

    Based on this, we can see that both React Native and Flutter have a relatively mature testing framework. 

    React Native

    React Native uses the same UI components and APIs as a web application written in React.js. This means you can use the same frameworks and libraries for both platforms. Testing a React Native application can be more complex than a traditional web-based application because it relies heavily on the device itself. If you’re using a hybrid JavaScript approach, you can use tools like WebdriverIO or Appium to run the same tests across different browsers. Still, if you’re going native, you need to make sure you choose a tool with solid native support.

    Flutter

    Flutter has developed a testing framework that helps ensure your application is high quality. It is based on the premise of these three pillars: unit tests, widget tests, and integration tests. As you build out your Flutter applications, you can combine all three types of tests to ensure that your application works perfectly.

    Programming language

    One of the most important benefits of using Flutter and React Native to develop your mobile app is using a single programming language. This reduces the time required to hire developers and allows you to complete projects faster.

    React Native

    React Native bridges the gap between the native and JavaScript environments. It allows developers to build mobile apps that run across platforms using JavaScript. This makes mobile app development faster, as it only requires one language, JavaScript, to create a cross-platform mobile app. It gives web developers a significant advantage over native application developers: they already know JavaScript and can build a mobile app prototype in a couple of days, with no need to learn Java or Swift. They can even use the same JavaScript libraries they use at work, like Redux and Immutable.js.

    Flutter

    Flutter provides tools to create native mobile apps for both Android and iOS. Furthermore, it allows you to reuse code between the platforms because it supports code sharing using libraries written in Dart.

    You can also choose between two different ways of creating layouts for Flutter apps. The first one is similar to CSS, while the second one is more like HTML. Both are very powerful and simple to use. By default, you should use widgets built by the Flutter team, but if needed, you can also create your own custom widgets or modify existing ones.

    Tooling and DX

    While using either Flutter or React Native for mobile app development, it is likely that your development team will also be responsible for the CI/CD pipeline used to release new versions of your app.

    CI/CD support for Flutter and React Native is very similar at the moment. Both frameworks have good support for continuous integration (CI), continuous delivery (CD), and continuous deployment (CD). Both offer a first-class experience for building, testing, and deploying apps.

    React Native

    The React Native framework has existed for some time now and is pretty mature. However, it still lacks documentation around continuous integration (CI) and continuous delivery (CD) solutions. Considering the maturity of the framework, we might expect to see more investment here. 

    Expo, meanwhile, is a development environment and build tool for React Native. It lets you develop and run React Native apps on your computer just as you would any other web app.

    Expo turns a React Native app into a single JavaScript bundle, which is then published to one of the app stores using Expo’s tools. It provides all the necessary tooling—like bundling, building, and hot reloading—and manages the technical details of publishing to each app store. Expo provides the tooling and environment so that you can develop and test your app in a familiar way, while it also takes care of deploying to production.

    Flutter

    Flutter's open-source core is complete, so the next step is to develop a rich ecosystem around it. The good news is that Flutter's command-line tooling works alongside fully featured IDEs such as Xcode, Android Studio, and IntelliJ IDEA, which means Flutter can easily integrate with continuous integration/continuous deployment tools. Some CI/CD tools for Flutter include Bitrise and Codemagic. These tools offer free tiers, with paid plans for more features.

    Here is an example of a to-do list app built with React Native and Flutter.

    Flutter: https://github.com/velotiotech/simple_todo_flutter

    React Native: https://github.com/velotiotech/react-native-todo-example

    Conclusion

    As you can see, both Flutter and React Native are excellent cross-platform app development tools that will be able to offer you high-quality apps for iOS and Android. The choice between React Native vs Flutter will depend on the complexity of the app that you are looking to create, your team size, and your needs for the app. Still, all in all, both of these frameworks are great options to consider to develop cross-platform native mobile applications.

  • Kickstart your Critical Listening Skills – Learn to Analyze Hi-Res/High Quality Audio with a Spectrogram

    Audio is an inherently complex signal.

    Anything and everything we hear can be described by an audio signal.

    Every sound has its own characteristics, and our ears can isolate and identify them.

    High-resolution music has become quite the buzzword these days, but can we identify it simply by listening to it? The audio quality may also vary depending on factors like encoder settings, type of compression, speaker quality, etc. We will interpret this via an audio spectrogram.

    What you will learn:

    • Why is the frequency domain so important for audio? 
    • How to estimate the audio quality of a track?
    • How to relate what you hear to a spectrogram?
    • What does a high-quality musical instrument look like in a spectrogram?

    What is an audio spectrogram?

    An audio spectrogram is a useful tool for analyzing digital audio, allowing you to visualize and understand how the audio signal evolves over time.

    Features like frequency distribution, audio bandwidth, tone quality, etc. can be determined. 

    High-quality audio at a glance:

    • Files are large, as more detail is stored.
    • The spectrum is spread over a wide range of frequencies.
      e.g., high-resolution audio is sampled at rates up to 192 kHz. 

    What is time domain?  

    Observing an audio signal in the time domain gives us an idea of the overall volume of the track and how it varies with time. The loud, soft, and silent components of a track can be easily identified. This, however, does not tell us much about the quality of the instrument being played.

    To identify sound quality, we need to look at it from the frequency domain.
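
    To make this concrete, the "overall volume over time" that the time-domain view shows is essentially a per-frame RMS (root-mean-square) envelope. A minimal JavaScript sketch (the tone, sample counts, and frame size are arbitrary illustrative values):

```javascript
// RMS (root-mean-square) envelope: one loudness value per frame of samples.
function rmsEnvelope(samples, frameSize) {
  const env = [];
  for (let i = 0; i + frameSize <= samples.length; i += frameSize) {
    let sum = 0;
    for (let n = i; n < i + frameSize; n++) sum += samples[n] * samples[n];
    env.push(Math.sqrt(sum / frameSize));
  }
  return env;
}

// A tone that fades out linearly: the envelope falls frame by frame,
// but it tells us nothing about which frequencies are present.
const tone = Array.from({ length: 800 }, (_, n) =>
  (1 - n / 800) * Math.sin((2 * Math.PI * n) / 20));
const env = rmsEnvelope(tone, 200);
console.log(env.map((v) => v.toFixed(2))); // four values, steadily decreasing
```

    The envelope captures the fade-out, but, as noted above, it says nothing about which frequencies, and hence which instrument qualities, are present.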

    Guitar Melody

    Violin Chord


    What is frequency domain?

    Frequencies are the fundamental component of any sound.

    High-quality audio contains a wide range of frequencies.

    In any time interval, the resultant sound is due to the constructive and destructive interference of multiple frequencies. 

    Another metric of audio quality is timbre.

    Timbre can be called audio flavor or tone. It allows a listener to distinguish between the musical tone of a violin or trumpet even if the tone is played at the same pitch with the same loudness.

    The timbre of an instrument is determined by the overtones it emphasizes.

    An overtone is any harmonic with a frequency greater than the fundamental.

    The composition of a musical instrument’s sound in terms of its partials can be visualized by a spectrogram. Now, let’s try to understand what a spectrogram really is.

    Interpreting a spectrogram:

    A spectrogram is a heatmap type of visualization of all the frequencies in an audio track.

    The higher the energy of a frequency, i.e., the louder it gets, the brighter it looks in the heatmap. 

    Looking at the patterns of the distribution of frequency energy, we gain valuable insights regarding an audio signal which cannot be identified in the time domain.
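
    Each column of a spectrogram is essentially the magnitude spectrum of one short frame of audio. The sketch below computes such a spectrum with a naive DFT in JavaScript (real spectrograms use an FFT plus windowing and overlapping frames; the sample rate and frame size are illustrative). For a pure 2 kHz sine, like the one shown below, all the energy lands in a single frequency bin:

```javascript
// Naive DFT magnitude spectrum of a single audio frame (illustration only;
// production code would use an FFT with a window function and overlap).
function magnitudeSpectrum(frame) {
  const N = frame.length;
  const mags = [];
  for (let k = 0; k < N / 2; k++) {
    let re = 0;
    let im = 0;
    for (let n = 0; n < N; n++) {
      const angle = (-2 * Math.PI * k * n) / N;
      re += frame[n] * Math.cos(angle);
      im += frame[n] * Math.sin(angle);
    }
    mags.push(Math.sqrt(re * re + im * im));
  }
  return mags;
}

// A 2 kHz sine sampled at 16 kHz; its energy falls in bin k = f * N / fs = 8.
const fs = 16000, f = 2000, N = 64;
const frame = Array.from({ length: N }, (_, n) =>
  Math.sin((2 * Math.PI * f * n) / fs));
const mags = magnitudeSpectrum(frame);
const peak = mags.indexOf(Math.max(...mags));
console.log((peak * fs) / N); // 2000 (Hz)
```

    Stacking such spectra for consecutive frames, with magnitude mapped to color, yields exactly the heatmaps shown in this section.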

    2 kHz Sine Wave

    3D rendering of a 2 kHz sine wave:

    We see a constant energy level in the 2 kHz band. 

    What do the frequencies tell us?

    Frequency patterns relate to the different components of a song. You can identify instruments, vocals, and tunes from lead instruments. The fundamental frequencies can be easily identified as they are the brightest in color in a given interval. The overtones are stacked above with decreasing volume. 

    Here, we see a single note played on an organ. The fundamental frequency is the brightest while the overtones are stacked above with decreasing volume.

    Electric Organ Single Note

    3D rendering of a single note played on an organ:

    Sound dynamics and control in a spectrogram:

    A trained musician can control the loudness and sustain of an instrument, which is a marked element of performance. The bright colors in the spectrum are the loudest frequencies. This will relate to a lead tune that will dominate the sound. 

    This is a spectrogram of an arpeggio played on a piano.
    Piano Slow Melody

    How to judge the audio quality of a track?

    A wide range of frequencies indicates better audio fidelity.

    The sound’s complexity depends on the interference of the overtones. 

    More overtones symbolize a much richer sound created by musical instruments.

    The volume modulation indicates the control and feel a musician is attempting to generate.

    Example: spectrogram of a guitar

    Guitar Melody

    The guitar spectrogram contains frequencies up to 16 kHz.

    We can see the notes being plucked in bright colors, with their fundamental frequencies in the range of 512 Hz.

    The overtones are stacked in the higher frequency bands with decreasing volume.

    A brief pause every couple of notes is also apparent.

    We can see the sharpness and sustain of each note being played.

    If we look at the fundamental frequency, we can guess the scale.

    Example: spectrogram of a violin

    Violin Melody

    This is a spectrogram of notes played on a violin.

    It is softer in sound, smooth, and connected; in musical terms, it’s a clear example of legato.

    Softer overtones are visible, which add to the complexity of the sound.

    All notes have approximately the same volume and sustain.

    Conclusion:

    A spectrogram is a great way to try and understand the sounds you hear in the world around you. With one, you will be able to analyze the characteristics of any sound source. Your entire music listening experience will become more intricate and fulfilling. 

    Do apply the above principles and remember to have fun while doing so. Stay tuned for similar content.

  • Optimize React App Performance By Code Splitting

    Prerequisites

    This blog post is written assuming you have an understanding of the basics of React and routing inside React SPAs with react-router. We will also be using Chrome DevTools for measuring the actual performance benefits achieved in the example. We will be using Webpack, a well-known bundler for JavaScript projects.

    What is Code Splitting?

    Code splitting is simply dividing huge code bundles into smaller code chunks that can be loaded ad hoc. Usually, when the SPAs grow in terms of components and plugins, the need to split the code into smaller-sized chunks arises. Bundlers like Webpack and Rollup provide support for code splitting.
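
    The mechanism that makes this possible is the dynamic `import()` expression: bundlers like Webpack treat its argument as a split point and emit a separate chunk that is downloaded only when the call actually runs. A minimal sketch (the `./charts` module and `renderChart` function are hypothetical names):

```javascript
// Static import: './charts' would be bundled into the main chunk.
// import { renderChart } from './charts';

// Dynamic import: the bundler emits './charts' as its own chunk, and the
// browser downloads it only when showStatistics() is first called.
async function showStatistics() {
  const { renderChart } = await import('./charts'); // hypothetical module
  renderChart(document.getElementById('stats'));
}
```

    React.lazy, used later in this post, is a thin React-specific wrapper around this same mechanism.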

    Several different code splitting strategies can be implemented depending on the application structure. We will be taking a look at an example in which we implement code splitting inside an admin dashboard for better performance.

    Let’s Get Started

    We will start with a project configured with Webpack as a bundler and a considerable bundle size. This simple GitHub repository dashboard has four routes showing various details about the same repository. The dashboard uses packages such as react-table, TinyMCE, and recharts to show details in the app.

    Before optimizing the bundle

    Just to get an idea of performance changes, let us note the metrics from the prior bundle of the app. Let’s check loading time in the network tab with the following setup: 

    • Browser incognito tab
    • Cache disabled
    • Throttling enabled to Fast 3G

    Development Build

    As you can see, the development bundle without any optimization has a network transfer size of around 1.3 MB, which takes around 7.85 seconds to load for the first time on a fast 3G connection.

    However, we know that we will probably never want to serve this unoptimized development bundle in production. So, let’s figure out metrics for the production bundle with the same setup.

    Production Build

    The project is already configured to generate a Webpack production build. The production bundle is much smaller, with a network transfer size of 534 kB compared to the development bundle, and it takes around 3.54 seconds to load on a fast 3G connection. This is still a problem, as best practice suggests keeping page load times below 3 seconds. Let's check what happens with a slow 3G connection.

    The production bundle took 12.70 seconds to load for the first time on a slow 3G connection. Now, this can annoy users.

    If we look at the lighthouse report, we see a warning indicating that we’re loading more code than needed:

    As per the warning, we're loading some unused code during the first render, which we can get rid of and load later instead. The Lighthouse report indicates that we can save up to 404 KiB when loading the page for the first time. 

    There's one more suggestion: splitting the bundle using React.lazy(). Lighthouse also gives us various other metrics that can be measured to improve the application; however, we will focus on bundle size in this case.

    The extra unused code inside the bundle is not only bad in terms of download size, but it also impacts the user experience. Let’s use the performance tab for figuring out how this is affecting the user experience. Navigate to the performance tab and profile the page. It shows that it takes around 10 seconds for the user to see actual content on the page reload:

    Webpack Bundle Analyzer Report

    We can visualize the bundles with the webpack bundle analyzer tool, which gives us a way to track and measure the bundle size changes over time. Please follow the installation instructions given here.
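
    Once installed, wiring the analyzer into the Webpack config is typically a one-plugin change. A sketch of the relevant `webpack.config.js` fragment (the options shown are common ones; check the plugin's README for your version):

```javascript
// webpack.config.js (sketch)
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...your existing entry, output, and loader configuration...
  plugins: [
    new BundleAnalyzerPlugin({
      analyzerMode: 'static', // write a standalone report.html instead of starting a server
      openAnalyzer: false,    // don't open the report in a browser automatically
    }),
  ],
};
```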

    So, this is what our production build bundle report looks like:

    As we can see, our current production build has a giant chunk, main.201d82c8.js, which can be divided into smaller chunks.

    The bundle analyzer report not only gives us information about the chunk sizes but also about the modules each chunk contains and their sizes. This gives us an opportunity to find such modules and split them out for better performance. Here, for example, is a module that adds considerable size to our main bundle:

    Using React.lazy() for Code Splitting

    React.lazy allows us to use dynamically imported components, meaning we can load these components only when they're needed and reduce the initial bundle size. As our dashboard app has four top-level routes wrapped inside react-router's Switch, we know they will never all be needed at once. 

    So we can split these top-level components into four different bundle chunks and load them ad hoc. To do that, we need to convert our imports from:

    import Commits from './Commits';
    import Collaborators from './Collaborators';
    import PullRequests from './PullRequests';
    import Statistics from './Statistics';


    To:

    const Commits = React.lazy(() => import('./Commits'));
    const Collaborators = React.lazy(() => import('./Collaborators'));
    const PullRequests = React.lazy(() => import('./PullRequests'));
    const Statistics = React.lazy(() => import('./Statistics'));

    This also requires us to implement a Suspense wrapper around our routes, which does the work of showing fallback visuals till the dynamically loading component is visible.

    <Suspense fallback={<div>Loading...</div>}>
      <Switch>
        <Route path="/" exact component={Commits} />
        <Route path="/collaborators" exact component={Collaborators} />
        <Route path="/prs" exact component={PullRequests} />
        <Route path="/stats" exact component={Statistics} />
      </Switch>
    </Suspense>

    After this change, Webpack recognizes the dynamic imports and splits the main chunk into smaller chunks. In the production build, we can see the following bundles being downloaded. We have reduced the load time for the main bundle chunk from 12 seconds to 3.10 seconds, which is quite good. This is an improvement because we’re no longer loading unnecessary JS on the first load.

    As we can see in the waterfall view of the Network tab, the other required chunks are loaded in parallel as soon as the main chunk is loaded.

    If we look at the Lighthouse report, the warning about removing unused JS is gone, and we can see the check passing.

    This is good for the landing page, but what about the other routes when we visit them? The following shows that we now load more small chunks when we render a lazily loaded component on a menu item click.

    With the current setup, we should be able to see improved performance inside our applications. We can always go ahead and tweak Webpack chunks when needed.

    To measure how this change affects user experience, we can again generate the performance report with Chrome DevTools. We can quickly notice that the idle frame time has dropped to around 1 second—far better than the previous setup. 

    If we read through the timeline, we can see that the user sees a blank frame for up to 1 second and the sidebar in the next second. Once the main bundle is loaded, the lazy-loaded commits chunk starts loading, and until it arrives, we see our fallback loading component.

    Also, when we navigate to the other routes, we can see the chunks loaded lazily when they’re needed.

    Let’s have a look at the bundle analyzer report generated after the changes. We can easily see that the chunks are divided into smaller chunks. Also, we can notice that the chunks contain only the code they need. For example, the 51.573370a6.js chunk is actually the commits route containing the react-table code. It’s similar for the charts module in the other chunk.

    Conclusion

    Depending on the project structure, we can easily set up code splitting inside React applications, which leads to better-performing applications and has a positive impact on users.

    You can find the referenced code in this repo.

  • Scalable Real-time Communication With Pusher

    What and why?

    Pusher is a hosted API service which makes adding real-time data and functionality to web and mobile applications seamless. 

    Pusher works as a real-time communication layer between the server and the client. It maintains persistent connections to clients using WebSockets, so as soon as new data is added on your server, the server can push it to clients instantly. It is highly flexible, scalable, and easy to integrate. Pusher exposes over 40 SDKs that cover almost all tech stacks.

    In the context of delivering real-time data, there are other hosted and self-hosted services available. Which one fits depends on exactly what you need, for example whether you need to broadcast data to all users or do something more complex with specific target groups. For our use case, Pusher was well suited; the decision was based on ease of use, scalability, private and public channels, webhooks, and event-based automation. Other options we considered were Socket.IO, Firebase, and Ably.

    Pusher is categorically well suited for communication and collaboration features built on WebSockets. The key difference with Pusher is that it’s a hosted service/API: it takes less work to get started compared to alternatives where you manage the deployment yourself, and once the setup is done, scaling is handled for you, which reduces future effort.

    Some of the most common use cases of Pusher are:

    1. Notifications: Pusher can inform users when there is a relevant change. Notifications can also be thought of as a form of signaling, where there is no representation of the notification in the UI, but it still triggers a reaction within the application.

    2. Activity streams: streams of activities that are published to channels when something changes on the server or someone publishes an update.

    3. Live Data Visualizations: Pusher allows you to broadcast continuously changing data when needed.

    4. Chats: You can use Pusher for peer to peer or peer to multichannel communication.

    In this blog, we will be focusing on using Channels, which is an alias for Pub/Sub messaging API for a JavaScript-based application. Pusher also comes with Chatkit and Beams (Push Notification) SDK/APIs.

    • Chatkit is designed to make chat integration to your app as simple as possible. It allows you to add group chat and 1 to 1 chat feature to your app. It also allows you to add file attachments and online indicators.
    • Beams are used for adding Push Notification in your Mobile App. It includes SDKs to seamlessly manage push token and send notifications.

    Step 1: Getting Started

    Setup your account on the Pusher dashboard and get your free API keys.

    Image Source: Pusher

    1. Click on Channels
    2. Create an App. Add details based on the project and the environment
    3. Click on the App Keys tab to get the app keys.
    4. You can also check the getting started page. It will give code snippets to get you started.

    Add Pusher to your project:

    <!-- Include the Pusher client library from the CDN (adjust the version as needed) -->
    <script src="https://js.pusher.com/8.2.0/pusher.min.js"></script>

    CODE: https://gist.github.com/velotiotech/f09f14363bacd51446d5318e5050d628.js

    or using npm

    npm i pusher

    CODE: https://gist.github.com/velotiotech/423115d0943c1b882c913e437c529d11.js

    Step 2: Subscribing to Channels

    There are three types of channels in Pusher: Public, Private, and Presence.

    • Public channels: These channels are public in nature, so anyone who knows the channel name can subscribe to the channel and start receiving messages from the channel. Public channels are commonly used to broadcast general/public information, which does not contain any secure information or user-specific data.
    • Private channels: These channels have an access control mechanism that allows the server to control who can subscribe to the channel and receive data from it. All private channel names must be prefixed with private-. They are commonly used when the server needs to know who can subscribe to the channel and validate subscribers.
    • Presence channels: An extension of private channels. In addition to the properties private channels have, they let the server ‘register’ user information on subscription to the channel. They also enable other members to identify who is online.

    In your application, you can create a subscription and start listening to events:

    // Here, my-channel is the channel name.
    // All events published to this channel will be available
    // once you subscribe to the channel and start listening to it.
    
    var channel = pusher.subscribe('my-channel');
    
    channel.bind('my-event', function(data) {
      alert('An event was triggered with message: ' + data.message);
    });

    CODE: https://gist.github.com/velotiotech/d8c27960e2fac408a8db57b92f1e846d.js

    Step 3: Creating Channels

    For creating channels, you can use the dashboard or integrate Pusher with your server. For more details on how to integrate Pusher with your server, you can read the Server API docs. You need to create an app on your Pusher dashboard and can use it to further trigger events to your app.

    or 

    Integrate Pusher with your server. Here is a sample snippet from our node App:

    var Pusher = require('pusher');
    
    var pusher = new Pusher({
      appId: 'APP_ID',
      key: 'APP_KEY',
      secret: 'APP_SECRET',
      cluster: 'APP_CLUSTER'
    });
    
    // Logic which will then trigger events to a channel
    function trigger(){
    ...
    ...
    pusher.trigger('my-channel', 'my-event', {"message": "hello world"});
    ...
    ...
    }

    CODE: https://gist.github.com/velotiotech/6f5b0f6407c0a74a0bce4b398a849410.js

    Step 4: Adding Security

    By default, anyone who knows your public app key can open a connection to your Channels app. This does not in itself add a security risk, as connections can only access data on channels they subscribe to.

    For more advanced use cases, you need the “authorized connections” feature. It authorizes every single connection to your Channels app and hence avoids unwanted/unauthorized connections. To enable authorization, set up an auth endpoint, then modify your client code to look like this:

    const channels = new Pusher(APP_KEY, {
      cluster: APP_CLUSTER,
      authEndpoint: '/your_auth_endpoint'
    });
    
    const channel = channels.subscribe('private-<channel-name>');

    CODE: https://gist.github.com/velotiotech/9369051e5661a95352f08b1fdd8bf9ed.js

    For more details on how to create an auth endpoint on your server, read this. Here is a snippet from a Node.js app:

    var express = require('express');
    var bodyParser = require('body-parser');
    var Pusher = require('pusher');
    
    // Instantiate Pusher so the auth endpoint can sign subscription requests
    var pusher = new Pusher({
      appId: 'APP_ID',
      key: 'APP_KEY',
      secret: 'APP_SECRET',
      cluster: 'APP_CLUSTER'
    });
    
    var app = express();
    app.use(bodyParser.json());
    app.use(bodyParser.urlencoded({ extended: false }));
    
    app.post('/pusher/auth', function(req, res) {
      var socketId = req.body.socket_id;
      var channel = req.body.channel_name;
      var auth = pusher.authenticate(socketId, channel);
      res.send(auth);
    });
    
    var port = process.env.PORT || 5000;
    app.listen(port);

    CODE: https://gist.github.com/velotiotech/fb67d5efe3029174abc6991089a910e1.js

    Step 5: Scale as you grow


    Pusher comes with a wide range of plans that you can subscribe to based on your usage, so you can scale your application as it grows. Here is a snapshot of the available plans; for more details, you can refer to this.

    Image Source: Pusher

    Conclusion

    This article has covered a brief description of Pusher, its use cases, and how you can use it to build a scalable real-time application. How Pusher is used will vary across use cases, and there is no single right choice of tool. Pusher’s approach is simple and API-based, and it enables developers to add real-time functionality to any application in very little time.

    If you want to get hands-on tutorials/blogs, please visit here.

  • The Ultimate Cheat Sheet on Splitting Dynamic Redux Reducers

    This post addresses the need for code splitting in React/Redux projects. While exploring ways to optimize an application, a common problem occurs with reducers. This article focuses specifically on how to split reducers so that they can be delivered in chunks.

    What are the benefits of splitting reducers in chunks?

    1) True code splitting is possible

    2) A good architecture can be maintained by keeping page/component-level reducers isolated from the rest of the application, minimizing cross-dependencies.

    Why Do We Need to Split Reducers?

    1. For fast page loads

    Splitting reducers has the advantage of loading only the required part of the web application, which in turn makes rendering the main pages much more efficient.

    2. Organization of code

    Splitting reducers at the page or component level gives better code organization than putting all reducers in one place. Since a reducer is loaded only when its page/component is loaded, pages become standalone units that are not dependent on other parts of the application. That makes development seamless, since it avoids cross-references between reducers and the complexity they bring.

    3. One page/component, one reducer

    This is a design pattern: things are better written, read, and understood when they are modular. With dynamic reducers, it becomes possible to achieve this.

    4. SEO

    SEO is a vast topic, but rankings are hit very hard when a website has huge response times, which happens when code is not split. With reducer-level code splitting, reducers can be split at the component level, which reduces the loading time of the website and thereby improves SEO rankings.

    What Exists Today?

    A little googling around the topic shows us some options; various approaches have been discussed here.

    Dan Abramov’s answer is what we follow in this post: we will write a simple abstraction to get dynamic reducers, but with more functionality.

    A lot of solutions already exist, so why do we need to create our own? The answer is simple and straightforward:

       1) Ease of use

    Every library out there has a catch in some way: some have complex APIs, while others require too much boilerplate code. We will aim to stay close to the react-redux API.

       2) The limitation of adding reducers at the top level only

    This is a very common problem with many existing libraries today, and it is what we target to solve in this post. Solving it opens new doors for code splitting at the component level.

    A quick recap of redux facts:

    1) Redux gives us following methods:
    – “getState”,
    – “dispatch(action)”
    – “subscribe(listener)”
    – “replaceReducer(nextReducer)”

    2) reducers are plain functions returning next state of application

    3) “replaceReducer” requires the entire root reducer.
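    As a quick illustration of fact 2 in the recap above, a reducer is just a pure function from (state, action) to the next state. A minimal counter sketch (the action types here are invented for illustration):

```javascript
// A reducer is a plain function: (previousState, action) -> nextState.
const counter = (state = { count: 0 }, action) => {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 };
    case 'DECREMENT':
      return { count: state.count - 1 };
    default:
      // Unknown actions must return the state unchanged.
      return state;
  }
};

console.log(counter(undefined, { type: 'INCREMENT' })); // { count: 1 }
```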

    What we are going to do?

    We will be writing abstraction around “replaceReducer” to develop an API to allow us to inject a reducer at a given key dynamically.

    A simple Redux store definition goes like the following:

    Let’s simplify the store creation wrapper as:

    What it Does?

    “dynamicActionGenerator” and “isValidReducer” are helper functions to determine whether a given reducer is valid or not.

    For example:

    CODE:

    isValidReducer((state = {}, action) => state) // should return true
    isValidReducer(1) // should return false
    isValidReducer(true) // should return false
    isValidReducer("example") // should return false

    This is an essential check to ensure that all inputs to our abstraction layer over createStore are valid reducers.

    “createStore” takes the initial root reducer, the initial state, and the enhancers that will apply to the created store.

    In addition to that, we maintain “asyncReducers” and “attachReducer” on the store object.

    “asyncReducers” keeps the mapping of dynamically added reducers.

    “attachReducer” is partial in the above implementation, and we will see the complete implementation below. Its basic use is to add a reducer from any part of the web application.

    Given that, our store object now looks as follows:

    Store:

    CODE:

    - getState: Func
    - dispatch(action): Func
    - subscribe(listener): Func
    - replaceReducer(RootReducer): Func
    - attachReducer(reducer): Func
    - asyncReducers: JSONObject

    Now here is an interesting problem: replaceReducer requires a final root reducer function, which means we would have to recreate the root reducer every time. So we will create a dynamicRootReducer function to simplify the process.

    So now our store object becomes as follows:
    Store:

    CODE:

    - getState: Func
    - dispatch(action): Func
    - subscribe(listener): Func
    - replaceReducer(RootReducer): Func
    - attachReducer(reducer): Func

    What does dynamicRootReducer do?
    1) It processes the initial root reducer passed to it.
    2) It executes the dynamic reducers to get the next state.

    So we now have an API exposed as:

    store.attachReducer("home", (state = {}, action) => { return state }); // Adds a dynamic reducer after the store has been created

    store.attachReducer("home.grid", (state = {}, action) => { return state }); // Adds a dynamic reducer at a given nested key in the store

    Final Implementation:
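    The embedded final implementation is not reproduced here, so below is a self-contained sketch of the idea. To keep it runnable on its own, it uses a minimal stand-in for redux’s createStore (a real app would import the actual one), the wrapper is named createDynamicStore here to distinguish it from redux’s function, and it supports flat keys only; nested keys such as “home.grid” would additionally need to split the key on “.” and walk the state tree.

```javascript
// Minimal stand-in for redux's createStore, so the sketch runs on its own.
// In a real application you would `import { createStore } from 'redux'`.
function createStore(reducer, preloadedState) {
  let state = preloadedState;
  let currentReducer = reducer;
  const listeners = [];
  const store = {
    getState: () => state,
    dispatch(action) {
      state = currentReducer(state, action);
      listeners.forEach((listener) => listener());
      return action;
    },
    subscribe(listener) {
      listeners.push(listener);
    },
    replaceReducer(nextReducer) {
      currentReducer = nextReducer;
      store.dispatch({ type: '@@dynamic/REPLACE' });
    },
  };
  store.dispatch({ type: '@@dynamic/INIT' });
  return store;
}

// A valid reducer must be a plain function returning the next state.
const isValidReducer = (reducer) => typeof reducer === 'function';

// Builds a root reducer that first runs the static root reducer, then every
// dynamically attached reducer under its own key.
function dynamicRootReducer(staticRootReducer, asyncReducers) {
  return (state = {}, action) => {
    let nextState = staticRootReducer(state, action);
    for (const key of Object.keys(asyncReducers)) {
      nextState = { ...nextState, [key]: asyncReducers[key](nextState[key], action) };
    }
    return nextState;
  };
}

// The store-creation wrapper exposing attachReducer(key, reducer).
function createDynamicStore(staticRootReducer, preloadedState) {
  const store = createStore(staticRootReducer, preloadedState);
  store.asyncReducers = {};
  store.attachReducer = (key, reducer) => {
    if (!isValidReducer(reducer)) return;
    store.asyncReducers[key] = reducer;
    // replaceReducer needs a full root reducer, so we recompute it here.
    store.replaceReducer(dynamicRootReducer(staticRootReducer, store.asyncReducers));
  };
  return store;
}
```

    Calling store.attachReducer('home', homeReducer) recomputes and swaps in the root reducer immediately, so subsequent dispatches flow through the newly attached reducer as well.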

    Working Example:

    Further implementations based on the simplified code:

    Based on this, I have simplified the implementation into two libraries:

    Conclusion

    In this way, we can achieve code splitting with reducers, which addresses a very common problem in almost every react-redux application. With the above solution, you can do code splitting at the page level or component level, and you can also create reusable stateful components that use redux state. The simplified approach will reduce your application boilerplate. Moreover, common complex components like grids, or even whole pages like login, can be exported from one project and imported into another, making development faster than ever!

  • Automating Serverless Framework Deployment using Watchdog

    These days, we see that most software development is moving towards serverless architecture, and that’s no surprise. Almost all top cloud service providers have serverless services that follow a pay-as-you-go model. This way, consumers don’t have to pay for any unused resources. Also, there’s no need to worry about procuring dedicated servers, network/hardware management, operating system security updates, etc.

    Unfortunately for cloud developers, serverless tools don’t provide an auto-deploy service that picks up local changes, and this is still a headache: the developer must deploy and test changes manually. Web app projects using Node or Django, by contrast, run a watcher in the development environment while bundling the app on their respective dev servers. When changes happen in the code directory, the server automatically restarts with the new changes, and the developer can check whether they work as expected.

    In this blog, we will talk about automating serverless application deployment by changing the local codebase. We are using AWS as a cloud provider and primarily focusing on lambda to demonstrate the functionality.

    Prerequisites:

    • This article uses AWS, so command and programming access are necessary.
    • This article is written with deployment to AWS in mind, so AWS credentials are needed to make changes in the Stack. In the case of other cloud providers, we would require that provider’s command-line access.
    • We are using a serverless application framework for deployment. (This example will also work for other tools like Zappa.) So, some serverless context would be required.

    Before development, let’s divide the problem statement into sub-tasks and build them one step at a time.

    Problem Statement

    Create a codebase watcher service that would trigger either a stack update on AWS or run a local test. By doing this, developers would not have to worry about manual deployment on the cloud provider. This service needs to keep an eye on the code and generate events when an update/modify/copy/delete occurs in the given codebase.

    Solution

    First, to watch the codebase, we need logic that acts as a trigger and notifies us when the underlying files change. Packages for this already exist in many programming languages; in this example, we are using Python’s watchdog.

    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler
    
    CODE_PATH = "<codebase path>"
    
    class WatchMyCodebase:
        # Set the directory on watch
        def __init__(self):
            self.observer = Observer()
    
        def run(self):
            event_handler = EventHandler()
            # recursive flag decides if watcher should collect changes in CODE_PATH directory tree.
            self.observer.schedule(event_handler, CODE_PATH, recursive=True)
            self.observer.start()
            self.observer.join()
    
    
    class EventHandler(FileSystemEventHandler):
        """Handle events generated by Watchdog Observer"""
    
        @classmethod
        def on_any_event(cls, event):
            if event.is_directory:
                """Ignore directory level events, like creating new empty directory etc.."""
                return None
    
            elif event.event_type == 'modified':
                print("file under codebase directory is modified...")
    
    if __name__ == '__main__':
        watch = WatchMyCodebase()
        watch.run()

    Here, the on_any_event() class method gets called on any update in the watched directory, and this is where we need to add the deployment logic. However, we can’t deploy as soon as we receive a notification from the watcher, because modern IDEs save files as soon as the user changes them. If we deployed on every change, most of the time we would deploy half-complete services.

    To handle this, we must add some timeout before deploying the service.

    Here, the program will wait for some time after the file is changed. And if it finds that, for some time, there have been no new changes in the codebase, it will deploy the service.

    import time
    import subprocess
    import threading
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler
    
    valid_events = ['created', 'modified', 'deleted', 'moved']
    DEPLOY_AFTER_CHANGE_THRESHOLD = 300
    STAGE_NAME = ""
    CODE_PATH = "<codebase path>"
    
    def deploy_env():
        process = subprocess.Popen(['sls', 'deploy', '--stage', STAGE_NAME, '-v'],
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE)
        stdout, stderr = process.communicate()
        print(stdout, stderr)
    
    def deploy_service_on_change():
        while True:
            if EventHandler.last_update_time and (int(time.time() - EventHandler.last_update_time) > DEPLOY_AFTER_CHANGE_THRESHOLD):
                EventHandler.last_update_time = None
                deploy_env()
            time.sleep(5)
    
    def start_interval_watcher_thread():
        interval_watcher_thread = threading.Thread(target=deploy_service_on_change)
        interval_watcher_thread.start()
    
    
    class WatchMyCodebase:
        # Set the directory on watch
        def __init__(self):
            self.observer = Observer()
    
        def run(self):
            event_handler = EventHandler()
            self.observer.schedule(event_handler, CODE_PATH, recursive=True)
            self.observer.start()
            self.observer.join()
    
    
    class EventHandler(FileSystemEventHandler):
        """Handle events generated by Watchdog Observer"""
        last_update_time = None
    
        @classmethod
        def on_any_event(cls, event):
            if event.is_directory:
                """Ignore directory level events, like creating new empty directory etc.."""
                return None
    
            elif event.event_type in valid_events and '.serverless' not in event.src_path:
                # Ignore events related to changes in .serverless directory, serverless creates few temp file while deploy
                cls.last_update_time = time.time()
    
    
    if __name__ == '__main__':
        start_interval_watcher_thread()
        watch = WatchMyCodebase()
        watch.run()

    The specified valid_events acts as a filter to deploy, and we are only considering these events and acting upon them.

    Moreover, to add a delay after file changes and ensure that there are no new changes, we added interval_watcher_thread. This checks the difference between current and last directory update time, and if it’s greater than the specified threshold, we deploy serverless resources.

    def deploy_service_on_change():
        while True:
            if EventHandler.last_update_time and (int(time.time() - EventHandler.last_update_time) > DEPLOY_AFTER_CHANGE_THRESHOLD):
                EventHandler.last_update_time = None
                deploy_env()
            time.sleep(5)
    
    def start_interval_watcher_thread():
        interval_watcher_thread = threading.Thread(target=deploy_service_on_change)
        interval_watcher_thread.start()

    Here, the sleep time in deploy_service_on_change is important: it prevents the program from consuming CPU cycles constantly checking whether the deploy condition is satisfied. On the other hand, too long a sleep would delay the deployment beyond the specified DEPLOY_AFTER_CHANGE_THRESHOLD.

    Note: With a language like Golang and its goroutines and channels, we can build an even more efficient application, or we can do the same in Python with the help of thread signals.

    Let’s build one lambda function that automatically deploys on a change. Let’s also be a little lazy and develop a basic Python lambda that takes a number as input and returns its factorial.

    import math
    
    def lambda_handler(event, context):
        """
        Handler for get factorial
        """
    
        number = event['number']
        return math.factorial(number)

    We are using a serverless application framework, so to deploy this lambda, we need a serverless.yml file that specifies stack details like the execution environment, cloud provider, environment variables, etc. More parameters are listed in this guide.

    service: get-factorial
    
    provider:
      name: aws
      runtime: python3.7
    
    functions:
      get_factorial:
        handler: handler.lambda_handler

    We need to keep both handler.py and serverless.yml in the same folder, or we need to update the function handler in serverless.yml.

    We can deploy it manually using this serverless command: 

    sls deploy --stage production -v

    Note: Before deploying, export AWS credentials.

    The above command deploys a stack using CloudFormation:

    • --stage specifies the environment where the stack should be deployed. Like any other software project, it can have stage names such as production, dev, test, etc.
    • -v enables verbose output.

    To auto-deploy changes from now on, we can use the watcher.

    Start the watcher with this command: 

    python3  auto_deploy_sls.py

    This will run continuously and keep an eye on the codebase directory; if any changes are detected, it will deploy them. We can customize this to some extent, for example with a post-deploy step that runs test cases against the new stack.

    If you are worried about network traffic when the stack has lots of dependencies, note that using an actual cloud provider for testing might also increase billing. However, we can easily address this by using serverless local development.

    Here is a Serverless blog post on local development and testing of a CloudFormation stack. It emulates cloud behavior on the local setup, so there’s no need to worry about cloud service billing.

    One useful upgrade is support for a complex directory structure.

    In the above example, we are assuming that only one single directory is present, so it’s fine to deploy using the command: 

    sls deploy --stage production -v

    But in some projects, multiple stacks may be present in the codebase at different hierarchies. Consider the example below: we have three different lambdas, so an update in the `check-prime` directory requires updating only that lambda and not the others.

    ├── check-prime
    │   ├── handler.py
    │   └── serverless.yml
    ├── get-factorial
    │   ├── handler.py
    │   └── serverless.yml
    └── get-factors
        ├── handler.py
        └── serverless.yml

    The above can be achieved in on_any_event(): the event.src_path attribute tells us which file path received the event, and from it we can derive the directory to deploy.

    Now, deployment command changes to: 

    cd <updated_directory> && sls deploy --stage <your-stage> -v

    This will deploy only an updated stack.

    Conclusion

    We learned that even though serverless deployment is a manual task, it can be automated with the help of watchdog for a better developer workflow.

    With the help of serverless local development, we can test changes as we make them, without manually deploying to the cloud environment each time.

    We hope this helps you improve your serverless development experience and close the loop faster.

    Related Articles

    1. To Go Serverless Or Not Is The Question

    2. Building Your First AWS Serverless Application? Here’s Everything You Need to Know