Tag: node js

  • Beginner’s Guide for Writing Unit Test Cases with Jest Framework

    Prerequisite

    Basic JavaScript, TypeScript

    Objective

    To make the reader understand the use/effect of test cases in software development.

    What’s in it for you?

    In the world of coding, we’re often in a rush to complete work before a deadline hits. And let’s be honest, writing test cases isn’t usually at the top of our priority list. We get it—they seem tedious, so we’d rather skip this extra step. But here’s the thing: those seemingly boring lines of code have superhero potential. Don’t believe me? You will.

    In this blog, we’re going to break down the mystery around test cases. No jargon, just simple talk. We’ll chat about what they are, explore a handy tool called Jest, and uncover why these little lines are actually the unsung heroes of coding. So, let’s ditch the complications and discover why giving some attention to test cases can level up our coding game. Ready? Let’s dive in!

    What are test cases?

    A test case is a detailed document specifying conditions under which a developer assesses whether a software application aligns with customer requirements. It includes preconditions, the case name, input conditions, and expected results. Derived from test scenarios, test cases cover both positive and negative inputs, providing a roadmap for test execution. This one-time effort aids future regression testing.

    Test cases offer insights into testing strategy, process, preconditions, and expected outputs. Executed during testing, they ensure the software performs its intended tasks. Linking defects to test case IDs facilitates efficient defect reporting. The comprehensive documentation acts as a safeguard, catching any oversights during test case execution and reinforcing the development team’s efforts.

    Different types of test cases exist, including integration, functional, non-functional, and unit.
    For this blog, we will talk about unit test cases.

    What are unit test cases?

    Unit testing is the process of testing the smallest functional unit of code. A functional unit could be a class member or simply a function that does something to your input and provides an output. Test cases around those functional units are called unit test cases.

    Purpose of unit test cases

    • To validate that each unit of the software works as intended and meets the requirements:
      For example, if your requirement is that the function returns an object with specific properties, a unit test will detect whether the code is written accordingly.
    • To check the robustness of code:
      Unit tests are automated and run each time the code is changed to ensure that new code does not break existing functionality.
    • To catch errors and bugs beforehand:
      If a case fails or doesn’t fulfill the requirement, it helps the developer isolate the affected area and recheck it for bugs before testing on demo/UAT/staging.

    Different frameworks for writing unit test cases

    There are various frameworks for unit test cases, including:

    • Mocha
    • Storybook
    • Cypress
    • Jasmine
    • Puppeteer
    • Jest
    Source: https://raygun.com/blog/javascript-unit-testing-frameworks/

    Why Jest?

    Jest is used and recommended by Facebook and officially supported by the React dev team.

    It has a great community and active support, so if you run into a problem and can’t find a solution in the comprehensive documentation, there are thousands of developers out there who could help you figure it out within hours.

    1. Performance: Ideal for larger projects with continuous deployment needs, Jest delivers enhanced performance.

    2. Compatibility: While Jest is widely used for testing React applications, it seamlessly integrates with other frameworks like Angular, Node, Vue, and Babel-based projects.

    3. Auto Mocking: Jest automatically mocks imported libraries in test files, reducing boilerplate and facilitating smoother testing workflows.

    4. Extended API: Jest comes with a comprehensive API, eliminating the necessity for additional libraries in most cases.

    5. Timer Mocks: Featuring a Time mocking system, Jest accelerates timeout processes, saving valuable testing time.

    6. Active Development & Community: Jest undergoes continuous improvement, boasting the most active community support for rapid issue resolution and updates.

    Components of a test case in Jest

    Describe

    • As the name indicates, a describe block describes the module we are going to test.
    • It should only describe the module, not the individual tests; the describe block itself is not executed as a test by Jest.

    It

    • This is where the actual code is executed and its output is verified against real or fake (spy, mock) values.
      We can nest multiple it blocks under a describe block.
    • It’s good practice to state what the test should or shouldn’t do in the it block’s description.

    Matchers

    • Matchers compare the actual output with an expected (real or fake) value.
    • A test case without a matcher will always pass, making it a trivial test case.
    // For each unit test you write,
    // answer these questions:

    describe('What component aspect are you testing?', () => {
      it('What should the feature do?', () => {
        const actual = 'What is the actual output?'
        const expected = 'What is the expected output?'

        expect(actual).toEqual(expected) // matcher
      })
    })

    Mocks and spies in Jest

    Mocks: They are objects or functions that simulate the behavior of real components. They are used to create controlled environments for testing by replacing actual components with simulated ones. Mocks are employed to isolate the code being tested, ensuring that the test focuses solely on the unit or component under examination without interference from external dependencies.

    jest.mock is mainly used for mocking a library or function that is used frequently throughout the file under test.

    Let code.ts be the file you want to test.

    import { v4 as uuidv4 } from 'uuid'

    export const functionToTest = () => {
      const id = uuidv4()
      // rest of the code
      return id;
    }

    As this is a unit test, we won’t be testing the uuidv4 function itself, so we will mock the whole uuid module using jest.mock.

    jest.mock('uuid', () => ({ uuidv4: () => 'random id value' })) // mock the uuid module, exposing uuidv4 as a function

    describe('testing code.ts', () => {
      it('i have mocked uuid module', () => {
        const res = functionToTest()
        expect(res).toEqual('random id value')
      })
    })

    And that’s it. You have mocked the entire uuid module: whenever it is imported during a test, it exposes the mocked uuidv4 function, which returns 'random id value' when called.

    Spies: They are functions or objects that “spy” on other functions by tracking calls made to them. They allow you to observe and verify the behavior of functions during testing. Spies are useful for checking if certain functions are called, how many times they are called, and with what arguments. They help ensure that functions are interacting as expected.

    This is by far the most used method, as it works on object values and can thus be used to spy on class methods efficiently.

    class DataService {
      fetchData() {
        // code to fetch data
        return 'real data'
      }
    }

    describe('DataService Class', () => {
      it('should spy on the fetchData method with mockImplementation', () => {
        const dataServiceInstance = new DataService();
        const fetchDataSpy = jest.spyOn(DataService.prototype, 'fetchData'); // spying on the prototype covers the method for every instance
        fetchDataSpy.mockImplementation(() => 'Mocked Data'); // returns mocked data whenever the method is called

        const result = dataServiceInstance.fetchData(); // 'Mocked Data'
        expect(fetchDataSpy).toHaveBeenCalledTimes(1)
        expect(result).toBe('Mocked Data');
      })
    })
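To see what a spy actually does, here is a hand-rolled sketch of the mechanism behind jest.spyOn: wrap a method, record its calls, optionally swap the implementation, and restore the original afterwards. (This is illustrative only, not Jest's implementation; the `spyOn` helper below is our own.)

```javascript
// The class under test, as in the example above.
class DataService {
  fetchData() { return 'real data'; }
}

// A toy spyOn: wraps obj[method], records calls, supports a fake
// implementation, and can put the original back.
function spyOn(obj, method) {
  const original = obj[method];
  const spy = {
    calls: [],
    impl: null,
    mockImplementation(fn) { spy.impl = fn; return spy; },
    mockRestore() { obj[method] = original; },
  };
  obj[method] = function (...args) {
    spy.calls.push(args);
    return spy.impl ? spy.impl(...args) : original.apply(this, args);
  };
  return spy;
}

const svc = new DataService();
const spy = spyOn(DataService.prototype, 'fetchData').mockImplementation(() => 'Mocked Data');

console.log(svc.fetchData());  // 'Mocked Data'
console.log(spy.calls.length); // 1

spy.mockRestore();
console.log(svc.fetchData());  // 'real data' again
```

This is also why calling mockRestore() at the end of a real Jest test is good hygiene: it puts the original method back so later tests see the real implementation.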

    Mocking database calls

    One of the best uses of Jest is mocking a database call, i.e., mocking the create, read, update, and delete calls for a database table.

    We can achieve this with the help of Jest spies alone.

    Let us suppose we have a database called db with lots of tables in it, one of which is the Student table, and we want to mock the create call for a Student.

    async function AddStudent(student: Student) {
      await db.Student.create(student) // the call we want to mock
    }

    Now, since the Jest spy method only works on objects, we will first turn db.Student into an object with create as a method inside it, implemented as jest.fn() (a helper that creates a mock function in one line, without ever calling the real function).

    describe('mocking database call', () => {
      it('mocking create function', async () => {
        db.Student = {
          create: jest.fn()
        }

        const tempStudent = {
          name: 'john',
          age: '12',
          Rollno: 12
        }

        const mock = jest.spyOn(db.Student, 'create')
          .mockResolvedValue('Student has been created successfully')

        await AddStudent(tempStudent)
        expect(mock).toHaveBeenCalledWith(tempStudent);
      })
    })
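If jest.fn() feels magical, here is a plain-node sketch of what it provides: a stand-in function that records every call and can be told what to resolve with. (The `fn` helper below is hand-rolled for illustration; Jest's real mock functions do much more, and `db`/`AddStudent` here mirror the names from the example above.)

```javascript
// A toy version of jest.fn(): records calls, can resolve a canned value.
function fn() {
  const mock = (...args) => {
    mock.calls.push(args);
    return mock.result;
  };
  mock.calls = [];
  mock.result = undefined;
  mock.mockResolvedValue = (value) => {
    mock.result = Promise.resolve(value);
    return mock;
  };
  return mock;
}

// Replace the real table object with a mock, as in the test above.
const db = {
  Student: { create: fn().mockResolvedValue('Student has been created successfully') },
};

async function AddStudent(student) {
  return db.Student.create(student); // the "database call" now resolves instantly
}

AddStudent({ name: 'john' }).then((msg) => {
  console.log(msg);                      // 'Student has been created successfully'
  console.log(db.Student.create.calls); // [ [ { name: 'john' } ] ]
});
```

The test never touches a real database; it only verifies that the unit under test calls create with the right arguments.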

    Testing private methods

    Sometimes, in development, we write private methods in classes that can only be used within the class itself. But when writing test cases, we call functions through a class instance, and private functions are not accessible that way, so we cannot test them directly.

    But core JavaScript has no concept of private and public functions; that distinction is introduced by TypeScript. So we can actually test a private function like a normal public one by placing a //@ts-ignore comment just above the call to the private function.

    class Test {

      private private_fun() {
        console.log("i am in private function");
        return "i am in private function"
      }

    }

    describe('Testing test class', () => {
      it('testing private function', () => {
        const test = new Test()

        // calling the private function with a ts-ignore comment

        //@ts-ignore
        const res = test.private_fun() // output: "i am in private function"
        expect(res).toEqual("i am in private function")
      })
    })

    P.S. One thing to note is that this trick is specific to TypeScript; in plain JavaScript there is no private modifier to bypass in the first place.

    The importance of test cases in software development

    Makes code agile:

    In software development, you may have to change the structure or design of your code to add new features. Changing already-tested code can be risky and costly. With unit tests in place, you only need to test the newly added code instead of the entire program.

    Improves code quality:

    A lot of bugs in software development occur due to unforeseen edge cases. If you forget to predict a single input, you may encounter a major bug in your application. When you write unit tests, think carefully about the edge cases of every function in your application.

    Provides Documentation:

    Unit tests give a basic idea of what the code does, and all the different use cases are covered through them. This makes documentation easier, increasing the readability and understandability of the code. Other developers can go through the unit tests at any time, understand the program better, and work on it quickly and easily.

    Easy Debugging:

    Unit testing makes debugging a lot easier and quicker. If a test fails at any stage, you only need to debug the latest changes made to the code instead of the entire program. Unit testing also makes debugging easier at the next stage, integration testing.

    Conclusion

    So, if you made it to the end, you must have some understanding of the importance of test cases in your code.

    We’ve covered which framework to choose and how to write your first test case in Jest. And now, you can be more confident about delivering bug-free, robust, clean, documented, and tested code in your next MR/PR.

  • Automating test cases for the text-messaging (SMS) feature of your application was never this easy

    Almost all the applications that you work on or deal with throughout the day use SMS (short messaging service) as an efficient and effective way to communicate with end users.

    Some very common use-cases include: 

    • Receiving an OTP for authenticating your login 
    • Getting promotional messages from the likes of Flipkart and Amazon informing you about the latest sale.
    • Getting reminder notifications for the doctor’s appointment that you have
    • Getting details for your debit and credit transactions.

    The practical use cases for an SMS can be far-reaching. 

    Even though SMS integration forms an integral part of many applications, it is often left out of automation due to the limitations and complexities involved in automating it via web automation tools like Selenium.

    Teams often opt to verify these sets of test cases manually, which, while important for catching bugs early, does pose some real challenges.

    Pitfalls with Manual Testing

    Manual testing comes with its own limitations, and you obviously do not want your application sending faulty text messages after that major release.

    Automation Testing … #theSaviour

    To overcome the limitations of manual testing, delegating your task to a machine comes in handy.

    Now that we have talked about the WHY, we will look into HOW the feature can be automated.
    Technically, you can’t use Selenium to read an SMS on a mobile device.
    So, we went looking for a third-party library that:

    • Is easy to integrate with the existing code base
    • Supports a range of languages
    • Does not involve highly complex code and stays focused on the problem at hand
    • Supports both incoming and outgoing messages

    After a lot of research, we settled on Twilio.

    In this article, we will look at an example of working with Twilio APIs to Read SMS and eventually using it to automate SMS flows.

    Twilio supports a bunch of different languages. For this article, we stuck with Node.js

    Account Setup

    Registration

    To start working with the service, you need to register.

    Once that is done, Twilio will prompt you with a bunch of simple questions to understand why you want to use their service.

    Twilio Dashboard

    Upon signing up, you receive a trial balance of $15.50 for your usage. This can be used for sending and receiving text messages. A unique Account SID and Auth Token are also generated for your account.

    Buy a Number


    Navigate to the Buy a Number link under Phone Numbers > Manage and purchase a number that you will eventually use in your automation scripts for receiving text messages from the application.

    Note – for the free trial, Twilio does not support Indian numbers (+91).

    Code Setup

    Install Twilio in your code base

     

    Code snippet

    For simplification,
    For simplicity, just pass the accountSid and authToken that you receive from the Dashboard Console to the twilio library. This returns a client object through which you can list all the messages in your inbox.

    const accountSid = 'AC13fb4ed9a621140e19581a14472a75b0'
    const authToken = 'fac9498ac36ac29e8dae647d35624af7'
    const client = require('twilio')(accountSid, authToken)
    let messageBody
    let messageContent
    let sentFrom
    let sentTo
    let OTP
    describe('My Login application', () => {
      it('Read Text Message', async () => {
        const username = $('#login_field');
        const pass = $('#password');
        const signInBtn = $('input[type="submit"]');
        const otpField = $('#otp');
        const verifyBtn = $(
          'form[action="/sessions/two-factor"] button[type="submit"]'
        );
        browser.url('https://github.com/login');
        username.setValue('your_email@mail.com');
        pass.setValue('your_pass123');
        signInBtn.click();
        // Get the latest message (list() resolves to an array of messages)
        const [latestMsg] = await client.messages.list({ limit: 1 })

        messageContent = JSON.stringify(latestMsg, null, "\t")
        messageBody = JSON.stringify(latestMsg.body)
        sentFrom = JSON.stringify(latestMsg.from)
        sentTo = JSON.stringify(latestMsg.to)
        OTP = latestMsg.body.match(/\d+/)[0]
        otpField.setValue(OTP);
        verifyBtn.click();
        expect(browser).toHaveUrl('https://github.com/');
      });
    })
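One caveat: SMS delivery is asynchronous, so reading the latest message immediately after clicking Sign in can race the network. A small generic polling helper can retry until a predicate passes or a timeout expires. (The names `waitFor` and `fakeFetchLatest` below are illustrative, not Twilio APIs; the fake source stands in for a real `client.messages.list` call.)

```javascript
// Poll an async source until the predicate is satisfied or time runs out.
async function waitFor(fetchFn, predicate, { timeoutMs = 30000, intervalMs = 2000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const value = await fetchFn();
    if (predicate(value)) return value;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for condition');
}

// Demo with a fake message source that only "arrives" on the third poll.
let polls = 0;
const fakeFetchLatest = async () =>
  ++polls >= 3 ? { body: 'Your OTP is 123456' } : null;

waitFor(fakeFetchLatest, (msg) => msg !== null && /\d+/.test(msg.body), { intervalMs: 10 })
  .then((msg) => console.log(msg.body.match(/\d+/)[0])); // logs '123456'
```

In the real test, the fetch function would wrap the Twilio call, e.g. `() => client.messages.list({ limit: 1 }).then(([m]) => m)`, and the predicate could also check that the message arrived after the OTP was requested.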

    List of other APIs to read an SMS provided by Twilio

    List all messages: Using this API, you can retrieve all the messages from your account.

    const accountSid = process.env.TWILIO_ACCOUNT_SID;
    const authToken = process.env.TWILIO_AUTH_TOKEN;
    const client = require('twilio')(accountSid, authToken);
    
    client.messages.list({limit: 20})
                   .then(messages => messages.forEach(m => console.log(m.sid)));

    List messages matching filter criteria: If you’d like to have Twilio narrow down this list of messages for you, you can do so by specifying a To number, a From number, and a DateSent date.

    const accountSid = process.env.TWILIO_ACCOUNT_SID;
    const authToken = process.env.TWILIO_AUTH_TOKEN;
    const client = require('twilio')(accountSid, authToken);
    
    client.messages
          .list({
             dateSent: new Date(Date.UTC(2016, 7, 31, 0, 0, 0)),
             from: '+15017122661',
             to: '+15558675310',
             limit: 20
           })
          .then(messages => messages.forEach(m => console.log(m.sid)));

    Get a message: If you know the message SID (i.e., the message’s unique identifier), you can retrieve that specific message directly.

    const accountSid = process.env.TWILIO_ACCOUNT_SID;
    const authToken = process.env.TWILIO_AUTH_TOKEN;
    const client = require('twilio')(accountSid, authToken);
    
    client.messages('MM800f449d0399ed014aae2bcc0cc2f2ec')
          .fetch()
          .then(message => console.log(message.to));

    Delete a message : If you want to delete a message from history, you can easily do so by deleting the Message instance resource.

    const accountSid = process.env.TWILIO_ACCOUNT_SID;
    const authToken = process.env.TWILIO_AUTH_TOKEN;
    const client = require('twilio')(accountSid, authToken);
    
    client.messages('MM800f449d0399ed014aae2bcc0cc2f2ec').remove();

    Limitations with a Trial Twilio Account

    • The trial version does not support Indian numbers (+91).
    • The trial version provides an initial balance of just $15.50.
      This is sufficient for use cases that involve only receiving messages on your Twilio number; if the use case requires sending messages back from the Twilio number, a paid version will serve the purpose.
    • Messages sent via a short code (e.g., 557766) are not received on the Twilio number.
      Only long codes are accepted in the trial version.
    • You can buy only a single number with the trial version. If purchasing multiple numbers is required, you may have to switch to a paid version.

    Conclusion

    In a nutshell, we saw how important it is to thoroughly verify the SMS functionality of our application since it serves as one of the primary ways of communicating with the end users.
    We also saw what the limitations are with following the traditional manual testing approach and how automating SMS scenarios would help us deliver high-quality products.
    Finally, we demonstrated a feasible, efficient and easy-to-use way to Automate SMS test scenarios using Twilio APIs.

    Hope this was a useful read and that you will now be able to easily automate SMS scenarios.
    Happy testing… Do like and share …

  • Idiot-proof Coding with Node.js and Express.js

    Node.js has become the most popular framework for web development, surpassing Ruby on Rails and Django in popularity. The rise of full-stack development, along with the performance benefits of asynchronous programming, has driven Node’s growth. Express.js is a minimalistic, unopinionated, and hugely popular web framework built for Node that has become the de-facto choice for many projects.
    Note — This article is about building a RESTful API server with Express.js. I won’t be delving into a templating library like Handlebars to manage the views.

    A quick Google search will lead you to a ton of articles agreeing with what I just said. Your next step would be to go through a couple of videos about Express.js on YouTube, try hello world with a boilerplate template, choose a few recommended middlewares for Express (Helmet, Multer, etc.) and an ORM (Mongoose if you are using Mongo, or Sequelize if you are using a relational DB), and start building the APIs. Wow, that was so fast!

    The problem starts to appear after a few weeks, when your code gets larger and more complex and you realise that there is no standard coding practice followed across the client and server code, refactoring or updating the code breaks something else, versioning the APIs becomes difficult, and callbacks have made your life hell (you are smart if you are using Promises, but have you heard of async-await?).

    Do you think your code is not so idiot-proof anymore? Don’t worry! You aren’t the only one who feels this way.

    Let me break the suspense and list down the technologies and libraries used in our idiot-proof code before you get restless.

    1. Node 8.11.3: This is the latest LTS release of Node. We use all the ES6 features along with async-await, and we are on the latest version of Express.js (4.16.3).
    2. Typescript: It adds an optional static typing interface to JavaScript and gives us familiar constructs like classes (ES6 also provides class as a construct), which makes it easy to maintain a large codebase.
    3. Swagger: It provides a specification to easily design, develop, test, and document RESTful interfaces. Swagger also provides many open-source tools, like Codegen and Editor, that make it easy to design the app.
    4. TSLint: It performs static code analysis on Typescript for maintainability, readability, and functionality errors.
    5. Prettier: It is an opinionated code formatter that maintains a consistent style throughout the project. It only takes care of styling, like indentation (2 or 4 spaces), or whether arguments stay on the same line or move to the next line when the line length exceeds 80 characters.
    6. Husky: It allows you to add git hooks (pre-commit, pre-push) that can trigger TSLint, Prettier, or unit tests, automatically formatting the code and preventing the push if the lint or the tests fail.

    Before you move to the next section I would recommend going through the links to ensure that you have a sound understanding of these tools.

    Now I’ll talk about some of the challenges we faced in some of our older projects and how we addressed these issues in the newer projects with the tools/technologies listed above.

    Formal API definition

    A problem that everyone can relate to is the lack of formal documentation in the project. Swagger addresses a part of this problem with their OpenAPI specification which defines a standard to design REST APIs which can be discovered by both machines and humans. As a practice, we first design the APIs in swagger before writing the code. This has 3 benefits:

    • It helps us to focus only on the design without having to worry about the code, scaffolder, naming conventions etc. Our API designs are consistent with the implementation because of this focused approach.
    • We can leverage tools like swagger-express-mw to internally wire the routes in the API doc to the controller, validate request and response object from their definitions etc.
    • Collaboration between teams becomes very easy, simple and standardised because of the Swagger specification.

    Code Consistency

    We wanted our code to look consistent across the stack (UI and backend), and we use ESLint to enforce this consistency.
    Example –
    Node traditionally used “require”, while the UI-based frameworks used “import”-based syntax to load modules. We decided to follow the ES6 style across the project, and these rules are defined with ESLint.

    Note — We have made slight adjustments to the TSLint rules for the backend and the frontend to make things easy for developers. For example, we allow up to 120 characters in React, as some of our DOM-related code gets lengthy very easily.

    Code Formatting

    This is as important as maintaining code consistency in the project. It’s easy to read code that follows a consistent format: indentation, spaces, line breaks, etc. Prettier does a great job at this. We have also integrated Prettier with Typescript to highlight formatting errors along with linting errors. IDEs like VS Code also have a Prettier plugin, which supports features like auto-format to make this easy.

    Strict Typing

    Typescript can be leveraged to the best only if the application follows strict typing. We try to enforce it as much as possible with exceptions made in some cases (mostly when a third party library doesn’t have a type definition). This has the following benefits:

    • Static code analysis works better when your code is strongly typed. We discover about 80–90% of the issues before compilation itself using the plugins mentioned above.
    • Refactoring and enhancements become very simple with Typescript. We first update the interface or the function definition and then follow the errors thrown by the Typescript compiler to refactor the code.

    Git Hooks

    Husky’s “pre-push” hook runs TSLint to ensure that we don’t push code with linting issues. If you follow TDD (the way it’s supposed to be done), you can also run unit tests before pushing the code. We decided to go with pre-push hooks because:
    – Not everyone has CI from the very first day. With a git hook, we at least have some code quality checks from day one.
    – Running lint and unit tests on the dev’s machine leaves your CI with more resources to run integration and other complex tests that are not possible in a local environment.
    – It forces the developer to fix issues at the earliest, which results in better code quality and faster code merges and releases.

    Async-await

    We were using Promises for all the asynchronous operations in our project. Promises often lead to long chains of then-error blocks, which are not comfortable to read and often result in bugs once they get very long (it goes without saying that Promises are still much better than the callback pattern). Async-await provides a very clean syntax for writing asynchronous operations that reads like sequential code. We have seen a drastic improvement in code quality, fewer bugs, and better readability after moving to async-await.
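To make the difference concrete, here is the same two-step operation written both ways. (`getUser` and `getOrders` are made-up stand-ins for any async calls such as database queries or HTTP requests.)

```javascript
// Made-up async operations standing in for real I/O.
const getUser = (id) => Promise.resolve({ id, name: 'ada' });
const getOrders = (user) =>
  Promise.resolve([`order-1-for-${user.id}`, `order-2-for-${user.id}`]);

// Promise chain: each step lives in its own .then, errors funnel to .catch.
function countOrdersWithPromises(id) {
  return getUser(id)
    .then((user) => getOrders(user))
    .then((orders) => orders.length)
    .catch((err) => {
      console.error('failed:', err);
      throw err;
    });
}

// async-await: the same logic reads like sequential code, with try/catch.
async function countOrdersWithAwait(id) {
  try {
    const user = await getUser(id);
    const orders = await getOrders(user);
    return orders.length;
  } catch (err) {
    console.error('failed:', err);
    throw err;
  }
}

countOrdersWithPromises(7).then((n) => console.log(n)); // 2
countOrdersWithAwait(7).then((n) => console.log(n));    // 2
```

With two steps the chain is still readable; the gap widens quickly as steps multiply or need to share intermediate values.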

    Hope this article gave you some insights into the tools and libraries you can use to build a scalable Express.js app.

  • Scalable Real-time Communication With Pusher

    What and why?

    Pusher is a hosted API service that makes adding real-time data and functionality to web and mobile applications seamless.

    Pusher works as a real-time communication layer between the server and the client. It maintains persistent connections to clients using WebSockets, so whenever new data is added on your server, the server can push it to clients instantly. It is highly flexible, scalable, and easy to integrate, and it exposes 40+ SDKs that cover almost all tech stacks.

    In the context of delivering real-time data, there are other hosted and self-hosted services available. The choice depends on what exactly you need: for instance, whether you need to broadcast data to all users or do something more complex with specific target groups. For our use case, Pusher was well-suited; the decision was based on its ease of use, scalability, private and public channels, webhooks, and event-based automation. Other options we considered were Socket.IO, Firebase, and Ably.

    Pusher is categorically well-suited for communication and collaboration features built on WebSockets. The key difference with Pusher is that it is a hosted service/API: it takes less work to get started compared to alternatives where you need to manage the deployment yourself, and once the setup is done, it handles scaling, which reduces future effort.

    Some of the most common use cases of Pusher are:

    1. Notifications: Pusher can inform users if there is any relevant change. Notifications can also be thought of as a form of signaling, where there is no representation of the notification in the UI but it still triggers a reaction within the application.

    2. Activity streams: Stream of activities which are published when something changes on the server or someone publishes it across all channels.

    3. Live Data Visualizations: Pusher allows you to broadcast continuously changing data when needed.

    4. Chats: You can use Pusher for peer-to-peer or peer-to-multichannel communication.

    In this blog, we will be focusing on Channels, Pusher’s Pub/Sub messaging API, in a JavaScript-based application. Pusher also comes with Chatkit and Beams (push notification) SDKs/APIs.

    • Chatkit is designed to make chat integration in your app as simple as possible. It allows you to add group chat and 1-to-1 chat features to your app, along with file attachments and online indicators.
    • Beams is used for adding push notifications to your mobile app. It includes SDKs to seamlessly manage push tokens and send notifications.

    Step 1: Getting Started

    Set up your account on the Pusher dashboard and get your free API keys.

    Image Source: Pusher

    1. Click on Channels
    2. Create an App. Add details based on the project and the environment
    3. Click on the App Keys tab to get the app keys.
    4. You can also check the getting started page. It will give code snippets to get you started.

    Add Pusher to your project:

    var express = require('express');
    var bodyParser = require('body-parser');
    var Pusher = require('pusher');

    // Instantiate the Pusher client with the keys from the App Keys tab
    var pusher = new Pusher({
      appId: 'APP_ID',
      key: 'APP_KEY',
      secret: 'APP_SECRET',
      cluster: 'APP_CLUSTER'
    });

    var app = express();
    app.use(bodyParser.json());
    app.use(bodyParser.urlencoded({ extended: false }));

    app.post('/pusher/auth', function(req, res) {
      var socketId = req.body.socket_id;
      var channel = req.body.channel_name;
      var auth = pusher.authenticate(socketId, channel);
      res.send(auth);
    });

    var port = process.env.PORT || 5000;
    app.listen(port);


    or using npm

    npm i pusher


    Step 2: Subscribing to Channels

    There are three types of channels in Pusher: Public, Private, and Presence.

    • Public channels: These channels are public in nature, so anyone who knows the channel name can subscribe and start receiving messages from it. Public channels are commonly used to broadcast general/public information that does not contain any secure or user-specific data.
    • Private channels: These channels have an access control mechanism that allows the server to control who can subscribe to the channel and receive data from it. All private channel names must be prefixed with private-. They are commonly used when the server needs to know who can subscribe to the channel and validate the subscribers.
    • Presence channels: An extension of the private channel. In addition to the properties private channels have, they let the server ‘register’ user information on subscription to the channel. They also enable other members to identify who is online.
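    The prefix convention above is easy to encode. As a small illustration, here is a hypothetical helper (not part of the Pusher SDK) that classifies a channel name by its prefix, assuming the standard private- and presence- prefixes:

    ```javascript
    // Hypothetical helper (not part of the Pusher SDK): classify a channel
    // name by its prefix, following Pusher's naming convention.
    function channelType(name) {
      if (name.startsWith('presence-')) return 'presence';
      if (name.startsWith('private-')) return 'private';
      return 'public';
    }

    console.log(channelType('notifications'));  // public
    console.log(channelType('private-orders')); // private
    console.log(channelType('presence-chat'));  // presence
    ```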

    In your application, you can create a subscription and start listening to events:

    // Here my-channel is the channel name.
    // All events published to this channel will be available
    // once you subscribe to the channel and start listening to it.
    
    var channel = pusher.subscribe('my-channel');
    
    channel.bind('my-event', function(data) {
      alert('An event was triggered with message: ' + data.message);
    });


    Step 3: Creating Channels

    For creating channels, you can use the dashboard or integrate Pusher with your server. For more details on how to integrate Pusher with your server, you can read the Server API docs. You need to create an app on your Pusher dashboard and can then use it to trigger events to your app.

    or 

    Integrate Pusher with your server. Here is a sample snippet from our node App:

    var Pusher = require('pusher');
    
    var pusher = new Pusher({
      appId: 'APP_ID',
      key: 'APP_KEY',
      secret: 'APP_SECRET',
      cluster: 'APP_CLUSTER'
    });
    
    // Logic which will then trigger events to a channel
    function trigger(){
    ...
    ...
    pusher.trigger('my-channel', 'my-event', {"message": "hello world"});
    ...
    ...
    }


    Step 4: Adding Security

    By default, anyone who knows your public app key can open a connection to your Channels app. This does not by itself add a security risk, as connections can only access data by subscribing to channels.

    For more advanced use cases, you need to use the “Authorized Connections” feature. It authorizes every single connection to your Channels app and hence avoids unwanted/unauthorized connections. To enable authorization, set up an auth endpoint, then modify your client code to look like this:

    const channels = new Pusher(APP_KEY, {
      cluster: APP_CLUSTER,
      authEndpoint: '/your_auth_endpoint'
    });
    
    const channel = channels.subscribe('private-<channel-name>');


    For more details on how to create an auth endpoint for your server, read this. Here is a snippet from a Node.js app:

    var express = require('express');
    var bodyParser = require('body-parser');
    var Pusher = require('pusher');
    
    // Initialize Pusher with the app keys from your dashboard
    var pusher = new Pusher({
      appId: 'APP_ID',
      key: 'APP_KEY',
      secret: 'APP_SECRET',
      cluster: 'APP_CLUSTER'
    });
    
    var app = express();
    app.use(bodyParser.json());
    app.use(bodyParser.urlencoded({ extended: false }));
    
    app.post('/pusher/auth', function(req, res) {
      var socketId = req.body.socket_id;
      var channel = req.body.channel_name;
      var auth = pusher.authenticate(socketId, channel);
      res.send(auth);
    });
    
    var port = process.env.PORT || 5000;
    app.listen(port);


    Step 5: Scale as you grow

     

    Pusher comes with a wide range of plans that you can subscribe to based on your usage, so you can scale your application as it grows. Here is a snapshot of the available plans; for more details, you can refer to this.

    Image Source: Pusher

    Conclusion

    This article has covered a brief description of Pusher, its use cases, and how you can use it to build a scalable real-time application. Usage may vary across use cases, but Pusher’s approach is simple and API-based. It enables developers to add real-time functionality to any application in very little time.

    If you want to get hands-on tutorials/blogs, please visit here.

  • A Step Towards Simplified Querying in NodeJS

    Recently, I came across a question on StackOverflow about querying data on a relationship table using Sequelize. It sent me into a flashback of facing the same situation, so I decided to write a blog about a better alternative: Objection.js. When we choose ORMs without looking into the use case we are tackling, we usually end up with a mess.

    The question on StackOverflow was about converting the query below into a Sequelize query.

    SELECT a.* 
    FROM employees a, emp_dept_details b 
    WHERE b.Dept_Id=2 AND a.Emp_No = b.Emp_Id

    (Pardon the naming in the query; it was asked by a novice programmer and I wanted to keep it as is for purity’s sake.)

    Seems pretty straightforward, right? The solution looks like this:

    Employee.findAll({ 
      include: [{ 
        model: EmployeeDeptDetails, 
        where: { 
          Emp_Id: Sequelize.col('employees.Emp_No'), 
          Dept_Id: 2 
        } 
      }] 
    });

    If you look at this, it’s a much more complex solution for simple querying, and the complexity grows with added relationships. Also, for simple queries like this, the Sequelize documentation is not sufficient. Now, if you ask me how it can be done in a better way with Objection.js, below is the same query in Objection.

    Employee.query()
      .joinRelation('employeeDeptDetails')
      .where({ 'employeeDeptDetails.Dept_Id': 2 })

    Note: It’s assumed that the relationship is defined (in the model classes) in both examples.

    Now you can see the difference. This is just one example I came across; there are others on the internet for a better understanding. So, are you ready to dive into Objection.js?

    But before we dive in, a note: whenever we search online for a Node.js ORM, we always find some people saying “don’t use an ORM, just write plain SQL,” and they have a point. If your app is small enough that you can write a bunch of query helper functions and carry out all the needed functionality, then don’t go with the ORM approach; just use plain SQL.

    But when your app has an ample number of tables and relationships between them that need to be defined, and multi-join queries need to be done, there comes the power of an ORM.

    So when we search for the ORMs (for relational DBs) available in the Node.js arena, we usually get the list below:

    1. Sequelize

    2. Objection.js

    3. typeORM

    There are others; I have just mentioned the more popular ones.

    Well, I have personally used both Sequelize and Objection.js, as they are the most popular ORMs available today. So if you are deciding which ORM to use for your next project, or got frustrated with the relationship query complexity of `Sequelize`, then you have landed in the correct place.

    I am going to be honest here: my currently using Objection.js doesn’t make it the de facto or best ORM for Node.js. If you don’t love writing SQL-resembling queries and prefer a fully abstracted query syntax, then I think `Sequelize` is the right option for you (though you might struggle with relationship queries as I did and end up with Objection.js later on). But if you want your queries to resemble SQL, then read on.

    What Makes Objection So Special?

    1. Objection under the hood uses Knex.js, a powerful SQL query builder

    2. Lets you create models for tables with ES6/ES7 classes and define the relationships between them

    3. Make queries with async / await

    4. Add validation to your models using JSON schema

    5. Perform graph inserts and upserts

    to name a few.

    The Learning Curve

    I have relied exclusively on the documentation. The Knex.js and Objection.js documentation is great, and there are simple examples on the Objection GitHub (one of which I am going to use below for explanation). So whether you have previously worked with any Node.js ORM or you are a newbie, this will help you get started without any struggles.

    So let’s get started with some of the important topics while I explain the advantages over other ORMs and their usage along the way.

    For setup (package installation, configuration, etc.) and the full code, you can check out GitHub.

    Creating and Managing DB Schema

    Migration is a good pattern for managing changes to your database schema. Objection.js uses Knex.js migrations for this purpose.

    So what is a migration? Migrations are changes to a database’s schema specified within your ORM, so we will be defining the tables and columns of our database straight in JavaScript rather than using SQL.

    One of the best features of Knex is its robust migration support. To create a new migration, simply use the Knex CLI:

    knex migrate:make migration_name

    After running this command, you’ll notice that a new file has been created within your migrations directory. The filename will include the current timestamp as well as the name that you gave your migration. The file will look like this:

    exports.up = function(knex, Promise) {
    
    };
    
    exports.down = function(knex, Promise) {
    
    };

    As you can notice, the first is `exports.up`, which specifies the commands that should be run to make the database change you’d like to make, e.g., creating database tables, adding or removing a column from a table, changing indexes, etc.

    The second function within your migration file is `exports.down`. This function’s goal is to do the opposite of what `exports.up` did. If `exports.up` created a table, then `exports.down` will drop that table. The reason to include `exports.down` is so that you can quickly undo a migration should you need to.

    For example:

    exports.up = knex => {
      return knex.schema
        .createTable('persons', table => {
          table.increments('id').primary();
          table
            .integer('parentId')
            .unsigned()
            .references('id')
            .inTable('persons')
            .onDelete('SET NULL')
            .index();
          table.string('firstName');
          table.string('lastName');
          table.integer('age');
          table.json('address');
        });
    };
    
    exports.down = knex => {
      return knex.schema
        .dropTableIfExists('persons');
    };

    It’s that simple to create a migration. Now you can run your migrations as shown below.

    $ knex migrate:latest

    You can also pass the `--env` flag or set `NODE_ENV` to select an alternative environment:

    $ knex migrate:latest --env production

    To rollback the last batch of migrations:

    $ knex migrate:rollback
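    To illustrate the up/down symmetry for a schema change (as opposed to creating a table), a follow-up migration might look like the sketch below. The `email` column is just an example, not part of the original schema:

    ```javascript
    // Sketch of a follow-up migration: add a column in `up`,
    // drop the same column in `down` so the change can be rolled back.
    exports.up = knex => {
      return knex.schema.alterTable('persons', table => {
        table.string('email'); // example column, for illustration only
      });
    };

    exports.down = knex => {
      return knex.schema.alterTable('persons', table => {
        table.dropColumn('email');
      });
    };
    ```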

    Models

    Models are wrappers around database tables; they help encapsulate the business logic within those tables.

    Objection.js allows you to create models using ES classes.

    Before diving into the example, clear up one thing about models: an Objection.js Model does not create any table in the DB. Models are only used for adding validations and relationship mappings.

    For example:

    const { Model } = require('objection');
    const Animal = require('./Animal');
    
    class Person extends Model {
      // Table name is the only required property.
      static get tableName() {
        return 'persons';
      }
    
      // Optional JSON schema. This is not the database schema. Nothing is generated
      // based on this. This is only used for validation. Whenever a model instance
      // is created it is checked against this schema. http://json-schema.org/.
      static get jsonSchema() {
        return {
          type: 'object',
          required: ['firstName', 'lastName'],
    
          properties: {
            id: { type: 'integer' },
            parentId: { type: ['integer', 'null'] },
            firstName: { type: 'string', minLength: 1, maxLength: 255 },
            lastName: { type: 'string', minLength: 1, maxLength: 255 },
            age: { type: 'number' },
            address: {
              type: 'object',
              properties: {
                street: { type: 'string' },
                city: { type: 'string' },
                zipCode: { type: 'string' }
              }
            }
          }
        };
      }
    
      // This object defines the relations to other models.
      static get relationMappings() {
        return {
          pets: {
            relation: Model.HasManyRelation,
            // The related model. This can be either a Model subclass constructor or an
            // absolute file path to a module that exports one.
            modelClass: Animal,
            join: {
              from: 'persons.id',
              to: 'animals.ownerId'
            }
          }
        };
      }
    }
    
    module.exports = Person;

    • Now let’s break it down: the static getter `tableName` returns the table name.
    • We also have a second static getter that defines the validations for each field; this is optional. We can specify the required properties, the type of each field (i.e., number, string, object, etc.), and other validations, as you can see in the example.
    • The third static getter we see is `relationMappings`, which defines this model’s relationships to other models. In this case, the key of the outer object, `pets`, is how we will refer to the related class. The join property, in addition to the relation type, defines how the models are related to one another. The from and to properties of the join object define the database columns through which the models are associated. The modelClass passed to the relation mapping is the class of the related model.

    So here `Person` has a `HasManyRelation` with the `Animal` model class, and the join is performed on the persons table’s `id` column and the animals table’s `ownerId` column. So one person can have multiple pets.
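    To see why the from/to pair is all Objection needs, here is a toy, hypothetical helper (not Objection API) that turns a relation mapping’s join definition into the equivalent SQL:

    ```javascript
    // Toy illustration (not Objection API): derive the SQL join implied
    // by a relation mapping's from/to columns.
    function joinSql(mapping) {
      const fromTable = mapping.join.from.split('.')[0]; // e.g. 'persons'
      const toTable = mapping.join.to.split('.')[0];     // e.g. 'animals'
      return `SELECT ${toTable}.* FROM ${fromTable} ` +
        `JOIN ${toTable} ON ${mapping.join.from} = ${mapping.join.to}`;
    }

    const sql = joinSql({ join: { from: 'persons.id', to: 'animals.ownerId' } });
    console.log(sql);
    // → SELECT animals.* FROM persons JOIN animals ON persons.id = animals.ownerId
    ```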

    Queries

    Let’s start with simple SELECT queries:

    SELECT * FROM persons;

    Can be done like:

    const persons = await Person.query();

    A slightly more advanced, or should I say typical, SELECT query:

    SELECT * FROM persons where firstName = 'Ben' ORDER BY age;

    Can be done like:

    const persons = await Person.query()
      .where({ firstName: 'Ben' })
      .orderBy('age');

    Notice how closely Objection queries resemble the actual SQL; it’s always easy to transform a SQL query into an Objection.js one, which is quite difficult with other ORMs.

    INSERT Queries:

    INSERT INTO persons (firstName) VALUES ('Ben');

    Can be done like:

    await Person.query().insert({ firstName: 'Ben' });

    UPDATE Queries:

    UPDATE persons set firstName = 'Brayn' where id = 1;

    Can be done like:

    await Person.query().patch({ firstName: 'Brayn' }).where({ id: 1 });

    DELETE Queries:

    DELETE from persons where id = 1;

    Can be done like:

    await Person.query().delete().where({ id: 1 });

    Relationship Queries:

    Suppose we want to fetch all the pets of the person whose first name is Ben:

    const person = await Person.query().findOne({ firstName: 'Ben' });
    
    const pets = await person.$relatedQuery('pets');

    Now suppose you want to insert a person along with his pets. In this case, we can use graph queries.

    const personWithPets = {
      firstName: 'Matt',
      lastName: 'Damon',
      age: 43,
    
      pets: [
        {
          name: 'Doggo',
          species: 'dog'
        },
        {
          name: 'Kat',
          species: 'cat'
        }
      ]
    };
    
    // `transaction` comes from objection: const { transaction } = require('objection');
    // Wrap the `insertGraph` call in a transaction since it creates multiple queries.
    const insertedGraph = await transaction(Person.knex(), trx => {
      return (
        Person.query(trx).insertGraph(personWithPets)
      );
    });

    So here we can see the power of Objection queries. If you compare these queries with other ORMs’ queries, you will find out for yourself which is better.

    Plugin Availability

    objection-password: This plugin adds automatic password hashing to your Objection.js models, making it super easy to secure passwords and other sensitive data.

    objection-graphql: Automatic GraphQL API generator for objection.js models.

    Verdict

    I am having a fun time working with Objection and Knex currently! If you ask me to choose between Sequelize and Objection.js, I would definitely go with Objection.js to avoid all the relationship query pain. It’s worth noting that Objection.js is unlike other ORMs: it’s just a wrapper over the Knex.js query builder, so it’s like using a query builder with additional features.

  • Test Automation in React Native apps using Appium and WebdriverIO

    React Native provides a mobile app development experience without sacrificing user experience or visual performance. And when it comes to mobile app UI testing, Appium is a great way to test native React Native apps out of the box. Creating native apps from the same code, and being able to do it using JavaScript, has made this combination popular. Apart from this, businesses are attracted by the fact that they can save a lot of money by using this application development framework.

    In this blog, we are going to cover how to add automated tests for React native apps using Appium & WebdriverIO with a Node.js framework. 

    What are React Native Apps

    React Native is an open-source framework for building Android and iOS apps using React and native app capabilities. With React Native, you can use JavaScript to access your platform’s APIs and define the look and behavior of your UI using React components, with plenty of reusable code. In Android and iOS app development, a “view” is the basic building block of a UI: a small rectangular element on the screen that can display text, images, or respond to user input. Even the smallest detail of an app, such as a line of text or a button, is a kind of view. Some views may contain other views.

    What is Appium

    Appium is an open-source tool for automating native, web, and hybrid apps on iOS, Android, and Windows desktop platforms. Native apps are those written using the iOS and Android SDKs. Mobile web apps are accessed using a mobile browser (Appium supports Safari on iOS and Chrome or the built-in ‘Browser’ on Android). Hybrid apps have a wrapper around a “web view”: a native control that allows you to interact with web content. Projects like Apache Cordova make it easy to build applications using web technologies that are then bundled into a native wrapper, creating a hybrid app.

    Importantly, Appium is “cross-platform”: it allows you to write tests against multiple platforms (iOS, Android) using the same API. This enables code reuse between iOS, Android, and Windows test suites. It drives iOS and Android applications using the WebDriver protocol.
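    For example, the same test code can target either platform just by swapping the desired capabilities. A sketch of two capability sets (the automation names are the standard Appium drivers; the app paths are illustrative placeholders, not from this project):

    ```javascript
    // Illustrative desired-capability sets; app paths are placeholders.
    const androidCaps = {
      platformName: 'Android',
      automationName: 'UiAutomator2',
      app: '/path/to/app-debug.apk'
    };

    const iosCaps = {
      platformName: 'iOS',
      automationName: 'XCUITest',
      app: '/path/to/MyApp.app'
    };
    ```

    The rest of the test suite stays the same; only the capabilities entry in the configuration changes.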

    Fig:- Appium Architecture

    What is WebDriverIO

    WebdriverIO is a next-gen browser and mobile automation testing framework for Node.js. It allows you to automate any application written with modern web frameworks, such as React, Angular, Polymer, or Vue.js, as well as native mobile applications for Android and iOS.

    WebdriverIO is a widely used test automation framework in JavaScript. It has various features: it supports many reporters and services, multiple test frameworks, and the WDIO CLI test runner.

    The following are examples of supported services:

    • Appium Service
    • Devtools Service
    • Firefox Profile Service
    • Selenium Standalone Service
    • Shared Store Service
    • Static Server Service
    • ChromeDriver Service
    • Report Portal Service
    • Docker Service

    The following test frameworks are supported:

    • Mocha
    • Jasmine
    • Cucumber

    Fig:- WebdriverIO Architecture

    Key features of Appium & WebdriverIO

    Appium

    • Does not require application source code or library
    • Provides a strong and active community
    • Has multi-platform support, i.e., it can run the same test cases on multiple platforms
    • Allows the parallel execution of test scripts
    • In Appium, a small change does not require reinstallation of the application
    • Supports various languages like C#, Python, Java, Ruby, PHP, JavaScript with node.js, and many others that have a Selenium client library

    WebdriverIO 

    • Extendable
    • Compatible
    • Feature-rich 
    • Supports modern web and mobile frameworks
    • Runs automation tests both for web applications as well as native mobile apps.
    • Simple and easy syntax
    • Integrates tests to third-party tools such as Appium
    • ‘Wdio setup wizard’ makes the setup simple and easy
    • Integrated test runner

    Installation & Configuration

    $ mkdir Demo_Appium_Project

    • Create a sample Appium Project
    $ npm init
    $ package name: (demo_appium_project) demo_appium_test
    $ version: (1.0.0) 1.0.0
    $ description: demo_appium_practice
    $ entry point: (index.js) index.js
    $ test command: "./node_modules/.bin/wdio wdio.conf.js"
    $ git repository: 
    $ keywords: 
    $ author: Pushkar
    $ license: (ISC) ISC

    This will also create a package.json file for test settings and project dependencies.

    • Install node packages
    $ npm install

    • Install Appium through npm or as a standalone app.
    $ npm install -g appium or npm install --save appium

    • Install WebdriverIO
    $ npm install -g webdriverio or npm install --save-dev webdriverio @wdio/cli
    • Install Chai Assertion library
    $ npm install -g chai or npm install --save chai

    Make sure you have the following versions installed:

    $ node --version - v.14.17.0
    $ npm --version - 7.17.0
    $ appium --version - 1.21.0
    $ java --version - java 16.0.1
    $ allure --version - 2.14.0

    WebdriverIO Configuration 

    A WebdriverIO configuration file must be created to apply the configuration during test runs. Generate it with the command below in the project:

    $ npx wdio config

    Answer the following series of questions to install the required dependencies:

    $ Where is your automation backend located? - On my local machine
    $ Which framework do you want to use? - mocha	
    $ Do you want to use a compiler? No!
    $ Where are your test specs located? - ./test/specs/**/*.js
    $ Do you want WebdriverIO to autogenerate some test files? - Yes
    $ Do you want to use page objects (https://martinfowler.com/bliki/PageObject.html)? - No
    $ Which reporter do you want to use? - Allure
    $ Do you want to add a service to your test setup? - No
    $ What is the base url? - http://localhost

    This is how wdio.conf.js looks:

    exports.config = {
     port: 4724,
     path: '/wd/hub/',
     runner: 'local',
     specs: ['./test/specs/*.js'],
     maxInstances: 1,
     capabilities: [
       {
         platformName: 'Android',
         platformVersion: '11',
         appPackage: 'com.facebook.katana',
         appActivity: 'com.facebook.katana.LoginActivity',
         automationName: 'UiAutomator2'
       }
     ],
     services: [
       [
         'appium',
         {
           args: {
             relaxedSecurity: true
            },
           command: 'appium'
         }
       ]
     ],
     logLevel: 'debug',
     bail: 0,
     baseUrl: 'http://localhost',
     waitforTimeout: 10000,
     connectionRetryTimeout: 90000,
     connectionRetryCount: 3,
     framework: 'mocha',
     reporters: [
       [
         'allure',
         {
           outputDir: 'allure-results',
           disableWebdriverStepsReporting: true,
           disableWebdriverScreenshotsReporting: false
         }
       ]
     ],
     mochaOpts: {
       ui: 'bdd',
       timeout: 60000
     },
     afterTest: function(test, context, { error, result, duration, passed, retries }) {
       if (!passed) {
           browser.takeScreenshot();
       }
     }
    }

    For iOS Automation, just add the following capabilities in wdio.conf.js & the Appium Configuration: 

    {
      "platformName": "IOS",
      "platformVersion": "14.5",
      "app": "/Your_PATH/wdioNativeDemoApp.app",
      "deviceName": "iPhone 12 Pro Max"
    }

    Launch the iOS Simulator from Xcode

    Install Appium Doctor for iOS by using the following command:

    npm install -g appium-doctor

    Fig:- Appium Doctor Installed

    This is how package.json will look:

    {
     "name": "demo_appium_test",
     "version": "1.0.0",
     "description": "demo_appium_practice",
     "main": "index.js",
     "scripts": {
       "test": "./node_modules/.bin/wdio wdio.conf.js"
     },
     "author": "Pushkar",
     "license": "ISC",
     "dependencies": {
       "@wdio/sync": "^7.7.4",
       "appium": "^1.21.0",
       "chai": "^4.3.4",
       "webdriverio": "^7.7.4"
     },
     "devDependencies": {
       "@wdio/allure-reporter": "^7.7.3",
       "@wdio/appium-service": "^7.7.3",
       "@wdio/cli": "^7.7.4",
       "@wdio/local-runner": "^7.7.4",
       "@wdio/mocha-framework": "^7.7.4",
       "@wdio/selenium-standalone-service": "^7.7.4"
     }
    }

    Steps to follow if an npm legacy peer deps problem occurs:

    npm install --save --legacy-peer-deps
    npm config set legacy-peer-deps true
    npm i --legacy-peer-deps
    npm cache clean --force

    This is how the folder structure will look in Appium with the WebDriverIO Framework:

    Fig:- Appium Framework Outline

    Step-by-Step Configuration of Android Emulator using Android Studio

    Fig:- Android Studio Launch

     

    Fig:- Android Studio AVD Manager

     

    Fig:- Create Virtual Device

     

    Fig:- Choose a device Definition

     

    Fig:- Select system image

    Fig:- License Agreement

     

    Fig:- Component Installer

     

    Fig:- System Image Download

     

    Fig:- Configuration Verification

    Fig:- Virtual Device Listing

    ‍Appium Desktop Configuration

    Fig:- Appium Desktop Launch

    Setup of ANDROID_HOME + ANDROID_SDK_ROOT &  JAVA_HOME

    Follow these steps for setting up ANDROID_HOME: 

    vi ~/.bash_profile
    Add following 
    export ANDROID_HOME=/Users/pushkar/android-sdk 
    export PATH=$PATH:$ANDROID_HOME/platform-tools 
    export PATH=$PATH:$ANDROID_HOME/tools 
    export PATH=$PATH:$ANDROID_HOME/tools/bin 
    export PATH=$PATH:$ANDROID_HOME/emulator
    Save ~/.bash_profile 
    source ~/.bash_profile 
    echo $ANDROID_HOME
    /Users/pushkar/Library/Android/sdk

    Follow these steps for setting up ANDROID_SDK_ROOT:

    vi ~/.bash_profile
    Add following 
    export ANDROID_HOME=/Users/pushkar/Android/sdk
    export ANDROID_SDK_ROOT=/Users/pushkar/Android/sdk
    export ANDROID_AVD_HOME=/Users/pushkar/.android/avd
    Save ~/.bash_profile 
    source ~/.bash_profile 
    echo $ANDROID_SDK_ROOT
    /Users/pushkar/Library/Android/sdk

    Follow these steps for setting up JAVA_HOME:

    java --version
    vi ~/.bash_profile
    Add following 
    export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-16.0.1.jdk/Contents/Home
    echo $JAVA_HOME
    /Library/Java/JavaVirtualMachines/jdk-16.0.1.jdk/Contents/Home

    Fig:- Environment Variables in Appium

     

    Fig:- Appium Server Starts 

     

    Fig:- Appium Start Inspector Session

    Fig:- Inspector Session Configurations

    Note: Make sure to install the app from the Google Play Store.

    Fig:- Android Emulator Launch  

     

    Fig: – Android Emulator with Facebook React Native Mobile App

     

    Fig:- Success of Appium with Emulator

     

    Fig:- Locating Elements using Appium Inspector

    How to write E2E React Native Mobile App Tests 

    Fig:- Test Suite Structure of Mocha

    ‍Here is an example of how to write E2E test in Appium:

    Positive Testing Scenario – Validate Login & Nav Bar

    1. Open Facebook React Native App 
    2. Enter valid email and password
    3. Click on Login
    4. Users should be able to login into Facebook 

    Negative Testing Scenario – Invalid Login

    1. Open Facebook React Native App
    2. Enter invalid email and password 
    3. Click on login 
    4. Users should not be able to login after receiving an “Incorrect Password” message popup

    Negative Testing Scenario – Invalid Element

    1. Open Facebook React Native App 
    2. Enter invalid email and  password 
    3. Click on login 
    4. Provide invalid element to capture message

    Make sure the test script is under the test/specs folder:

    var expect = require('chai').expect
    
    beforeEach(() => {
      driver.launchApp()
    })
    
    afterEach(() => {
      driver.closeApp()
    })
    
    describe('Verify Login Scenarios on Facebook React Native Mobile App', () => {
      it('User should be able to login using valid credentials to Facebook Mobile App', () => {
        $('~Username').waitForDisplayed({ timeout: 20000 })
        $('~Username').setValue('Valid-Email')
        $('~Password').waitForDisplayed({ timeout: 20000 })
        $('~Password').setValue('Valid-Password')
        $('~Log In').click()
        browser.pause(10000)
      })
    
      it('User should not be able to login with invalid credentials to Facebook Mobile App', () => {
        $('~Username').waitForDisplayed({ timeout: 20000 })
        $('~Username').setValue('Invalid-Email')
        $('~Password').waitForDisplayed({ timeout: 20000 })
        $('~Password').setValue('Invalid-Password')
        $('~Log In').click()
        $(
          '//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"]'
        ).waitForDisplayed({ timeout: 11000 })
        const status = $(
          '//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"]'
        ).getText()
        expect(status).to.equal(
          `You Can't Use This Feature Right Now`
        )
      })
    
      it('Test Case should Fail Because of Invalid Element', () => {
        $('~Username').waitForDisplayed({ timeout: 20000 })
        $('~Username').setValue('Invalid-Email')
        $('~Password').waitForDisplayed({ timeout: 20000 })
        $('~Password').setValue('Invalid-Password')
        $('~Log In').click()
        // Intentionally invalid selectors (missing closing bracket) so this test fails
        $(
          '//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"'
        ).waitForDisplayed({ timeout: 11000 })
        const status = $(
          '//android.widget.TextView[@resource-id="com.facebook.katana"'
        ).getText()
        expect(status).to.equal(
          `You Can't Use This Feature Right Now`
        )
      })
    
    })

    How to Run Mobile Tests Scripts  

    $ npm test

    This will create a Results folder with an .xml report.

    Reporting

    The following are examples of the supported reporters:

    • Allure Reporter
    • Concise Reporter
    • Dot Reporter
    • JUnit Reporter
    • Spec Reporter
    • Sumologic Reporter
    • Report Portal Reporter
    • Video Reporter
    • HTML Reporter
    • JSON Reporter
    • Mochawesome Reporter
    • Timeline Reporter
    • CucumberJS JSON Reporter

    Here, we are using Allure Reporting. Allure Reporting in WebdriverIO is a plugin to create Allure Test Reports.

    The easiest way is to keep @wdio/allure-reporter as a devDependency in your package.json:

    $ npm install @wdio/allure-reporter --save-dev

    Reporter options can be specified in the wdio.conf.js configuration file 

    reporters: [
       [
         'allure',
         {
           outputDir: 'allure-results',
           disableWebdriverStepsReporting: true,
           disableWebdriverScreenshotsReporting: false
         }
       ]
     ]

    To convert the Allure .xml report to an .html report, run the following command:

    $ allure generate && allure open

    The Allure HTML report should open in the browser.

    This is what Allure Reports look like:

    Fig:- Allure Report Overview
    Fig:- Allure Categories
    Fig:- Allure Suites
    Fig:- Allure Graphs
    Fig:- Allure Timeline
    Fig:- Allure Behaviors
    Fig:- Allure Packages

    Limitations with Appium & WebDriverIO

    Appium 

    • Android versions lower than 4.2 are not supported for testing
    • Limited support for hybrid app testing
    • Doesn’t support image comparison.

    WebdriverIO

    • It has a custom implementation
    • It can be used for automating AngularJS apps, but it is not as customized as Protractor.

    Conclusion

    In the QA and developer ecosystem, using Appium to test React Native applications is common. Appium makes it easy to record test cases on both Android and iOS platforms while working with React Native. Under the hood, Selenium's WebDriver protocol acts as a bridge between Appium and the mobile platforms for delivery and testing. Appium is a solid framework for automated UI testing. As this article shows, the framework can run test cases quickly and reliably. Most importantly, it can test both Android and iOS apps developed with React Native from a single codebase.

    Related Articles –

    References 

  • Understanding Node.js Async Flows: Parallel, Serial, Waterfall and Queues

    Promises in JavaScript have been around for a long time now. They helped solve the problem of callback hell. But as soon as the requirements get complicated with control flows, promises start getting unmanageable and harder to work with. This is where async flows come to the rescue. In this blog, let’s talk about the various async flows which are used frequently instead of raw promises and callbacks.

    Async Utility Module

    Async is a utility module which provides straightforward, powerful functions for working with asynchronous JavaScript. Although it is built around Node-style callbacks, it makes asynchronous code look and behave a little more like synchronous code, making it easier to read and maintain.

    Async utility has a number of control flows. Let’s discuss the most popular ones and their use cases:

    1. Parallel

    When we have to run multiple tasks independent of each other without waiting until the previous task has completed, parallel comes into the picture.

    async.parallel(tasks, callback)

    Tasks: A collection of functions to run. It can be an array, an object or any iterable.

    Callback: This is the callback where all the task results are passed and is executed once all the task execution has completed.

    In case an error is passed to a function’s callback, the main callback is immediately called with the error. Although parallel is about starting I/O tasks in parallel, it’s not about parallel execution since Javascript is single-threaded.

    An example of Parallel is shared below:

    async.parallel([
      function(callback) {
        setTimeout(function() {
          console.log('Task One');
          callback(null, 1);
        }, 200);
      },
      function(callback) {
        setTimeout(function() {
          console.log('Task Two');
          callback(null, 2);
        }, 100);
      }
    ],
    function(err, results) {
      console.log(results);
      // the results array will equal [1, 2] even though
      // the second function had a shorter timeout.
    });
    
    // an example using an object instead of an array
    async.parallel({
      task1: function(callback) {
        setTimeout(function() {
          console.log('Task One');
          callback(null, 1);
        }, 200);
      },
      task2: function(callback) {
        setTimeout(function() {
          console.log('Task Two');
          callback(null, 2);
        }, 100);
        }
    }, function(err, results) {
      console.log(results);
      // results now equals to: { task1: 1, task2: 2 }
    });

    2. Series

    When we have to run multiple tasks one after another, with each task starting only once the previous one has completed, series comes to our rescue.

    async.series(tasks, callback)

    Tasks: A collection of functions to run. It can be an array, an object or any iterable.

    Callback: This is the callback where all the task results are passed and is executed once all the task execution has completed.

    The callback function receives an array of result objects when all the tasks have been completed. If an error is encountered in any of the tasks, no more functions are run, but the final callback is immediately called with the error value.

    An example of Series is shared below:

    async.series([
      function(callback) {
        console.log('one');
        callback(null, 1);
      },
      function(callback) {
        console.log('two');
        callback(null, 2);
      },
      function(callback) {
        console.log('three');
        callback(null, 3);
      }
    ],
    function(err, results) {
      console.log(results);
      // results is now equal to [1, 2, 3]
    });
    
    async.series({
      1: function(callback) {
        setTimeout(function() {
          console.log('Task 1');
          callback(null, 'one');
        }, 200);
      },
      2: function(callback) {
        setTimeout(function() {
          console.log('Task 2');
          callback(null, 'two');
        }, 300);
      },
      3: function(callback) {
        setTimeout(function() {
          console.log('Task 3');
          callback(null, 'three');
        }, 100);
      }
    },
    function(err, results) {
      console.log(results);
      // results is now equal to: { 1: 'one', 2: 'two', 3:'three' }
    });

    3. Waterfall

    When we have to run multiple tasks where each depends on the output of the previous task, waterfall can be helpful.

    async.waterfall(tasks, callback)

    Tasks: A collection of functions to run. It can be an array, an object or any iterable structure.

    Callback: This is the callback where all the task results are passed and is executed once all the task execution has completed.

    It will run one function at a time and pass the result of the previous function to the next one.

    An example of Waterfall is shared below:

    async.waterfall([
      function(callback) {
        callback(null, 'Task 1', 'Task 2');
      },
      function(arg1, arg2, callback) {
        // arg1 now equals 'Task 1' and arg2 now equals 'Task 2'
        let arg3 = arg1 + ' and ' + arg2;
        callback(null, arg3);
      },
      function(arg1, callback) {
        // arg1 now equals 'Task1 and Task2'
        arg1 += ' completed';
        callback(null, arg1);
      }
    ], function(err, result) {
      // result now equals to 'Task1 and Task2 completed'
      console.log(result);
    });
    
    // Or, with named functions:
    async.waterfall([
      myFirstFunction,
      mySecondFunction,
      myLastFunction,
    ], function(err, result) {
      // result now equals 'Task1 and Task2 completed'
      console.log(result);
    });
    
    function myFirstFunction(callback) {
      callback(null, 'Task 1', 'Task 2');
    }
    function mySecondFunction(arg1, arg2, callback) {
      // arg1 now equals 'Task 1' and arg2 now equals 'Task 2'
      let arg3 = arg1 + ' and ' + arg2;
      callback(null, arg3);
    }
    function myLastFunction(arg1, callback) {
      // arg1 now equals 'Task1 and Task2'
      arg1 += ' completed';
      callback(null, arg1);
    }

    4. Queue

    When we need to run a set of tasks asynchronously, a queue can be used. A queue object can be created from an asynchronous function, which is passed in as the worker.

    async.queue(task, concurrency)

    Task: Here, it takes two parameters, first – the task to be performed and second – the callback function.

    Concurrency: It is the number of functions to be run in parallel.

    async.queue returns a queue object that supports a few properties:

    • push: Adds tasks to the queue to be processed.
    • drain: The drain function is called after the last task of the queue.
    • unshift: Adds tasks in front of the queue.

    An example of Queue is shared below:

    // create a queue object with concurrency 2
    var q = async.queue(function(task, callback) {
      console.log('Hello ' + task.name);
      callback();
    }, 2);
    
    // assign a callback
    q.drain = function() {
      console.log('All items have been processed');
    };
    
    // add some items to the queue
    q.push({name: 'foo'}, function(err) {
      console.log('Finished processing foo');
    });
    
    q.push({name: 'bar'}, function (err) {
      console.log('Finished processing bar');
    });
    
    // add some items to the queue (batch-wise)
    q.push([{name: 'baz'},{name: 'bay'},{name: 'bax'}], function(err) {
      console.log('Finished processing item');
    });
    
    // add some items to the front of the queue
    q.unshift({name: 'bar'}, function (err) {
      console.log('Finished processing bar');
    });

    5. Priority Queue

    It is the same as queue, the only difference being that a priority can be assigned to the tasks which is considered in ascending order.

    async.priorityQueue(task,concurrency)

    Task: Here, it takes three parameters:

    • First – task to be performed.
    • Second – priority, a number that determines the sequence of execution. For an array of tasks, the priority remains the same for all of them.
    • Third – Callback function.

    The async.priorityQueue does not support ‘unshift’ property of the queue.

    An example of Priority Queue is shared below:

    // create a queue object with concurrency 1
    var q = async.priorityQueue(function(task, callback) {
      console.log('Hello ' + task.name);
      callback();
    }, 1);
    
    // assign a callback
    q.drain = function() {
      console.log('All items have been processed');
    };
    
    // add some items to the queue with priority
    q.push({name: 'foo'}, 3, function(err) {
      console.log('Finished processing foo');
    });
    
    q.push({name: 'bar'}, 2, function (err) {
      console.log('Finished processing bar');
    });
    
    // add some items to the queue (batch-wise) which will have same priority
    q.push([{name: 'baz'},{name: 'bay'},{name: 'bax'}], 1, function(err) {
      console.log('Finished processing item');
    });

    6. Race

    It runs all the tasks in parallel, but as soon as any of the functions completes its execution or passes an error to its callback, the main callback is immediately called.

    async.race(tasks, callback)

    Tasks: Here, it is a collection of functions to run. It can be an array or any iterable.

    Callback: The result of the first complete execution is passed. It may be the result or error.

    An example of Race is shared below:

    async.race([
      function (callback) {
        setTimeout(function () {
          callback(null, 'one');
        }, 300);
      },
      function (callback) {
        setTimeout(function () {
          callback(null, 'two');
        }, 100);
      },
      function (callback) {
        setTimeout(function () {
          callback(null, 'three');
        }, 200);
      }
    ],
      // main callback
      function (err, result) {
        // the result will be equal to 'two' as it finishes earlier than the other 2
        console.log('The result is ', result);
      });

    Combining Async Flows

    In complex scenarios, the async flows like parallel and series can be combined and nested. This helps in achieving the expected output with the benefits of async utilities.

    Note that while the waterfall and series utilities look similar, they differ in how results flow: the final callback in series receives an array of the results of all the tasks, whereas in waterfall each task receives the output of the previous one, and the final callback receives only the result of the final task.
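    To see this difference concretely, here is a minimal hand-rolled sketch of the two flows (illustrative helpers, not the async library itself):

```javascript
// series-style: run tasks one after another, collecting every result
function runSeries(tasks, done, results = []) {
  if (tasks.length === 0) return done(null, results);
  const [task, ...rest] = tasks;
  task((err, result) => {
    if (err) return done(err);
    runSeries(rest, done, results.concat([result]));
  });
}

// waterfall-style: each task receives the previous task's result;
// only the last result reaches the final callback
function runWaterfall(tasks, done, prev) {
  if (tasks.length === 0) return done(null, prev);
  const [task, ...rest] = tasks;
  task(prev, (err, result) => {
    if (err) return done(err);
    runWaterfall(rest, done, result);
  });
}

runSeries(
  [cb => cb(null, 1), cb => cb(null, 2)],
  (err, results) => console.log(results) // [ 1, 2 ]
);

runWaterfall(
  [(prev, cb) => cb(null, 1), (prev, cb) => cb(null, prev + 1)],
  (err, result) => console.log(result) // 2
);
```

    In the real async module the first waterfall task takes only a callback; the prev argument here just keeps the sketch uniform.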

    Conclusion

    The async utility module has an upper hand over raw promises due to its concise and clean code, better error handling, and easier debugging. It shows how simple and easy asynchronous code can be without the syntactical mess of nested promises and callback hell.

  • Building Google Photos Alternative Using AWS Serverless

    Being an avid Google Photos user, I really love some of its features, such as album, face search, and unlimited storage. However, when Google announced the end of unlimited storage on June 1st, 2021, I started thinking about how I could create a cheaper solution that would meet my photo backup requirement.

    “Taking an image, freezing a moment, reveals how rich reality truly is.”

    – Anonymous

    Google offers 100 GB of storage for 130 INR, usable across various Google applications. However, I don’t use all that space in one go. I snap photos randomly; sometimes I visit places and take shots with my DSLR and smartphone. In general, I upload approximately 200 photos monthly, each ranging from 4 MB to 30 MB in size, so on average I use about 4 GB of storage a month, backed up on my external hard drive to keep the raw photos, even the bad ones. Photos backed up on the cloud should be visually high-quality, and it’s good to have a raw copy available at the same time so that you can make some Lightroom edits (although I never touch them 😛). So, here are my minimal requirements:

    • Should support social authentication (Google sign-in preferred).
    • Photos should be stored securely in raw format.
    • Storage should be scaled with usage.
    • Uploading and downloading photos should be easy.
    • Web view for preview would be a plus.
    • Should have almost no operations headache and solution should be as cheap as possible 😉.

    Selecting Tech Stack

    To avoid operational headaches like servers going down, scaling, application crashes, and overall monitoring, I opted for a serverless solution on AWS. AWS S3 is infinitely scalable storage, and you only pay for the amount of storage you use. On top of that, you can opt for an S3 storage class that is efficient and cost-effective.

    – Infrastructure Stack

    1. AWS API Gateway (http api)
    2. AWS Lambda (for processing images and API gateway queries)
    3. Dynamodb (for storing image metadata)
    4. AWS Cognito (for authentication)
    5. AWS S3 Bucket (for storage and web application hosting)
    6. AWS Certificate Manager (to use SSL certificate for a custom domain with API gateway)

    – Software Stack

    1. NodeJS
    2. ReactJS and Material-UI (front-end framework and UI)
    3. AWS Amplify (for simplifying auth flow with cognito)
    4. Sharp (high-speed nodejs library for converting images)
    5. Express and serverless-http
    6. Infinite Scroller (for gallery view)
    7. Serverless Framework (for ease of deployment and Infrastructure as Code)

    Create S3 Buckets:

    We will create three S3 buckets. The first is for hosting the frontend application (refer to the architecture diagram; more on this in the build and hosting part). The second is for temporarily uploading images. The third is for actual backup and storage (enable server-side encryption on this bucket). Images uploaded to the temporary bucket will be processed from there.

    During pre-processing, we will resize the original image into two different sizes: one for thumbnail purposes (400px width) and another for viewing purposes, but with reduced quality (webp format). Once images are resized, upload all three versions (raw, thumbnail, and web view) to the third S3 bucket and create a record in DynamoDB. Set an object expiry policy of 1 day on the temporary bucket so that uploaded objects are automatically deleted from it.
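    The thumbnail resize only fixes the width; the height follows from the aspect ratio. A hypothetical helper for computing the target dimensions (the actual resizing is done by the Sharp library):

```javascript
// Scale an image down to a fixed target width, preserving aspect ratio.
function thumbnailSize(width, height, targetWidth = 400) {
  const scale = targetWidth / width;
  return { width: targetWidth, height: Math.round(height * scale) };
}

console.log(thumbnailSize(6000, 4000)); // { width: 400, height: 267 }
```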

    Setup trigger on the temporary bucket for uploaded images:

    We will need to set up an S3 PUT event, which will trigger our Lambda function to download and process images. We will filter on the suffixes .jpg and .jpeg for the event trigger, meaning that any file with extension .jpg or .jpeg uploaded to our temporary bucket will automatically invoke a Lambda function with the event payload. Using the event payload, the Lambda function will download the uploaded file and process it. Your serverless function definition would look like:

    functions:
     lambda:
       handler: index.handler
       memorySize: 512
       timeout: 60
       layers:
         - {Ref: PhotoParserLibsLambdaLayer}
       events:
         - s3:
             bucket: your-temporary-bucket-name
             event: s3:ObjectCreated:*
             rules:
               - suffix: .jpg
             existing: true
         - s3:
             bucket: your-temporary-bucket-name
             event: s3:ObjectCreated:*
             rules:
               - suffix: .jpeg
             existing: true

    Notice that in the YAML events section, we set “existing:true”. This ensures that the bucket will not be created during the serverless deployment. However, if you plan not to manually create your s3 bucket, you can let the framework create a bucket for you.
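    The handler wired up above (index.handler) receives the S3 event payload; a minimal sketch of pulling the bucket and key out of it (the helper name is illustrative) could look like:

```javascript
// Object keys in S3 event payloads arrive URL-encoded (with '+' for spaces),
// so decode them before using them with getObject.
function extractUpload(event) {
  const record = event.Records[0];
  return {
    bucket: record.s3.bucket.name,
    key: decodeURIComponent(record.s3.object.key.replace(/\+/g, ' ')),
  };
}

// quick check with a minimal fake event
const sampleEvent = {
  Records: [{
    s3: {
      bucket: { name: 'your-temporary-bucket-name' },
      object: { key: 'holiday+snap.jpg' },
    },
  }],
};
console.log(extractUpload(sampleEvent));
// { bucket: 'your-temporary-bucket-name', key: 'holiday snap.jpg' }
```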

    DynamoDB as the metadata DB:

    AWS DynamoDB is a key-value document DB that suits our use case. DynamoDB will help us retrieve the list of photos in time order. DynamoDB uses a primary key to uniquely identify each record. A primary key can be composed of a hash key and a range key (also called a sort key); the range key is optional. We will use the federated identity ID (discussed under authorization setup) as the hash key (partition key) and name the attribute username, with type string. We will use an attribute named timestamp as the range key, with type number. The range key will let us query results as a time series (Unix epoch). We could also use DynamoDB secondary indexes to sort results more specifically; however, to keep the application simple, we’re going to opt out of this feature for now. Your serverless resource definition would look like:

    resources:
     Resources:
       MetaDataDB:
         Type: AWS::DynamoDB::Table
         Properties:
           TableName: your-dynamodb-table-name
           AttributeDefinitions:
             - AttributeName: username
               AttributeType: S
             - AttributeName: timestamp
               AttributeType: N
           KeySchema:
             - AttributeName: username
               KeyType: HASH
             - AttributeName: timestamp
               KeyType: RANGE
           BillingMode: PAY_PER_REQUEST

    Finally, you also need to set up the IAM role so that the process image lambda function would have access to the S3 bucket and dynamodb. Here is the serverless definition for the IAM role.

    # you can add statements to the Lambda function's IAM Role here
     iam:
       role:
         statements:
         - Effect: "Allow"
           Action:
             - "s3:ListBucket"
           Resource:
             - arn:aws:s3:::your-temporary-bucket-name
             - arn:aws:s3:::your-actual-photo-bucket-name
         - Effect: "Allow"
           Action:
             - "s3:GetObject"
             - "s3:DeleteObject"
           Resource: arn:aws:s3:::your-temporary-bucket-name/*
         - Effect: "Allow"
           Action:
             - "s3:PutObject"
           Resource: arn:aws:s3:::your-actual-photo-bucket-name/*
         - Effect: "Allow"
           Action:
             - "dynamodb:PutItem"
           Resource:
             - Fn::GetAtt: [ MetaDataDB, Arn ]

    Setup Authentication:

    Okay, to set up a Cognito user pool, head to the Cognito console and create a user pool with the config below:

    1. Pool Name: photobucket-users

    2. How do you want your end-users to sign in?

    • Select: Email Address or Phone Number
    • Select: Allow Email Addresses
    • Check: (Recommended) Enable case insensitivity for username input

    3. Which standard attributes are required?

    • email

    4. Keep the defaults for “Policies”

    5. MFA and Verification:

    • I opted to manually reset the password for each user (since this is an internal app)
    • Disabled user verification

    6. Keep the default for Message Customizations, tags, and devices.

    7. App Clients :

    • App client name: myappclient
    • Let the refresh token, access token, and id token be default
    • Check all “Auth flow configurations”
    • Check enable token revocation

    8. Skip Triggers

    9. Review and create the pool

    Once created, go to App Integration -> Domain Name. Create a Cognito subdomain of your choice and note it down. Next, I plan to use the Google sign-in feature with Cognito federated identity providers. Use this guide to set up a Google social identity with Cognito.

    Setup Authorization:

    Once the user identity is verified, we need to allow them to access the s3 bucket with limited permissions. Head to the Cognito console, select federated identities, and create a new identity pool. Follow these steps to configure:

    1. Identity pool name: photobucket_auth

    2. Keep Unauthenticated and Authentication flow settings unchecked.

    3. Authentication providers:

    • User Pool ID: Enter the user pool ID obtained during the authentication setup
    • App Client ID: Enter the app client ID generated during the authentication setup (Cognito user pool -> App Clients -> App client ID)

    4. Setup permissions:

    • Expand view details (Role Summary)
    • For authenticated identities: edit policy document and use the below JSON policy and skip unauthenticated identities with the default configuration.
    {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "mobileanalytics:PutEvents",
                   "cognito-sync:*",
                   "cognito-identity:*"
               ],
               "Resource": [
                   "*"
               ]
           },
           {
               "Sid": "ListYourObjects",
               "Effect": "Allow",
               "Action": "s3:ListBucket",
               "Resource": [
                   "arn:aws:s3:::your-actual-photo-bucket-name"
               ],
               "Condition": {
                   "StringLike": {
                       "s3:prefix": [
                           "${cognito-identity.amazonaws.com:sub}/",
                           "${cognito-identity.amazonaws.com:sub}/*"
                       ]
                   }
               }
           },
           {
               "Sid": "ReadYourObjects",
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": [
                   "arn:aws:s3:::your-actual-photo-bucket-name/${cognito-identity.amazonaws.com:sub}",
                   "arn:aws:s3:::your-actual-photo-bucket-name/${cognito-identity.amazonaws.com:sub}/*"
               ]
           }
       ]
    }

    ${cognito-identity.amazonaws.com:sub} is a special AWS variable. When a user is authenticated with a federated identity, each user is assigned a unique identity. What the above policy means is that any user who is authenticated should have access to objects prefixed by their own identity ID. This is how we intend users to gain authorization in a limited area within the S3 bucket.

    Copy the Identity Pool ID (from the sample code section). You will need this in your backend to get the identity ID of the authenticated user via the JWT token.

    Amplify configuration for the frontend UI sign-in:

    This object helps you set up the minimal configuration for your application. This is all that we need to sign in via Cognito and access the S3 photo bucket.

    const awsconfig = {
       Auth : {
       identityPoolId: "identity pool id created during authorization setup",
           region : "your aws region",
           identityPoolRegion: "same as above if cognito is in same region",
           userPoolId : "cognito user pool id created during authentication setup",
           userPoolWebClientId : "cognito app client id",
           cookieStorage : {
               domain : "https://your-app-domain-name", //this is very important
               secure: true
           },
           oauth: {
               domain : "{cognito domain name}.auth.{cognito region name}.amazoncognito.com",
               scope : ["profile","email","openid"],
               redirectSignIn: 'https://your-app-domain-name',
               redirectSignOut: 'https://your-app-domain-name',
               responseType : "token"
           }
       },
       Storage: {
           AWSS3 : {
               bucket: "your-actual-bucket-name",
               region: "region-of-your-bucket"
           }
       }
    };
    export default awsconfig;

    You can then use the below code to configure and sign in via social authentication.

    import Amplify, {Auth} from 'aws-amplify';
    import awsconfig from './aws-config';
    Amplify.configure(awsconfig);
    //once the amplify is configured you can use below call with onClick event of buttons or any other visual component to sign in.
    //Example
    <Button startIcon={<img alt="Sigin in With Google" src={logo} />} fullWidth variant="outlined" color="primary" onClick={() => Auth.federatedSignIn({provider: 'Google'})}>
       Sign in with Google
    </Button>

    Gallery View:

    When the application is loaded, we use the PhotoGallery component to load photos and view thumbnails on the page. The PhotoGallery component is a wrapper around the InfiniteScroller component, which keeps loading images as the user scrolls. The idea here is that we query a maximum of 10 images in one go. Our backend returns a list of 10 images (just the map and metadata to the S3 bucket). We must load these images from the S3 bucket and then show thumbnails on-screen as a gallery view. When the user reaches the bottom of the screen or there is empty space left, the InfiniteScroller component loads 10 more images. This continues until our backend replies with a stop marker.

    The key point here is that we need to send the JWT token as a header to our backend service via an ajax call. The JWT token is obtained after sign-in from the Amplify framework. An example of obtaining a JWT token:

    let authsession = await Auth.currentSession();
    let jwtToken = authsession.getIdToken().jwtToken;
    let photoList = await axios.get(url,{
       headers : {
           Authorization: jwtToken
       },
       responseType : "json"
    });

    An example of infinite scroller component usage is given below. Note that “gallery” is a JSX-composed array of photo thumbnails. The “loadMore” method calls our ajax function to the server-side backend, updates the “gallery” variable, and sets the “hasMore” variable to true/false so that the infinite scroller component can stop querying when there are no photos left to display on the screen.

    <InfiniteScroll
       loadMore={this.fetchPhotos}
       hasMore={this.state.hasMore}
       loader={<div style={{padding:"70px"}} key={0}><LinearProgress color="secondary" /></div>}
    >
       <div style={{ marginTop: "80px", position: "relative", textAlign: "center" }}>
           <div className="image-grid" style={{ marginTop: "30px" }}>
               {gallery}
           </div>
           {this.state.openLightBox ?
           <LightBox src={this.state.lightBoxImg} callback={this.closeLightBox} />
           : null}
       </div>
    </InfiniteScroll>

    The Lightbox component gives a zoom effect to the thumbnail. When the thumbnail is clicked, a higher-resolution picture (webp version) is downloaded from the S3 bucket and shown on the screen. We use the Storage object from the Amplify library. Downloaded content is a blob and must be converted into image data. To do so, we use the native JavaScript method createObjectURL. Below is the sample code that downloads the object from the S3 bucket and then converts it into a viewable image for the HTML IMG tag.

    thumbClick = (index) => {
       const urlCreater = window.URL || window.webkitURL;
       try {
           this.setState({
               openLightBox: true
           });
           Storage.get(this.state.photoList[index].src,{download: true}).then(data=>{
               let image = urlCreater.createObjectURL(data.Body);
               this.setState({
                   lightBoxImg : image
               });
           });
              
       } catch (error) {
           console.log(error);
           this.setState({
               openLightBox: false,
               lightBoxImg : null
           });
       }
    };

    Uploading Photos:

    The S3 SDK lets you generate a pre-signed POST URL. Anyone who gets this URL will be able to upload objects to the S3 bucket directly without needing credentials. Of course, we can actually set up some boundaries, like a max object size, key of the uploaded object, etc. Refer to this AWS blog for more on pre-signed URLs. Here is the sample code to generate a pre-signed URL.

    let s3Params = {
       Bucket: "your-temporary-bucket-name",
       Conditions : [
           ["content-length-range",1,31457280]
       ],
       Fields : {
           key: "path/to/your/object"
       },
       Expires: 300 //in seconds
    };
    const s3 = new S3({region : process.env.AWSREGION });
    s3.createPresignedPost(s3Params)

    For a better UX, we can allow our users to upload more than one photo at a time. However, a pre-signed URL lets you upload only a single object. To overcome this, we generate multiple pre-signed URLs. Initially, we send a request to our backend asking to upload photos with the expected keys. This request originates once the user selects photos to upload. Our backend then generates pre-signed URLs for us. Our frontend React app then provides the illusion that all photos are being uploaded as a whole.
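    A hypothetical sketch of that batching step (the bucket name and key layout are assumptions): one set of pre-signed POST params is produced per selected file, which the backend would then feed to createPresignedPost one by one:

```javascript
// One pre-signed POST params object per file the user selected.
function buildUploadParamsList(identityId, fileNames) {
  return fileNames.map(name => ({
    Bucket: 'your-temporary-bucket-name',
    Conditions: [['content-length-range', 1, 31457280]], // 1 byte to ~30 MB
    Fields: { key: `${identityId}/${name}` }, // assumed per-user key layout
    Expires: 300, // seconds
  }));
}

console.log(buildUploadParamsList('id-123', ['a.jpg', 'b.jpg']).length); // 2
```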

    When the upload is successful, the S3 PUT event is triggered, which we discussed earlier. The complete flow of the application is given in a sequence diagram. You can find the complete source code here in my GitHub repository.

    React Build Steps and Hosting:

    The ideal way to build the React app is to execute npm run build. However, we take a slightly different approach. We are not using an S3 static website for serving the frontend UI, for one reason: S3 static websites are non-SSL unless we use CloudFront. Therefore, we will make the API Gateway our application’s entry point; thus, the UI will also be served from the API Gateway. However, we want to reduce the calls made to the API Gateway. For this reason, we will deliver only the index.html file with the help of API Gateway/Lambda, and serve the rest of the static files (React supporting JS files) from the S3 bucket.

    Your index.html should have all the reference paths pointed to the S3 bucket. The build must explicitly specify that static files are located in a different location than the one relative to the index.html file. Your S3 bucket needs to be public with the right bucket policy and CORS set so that end-users can only retrieve files and not upload nasty objects. Those who are confused about how an S3 static website and an S3 public bucket differ may refer here. Below are the React build steps, bucket policy, and CORS.

    PUBLIC_URL=https://{your-static-bucket-name}.s3.{aws_region}.amazonaws.com/ npm run build
    //Bucket Policy
    {
       "Version": "2012-10-17",
       "Id": "http referer from your domain only",
       "Statement": [
           {
               "Sid": "Allow get requests originating from",
               "Effect": "Allow",
               "Principal": "*",
               "Action": "s3:GetObject",
               "Resource": "arn:aws:s3:::{your-static-bucket-name}/static/*",
               "Condition": {
                   "StringLike": {
                       "aws:Referer": [
                           "https://your-app-domain-name"
                       ]
                   }
               }
           }
       ]
    }
    //CORS
    [
       {
           "AllowedHeaders": [
               "*"
           ],
           "AllowedMethods": [
               "GET"
           ],
           "AllowedOrigins": [
               "https://your-app-domain-name"
           ],
           "ExposeHeaders": []
       }
    ]

Once the build is complete, upload index.html to the Lambda that serves your UI. Run the shell commands below to compress the static contents and host them in our static S3 bucket.

    #assuming you are in your react app directory
    mkdir /tmp/s3uploads
    cp -ar build/static /tmp/s3uploads/
    cd /tmp/s3uploads
    #add gzip encoding to all the files
    gzip -9 `find ./ -type f`
    #remove .gz extension from compressed files
    for i in `find ./ -type f`
    do
       mv $i ${i%.*}
    done
    #sync your files to s3 static bucket and mention that these files are compressed with gzip encoding
    #so that browser will not treat them as regular files
    aws s3 --region $AWSREGION sync . s3://${S3_STATIC_BUCKET}/static/ --content-encoding gzip --delete --sse
    cd -
    rm -rf /tmp/s3uploads

Our backend uses the Node.js Express framework. Since this is a serverless application, we need to wrap Express with the serverless-http package to make it work with Lambda. Sample source code is given below, along with the Serverless Framework resource definition. Notice that, except for the UI home endpoint ( “/” ), the rest of the API endpoints are authenticated with Cognito at the API Gateway itself.

const serverless = require("serverless-http");
const express = require("express");
const path = require("path");
const app = express();
    .
    .
    .
    .
    .
    .
app.get("/", (req, res) => {
 res.sendFile(path.join(__dirname, "index.html"));
});
    module.exports.uihome = serverless(app);

    provider:
     name: aws
     runtime: nodejs12.x
     lambdaHashingVersion: 20201221
     httpApi:
       authorizers:
         cognitoJWTAuth:
           identitySource: $request.header.Authorization
           issuerUrl: https://cognito-idp.{AWS_REGION}.amazonaws.com/{COGNITO_USER_POOL_ID}
           audience:
             - COGNITO_APP_CLIENT_ID
    .
    .
    .
    .
    .
    .
    .
    functions:
     react-serve-ui:
       handler: handler.uihome
       memorySize: 256
       timeout: 29
       layers:
         - {Ref: CommonLibsLambdaLayer}
       events:
         - httpApi:
             path: /prep/photoupload
             method: post
             authorizer:
               name: cognitoJWTAuth
         - httpApi:
             path: /list/photos
             method: get
             authorizer:
               name: cognitoJWTAuth
         - httpApi:
             path: /
             method: get

    Final Steps :

Lastly, we will set up a custom domain so that we don’t need to use the gibberish domain name generated by the API Gateway, along with a certificate for our custom domain. You don’t need to use Route 53 for this part. If you have an existing domain, you can create a subdomain and point it to the API Gateway. First things first: head to the AWS ACM console and request a certificate for the domain name. Once the request is created, you need to validate ownership of your domain by creating the DNS record (a CNAME, for DNS validation) shown in the ACM console. ACM is a free service. Domain verification may take a few minutes to several hours. Once you have the certificate ready, head back to the API Gateway console. Navigate to “Custom domain names” and click Create.

    1. Enter your application domain name
    2. Check TLS 1.2 as TLS version
    3. Select Endpoint type as Regional
    4. Select ACM certificate from dropdown list
    5. Create domain name

Select the newly created custom domain and note the API Gateway domain name from the Domain details -> Configuration tab; you will need it to create a CNAME/ALIAS record with your DNS provider. Click on the API mappings tab, then Configure API mappings. From the dropdown, select your API, select the $default stage, and click Save. You are done here.

    Future Scope and Improvements :

To improve application latency, we can use CloudFront as a CDN. This way, our entry point could be CloudFront serving from S3, and we would no longer need to use the API Gateway regional endpoint. We can also add AWS WAF in front of our API Gateway as added security, to inspect incoming requests and payloads. We can also use DynamoDB secondary indexes so that we can efficiently search metadata in the table. Adding a lifecycle rule can transition raw photos that have not been accessed for more than a year to the S3 Glacier storage class. You can further add a Glacier Deep Archive transition to save even more on storage costs.
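Following the JSON style used earlier for the bucket policy and CORS, a bucket lifecycle configuration along these lines could implement the Glacier idea (the raw/ prefix and day thresholds are assumptions; note that standard lifecycle rules transition objects by age since creation rather than by last access):

```json
{
   "Rules": [
       {
           "ID": "archive-old-raw-photos",
           "Filter": { "Prefix": "raw/" },
           "Status": "Enabled",
           "Transitions": [
               { "Days": 365, "StorageClass": "GLACIER" },
               { "Days": 730, "StorageClass": "DEEP_ARCHIVE" }
           ]
       }
   ]
}
```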

  • Node.js – Async Your Way out of Callback Hell with Promises, Async & Async/Await

In this blog, I will compare various methods to avoid the dreaded callback hell that is common in Node.js. What exactly am I talking about? Have a look at the piece of code below. Every child function executes only when the result of its parent function is available. Callbacks are the very essence of the non-blocking (and hence performant) nature of Node.js.

    foo(arg, (err, val) => {
         if (err) {
              console.log(err);
         } else {
              val += 1;
              bar(val, (err1, val1) => {
                    if (err1) {
                        console.log(err1);
                   } else {
                        val1 += 2;
                        baz(val1, (err2, result) => {
                             if (err2) {
                                  console.log(err2);
                             } else {
                                  result += 3;
                                  console.log(result); // 6
                             }
                        });
                   }
              });
         }
    });

Convinced yet? Even though there is some seemingly unnecessary error handling done here, I assume you get the drift! The problem with such code is more than just indentation: our program’s entire flow is based on side effects, with each function only incidentally calling the inner one.

    There are multiple ways in which we can avoid writing such deeply nested code. Let’s have a look at our options:

    Promises

According to the official specification, a promise represents the eventual result of an asynchronous operation. Basically, it stands for an operation that has not completed yet but is expected to in the future. The then method is a major component of a promise. It is used to receive either the fulfillment value or the rejection reason of the promise. Only one of these two will ever be set. Let’s have a look at a simple file read example without using promises:

fs.readFile(filePath, (err, result) => {
     if (err) { return console.log(err); }
     console.log(result);
});

Now, if the readFile function returned a promise, the same logic could be written like so (in modern Node.js, the fs.promises API provides exactly this):

const fileReadPromise = fs.promises.readFile(filePath);
fileReadPromise.then(console.log, console.error);

The fileReadPromise can then be passed around in the code wherever you need to read that file. This helps in writing robust unit tests, since you now only have to test against a single promise. And it makes for more readable code!

    Chaining using promises

The then function itself returns a promise, which can again be used to perform the next operation. Changing the first code snippet to use promises (assuming foo, bar, and baz are promisified) results in this:

foo(arg)
     .then((val) => bar(val + 1))
     .then((val1) => baz(val1 + 2))
     .then((result) => {
          console.log(result + 3); // 6
     })
     .catch(console.error);

As is evident, it makes the code more composed, readable, and easier to maintain. Also, instead of chaining we could have used Promise.all. Promise.all takes an array of promises as input and returns a single promise that resolves when all the promises in the array have resolved (and rejects as soon as any one of them rejects). Other useful information on promises can be found here.
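To illustrate, here is a small sketch with hypothetical promisified versions of foo, bar, and baz; since these calls do not depend on each other’s results, they can run concurrently under Promise.all:

```javascript
// Hypothetical promise-returning versions of foo, bar and baz
const foo = (n) => Promise.resolve(n + 1);
const bar = (n) => Promise.resolve(n + 2);
const baz = (n) => Promise.resolve(n + 3);

// Promise.all resolves once all three promises have resolved
Promise.all([foo(1), bar(2), baz(3)])
  .then(([a, b, c]) => console.log(a, b, c)) // prints 2 4 6
  .catch(console.error);
```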

    The async utility module

Async is a utility module which provides a set of over 70 functions that can be used to elegantly solve the problem of callback hell. All these functions follow the Node.js convention of error-first callbacks, which means the first callback argument is assumed to be an error (null in case of success). Let’s try to solve the same foo-bar-baz problem using the async module. Here is the code snippet:

    function foo(arg, callback) {
      if (arg < 0) {
        callback('error');
        return;
      }
      callback(null, arg+1);
    }
    
    function bar(arg, callback) {
      if (arg < 0) {
        callback('error');
        return;
      }
      callback(null, arg+2);
    }
    
    function baz(arg, callback) {
      if (arg < 0) {
        callback('error');
        return;
      }
      callback(null, arg+3);
    }
    
    async.waterfall([
      (cb) => {
        foo(0, cb);
      },
      (arg, cb) => {
        bar(arg, cb);
      },
      (arg, cb) => {
        baz(arg, cb);
      }
    ], (err, result) => {
      if (err) {
        console.log(err);
      } else {
        console.log(result); //6
      }
    });

Here, I have used the async.waterfall function as an example. There are multiple functions available depending on the nature of the problem you are trying to solve, like async.each for parallel execution over a collection, async.eachSeries for serial execution, etc.

    Async/Await

Now, this is one of the most exciting features to land in JavaScript. It internally uses promises but handles them in a more intuitive manner. Even though it seems like promises and/or third-party modules like async would solve most of the problems, a further simplification is always welcome! For those of you who have worked with C# async/await, this concept comes directly from there and was standardized in ES2017.

Async/await enables us to write asynchronous, promise-based code as if it were synchronous, but without blocking the main thread. An async function always returns a promise, whether await is used or not. But whenever an await is encountered, the function is paused until the awaited promise either resolves or rejects. The following code snippet should make it clearer:

async function asyncFun() {
  try {
    // somePromise stands for any promise-returning operation
    const result = await somePromise;
    return result;
  } catch (error) {
    console.log(error);
  }
}

Here, asyncFun is an async function that captures the promised result using await. This makes the code readable and is a major convenience for developers who are more comfortable with linearly executed languages, all without blocking the main thread.

Now, like before, let’s solve the foo-bar-baz problem using async/await. Note that foo, bar, and baz individually return promises just like before. But instead of chaining, we have written the code linearly.

async function fooBarBaz(arg) {
  try {
    const fooResponse = await foo(arg);
    const barResponse = await bar(fooResponse);
    const bazResponse = await baz(barResponse);

    return bazResponse;
  } catch (error) {
    throw new Error(error);
  }
}
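For completeness, here is the same flow as a self-contained sketch, with stub promise-returning implementations of foo, bar, and baz so it runs end to end:

```javascript
// Stub implementations that resolve with an incremented value,
// mirroring the callback versions used earlier
const foo = (n) => (n < 0 ? Promise.reject(new Error("error")) : Promise.resolve(n + 1));
const bar = (n) => (n < 0 ? Promise.reject(new Error("error")) : Promise.resolve(n + 2));
const baz = (n) => (n < 0 ? Promise.reject(new Error("error")) : Promise.resolve(n + 3));

// Each await feeds the previous step's result into the next one
async function fooBarBaz(arg) {
  const fooResponse = await foo(arg);
  const barResponse = await bar(fooResponse);
  return baz(barResponse);
}

fooBarBaz(0).then(console.log, console.error); // prints 6
```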

    How long should you (a)wait for async to come to fore?

Well, it’s already here in the Chrome 55 release and the latest update of the V8 engine. Native support in the language means we should see much more widespread use of this feature. The only catch is that if you want to use async/await on a codebase that isn’t promise-aware and is based completely on callbacks, it will probably require a lot of wrapping of the existing functions to make them usable.

To wrap up, async/await definitely makes coding numerous async operations an easier job. Although promises and callbacks will do the job for most, async/await looks like the way to make some architectural problems go away and improve code quality.