Tag: react

  • Building Google Photos Alternative Using AWS Serverless

Being an avid Google Photos user, I really love some of its features, such as albums, face search, and unlimited storage. However, when Google announced the end of unlimited storage on June 1st, 2021, I started thinking about how I could create a cheaper solution that would meet my photo backup requirements.

    “Taking an image, freezing a moment, reveals how rich reality truly is.”

    – Anonymous

Google offers 100 GB of storage for 130 INR, usable across various Google applications. However, I don’t use all that space. I snap photos casually: sometimes I visit places and take random shots with my DSLR and smartphone, so in general I upload approximately 200 photos monthly. These photos range from 4 MB to 30 MB each, which works out to roughly 4 GB of new storage per month. I keep raw photos, even the bad ones, backed up on my external hard drive. Photos backed up to the cloud should be visually high quality, and it’s good to have a raw copy available as well, so that you can make some Lightroom edits later (although I never touch them 😛). So, here are my minimal requirements:

    • Should support social authentication (Google sign-in preferred).
    • Photos should be stored securely in raw format.
    • Storage should be scaled with usage.
    • Uploading and downloading photos should be easy.
    • Web view for preview would be a plus.
    • Should have almost no operations headache and solution should be as cheap as possible 😉.

    Selecting Tech Stack

To avoid operational headaches with servers going down, scaling, application crashes, and overall monitoring, I opted for a serverless solution on AWS. Amazon S3 offers virtually infinite, scalable storage, and you pay only for the storage you use. On top of that, you can choose an S3 storage class that is efficient and cost-effective for your access pattern.

    – Infrastructure Stack

    1. AWS API Gateway (HTTP API)
    2. AWS Lambda (for processing images and API gateway queries)
    3. DynamoDB (for storing image metadata)
    4. AWS Cognito (for authentication)
    5. AWS S3 Bucket (for storage and web application hosting)
    6. AWS Certificate Manager (to use an SSL certificate for a custom domain with API gateway)

    – Software Stack

    1. NodeJS
    2. ReactJS and Material-UI (front-end framework and UI)
    3. AWS Amplify (for simplifying auth flow with cognito)
    4. Sharp (high-speed nodejs library for converting images)
    5. Express and serverless-http
    6. Infinite Scroller (for gallery view)
    7. Serverless Framework (for ease of deployment and Infrastructure as Code)

    Create S3 Buckets:

    We will create three S3 buckets: one for hosting the frontend application (refer to the architecture diagram; more on this in the build and hosting part), a second for temporarily uploading images, and a third for the actual backup and storage (enable server-side encryption on this bucket). Images landing in the temporary bucket are the ones that get processed.

    During pre-processing, we will resize the original image into two additional sizes: one for thumbnail purposes (400px width), and another for viewing purposes at reduced quality (webp format). Once the images are resized, upload all three (raw, thumbnail, and webview) to the third S3 bucket and create a record in DynamoDB. Set up an object expiry policy of 1 day on the temporary bucket so that uploaded objects are automatically deleted from it.

    Setup trigger on the temporary bucket for uploaded images:

    We will need to set up an S3 PUT event, which will trigger our Lambda function to download and process images. We will filter on the suffixes jpg and jpeg for the event trigger, meaning that any file with extension .jpg or .jpeg uploaded to our temporary bucket will automatically invoke the Lambda function with an event payload. Using the event payload, the Lambda function will download the uploaded file and process it. Your serverless function definition would look like:

    functions:
     lambda:
       handler: index.handler
       memorySize: 512
       timeout: 60
       layers:
         - {Ref: PhotoParserLibsLambdaLayer}
       events:
         - s3:
             bucket: your-temporary-bucket-name
             event: s3:ObjectCreated:*
             rules:
               - suffix: .jpg
             existing: true
         - s3:
             bucket: your-temporary-bucket-name
             event: s3:ObjectCreated:*
             rules:
               - suffix: .jpeg
             existing: true

    Notice that in the YAML events section, we set “existing: true”. This ensures that the bucket will not be created during the serverless deployment. However, if you don’t want to create the S3 bucket manually, you can let the framework create it for you.
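Inside the handler, the Lambda reads the bucket name and object key from the S3 event payload. One detail worth noting: S3 delivers keys URL-encoded, with spaces as `+`, so they must be decoded before fetching the object. A minimal sketch (the helper name and the trimmed event shape are assumptions):

```javascript
// Extract (bucket, key) pairs from an S3 PUT event payload.
// S3 URL-encodes keys and uses "+" for spaces, so decode first.
function extractUploads(event) {
  return event.Records.map((record) => ({
    bucket: record.s3.bucket.name,
    key: decodeURIComponent(record.s3.object.key.replace(/\+/g, " ")),
  }));
}

// Example event, trimmed to only the fields used above:
const sampleEvent = {
  Records: [
    { s3: { bucket: { name: "your-temporary-bucket-name" },
            object: { key: "my+holiday+photo.jpg" } } },
  ],
};
```

Each decoded pair is then used for the `GetObject` download before resizing.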

    DynamoDB as the metadata DB:

    AWS DynamoDB is a key-value document DB well suited to our use case: it will let us retrieve the list of photos in time-series order. DynamoDB uses a primary key to uniquely identify each record. A primary key is composed of a hash key and an optional range key (also called a sort key). We will use the federated identity ID (discussed in the authorization setup) as the hash key (partition key), with attribute name username and type string. For the range key, we will use an attribute named timestamp with type number; it will let us query results in time-series order (Unix epoch). We could also use DynamoDB secondary indexes to sort results more specifically, but to keep the application simple, we’re going to opt out of this feature for now. Your serverless resource definition would look like:

    resources:
     Resources:
       MetaDataDB:
         Type: AWS::DynamoDB::Table
         Properties:
           TableName: your-dynamodb-table-name
           AttributeDefinitions:
             - AttributeName: username
               AttributeType: S
             - AttributeName: timestamp
               AttributeType: N
           KeySchema:
             - AttributeName: username
               KeyType: HASH
             - AttributeName: timestamp
               KeyType: RANGE
           BillingMode: PAY_PER_REQUEST
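For reference, writing a metadata record against this key schema amounts to a `PutItem` call whose item carries `username` (the federated identity ID) and `timestamp` (Unix epoch). The sketch below builds such parameters for the DynamoDB DocumentClient; the helper name and the extra attributes holding the S3 keys of the three variants are illustrative assumptions:

```javascript
// Build PutItem params for the metadata table. "username" is the federated
// identity ID (hash key); "timestamp" is a Unix epoch number (range key).
function buildMetadataItem(tableName, identityId, keys, takenAt) {
  return {
    TableName: tableName,
    Item: {
      username: identityId, // partition key
      timestamp: takenAt,   // sort key, Unix epoch (number)
      raw: keys.raw,        // S3 keys of the three stored variants
      thumb: keys.thumb,
      webview: keys.webview,
    },
  };
}
```

The processing Lambda would pass this object to `documentClient.put(...)` after the variants are uploaded.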

    Finally, you also need to set up an IAM role so that the image-processing Lambda function has access to the S3 buckets and DynamoDB. Here is the serverless definition for the IAM role:

    # you can add statements to the Lambda function's IAM Role here
     iam:
       role:
         statements:
         - Effect: "Allow"
           Action:
             - "s3:ListBucket"
           Resource:
             - arn:aws:s3:::your-temporary-bucket-name
             - arn:aws:s3:::your-actual-photo-bucket-name
         - Effect: "Allow"
           Action:
             - "s3:GetObject"
             - "s3:DeleteObject"
           Resource: arn:aws:s3:::your-temporary-bucket-name/*
         - Effect: "Allow"
           Action:
             - "s3:PutObject"
           Resource: arn:aws:s3:::your-actual-photo-bucket-name/*
         - Effect: "Allow"
           Action:
             - "dynamodb:PutItem"
           Resource:
             - Fn::GetAtt: [ MetaDataDB, Arn ]

    Setup Authentication:

    Okay, to set up a Cognito user pool, head to the Cognito console and create a user pool with the below config:

    1. Pool Name: photobucket-users

    2. How do you want your end-users to sign in?

    • Select: Email Address or Phone Number
    • Select: Allow Email Addresses
    • Check: (Recommended) Enable case insensitivity for username input

    3. Which standard attributes are required?

    • email

    4. Keep the defaults for “Policies”

    5. MFA and Verification:

    • I opted to manually reset the password for each user (since this is an internal app)
    • Disabled user verification

    6. Keep the default for Message Customizations, tags, and devices.

    7. App Clients :

    • App client name: myappclient
    • Let the refresh token, access token, and id token be default
    • Check all “Auth flow configurations”
    • Check enable token revocation

    8. Skip Triggers

    9. Review and create the pool

    Once created, go to App Integration -> Domain Name. Create a Cognito subdomain of your choice and note it down. Next, I plan to use the Google sign-in feature with Cognito federated identity providers. Use this guide to set up Google as a social identity provider with Cognito.

    Setup Authorization:

    Once the user’s identity is verified, we need to allow them to access the S3 bucket with limited permissions. Head to the Cognito console, select Federated Identities, and create a new identity pool. Follow these steps to configure it:

    1. Identity pool name: photobucket_auth

    2. Keep Unauthenticated and Authentication flow settings unchecked.

    3. Authentication providers:

    • User Pool ID: Enter the user pool ID obtained during the authentication setup
    • App Client ID: Enter the app client ID generated during the authentication setup. (Cognito user pool -> App Clients -> App client ID)

    4. Setup permissions:

    • Expand view details (Role Summary)
    • For authenticated identities: edit the policy document and use the below JSON policy; keep the default configuration for unauthenticated identities.
    {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "mobileanalytics:PutEvents",
                   "cognito-sync:*",
                   "cognito-identity:*"
               ],
               "Resource": [
                   "*"
               ]
           },
           {
               "Sid": "ListYourObjects",
               "Effect": "Allow",
               "Action": "s3:ListBucket",
               "Resource": [
                   "arn:aws:s3:::your-actual-photo-bucket-name"
               ],
               "Condition": {
                   "StringLike": {
                       "s3:prefix": [
                           "${cognito-identity.amazonaws.com:sub}/",
                           "${cognito-identity.amazonaws.com:sub}/*"
                       ]
                   }
               }
           },
           {
               "Sid": "ReadYourObjects",
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": [
                   "arn:aws:s3:::your-actual-photo-bucket-name/${cognito-identity.amazonaws.com:sub}",
                   "arn:aws:s3:::your-actual-photo-bucket-name/${cognito-identity.amazonaws.com:sub}/*"
               ]
           }
       ]
    }

    ${cognito-identity.amazonaws.com:sub} is a special AWS policy variable. When a user authenticates through a federated identity, they are assigned a unique identity ID. The above policy means that any authenticated user has access only to objects prefixed by their own identity ID. This is how we confine each user’s authorization to a limited area within the S3 bucket.

    Copy the Identity Pool ID (from the sample code section). You will need it in your backend to resolve the authenticated user’s identity ID from their JWT token.
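To resolve that identity ID on the backend, the JWT is passed to Cognito Federated Identities in a `Logins` map whose key is the user pool issuer (`cognito-idp.<region>.amazonaws.com/<userPoolId>`). A sketch of building the `GetId` parameters (the helper name is an assumption); the result would then be passed to `cognitoIdentity.getId(...)`:

```javascript
// Build GetId params for Cognito Federated Identities. The Logins map key
// is the user pool issuer URL; the value is the user's ID token (JWT).
function buildGetIdParams(identityPoolId, region, userPoolId, jwtToken) {
  return {
    IdentityPoolId: identityPoolId,
    Logins: {
      [`cognito-idp.${region}.amazonaws.com/${userPoolId}`]: jwtToken,
    },
  };
}
```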

    Amplify configuration for the frontend UI sign-in:

    This object helps you set up the minimal configuration for your application. This is all that we need to sign in via Cognito and access the S3 photo bucket.

    const awsconfig = {
       Auth : {
        identityPoolId: "identity pool id created during authorization setup",
           region : "your aws region",
           identityPoolRegion: "same as above if cognito is in same region",
           userPoolId : "cognito user pool id created during authentication setup",
           userPoolWebClientId : "cognito app client id",
           cookieStorage : {
            domain : "your-app-domain-name", // just the domain, no scheme (this is very important)
               secure: true
           },
           oauth: {
               domain : "{cognito domain name}.auth.{cognito region name}.amazoncognito.com",
               scope : ["profile","email","openid"],
               redirectSignIn: 'https://your-app-domain-name',
               redirectSignOut: 'https://your-app-domain-name',
               responseType : "token"
           }
       },
       Storage: {
           AWSS3 : {
               bucket: "your-actual-bucket-name",
               region: "region-of-your-bucket"
           }
       }
    };
    export default awsconfig;

    You can then use the below code to configure and sign in via social authentication.

    import Amplify, {Auth} from 'aws-amplify';
    import awsconfig from './aws-config';
    Amplify.configure(awsconfig);
    //once Amplify is configured, you can call federatedSignIn from the onClick event of a button or any other visual component to sign in.
    //Example
    <Button startIcon={<img alt="Sign in with Google" src={logo} />} fullWidth variant="outlined" color="primary" onClick={() => Auth.federatedSignIn({provider: 'Google'})}>
       Sign in with Google
    </Button>

    Gallery View:

    When the application loads, we use the PhotoGallery component to load photos and show thumbnails on the page. The PhotoGallery component is a wrapper around the InfiniteScroller component, which keeps loading images as the user scrolls. The idea is that we query a maximum of 10 images in one go. Our backend returns a list of 10 images (just the metadata and their S3 keys). We load these images from the S3 bucket and show the thumbnails on screen as a gallery view. When the user reaches the bottom of the screen, or there is empty space left, the InfiniteScroller component loads 10 more images. This continues until our backend replies with a stop marker.
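The page-of-10 behaviour maps naturally onto a DynamoDB Query with `Limit: 10`: DynamoDB returns a `LastEvaluatedKey` with each partial result, the backend can echo it back to the client as a continuation token, and its absence serves as the stop marker. A sketch of building such query parameters (the helper name and the newest-first ordering are assumptions; the attribute names match the table defined earlier):

```javascript
// Build Query params for one page of photo metadata. Passing the previous
// page's LastEvaluatedKey as ExclusiveStartKey resumes where that page
// ended; when DynamoDB returns no LastEvaluatedKey, there are no more photos.
function buildPhotoPageQuery(tableName, identityId, exclusiveStartKey) {
  const params = {
    TableName: tableName,
    KeyConditionExpression: "username = :u",
    ExpressionAttributeValues: { ":u": identityId },
    Limit: 10,
    ScanIndexForward: false, // newest first (descending timestamp)
  };
  if (exclusiveStartKey) {
    params.ExclusiveStartKey = exclusiveStartKey;
  }
  return params;
}
```

The backend hands this object to `documentClient.query(...)` on each page request.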

    The key point here is that we need to send the JWT token as a header to our backend service via an ajax call. The JWT token is obtained from the Amplify framework after sign-in. An example of obtaining a JWT token:

    let authsession = await Auth.currentSession();
    let jwtToken = authsession.getIdToken().jwtToken;
    let photoList = await axios.get(url,{
       headers : {
           Authorization: jwtToken
       },
       responseType : "json"
    });

    An example of infinite scroller component usage is given below. Note that “gallery” is a JSX array of photo thumbnails. The “loadMore” method calls our ajax function against the server-side backend, updates the “gallery” variable, and sets the “hasMore” variable to true/false so that the infinite scroller component can stop querying when there are no photos left to display.

    <InfiniteScroll
       loadMore={this.fetchPhotos}
       hasMore={this.state.hasMore}
       loader={<div style={{padding:"70px"}} key={0}><LinearProgress color="secondary" /></div>}
    >
       <div style={{ marginTop: "80px", position: "relative", textAlign: "center" }}>
           <div className="image-grid" style={{ marginTop: "30px" }}>
               {gallery}
           </div>
           {this.state.openLightBox ?
           <LightBox src={this.state.lightBoxImg} callback={this.closeLightBox} />
           : null}
       </div>
    </InfiniteScroll>

    The LightBox component gives a zoom effect to the thumbnail. When a thumbnail is clicked, a higher-resolution picture (the webp version) is downloaded from the S3 bucket and shown on screen. We use the Storage object from the Amplify library. The downloaded content is a Blob and must be converted into image data; to do so, we use the native JavaScript method URL.createObjectURL. Below is sample code that downloads the object from the S3 bucket and converts it into a viewable image for the HTML img tag.

    thumbClick = (index) => {
       const urlCreator = window.URL || window.webkitURL;
       this.setState({
           openLightBox: true
       });
       Storage.get(this.state.photoList[index].src, {download: true})
           .then((data) => {
               // the downloaded content is a Blob; convert it to an object URL
               let image = urlCreator.createObjectURL(data.Body);
               this.setState({
                   lightBoxImg: image
               });
           })
           .catch((error) => {
               // a plain try/catch would miss errors from the async download,
               // so handle them in the promise chain instead
               console.log(error);
               this.setState({
                   openLightBox: false,
                   lightBoxImg: null
               });
           });
    };

    Uploading Photos:

    The S3 SDK lets you generate a pre-signed POST URL. Anyone who has this URL can upload objects directly to the S3 bucket without needing credentials. Of course, we can set up some boundaries, like a maximum object size, the key of the uploaded object, etc. Refer to this AWS blog for more on pre-signed URLs. Here is sample code to generate a pre-signed URL:

    let s3Params = {
       Bucket: "your-temporary-bucket-name",
       Conditions : [
           ["content-length-range", 1, 31457280] // 1 byte to 30 MB
       ],
       Fields : {
           key: "path/to/your/object"
       },
       Expires: 300 //in seconds
    };
    const s3 = new S3({region : process.env.AWSREGION });
    const presignedPost = s3.createPresignedPost(s3Params); // returns { url, fields }

    For a better UX, we can allow our users to upload more than one photo at a time. However, a pre-signed URL lets you upload only a single object. To overcome this, we generate multiple pre-signed URLs. Once the user selects photos to upload, we send a request to our backend with the expected keys, and the backend generates one pre-signed URL per photo. Our frontend React app then provides the illusion that all photos are being uploaded as one batch.
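That fan-out can be sketched as a simple map from the requested filenames to one pre-signed POST each. Here `makePresignedPost` stands in for a wrapper around the `createPresignedPost` call shown above, and the `{identityId}/{filename}` key layout is a hypothetical one:

```javascript
// For each selected photo, produce one pre-signed POST request. The caller
// supplies makePresignedPost (wrapping s3.createPresignedPost), which keeps
// this helper pure and testable.
function prepareUploads(identityId, filenames, makePresignedPost) {
  return filenames.map((name) => ({
    filename: name,
    upload: makePresignedPost(`${identityId}/${name}`),
  }));
}
```

The frontend then POSTs each file to its own `upload.url` with the returned `fields`.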

    When the upload is successful, the S3 PUT event is triggered, which we discussed earlier. The complete flow of the application is given in a sequence diagram. You can find the complete source code here in my GitHub repository.

    React Build Steps and Hosting:

    The ideal way to build the React app is to execute npm run build. However, we take a slightly different approach: we are not using an S3 static website to serve the frontend UI, because S3 static websites don’t support SSL unless we put CloudFront in front of them. Therefore, we will make the API gateway our application’s entry point, so the UI will also be served from the API gateway. However, we want to reduce the calls made to the API gateway. For this reason, we will deliver only the index.html file via API Gateway/Lambda, and serve the rest of the static files (React’s supporting JS files) from the S3 bucket.

    Your index.html should have all reference paths pointed at the S3 bucket. The build must explicitly specify that the static files are located somewhere other than relative to the index.html file. Your S3 bucket needs to be public with the right bucket policy and CORS set, so that end-users can only retrieve files and cannot upload nasty objects. Those who are confused about how an S3 static website differs from a public S3 bucket may refer here. Below are the React build steps, bucket policy, and CORS.

    PUBLIC_URL=https://{your-static-bucket-name}.s3.{aws_region}.amazonaws.com/ npm run build
    //Bucket Policy
    {
       "Version": "2012-10-17",
       "Id": "http referer from your domain only",
       "Statement": [
           {
               "Sid": "Allow get requests originating from",
               "Effect": "Allow",
               "Principal": "*",
               "Action": "s3:GetObject",
               "Resource": "arn:aws:s3:::{your-static-bucket-name}/static/*",
               "Condition": {
                   "StringLike": {
                       "aws:Referer": [
                           "https://your-app-domain-name"
                       ]
                   }
               }
           }
       ]
    }
    //CORS
    [
       {
           "AllowedHeaders": [
               "*"
           ],
           "AllowedMethods": [
               "GET"
           ],
           "AllowedOrigins": [
               "https://your-app-domain-name"
           ],
           "ExposeHeaders": []
       }
    ]

    Once the build is complete, bundle index.html with the Lambda that serves your UI. Run the below shell commands to compress the static contents and host them in our static S3 bucket.

    #assuming you are in your react app directory
    mkdir /tmp/s3uploads
    cp -ar build/static /tmp/s3uploads/
    cd /tmp/s3uploads
    #add gzip encoding to all the files
    gzip -9 `find ./ -type f`
    #remove .gz extension from compressed files
    for i in `find ./ -type f`
    do
       mv $i ${i%.*}
    done
    #sync your files to s3 static bucket and mention that these files are compressed with gzip encoding
    #so that browser will not treat them as regular files
    aws s3 --region $AWSREGION sync . s3://${S3_STATIC_BUCKET}/static/ --content-encoding gzip --delete --sse
    cd -
    rm -rf /tmp/s3uploads

    Our backend uses the Node.js Express framework. Since this is a serverless application, we need to wrap Express with serverless-http so it can run on Lambda. Sample source code is given below, along with the Serverless Framework resource definition. Notice that, except for the UI home endpoint ( “/” ), the API endpoints are authenticated with Cognito at the API gateway itself.

    const serverless = require("serverless-http");
    const express = require("express");
    const app = express();
    .
    .
    .
    .
    .
    .
    app.get("/",(req,res)=> {
     res.sendFile(path.join(__dirname + "/index.html"));
    });
    module.exports.uihome = serverless(app);

    provider:
     name: aws
     runtime: nodejs12.x
     lambdaHashingVersion: 20201221
     httpApi:
       authorizers:
         cognitoJWTAuth:
           identitySource: $request.header.Authorization
           issuerUrl: https://cognito-idp.{AWS_REGION}.amazonaws.com/{COGNITO_USER_POOL_ID}
           audience:
             - COGNITO_APP_CLIENT_ID
    .
    .
    .
    .
    .
    .
    .
    functions:
     react-serve-ui:
       handler: handler.uihome
       memorySize: 256
       timeout: 29
       layers:
         - {Ref: CommonLibsLambdaLayer}
       events:
         - httpApi:
             path: /prep/photoupload
             method: post
             authorizer:
               name: cognitoJWTAuth
         - httpApi:
             path: /list/photos
             method: get
             authorizer:
               name: cognitoJWTAuth
         - httpApi:
             path: /
             method: get

    Final Steps:

    Lastly, we will set up a custom domain, plus a certificate for it, so that we don’t need to use the gibberish domain name generated by the API gateway. You don’t need to use Route 53 for this part. If you have an existing domain, you can create a subdomain and point it to the API gateway. First things first: head to the AWS ACM console and generate a certificate for the domain name. Once the request is generated, you need to validate your domain by creating a CNAME record as shown in the ACM console. ACM is a free service. Domain verification may take a few minutes to several hours. Once you have the certificate ready, head back to the API gateway console, navigate to “Custom domain names”, and click create.

    1. Enter your application domain name
    2. Check TLS 1.2 as TLS version
    3. Select Endpoint type as Regional
    4. Select ACM certificate from dropdown list
    5. Create domain name

    Select the newly created custom domain. Note the API gateway domain name from Domain Details -> Configuration tab. You will need this to map a CNAME/ALIAS record with your DNS provider. Click on the API mappings tab. Click configure API mappings. From the dropdown, select your API gateway, select stage as default, and click save. You are done here.

    Future Scope and Improvements :

    To improve application latency, we can use CloudFront as a CDN. This way, our entry point could be S3, and we would no longer need the API gateway regional endpoint. We can also add AWS WAF in front of our API gateway as added security, to inspect incoming requests and payloads. We can also use DynamoDB secondary indexes so that we can search metadata in the table efficiently. We can add a lifecycle rule that transitions raw photos older than a year to the S3 Glacier storage class, and can further add a Glacier Deep Archive transition to save even more on storage costs.
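Such a lifecycle rule can be sketched as a bucket lifecycle configuration like the one below. The day counts are illustrative, and note that lifecycle transitions are based on object age rather than last access; access-pattern-based tiering would need S3 Intelligent-Tiering instead:

```json
{
  "Rules": [
    {
      "ID": "archive-raw-photos",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 365, "StorageClass": "GLACIER" },
        { "Days": 730, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```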

  • Cleaner, Efficient Code with Hooks and Functional Programming

    React Hooks were introduced in 2018, and numerous POCs have been built around them ever since. Hooks arrived at a time when React had become the norm and class components were becoming increasingly complex. In this blog, I will showcase how Hooks can reduce the size of your code by up to 90%. Yes, you heard that right. Exciting, isn’t it?

    Hooks are a powerful upgrade that arrived with React 16.8 and embrace the functional programming paradigm. React, however, also acknowledges the volume of class components already built, and therefore remains backward compatible. You can practice by refactoring a small chunk of your codebase to React Hooks without impacting existing functionality.

    In this article, I will show you how Hooks can help you write cleaner, smaller, and more efficient code. Up to 90% smaller, remember!

    First, let’s list out the common problems we all face with React Components as they are today:

    1. Huge Components – caused by logic being distributed across lifecycle methods

    2. Wrapper Hell – caused by re-using components

    3. Confusing and hard to understand classes

    In my opinion, these are symptoms of one big problem: React does not provide a stateful primitive that is simpler, smaller, and more lightweight than a class component. That is why solving one problem worsens another. For example, if we put all of the logic into one component to fix Wrapper Hell, we get Huge Components that are hard to refactor. On the other hand, if we divide huge components into smaller reusable pieces, we get more nesting in the component tree, i.e. Wrapper Hell. In either case, there’s always confusion around the classes.

    Let’s approach these problems one by one and solve them in isolation.

    Huge Components –

    We have all used lifecycle methods, and over time they accumulate more and more stateful logic. Often the same stateful logic is spread across several lifecycle methods. For example, suppose you have code that adds an event listener in componentDidMount. The componentDidUpdate method might also contain logic for setting up event listeners, and the cleanup code is written in componentWillUnmount. Notice how the logic for a single concern is split across these lifecycle methods.

    // Class component
    
    import React from "react";
    
    export default class LazyLoader extends React.Component {
      constructor(props) {
        super(props);
    
        this.state = { data: [] };
      }
    
      loadMore = () => {
        // Load More Data
        console.log("loading data");
      };
    
      handleScroll = () => {
        if (!this.props.isLoading && this.props.isCompleted) {
          this.loadMore();
        }
      };
    
      componentDidMount() {
        this.loadMore();
        document.addEventListener("scroll", this.handleScroll, false);
        // more subscribers and event listeners
      }
    
      componentDidUpdate() {
        //
      }
    
      componentWillUnmount() {
        document.removeEventListener("scroll", this.handleScroll, false);
        // unsubscribe and remove listeners
      }
    
      render() {
        return <div>{this.state.data}</div>;
      }
    }

    React Hooks approach this with useEffect.

    import React, { useEffect, useState } from "react";
    
    export const LazyLoader = ({ isLoading, isCompleted }) => {
      const [data, setData] = useState([]);
    
      const loadMore = () => {
        // Load and setData here
      };
    
      const handleScroll = () => {
        if (!isLoading && isCompleted) {
          loadMore();
        }
      };
    
      // cDM and cWU
      useEffect(() => {
        document.addEventListener("scroll", handleScroll, false);
        // more subscribers and event listeners
    
        return () => {
          document.removeEventListener("scroll", handleScroll, false);
          // unsubscribe and remove listeners
        };
      }, []);
    
      // cDU
      useEffect(() => {
        //
      }, [/** dependencies */]);
    
      return data && <div>{data}</div>;
    };

    Now, let’s move the logic to a custom Hook.

    import { useEffect, useState } from "react";
    
    export function useScroll(isLoading, isCompleted) {
      const [data, setData] = useState([]);
    
      const loadMore = () => {
        // Load and setData here
      };
    
      const handleScroll = () => {
        if (!isLoading && isCompleted) {
          loadMore();
        }
      };
    
      // cDM and cWU
      useEffect(() => {
        document.addEventListener("scroll", handleScroll, false);
        // more subscribers and event listeners
    
        return () => {
          document.removeEventListener("scroll", handleScroll, false);
          // unsubscribe and remove listeners
        };
      }, []);
    
      return data;
    };

    import React, { useEffect } from "react";
    import { useScroll } from "./useScroll";
    
    const LazyLoader = ({ isLoading, isCompleted }) => {
      const data = useScroll(isLoading, isCompleted);
    
      // cDU
      useEffect(() => {
        //
      }, [/** dependencies */]);
    
      return data && <div>{data}</div>;
    };

    useEffect puts the code that changes together in one place, making it more readable and easier to understand. You can also write multiple useEffects; the advantage, again, is separating out mutually unrelated code.

    Wrapper Hell –

    If you’re well versed in React, you probably know it doesn’t provide a way to attach reusable behavior to a component (the way “connect” does in react-redux). React solves this data-sharing problem with the render props and higher-order component patterns. But using these requires restructuring your components, which is hard to follow and, at times, cumbersome. This typically leads to a problem called Wrapper Hell. You can see it by inspecting the application in React DevTools: components are wrapped in layers of providers, consumers, HOCs, and other abstractions. Because of this, React needed a better way of sharing logic.

    The below code is inspired by React Conf 2018 – 90% cleaner React w/ Hooks.

    import React from "react";
    import Media from "./components/Media";
    
    function App() {
      return (
        <Media query="(max-width: 480px)">
          {small => (
            <Media query="(min-width: 1024px)">
              {large => (
                <div className="media">
                  <h1>Media</h1>
                  <p>{small ? "small screen" : "not a small screen"}</p>
                  <p>{large ? "large screen" : "not a large screen"}</p>
                </div>
              )}
            </Media>
          )}
        </Media>
      );
    }
    
    export default App;

    import React from "react";
    
    export default class Media extends React.Component {
      removeListener = () => null;
    
      constructor(props) {
        super(props);
        this.state = {
          matches: window.matchMedia(this.props.query).matches
        };
      }
    
      componentDidMount() {
        this.init();
      }
    
      init() {
        const media = window.matchMedia(this.props.query);
        if (media.matches !== this.state.matches) {
          this.setState({ matches: media.matches });
        }
    
        const listener = () => this.setState({ matches: media.matches });
        media.addListener(listener);
        this.removeListener = () => media.removeListener(listener);
      }
    
      componentDidUpdate(prevProps) {
        if (prevProps.query !== this.props.query) {
          this.removeListener();
          this.init();
        }
      }
    
      componentWillUnmount() {
        this.removeListener();
      }
    
      render() {
        return this.props.children(this.state.matches);
      }
    }

    The example below shows how Hooks fix this problem.

    import { useState, useEffect } from "react";
    
    export default function(query) {
      let [matches, setMatches] = useState(window.matchMedia(query).matches);
    
      useEffect(() => {
        let media = window.matchMedia(query);
        if (media.matches !== matches) {
          setMatches(media.matches);
        }
        const listener = () => setMatches(media.matches);
        media.addListener(listener);
        return () => media.removeListener(listener);
      }, [query, matches]);
    
      return matches;
    }

    import React from "react";
    import useMedia from "./hooks/useMedia";
    
    function App() {
      let small = useMedia("(max-width: 480px)");
      let large = useMedia("(min-width: 1024px)");
      return (
        <div className="media">
          <h1>Media</h1>
          <p>{small ? "small screen" : "not a small screen"}</p>
          <p>{large ? "large screen" : "not a large screen"}</p>
        </div>
      );
    }
    
    export default App;

    Hooks provide a way to extract reusable stateful logic from a component without affecting the component hierarchy. This also lets the logic be tested independently.

    Confusing and hard to understand classes

    Classes pose more problems than they solve. We’ve known React for a very long time, and there’s no denying that classes are hard for humans as well as for machines. They confuse both of them. Here’s why:

    For Humans –

    1. There’s a fair amount of boilerplate when defining a class.

    2. Beginners and even expert developers find it difficult to bind methods and to write class components.

    3. People often can’t decide between function and class components, since over time they might need state.

    For Machines –

    1. In the minified version of a component file, the method names are not minified and the unused methods are not stripped out, as it’s not possible to tell how all the methods fit together.

    2. Classes make it difficult for React to implement hot loading reliably.

    3. Classes encourage patterns that make it difficult for the compiler to optimize.

    Due to the above problems, classes can be a large barrier to learning React. To keep React relevant, the community has been experimenting with component folding and Prepack, but classes make these optimizations fall back to a slower path. Hence, the community wanted an API that makes it more likely for code to stay on the optimizable path.

    React components have always been closer to functions. And since Hooks introduced stateful logic into functional components, it lets you use more of React’s features without classes. Hooks embrace functions without compromising the practical spirit of React. Hooks don’t require you to learn complex functional and reactive programming techniques.

    Conclusion –

    React Hooks got me excited, and I am learning new things every day. Hooks are a way to write far less code for the same use case. Also, Hooks don’t ask developers who are already busy shipping to rewrite everything. You can redo small components with Hooks and slowly move to the complex components later.

    The thinking process in Hooks is meant to be gradual. I hope this blog makes you want to get your hands dirty with Hooks. Do share your thoughts and experiences with Hooks. Finally, I would strongly recommend this official documentation which has great content.

    Recommended Reading: React Today and Tomorrow and 90% Cleaner React with Hooks

  • Acquiring Temporary AWS Credentials with Browser Navigated Authentication

    In one of my previous blog posts (Hacking your way around AWS IAM Roles), we demonstrated how users can access AWS resources without having to store AWS credentials on disk. This was achieved by setting up an OpenVPN server and a client-side route that gets pushed automatically when the user connects to the VPN. To this date, I find this a compliance-friendly solution that doesn’t force users to do any manual configuration on their systems. It also makes sense to have access to AWS resources only as long as they are connected to the VPN. One downside to this method is maintaining the OpenVPN server, keeping it secure, and running it in a highly available (HA) state. If the OpenVPN server is compromised, our credentials are at stake. Secondly, all users connected to the VPN get the same level of access.

    In this blog post, we present a CLI utility written in Rust that writes temporary AWS credentials to a user profile (the ~/.aws/credentials file) using web-browser-navigated Google authentication. This utility is inspired by gimme-aws-creds (written in Python for an Okta-authenticated AWS farm) and the Heroku CLI (written in Node.js using the oclif framework). We will refer to our utility as auth-awscreds throughout this post.

    “If you have an apple and I have an apple and we exchange these apples then you and I will still each have one apple. But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas.”

    – George Bernard Shaw

    What does this CLI utility (auth-awscreds) do?

    When the user fires the auth-awscreds command on the terminal, our program reads the utility configuration from the .auth-awscreds file located in the user’s home directory. If this file is not present, the utility prompts for the configuration on first run. The configuration file is in INI format. The program then opens the default web browser and navigates to the URL read from the configuration file. At this point, the utility waits for the browser to navigate and authorize. The web UI then redirects to Google authentication. If authentication is successful, a callback is sent to the CLI utility along with temporary AWS credentials, which are then written to the ~/.aws/credentials file.
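    Since the configuration file is plain INI, parsing it takes only a few lines in any language. Below is a minimal JavaScript sketch of such a parser; the section and key names in the sample are assumptions for illustration, not taken from the real file.

    ```javascript
    // A tiny INI reader sketch for a file like ~/.auth-awscreds.
    // Section and key names below are hypothetical examples.
    const parseIni = (text) => {
      const result = {};
      let section = "default";
      for (const raw of text.split("\n")) {
        const line = raw.trim();
        if (!line || line.startsWith(";") || line.startsWith("#")) continue;
        const header = line.match(/^\[(.+)\]$/);
        if (header) {
          section = header[1];
          continue;
        }
        const eq = line.indexOf("=");
        if (eq === -1) continue;
        if (!result[section]) result[section] = {};
        result[section][line.slice(0, eq).trim()] = line.slice(eq + 1).trim();
      }
      return result;
    };

    const config = parseIni(`
    [default]
    profile_name = default
    app_domain = https://auth.example.com
    `);
    console.log(config.default.app_domain); // https://auth.example.com
    ```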

    Block Diagram

    Tech Stack Used

    As stated earlier, we wrote this utility in Rust. One of the reasons for choosing Rust is that we wanted a statically linked binary (ELF) file that executes independently of an interpreter and ships as-is once compiled. For programs written in Python or Node.js, one needs the language interpreter and supporting libraries installed. Go would also have sufficed for our purpose, but I prefer Rust.

    Software Stack:

    • Rust (for CLI utility)
    • Actix Web – HTTP Server
    • Node.js, Express, ReactJS, serverless-http, aws-sdk, AWS Amplify, axios
    • Terraform and serverless framework

    Infrastructure Stack:

    • AWS Cognito (User Pool and Federated Identities)
    • AWS API Gateway (HTTP API)
    • AWS Lambda
    • AWS S3 Bucket (React App)
    • AWS CloudFront (For Serving React App)
    • AWS ACM (SSL Certificate)

    Recipe

    Architecture Diagram

    CLI Utility: auth-awscreds

    Our goal is that when the auth-awscreds command is fired, we first check whether the ~/.aws/credentials file exists in the user’s home directory. If not, we create the ~/.aws directory. This is the default AWS credentials directory, where the AWS SDK usually looks for credentials (unless overridden by the env var AWS_SHARED_CREDENTIALS_FILE). The next step is to check whether a ~/.auth-awscreds file exists. If this file doesn’t exist, we prompt the user for two inputs:

    1. AWS credentials profile name (used by SDK, default is preferred) 

    2. Application domain URL (Our backend app domain is used for authentication)

    let app_profile_file = format!("{}/.auth-awscreds", &user_home_dir);

    let config_exist: bool = Path::new(&app_profile_file).exists();

    let mut profile_name = String::new();
    let mut app_domain = String::new();

    if !config_exist {
        // ask the series of questions
        print!("Which profile to write AWS Credentials [default] : ");
        io::stdout().flush().unwrap();
        io::stdin()
            .read_line(&mut profile_name)
            .expect("Failed to read line");

        print!("App Domain : ");
        io::stdout().flush().unwrap();

        io::stdin()
            .read_line(&mut app_domain)
            .expect("Failed to read line");

        profile_name = String::from(profile_name.trim());
        app_domain = String::from(app_domain.trim());

        config_profile(&profile_name, &app_domain);
    } else {
        (profile_name, app_domain) = read_profile();
    }

    These two properties are written to ~/.auth-awscreds under the default section. Next, our utility generates a 1024-bit RSA keypair. Both keys are then converted to base64.

    pub fn genkeypairs() -> (String,String) {
       let rsa = Rsa::generate(1024).unwrap();
     
       let private_key: Vec<u8> = rsa.private_key_to_pem_passphrase(Cipher::aes_128_cbc(),"Sagar Barai".as_bytes()).unwrap();
       let public_key: Vec<u8> = rsa.public_key_to_pem().unwrap();
     
       (base64::encode(private_key) , base64::encode(public_key))
    }

    We then launch a browser window and navigate to the specified app domain URL. At this stage, our utility starts a temporary web server with the help of the Actix Web framework, listening on port 63442 of localhost.

    println!("Opening web ui for authentication...!");
    open::that(&app_domain).unwrap();

    HttpServer::new(move || {
        let cors = Cors::permissive();
        App::new()
            .wrap(cors)
            .app_data(crypto_data.clone())
            .service(get_public_key)
            .service(set_aws_creds)
    })
    .bind(("127.0.0.1", 63442))?
    .run()
    .await
    The localhost web server has two endpoints.

    1. GET Endpoint (/publickey): This endpoint is called by our React app after authentication and returns the public key created during the initialization process. Since the web server hosted by the Rust application is insecure (non-SSL), when the actual AWS credentials are received, they should be posted as an encrypted string produced with this public key.

    #[get("/publickey")]
    pub async fn get_public_key(data: web::Data<AppData>) -> impl Responder {
       let public_key = &data.public_key;
      
       web::Json(HTTPResponseData{
           status: 200,
           msg: String::from("Ok"),
           success: true,
           data: String::from(public_key)
       })
    }

    2. POST Endpoint (/setcreds): This endpoint is called when the React app has successfully retrieved credentials from API Gateway. The credentials are decrypted with the private key and then written to the ~/.aws/credentials file under the profile name defined in the utility configuration.

    let encrypted_data = payload["data"].as_array().unwrap();
    let username = payload["username"].as_str().unwrap();

    let mut decrypted_payload = vec![];

    for item in encrypted_data.iter() {
        let s = item.as_str().unwrap();
        let decrypted = decrypt_data(&private_key, &s.to_string());
        decrypted_payload.extend_from_slice(&decrypted);
    }

    let credentials: serde_json::Value =
        serde_json::from_str(&String::from_utf8(decrypted_payload).unwrap()).unwrap();

    let aws_creds = AWSCreds {
        profile_name: String::from(profile_name),
        aws_access_key_id: String::from(credentials["AccessKeyId"].as_str().unwrap()),
        aws_secret_access_key: String::from(credentials["SecretAccessKey"].as_str().unwrap()),
        aws_session_token: String::from(credentials["SessionToken"].as_str().unwrap())
    };

    println!("Authenticated as {}", username);
    println!("Updating AWS Credentials File...!");

    configcreds(&aws_creds);

    One of the interesting parts of this code is the decryption process, which iterates through an array of strings joined by decrypted_payload.extend_from_slice(&decrypted);. RSA-1024 encrypts in 128-byte blocks, and we used OAEP padding, which reserves 42 bytes for padding and leaves the rest for data. Thus, at most 86 bytes can be encrypted per block. So, when the credentials are received, they arrive as an array of base64-encoded, 128-byte-long blocks. One has to decode each base64 string to a data buffer and then decrypt the data piece by piece.
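    The chunking arithmetic can be checked end to end with Node’s built-in crypto module, whose publicEncrypt/privateDecrypt default to OAEP padding as well. This is an illustrative sketch with made-up credential values, not the project’s actual code:

    ```javascript
    const crypto = require("crypto");

    // 1024-bit RSA pair, like the CLI's genkeypairs().
    const { publicKey, privateKey } = crypto.generateKeyPairSync("rsa", {
      modulusLength: 1024,
    });

    // RSA-1024 blocks are 128 bytes; OAEP (SHA-1) reserves 42 of them,
    // so at most 86 bytes of plaintext fit per block.
    const CHUNK = 86;

    const plaintext = JSON.stringify({
      AccessKeyId: "EXAMPLEKEYID",
      SecretAccessKey: "x".repeat(40),
      SessionToken: "t".repeat(200),
    });

    // Encrypt in <=86-byte pieces; each ciphertext block is 128 bytes,
    // shipped over the wire as base64.
    const pieces = plaintext.match(new RegExp(`.{1,${CHUNK}}`, "g"));
    const encrypted = pieces.map((p) =>
      crypto.publicEncrypt(publicKey, Buffer.from(p)).toString("base64")
    );

    // Decrypt block by block and join, mirroring the extend_from_slice loop.
    const decrypted = Buffer.concat(
      encrypted.map((e) => crypto.privateDecrypt(privateKey, Buffer.from(e, "base64")))
    ).toString();

    console.log(decrypted === plaintext); // true
    ```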

    To build a release binary, run: cargo build --release

    AWS Cognito and Google Authentication

    This guide does not cover how to set up Cognito and integration with Google Authentication. You can refer to our old post for a detailed guide on setting up authentication and authorization. (Refer to the sections Setup Authentication and Setup Authorization).

    React App:

    The React app is launched via our Rust CLI utility and is served right from the S3 bucket via CloudFront. When the React app loads, it checks whether the current session is authenticated. If not, then with the help of the AWS Amplify framework, the app is redirected to the Cognito-hosted UI authentication, which in turn redirects to the Google login page.

    render(){
       return (
         <div className="centerdiv">
           {
             this.state.appInitialised ?
               this.state.user === null ? Auth.federatedSignIn({provider: 'Google'}) :
               <Aux>
                 {this.state.pageContent}
               </Aux>
             :
             <Loader/>
           }
         </div>
       )
     }

    Once the session is authenticated, we set the React state variables and then retrieve the public key from the Actix web server (the Rust CLI app, auth-awscreds) by calling the /publickey GET method. After this, an Ajax POST request (/auth-creds) is made via the axios library to API Gateway. The payload contains the public key and a JWT token for authentication. The expected response from API Gateway is the encrypted temporary AWS credentials, which are then proxied to our CLI application.

    To ease this deployment, we have written Terraform code (available in the repository) that takes care of creating the S3 bucket, CloudFront distribution, and ACM certificate, building the React app, and deploying it to the S3 bucket (navigate to the vars.tf file and change the respective default variables). The Terraform script will fail on first launch since ACM needs DNS record validation. Create a CNAME record for DNS validation and re-run the Terraform script to continue the deployment. The React app expects a few environment variables. Below is a sample .env file; update the respective values for your environment.

    REACT_APP_IDENTITY_POOL_ID=
    REACT_APP_COGNITO_REGION=
    REACT_APP_COGNITO_USER_POOL_ID=
    REACT_APP_COGNTIO_DOMAIN_NAME=
    REACT_APP_DOMAIN_NAME=
    REACT_APP_CLIENT_ID=
    REACT_APP_CLI_APP_URL=
    REACT_APP_API_APP_URL=

    Finally, deploy the React app using the sample commands below.

    $ terraform plan -out plan     #creates plan for revision
    $ terraform apply plan         #apply plan and deploy

    API Gateway HTTP API and Lambda Function

    When a request is first intercepted by API Gateway, it validates the JWT token on its own, as API Gateway natively supports Cognito integration. Thus, any payload with an invalid Authorization header is rejected at API Gateway itself, which eases our authentication process and validates the identity. If the request is valid, it is then received by our Lambda function. Our Lambda function is written in Node.js and wrapped by the serverless-http framework around an Express app. The Express app has only one endpoint.

    /auth-creds (POST): Once the request is received, it retrieves the identity ID from Cognito and logs it to stdout for audit purposes.

    let identityParams = {
        IdentityPoolId: process.env.IDENTITY_POOL_ID,
        Logins: {}
    };

    identityParams.Logins[`${process.env.COGNITOIDP}`] = req.headers.authorization;

    const ci = new CognitoIdentity({ region: process.env.AWSREGION });

    let idpResponse = await ci.getId(identityParams).promise();

    console.log("Auth Creds Request Received from ", JSON.stringify(idpResponse));

    The app then extracts the base64-encoded public key. Next, an STS (Security Token Service) API call is made and temporary credentials are derived. These credentials are then encrypted with the public key in chunks of 86 bytes.

    const pemPublicKey = Buffer.from(public_key, 'base64').toString();

    const authdata = await sts.assumeRole({
        ExternalId: process.env.STS_EXTERNAL_ID,
        RoleArn: process.env.IAM_ROLE_ARN,
        RoleSessionName: "DemoAWSAuthSession"
    }).promise();

    const creds = JSON.stringify(authdata.Credentials);
    const splitData = creds.match(/.{1,86}/g);

    const encryptedData = splitData.map(d => {
        return publicEncrypt(pemPublicKey, Buffer.from(d)).toString('base64');
    });

    Here, assumeRole assumes an IAM role that has the appropriate policy documents attached. For the sake of this demo, we attached the AdministratorAccess policy. However, one should consider hardening the policy document and avoid attaching an administrator policy directly to the role.

    resources:
     Resources:
       AuthCredsAssumeRole:
         Type: AWS::IAM::Role
         Properties:
           AssumeRolePolicyDocument:
             Version: "2012-10-17"
             Statement:
               -
                 Effect: Allow
                 Principal:
                   AWS: !GetAtt IamRoleLambdaExecution.Arn
                 Action: sts:AssumeRole
                 Condition:
                   StringEquals:
                     sts:ExternalId: ${env:STS_EXTERNAL_ID}
           RoleName: auth-awscreds-api
           ManagedPolicyArns:
             - arn:aws:iam::aws:policy/AdministratorAccess

    Finally, the response is sent to the React app. 

    We have used the Serverless framework to deploy the API. The Serverless framework creates the API Gateway, Lambda function, Lambda layer, and IAM role, and takes care of deploying the code to the Lambda function.

    To deploy this application, follow the below steps.

    1. cd layer/nodejs && npm install && cd ../.. && npm install

    2. npm install -g serverless (on mac you can skip this step and use the npx serverless command instead) 

    3. Create a .env file, add the environment variables below, and set the respective values.

    AWSREGION=ap-south-1
    COGNITO_USER_POOL_ID=
    IDENTITY_POOL_ID=
    COGNITOIDP=
    APP_CLIENT_ID=
    STS_EXTERNAL_ID=
    IAM_ROLE_ARN=
    DEPLOYMENT_BUCKET=
    APP_DOMAIN=

    4. serverless deploy or npx serverless deploy

    The entire codebase for the CLI app, React app, and backend API is available in the GitHub repository.

    Testing:

    Assuming you have the compiled binary (auth-awscreds) available on your local machine and, for the sake of testing, have installed the aws-cli, you can then run /path/to/your/auth-awscreds.

    App Testing

    If you selected “demo-awscreds” as your AWS profile name, you can then export the AWS_PROFILE environment variable. If you prefer the “default” profile, you don’t need to export the environment variable, as the AWS SDK selects the “default” profile on its own.

    [demo-awscreds]
    aws_access_key_id=ASIAUAOF2CHC77SJUPZU
    aws_secret_access_key=r21J4vwPDnDYWiwdyJe3ET+yhyzFEj7Wi1XxdIaq
    aws_session_token=FwoGZXIvYXdzEIj//////////wEaDHVLdvxSNEqaQZPPQyK2AeuaSlfAGtgaV1q2aKBCvK9c8GCJqcRLlNrixCAFga9n+9Vsh/5AWV2fmea6HwWGqGYU9uUr3mqTSFfh+6/9VQH3RTTwfWEnQONuZ6+E7KT9vYxPockyIZku2hjAUtx9dSyBvOHpIn2muMFmizZH/8EvcZFuzxFrbcy0LyLFHt2HI/gy9k6bLCMbcG9w7Ej2l8vfF3dQ6y1peVOQ5Q8dDMahhS+CMm1q/T1TdNeoon7mgqKGruO4KJrKiZoGMi1JZvXeEIVGiGAW0ro0/Vlp8DY1MaL7Af8BlWI1ZuJJwDJXbEi2Y7rHme5JjbA=

    To validate, you can then run aws s3 ls. You should see the S3 buckets listed from your AWS account. Note that these credentials are only valid for 60 minutes, which means you will have to re-run the command to acquire a new pair of AWS credentials. Of course, you can configure your IAM role to extend the expiry for the assumed role.

    auth-awscreds in Action:

    Summary

    Currently, “auth-awscreds” is at an early development stage. This post demonstrates how AWS credentials can be acquired temporarily without having to worry about key rotation. One feature we are currently working on is RBAC, with the help of AWS Cognito. Since this tool currently doesn’t support any command line arguments, the utility configuration can’t be changed from the CLI; you can manually edit or delete the configuration file, which triggers the configuration prompt on the next run. We also want to add support for multiple profiles so that multiple AWS accounts can be used.

  • How to Implement Server Sent Events Using Python Flask and React

    A typical request-response cycle works such that the client sends a request to the server and the server responds to that request. But there are a few use cases where we might need to send data from the server without a request, or where the client expects data that can arrive at an arbitrary time. There are a few mechanisms available to solve this problem.

    Server Sent Events

    Broadly, we can classify these as client pull and server push mechanisms. WebSockets are a bidirectional mechanism where data is transmitted over a full-duplex TCP connection. Client pull can be done using various mechanisms:

    1. Manual refresh – where the client is refreshed manually.
    2. Long polling – where the client sends a request to the server and waits until a response is received; as soon as it gets the response, it sends a new request.
    3. Short polling – where the client continuously sends requests to the server at short, fixed intervals.

    Server-sent events are a type of server push mechanism, where the client subscribes to a stream of updates generated by the server and, whenever a new event occurs, a notification is sent to the client.

    Why server-sent events are better than polling:

    • With polling, scaling and orchestration of the backend needs to be managed in real time as users grow.
    • When mobile devices rapidly switch between WiFi and cellular networks or lose connections and the IP address changes, long polling needs to re-establish connections.
    • With long polling, we need to manage the message queue and catch up on missed messages.
    • Long polling needs to provide load balancing or fail-over support across multiple servers.

    SSE vs Websockets

    SSEs cannot provide bidirectional client-server communication, as opposed to WebSockets. Use cases that require such communication are real-time multiplayer games and messaging and chat apps. When there’s no need to send data from the client, SSEs might be a better option than WebSockets; examples of such use cases are status updates, news feeds, and other automated data push mechanisms. The backend implementation can also be easier with SSE than with WebSockets. Note, however, that browsers limit the number of open SSE connections.

    Also, learn about WS vs SSE here.

    Implementation

    The server-side code for this can be implemented in any high-level language. Here is sample code for Python Flask SSE. Flask-SSE requires a broker such as Redis to store the messages. Here we are also using Flask-APScheduler to schedule background processes with Flask.

    Here we need to install and import flask_sse and apscheduler.

    from flask import Flask, render_template
    from flask_sse import sse
    from apscheduler.schedulers.background import BackgroundScheduler

    Now we need to initialize the Flask app and provide the config for Redis and a route (a URL) where the client will listen for events.

    app = Flask(__name__)
    app.config["REDIS_URL"] = "redis://localhost"
    app.register_blueprint(sse, url_prefix='/stream')

    To publish data to a stream, we need to call the publish method from sse and provide a type for the stream.

    sse.publish({"message": datetime.datetime.now()}, type='publish')

    On the client, we need to add an event listener that listens to our stream and reads messages.

    var source = new EventSource("{{ url_for('sse.stream') }}");
        source.addEventListener('publish', function(event) {
            var data = JSON.parse(event.data);
            console.log("The server says " + data.message);
        }, false);
        source.addEventListener('error', function(event) {
            console.log("Error"+ event)
            alert("Failed to connect to event stream. Is Redis running?");
        }, false);

    Check out a sample Flask-React-Redis based application demo for server side events.

    Here are some screenshots of client –

    Fig: First Event

     Fig: Second Event

    Server logs:

    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 0, 24564))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 14, 30164))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 28, 37840))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 42, 58162))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 56, 46456))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 32, 10, 56276))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 32, 24, 58445))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 32, 38, 57183))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 32, 52, 65886))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 33, 6, 49818))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 33, 20, 22731))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 33, 34, 59084))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 33, 48, 70346))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 34, 2, 58889))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 34, 16, 26020))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 34, 30, 44040))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 34, 44, 61620))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 34, 58, 38699))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 35, 12, 26067))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 35, 26, 71504))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 35, 40, 31429))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 35, 54, 74451))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 36, 8, 63001))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 36, 22, 47671))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 36, 36, 55458))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 36, 50, 68975))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 37, 4, 62491))
    api_1    | INFO:apscheduler.executors.default:Job "server_side_event (trigger: interval[0:00:14], next run at: 2019-05-01 07:37:31 UTC)" executed successfully
    api_1    | INFO:apscheduler.executors.default:Running job "server_side_event (trigger: interval[0:00:16], next run at: 2019-05-01 07:37:38 UTC)" (scheduled at 2019-05-01 07:37:22.362874+00:00)
    api_1    | INFO:apscheduler.executors.default:Job "server_side_event (trigger: interval[0:00:16], next run at: 2019-05-01 07:37:38 UTC)" executed successfully
    api_1    | INFO:apscheduler.executors.default:Running job "server_side_event (trigger: interval[0:00:14], next run at: 2019-05-01 07:37:31 UTC)" (scheduled at 2019-05-01 07:37:31.993944+00:00)
    api_1    | INFO:apscheduler.executors.default:Job "server_side_event (trigger: interval[0:00:14], next run at: 2019-05-01 07:37:45 UTC)" executed successfully
    api_1    | INFO:apscheduler.executors.default:Running job "server_side_event (trigger: interval[0:00:16], next run at: 2019-05-01 07:37:54 UTC)" (scheduled at 2019-05-01 07:37:38.362874+00:00)
    api_1    | INFO:apscheduler.executors.default:Job "server_side_event (trigger: interval[0:00:16], next run at: 2019-05-01 07:37:54 UTC)" executed successfully

    Use Cases of Server Sent Events

    Let’s see the use case with an example. Consider a real-time graph on our web app. One possible option is polling, where the client continuously polls the server for new data. The other option is to use server-sent events, which are asynchronous: the server sends data when updates happen.

    Other applications could be

    • Real time stock price analysis system
    • Real time social media feeds
    • Resource monitoring for health, uptime

    Conclusion

    In this blog, we have covered how to implement server-sent events using Python Flask and React, and how to use background schedulers with them. This can be used to implement data delivery from the server to the client using server push.

  • Creating Faster and High Performing User Interfaces in Web Apps With Web Workers

    The data we render on a UI originates from different sources like databases, APIs, files, and more. In React applications, when the data is received, we first store it in state and then pass it to the other components in multiple ways for rendering.

    But most of the time, the format of the data is inconvenient for the rendering component. So, we have to format data and perform some prior calculations before we give it to the rendering component.

    Sending data directly to the rendering component and processing the data inside that component is not recommended. Not only data processing, but also heavy background jobs that we would otherwise depend on the backend for, can now be done on the client side, because React allows business logic to live on the front end.

    A good practice is to create a separate function for processing that data which is isolated from the rendering logic, so that data processing and data representation will be done separately.

    Why? There are two possible reasons:

    – The processed data can be shared/used by other components, too.

    – The main reason, though, is that if the data processing is a time-consuming task, you will see lag on the UI, or in the worst case the page may become unresponsive.
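    As a small illustration of that separation, the formatting logic can live in a plain function with no rendering concerns, so other components can reuse it and it can be unit-tested on its own. The input shape and field names below are made up for the example:

    ```javascript
    // A pure formatting helper, isolated from any rendering logic.
    // The API field names (first_name, last_name) are hypothetical.
    const formatUsers = (apiUsers) =>
      apiUsers.map((u) => ({
        id: u.id,
        fullName: `${u.first_name} ${u.last_name}`,
      }));

    // Any component can now consume the already-formatted data.
    const raw = [{ id: 1, first_name: "Ada", last_name: "Lovelace" }];
    console.log(formatUsers(raw));
    // [ { id: 1, fullName: 'Ada Lovelace' } ]
    ```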

    As JavaScript is a single-threaded environment, it has only one call stack to execute scripts (put simply, you cannot run more than one script at the same time).

    For example, suppose you have to do some DOM manipulation and, at the same time, want to do a complex calculation. You cannot perform these two operations in parallel. If the JavaScript engine is busy with the complex computation, then all other tasks, like event listeners and rendering callbacks, are blocked for that amount of time, and the page may become unresponsive.

    How can you solve this problem?

    Though JavaScript is single-threaded, many developers mimic the concurrency with the help of timer functions and event handlers. Like by breaking heavy (time-consuming) tasks into tiny chunks and by using the timers you can split their execution. Let’s take a look at the following example.

    Here, the processDataArray function splits the execution using setTimeout: it processes a few items of the array, yields, and after a short delay processes a few more. Once all the array elements have been processed, it sends the result back via the finishCallback.

    const processDataArray = (dataArray, finishCallback) => {
      // take a new copy of the array
      const todo = dataArray.concat();
      // stores each processed item's result
      const result = [];
      // timer function
      const timedProcessing = () => {
        const start = +new Date();
        do {
          // process one data item and store its result
          const singleResult = processSingleData(todo.shift());
          result.push(singleResult);
          // keep going while items remain and this burst stays under 50 ms
        } while (todo.length > 0 && +new Date() - start < 50);

        // check for remaining items to process
        if (todo.length > 0) {
          setTimeout(timedProcessing, 25);
        } else {
          // finished with all the items, invoke the finish callback
          finishCallback(result);
        }
      };
      setTimeout(timedProcessing, 25);
    };
    
    
    
    const processSingleData = (data) => {
      // placeholder: replace with your real per-item processing
      const processedData = data;
      return processedData;
    };
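If you prefer Promises over a finish callback, the same time-slicing technique can be wrapped like this (a sketch with illustrative names, not library code):

```javascript
// Promise-based sketch of the same time-slicing technique: process items
// in short bursts and yield back to the event loop between bursts.
function processInChunks(items, processItem, sliceMs = 50) {
  return new Promise((resolve) => {
    const todo = items.slice();
    const results = [];
    const step = () => {
      const start = Date.now();
      // work for at most `sliceMs` milliseconds per burst
      while (todo.length > 0 && Date.now() - start < sliceMs) {
        results.push(processItem(todo.shift()));
      }
      if (todo.length > 0) {
        // yield so queued UI events can run, then continue
        setTimeout(step, 0);
      } else {
        resolve(results);
      }
    };
    setTimeout(step, 0);
  });
}

// usage
processInChunks([1, 2, 3, 4], (n) => n * n).then((squares) => {
  console.log(squares); // [1, 4, 9, 16]
});
```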

    You can find more about how JavaScript timers work internally here.

    The problem is not fully solved, though: the main thread is still busy with the computation, so you can still see delays in UI events like button clicks or mouse scrolls. That is a bad user experience when a big array computation is running and the web user is impatient.

    The better, truly multithreaded way to solve this problem and run multiple scripts in parallel is to use Web Workers.

    What are Web Workers?‍

    Web Workers provide a mechanism to run a script in a background thread, where you can do any kind of computation without disturbing the UI. Web Workers run outside the context of the HTML document’s scripts, which makes concurrent execution of JavaScript possible: using them, you get genuinely multithreaded behavior.

    Communication between the page (main thread) and the worker happens using a simple mechanism. They can send messages to each other using the postMessage method, and they can receive the messages using onmessage callback function. Let’s take a look at a simple example:

    In this example, we will delegate the work of multiplying all the numbers in an array to a Web Worker, and the Web Worker returns the result back to the main thread.

    import "./App.css";
    import { useEffect, useState } from "react";

    function App() {
      // Lazy initializer: worker.js is loaded and started in the background
      // exactly once, not on every re-render.
      const [webworker] = useState(() => new window.Worker("worker.js"));
      const [result, setResult] = useState("Calculating....");

      useEffect(() => {
        const message = { multiply: { array: new Array(1000).fill(2) } };
        webworker.postMessage(message);

        webworker.onerror = () => {
          setResult("Error");
        };

        webworker.onmessage = (e) => {
          if (e.data) {
            setResult(e.data.result);
          } else {
            setResult("Error");
          }
        };
      }, [webworker]);

      useEffect(() => {
        // stop the worker when the component unmounts
        return () => {
          webworker.terminate();
        };
      }, [webworker]);

      return (
        <div className="App">
          <h1>Webworker Example In React</h1>
          <header className="App-header">
            <h1>Multiplication Of large array</h1>
            <h2>Result: {result}</h2>
          </header>
        </div>
      );
    }

    export default App;

    // worker.js
    onmessage = (e) => {
      const { multiply } = e.data;
      // check that the message is correctly framed
      if (multiply && multiply.array.length) {
        // intentionally delay the execution
        setTimeout(() => {
          // post the result back to the page
          postMessage({
            result: multiply.array.reduce(
              (firstItem, secondItem) => firstItem * secondItem
            ),
          });
        }, 2000);
      } else {
        postMessage({ result: 0 });
      }
    };

    If the worker script throws an exception, you can handle it by attaching a callback function to the onerror property of the worker in the App.js script.

    From the main thread, you can terminate the worker immediately at any time by calling the worker’s terminate method. A terminated worker cannot be restarted; if you need one again, you have to create a new instance.

    You can find a working example here.

    Use cases of Web Workers:

    Charting middleware – Suppose you have to design a dashboard that represents business-engagement analytics for a retention application by means of a pivot table, pie charts, and bar charts. Converting the raw data into the format each table and chart expects involves heavy processing, which may leave the UI failing to update, freezing, or the page becoming unresponsive because of JavaScript’s single-threaded behavior. Here, we can delegate the processing logic to a Web Worker so that the main thread is always available to handle other UI events.
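As a sketch of the kind of transform such a worker could own, here is a tiny aggregation that turns raw engagement events into pie-chart slices (the event shape, field names, and function name are assumptions made for illustration):

```javascript
// Illustrative transform a charting worker could run: aggregate raw
// engagement events into { label, value } slices for a pie chart.
// The `segment`/`count` field names are assumed, not from any real API.
function toPieSlices(events) {
  const totals = {};
  for (const { segment, count } of events) {
    totals[segment] = (totals[segment] || 0) + count;
  }
  return Object.entries(totals).map(([label, value]) => ({ label, value }));
}

const slices = toPieSlices([
  { segment: "returning", count: 40 },
  { segment: "new", count: 25 },
  { segment: "returning", count: 10 },
]);
console.log(slices); // slices: returning -> 50, new -> 25
```

In the worker, this function would simply be called from `onmessage` and the slices posted back to the page for the chart library to draw.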

    Emulating Excel functionality – For example, if you have thousands of rows in a spreadsheet and each of them needs some long-running calculation, you can write custom functions containing the processing logic and put them in the Web Worker’s script.

    Real-time text analyzer – This is another good example where we can use a Web Worker: show the word count, character count, repeated-word count, etc., by analyzing the text the user types in real time. With a naive main-thread implementation, you may experience performance issues as the text grows; this can be avoided by moving the analysis into a Web Worker.
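The analysis itself can be written as a pure function, which keeps it easy to test outside the worker; the worker would simply call it from its onmessage handler. A minimal sketch (the function name and counting rules are illustrative):

```javascript
// Pure analysis function a text-analyzer worker could run.
// Words are approximated as runs of letters, digits, or apostrophes.
function analyzeText(text) {
  const words = text.toLowerCase().match(/[a-z0-9']+/g) || [];
  const counts = {};
  for (const w of words) counts[w] = (counts[w] || 0) + 1;
  return {
    characters: text.length,
    words: words.length,
    // how many distinct words occur more than once
    repeated: Object.values(counts).filter((c) => c > 1).length,
  };
}

// In the worker script this would be wired up as:
// onmessage = (e) => postMessage(analyzeText(e.data.text));

console.log(analyzeText("the quick fox jumps over the lazy dog"));
```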

    Web Worker limitations:

    Yes, Web Workers are amazing and quite simple to use, but because a Web Worker runs on a separate thread, it has no access to the window object, the document object, or the parent object. And we cannot pass functions through postMessage: messages are copied between threads, and functions cannot be copied.

    But Web Workers have access to:

    – Navigator object

    – Location object (read-only)

    – XMLHttpRequest

    – setTimeout, setInterval, clearTimeout, clearInterval

    – You can import other scripts in WebWorker using the importScripts() method

    Here are some other types of workers:

    – Shared Worker

    – Service Worker

    – Audio Worklet

    Conclusion:

    Web Workers make our lives easier by doing jobs in the background in parallel, but they are relatively heavyweight, with a high start-up performance cost and a high per-instance memory cost, so, per the WHATWG HTML specification, they are not intended to be used in large numbers.