Category: Software Engineering & Architecture

  • Eliminate Render-blocking Resources using React and Webpack

    In the previous blog, we learned how a browser downloads many scripts and other resources to render a webpage. Not all of them are necessary to show the page’s initial content, yet they still delay rendering. Most of them, however, will be needed as the user navigates through the website’s other pages.

    In this article, we’ll learn to identify such resources and classify them as critical and non-critical. Once identified, we’ll inline the critical resources and defer the non-critical resources.

    For this blog, we’ll use the following tools:

    • Google Lighthouse and other Chrome DevTools to identify render-blocking resources.
    • Webpack and CRACO to fix them.

    Demo Configuration

    For the demo, I have added the JavaScript below to the <head></head> of index.html as a render-blocking JS resource. This script loads two more CSS resources on the page.

    https://use.fontawesome.com/3ec06e3d93.js

    Other configurations are as follows:

    • Create React App v4.0
    • Formik and Yup for handling form validations
    • Font Awesome and Bootstrap
    • Lazy loading and code splitting using Suspense, React lazy, and dynamic import
    • CRACO
    • html-critical-webpack-plugin
    • ngrok and serve for serving build

    Render-Blocking Resources

    A render-blocking resource is typically a script or stylesheet that prevents the browser from rendering page content until the resource has been downloaded and processed.

    Lighthouse will flag the below as render-blocking resources:

    • A <script> tag in the <head> that doesn’t have a defer or async attribute.
    • A <link rel="stylesheet"> tag without a media attribute matching the user’s device, or a disabled attribute hinting the browser not to download it when unnecessary.
    • A <link rel="import"> that doesn’t have an async attribute.

    Identifying Render-Blocking Resources

    To reduce the impact of render-blocking resources, find out what’s critical for loading and what’s not.

    To do that, we’re going to use the Coverage Tab in Chrome DevTools. Follow the steps below:

    1. Open the Chrome DevTools (press F12)

    2. Go to the Sources tab and press Cmd+Shift+P (Ctrl+Shift+P on Windows/Linux) to open the Command Menu

    The screenshot below was taken on macOS.

    3. Search for Show Coverage and select it, which will show the Coverage tab below. Expand the tab.

    4. Click on the reload button on the Coverage tab to reload the page and start instrumenting the coverage of all the resources loading on the current page.

    5. After capturing the coverage, the resources loaded on the page will get listed (refer to the screenshot below). This will show you the code being used vs. the code loaded on the page.

    The list displays coverage in two colors:

    a. Green (critical) – The code needed for the first paint

    b. Red (non-critical) – The code not needed for the first paint.

    After checking each file and the index.html generated by the build, I found three primarily non-critical files:

    a. 5.20aa2d7b.chunk.css – 98% non-critical code

    b. https://use.fontawesome.com/3ec06e3d93.js – 69.8% non-critical code. This script loads the CSS below:

    1. font-awesome-css.min.css – 100% non-critical code

    2. https://use.fontawesome.com/3ec06e3d93.css – 100% non-critical code

    c. main.6f8298b5.chunk.css – 58.6% non-critical code

    The above resources satisfy the conditions for render-blocking resources, so the Lighthouse Performance report flags them as an opportunity to eliminate render-blocking resources (refer to the screenshot). You can reduce the page size by shipping only the code you need.
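
    As an aside, these percentages come straight from the Coverage data. Assuming the DevTools export shape of one entry per file — an object with the file’s text and the byte ranges that were actually used — the unused share is simple arithmetic:

    ```javascript
    // Compute the unused share of a file from a Coverage-export-style entry.
    // Assumed entry shape: { url, text, ranges: [{ start, end }, ...] },
    // where `ranges` lists the byte offsets within `text` that were used.
    function unusedPercent(entry) {
      const usedBytes = entry.ranges.reduce((sum, r) => sum + (r.end - r.start), 0);
      const totalBytes = entry.text.length;
      return +(((totalBytes - usedBytes) * 100) / totalBytes).toFixed(1);
    }

    // e.g. a 1000-byte file with only bytes 0–302 used is 69.8% non-critical:
    // unusedPercent({ url: 'x.js', text: '...', ranges: [{ start: 0, end: 302 }] })
    ```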

    Solution

    Once you’ve identified critical and non-critical code, it’s time to inline the critical part in index.html and defer the non-critical part using a webpack plugin configuration.

    For Inlining and Preloading CSS: 

    Use html-critical-webpack-plugin to inline the critical CSS into index.html. This generates a <style> tag in the <head> containing the critical CSS extracted from the main CSS chunk, and preloads the full stylesheet.

    const path = require('path');
    const { whenProd } = require('@craco/craco');
    const HtmlCriticalWebpackPlugin = require('html-critical-webpack-plugin');
    
    module.exports = {
      webpack: {
        configure: (webpackConfig) => {
          return {
            ...webpackConfig,
            plugins: [
              ...webpackConfig.plugins,
              ...whenProd(
                () => [
                  new HtmlCriticalWebpackPlugin({
                    base: path.resolve(__dirname, 'build'),
                    src: 'index.html',
                    dest: 'index.html',
                    inline: true,
                    minify: true,
                    extract: true,
                    width: 320,
                    height: 565,
                    penthouse: {
                      blockJSRequests: false,
                    },
                  }),
                ],
                []
              ),
            ],
          };
        },
      },
    };

    Once done, create a build and deploy. Here’s a screenshot of the improved opportunities:

    To use CRACO, refer to its README file.

    NOTE: If you’re planning to use the critters-webpack-plugin please check these issues first: Could not find HTML asset and Incompatible with html-webpack-plugin v4.

    For Deferring Routes/Pages:

    Use lazy-loading and code-splitting techniques along with webpack’s magic comments as below to preload or prefetch a route/page according to your use case.

    import { Suspense, lazy } from 'react';
    import { Redirect, Route, Switch } from 'react-router-dom';
    import Loader from '../../components/Loader';
    
    import './style.scss';
    
    const Login = lazy(() =>
      import(
        /* webpackChunkName: "login" */ /* webpackPreload: true */ '../../containers/Login'
      )
    );
    const Signup = lazy(() =>
      import(
        /* webpackChunkName: "signup" */ /* webpackPrefetch: true */ '../../containers/Signup'
      )
    );
    
    const AuthLayout = () => {
      return (
        <Suspense fallback={<Loader />}>
          <Switch>
            <Route path="/auth/login" component={Login} />
            <Route path="/auth/signup" component={Signup} />
            <Redirect from="/auth" to="/auth/login" />
          </Switch>
        </Suspense>
      );
    };
    
    export default AuthLayout;

    The magic comments enable webpack to emit the corresponding preload or prefetch hints so the chunks are deferred according to the use case.

    For Deferring External Scripts:

    For those who are using a version of webpack lower than 5, use script-ext-html-webpack-plugin or resource-hints-webpack-plugin.

    I would recommend following the simple way given below to defer an external script.

    // Add defer/async attribute to external render-blocking script
    <script async defer src="https://use.fontawesome.com/3ec06e3d93.js"></script>

    The defer and async attributes can both be specified on an external script. If both are present, async takes precedence; older browsers that don’t support async fall back to the defer behaviour.

    If you want to know more about the async/defer, read the further reading section.

    Along with defer/async, we can also use the media attribute to load CSS conditionally.
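
    One widely used pattern for the non-critical stylesheet itself is to request it under a non-matching media type and switch it once loaded. Here is a minimal sketch (the helper name and path are illustrative; in a page you would pass the real document):

    ```javascript
    // Load a stylesheet without blocking render: request it as media="print"
    // (which doesn't block first paint), then promote it to media="all" on load.
    // `doc` is a parameter only to keep the sketch self-contained and testable.
    function deferStylesheet(doc, href) {
      const link = doc.createElement('link');
      link.rel = 'stylesheet';
      link.href = href;
      link.media = 'print';
      link.onload = () => { link.media = 'all'; };
      doc.head.appendChild(link);
      return link;
    }

    // In the browser: deferStylesheet(document, '/css/non-critical.css');
    ```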

    It’s also advisable to host fonts locally instead of pulling everything from a CDN when we don’t need all the font-face rules added by font providers.

    Now, let’s create and deploy the build once more and check the results.

    The opportunity to eliminate render-blocking resources no longer appears in the list.

    We have finally achieved our goal!

    Final Thoughts

    The above configuration is a basic one. You can read the libraries’ docs for more complex implementation.

    Let me know if this helps you eliminate render-blocking resources from your app.

    If you want to check out the full implementation, here’s the link to the repo. I have created two branches—one with the problem and another with the solution. Read the further reading section for more details on the topics.

    Hope this helps.

    Happy Coding!

    Further Reading

  • A Primer To Flutter

    In this blog post, we will explore the basics of cross-platform mobile application development using Flutter, compare it with existing cross-platform solutions, and create a simple to-do application to demonstrate how quickly we can build apps with Flutter.

    Brief introduction

    Flutter is a free and open source UI toolkit for building natively compiled applications for mobile platforms like Android and iOS, and for the web and desktop as well. Some of the prominent features are native performance, single codebase for multiple platforms, quick development, and a wide range of beautifully designed widgets.

    Flutter apps are written in the Dart programming language, an intuitive language with a C-like syntax. Dart is optimized for performance and developer friendliness. Apps written in Dart can be as fast as native applications because Dart code compiles down to machine instructions for ARM and x64 processors, and to JavaScript for the web platform. This, along with the Flutter engine, makes Flutter apps platform agnostic.

    Other interesting Dart features used in Flutter apps are the just-in-time (JIT) compiler, used during development and debugging, which powers the hot reload functionality, and the ahead-of-time (AOT) compiler, used when building applications for target platforms such as Android or iOS, resulting in native performance.

    Everything composed on the screen with Flutter is a widget, including things like padding, alignment, and opacity. The Flutter engine draws and controls every pixel on the screen using its own graphics library, Skia.

    Flutter vs React-Native

    Flutter apps are truly native and hence offer great performance, whereas apps built with React Native require a JavaScript bridge to interact with OEM widgets. Flutter apps are also much faster to develop because of the wide range of built-in widgets, good documentation, hot reload, and several other developer-friendly choices made by Google while building Dart and Flutter.

    React Native, on the other hand, has the advantage of being older and hence has a large community of businesses and developers with experience building React Native apps. It also has more third-party libraries and packages than Flutter. That said, Flutter is catching up and rapidly gaining momentum, as evident from Stack Overflow’s 2019 developer survey, where it scored 75.4% under “Most Loved Frameworks, Libraries and Tools”.


    All in all, Flutter is a great tool to have in our arsenal as mobile developers in 2020.

    Getting started with a sample application

    Flutter’s official docs are really well written and include getting started guides for different OS platforms, API documentation, widget catalogue along with several cookbooks and codelabs that one can follow along to learn more about Flutter.

    To get started with development, we will follow the official guide, which is available here. Flutter requires the Flutter SDK as well as native build tools to be installed on the machine. To write apps, one may use Android Studio, VS Code, or any text editor together with Flutter’s command-line tools. A good rule of thumb is to install Android Studio anyway, because it offers better support for managing the Android SDK, build tools, and virtual devices, and it includes several built-in tools such as the icon and asset editors.

    Once done with the setup, we will start by creating a project. Open VS Code and create a new Flutter project:

    We should see the main file main.dart with some sample code (the counter application). We will edit this file to create our to-do app.

    Some of the features we will add to our to-do app:

    • Display a list of to-do items
    • Mark to-do items as completed
    • Add new items to the list

    Let’s start by creating a widget to hold our list of to-do items. This is going to be a StatefulWidget, which is a type of widget with some state. Flutter tracks changes to the state and redraws the widget when a new change in the state is detected.

    After creating the TodoList widget, our main.dart file looks like this:

    /// imports widgets from the material design 
    import 'package:flutter/material.dart';
    
    void main() => runApp(TodoApp());
    
    /// Stateless widgets must implement the build() method and return a widget. 
    /// The first parameter passed to build function is the context in which this widget is built
    class TodoApp extends StatelessWidget {
      @override
      Widget build(BuildContext context) {
        return MaterialApp(
          title: 'TODO',
          theme: ThemeData(
            primarySwatch: Colors.blue,
          ),
          home: TodoList(),
        );
      }
    }
    
    /// Stateful widgets must implement the createState method
    /// The State object of a stateful widget again has a build() method with context
    class TodoList extends StatefulWidget {
      @override
      State<StatefulWidget> createState() => TodoListState();
    }
    
    class TodoListState extends State<TodoList> {
      @override
      Widget build(BuildContext context) {
        return Scaffold(
          appBar: AppBar(
            title: Text('Todo'),
          ),
          body: Text('Todo List'),
        );
      }
    }

    The TodoApp class here extends StatelessWidget, i.e., a widget without any state, whereas TodoList extends StatefulWidget. All Flutter apps are a combination of these two types of widgets. Stateless widgets must implement the build() method, whereas stateful widgets must implement the createState() method.

    Some built-in widgets used here are MaterialApp, Scaffold, AppBar, and Text. These are all imported from Flutter’s implementation of Material Design, available in the material.dart package. Similarly, to use native-looking iOS widgets, we can import widgets from the flutter/cupertino.dart package.

    Next, let’s create a model class that represents an individual to-do item. We will keep this simple i.e. only store label and completed status of the to-do item.

    class Todo {
      final String label;
      bool completed;
      Todo(this.label, this.completed);
    }

    The constructor in the code above uses one of Dart’s syntactic-sugar features to assign a constructor argument directly to an instance variable. For more such interesting tidbits, take the Dart language tour.

    Now let’s modify the ToDoListState class to store a list of to-do items in its state and also display it in a list. We will use ListView.builder to create a dynamic list of to-do items. We will also use Checkbox and Text widget to display to-do items.

    /// State is composed of all the variables declared in the State implementation of a Stateful widget
    class TodoListState extends State<TodoList> {
      final List<Todo> todos = List<Todo>();
      @override
      Widget build(BuildContext context) {
        return Scaffold(
          appBar: AppBar(
            title: Text('Todo'),
          ),
          body: Padding(
            padding: EdgeInsets.all(16.0),
            child: todos.length > 0
                ? ListView.builder(
                    itemCount: todos.length,
                    itemBuilder: _buildRow,
                  )
                : Text('There is nothing here yet. Start by adding some Todos'),
          ),
        );
      }
    
      /// build a single row of the list
      Widget _buildRow(context, index) => Row(
            children: <Widget>[
              Checkbox(
                  value: todos[index].completed,
                  onChanged: (value) => _changeTodo(index, value)),
              Text(todos[index].label,
                  style: TextStyle(
                      decoration: todos[index].completed
                          ? TextDecoration.lineThrough
                          : null))
            ],
          );
    
      /// toggle the completed state of a todo item
      _changeTodo(int index, bool value) =>
          setState(() => todos[index].completed = value);
    }

    A few things to note here: private functions start with an underscore, functions with a single-expression body can be written using fat arrows (=>), and most importantly, to change the state of any variable contained in a stateful widget, one must call the setState method.

    The ListView.builder constructor allows us to work with very large lists, since list items are created only when they are scrolled into view.

    Another takeaway here is the fact that Dart is such an intuitive language that it is quite easy to understand and you can start writing Dart code immediately.

    Everything on a screen, like padding, alignment or opacity, is a widget. Notice in the code above, we have used Padding as a widget that wraps the list or a text widget depending on the number of to-do items. If there’s nothing in the list, a text widget is displayed with some default message.

    Also note how we haven’t used the new keyword when creating instances of a class, say Text. That’s because using the new keyword is optional in Dart and discouraged, according to the effective Dart guidelines.

    Running the application

    At this point, let’s run the code and see how the app looks on a device. Press F5, then select a virtual device and wait for the app to get installed. If you haven’t created a virtual device yet, refer to the getting started guide.

    Once the virtual device launches, we should see the following screen in a while. During development, the first launch always takes a while because the entire app gets built and installed on the virtual device, but subsequent changes to code are instantly reflected on the device, thanks to Flutter’s amazing hot reload feature. This reduces development time and also allows developers and designers to experiment more frequently with the interface changes.

    As we can see, there are no to-dos here yet. Now let’s add a floating action button that opens a dialog which we will use to add new to-do items.

    Adding the FAB is as easy as passing the floatingActionButton parameter to the Scaffold widget.

    floatingActionButton: FloatingActionButton(
      child: Icon(Icons.add),                                /// uses the built-in icons
      onPressed: () => _promptDialog(context),
    ),

    Then declare a function inside TodoListState that displays a popup (AlertDialog) with a text input box.

    /// display a dialog that accepts text
      _promptDialog(BuildContext context) {
        String _todoLabel = '';
        return showDialog(
            context: context,
            builder: (context) {
              return AlertDialog(
                title: Text('Enter TODO item'),
                content: TextField(
                    onChanged: (value) => _todoLabel = value,
                    decoration: InputDecoration(hintText: 'Add new TODO item')),
                actions: <Widget>[
                  FlatButton(
                    child: new Text('CANCEL'),
                    onPressed: () => Navigator.of(context).pop(),
                  ),
                  FlatButton(
                    child: new Text('ADD'),
                    onPressed: () {
                      setState(() => todos.add(Todo(_todoLabel, false)));
                      /// dismisses the alert dialog
                      Navigator.of(context).pop();
                    },
                  )
                ],
              );
            });
      }

    At this point, saving changes to the file should result in the application getting updated on the virtual device (hot reload), so we can just click on the new floating action button that appeared on the bottom right of the screen and start testing how the dialog looks.

    We used a few more built-in widgets here:

    • AlertDialog: a dialog prompt that opens up when clicking on the FAB
    • TextField: text input field for accepting user input
    • InputDecoration: a widget that adds style to the input field
    • FlatButton: a variation of button with no border or shadow
    • FloatingActionButton: a floating icon button, used to trigger primary action on the screen

    Here’s a quick preview of how the application should look and function at this point:

    And just like that, in less than 100 lines of code, we’ve built the user interface of a simple, cross-platform to-do application.

    The source code for this application is available here.

    A few links to further explore Flutter:

    Conclusion:

    To conclude, Flutter is an extremely powerful toolkit for building cross-platform applications that have native performance and are beautiful to look at. Dart, the language behind Flutter, is designed with the nuances of user interface development in mind, and Flutter offers a wide range of built-in widgets. This makes development fun and development cycles shorter, something we experienced while building the to-do app. With Flutter, time to market is also greatly reduced, which enables teams to experiment more often, collect more feedback, and ship applications faster. And finally, Flutter has a very enthusiastic and thriving community of designers and developers who are always experimenting and adding to the Flutter ecosystem.

  • Building Type Safe Backend Apps with Typegoose and TypeGraphQL

    In this article, we will tackle the most common problems encountered when modeling a MongoDB backend schema with TypeScript and Mongoose. We will also address the difficulties of maintaining GraphQL types.

    Almost every serious JavaScript developer uses TypeScript these days. However, many older libraries do not support it natively, which becomes an increasing issue as a project grows. Add GraphQL, a great modern API development solution, and the boilerplate quickly piles up.

    Prerequisites

    This article assumes that you have working knowledge of TypeScript, MongoDB, and GraphQL. We’ll be using Mongoose for specifying models, which is the go-to Object Document Mapper (ODM) solution for MongoDB.

    Let’s consider a basic example of a Mongoose model written in TypeScript. This might look something like the one mentioned below, a user model with basic model properties (email, first name, last name, and password):

    import { Document, Model, Schema } from "mongoose";
    import { db } from "../util/database";
    
    export interface IUserProps {
      email: string;
      firstName: string;
      lastName: string;
      password: string;
    }
    
    export interface IUserDocument extends IUserProps, Document {
      // added automatically by the { timestamps: true } option below
      createdAt: Date;
      updatedAt: Date;
      // instance method, registered via UserSchema.method below
      hashPassword(password: string): string;
    }
    
    export interface IUserModel extends Model<IUserDocument> {}
    
    const UserSchema: Schema = new Schema(
      {
        email: {
          type: String,
          unique: true,
        },
        firstName: {
          type: String,
        },
        lastName: {
          type: String,
        },
        password: {
          type: String,
        },
      },
      { timestamps: true }
    );
    
    const hashPassword = (_password: string) => {
      // logic to hash passwords
    }
    
    UserSchema.method("hashPassword", hashPassword);
    
    export const User: IUserModel = db.model<IUserDocument, IUserModel>(
      "User",
      UserSchema
    );

    As you can see, adding and maintaining these interfaces manually alongside Mongoose is cumbersome. We need at least 2-3 interfaces just to cover the typings for model properties and methods.

    Moving on to queries and mutations, we need to create resolvers for the model above, assuming we have a service that deals with the model. Here’s what our resolver looks like:

    import { ObjectId } from 'bson';
    import { IResolvers } from 'graphql-tools';
    import { IUserProps } from './user.model';
    import { UserService } from './user.service';
    
    const userService = new UserService();
    export const userResolvers: IResolvers = {
      Query: {
        User: (_root: unknown, args: { id: ObjectId }) => userService.get(args.id),
        //...
      },
      Mutation: {
        createUser: async (_root: unknown, args: IUserProps) => await userService.create(args),
        //...
      }
    };

    Not bad: we have our model and service, and the resolver also looks good. But wait, we still need to add the GraphQL types (we’re intentionally omitting inputs to keep it short). Let’s do that:

    type Query {
      User(id: String): User
    }
    
    type Mutation {
      createUser(
        email: String,
        firstName: String,
        lastName: String,
        password: String,
      ): User
    }
    
    type User {
      id: String!
      email: String!
      firstName: String!
      lastName: String!
      password: String!
    }

    Now, we have to combine the schema and resolvers and pass them to the GraphQL server—Apollo Server in this case:

    import * as path from 'path';
    import * as fs from 'fs';
    import { ApolloServer } from 'apollo-server'
    import { makeExecutableSchema }  from 'graphql-tools';
    import { resolvers } from './src/resolvers';
    
    const userSchema = path.join(__dirname, 'src/user/user.schema.graphql');
    const schemaDef = fs.readFileSync(userSchema, 'utf8');
    
    const schema = makeExecutableSchema({ typeDefs: schemaDef, resolvers });
    
    const server = new ApolloServer({ schema });
    
    server.listen().then(({ url }) => {
      console.log(`🚀 Server ready at ${url}`);
    });

    With this setup, we end up with four files per model: model, resolver, service, and GraphQL schema.

    That’s too much to keep in sync in real life. Imagine you need to add a new property to the model above after reaching production. You’ll end up doing at least the following:

    1. Add a migration to sync the DB
    2. Update the interfaces
    3. Update the model schema
    4. Update the GraphQL schema

    Possible Solution

    As we know, with this setup we’re mostly dealing with the entity models and struggling to keep their types and relations in sync.

    If the model itself could handle this somehow, we could definitely save some effort. In other words, things would sort themselves out if the entity model classes could represent both the database schema and its types.

    Adding TypeGoose

    Mongoose schema declarations with TypeScript can get tricky—but there is a better way. Let’s add TypeGoose, so we no longer have to maintain interfaces (arguably). Here’s what the same user model looks like:

    import { DocumentType, getModelForClass, prop as Property } from '@typegoose/typegoose';
    
    export class User {
      readonly _id: string;
    
      @Property({ required: true })
      firstName: string;
    
      @Property({ required: false })
      lastName: string;
    
      @Property({ required: true })
      password: string;
    
      @Property({ required: true, unique: true })
      email: string;
    
      hashPassword(this: DocumentType<User>, _password: string) {
        // logic to hash passwords
      }
    }
    
    export const UserModel = getModelForClass(User);

    Alright, no need for adding interfaces for the model and documents. You could have an interface for model implementation, but it’s not necessary.

    With reflect-metadata, which TypeGoose uses internally, we managed to skip the need for additional interfaces.
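
    The underlying idea can be sketched in a few lines of plain JavaScript (the names below are made up for illustration and are not TypeGoose internals): a property decorator records each field’s options in shared metadata, which a model factory can later turn into a schema.

    ```javascript
    // Hypothetical sketch: a prop() decorator that collects schema metadata.
    const schemaMeta = new Map();

    function prop(options) {
      // Returns a decorator that records `options` under the class name and key.
      return (target, key) => {
        const className = target.constructor.name;
        if (!schemaMeta.has(className)) schemaMeta.set(className, {});
        schemaMeta.get(className)[key] = options;
      };
    }

    // What `@prop({ ... }) email;` roughly compiles down to:
    class User {}
    prop({ required: true, unique: true })(User.prototype, 'email');
    prop({ required: false })(User.prototype, 'lastName');

    // A getModelForClass-style factory could now read schemaMeta.get('User')
    // and build the Mongoose schema from it: no hand-written interfaces needed.
    ```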

    If we want to add custom validations and messages, TypeGoose allows us to do that too. The prop decorator accepts almost all the options you would expect in a Mongoose schema definition.

    @Property({ required: [true, 'Email address is required'], unique: true })

    Adding TypeGraphQL

    Alright, TypeGoose has helped us handle the Mongoose schema smoothly. But we still need to define types for GraphQL, and we would have to update those types whenever we change our models.

    Let’s add TypeGraphQL:

    import { DocumentType, getModelForClass, prop as Property } from '@typegoose/typegoose';
    import { Field as GqlField, ObjectType as GqlType } from 'type-graphql';
    
    @GqlType()
    export class User {
      @GqlField(_type => String)
      readonly _id: string;
    
      @GqlField(_type => String)
      @Property({ required: true })
      firstName: string;
    
      @GqlField(_type => String, { nullable: true })
      @Property({ required: false })
      lastName: string;
    
      @GqlField(_type => String)
      @Property({ required: true })
      password: string;
    
      @GqlField(_type => String)
      @Property({ required: true, unique: true })
      email: string;
    
      hashPassword(this: DocumentType<User>, _password: string) {
        // logic to hash passwords
      }
    }
    
    export const UserModel = getModelForClass(User);

    What we just did is use the same TypeScript user class to define the schema as well as its GraphQL type—pretty neat.

    Because we have added TypeGraphQL, our resolvers no longer need extra interfaces. We can add input classes for parameter types. Consider common input types such as CreateInput, UpdateInput, and FilterInput.

    import { Arg, Ctx, Mutation, Resolver } from 'type-graphql';
    import { User } from './user.model';
    // UserCreateInput is a TypeGraphQL @InputType() class (module path assumed)
    import { UserCreateInput } from './user.input';
    import { UserService } from './user.service';
    
    @Resolver(_of => User)
    export class UserResolver {
      private __userService: UserService;
    
      constructor() {
        this.__userService = new UserService();
      }
    
      @Mutation(_returns => User)
      async createUser(@Arg('data', _type => UserCreateInput) data: UserCreateInput, @Ctx() _ctx: any) {
        return this.__userService.create(data);
      }
    }

    You can learn more about the syntax and input definition in the official docs.

    That’s it. Our setup is ready, and we can now simply build a schema and pass it to the server entry point. No need to import schema files or merge resolvers; simply pass an array of resolver classes to buildSchema:

    import { ApolloServer } from 'apollo-server';
    
    import { resolvers } from './src/resolvers';
    import { buildSchema } from 'type-graphql';
    
    // buildSchema returns a Promise, so we await it before creating the server
    async function bootstrap() {
      const schema = await buildSchema({ resolvers });
      const server = new ApolloServer({ schema });
      const { url } = await server.listen();
      console.log(`🚀 Server ready at ${url}`);
    }
    
    bootstrap();

    Once implemented, this is how our custom demo project architecture might look:

    Fig:- Application Architecture

    Limitations and Alternatives

    Though these packages save us some work, one may decide against them since they rely on experimental features such as decorators. However, acceptance of these features is growing.

    TypeGoose:

    Though TypeGoose offers a great extension to Mongoose, it has recently introduced some breaking changes, so upgrading across recent versions can be a risk. One alternative to TypeGoose for decorator-based schema definitions is TypeORM, though its MongoDB support is still basic and experimental.

    TypeGraphQL:

    TypeGraphQL is a well-maintained library. There are other options, like Nest.js and graphql-schema-decorators, which support decorators for GraphQL schemas.
    
    However, Nest.js’s GraphQL support is framework-oriented and might be more than you need, while graphql-schema-decorators is no longer maintained. You can even integrate TypeGraphQL with Nest.js, with some caveats.

    Conclusion

    Unsurprisingly, both of these libraries use the experimental decorators API together with Reflect Metadata, which adds metadata support to classes and their members. The concept might look innovative, but it’s nothing new: languages like C# and Java support attributes or annotations that attach metadata to types. With this metadata available, it becomes much easier to create and maintain well-typed applications.

    One thing to note: though this article introduces the benefits of using TypeGraphQL and TypeGoose together, that does not mean you can’t use them separately. Depending upon your requirements, you may use either tool or a combination of the two.

    This article covers a very basic setup to introduce these technologies. You can learn more about advanced, real-world usage of these tools and techniques in the articles mentioned below.

    Further Reading

    You can find the referenced code at this repo.

  • Automating Serverless Framework Deployment using Watchdog

    These days, we see that most software development is moving towards serverless architecture, and that’s no surprise. Almost all top cloud service providers have serverless services that follow a pay-as-you-go model. This way, consumers don’t have to pay for any unused resources. Also, there’s no need to worry about procuring dedicated servers, network/hardware management, operating system security updates, etc.

    Unfortunately, serverless tools don’t automatically deploy local changes to the cloud, which is still a headache for cloud developers: every change must be deployed and tested manually. By contrast, web app projects using Node or Django run a watcher in the development environment; when files in the code directory change, the server automatically restarts with the new code, and the developer can immediately check whether the changes work as expected.

    In this blog, we will talk about automatically deploying a serverless application whenever the local codebase changes. We are using AWS as the cloud provider and focusing primarily on AWS Lambda to demonstrate the functionality.

    Prerequisites:

    • This article uses AWS, so command-line and programmatic access are necessary.
    • This article is written with deployment to AWS in mind, so AWS credentials are needed to make changes to the stack. For other cloud providers, you would need that provider’s command-line access.
    • We are using the Serverless Framework for deployment (the example will also work with other tools, like Zappa), so some familiarity with serverless concepts is required.

    Before development, let’s divide the problem statement into sub-tasks and build them one step at a time.

    Problem Statement

    Create a codebase watcher service that triggers either a stack update on AWS or a local test run. That way, developers don’t have to worry about manually deploying to the cloud provider. The service needs to keep an eye on the code and generate events whenever files in the given codebase are created, modified, moved, or deleted.

    Solution

    First, to watch the codebase, we need logic that acts as a trigger and notifies us when the underlying files change. Packages for this already exist in many programming languages; in this example, we are using Python’s watchdog.

    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler
    
    CODE_PATH = "<codebase path>"
    
    class WatchMyCodebase:
        # Set the directory on watch
        def __init__(self):
            self.observer = Observer()
    
        def run(self):
            event_handler = EventHandler()
            # recursive flag decides if watcher should collect changes in CODE_PATH directory tree.
            self.observer.schedule(event_handler, CODE_PATH, recursive=True)
            self.observer.start()
            self.observer.join()
    
    
    class EventHandler(FileSystemEventHandler):
        """Handle events generated by Watchdog Observer"""
    
        @classmethod
        def on_any_event(cls, event):
            if event.is_directory:
                # Ignore directory-level events, like the creation of a new empty directory.
                return None

            elif event.event_type == 'modified':
                print("A file under the codebase directory was modified...")
    
    if __name__ == '__main__':
        watch = WatchMyCodebase()
        watch.run()

    Here, the on_any_event() class method gets called on any update in the watched directory, and this is where we need to add the deployment logic. However, we can’t deploy the moment we receive a notification from the watcher, because modern IDEs save files as soon as the user changes them. If we deployed on every change, we would, most of the time, deploy half-complete services.

    To handle this, we must add a quiet period before deploying the service.

    The program waits for some time after a file changes, and only if there have been no new changes in the codebase during that window does it deploy the service.

    import time
    import subprocess
    import threading
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler
    
    valid_events = ['created', 'modified', 'deleted', 'moved']
    DEPLOY_AFTER_CHANGE_THRESHOLD = 300
    STAGE_NAME = ""
    CODE_PATH = "<codebase path>"
    
    def deploy_env():
        process = subprocess.Popen(['sls', 'deploy', '--stage', STAGE_NAME, '-v'],
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE)
        stdout, stderr = process.communicate()
        print(stdout, stderr)
    
    def deploy_service_on_change():
        while True:
            if EventHandler.last_update_time and (int(time.time() - EventHandler.last_update_time) > DEPLOY_AFTER_CHANGE_THRESHOLD):
                EventHandler.last_update_time = None
                deploy_env()
            time.sleep(5)
    
    def start_interval_watcher_thread():
        interval_watcher_thread = threading.Thread(target=deploy_service_on_change)
        interval_watcher_thread.start()
    
    
    class WatchMyCodebase:
        # Set the directory on watch
        def __init__(self):
            self.observer = Observer()
    
        def run(self):
            event_handler = EventHandler()
            self.observer.schedule(event_handler, CODE_PATH, recursive=True)
            self.observer.start()
            self.observer.join()
    
    
    class EventHandler(FileSystemEventHandler):
        """Handle events generated by Watchdog Observer"""
        last_update_time = None
    
        @classmethod
        def on_any_event(cls, event):
            if event.is_directory:
                # Ignore directory-level events, like the creation of a new empty directory.
                return None

            # Ignore events under .serverless; serverless creates a few temp files during deploy.
            elif event.event_type in valid_events and '.serverless' not in event.src_path:
                cls.last_update_time = time.time()
    
    
    if __name__ == '__main__':
        start_interval_watcher_thread()
        watch = WatchMyCodebase()
        watch.run()

    The specified valid_events list acts as a deployment filter: only these event types are considered and acted upon.

    Moreover, to add a delay after file changes and ensure that no new changes are coming in, we added interval_watcher_thread. It checks the difference between the current time and the last directory update time, and if the difference is greater than the specified threshold, it deploys the serverless resources.

    def deploy_service_on_change():
        while True:
            if EventHandler.last_update_time and (int(time.time() - EventHandler.last_update_time) > DEPLOY_AFTER_CHANGE_THRESHOLD):
                EventHandler.last_update_time = None
                deploy_env()
            time.sleep(5)
    
    def start_interval_watcher_thread():
        interval_watcher_thread = threading.Thread(target=deploy_service_on_change)
        interval_watcher_thread.start()

    Here, the sleep time in deploy_service_on_change is important. It prevents the program from burning CPU cycles repeatedly checking whether the deploy condition is satisfied. However, too long a sleep would delay the deployment well beyond the specified DEPLOY_AFTER_CHANGE_THRESHOLD.

    Note: With a programming language like Go and features like goroutines and channels, we could build an even more efficient application. The same is possible in Python with the help of thread signals.
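    To make the thread-signal idea concrete, here is a minimal, hypothetical sketch of an event-driven debounce built on threading.Event. Instead of polling every five seconds, the deploy thread sleeps until a change arrives, then for exactly the remaining quiet period. DebouncedDeployer and its names are illustrative, not part of the code above.

```python
import threading
import time

class DebouncedDeployer:
    """Fire deploy() only after quiet_period seconds with no new changes."""

    def __init__(self, deploy, quiet_period=300):
        self._deploy = deploy
        self.quiet_period = quiet_period
        self._changed = threading.Event()
        self._last_change = 0.0
        self._lock = threading.Lock()

    def notify(self):
        # Called from the watchdog handler on every relevant filesystem event.
        with self._lock:
            self._last_change = time.time()
        self._changed.set()

    def run_once(self):
        # Block until a change arrives, then until the codebase has been
        # quiet for the full quiet_period, then deploy exactly once.
        self._changed.wait()
        while True:
            with self._lock:
                remaining = self.quiet_period - (time.time() - self._last_change)
            if remaining <= 0:
                break
            time.sleep(remaining)
        self._changed.clear()
        self._deploy()
```

    With this shape, on_any_event() would simply call deployer.notify(), and a background thread would loop over run_once(); no CPU is spent while the codebase is idle.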

    Let’s build a Lambda function that automatically deploys on change. Let’s also be a little lazy and develop a basic Python Lambda that takes a number as input and returns its factorial.

    import math
    
    def lambda_handler(event, context):
        """
        Handler for get factorial
        """
    
        number = event['number']
        return math.factorial(number)
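    Before deploying anything, the handler can be sanity-checked locally. Note that the event here is just the plain dict this example assumes, not a full API Gateway event:

```python
import math

def lambda_handler(event, context):
    """Same handler as above, reproduced so this snippet runs standalone."""
    return math.factorial(event['number'])

# 5! = 120
assert lambda_handler({'number': 5}, None) == 120
```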

    We are using the Serverless Framework, so to deploy this Lambda, we need a serverless.yml file that specifies stack details like the execution environment, cloud provider, environment variables, etc. More parameters are listed in this guide.

    service: get-factorial
    
    provider:
      name: aws
      runtime: python3.7
    
    functions:
      get_factorial:
        handler: handler.lambda_handler

    We need to keep both handler.py and serverless.yml in the same folder, or else update the function handler path in serverless.yml.

    We can deploy it manually using this serverless command: 

    sls deploy --stage production -v

    Note: Before deploying, export AWS credentials.

    The above command deploys a stack using CloudFormation:

    • --stage specifies the environment where the stack should be deployed. Like any other software project, it can have stage names such as production, dev, test, etc.
    • -v enables verbose output.

    To auto-deploy changes from now on, we can use the watcher.

    Start the watcher with this command: 

    python3 auto_deploy_sls.py

    This will run continuously, keep an eye on the codebase directory, and deploy any changes it detects. We can customize this to some extent; for example, a post-deploy step could run test cases against the new stack.

    If the stack has lots of dependencies, you might worry about network traffic, and testing against an actual cloud provider might increase billing. However, we can easily fix this by using serverless local development.

    Here is a serverless blog that covers local development and testing of a CloudFormation stack. It emulates cloud behavior on the local setup, so there’s no need to worry about cloud service billing.

    One great upgrade is support for complex directory structures.

    In the above example, we are assuming that only one single directory is present, so it’s fine to deploy using the command: 

    sls deploy --stage production -v

    But in some projects, multiple stacks may be present in the codebase at different levels of the hierarchy. Consider the example below: we have three different Lambdas, so an update in the check-prime directory should redeploy only that Lambda and not the others.

    ├── check-prime
    │   ├── handler.py
    │   └── serverless.yml
    ├── get-factorial
    │   ├── handler.py
    │   └── serverless.yml
    └── get-factors
        ├── handler.py
        └── serverless.yml

    The above can be achieved in on_any_event(): the event.src_path variable tells us the path of the file that triggered the event.

    Now, the deployment command changes to:

    cd <updated_directory> && sls deploy --stage <your-stage> -v

    This will deploy only the updated stack.
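    As a sketch, the handler could map the changed file’s path to its top-level stack directory and run the deploy from there. The helper names stack_dir_for and deploy_stack are hypothetical, not part of the code above:

```python
import os
import subprocess

def stack_dir_for(src_path, repo_root):
    """Map a changed file's path to the top-level stack directory containing it."""
    rel = os.path.relpath(src_path, repo_root)
    return os.path.join(repo_root, rel.split(os.sep)[0])

def deploy_stack(stack_dir, stage):
    # Equivalent of: cd <updated_directory> && sls deploy --stage <stage> -v
    subprocess.run(['sls', 'deploy', '--stage', stage, '-v'], cwd=stack_dir, check=True)
```

    In on_any_event(), a change to check-prime/handler.py would then trigger deploy_stack(stack_dir_for(event.src_path, CODE_PATH), STAGE_NAME) and leave the other stacks untouched.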

    Conclusion

    We learned that even though serverless deployment is a manual task, it can be automated with the help of Watchdog for a better developer workflow.

    With serverless local development, we can test changes as we make them, without needing to manually deploy every change to the cloud environment.

    We hope this helps you improve your serverless development experience and close the loop faster.

    Related Articles

    1. To Go Serverless Or Not Is The Question

    2. Building Your First AWS Serverless Application? Here’s Everything You Need to Know

  • Flutter vs React Native: A Detailed Comparison

    Flutter and React Native are two of the most popular cross-platform development frameworks on the market. Both of these technologies enable you to develop applications for iOS and Android with a single codebase. However, they’re not entirely interchangeable.

    Flutter allows developers to create Material Design-like applications with ease. React Native, on the other hand, has an active community of open source contributors, which means that it can be easily modified to meet almost any standard.

    In this blog, we have compared both of these technologies based on popularity, performance, learning curve, community support, and developer mindshare to help you decide which one you can use for your next project.

    But before digging into the comparison, let’s have a brief look at both these technologies:

    ‍About React Native

    React Native has gained the attention of many developers for its ease of use with JavaScript code. Facebook developed the framework to bring cross-platform application development to React and introduced React Native at its first React.js conference in 2015.

    React Native enables developers to create high-end mobile apps with the help of JavaScript, which comes in handy for speeding up mobile app development. The framework makes use of JavaScript’s impressive features while maintaining excellent performance. React Native is highly feature-rich and lets you create dynamic animations and gestures that are usually hard to build directly on the native platform.

    React Native has been adopted by many companies as their preferred technology. 

    For example:

    • Facebook
    • Instagram
    • Skype
    • Shopify
    • Tesla
    • Salesforce

    About Flutter

    Flutter is an open-source mobile development kit that makes it easy for developers to build high-quality applications for Android and iOS. It comes with a widget library for building the application’s user interface independently of the underlying platform.

    Flutter has extended the reach of mobile app development by enabling developers to build apps on any platform without being restrained by mobile development limitations. The framework started as an internal project at Google back in 2015, with its first stable release in 2018.

    From its inception, Google aimed to provide a simple, usable programming language for building sophisticated apps and wanted to carry out Dart’s goal of succeeding JavaScript as the next-generation web programming language.

    Let’s see some of the apps built using Flutter:

    • Google Ads
    • eBay
    • Alibaba
    • BMW
    • Philips Hue

    React Native vs. Flutter – An overall comparison

    Design Capabilities

    React Native is based on React.js, one of the most popular JavaScript libraries for building user interfaces. It is often used with Redux, which provides a solid basis for creating predictable web applications.

    Flutter, on the other hand, is Google’s newer mobile UI framework. Flutter apps are written in the Dart language, which is compiled to native code for iOS and Android.

    Both React Native and Flutter can be used to create applications with beautiful graphics and smooth animations.

    React Native

    In the React Native framework, UI elements look native to both iOS and Android platforms. These UI elements make it easier for developers to build apps because they only have to write them once. In addition, many of these components also render natively on each platform. The user experiences an interface that feels more natural and seamless while maintaining the capability to customize the app’s look and feel.

    The framework allows developers to use JavaScript for cross-platform development. While React Native allows you to build native apps, it does not mean that your app will look identical on iOS and Android.

    Flutter

    Flutter is a toolkit for creating high-performance, high-fidelity mobile apps for iOS and Android. Flutter works with existing code, is used by developers and organizations worldwide, and is free and open source. By default, Flutter offers a standard, platform-neutral style.

    Flutter has its own widgets library, which includes Material Design Components and Cupertino. 

    The Material package contains widgets that look like they belong on Android devices. The Cupertino package contains widgets that look like they belong on iOS devices. By default, a Flutter application uses Material widgets. If you want to use Cupertino widgets, then import the Cupertino library and change your app’s theme to CupertinoTheme.

    Community

    Flutter and React Native have a very active community of developers. Both frameworks have extensive support and documentation and an active GitHub repository, which means they are constantly being maintained and updated.

    With the Flutter community, we can even find exciting tools such as Flutter Inspector or Flutter WebView Plugin. In the case of React Native, Facebook has been investing heavily in this framework. Besides the fact that the development process is entirely open-source, Facebook has created various tools to make the developer’s life easier.

    Also, the more updates and versions come out, the more interest and appreciation the developer community shows. Let’s see how both frameworks stack up when it comes to community engagement.

    For React Native

    Facebook is the most significant contributor to the React Native framework, followed by individual community members.

    React Native has garnered over 1,162 contributors on GitHub since its launch in 2015. The number of commits (or changes) to the framework has increased over time. It increased from 1,183 commits in 2016 to 1,722 commits in 2017.

    This increase indicates that more and more developers are interested in improving React Native.

    Moreover, there are over 19.8k live projects where developers share their experiences to resolve existing issues. The official React Native website offers tutorials for beginners who want to get started quickly with developing applications for Android and iOS while also providing advanced users with the necessary documentation.

    Also, there are a few other platforms where you can ask your question to the community, meet other React Native developers, and gain new contacts:

    Reddit: https://www.reddit.com/r/reactnative/

    Stack Overflow: http://stackoverflow.com/questions/tagged/react-native

    Meetup: https://www.meetup.com/topics/react-native/

    Facebook: https://www.facebook.com/groups/reactnativecommunity/

    For Flutter

    The Flutter community is smaller than React Native’s. The main reason is that Flutter is relatively new and not yet widely used in production apps. But it’s not hard to see that its popularity is growing day by day. Flutter has excellent documentation with examples, articles, and tutorials that you can find online. It also keeps direct contact with its users through channels such as Stack Overflow and Google Groups.

    The Flutter community is growing at a steady pace, with around 662 contributors. The project has been forked 13.7k times by the community, and anybody can seek help there for development purposes.

    Here are some platforms to connect with other developers in the Flutter community:

    GitHub: https://github.com/flutter

    Google Groups: https://groups.google.com/g/flutter-announce

    Stack Overflow: https://stackoverflow.com/tags/flutter

    Reddit: https://www.reddit.com/r/FlutterDev/

    Discord: https://discord.com/invite/N7Yshp4

    Slack: https://fluttercommunity.slack.com/

    Learning curve

    The learning curve of Flutter is steeper than that of React Native. However, you can learn both frameworks within a reasonable time frame. So, let’s discuss what’s required to learn React Native and Flutter.

    React Native

    The language of React Native is JavaScript; anyone who knows how to write JS will be able to use this framework. But it’s different from building web applications, so if you are a mobile developer, getting the hang of things might take some time.

    However, React Native is relatively easy to learn for newbies. For starters, it offers a variety of resources, both online and offline. On the React website, users can find the documentation, guides, FAQs, and learning resources.

    Flutter

    Flutter has a somewhat steeper learning curve than React Native. You need to know some basic concepts of native Android or iOS development, and experience with Java or Kotlin for Android, or Objective-C or Swift for iOS, helps. It can be a challenge if you’re accustomed to languages without type casts and generics. However, once you’ve learned how to use it, it can speed up your development process.

    Flutter provides great documentation of its APIs that you can refer to. Since the framework is still new, some information might not be updated yet.

    Team size

    The central aspect of choosing between React Native and Flutter is the team size. To set a realistic expectation on the cost, you need to consider the type of application you will develop.

    React Native

    Technically, React Native’s core library can be implemented by a single developer, but that developer would have to build all the native modules alone, which is no easy task. The required team size for React Native depends on the complexity of the mobile app you want to build. If you plan to create a simple mobile app, such as a mobile-only version of a website, then one developer will be enough. However, if your project requires complex UI and animations, you will need more skilled and experienced developers.

    Flutter

    Team size is a very important factor for Flutter app development. The number of people on your team will depend on the requirements and the type of app you need to develop.

    Flutter makes it easy to use existing code that you might already have, or share code with other apps that you might already be building. You can even use Java or Kotlin if you prefer (though Dart is preferred).

    UI component

    When developing a cross-platform app, keep in mind that not all platforms behave identically. You will need to choose a library that keeps the app’s core elements consistent on each platform, and the framework needs an API that gives us access to native modules.

    React Native

    There are two aspects to implementing React Native in your app development. The first one is writing the apps in JavaScript. This is the easiest part since it’s somewhat similar to writing web apps. The second aspect is the integration of third-party modules that are not part of the core framework.

    Third-party modules are required because React Native does not support all native functionality. For instance, if you want to implement an alert box, you need to import the UIAlertController module from Apple’s SDK.

    This makes the React Native framework somewhat dependent on third-party libraries. There are lots of third-party libraries for React Native, and you can use them to add native app features that are not available in React Native itself, mostly maps, camera, sharing, and other native features.

    Flutter

    Flutter offers rich GUI components called widgets. A widget can be anything from simple text fields, buttons, switches, sliders, etc., to complex layouts that include multiple pages with split views, navigation bars, tab bars, etc., that are present in modern mobile apps.

    The Flutter toolkit is cross-platform and it has its own widgets, but it still needs third-party libraries to create applications. It also depends on the Android SDK and the iOS SDK for compilation and deployment. Developers can use any third-party library they want as long as it does not have any restrictions on open source licensing. Developers are also allowed to create their own libraries for Flutter app development.

    Testing Framework and Support

    React Native and Flutter have been used to develop many high-quality mobile applications. Of course, in any technology, a well-developed testing framework is essential.

    Based on this, we can see that both React Native and Flutter have a relatively mature testing framework. 

    React Native

    React Native uses the same UI components and APIs as a web application written in React.js. This means you can use the same frameworks and libraries for both platforms. Testing a React Native application can be more complex than a traditional web-based application because it relies heavily on the device itself. If you’re using a hybrid JavaScript approach, you can use tools like WebdriverIO or Appium to run the same tests across different browsers. Still, if you’re going native, you need to make sure you choose a tool with solid native support.

    Flutter

    Flutter has developed a testing framework that helps ensure your application is high quality. It is based on the premise of these three pillars: unit tests, widget tests, and integration tests. As you build out your Flutter applications, you can combine all three types of tests to ensure that your application works perfectly.

    Programming language

    One of the most important benefits of using Flutter and React Native to develop your mobile app is using a single programming language. This reduces the time required to hire developers and allows you to complete projects faster.

    React Native

    React Native bridges the gap between the native and JavaScript environments, allowing developers to build mobile apps that run across platforms using JavaScript. It makes mobile app development faster, as it only requires one language, JavaScript, to create a cross-platform mobile app. This gives web developers a significant advantage over native application developers: they already know JavaScript and can build a mobile app prototype in a couple of days, with no need to learn Java or Swift. They can even use the same JavaScript libraries they use at work, like Redux and Immutable.js.

    Flutter

    Flutter provides tools to create native mobile apps for both Android and iOS. Furthermore, it allows you to reuse code between the platforms because it supports code sharing using libraries written in Dart.

    You can also choose between two different ways of creating layouts for Flutter apps. The first one is similar to CSS, while the second one is more like HTML. Both are very powerful and simple to use. By default, you should use widgets built by the Flutter team, but if needed, you can also create your own custom widgets or modify existing ones.

    Tooling and DX

    While using either Flutter or React Native for mobile app development, it is likely that your development team will also be responsible for the CI/CD pipeline used to release new versions of your app.

    CI/CD support for Flutter and React Native is very similar at the moment. Both frameworks have good support for continuous integration, continuous delivery, and continuous deployment, and both offer a first-class experience for building, testing, and deploying apps.

    React Native

    The React Native framework has existed for some time now and is pretty mature. However, it still lacks documentation around continuous integration (CI) and continuous delivery (CD) solutions. Considering the maturity of the framework, we might expect to see more investment here. 

    Expo, meanwhile, is a development environment and build tool for React Native. It lets you develop and run React Native apps on your computer much like you would any other web app.

    Expo turns a React Native app into a single JavaScript bundle, which is then published to one of the app stores using Expo’s tools. It provides all the necessary tooling—like bundling, building, and hot reloading—and manages the technical details of publishing to each app store. Expo provides the tooling and environment so that you can develop and test your app in a familiar way, while it also takes care of deploying to production.

    Flutter

    Flutter’s open-source project is complete, so the next step is to develop a rich ecosystem around it. The good news is that Flutter provides a command-line interface and integrates with Xcode, Android Studio, IntelliJ IDEA, and other fully featured IDEs. This means Flutter can easily integrate with continuous integration/continuous deployment tools; some CI/CD tools for Flutter include Bitrise and Codemagic. These tools are free to use, although they offer paid plans with more features.

    Here is an example of a to-do list app built with React Native and Flutter.

    Flutter: https://github.com/velotiotech/simple_todo_flutter

    React Native: https://github.com/velotiotech/react-native-todo-example

    Conclusion

    As you can see, both Flutter and React Native are excellent cross-platform development tools that can deliver high-quality apps for iOS and Android. The choice between React Native and Flutter will depend on the complexity of the app you are looking to create, your team size, and your needs for the app. All in all, both of these frameworks are great options for developing cross-platform mobile applications.

  • Publish APIs For Your Customers: Deploy Serverless Developer Portal For Amazon API Gateway

    Amazon API Gateway is a fully managed service that allows you to create, secure, publish, test, and monitor your APIs. We often come across scenarios where the customers of these APIs expect a platform where they can learn about and discover the APIs available to them (often with examples).

    The Serverless Developer Portal is one such application that is used for developer engagement by making your APIs available to your customers. Further, your customers can use the developer portal to subscribe to an API, browse API documentation, test published APIs, monitor their API usage, and submit their feedback.

    This blog is a detailed step-by-step guide for deploying the Serverless Developer Portal for APIs that are managed via Amazon API Gateway.

    Advantages

    The users of Amazon API Gateway can be broadly categorized as –

    API Publishers – They can use the Serverless Developer Portal to expose and secure their APIs for customers, and it can be integrated with AWS Marketplace for monetization. Furthermore, they can customize the developer portal, including content, styling, logos, custom domains, etc.

    API Consumers – They could be frontend/backend developers, third-party customers, or simply students. They can explore the available APIs, invoke them, and go through the documentation to understand how each API behaves with different requests.

    Developer Portal Architecture

    We first need to establish a basic understanding of how the developer portal works. The Serverless Developer Portal is a serverless application built on a microservice architecture using Amazon API Gateway, Amazon Cognito, AWS Lambda, Amazon Simple Storage Service (S3), and Amazon CloudFront.

    The developer portal comprises multiple microservices and components as described in the following figure.

    Source: AWS

    There are a few key pieces in the above architecture:

    1. Identity Management: Amazon Cognito is the secure user directory of the developer portal, responsible for user management. It lets you configure triggers for registration, authentication, and confirmation, giving you more control over the authentication process.
    2. Business Logic: Amazon CloudFront is configured to serve your static content hosted in a private S3 bucket. The static content is built with React and interacts with the backend APIs that implement the business logic for various events.
    3. Catalog Management: The developer portal uses a catalog to render the APIs with their Swagger specifications on the APIs page. The catalog file (catalog.json in the S3 artifacts bucket) is updated whenever an API is published or removed. This is achieved by an S3 trigger on an AWS Lambda function that scans the contents of the catalog directory and generates the catalog for the developer portal.
    4. API Key Creation: An API key is created for each consumer at registration time. Whenever you subscribe to an API, the associated usage plan is attached to your API key, giving you the access defined by that plan. The Cognito user-to-API key mapping is stored in a DynamoDB table along with other registration details.
    5. Static Asset Uploader: An AWS Lambda function (static-asset-uploader) is responsible for updating and deploying the portal's static assets: content, logos, icons, CSS, JavaScript, and other media files.

    Let’s move forward to building and deploying a simple Serverless Developer Portal.

    Building Your API

    Start with deploying an API which can be accessed using API Gateway from 

    https://<api-id>.execute-api.region.amazonaws.com/stage

    If you do not have any such API available, create a simple application by jumping to the section, “API Performance Across the Globe,” on this blog.

    Setup custom domain name

    For professional projects, I recommend that you create a custom domain name, as it provides simpler and more intuitive URLs for your API users.

    Make sure your API Gateway domain name is updated in the Route53 record set created after you set up your custom domain name. 

    See more on Setting up custom domain names for REST APIs – Amazon API Gateway

    Enable CORS for an API Resource

    There are two ways you can enable CORS on a resource:

    1. Enable CORS Using the Console
    2. Enable CORS on a resource using the import API from Amazon API Gateway

    Let’s discuss the easiest way to do it, using the console.

    1. Open the API Gateway console.
    2. Select your API from the list.
    3. Choose a resource to enable CORS for all the methods under that resource.
      Alternatively, you can choose a single method under the resource to enable CORS for just that method.
    4. Select Enable CORS from the Actions drop-down menu.
    5. In the Enable CORS form, do the following:
      – Leave the Access-Control-Allow-Headers and Access-Control-Allow-Origin headers at their default values.
      – Click Enable CORS and replace existing CORS headers.
    6. Review the changes in the Confirm method changes popup, then choose Yes, overwrite existing values to apply your CORS settings.

    Once enabled, you can see a mock integration on the OPTIONS method for the selected resource. You must enable CORS for {proxy+} resources too.

    To verify that CORS is enabled on the API resource, send a preflight request to the OPTIONS method with curl:

    curl -v -X OPTIONS -H "Access-Control-Request-Method: POST" -H "Origin: http://example.com" https://api-id.execute-api.region.amazonaws.com/stage
    

    You should see a 200 OK response along with the CORS headers:

    < HTTP/1.1 200 OK
    < Content-Type: application/json
    < Content-Length: 0
    < Connection: keep-alive
    < Date: Mon, 13 Apr 2020 16:27:44 GMT
    < x-amzn-RequestId: a50b97b5-2437-436c-b99c-22e00bbe9430
    < Access-Control-Allow-Origin: *
    < Access-Control-Allow-Headers: Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token
    < x-amz-apigw-id: K7voBHDZIAMFu9g=
    < Access-Control-Allow-Methods: DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT
    < X-Cache: Miss from cloudfront
    < Via: 1.1 1c8c957c4a5bf1213bd57bd7d0ec6570.cloudfront.net (CloudFront)
    < X-Amz-Cf-Pop: BOM50-C1
    < X-Amz-Cf-Id: OmxFzV2-TH2BWPVyOohNrhNlJ-s1ZhYVKyoJaIrA_zyE9i0mRTYxOQ==

    Deploy Developer Portal

    There are two ways to deploy the developer portal for your API. 

    Using SAR

    An easy way is to deploy api-gateway-dev-portal directly from the AWS Serverless Application Repository.

    Note: If you intend to upgrade your developer portal to a major version, refer to the Upgrading Instructions, which are currently under development.

    Using AWS SAM

    1. Ensure that you have the latest AWS CLI and AWS SAM CLI installed and configured.
    2. Download or clone the API Gateway Serverless Developer Portal repository.
    3. Update the CloudFormation template file: cloudformation/template.yaml.

    Parameters you must configure and verify include:

    • ArtifactsS3BucketName
    • DevPortalSiteS3BucketName
    • DevPortalCustomersTableName
    • DevPortalPreLoginAccountsTableName
    • DevPortalAdminEmail
    • DevPortalFeedbackTableName
    • CognitoIdentityPoolName
    • CognitoDomainNameOrPrefix
    • CustomDomainName
    • CustomDomainNameAcmCertArn
    • UseRoute53Nameservers
    • AccountRegistrationMode

    You can view your template file in AWS CloudFormation Designer to get a better idea of all the components/services involved and how they are connected.

    See Developer portal settings for more information about parameters.

    4. Replace the static files in your project with the ones you would like to use:
      dev-portal/public/custom-content
      lambdas/static-asset-uploader/build
      The api-logo directory contains the logos shown on the APIs page (in PNG format). The portal looks for an api-id_stage.png file when rendering an API page; if it isn’t found, it falls back to the default logo, default.png.
      content-fragments holds the markdown files that make up the content of the portal’s various pages.
      Other static assets, including favicon.ico, home-image.png, and nav-logo.png, appear on your portal.
    5. Create a ZIP file of your code and dependencies, and upload it to Amazon S3. Running the command below creates an AWS SAM template, packaged.yaml, replacing references to local artifacts with the Amazon S3 locations where the command uploaded them:
    sam package --template-file ./cloudformation/template.yaml --output-template-file ./cloudformation/packaged.yaml --s3-bucket {your-lambda-artifacts-bucket-name}

    6. Run the following command from the project root to deploy your portal, replacing:
      – {your-template-bucket-name}
      with the name of your Amazon S3 bucket.
      – {custom-prefix}
      with a prefix that is globally unique.
      – {cognito-domain-or-prefix}
      with a unique string.
    sam deploy --template-file ./cloudformation/packaged.yaml --s3-bucket {your-template-bucket-name} --stack-name "{custom-prefix}-dev-portal" --capabilities CAPABILITY_NAMED_IAM

    Note: Ensure that you have the required privileges to make deployments, as the deployment process attempts to create various resources, such as AWS Lambda functions, a Cognito user pool, IAM roles, API Gateway, a CloudFront distribution, etc.

    After your developer portal has been fully deployed, you can get its URL as follows.

    1. Open the AWS CloudFormation console.
    2. Select the stack you created above.
    3. Open the Outputs section. The URL for the developer portal is specified in the WebSiteURL property.

    Create Usage Plan

    Create a usage plan to list your API under the subscribable APIs category, allowing consumers to access the API using their API keys in the developer portal. Ensure that the API Gateway stage is configured for the usage plan.

    Publishing an API

    Only administrators have permission to publish an API. To create an administrator account for your developer portal:

    1. Go to the WebSiteURL obtained after the successful deployment. 

    2. On the top right of the home page click on Register.

    Source: GitHub

    3. Fill in the registration form and hit Sign up.

    4. Enter the confirmation code sent to the email address you provided in the previous step.

    5. Promote the user to administrator by adding it to the AdminGroup:

    • Open Amazon Cognito User Pool console.
    • Select the User Pool created for your developer portal.
    • From the General Settings > Users and Groups page, select the User you want to promote as Administrator.
    • Click on Add to group and then select the Admin group from the dropdown and confirm.

    6. Log in again to pick up the administrator role. Then click on the Admin Panel and choose the API you wish to publish from the APIs list.

    Setting up an account

    The signup process depends on the registration mode selected for the developer portal. 

    For request registration mode, you need to wait for the Administrator to approve your registration request.

    For invite registration mode, you can only register on the portal when invited by the portal administrator. 

    Subscribing to an API

    1. Sign in to the developer portal.
    2. Navigate to the Dashboard page and Copy your API Key.
    3. Go to APIs Page to see a list of published APIs.
    4. Select an API you wish to subscribe to and hit the Subscribe button.

    Tips

    1. When a user subscribes to an API, every API under that usage plan becomes accessible to them, whether or not it is published in the portal.
    2. Whenever you publish an API, its catalog entry is exported from the API Gateway resource documentation. You can customize this workflow or override the catalog Swagger definition JSON in the S3 bucket defined by ArtifactsS3BucketName, under /catalog/<api-id>_<stage>.json.
    3. For backend APIs, CORS requests are allowed only from the custom domain name selected for your developer portal.
    4. Be sure to set the CORS response headers on your published APIs so they can be invoked from the developer portal.

    Summary

    You’ve seen how to deploy a Serverless Developer Portal and publish an API. If you are creating a serverless application for the first time, you might want to read more on serverless computing and Amazon API Gateway before you get started.

    Start building your own developer portal. To learn more about distributing your API Gateway APIs to your customers, follow this AWS guide.

  • Getting Started With Golang Channels! Here’s Everything You Need to Know

    We live in a world where speed is important. With cutting-edge technology coming into the telecommunications and software industry, we expect to get things done quickly. We want to develop applications that are fast, can process high volumes of data and requests, and keep the end-user happy. 

    This is great, but of course, it’s easier said than done. That’s why concurrency and parallelism are important in application development. We must process data as fast as possible. Every programming language has its own way of dealing with this, and we will see how Golang does it.  

    Now, many of us choose Golang because of its concurrency, and the inclusion of goroutines and channels has massively shaped how that concurrency is achieved.

    This blog will cover channels, how they work internally, and their key components. To benefit the most from this content, it helps to know a little about goroutines and channels, as this blog gets into the internals of channels. If you don’t know anything about them, don’t worry; we’ll start with an introduction to channels and then see how they operate.

    What are channels?

    Normally, when we talk about channels, we think of the ones in systems like RabbitMQ, Redis, AWS SQS, and so on. Anyone with little or no Golang knowledge might think the same. But channels in Golang are different from a work-queue system. In work-queue systems like those, there are TCP connections to the channels, but in Go, a channel is a data structure, or even a design pattern, as we’ll explain later. So, what exactly are channels in Golang?

    Channels are the medium through which goroutines can communicate with each other. In simple terms, a channel is a pipe that allows a goroutine to either put or read the data. 

    What are goroutines?

    So, a channel is a communication medium for goroutines. Now, let’s give a quick overview of what goroutines are. If you know this already, feel free to skip this section.

    Technically, a goroutine is a function that executes independently in a concurrent fashion. In simple terms, it’s a lightweight thread managed by the Go runtime.

    You can create a goroutine by using the go keyword before a function call.

    Let’s say there’s a function called PrintHello, like this:

    func PrintHello() {
       fmt.Println("Hello")
    }

    You can make this into a goroutine simply by calling this function, as below:

    //create goroutine
     go PrintHello()

    Now, let’s head back to channels, as that’s the important topic of this blog. 

    How to define a channel?

    Let’s look at the syntax for declaring a channel, using the chan keyword provided by Go.

    You must specify the element type, as a channel can only carry values of a single type.

    //create channel
     var c chan int

    Very simple! But this is not useful yet, since it creates a nil channel. Let’s print it and see.

    fmt.Println(c)
    fmt.Printf("Type of channel: %T", c)
    <nil>
    Type of channel: chan int

    As you can see, we have just declared the channel, but we can’t transport data through it. So, to create a useful channel, we must use the make function.

    //create channel
    c := make(chan int)
    fmt.Printf("Type of `c`: %T\n", c)
    fmt.Printf("Value of `c` is %v\n", c)
     
    Type of `c`: chan int
    Value of `c` is 0xc000022120

    As you may notice here, the value of c is a memory address. Keep in mind that channels are nothing but pointers. That’s why we can pass them to goroutines, and we can easily put the data or read the data. Now, let’s quickly see how to read and write the data to a channel.

    Read and write operations on a channel:

    Go provides an easy way to read and write data on a channel using the arrow (<-) operator.

    c <- 10

    This is the simple syntax to put a value into our channel. The same arrow position, as in chan<- int, is used to declare a “send”-only channel type.

    And to get/read the data from channel, we do this:

    <-c

    Likewise, <-chan int declares a “receive”-only channel type.

    Let’s see a simple program to use the channels.

    func printChannelData(c chan int) {
       fmt.Println("Data in channel is: ", <-c)
    }

    This simple function just prints whatever data is in the channel. Now, let’s see the main function that will push the data into the channel.

    func main() {
       fmt.Println("Main started...")
       //create channel of int
       c := make(chan int)
       // call to goroutine
       go printChannelData(c)
       // put the data in channel
       c <- 10
       fmt.Println("Main ended...")
    }

    This yields the output:

    Main started...
    Data in channel is:  10
    Main ended...

    Let’s talk about the execution of the program. 

    1. We declared a printChannelData function, which accepts a channel c of data type integer. In this function, we are just reading data from channel c and printing it.

    2. The main function first prints “Main started...” to the console.

    3. Then, we create the channel c of type int using the make function.

    4. We now pass the channel to the function printChannelData, and as we saw earlier, it’s a goroutine. 

    5. At this point, there are two goroutines. One is the main goroutine, and the other is what we have declared. 

    6. Now, we are putting 10 as data in the channel, and at this point, our main goroutine is blocked and waiting for some other goroutine to read the data. The reader, in this case, is the printChannelData goroutine, which was previously blocked because there was no data in the channel. Now that we’ve pushed the data onto the channel, the Go scheduler (more on this later in the blog) now schedules printChannelData goroutine, and it will read and print the value from the channel.

    7. After that, the main goroutine resumes, prints “Main ended...”, and the program exits.

    So, what’s happening here? Basically, blocking and unblocking of goroutines is done by the Go scheduler. You can’t read from a channel unless there’s data in it, which is why our printChannelData goroutine was blocked in the first place; likewise, written data has to be read before the writer can resume, which is what happened to our main goroutine.

    With this, let’s see how channels operate internally. 

    Internals of channels:

    Until now, we have seen how to define a goroutine, how to declare a channel, and how to read and write data through a channel with a very simple example. Now, let’s look at how Go handles this blocking and unblocking nature internally. But before that, let’s quickly see the types of channels.

    Types of channels:

    There are two basic types of channels: buffered channels and unbuffered channels. The above example illustrates the behaviour of unbuffered channels. Let’s quickly see the definition of these:

    • Unbuffered channel: This is what we have seen above. An unbuffered channel has no capacity, so a send blocks until another goroutine is ready to receive the value. That’s why our main goroutine got blocked when it put data into the channel.
    • Buffered channel: In a buffered channel, we specify the capacity of the channel. The syntax is very simple: c := make(chan int, 10). The second argument to the make function is the channel’s capacity, so we can put up to ten elements into the channel without blocking. Once the buffer is full, further sends block until a receiver goroutine starts consuming the data.

    Properties of a channel:

    A channel does a lot of things internally and has the following properties:

    • Channels are goroutine-safe.
    • Channels can store and pass values between goroutines.
    • Channels provide FIFO semantics.
    • Channels cause goroutines to block and unblock, which we just learned about. 

    As we walk through the internals of a channel, you’ll see where the first three properties come from.

    Channel Structure:

    As we learned from the definition, a channel is a data structure. Looking at the properties above, we need a mechanism that handles goroutines in a synchronized manner and with FIFO semantics. This can be solved with a queue plus a lock, and that’s essentially how a channel behaves internally. It has a circular queue, a lock, and some other fields.

    When we write c := make(chan int, 10), Go creates a channel using the hchan struct, which has the following fields:

    type hchan struct {
       qcount   uint           // total data in the queue
       dataqsiz uint           // size of the circular queue
       buf      unsafe.Pointer // points to an array of dataqsiz elements
       elemsize uint16
       closed   uint32
       elemtype *_type // element type
       sendx    uint   // send index
       recvx    uint   // receive index
       recvq    waitq  // list of recv waiters
       sendq    waitq  // list of send waiters
     
       // lock protects all fields in hchan, as well as several
       // fields in sudogs blocked on this channel.
       //
       // Do not change another G's status while holding this lock
       // (in particular, do not ready a G), as this can deadlock
       // with stack shrinking.
       lock mutex
    }

    (Above info taken from golang.org.)

    This is what a channel is internally. Let’s see one-by-one what these fields are. 

    qcount holds the count of items/data currently in the queue.

    dataqsiz is the size of the circular queue. It is used for buffered channels and corresponds to the second parameter passed to the make function.

    elemsize is the size of a single element in the channel.

    buf is the actual circular queue where the data is stored when we use buffered channels.

    closed indicates whether the channel is closed. The syntax to close a channel is close(<channel_name>). The field defaults to 0 when the channel is created and is set to 1 when the channel is closed.

    sendx and recvx are the current send and receive indices into the buffer (circular queue). As we add data to a buffered channel, sendx advances, and as we receive data, recvx advances.

    recvq and sendq are the waiting queues for blocked goroutines that are trying to read data from or write data to the channel.

    lock is a mutex that locks the channel for each read and write operation, so concurrent goroutines can’t corrupt the channel’s state.

    These are the important fields of the hchan struct, which comes into the picture when we create a channel. The hchan struct resides on the heap, and the make function gives us a pointer to that location. There’s another struct, known as sudog, which also comes into the picture, but we’ll learn more about that later. Now, let’s see what happens when we write and read data.

    Read and write operations on a channel:

    We are considering buffered channels here. When one goroutine, let’s say G1, wants to write data onto a channel, it does the following:

    • Acquire the lock: As we saw before, if we want to modify the channel, or hchan struct, we must acquire a lock. So, G1 in this case, will acquire a lock before writing the data.
    • Perform enqueue operation: We now know that buf is actually a circular queue that holds the data. But before enqueuing the data, goroutine does a memory copy operation on the data and puts the copy into the buffer slot. We will see an example of this.
    • Release the lock: After performing an enqueue operation, it just releases the lock and goes on performing further executions.

    When a goroutine, let’s say G2, reads the above data, it performs the same operations, except that instead of an enqueue it performs a dequeue, again with a memory copy. This means the goroutines share no memory other than the hchan struct itself, which is protected by the mutex; everything else is passed around as copies.

    This satisfies the famous Golang proverb: “Do not communicate by sharing memory; instead, share memory by communicating.”

    Now, let’s look at a small example of this memory copy operation.

    func printData(c chan int) {
       time.Sleep(time.Second * 3)
       data := <-c
       fmt.Println("Data in channel is: ", data)
    }
     
    func main() {
       fmt.Println("Main started...")
       var a = 10
       b := &a
       //create channel
       c := make(chan int)
       go printData(c)
       fmt.Println("Value of b before putting into channel", *b)
       c <- *b // a copy of the value 10 is made here
       a = 20
       fmt.Println("Updated value of a:", a)
       fmt.Println("Updated value of b:", *b)
       time.Sleep(time.Second * 2)
       fmt.Println("Main ended...")
    }

    And the output of this is:

    Main started...
    Value of b before putting into channel 10
    Updated value of a: 20
    Updated value of b: 20
    Data in channel is:  10
    Main ended...

    So, as you can see, we sent the value of variable a into the channel and then modified it before the receiver printed it. Still, the value received from the channel is 10, because the main goroutine performed a memory copy of the value at the moment it was put onto the channel. So, even if you change the variable later, the value in the channel does not change.

    Write in case of buffer overflow:

    We’ve seen that a goroutine can add data up to the buffer capacity, but what happens when the buffer is full? When the buffer has no more space and a goroutine, let’s say G1, wants to write data, the Go scheduler blocks/pauses G1, which waits until a receive happens from another goroutine, say G2. Since we are talking about buffered channels, once G2 consumes data and frees up space, the Go scheduler makes G1 runnable again. Remember this scenario, as we’ll use G1 and G2 frequently from here onwards.

    We know that goroutines work in a pause-and-resume fashion, but who controls it? As you might have guessed, the Go scheduler does the magic here. The Go scheduler does a few things that are very important for goroutines and channels.

    Go Runtime Scheduler

    You may already know this, but goroutines are user-space threads. The OS could schedule and manage threads directly, but that would be significant overhead for the OS, considering the properties that threads carry.

    That’s why the Go scheduler handles the goroutines, and it basically multiplexes the goroutines on the OS threads. Let’s see how.

    There are scheduling models, like 1:1, N:1, etc., but the Go scheduler uses an M:N scheduling model.

    Basically, this means there are M goroutines and N OS threads, and the scheduler multiplexes the M goroutines onto the N OS threads. For example:

    (Diagram: six goroutines multiplexed across OS Thread 1 and OS Thread 2.)

    As you can see, there are two OS threads, and the scheduler is running six goroutines by swapping them as needed. The Go scheduler has three structures as below:

    • M: M represents an OS thread, which is entirely managed by the OS and is similar to a POSIX thread. M stands for machine.
    • G: G represents a goroutine. A goroutine has a resizable stack and carries scheduling information, such as any channel it’s blocked on.
    • P: P is a context for scheduling. It is what multiplexes the M goroutines onto the N OS threads; an M must hold a P to run Go code. This is the important part, and that’s why P stands for processor.

    Diagrammatically, we can represent the scheduler as:

    (This diagram is referenced from The Go scheduler.)

    The processor P holds a queue of runnable goroutines, or simply a run queue.

    So, anytime a goroutine (G) wants to run on an OS thread (M), that OS thread must first get hold of a P, i.e., the context. This machinery comes into play when one goroutine needs to be paused and another must run. One such case is a buffered channel: when the buffer is full, we pause the sender goroutine and activate the receiver goroutine.

    Imagine the above scenario: G1 is a sender that tries to send on a full buffered channel, and G2 is a receiver goroutine. When G1 tries to send on the full channel, it calls into the runtime Go scheduler with a gopark call. The scheduler, via M, then changes the state of G1 from running to waiting and schedules another goroutine from the run queue, say G2.

    This transition diagram might help you better understand:

    As you can see, after the gopark call, G1 is in a waiting state and G2 is running. We haven’t paused the OS thread (M); instead, we’ve blocked the goroutine and scheduled another one. So, we are using maximum throughput of an OS thread. The context switching of goroutine is handled by the scheduler (P), and because of this, it adds complexity to the scheduler. 

    This is great. But how do we resume G1, since it still wants to put its data on the channel? Before G1 issues the gopark call, it records its state on the hchan struct, i.e., our channel, in the sendq field. Remember the sendq and recvq fields? They hold the waiting senders and receivers.

    Now, G1 stores its state as a sudog struct. A sudog represents a goroutine waiting on an element. The sudog struct has these fields:

    type sudog struct{
       g *g
       isSelect bool
       next *sudog
       prev *sudog
       elem unsafe.Pointer //data element
       ...
    }

    g is the waiting goroutine; next and prev are pointers to the next and previous sudog in the waiting list, if any; and elem is the actual element the goroutine is waiting on.

    So, considering our example, G1 is basically waiting to write the data so it will create a state of itself, which we’ll call sudog as below:

    Cool. Now we know what operations G1 performs before going into the waiting state. Currently, G2 is in a running state, and it will start consuming the channel data.

    As soon as G2 receives the first piece of data, it checks the sendq of the hchan struct and finds that G1 is waiting to push data. Now, here is the interesting thing: G2 copies G1’s pending data into the buffer, calls into the scheduler, and the scheduler moves G1 from the waiting state to runnable, adds G1 to the run queue, and returns to G2. This call, made by G2 on G1’s behalf, is known as goready. Impressive, right? Golang behaves like this because when G1 eventually runs, it shouldn’t have to acquire the lock and push the data itself; that extra overhead is absorbed by G2. That’s why the sudog holds both the data and the details of the waiting goroutine. So, the state of G1 is like this:

    As you can see, G1 is placed on the run queue. Now we know what the goroutines and the Go scheduler do in the case of buffered channels. In this example, the sender goroutine came first, but what if the receiver goroutine comes first? What if there’s no data in the channel and the receiver goroutine executes first? The receiver goroutine (G2) will create a sudog in recvq on the hchan struct. Things work a little differently when the sender goroutine (G1) activates: it first checks whether any goroutines are waiting in recvq, and if so, it copies the data directly into the waiting goroutine’s (G2’s) memory location, i.e., the elem field of its sudog.

    This is incredible! Instead of writing to the buffer, the sender writes the data directly into the waiting goroutine’s stack, simply to avoid extra work for G2 when it wakes up. Each goroutine has its own resizable stack, and goroutines never use each other’s space except in this channel case. So far, we have seen how sends and receives happen on a buffered channel.

    This may have been confusing, so let me give you the summary of the send operation. 

    Summary of a send operation for buffered channels:

    1. Acquire lock on the entire channel or the hchan struct.
    2. Check if there’s any sudog or a waiting goroutine in the recvq. If so, then put the element directly into its stack. We saw this just now with G1 writing to G2’s stack.
    3. If recvq is empty, then check whether the buffer has space. If yes, then do a memory copy of the data. 
    4. If the buffer is full, then create a sudog under sendq of the hchan struct, which will have details, like a currently executing goroutine and the data to put on the channel.

    We have seen all the above steps in detail, but concentrate on the last point. 

    It’s quite similar for an unbuffered channel. We know that for unbuffered channels, every write must be matched by a read, and vice versa.

    So, keep in mind that an unbuffered channel always works like a direct send. So, a summary of a read and write operation in unbuffered channel could be:

    • Sender first: At this point, there’s no receiver, so the sender will create a sudog of itself and the receiver will receive the value from the sudog.
    • Receiver first: The receiver will create a sudog in recvq, and the sender will directly put the data in the receiver’s stack.

    With this, we have covered the basics of channels. We’ve learned how read and write operates in a buffered and unbuffered channel, and we talked about the Go runtime scheduler.

    Conclusion:

    Channels are a very interesting Go topic. They seem difficult to understand, but once you learn the mechanism, they're very powerful and help you achieve concurrency in your applications. Hopefully, this blog has helped your understanding of the fundamental concepts and operations of channels.

  • Automating test cases for text-messaging (SMS) feature of your application was never so easy

    Almost all the applications that you work on or deal with throughout the day use SMS (short messaging service) as an efficient and effective way to communicate with end users.

    Some very common use-cases include: 

    • Receiving an OTP for authenticating your login
    • Getting deals from the likes of Flipkart and Amazon informing you about the latest sale
    • Getting reminder notifications for a doctor’s appointment
    • Getting details of your debit and credit transactions

    The practical use cases for SMS can be far-reaching.

    Even though SMS integration forms an integral part of many applications, it is often left out of automation due to the limitations and complexities involved in automating it via web automation tools like Selenium.

    Teams often opt for verifying these sets of test cases manually, which, even though it is important for catching bugs early, poses some real challenges.

    Pitfalls with Manual Testing

    With these limitations, you obviously do not want your application sending faulty text messages after a major release.

    Automation Testing … #theSaviour

    To overcome the limitations of manual testing, delegating your task to a machine comes in handy.

    Now that we have talked about the WHY, let’s look at HOW the feature can be automated.
    Technically, you shouldn’t (and mostly can’t) use Selenium to read an SMS on a mobile device.
    So, we were looking for a third-party library that:

    • Is easy to integrate with the existing code base
    • Supports a range of languages
    • Does not involve highly complex code and focuses on the problem at hand
    • Supports both incoming and outgoing messages

    After a lot of research, we settled on Twilio.

    In this article, we will look at an example of working with Twilio APIs to read SMS messages and eventually use them to automate SMS flows.

    Twilio supports a bunch of different languages. For this article, we stuck with Node.js.

    Account Setup

    Registration

    To start working with the service, you need to register.

    Once that is done, Twilio will prompt you with a bunch of simple questions to understand why you want to use their service.

    Twilio Dashboard

    Upon signing up, you receive a trial balance of $15.50 for your usage. This can be used for sending and receiving text messages. A unique Account SID and Auth Token are also generated for your account.

    ‍Buy a Number


    Navigate to the buy a number link under Phone Numbers > Manage and purchase a number that you would eventually be using in your automation scripts for receiving text messages from the application.

    Note – for the free trial, Twilio does not support Indian numbers (+91).

    Code Setup

    Install Twilio in your code base:

    $ npm install twilio

    Code snippet

    For simplicity, just pass the accountSid and authToken that you receive from the Dashboard Console to the twilio library. This returns a client object that you can use to list all the messages in your inbox.

    const accountSid = 'AC13fb4ed9a621140e19581a14472a75b0'
    const authToken = 'fac9498ac36ac29e8dae647d35624af7'
    const client = require('twilio')(accountSid, authToken)
    let messageBody
    let messageContent
    let sentFrom
    let sentTo
    let OTP
    describe('My Login application', () => {
      it('Read Text Message', async () => {
        const username = $('#login_field');
        const pass = $('#password');
        const signInBtn = $('input[type="submit"]');
        const otpField = $('#otp');
        const verifyBtn = $(
          'form[action="/sessions/two-factor"] button[type="submit"]'
        );
        browser.url('https://github.com/login');
        username.setValue('your_email@mail.com');
        pass.setValue('your_pass123');
        signInBtn.click();
        // list() resolves to an array; take the most recent message
        const [latestMsg] = await client.messages.list({ limit: 1 })

        messageContent = JSON.stringify(latestMsg, null, "\t")
        messageBody = JSON.stringify(latestMsg.body)
        sentFrom = JSON.stringify(latestMsg.from)
        sentTo = JSON.stringify(latestMsg.to)
        OTP = latestMsg.body.match(/\d+/)[0]
        otpField.setValue(OTP);
        verifyBtn.click();
        expect(browser).toHaveUrl('https://github.com/');
      });
    })

    List of other APIs to read an SMS provided by Twilio

    List all messages: Using this API, you can retrieve all the messages from your account.

    const accountSid = process.env.TWILIO_ACCOUNT_SID;
    const authToken = process.env.TWILIO_AUTH_TOKEN;
    const client = require('twilio')(accountSid, authToken);
    
    client.messages.list({limit: 20})
                   .then(messages => messages.forEach(m => console.log(m.sid)));

    List messages matching filter criteria: If you’d like Twilio to narrow down this list of messages for you, you can do so by specifying a To number, a From number, and a DateSent.

    const accountSid = process.env.TWILIO_ACCOUNT_SID;
    const authToken = process.env.TWILIO_AUTH_TOKEN;
    const client = require('twilio')(accountSid, authToken);
    
    client.messages
          .list({
             dateSent: new Date(Date.UTC(2016, 7, 31, 0, 0, 0)),
             from: '+15017122661',
             to: '+15558675310',
             limit: 20
           })
          .then(messages => messages.forEach(m => console.log(m.sid)));

    Get a message: If you know the message SID (i.e., the message’s unique identifier), then you can retrieve that specific message directly.

    const accountSid = process.env.TWILIO_ACCOUNT_SID;
    const authToken = process.env.TWILIO_AUTH_TOKEN;
    const client = require('twilio')(accountSid, authToken);
    
    client.messages('MM800f449d0399ed014aae2bcc0cc2f2ec')
          .fetch()
          .then(message => console.log(message.to));

    Delete a message : If you want to delete a message from history, you can easily do so by deleting the Message instance resource.

    const accountSid = process.env.TWILIO_ACCOUNT_SID;
    const authToken = process.env.TWILIO_AUTH_TOKEN;
    const client = require('twilio')(accountSid, authToken);
    
    client.messages('MM800f449d0399ed014aae2bcc0cc2f2ec').remove();

    Limitations with a Trial Twilio Account

    • The trial version does not support Indian numbers (+91).
    • The trial version just provides an initial balance of $15.50.
      This is sufficient for a use case that involves only receiving messages on your Twilio number, but if the use case requires sending messages back from the Twilio number, a paid version solves the purpose.
    • Messages sent via a short code (557766) are not received on the Twilio number.
      Only long codes are accepted in the trial version.
    • You can buy only a single number with the trial version. If purchasing multiple numbers is required, the user may have to switch to a paid version.

    Conclusion

    In a nutshell, we saw how important it is to thoroughly verify the SMS functionality of our application since it serves as one of the primary ways of communicating with the end users.
    We also saw what the limitations are with following the traditional manual testing approach and how automating SMS scenarios would help us deliver high-quality products.
    Finally, we demonstrated a feasible, efficient, and easy-to-use way to automate SMS test scenarios using Twilio APIs.

    Hope this was a useful read and that you will now be able to easily automate SMS scenarios.
    Happy testing… Do like and share …

  • UI Automation and API Testing with Cypress – A Step-by-step Guide

    These days, most web applications are driven by JavaScript frameworks that include front-end and back-end development. So, we need to have a robust QA automation framework that covers APIs as well as end-to-end tests (E2E tests). These tests check the user flow over a web application and confirm whether it meets the requirement. 

    Full-stack QA testing is critical in stabilizing APIs and UI, ensuring a high-quality product that satisfies user needs.

    To test UI and APIs independently, we can use several tools and frameworks, like Selenium, Postman, Rest-Assured, Nightwatch, Katalon Studio, and Jest, but this article will be focusing on Cypress.

    We will cover how we can do full stack QA testing using Cypress. 

    What exactly is Cypress?

    Cypress is a free, open-source, locally installed Test Runner and Dashboard Service for recording your tests. It is a frontend and backend test automation tool built for the next generation of modern web applications.

    It is useful for developers as well as QA engineers to test real-life applications developed in React.js, Angular.js, Node.js, Vue.js, and other front-end technologies.

    How does Cypress Work Functionally?

    Cypress is executed in the same run loop as your application. Behind Cypress is a Node.js server process.

    Most testing tools operate by running outside of the browser and executing remote commands across the network. Cypress does the opposite, while at the same time working outside of the browser for tasks that require higher privileges.

    Cypress takes snapshots of your application and enables you to time travel back to the state it was in when commands ran. 

    Why Use Cypress Over Other Automation Frameworks?

    Cypress is a JavaScript test automation solution for web applications.

    This all-in-one testing framework provides a chai assertion library with mocking and stubbing all without Selenium. Moreover, it supports the Mocha test framework, which can be used to develop web test automations.

    Key Features of Cypress:

    • Mocking – By mocking the server response, it has the ability to test edge cases.
    • Time Travel – It takes snapshots as your tests run, allowing users to go back and forth in time during test scenarios.
    • Flake Resistant – It automatically waits for commands and assertions before moving on.
    • Spies, Stubs, and Clocks – It can verify and control the behavior of functions, server responses, or timers.
    • Real Time Reloads – It automatically reloads whenever you make changes to your tests.
    • Consistent Results – It gives consistent and reliable tests that aren’t flaky.
    • Network Traffic Control – Easily control, stub, and test edge cases without involving your server.
    • Automatic Waiting – It automatically waits for commands and assertions without ever adding waits or sleeps to your tests. No more async hell. 
    • Screenshots and Videos – View screenshots taken automatically on failure, or videos of your entire test suite when it has run smoothly.
    • Debuggability – Readable error messages help you to debug quickly.

       


    Fig:- How Cypress works 

     

    Installation and Configuration of the Cypress Framework

    To set up the framework, initialize the project and install Cypress:

    $ npm init -y
    $ npm install cypress --save-dev

    This will also create a package.json file for the test settings and project dependencies.

    Test files should follow the naming convention test_name.spec.js.

    • To run the Cypress test, use the following command:
    $ npx cypress run --spec "cypress/integration/examples/tests/e2e_test.spec.js"

    • This is how the folder structure will look: 
    Fig:- Cypress Framework Outline

    REST API Testing Using Cypress

    It’s important to test APIs along with E2E UI tests; doing so also helps stabilize APIs and prepare data for interacting with third-party servers.

    Cypress provides the functionality to make an HTTP request.

    Using Cypress’s cy.request() method, we can validate GET, POST, PUT, and DELETE API endpoints.

    Here are some examples: 

    describe("Testing API Endpoints Using Cypress", () => {

      it("Test GET Request", () => {
        cy.request("http://localhost:3000/api/posts/1")
          .then((response) => {
            expect(response.body).to.have.property('code', 200);
          })
      })

      it("Test POST Request", () => {
        cy.request({
          method: 'POST',
          url: 'http://localhost:3000/api/posts',
          body: {
            "id": 2,
            "title": "Automation"
          }
        }).then((response) => {
          expect(response.body).has.property("title", "Automation");
        })
      })

      it("Test PUT Request", () => {
        cy.request({
          method: 'PUT',
          url: 'http://localhost:3000/api/posts/2',
          body: {
            "id": 2,
            "title": "Test Automation"
          }
        }).then((response) => {
          expect(response.body).has.property("title", "Test Automation");
        })
      })

      it("Test DELETE Request", () => {
        cy.request({
          method: 'DELETE',
          url: 'http://localhost:3000/api/posts/2'
        }).then((response) => {
          expect(response.body).to.be.empty;
        })
      })

    })

    How to Write End-to-End UI Tests Using Cypress

    With Cypress end-to-end testing, you can replicate user behaviour on your application and cross-check whether everything is working as expected. In this section, we’ll look at useful ways to write E2E tests on the front end using Cypress.

    Here is an example of how to write an E2E test in Cypress:

    How to Pass Test Case Using Cypress

    1. Navigate to the Google website
    2. Click on the search input field 
    3. Type Cypress and press enter  
    4. The search results should contain Cypress

    How to Fail Test Case Using Cypress

    1. Navigate to the wrong URL http://localhost:8080
    2. Click on the search input field 
    3. Type Cypress and press enter
    4. The search results should contain Cypress  
    describe('Testing Google Search', () => {

         // To Pass the Test Case 1

         it('I can search for Valid Content on Google', () => {

              cy.visit('https://www.google.com');
              cy.get("input[title='Search']").type('Cypress').type('{enter}');
              cy.contains('https://www.cypress.io');

         });

         // To Fail the Test Case 2

         it('I can navigate to Wrong URL', () => {

              cy.visit('http://localhost:8080');
              cy.get("input[title='Search']").type('Cypress').type('{enter}');
              cy.contains('https://www.cypress.io');

         });

    });

    Cross Browser Testing Using Cypress 

    Cypress can run tests across the latest releases of multiple browsers. It currently has support for Chrome and Firefox (beta). 

    Cypress supports the following browsers:

    • Google Chrome
    • Firefox (beta)
    • Chromium
    • Edge
    • Electron

    Browsers can be specified via the --browser flag when using the run command to launch Cypress. npm scripts can be used as shortcuts in package.json to launch Cypress with a specific browser more conveniently.

    To run tests on browsers:

    $ npx cypress run --browser chrome --spec "cypress/integration/examples/tests/e2e_test.spec.js"

    Here is an example of a package.json file to show how to define the npm script:

    "scripts": {
      "cy:run:chrome": "cypress run --browser chrome",
      "cy:run:firefox": "cypress run --browser firefox"
    }

    Cypress Reporting

    Reporter options can be specified in the cypress.json configuration file or via CLI options. Cypress supports the following reporting capabilities:

    • Mocha Built-in Reporting – As Cypress is built on top of Mocha, any reporter built into Mocha can be used.
    • JUnit and TeamCity – These third-party Mocha reporters come bundled with Cypress.

    To install additional dependencies for report generation: 

    Installing Mochawesome:

    $ npm install mochawesome

    Or installing the JUnit reporter (the standalone package is mocha-junit-reporter):

    $ npm install mocha-junit-reporter

    Examples of a config file and CLI for the Mochawesome report 

    • Cypress.json config file:
    {
        "reporter": "mochawesome",
        "reporterOptions":
        {
            "reportDir": "cypress/results",
            "overwrite": false,
            "html": true,
            "json": true
        }
    }

    • CLI Reporting:
    $ npx cypress run --reporter mochawesome --spec "cypress/integration/examples/tests/e2e_test.spec.js"

    Examples of a config File and CLI for the JUnit Report: 

    • Cypress.json config file for JUnit: 
    {
        "reporter": "junit",
        "reporterOptions": 
        {
            "reportDir": "cypress/results",
            "mochaFile": "results/my-test-output.xml",
            "toConsole": true
        }
    }

    • CLI Reporting:
    $ npx cypress run --reporter junit --reporter-options "mochaFile=results/my-test-output.xml,toConsole=true"

    Fig:- Collapsed View of Mochawesome Report

     

    Fig:- Expanded View of Mochawesome Report

     

    Fig:- Mochawesome Report Settings

    Additional Possibilities of Using Cypress 

    There are several other things we can do using Cypress that we could not cover in this article, although we’ve covered the most important aspects of the tool.

    Here are some other usages of Cypress that we could not explore here:

    • Continuous integration and continuous deployment with Jenkins 
    • Behavior-driven development (BDD) using Cucumber
    • Automating applications with XHR
    • Test retries and retry-ability
    • Custom commands
    • Environment variables
    • Plugins
    • Visual testing
    • Slack integration 
    • Model-based testing
    • GraphQL API Testing 

    Limitations with Cypress

    Cypress is a great tool with a great community supporting it. Although it is still young, it is being continuously developed and is quickly catching up with the other full-stack automation tools on the market.

    So, before you decide to use Cypress, we would like to touch upon some of its limitations. These limitations are for version 5.2.0, the latest version of Cypress at the time of this article’s publishing.

    Here are the current limitations of using Cypress:

    • It can’t use two browsers at the same time.
    • It doesn’t provide support for multi-tabs.
    • It only supports the JavaScript language for creating test cases.
    • It doesn’t currently provide support for browsers like Safari and IE.
    • It has limited support for iFrames.

    Conclusion

    Cypress is a great tool with a growing feature-set. It makes setting up, writing, running, and debugging tests easy for QA automation engineers. It also has a quicker learning cycle with a good, baked-in execution environment.

    It is fully JavaScript/MochaJS-oriented, with specific new APIs to make scripting easier. It also provides a flexible test execution plan that can accommodate significant and unexpected changes.

    In this blog, we talked about how Cypress works functionally, performed end-to-end UI testing, and touched upon its limitations. We hope you learned more about using Cypress as a full-stack test automation tool.

    Related QA Articles

    1. Building a scalable API testing framework with Jest and SuperTest
    2. Automation testing with Nightwatch JS and Cucumber: Everything you need to know
    3. API testing using Postman and Newman
  • A Beginner’s Guide to Python Tornado

    The web is a big place now. We need to support thousands of clients at a time, and this is where Tornado comes in. Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed.

    Tornado uses non-blocking network I/O. Because of this, it can handle thousands of active server connections. It is a saviour for applications that use long polling and maintain a large number of active connections.

    Tornado is not like most Python frameworks. It is not based on WSGI, although it supports some WSGI features through the `tornado.wsgi` module. It uses an event-loop design that makes Tornado’s request execution faster.

    What is a Synchronous Program?

    A function blocks, performs its computation, and returns once done. A function may block for many reasons: network I/O, disk I/O, mutexes, etc.

    Application performance depends on how efficiently the application uses CPU cycles, which is why blocking statements/calls must be taken seriously. Consider password-hashing functions like bcrypt, which by design use hundreds of milliseconds of CPU time, far more than a typical network or disk access. Since the CPU is busy rather than idle during such a call, there is nothing to gain by making it asynchronous.

    A function can be blocking in one respect and non-blocking in others. In the context of Tornado, we generally mean blocking due to network and disk I/O, although all kinds of blocking need to be minimized.

    What is an Asynchronous Program?

    1) Single-threaded architecture:

        This means it can’t perform computation-centric tasks in parallel.

    2) I/O concurrency:

        It can hand over I/O tasks to the operating system and move on to the next task, achieving I/O concurrency.

    3) epoll/kqueue:

        Underlying OS-level constructs that let an application receive events on a file descriptor for I/O-specific tasks.

    4) Event loop:

        It uses epoll or kqueue to check whether any event has happened, and executes the callbacks waiting for those network events.
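    To make points 3 and 4 concrete, here is a minimal sketch using Python’s stdlib `selectors` module, which wraps epoll/kqueue: we register a callback for a file descriptor, and a tiny loop fires it when the descriptor becomes readable.

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on BSD/macOS
left, right = socket.socketpair()  # a connected pair to simulate network I/O

received = []

def on_readable(sock):
    # the "callback waiting for a network event"
    received.append(sock.recv(1024).decode())

# register interest in read events on `right`, with the callback as data
sel.register(right, selectors.EVENT_READ, on_readable)
left.sendall(b"hello")

# one turn of a toy event loop: wait for ready events, run their callbacks
for key, _ in sel.select(timeout=1):
    key.data(key.fileobj)

print(received)  # ['hello']

sel.unregister(right)
left.close()
right.close()
```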

    Asynchronous vs Synchronous Web Frameworks:

    In the synchronous model, each request or task is handed to a thread or routine, and when it finishes, the result is handed back to the caller. Managing things this way is easy, but creating new threads carries a lot of overhead.

    On the other hand, an asynchronous framework like Node.js uses a single-threaded model, so there is far less overhead, but it brings complexity.

    Imagine thousands of requests coming through to a server that uses an event loop and callbacks. Until a request is processed, the server has to efficiently store and manage the state of that request so it can map the callback result back to the actual client.
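    This bookkeeping can be sketched in a few lines: pending requests are keyed by an id so each callback result is mapped back to the right client, even when results arrive out of order (the names here are illustrative, not Tornado APIs):

```python
pending = {}  # request_id -> callback, the per-request state kept in flight

def submit(request_id, callback):
    pending[request_id] = callback

def on_result(request_id, result):
    # the event loop delivers a result; look up and fire the matching callback
    callback = pending.pop(request_id)
    callback(result)

results = []
submit(1, lambda r: results.append(("client-1", r)))
submit(2, lambda r: results.append(("client-2", r)))
on_result(2, "done-2")  # results may arrive out of order
on_result(1, "done-1")
print(results)  # [('client-2', 'done-2'), ('client-1', 'done-1')]
```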

    Node.js vs Tornado

    Most of these comparison points are tied to actual programming language and not the framework: 

    • Node.js has one big advantage: all of its libraries are async. In Python, there are lots of available packages, but very few of them are asynchronous.
    • As Node.js is a JavaScript runtime, and we can use JS for both the front end and the back end, developers can keep a single codebase and share the same utility libraries.
    • Google’s V8 engine makes Node.js faster than Tornado, but a lot of Python libraries are written in C and can be faster alternatives.

    A Simple ‘Hello World’ Example

    import tornado.ioloop
    import tornado.web
    
    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("Hello, world")
    
    def make_app():
        return tornado.web.Application([
            (r"/", MainHandler),
        ])
    
    if __name__ == "__main__":
        app = make_app()
        app.listen(8888)
        tornado.ioloop.IOLoop.current().start()

    Note: This example does not use any asynchronous feature.

    Using the AsyncHTTPClient module, we can make REST calls asynchronously.

    from tornado.httpclient import AsyncHTTPClient
    from tornado import gen
    
    @gen.coroutine
    def async_fetch_gen(url):
        http_client = AsyncHTTPClient()
        response = yield http_client.fetch(url)
        raise gen.Return(response.body)

    As you can see `yield http_client.fetch(url)` will run as a coroutine.

    Complex Example of Tornado Async

    Please have a look at Asynchronous Request handler.

    WebSockets Using Tornado:

    Tornado has a built-in package for WebSockets that can easily be used with coroutines to achieve concurrency. Here is an example:

    import logging
    import tornado.escape
    import tornado.ioloop
    import tornado.options
    import tornado.web
    import tornado.websocket
    from tornado.options import define, options
    from tornado.httpserver import HTTPServer
    
    define("port", default=8888, help="run on the given port", type=int)
    
    
    # queue_size = 1
    # producer_num_items = 5
    # q = queues.Queue(queue_size)
    
    def isPrime(num):
        """
        Simple worker but mostly IO/network call
        """
        if num > 1:
        for i in range(2, num // 2 + 1):
                if (num % i) == 0:
                    return ("is not a prime number")
            else:
                return("is a prime number")
        else:
            return ("is not a prime number")
    
    class Application(tornado.web.Application):
        def __init__(self):
            handlers = [(r"/chatsocket", TornadoWebSocket)]
            super(Application, self).__init__(handlers)
    
    class TornadoWebSocket(tornado.websocket.WebSocketHandler):
        clients = set()
    
        # enable cross domain origin
        def check_origin(self, origin):
            return True
    
        def open(self):
            TornadoWebSocket.clients.add(self)
    
        # when client closes connection
        def on_close(self):
            TornadoWebSocket.clients.remove(self)
    
        @classmethod
        def send_updates(cls, producer, result):
    
            for client in cls.clients:
    
                # check if result is mapped to correct sender
                if client == producer:
                    try:
                        client.write_message(result)
                    except:
                        logging.error("Error sending message", exc_info=True)
    
        def on_message(self, message):
            try:
                num = int(message)
            except ValueError:
                TornadoWebSocket.send_updates(self, "Invalid input")
                return
            TornadoWebSocket.send_updates(self, isPrime(num))
    
    def start_websockets():
        tornado.options.parse_command_line()
        app = Application()
        server = HTTPServer(app)
        server.listen(options.port)
        tornado.ioloop.IOLoop.current().start()
    
    
    
    if __name__ == "__main__":
        start_websockets()

    One can use a WebSocket client application to connect to the server; the message can be any integer. After processing, the client receives a result indicating whether the integer is prime or not.
    Here is one more example of the actual async features of Tornado. Many will find it similar to Golang’s goroutines and channels.

    In this example, we can start worker(s) that listen on a `tornado.queues.Queue`. This queue is asynchronous and very similar to the one in the asyncio package.

    # Example 1
    from tornado import gen, queues
    from tornado.ioloop import IOLoop
    
    @gen.coroutine
    def consumer(queue, num_expected):
        for _ in range(num_expected):
            # heavy I/O or network task
            print('got: %s' % (yield queue.get()))
    
    
    @gen.coroutine
    def producer(queue, num_items):
        for i in range(num_items):
            print('putting %s' % i)
            yield queue.put(i)
    
    @gen.coroutine
    def main():
        """
        Starts producer and consumer and wait till they finish
        """
        yield [producer(q, producer_num_items), consumer(q, producer_num_items)]
    
    queue_size = 1
    producer_num_items = 5
    q = queues.Queue(queue_size)
    
    results = IOLoop.current().run_sync(main)
    
    
    # Output:
    # putting 0
    # putting 1
    # got: 0
    # got: 1
    # putting 2
    # putting 3
    # putting 4
    # got: 2
    # got: 3
    # got: 4
    
    
    # Example 2
    # Condition
    # A condition allows one or more coroutines to wait until notified.
    from tornado import gen
    from tornado.ioloop import IOLoop
    from tornado.locks import Condition
    
    my_condition = Condition()
    
    @gen.coroutine
    def waiter():
        print("I'll wait right here")
        yield my_condition.wait()
        print("Received notification now doing my things")
    
    @gen.coroutine
    def notifier():
        yield gen.sleep(60)
        print("About to notify")
        my_condition.notify()
        print("Done notifying")
    
    @gen.coroutine
    def runner():
        # Wait for waiter() and notifier() in parallel
        yield([waiter(), notifier()])
    
    results = IOLoop.current().run_sync(runner)
    
    
    # output:
    
    # I'll wait right here
    # About to notify
    # Done notifying
    # Received notification now doing my things

    Conclusion

    1) Asynchronous frameworks are not of much use when most of the computation is CPU-centric rather than I/O-bound.

    2) Thanks to its single-thread-per-core model and event loop, Tornado can manage thousands of active client connections.

    3) Many say Django is too big, Flask is too small, and Tornado is just right :)