Beginnings are often messy. In almost any IT project, there comes a point where people look to revamp things, and revamping brings additional cost and time. Those costs grow steeply if concerns are not addressed early enough to keep up with customers' feature demands. Code organization, reusability, cleanup, and documentation are often left unattended at the start, only for teams to realize later that they hold the key to faster development and quick delivery of requested features as projects grow into platforms serving huge numbers of users.
In this blog post, we will look at how to set up a scalable frontend platform with React and Lerna within an hour. The goal is an organized, modular architecture that makes existing products easy to maintain and lets us deliver new modules quickly as they arrive.
Going Mono-Repo:
Most projects start with a single git/Bitbucket repository and descend into chaos over time. A mono-repo makes this more manageable. That said, we will use Lerna, npm, and YARN to initialize our monorepo.
First, install Lerna globally with npm install -g lerna. After this, we will have Lerna installed globally.
Initializing a project with Lerna.
mkdir startup_ui && cd startup_ui
npx lerna init
Let’s go through the files generated with it. So we have the following files:
– package.json
– lerna.json
– packages/
package.json is the same as any other npm package.json file. It specifies the name of the project and some basic stuff that we normally define like adding husky for pre-commit hooks.
lerna.json is a configuration file for configuring Lerna. You can find more about Lerna configuration and supported options at Lerna concepts.
packages/ is a directory where all our modules will be defined and Lerna will take care of referencing them in each other. Lerna will simply create symlinks of referenced local modules to make it available for other modules to use.
For better performance, we will go with YARN. So how do we configure YARN with Lerna?
It’s pretty simple as shown below.
We just need to add "npmClient": "yarn" to lerna.json.
Using YARN workspaces with Lerna
YARN workspaces are a quick way to get around the mess of `yarn link`, i.e. referencing one module from another. Lerna already provides this, so why use YARN workspaces?
The answer is excellent bootstrapping time provided by YARN workspaces.
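As a sketch, the two config files might look like the following after enabling workspaces (field values are illustrative; `useWorkspaces` tells Lerna to delegate linking to YARN, and the root package.json additionally needs `"private": true` and a matching `"workspaces": ["packages/*"]` entry):

```json
{
  "packages": ["packages/*"],
  "npmClient": "yarn",
  "useWorkspaces": true,
  "version": "0.0.0"
}
```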
This will take care of linking different modules in a UI platform which are mentioned in the packages folder.
Once this is done, we can proceed to bootstrap, which in other terms means explicitly telling Lerna to link the packages. As of now we have nothing under packages/, but we can run it to check that the setup bootstraps and works properly.
yarn install
lerna bootstrap
So that's the Lerna and YARN setup, but how should one really organize a UI package so it is built in a manageable and modular way?
What most React projects are made of
1. Component libraries
2. Modules
3. Utils libraries
4. Abstractions over React-Redux (optional, but most organizations go with it)
Components: Most organizations end up building their own component libraries and it is crucial to have it separated from the codebase for reusability of components. There exist a lot of libraries, but when you start building the product, you realize every library has something and misses something. Most commonly available libraries convey a standard UX design pattern. What if your designs don’t fit into those at a later point? So we need a separate components library where we can maintain organization-specific components.
Modules: At first you may have one module or product, but over time it will grow. To avoid breaking existing modules and to keep change lists small and limited to individual products, it's essential to split the monolith into multiple modules, containing the chaos of each module within it without impacting other stable modules.
Utils: These are common to any project. Almost every one of us ends up creating utils folders in projects to help with little functions like converting currency or converting large numbers like 100000 as 100K and many more. Most of these functions are common and specific to organizations. E.g. a company working with statistics is going to have a helper function to convert large numbers into human-readable figures and eventually they end up copying the same code. Keeping utils separated gives us a unique opportunity to avoid code duplication of such cases and keep consistency across different modules.
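A tiny util of this kind might look like the following sketch (the `humanizeNumber` name, thresholds, and suffixes are all illustrative, not from any specific library):

```javascript
// Hypothetical helper: convert large numbers into human-readable figures,
// e.g. 100000 -> "100K".
function humanizeNumber(value) {
  const units = [
    { limit: 1e9, suffix: 'B' },
    { limit: 1e6, suffix: 'M' },
    { limit: 1e3, suffix: 'K' },
  ];
  for (const { limit, suffix } of units) {
    if (Math.abs(value) >= limit) {
      // Trim a trailing ".0" so 100000 becomes "100K", not "100.0K"
      const scaled = (value / limit).toFixed(1).replace(/\.0$/, '');
      return scaled + suffix;
    }
  }
  return String(value);
}
```

Keeping such helpers in a shared utils package means every module formats numbers the same way.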
Abstractions over React-Redux: A lot of organizations prefer to do it. AWS, Microsoft Outlook, and many more have already adopted this strategy to abstract React & Redux bindings to create their own simplified functions to quickly bootstrap a new module/product into an existing ecosystem. This helps in faster delivery of new modules since developers don’t get into the same problems of bootstrapping and can focus on product problems rather than setting up the environment.
One of the most simplified approaches is presented at react-redux-patch to reduce boilerplate. We will not go into depth in this article since it’s a vast topic and a lot of people have their opinion on how this should be built.
Example:
We will use create-react-app & create-react-library to create a base for our libraries and modules.
Installing create-react-library globally.
yarn global add create-react-library
Creating a components library:
create-react-library takes away the pain of complex configurations and enables us to create components and utility libraries with ease.
cd packages
create-react-library ui-components
For starters, just create a simple button component in the library.
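A minimal button for such a library might look like this sketch (the prop names and class names are illustrative, not from an actual codebase):

```jsx
import React from 'react';

// A minimal organization-specific button; props shown here are assumptions.
const Button = ({ children, variant = 'primary', onClick, disabled = false }) => (
  <button
    className={`ui-btn ui-btn--${variant}`}
    onClick={onClick}
    disabled={disabled}
  >
    {children}
  </button>
);

export default Button;
```

create-react-library bundles this so other packages in the monorepo can import it from the ui-components package.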
Publishing packages to a private npm registry can be chaotic in a monorepo. Lerna provides a very simple approach to publishing packages.
You will just need to add the following to package.json
"publishConfig": {
  "registry": REGISTRY_URL_HERE
}
Now you can simply run
lerna publish
It will try to publish the packages that have changed since the last tagged release.
Summary:
With Lerna and YARN, we can create an efficient front-end architecture to quickly deliver new features with less impact on existing modules. With additional bootstrapping tools like a Yeoman generator, along with abstractions over React and Redux, introducing new modules becomes a piece of cake. Over time, you can easily split these modules and components into individual repositories by utilizing private npm registries. But for the initial chaos of getting things working, and for quick prototyping of your next big company's UI architecture, Lerna and YARN are perfectly suited tools!
Pusher is a hosted API service which makes adding real-time data and functionality to web and mobile applications seamless.
Pusher works as a real-time communication layer between the server and the client. It maintains persistent WebSocket connections to clients, so that whenever new data is added on your server, it can be pushed to clients instantly. It is highly flexible, scalable, and easy to integrate. Pusher exposes over 40 SDKs, supporting almost all tech stacks.
In the context of delivering real-time data, there are other hosted and self-hosted services available. The choice depends on what exactly you need: broadcasting data to all users, or something more complex with specific target groups. In our case, Pusher was well-suited; the decision was based on ease of use, scalability, private and public channels, webhooks, and event-based automation. Other options we considered were Socket.IO, Firebase, and Ably.
Pusher is categorically well-suited for communication and collaboration features using WebSockets. The key difference with Pusher is that it's a hosted service/API: it takes less work to get started compared to alternatives where you manage the deployment yourself. Scaling is handled once setup is done, which reduces future effort.
Some of the most common use cases of Pusher are:
1. Notifications: Pusher can inform users if there is a relevant change. A notification can also act as a form of signaling, where there is no representation of the notification in the UI, but it still triggers a reaction within the application.
2. Activity streams: Stream of activities which are published when something changes on the server or someone publishes it across all channels.
3. Live Data Visualizations: Pusher allows you to broadcast continuously changing data when needed.
4. Chats: You can use Pusher for peer to peer or peer to multichannel communication.
In this blog, we will be focusing on using Channels, which is an alias for Pub/Sub messaging API for a JavaScript-based application. Pusher also comes with Chatkit and Beams (Push Notification) SDK/APIs.
Chatkit is designed to make chat integration to your app as simple as possible. It allows you to add group chat and 1 to 1 chat feature to your app. It also allows you to add file attachments and online indicators.
Beams are used for adding Push Notification in your Mobile App. It includes SDKs to seamlessly manage push token and send notifications.
There are three types of channels in Pusher: Public, Private, and Presence.
Public channels: These channels are public in nature, so anyone who knows the channel name can subscribe to the channel and start receiving messages from the channel. Public channels are commonly used to broadcast general/public information, which does not contain any secure information or user-specific data.
Private channels: These channels have an access control mechanism that lets the server control who can subscribe to the channel and receive its data. All private channel names must carry the private- prefix. They are commonly used when the server needs to know who can subscribe to the channel and validate subscribers.
Presence channels: An extension of private channels. In addition to the properties private channels have, they let the server 'register' user information upon subscription, and enable members to see who else is online.
In your application, you can create a subscription and start listening to events on a channel:
// Here 'my-channel' is the channel name.
// All events published to this channel become available
// once you subscribe to the channel and start listening to it.
var channel = pusher.subscribe('my-channel');
channel.bind('my-event', function (data) {
  alert('An event was triggered with message: ' + data.message);
});
For creating channels, you can use the dashboard or integrate Pusher with your server. For details on server integration, see the Server API documentation. You need to create an app on your Pusher dashboard and can then use it to trigger events to your app.
or
Integrate Pusher with your server. Here is a sample snippet from our node App:
var Pusher = require('pusher');

var pusher = new Pusher({
  appId: 'APP_ID',
  key: 'APP_KEY',
  secret: 'APP_SECRET',
  cluster: 'APP_CLUSTER'
});

// Logic which will then trigger events to a channel
function trigger() {
  // ...
  pusher.trigger('my-channel', 'my-event', { "message": "hello world" });
  // ...
}
By default, anyone who knows your public app key can open a connection to your Channels app. This does not in itself add a security risk, as a connection can only access data by subscribing to channels.
For more advanced use cases, use the "Authorized Connections" feature. It authorizes every single connection to your Channels app and thereby avoids unwanted/unauthorized connections. To enable authorization, set up an auth endpoint, then modify your client code to look like this.
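With pusher-js, pointing the client at your auth endpoint is a configuration option; the sketch below assumes an endpoint path of /pusher/auth on your server:

```javascript
// Client-side: every subscription to a private- or presence- channel will
// first POST to this endpoint, which must return a valid auth signature.
var pusher = new Pusher('APP_KEY', {
  cluster: 'APP_CLUSTER',
  authEndpoint: '/pusher/auth'  // your server's auth route (assumed path)
});

var privateChannel = pusher.subscribe('private-my-channel');
```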
Pusher comes with a wide range of plans which you can subscribe to based on your usage, so you can scale as your application grows; for more details you can refer to Pusher's pricing page.
This article has covered a brief description of Pusher, its use cases, and how you can use it to build a scalable real-time application. The right tool varies with the use case, so there is no single correct choice. Pusher's approach is simple and API-based; it enables developers to add real-time functionality to any application in very little time.
If you want to get hands-on tutorials/blogs, please visit here.
This post addresses the need for code-splitting in React/Redux projects. When exploring ways to optimize an application, a common problem occurs with reducers. This article focuses on how to split reducers so they can be delivered in chunks.
What are the benefits of splitting reducers in chunks?
1) True code splitting is possible
2) A good architecture can be maintained by keeping page/component-level reducers isolated from the rest of the application, minimizing dependencies on other parts.
Why Do We Need to Split Reducers?
1. For fast page loads
Splitting reducers has the advantage of loading only the required part of the web application, which makes rendering of the main pages much more efficient.
2. Organization of code
Splitting reducers at page or component level gives better code organization than putting all reducers in one place. Since a reducer is loaded only when its page/component is loaded, pages remain standalone and independent of other parts of the application. This makes development seamless, since it avoids cross-references between reducers and the complexity they bring.
3. One page/component, one reducer
Things are better written, read, and understood when they are modular. Dynamic reducers make it possible to achieve this design pattern.
4. SEO
SEO is a vast topic, but rankings suffer badly when a website has huge response times, which happens when code is not split. With reducer-level code splitting, reducers can be split at component level, reducing the loading time of the website and thereby improving SEO rankings.
What Exists Today?
A little googling around the topic shows us some options; various approaches have been discussed here.
Dan Abramov’s answer is what we are following in this post and we will be writing a simple abstraction to have dynamic reducers but with more functionality.
A lot of solutions already exist, so why do we need to create our own? The answer is simple and straightforward:
1) The ease of use
Every library out there is lacking in some way: some have complex APIs, while others require too much boilerplate. We will target an API close to react-redux's.
2) Limitation to add reducers at top level only
This is a very common problem that a lot of existing libraries have as of today. That’s what we will be targeting to solve in this post. This opens new doors for possibilities to do code splitting on component level.
A quick recap of redux facts:
1) Redux gives us the following methods: "getState", "dispatch(action)", "subscribe(listener)", "replaceReducer(nextReducer)"
2) reducers are plain functions returning next state of application
3) “replaceReducer” requires the entire root reducer.
What Are We Going to Do?
We will be writing abstraction around “replaceReducer” to develop an API to allow us to inject a reducer at a given key dynamically.
A simple Redux store definition goes like the following:
Let’s simplify the store creation wrapper as:
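The original snippet isn't shown here, so below is a self-contained sketch of the idea. A minimal createStore stand-in replaces Redux so the example runs on its own, and nested keys like "home.grid" are left out for brevity; all names other than those in the text are assumptions:

```javascript
// Minimal stand-in for Redux's createStore, just so this sketch is self-contained.
function createBaseStore(rootReducer) {
  let currentReducer = rootReducer;
  let state = currentReducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch(action) {
      state = currentReducer(state, action);
      return action;
    },
    replaceReducer(nextReducer) {
      currentReducer = nextReducer;
      state = currentReducer(state, { type: '@@REPLACE' });
    },
  };
}

// Basic validity check for inputs to our abstraction layer.
function isValidReducer(fn) {
  return typeof fn === 'function';
}

// Wrapper that exposes store.attachReducer(key, reducer).
function createDynamicStore(initialRootReducer) {
  const asyncReducers = {};

  // Dynamic root reducer: run the static root reducer first,
  // then every dynamically attached slice reducer.
  function dynamicRootReducer(state, action) {
    let next = initialRootReducer(state, action);
    for (const key of Object.keys(asyncReducers)) {
      next = { ...next, [key]: asyncReducers[key](next[key], action) };
    }
    return next;
  }

  const store = createBaseStore(dynamicRootReducer);
  store.asyncReducers = asyncReducers;
  store.attachReducer = (key, reducer) => {
    if (!isValidReducer(reducer)) throw new Error(`Invalid reducer for "${key}"`);
    asyncReducers[key] = reducer;
    // replaceReducer needs the whole root reducer, so we hand it the same
    // dynamic wrapper, which now also sees the newly attached slice.
    store.replaceReducer(dynamicRootReducer);
  };
  return store;
}
```

With this wrapper, a code-split page can call store.attachReducer when its bundle loads, and its slice starts participating in every subsequent dispatch.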
What Does It Do?
"dynamicActionGenerator" and "isValidReducer" are helper functions to determine whether a given reducer is valid.
This is an essential check to ensure all inputs to our abstraction layer over createStore should be valid reducers.
“createStore” takes initial Root reducer, initial state and enhancers that will be applicable to created store.
In addition to that, we maintain "asyncReducers" and "attachReducer" on the store object.
"asyncReducers" keeps the mapping of dynamically added reducers.
"attachReducer" is partial in the above implementation; we will see the complete implementation below. Its basic use is to add a reducer from any part of the web application.
With that, our store object now looks as follows:
Now here is an interesting problem: replaceReducer requires a final root reducer function, which means we would have to recreate the root reducer every time. So we will create a dynamicRootReducer function to simplify the process.
So now our store object becomes as follows:
What does dynamicRootReducer do? 1) It processes the initial root reducer passed to it. 2) It executes the dynamic reducers to get the next state.
So we now have an API exposed as:

// Adds a dynamic reducer after the store has been created
store.attachReducer("home", (state = {}, action) => { return state; });

// Adds a dynamic reducer at a given nested key in the store
store.attachReducer("home.grid", (state = {}, action) => { return state; });
In this way, we can achieve code splitting with reducers, a very common problem in almost every react-redux application. With the above solution you can split code at page level or component level, and even create reusable stateful components that use Redux state. This simplified approach reduces application boilerplate. Moreover, common complex components like grids, or even whole pages like login, can be exported from one project and imported into another, making development faster than ever!
According to their site, “Gatsby is a free and open source framework based on React that helps developers build blazing fast websites and apps”. Gatsby allows the developers to make a site using React and work with any data source (CMSs, Markdown, etc) of their choice. And then at the build time it pulls the data from these sources and spits out a bunch of static files that are optimized by Gatsby for performance. Gatsby loads only the critical HTML, CSS and JavaScript so that the site loads as fast as possible. Once loaded, Gatsby prefetches resources for other pages so clicking around the site feels incredibly fast.
What Gatsby Tries to Achieve
Construct new, higher-level web building blocks: Gatsby is trying to build abstractions like gatsby-image, gatsby-link which will make web development easier by providing building blocks instead of making a new component for each project.
Create a cohesive “content mesh system”: The Content Management System (CMS) was developed to make the content sites possible. Traditionally, a CMS solution was a monolith application to store content, build sites and deliver them to users. But with time, the industry moved to using specialized tools to handle the key areas like search, analytics, payments, etc which have improved rapidly, while the quality of monolithic enterprise CMS applications like Adobe Experience Manager and Sitecore has stayed roughly the same.
To tackle this modular CMS architecture, Gatsby aims to build a “content mesh” – the infrastructure layer for a decoupled website. The content mesh stitches together content systems in a modern development environment while optimizing website delivery for performance. The content mesh empowers developers while preserving content creators’ workflows. It gives you access to best-of-breed services without the pain of manual integration.
Make building websites fun by making them simple: Each of the stakeholders in a website project should be able to see their creation quickly. Using these building blocks along with the content mesh, website building stays fun no matter how big the site gets. As Alan Kay said, "you get simplicity by finding slightly more sophisticated building blocks".
An example of this can be seen in the gatsby-image component. First, let's consider how a single image gets on a website:
1. A page is designed.
2. Specific images are chosen.
3. The images are resized (ideally with multiple thumbnails to fit different devices).
4. Finally, the image(s) are included in the HTML/CSS/JS (or React component) for the page.
gatsby-image is integrated into Gatsby’s data layer and uses its image processing capabilities along with graphql to query for differently sized and shaped images.
We also skip the complexity of lazy-loading images within placeholders, and generating right-sized image thumbnails is taken care of as well.
So instead of a long pipeline of tasks to set up optimized images for your site, the steps now are:
1. Install gatsby-image.
2. Decide what size of image you need.
3. Add your query and the gatsby-image component to your page.
4. And… that's it!
Now images are fun!
Build a better web: qualities like speed, security, maintainability, and SEO should be baked into the framework being used; when they are implemented on a per-site basis, they become a luxury. Gatsby bakes these qualities in by default so that the right thing is the easy thing. The highest-impact way to make the web better is to make it high-quality by default.
It is More Than Just a Static Site Generator
Gatsby is not just for creating static sites. It is fully capable of generating a PWA with everything we expect of a modern web app, including auth, dynamic interactions, data fetching, etc.
Gatsby does this by generating the static content using React DOM server-side APIs. Once this basic HTML is generated, React picks up where Gatsby left off. That means Gatsby renders as much as possible upfront, statically; then client-side React takes over, and we can do whatever a traditional React web app can do.
Best of Both Worlds
By generating static HTML and then handing over to client-side React for whatever it needs to do, Gatsby gives us the best of both worlds.
Statically rendered pages maximize SEO and provide better TTI and general web performance. Static sites are easy to distribute globally and easier to deploy.
Conclusion
Just because the code runs successfully in development mode (gatsby develop) doesn't mean there will be no issues with the built version. An easy solution is to build the code regularly and fix issues as they appear. That is easy enough when a build is generated after every change and build time is a couple of minutes. But if changes are frequent and a build is only created a few times a week or month, multiple issues pile up and have to be solved at build time.
If you have a very big site with a lot of styled components and libraries then the build time increases substantially. If the build takes half an hour to build then it is no longer feasible to run the build after every change which makes finding the build issues regularly complicated.
If you are coming from a robust framework, such as Angular or any other major full-stack framework, you have probably asked yourself why a popular library like React (yes, it’s not a framework, hence this blog) has the worst tooling and developer experience.
They've done the least amount of work possible to build this framework: no routing, no support for SSR, no decent design system, no CSS support. Some people might disagree; "The whole idea is to keep it simple so that people can bootstrap their own framework," as Dan Abramov puts it. However, here's the catch: most people don't want to go through the tedious process of setting up.
Many just want to install and start building some robust applications, and with the new release of Next.js (12), it’s more production-ready than your own setup can ever be.
Before we get started discussing what Next.js 12 can do for us, let’s get some facts straight:
React is indeed a library that could be used with or without JSX.
Next.js is a framework (Not entirely UI ) for building full-stack applications.
Next.js is opinionated, so if your plan is to do whatever you want or how you want, maybe Next isn’t the right thing for you (mind that it’s for production).
Although Next is one of the most actively updated code bases and has a massive community supporting it, a huge portion of it is handled by Vercel, and like other frameworks backed by a tech giant, be ready for occasional vendor lock-in (don't forget React [Meta]).
This is not a Next.js tutorial; rather, I will go over the features released with v12 that push Next over the inflection point where it can be considered the primary framework for React apps.
ES module support
ES modules bring a standardized module system to the entire JS ecosystem. They're supported by all major browsers and Node.js, enabling smaller bundle sizes. This lets you use any package from a URL with no installation or build step, use any CDN that serves ES modules, and tap into the design tools of the future (Framer already does it: https://www.framer.com/).
import Document from 'next/document';
import Head from 'next/head';
import Card from 'https://framer.com/m/Card-3Yxh.js@gsb1Gjlgc5HwfhuD1VId';

export default class MyDocument extends Document {
  render() {
    return (
      <>
        <Head>
          <title>URL imports for Next 12</title>
        </Head>
        <div>
          <Card variant='R3F' />
        </div>
      </>
    );
  }
}
As you can see, we are importing a Card component directly from the framer CDN on the go with all its perks. This would, in turn, be the start of seamless integration with all your developer environments in the not-too-distant future. If you want to learn more about URL imports and how to enable the alpha version, go here.
New engine for faster DEV run and production build:
Next.js 12 ships a new Rust compiler with native infrastructure, built on top of SWC, an open platform for fast tooling. It comes with the impressive stats of 3x faster local refresh and 5x faster production builds.
Contrary to most production builds with React using webpack, which come with a ton of overhead and don't really run natively, SWC is going to save you a ton of time in your mundane workloads.
Next.js Live:
If you are anything like me, you've probably had some changes you aren't really sure about and just want to go through with the designer, but you don't really wanna push the code to PROD. Taking a call with the designer and sharing your screen isn't really the best way to do it. If only there were a way to share your workflow on the go with your team, with collaboration features that don't take an entire day to set up. Well, Next.js Live lets you do just that.
With the help of the ES module system and native support for WebAssembly, Next.js Live runs entirely in the browser, irrespective of where you host it. The development engine behind it will soon be open source so that more platforms can take advantage of it, but for now it's all Next.js.
Middleware:
Middleware is made of repetitive pieces of code that you feel could run on their own, outside your actual backend, and the best part is that you don't really need to place it close to your backend. Before a request completes, you can rewrite, redirect, add headers, or even stream HTML. Depending on how you host your middleware (Vercel edge functions, or lambdas on AWS), it can potentially handle:
Authentication
Bot protection
Redirects
Browser support
Feature flags
A/B tests
Server-side analytics
Logging
And since this is part of the Next build output, you can technically use any hosting providers with an Edge network (No Vendor lock-in)
To implement middleware, create a _middleware file inside any pages folder; it will run before any request at that particular route:
pages/routeName/_middleware.ts
import type { NextFetchEvent } from 'next/server';
import { NextResponse } from 'next/server';

export function middleware(event: NextFetchEvent) {
  // Grab the user's location, or default to India
  const country = event.request.geo.country.toLowerCase() || 'IND';
  // Rewrite to a static, cached page for each locale
  return event.respondWith(NextResponse.rewrite(`/routeName/${country}`));
}
With this middleware in place, each per-country page can be cached and served quickly; since a rewrite does not change the URL on the client, Next.js can still serve the localized content (country flag and all) under the original URL.
Server-side streaming:
React 18 now supports a server-side Suspense API and SSR streaming. One big drawback of SSR was that the response could not be sent until the whole page was rendered, so any page that needed heavy lifting from the server could give you a higher FCP (first contentful paint). Streaming server-rendered pages over HTTP solves this higher render time; you can try the alpha version by opting in through Next's experimental config.
React server components allow us to render almost everything, including the components themselves, on the server. This is fundamentally different from SSR, where you are just generating HTML on the server; with server components, zero client-side JavaScript is needed, making the rendering process much faster (basically no hydration step). This can be seen as combining the best parts of server rendering with client-side interactivity.
As you can see in the above SSR example, while we are fetching the stories from the endpoint, our client is actually waiting for a response with a blank page, and depending upon how fast your APIs are, this is a pretty big problem—and the reason we don’t just use SSR blindly everywhere.
Now, let’s take a look at a server component example:
Any file ending with .server.js/.ts will be treated as a server component in your Next.js application.
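A sketch of what such a file might look like (file path, component names, and the data-fetching shape are all illustrative; the server components API was in alpha at the time of Next.js 12):

```jsx
// pages/home.server.js — rendered entirely on the server; no client JS shipped.
import Stories from '../components/Stories.server';

export default function Home() {
  return (
    <div>
      <h1>Latest stories</h1>
      {/* Streamed from the server as it renders, component by component */}
      <Stories />
    </div>
  );
}
```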
This implementation streams your components progressively, eventually showing your data as it is generated on the server, component by component. The difference is huge: it is the next level of code splitting, and it allows data fetching at the component level so you don't need to worry about making an API call from the browser.
And functions like getStaticProps and getServerSideProps will become a liability of the past.
This also aligns with the React Hooks model of decentralized components. It removes the choice we often need to make between static and dynamic, bringing the best of both worlds. In the future, incremental static regeneration will work at a per-component level, removing all-or-nothing page caching and allowing decisive, intelligent caching based on your needs.
Next.js is internally working on a data component, which is basically the React Suspense API but with surrogate keys, revalidation, and fallbacks; this will help realize these ideas by defining your caching semantics at the component level.
Conclusion:
Although all the features mentioned above are still in development, their inception alone will push the React world, and frontend in general, in a particular direction, and it's why you should keep Next.js as your default go-to production framework.
Being an avid Google Photos user, I really love some of its features, such as album, face search, and unlimited storage. However, when Google announced the end of unlimited storage on June 1st, 2021, I started thinking about how I could create a cheaper solution that would meet my photo backup requirement.
“Taking an image, freezing a moment, reveals how rich reality truly is.”
– Anonymous
Google offers 100 GB of storage for 130 INR, usable across various Google applications. However, I don't use all the space in one go. I snap photos randomly; sometimes I visit places and take random snaps with my DSLR and smartphone. In general, I upload approximately 200 photos monthly, each between 4MB and 30MB, so on average I may be using 4GB of storage per month. I also keep raw photos, even the bad ones, backed up on an external hard drive. Photos backed up on the cloud should be visually high-quality, and it's good to have a raw copy available at the same time, so that you can do some Lightroom edits (although I never touch them 😛). So, here is my minimal requirement:
Should support social authentication (Google sign-in preferred).
Photos should be stored securely in raw format.
Storage should be scaled with usage.
Uploading and downloading photos should be easy.
Web view for preview would be a plus.
Should have almost no operations headache and solution should be as cheap as possible 😉.
Selecting Tech Stack
To avoid operational headaches, with servers going down, scaling, or the application crashing, and for overall monitoring, I opted for a serverless solution on AWS. AWS S3 is infinitely scalable storage, and you only pay for the storage you use. On top of that, you can opt for an S3 storage class that is efficient and cost-effective.
Infrastructure Stack:
1. AWS API Gateway (HTTP API)
2. AWS Lambda (for processing images and API gateway queries)
3. DynamoDB (for storing image metadata)
4. AWS Cognito (for authentication)
5. AWS S3 Bucket (for storage and web application hosting)
6. AWS Certificate Manager (to use an SSL certificate for a custom domain with API gateway)
We will create three S3 buckets. Create one for hosting the frontend application (refer to the architecture diagram; more on this in the build and hosting part). The second one is for temporarily uploading images. The third one is for actual backup and storage (enable server-side encryption on this bucket). Images landing in the temporary upload bucket will be pre-processed before reaching the storage bucket.
During pre-processing, we will resize the original image into two different sizes. One is for thumbnail purposes (400px width), another one is for viewing purposes, but with reduced quality (webp format). Once images are resized, upload all three (raw, thumbnail, and webview) to the third S3 bucket and create a record in dynamodb. Set up object expiry policy on the temporary bucket for 1 day. This way, uploaded objects are automatically deleted from the temporary bucket.
Setup trigger on the temporary bucket for uploaded images:
We will need to set up an S3 PUT event, which will trigger our Lambda function to download and process images. We will filter the suffix jpg (and jpeg) for an event trigger, meaning that any file with extension .jpg and .jpeg uploaded to our temporary bucket will automatically invoke a lambda function with the event payload. The lambda function with the help of the event payload will download the uploaded file and perform processing. Your serverless function definition would look like:
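A minimal sketch of what such a serverless.yml function definition could look like (the function and bucket names here are placeholders, not from the original post):

```yaml
functions:
  processImage:
    handler: handler.processImage
    events:
      - s3:
          bucket: your-temp-upload-bucket
          event: s3:ObjectCreated:Put
          rules:
            - suffix: .jpg
          existing: true
      - s3:
          bucket: your-temp-upload-bucket
          event: s3:ObjectCreated:Put
          rules:
            - suffix: .jpeg
          existing: true
```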
Notice that in the YAML events section, we set "existing: true". This ensures that the bucket will not be created during the serverless deployment. However, if you do not plan to create your S3 bucket manually, you can let the framework create the bucket for you.
DynamoDB as metadatadb:
AWS DynamoDB is a key-value document database that suits our use case. DynamoDB will help us retrieve the list of photos as a time series. DynamoDB uses a primary key to uniquely identify each record. A primary key is composed of a hash key and an optional range key (also called a sort key). We will use the federated identity ID (discussed in Setup Authorization) as the hash key (partition key), with the attribute name username and type string. We will use an attribute named timestamp, with type number, as the range key. The range key lets us query results in time order (Unix epoch). We could also use DynamoDB secondary indexes to sort results more specifically; however, to keep the application simple, we're going to opt out of this feature for now. Your serverless resource definition would look like:
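Here is a hedged sketch of such a resource definition (the table and attribute names follow the description above; the table name itself is a placeholder):

```yaml
resources:
  Resources:
    PhotosTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: photos-metadata
        AttributeDefinitions:
          - AttributeName: username   # federated identity ID (hash key)
            AttributeType: S
          - AttributeName: timestamp  # Unix epoch (range key)
            AttributeType: N
        KeySchema:
          - AttributeName: username
            KeyType: HASH
          - AttributeName: timestamp
            KeyType: RANGE
        BillingMode: PAY_PER_REQUEST
```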
Finally, you also need to set up the IAM role so that the process image lambda function would have access to the S3 bucket and dynamodb. Here is the serverless definition for the IAM role.
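A minimal sketch of what that IAM role statement could look like in serverless.yml (bucket and table names are placeholders, and the exact actions you need may differ):

```yaml
provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject
        - s3:PutObject
      Resource:
        - arn:aws:s3:::your-temp-upload-bucket/*
        - arn:aws:s3:::your-photo-backup-bucket/*
    - Effect: Allow
      Action:
        - dynamodb:PutItem
        - dynamodb:Query
      Resource:
        - arn:aws:dynamodb:${self:provider.region}:*:table/photos-metadata
```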
Okay, to set up a Cognito user pool, head to the Cognito console and create a user pool with below config:
1. Pool Name: photobucket-users
2. How do you want your end-users to sign in?
Select: Email Address or Phone Number
Select: Allow Email Addresses
Check: (Recommended) Enable case insensitivity for username input
3. Which standard attributes are required?
email
4. Keep the defaults for “Policies”
5. MFA and Verification:
I opted to manually reset the password for each user (since this is an internal app)
Disabled user verification
6. Keep the default for Message Customizations, tags, and devices.
7. App Clients :
App client name: myappclient
Let the refresh token, access token, and id token be default
Check all “Auth flow configurations”
Check enable token revocation
8. Skip Triggers
9. Review and create the pool
Once created, go to App integration -> Domain name. Create a Cognito subdomain of your choice and note it down. Next, I plan to use the Google sign-in feature with Cognito Federated Identity Providers. Use this guide to set up a Google social identity with Cognito.
Setup Authorization:
Once the user identity is verified, we need to allow them to access the s3 bucket with limited permissions. Head to the Cognito console, select federated identities, and create a new identity pool. Follow these steps to configure:
1. Identity pool name: photobucket_auth
2. Keep Unauthenticated and Authentication flow settings unchecked.
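A sketch of the authenticated-role policy described below (the bucket name is a placeholder; the key detail is the identity-ID prefix in the resource path):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::your-photo-bucket/${cognito-identity.amazonaws.com:sub}/*"
    }
  ]
}
```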
${cognito-identity.amazonaws.com:sub} is a special AWS variable. When a user is authenticated with a federated identity, each user is assigned a unique identity. What the above policy means is that any user who is authenticated should have access to objects prefixed by their own identity ID. This is how we intend users to gain authorization in a limited area within the S3 bucket.
Copy the Identity Pool ID (from sample code section). You will need this in your backend to get the identity id of the authenticated user via JWT token.
Amplify configuration for the frontend UI sign-in:
This object helps you set up the minimal configuration for your application. This is all that we need to sign in via Cognito and access the S3 photo bucket.
const awsconfig = {
  Auth: {
    identityPoolId: "identity pool id created during authorization setup",
    region: "your aws region",
    identityPoolRegion: "same as above if cognito is in same region",
    userPoolId: "cognito user pool id created during authentication setup",
    userPoolWebClientId: "cognito app client id",
    cookieStorage: {
      domain: "https://your-app-domain-name", // this is very important
      secure: true
    },
    oauth: {
      domain: "{cognito domain name}.auth.{cognito region name}.amazoncognito.com",
      scope: ["profile", "email", "openid"],
      redirectSignIn: "https://your-app-domain-name",
      redirectSignOut: "https://your-app-domain-name",
      responseType: "token"
    }
  },
  Storage: {
    AWSS3: {
      bucket: "your-actual-bucket-name",
      region: "region-of-your-bucket"
    }
  }
};

export default awsconfig;
You can then use the below code to configure and sign in via social authentication.
import Amplify, { Auth } from 'aws-amplify';
import awsconfig from './aws-config';

Amplify.configure(awsconfig);

// Once Amplify is configured, you can use the call below with the onClick event
// of a button or any other visual component to sign in.
// Example:
<Button
  startIcon={<img alt="Sign in With Google" src={logo} />}
  fullWidth
  variant="outlined"
  color="primary"
  onClick={() => Auth.federatedSignIn({ provider: 'Google' })}
>
  Sign in with Google
</Button>
Gallery View:
When the application is loaded, we use the PhotoGallery component to load photos and view thumbnails on the page. The PhotoGallery component is a wrapper around the InfiniteScroller component, which keeps loading images as the user scrolls. The idea here is that we query a maximum of 10 images in one go. Our backend returns a list of 10 images (just the map and metadata pointing into the S3 bucket). We must load these images from the S3 bucket and then show thumbnails on-screen as a gallery view. When the user reaches the bottom of the screen or there is empty space left, the InfiniteScroller component loads 10 more images. This continues until our backend replies with a stop marker.
The key point here is that we need to send the JWT token as a header to our backend service via an Ajax call. The JWT token is obtained after sign-in from the Amplify framework. An example of obtaining a JWT token:
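A minimal sketch using the Amplify Auth API (the endpoint path here is a placeholder, not from the original code):

```javascript
import axios from "axios";
import { Auth } from "aws-amplify";

// After sign-in, Amplify caches the session; the ID token carries the JWT.
const session = await Auth.currentSession();
const jwtToken = session.getIdToken().getJwtToken();

// Send it as the Authorization header with every backend call:
const response = await axios.get("/api/photos", {
  headers: { Authorization: jwtToken },
});
```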
An example of infinite scroller component usage is given below. Note that "gallery" is an array of JSX photo thumbnails. The "loadMore" method calls our Ajax function against the server-side backend, updates the "gallery" variable, and sets the "hasMore" variable to true/false so that the infinite scroller component can stop querying when there are no photos left to display on the screen.
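A sketch of how this could look, assuming the widely used react-infinite-scroller package (the original post may use a different component):

```jsx
import InfiniteScroll from "react-infinite-scroller";

// "gallery" is the array of thumbnail JSX elements; "loadMore" fetches the
// next batch of 10 and updates "gallery"/"hasMore" as described above.
<InfiniteScroll
  pageStart={0}
  loadMore={loadMore}
  hasMore={hasMore}
  loader={<div key="loader">Loading…</div>}
>
  {gallery}
</InfiniteScroll>
```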
The Lightbox component gives a zoom effect to the thumbnail. When the thumbnail is clicked, a higher-resolution picture (the webp version) is downloaded from the S3 bucket and shown on the screen. We use the Storage object from the Amplify library. The downloaded content is a blob and must be converted into image data. To do so, we use the native JavaScript method createObjectURL. Below is sample code that downloads the object from the S3 bucket and then converts it into a viewable image for the HTML IMG tag.
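A hedged sketch of that download-and-convert step (the object key, access level, and element id are assumptions for illustration):

```javascript
import { Storage } from "aws-amplify";

// Download the webp version from S3; with download:true, Amplify returns
// the object body as a Blob instead of a pre-signed URL.
const result = await Storage.get(`webview/${photoKey}`, {
  download: true,
  level: "private",
});

// Turn the blob into something an <img> tag can render.
const imageUrl = URL.createObjectURL(result.Body);
document.getElementById("lightbox-img").src = imageUrl;
```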
The S3 SDK lets you generate a pre-signed POST URL. Anyone who gets this URL will be able to upload objects to the S3 bucket directly without needing credentials. Of course, we can actually set up some boundaries, like a max object size, key of the uploaded object, etc. Refer to this AWS blog for more on pre-signed URLs. Here is the sample code to generate a pre-signed URL.
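A sketch using the AWS SDK for JavaScript (v2) — bucket name, size limit, and function name are illustrative, not from the post:

```javascript
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ region: process.env.AWSREGION });

// Generate a pre-signed POST for one object, scoped to the user's identity ID.
function getPresignedPost(identityId, fileName) {
  const params = {
    Bucket: "your-temp-upload-bucket",
    Expires: 300, // URL valid for 5 minutes
    Fields: { key: `${identityId}/${fileName}` },
    Conditions: [
      ["content-length-range", 0, 30 * 1024 * 1024], // cap uploads at 30MB
    ],
  };
  return new Promise((resolve, reject) => {
    s3.createPresignedPost(params, (err, data) =>
      err ? reject(err) : resolve(data)
    );
  });
}
```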
For a better UX, we can allow our users to upload more than one photo at a time. However, a pre-signed URL lets you upload only a single object. To overcome this, we generate multiple pre-signed URLs. Initially, we send a request to our backend asking to upload photos with the expected keys. This request originates once the user selects photos to upload. Our backend then generates pre-signed URLs for us. Our frontend React app then provides the illusion that all photos are being uploaded as a whole.
When the upload is successful, the S3 PUT event is triggered, which we discussed earlier. The complete flow of the application is given in a sequence diagram. You can find the complete source code here in my GitHub repository.
React Build Steps and Hosting:
The ideal way to build the React app is to execute npm run build. However, we take a slightly different approach. We are not using an S3 static website for serving the frontend UI, for one reason: S3 static websites are non-SSL unless we use CloudFront. Therefore, we will make API Gateway our application's entry point. Thus, the UI will also be served from API Gateway. However, we want to reduce calls made to API Gateway. For this reason, we will only deliver the index.html file with the help of API Gateway/Lambda, and the rest of the static files (React supporting JS files) from the S3 bucket.
Your index.html should have all the reference paths pointed to the S3 bucket. The build must explicitly specify that static files are located somewhere other than relative to the index.html file. Your S3 bucket needs to be public with the right bucket policy and CORS set, so that end-users can only retrieve files and not upload nasty objects. Those who are confused about how an S3 static website and an S3 public bucket differ may refer to here. Below are the React build steps, bucket policy, and CORS.
PUBLIC_URL=https://{your-static-bucket-name}.s3.{aws_region}.amazonaws.com/ npm run build

// Bucket Policy
{
  "Version": "2012-10-17",
  "Id": "http referer from your domain only",
  "Statement": [
    {
      "Sid": "Allow get requests originating from",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::{your-static-bucket-name}/static/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": ["https://your-app-domain-name"]
        }
      }
    }
  ]
}

// CORS
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["https://your-app-domain-name"],
    "ExposeHeaders": []
  }
]
Once a build is complete, upload index.html to a lambda that serves your UI. Run the below shell commands to compress static contents and host them on our static S3 bucket.
#assuming you are in your react app directory
mkdir /tmp/s3uploads
cp -ar build/static /tmp/s3uploads/
cd /tmp/s3uploads
#add gzip encoding to all the files
gzip -9 `find ./ -type f`
#remove .gz extension from compressed files
for i in `find ./ -type f`
do
  mv $i ${i%.*}
done
#sync your files to s3 static bucket and mention that these files are compressed with gzip encoding
#so that browser will not treat them as regular files
aws s3 --region $AWSREGION sync . s3://${S3_STATIC_BUCKET}/static/ --content-encoding gzip --delete --sse
cd -
rm -rf /tmp/s3uploads
Our backend uses nodejs express framework. Since this is a serverless application, we need to wrap express with a serverless-http framework to work with lambda. Sample source code is given below, along with serverless framework resource definition. Notice that, except for the UI home endpoint ( “/” ), the rest of the API endpoints are authenticated with Cognito on the API gateway itself.
Lastly, we will set up a custom domain, so that we don't need to use the gibberish domain name generated by API Gateway, along with a certificate for our custom domain. You don't need to use Route 53 for this part. If you have an existing domain, you can create a subdomain and point it to API Gateway. First things first: head to the AWS ACM console and generate a certificate for the domain name. Once the request is generated, you need to validate your domain by creating a TXT record as per the ACM console. ACM is a free service. Domain verification may take a few minutes to several hours. Once you have the certificate ready, head back to the API Gateway console. Navigate to "custom domain names" and click create.
Enter your application domain name
Check TLS 1.2 as TLS version
Select Endpoint type as Regional
Select ACM certificate from dropdown list
Create domain name
Select the newly created custom domain. Note the API gateway domain name from Domain Details -> Configuration tab. You will need this to map a CNAME/ALIAS record with your DNS provider. Click on the API mappings tab. Click configure API mappings. From the dropdown, select your API gateway, select stage as default, and click save. You are done here.
Future Scope and Improvements :
To improve application latency, we can use CloudFront as a CDN. This way, our entry point could be S3, and we no longer need to use an API Gateway regional endpoint. We can also add AWS WAF as added security in front of our API Gateway to inspect incoming requests and payloads. We can also use DynamoDB secondary indexes so that we can efficiently search metadata in the table. A lifecycle rule can transition raw photos that have not been accessed for more than a year to the S3 Glacier storage class. You can further add a Glacier Deep Archive transition to save more on storage costs.
React Hooks were introduced in 2018, and ever since, numerous POCs have been built around them. Hooks come at a time when React has become the norm and class components are becoming increasingly complex. With this blog, I will showcase how Hooks can reduce the size of your code by up to 90%. Yes, you heard it right. Exciting, isn't it?
Hooks are a powerful upgrade coming with React 16.8 and utilize the functional programming paradigm. React, however, also acknowledges the volume of class components already built, and therefore, comes with backward compatibility. You can practice by refactoring a small chunk of your codebase to use React Hooks, while not impacting the existing functionality.
With this article, I will show you how Hooks can help you write cleaner, smaller, and more efficient code. Remember: 90%!
First, let’s list out the common problems we all face with React Components as they are today:
1. Huge Components – caused by logic distributed across lifecycle methods
2. Wrapper Hell – caused by re-using components
3. Confusing and hard to understand classes
In my opinion, these are symptoms of one big problem: React does not provide a stateful primitive that is simpler, smaller, and more lightweight than a class component. That is why solving one problem worsens another. For example, if we put all of the logic into a component to fix Wrapper Hell, we get a Huge Component that is hard to refactor. On the other hand, if we divide huge components into smaller reusable pieces, we get more nesting in the component tree, i.e., Wrapper Hell. In either case, there's always confusion around classes.
Let’s approach these problems one by one and solve them in isolation.
Huge Components –
We have all used lifecycle methods, and often, with time, they accumulate more and more stateful logic. It is also common for related stateful logic to be split across lifecycle methods. For example, consider code that adds an event listener in componentDidMount. The componentDidUpdate method might also contain some logic for setting up event listeners. Then the cleanup code is written in componentWillUnmount. See how the logic for one concern is scattered across these lifecycle methods.
// Class component
import React from "react";

export default class LazyLoader extends React.Component {
  constructor(props) {
    super(props);
    this.state = { data: [] };
  }

  loadMore = () => {
    // Load More Data
    console.log("loading data");
  };

  handleScroll = () => {
    if (!this.props.isLoading && this.props.isCompleted) {
      this.loadMore();
    }
  };

  componentDidMount() {
    this.loadMore();
    document.addEventListener("scroll", this.handleScroll, false);
    // more subscribers and event listeners
  }

  componentDidUpdate() {
    //
  }

  componentWillUnmount() {
    document.removeEventListener("scroll", this.handleScroll, false);
    // unsubscribe and remove listeners
  }

  render() {
    return <div>{this.state.data}</div>;
  }
}
React Hooks approach this with useEffect.
import React, { useEffect, useState } from "react";

export const LazyLoader = ({ isLoading, isCompleted }) => {
  const [data, setData] = useState([]);

  const loadMore = () => {
    // Load and setData here
  };

  const handleScroll = () => {
    if (!isLoading && isCompleted) {
      loadMore();
    }
  };

  // cDM and cWU
  useEffect(() => {
    document.addEventListener("scroll", handleScroll, false);
    // more subscribers and event listeners
    return () => {
      document.removeEventListener("scroll", handleScroll, false);
      // unsubscribe and remove listeners
    };
  }, []);

  // cDU
  useEffect(() => {
    //
  }, [/** dependencies */]);

  return data && <div>{data}</div>;
};
Now, let’s move the logic to a custom Hook.
import { useEffect, useState } from "react";

// isLoading and isCompleted must be passed in, since a custom hook
// has no props of its own.
export function useScroll({ isLoading, isCompleted }) {
  const [data, setData] = useState([]);

  const loadMore = () => {
    // Load and setData here
  };

  const handleScroll = () => {
    if (!isLoading && isCompleted) {
      loadMore();
    }
  };

  // cDM and cWU
  useEffect(() => {
    document.addEventListener("scroll", handleScroll, false);
    // more subscribers and event listeners
    return () => {
      document.removeEventListener("scroll", handleScroll, false);
      // unsubscribe and remove listeners
    };
  }, []);

  return data;
}
useEffect puts the code that changes together in one place, making the code more readable and easy to understand. You can also write multiple useEffects. The advantage of this is again to separate out the mutually unrelated code.
Wrapper Hell –
If you're well versed in React, you probably know it doesn't provide a pattern for attaching reusable behavior to a component (like "connect" in react-redux). React solves this data-sharing problem with the render props and higher-order component patterns. But using these requires restructuring your components, which is hard to follow and, at times, cumbersome. This typically leads to a problem called Wrapper Hell. You can confirm this by looking at your application in React DevTools: you'll see components wrapped by a number of providers, consumers, HOCs, and other abstractions. Because of this, React needed a better way of sharing logic. With Hooks, the same logic becomes a plain function call:
import React from "react";
import useMedia from "./hooks/useMedia";

function App() {
  let small = useMedia("(max-width: 480px)");
  let large = useMedia("(min-width: 1024px)");

  return (
    <div className="media">
      <h1>Media</h1>
      <p>{small ? "small screen" : "not a small screen"}</p>
      <p>{large ? "large screen" : "not a large screen"}</p>
    </div>
  );
}

export default App;
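For completeness, here is a minimal sketch of what a useMedia hook like the one imported above might look like; the actual implementation in ./hooks/useMedia may differ.

```jsx
import { useEffect, useState } from "react";

// Track whether a CSS media query currently matches.
export default function useMedia(query) {
  const [matches, setMatches] = useState(
    () => window.matchMedia(query).matches
  );

  useEffect(() => {
    const mediaQueryList = window.matchMedia(query);
    const listener = (event) => setMatches(event.matches);
    mediaQueryList.addEventListener("change", listener);
    // cleanup mirrors setup, all in one place
    return () => mediaQueryList.removeEventListener("change", listener);
  }, [query]);

  return matches;
}
```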
Hooks provide you with a way to extract a reusable stateful logic from a component without affecting the component hierarchy. This enables it to be tested independently.
Confusing and hard to understand classes
Classes pose more problems than they solve. We've known React for a very long time, and there's no denying that classes are hard for humans as well as for machines. They confuse both. Here's why:
For Humans –
1. There’s a fair amount of boilerplate when defining a class.
2. Beginners and even expert developers find it difficult to bind methods and write class components.
3. People often can't decide between functional and class components, since with time they might need state.
For Machines –
1. In the minified version of a component file, the method names are not minified and the unused methods are not stripped out, as it’s not possible to tell how all the methods fit together.
2. Classes make it difficult for React to implement hot loading reliably.
3. Classes encourage patterns that make it difficult for the compiler to optimize.
Due to the above problems, classes can be a large barrier to learning React. To keep React relevant, the community has been experimenting with component folding and Prepack, but classes make those optimizations fall back to a slower path. Hence, the community wanted an API that makes it more likely for code to stay on the optimizable path.
React components have always been closer to functions. And since Hooks introduced stateful logic into functional components, it lets you use more of React’s features without classes. Hooks embrace functions without compromising the practical spirit of React. Hooks don’t require you to learn complex functional and reactive programming techniques.
Conclusion –
React Hooks got me excited and I am learning new things every day. Hooks are a way to write far less code for the same usecase. Also, Hooks do not ask the developers who are already busy with shipping, to rewrite everything. You can redo small components with Hooks and slowly move to the complex components later.
The thinking process in Hooks is meant to be gradual. I hope this blog makes you want to get your hands dirty with Hooks. Do share your thoughts and experiences with Hooks. Finally, I would strongly recommend this official documentation which has great content.
In one of my previous blog posts (Hacking your way around AWS IAM Roles), we demonstrated how users can access AWS resources without having to store AWS credentials on disk. This was achieved by setting up an OpenVPN server and a client-side route that gets automatically pushed when the user connects to the VPN. To this date, I find this a hassle-free solution that doesn't force users to do any manual configuration on their systems. It also makes sense to have access to AWS resources only as long as they are connected to the VPN. One downside of this method is maintaining the OpenVPN server: keeping it secure and running in a highly available (HA) state. If the OpenVPN server is compromised, our credentials are at stake. Secondly, all users connected to the VPN get the same level of access.
In this blog post, we present a CLI utility written in Rust that writes temporary AWS credentials to a user profile (the ~/.aws/credentials file) using browser-based Google authentication. This utility is inspired by gimme-aws-creds (written in Python for an Okta-authenticated AWS farm) and the Heroku CLI (written in Node.js, utilizing the oclif framework). We will refer to our utility as auth-awscreds throughout this post.
“If you have an apple and I have an apple and we exchange these apples then you and I will still each have one apple. But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas.”
– George Bernard Shaw
What does this CLI utility (auth-awscreds) do?
When the user fires the command (auth-awscreds) on the terminal, our program reads the utility configuration from the file .auth-awscreds located in the user's home directory. If this file is not present, the utility prompts to set the configuration for the first time. The utility configuration file is in INI format. The program then opens the default web browser and navigates to the URL read from the configuration file. At this point, the utility waits for the browser to navigate and authorize. The web UI then redirects to Google authentication. If authentication is successful, a callback is shared with the CLI utility along with temporary AWS credentials, which are then written to the ~/.aws/credentials file.
Block Diagram
Tech Stack Used
As stated earlier, we wrote this utility in Rust. One reason for choosing Rust is that we wanted a statically linked binary (ELF) file that executes independent of an interpreter and ships as-is once compiled. Programs written in Python or Node.js, by contrast, need a language interpreter and supporting libraries installed. Golang would also have sufficed for our purpose, but I prefer Rust over Golang.
Our goal is, when the auth-awscreds command is fired, to first check whether the ~/.aws/credentials file exists in the user's home directory. If not, we create the ~/.aws directory. This is the default AWS credentials directory, where the AWS SDK usually looks for credentials (unless explicitly overridden by the env var AWS_SHARED_CREDENTIALS_FILE). The next step is to check whether a ~/.auth-awscreds file exists. If this file doesn't exist, we prompt the user for two inputs:
1. AWS credentials profile name (used by SDK, default is preferred)
2. Application domain URL (Our backend app domain is used for authentication)
let app_profile_file = format!("{}/.auth-awscreds", &user_home_dir);
let config_exist: bool = Path::new(&app_profile_file).exists();
let mut profile_name = String::new();
let mut app_domain = String::new();
if !config_exist {
    // ask the series of questions
    print!("Which profile to write AWS Credentials [default] : ");
    io::stdout().flush().unwrap();
    io::stdin()
        .read_line(&mut profile_name)
        .expect("Failed to read line");
    print!("App Domain : ");
    io::stdout().flush().unwrap();
    io::stdin()
        .read_line(&mut app_domain)
        .expect("Failed to read line");
    profile_name = String::from(profile_name.trim());
    app_domain = String::from(app_domain.trim());
    config_profile(&profile_name, &app_domain);
} else {
    (profile_name, app_domain) = read_profile();
}
These two properties are written to ~/.auth-awscreds under the default section. Following this, our utility generates a 1024-bit RSA public/private keypair. Both keys are base64-encoded.
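The resulting configuration file might look like this (the key names are assumptions for illustration, not taken from the actual source):

```ini
; ~/.auth-awscreds
[default]
profile_name = default
app_domain = https://your-app-domain-name
```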
We then launch a browser window and navigate to the specified app domain URL. At this stage, our utility starts a temporary web server with the help of the Actix Web framework and listens on 63442 port of localhost.
println!("Opening web ui for authentication...!");
open::that(&app_domain).unwrap();
HttpServer::new(move || {
    //let stopper = tx.clone();
    let cors = Cors::permissive();
    App::new()
        .wrap(cors)
        //.app_data(stopper)
        .app_data(crypto_data.clone())
        .service(get_public_key)
        .service(set_aws_creds)
})
.bind(("127.0.0.1", 63442))?
.run()
.await
The localhost web server has two endpoints.
1. GET Endpoint (/publickey): This endpoint is called by our React app after authentication and returns the public key created during the initialization process. Since the web server hosted by the Rust application is insecure (non-SSL), the actual AWS credentials, when received, should be posted as a string encrypted with this public key.
2. POST Endpoint (/setcreds): This endpoint is called when the React app has successfully retrieved credentials from API Gateway. Credentials are decrypted with the private key and then written to the ~/.aws/credentials file under the profile name defined in the utility configuration.
let encrypted_data = payload["data"].as_array().unwrap();
let username = payload["username"].as_str().unwrap();
let mut decypted_payload = vec![];
for str in encrypted_data.iter() {
    //println!("{}", str.to_string());
    let s = str.as_str().unwrap();
    let decrypted = decrypt_data(&private_key, &s.to_string());
    decypted_payload.extend_from_slice(&decrypted);
}
let credentials: serde_json::Value =
    serde_json::from_str(&String::from_utf8(decypted_payload).unwrap()).unwrap();
let aws_creds = AWSCreds {
    profile_name: String::from(profile_name),
    aws_access_key_id: String::from(credentials["AccessKeyId"].as_str().unwrap()),
    aws_secret_access_key: String::from(credentials["SecretAccessKey"].as_str().unwrap()),
    aws_session_token: String::from(credentials["SessionToken"].as_str().unwrap()),
};
println!("Authenticated as {}", username);
println!("Updating AWS Credentials File...!");
configcreds(&aws_creds);
One of the interesting parts of this code is the decryption loop, which iterates through an array of strings and joins the results via decypted_payload.extend_from_slice(&decrypted);. RSA-1024 produces 128-byte ciphertext blocks, and we use OAEP padding, which consumes 42 bytes, leaving the rest for encrypted data. Thus, at most 86 bytes of plaintext can be encrypted per block. So, the credentials arrive as an array of base64-encoded 128-byte blocks. One has to decode each base64 string into a data buffer and then decrypt the data piece by piece.
To generate a statically linked binary, run: cargo build --release
AWS Cognito and Google Authentication
This guide does not cover how to set up Cognito and integration with Google Authentication. You can refer to our old post for a detailed guide on setting up authentication and authorization. (Refer to the sections Setup Authentication and Setup Authorization).
React App:
The React app is launched via our Rust CLI utility. This application is served right from the S3 bucket via CloudFront. When our React app is loaded, it checks if the current session is authenticated. If not, then with the help of the AWS Amplify framework, our app is redirected to Cognito-hosted UI authentication, which in turn auto redirects to Google Login page.
Once the session is authenticated, we set the React state variables and then retrieve the public key from the Actix web server (the Rust CLI app, auth-awscreds) by calling the /publickey GET endpoint. Following this, an Ajax POST request (/auth-creds) is made via the axios library to API Gateway. The payload contains the public key and a JWT token for authentication. The expected response from API Gateway is the encrypted temporary AWS credentials, which are then proxied to our CLI application.
To ease this deployment, we have written Terraform code (available in the repository) that takes care of creating the S3 bucket, CloudFront distribution, and ACM certificate, building the React app, and deploying it to the S3 bucket. Navigate to the vars.tf file and change the respective default variables. The Terraform script will fail at first launch, since ACM needs DNS record validation. You can create a CNAME record for DNS validation and re-run the Terraform script to continue deployment. The React app expects a few environment variables. Below is a sample .env file; update the respective values for your environment.
Finally, deploy the React app using below sample commands.
$ terraform plan -out plan   # creates plan for revision
$ terraform apply plan       # apply plan and deploy
API Gateway HTTP API and Lambda Function
When a request is first intercepted by API Gateway, it validates the JWT token on its own; API Gateway natively supports Cognito integration. Thus, any payload with an invalid authorization header is rejected at API Gateway itself. This eases our authentication process and validates the identity. If the request is valid, it is then received by our Lambda function. Our Lambda function is written in Node.js, with the serverless-http framework wrapping an Express app. The Express app has only one endpoint.
/auth-creds (POST): once the request is received, it retrieves the identity ID from Cognito and logs it to stdout for audit purposes.
```javascript
// inside the /auth-creds handler
const { CognitoIdentity } = require("aws-sdk");

let identityParams = {
  IdentityPoolId: process.env.IDENTITY_POOL_ID,
  Logins: {},
};
identityParams.Logins[`${process.env.COGNITOIDP}`] = req.headers.authorization;

const ci = new CognitoIdentity({ region: process.env.AWSREGION });
let idpResponse = await ci.getId(identityParams).promise();
console.log("Auth Creds Request Received from ", JSON.stringify(idpResponse));
```
The app then extracts the base64-encoded public key. After that, a Security Token Service (STS) API call is made and temporary credentials are derived. These credentials are then encrypted with the public key in chunks of 86 bytes.
Here, assumeRole assumes the IAM role, which has the appropriate policy documents attached. For the sake of this demo, we attached the Administrator policy. However, you should consider hardening the policy document and avoid attaching the Administrator policy directly to the role.
We have used the Serverless framework to deploy the API. It creates the API Gateway, Lambda function, Lambda layer, and IAM role, and takes care of deploying the code to the Lambda function.
To deploy this application, follow the below steps.
1. cd layer/nodejs && npm install && cd ../.. && npm install
2. npm install -g serverless (on mac you can skip this step and use the npx serverless command instead)
3. Create a .env file, add the environment variables below to it, and set the respective values.
Assuming that you have the compiled binary (auth-awscreds) available on your local machine and, for the sake of testing, have installed `aws-cli`, you can then run /path/to/your/auth-awscreds.
App Testing
If you chose “demo-awscreds” as your AWS profile name, export the AWS_PROFILE environment variable accordingly. If you prefer the “default” profile, you don’t need to export the variable, as the AWS SDK selects the “default” profile on its own.
To validate, run “aws s3 ls.” You should see the S3 buckets of your AWS account listed. Note that these credentials are only valid for 60 minutes, so after that you will have to re-run the command to acquire a new pair of AWS credentials. Of course, you can configure your IAM role to extend the session expiry of the assumed role.
auth-awscreds in Action:
Summary
Currently, “auth-awscreds” is at an early stage of development. This post demonstrates how temporary AWS credentials can be acquired without having to worry about key rotation. One of the features we are currently working on is RBAC, with the help of AWS Cognito. Since the tool doesn’t yet support any command-line arguments, its configuration can’t be changed from the CLI; you can manually edit or delete the configuration file, which triggers the configuration prompt during the next run. We also want to add multiple profiles so that multiple AWS accounts can be used.
A typical request-response cycle works like this: the client sends a request to the server, and the server responds to that request. But there are a few use cases where we might need to send data from the server without a request, or where the client expects data that can arrive at an arbitrary time. There are a few mechanisms available to solve this problem.
Server Sent Events
Broadly, we can classify these as client-pull and server-push mechanisms. WebSockets is a bidirectional mechanism where data is transmitted over a full-duplex TCP connection. Client pull can be done using various mechanisms, such as:
Manual refresh – where the client is refreshed manually.
Long polling – where the client sends a request to the server and waits until a response is received; as soon as it gets the response, it sends a new request.
Short polling – where the client continuously sends requests to the server at definite, short intervals.
Server-sent events are a type of server-push mechanism: the client subscribes to a stream of updates generated by the server, and whenever a new event occurs, a notification is sent to the client.
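Under the hood, SSE is plain text over a long-lived HTTP response (Content-Type: text/event-stream): each event is a group of `field: value` lines ended by a blank line. The tiny parser below is a hypothetical illustration of that framing, not a library API:

```javascript
// Parse a raw SSE stream into an array of events. Each event is a block of
// "field: value" lines (e.g. "event", "data", "id"); a blank line ends it.
const parseSSE = (raw) =>
  raw
    .split("\n\n")                       // events are separated by blank lines
    .filter((block) => block.trim())
    .map((block) => {
      const event = {};
      for (const line of block.split("\n")) {
        const idx = line.indexOf(": ");
        if (idx === -1) continue;
        event[line.slice(0, idx)] = line.slice(idx + 2);
      }
      return event;
    });

const events = parseSSE("event: greeting\ndata: hello\n\nevent: update\ndata: 42\n\n");
```

In the browser you never parse this by hand; the built-in EventSource object does it for you and fires message events as each frame arrives.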
Why server-sent events are better than polling:
With polling, scaling and orchestration of the backend in real time need to be managed as users grow.
When mobile devices rapidly switch between WiFi and cellular networks or lose connections and the IP address changes, long polling needs to re-establish connections.
With long polling, we need to manage the message queue and catch up on missed messages.
Long polling needs to provide load balancing or failover support across multiple servers.
SSE vs Websockets
Unlike WebSockets, SSE cannot provide bidirectional client-server communication. Use cases that require such communication are real-time multiplayer games and messaging and chat apps. When there’s no need to send data from the client, SSE can be a better option than WebSockets; examples of such use cases are status updates, news feeds, and other automated data-push mechanisms. The backend implementation is also simpler with SSE than with WebSockets. Note, however, that browsers limit the number of open SSE connections per domain.
The server-side code for this can be implemented in any high-level language. Here is a sample for Python Flask SSE. flask_sse requires a broker such as Redis to store the messages. We are also using Flask-APScheduler to schedule background processes with Flask.
Here we need to install and import ‘flask_sse’ and ‘flask_apscheduler.’
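The following is a minimal sketch of the setup described above, not the repository's actual code: it assumes a local Redis broker for flask_sse and uses Flask-APScheduler to publish an event from a background job on an interval (the job name mirrors the `server_side_event` job visible in the logs).

```python
# Minimal sketch: flask_sse + Flask-APScheduler. Requires a running Redis
# broker at REDIS_URL; clients subscribe via a long-lived GET to /stream.
import datetime

from flask import Flask
from flask_sse import sse
from flask_apscheduler import APScheduler

app = Flask(__name__)
app.config["REDIS_URL"] = "redis://localhost:6379"  # broker for flask_sse
app.register_blueprint(sse, url_prefix="/stream")   # SSE endpoint

scheduler = APScheduler()
scheduler.init_app(app)
scheduler.start()

@scheduler.task("interval", id="server_side_event", seconds=15)
def server_side_event():
    # publish a message to every subscribed client
    with app.app_context():
        now = datetime.datetime.now()
        print("Event Scheduled at", now)
        sse.publish({"message": str(now)}, type="publish")

if __name__ == "__main__":
    app.run(debug=True)
```

A broker such as Redis is required because flask_sse fans messages out to subscribers through Redis pub/sub.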
```
api_1 | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 0, 24564))
api_1 | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 14, 30164))
api_1 | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 28, 37840))
...
api_1 | INFO:apscheduler.executors.default:Running job "server_side_event (trigger: interval[0:00:16], next run at: 2019-05-01 07:37:38 UTC)" (scheduled at 2019-05-01 07:37:22.362874+00:00)
api_1 | INFO:apscheduler.executors.default:Job "server_side_event (trigger: interval[0:00:16], next run at: 2019-05-01 07:37:38 UTC)" executed successfully
```
Use Cases of Server Sent Events
Let’s see the use case with an example. Suppose we have a real-time graph on our web app. One option is polling, where the client continuously polls the server for new data. The other option is to use server-sent events, which are asynchronous: the server sends data only when updates happen.
Other applications could be
Real time stock price analysis system
Real time social media feeds
Resource monitoring for health, uptime
Conclusion
In this blog, we have covered how to implement server-sent events using Python Flask and React, and how to use background schedulers with them. This can be used to implement data delivery from the server to the client using server push.
The data we render on a UI originates from different sources like databases, APIs, files, and more. In React applications, when the data is received, we first store it in state and then pass it to the other components in multiple ways for rendering.
But most of the time, the format of the data is inconvenient for the rendering component. So, we have to format data and perform some prior calculations before we give it to the rendering component.
Sending data directly to the rendering component and processing it inside that component is not recommended. Not only data processing but also heavy background jobs that we would otherwise delegate to the backend can now be done on the client side, since React applications can hold business logic on the front end.
A good practice is to create a separate function for processing that data which is isolated from the rendering logic, so that data processing and data representation will be done separately.
Why? There are two reasons:
– The processed data can be shared and used by other components, too.
– The main one: if the data processing is a time-consuming task, you will see some lag in the UI, or, in the worst case, the page may become unresponsive.
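As a hypothetical illustration of that separation (the names toChartPoints and Chart are invented for this sketch), the processing lives in a pure function and the component only receives ready-to-render data:

```javascript
// Data processing: a pure function, isolated from any rendering logic, that
// reshapes raw rows into the points a chart component expects.
const toChartPoints = (rows) =>
  rows.map((row) => ({ x: row.timestamp, y: Math.round(row.value * 100) }));

// The rendering component receives ready-to-render data via props and does
// no processing of its own (sketched here without JSX for brevity).
const Chart = ({ points }) => points.map((p) => `(${p.x}, ${p.y})`).join(" ");
```

Because toChartPoints is independent of the component, other components can reuse it, and it can later be moved off the main thread without touching the rendering code.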
As JavaScript is a single-threaded environment, it has only one call stack for executing scripts (put simply, you cannot run more than one script at the same time).
For example, suppose you have to do some DOM manipulation and, at the same time, want to do some complex calculations. You cannot perform these two operations in parallel. If the JavaScript engine is busy with a complex computation, then all other tasks, like event listeners and rendering callbacks, are blocked for that time, and the page may become unresponsive.
How can you solve this problem?
Though JavaScript is single-threaded, many developers mimic concurrency with the help of timer functions and event handlers: by breaking heavy (time-consuming) tasks into tiny chunks and using timers, you can split up their execution. Let’s take a look at the following example.
Here, the processDataArray function uses a timer to split the execution: via setTimeout, it processes some items of the array, then, after the dedicated time has passed, processes more items; once all array elements have been processed, it sends the result back via finishCallback.
```javascript
const processDataArray = (dataArray, finishCallback) => {
  // take a new copy of the array
  const todo = dataArray.concat();
  // to store each processed item's result
  let result = [];
  // timer function
  const timedProcessing = () => {
    const start = +new Date();
    do {
      // process each data item and store its result
      const singleResult = processSingleData(todo.shift());
      result.push(singleResult);
      // keep going while items remain and less than 50 ms have elapsed
    } while (todo.length > 0 && +new Date() - start < 50);
    // check for remaining items to process
    if (todo.length > 0) {
      setTimeout(timedProcessing, 25);
    } else {
      // finished with all the items, invoke the finish callback
      finishCallback(result);
    }
  };
  setTimeout(timedProcessing, 25);
};

const processSingleData = (data) => {
  // process data
  return processedData;
};
```
You can find more about how JavaScript timers work internally here.
The problem is not fully solved, though: the main thread is still busy with the computation, so you can see delays in UI events like button clicks or mouse scrolls. That is a bad user experience when a big array computation is running and the web user is impatient.
The better and real multithreading way to solve this problem and to run multiple scripts in parallel is by using Web Workers.
What are Web Workers?
Web Workers provide a mechanism to run a script in the background, where you can do any kind of computation without disturbing the UI. Web Workers run outside the context of the HTML document’s scripts, which allows JavaScript programs to execute concurrently; with Web Workers, you get genuinely multithreaded behavior.
Communication between the page (main thread) and the worker happens through a simple mechanism: each side sends messages with the postMessage method and receives them in an onmessage callback. Let’s take a look at a simple example:
In this example, we will delegate the work of multiplying all the numbers in an array to a Web Worker, and the Web Worker returns the result back to the main thread.
import"./App.css";import { useEffect, useState } from"react";functionApp() {// This will load and execute the worker.js script in the background.const [webworker] =useState(new window.Worker("worker.js"));const [result, setResult] =useState("Calculating....");useEffect(() => {constmessage= { multiply: { array: newArray(1000).fill(2) } }; webworker.postMessage(message); webworker.onerror= () => {setResult("Error"); }; webworker.onmessage= (e) => {if (e.data) {setResult(e.data.result); } else {setResult("Error"); } }; }, []);useEffect(() => {return () => { webworker.terminate(); }; }, []);return ( <divclassName="App"> <h1>Webworker Example In React</h1> <headerclassName="App-header"> <h1>Multiplication Of large array</h1> <h2>Result: {result}</h2> </header> </div> );}exportdefault App;
```javascript
// worker.js
onmessage = (e) => {
  const { multiply } = e.data;
  // check the data is correctly framed
  if (multiply && multiply.array.length) {
    // intentionally delay the execution
    setTimeout(() => {
      // post the result back to the page
      postMessage({
        result: multiply.array.reduce(
          (firstItem, secondItem) => firstItem * secondItem
        ),
      });
    }, 2000);
  } else {
    postMessage({ result: 0 });
  }
};
```
If the worker script throws an exception, you can handle it by attaching a callback function to the onerror property of the worker in the App.js script.
From the main thread, you can terminate the worker immediately using the worker’s terminate method. A terminated worker cannot be reused; you need to create another instance if needed.
– Charting middleware – Suppose you have to design a dashboard that presents business-engagement analytics for a retention application by means of a pivot table, pie charts, and bar charts. Converting the data into the format each table and chart expects involves heavy processing, which may cause the UI to fail to update, freeze, or become unresponsive because of JavaScript’s single-threaded behavior. Here, we can delegate the processing logic to a Web Worker, so that the main thread is always available to handle other UI events.
– Emulating Excel functionality – For example, if you have thousands of rows in a spreadsheet and each of them needs some long-running calculation, you can write custom functions containing the processing logic and put them in the Web Worker’s script.
– Real-time text analyzer – This is another good example: a Web Worker can compute the word count, character count, repeated-word count, etc., by analyzing the text typed by the user in real time. With a traditional implementation you may experience performance issues as the text grows, but this can be optimized with Web Workers.
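The analysis itself is a pure function, so it can move into a worker unchanged. A hypothetical sketch of such an analyzer (the name analyzeText and the counting rules are invented for this illustration):

```javascript
// Hypothetical text-analysis step that a worker could run off the main thread.
const analyzeText = (text) => {
  // naive tokenizer: lowercase runs of letters/apostrophes count as words
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  const counts = {};
  for (const word of words) {
    counts[word] = (counts[word] || 0) + 1;
  }
  return {
    characters: text.length,
    words: words.length,
    // distinct words that appear more than once
    repeated: Object.keys(counts).filter((w) => counts[w] > 1).length,
  };
};

// In a worker script this would be wired up as:
// onmessage = (e) => postMessage(analyzeText(e.data.text));
```

The main thread posts the current text on each input event and updates the UI whenever the worker posts a fresh report back.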
Web Worker limitations:
Yes, Web Workers are powerful and quite simple to use, but since a Web Worker runs in a separate thread, it does not have access to the window object, the document object, or the parent object. We also cannot pass functions through postMessage.
Web Workers make our lives easier by doing jobs in parallel in the background, but they are relatively heavyweight, with a high startup performance cost and a high per-instance memory cost, so, per the WHATWG specification, they are not intended to be used in large numbers.