Take a look at these two JavaScript code snippets. They look nearly identical — but do they behave the same?
Snippet 1 (without semicolon):
```javascript
const promise1 = new Promise((resolve, reject) => {
  resolve('printing content of promise1');
})

(async () => {
  const res = await promise1;
  console.log('logging result ->', res);
})();
```
Snippet 2 (with semicolon):
```javascript
const promise1 = new Promise((resolve, reject) => {
  resolve('printing content of promise1');
});

(async () => {
  const res = await promise1;
  console.log('logging result ->', res);
})();
```
What Happens When You Run Them?
❌ Snippet 1 Output:
TypeError: (intermediate value) is not a function
✅ Snippet 2 Output:
logging result -> printing content of promise1
Why Does a Single Semicolon Make Such a Big Difference?
We’ve always heard that semicolons are optional in JavaScript. So why does omitting just one lead to a runtime error here?
Let’s investigate.
What’s Really Going On?
The issue boils down to JavaScript’s Automatic Semicolon Insertion (ASI).
When you omit a semicolon, JavaScript tries to infer where it should end your statements. Usually, it does a decent job. But it’s not perfect.
In the first snippet, JavaScript parses this like so:
const promise1 = new Promise(…)(async () => { … })();
Here, it thinks you are calling the result of new Promise(…) as a function, which is not valid — hence the TypeError.
But Wait, Aren’t Semicolons Optional in JavaScript?
They are — until they’re not.
Here’s the trap:
If a new line starts with:
(
[
+ or -
/ (as in regex)
JavaScript might interpret it as part of the previous expression.
That’s what’s happening here. The async IIFE starts with (, so JavaScript assumes it continues the previous line unless you forcefully break it with a semicolon.
Key Takeaways:
ASI is not foolproof and can lead to surprising bugs.
A semicolon before an IIFE ensures it is not misinterpreted as part of the preceding line.
This is especially important when using modern JavaScript features like async/await, arrow functions, and top-level code.
Why You Should Use Semicolons Consistently
Even though many style guides (like those from Prettier or StandardJS) allow you to skip semicolons, using them consistently provides:
✅ Clarity
You eliminate ambiguity and make your code more readable and predictable.
✅ Fewer Bugs
You avoid hidden edge cases like this one, which are hard to debug — especially in production code.
✅ Compatibility
Not all environments handle ASI equally. Tools like Babel, TypeScript, or older browsers might behave differently.
Conclusion
The difference between working and broken code here is one semicolon. JavaScript’s ASI mechanism is helpful, but it can fail — especially when lines begin with characters like ( or [.
If you’re writing clean, modular, modern JavaScript, consider adding that semicolon. It’s a tiny keystroke that saves a lot of headaches.
Happy coding — and remember, when in doubt, punctuate!
When React got introduced, it had an edge over other libraries and frameworks present in that era because of a very interesting concept called one-way data binding or in simpler words uni-directional flow of data introduced as a part of Virtual DOM.
It made for a fantastic developer experience where one didn’t have to think about how the updates flow in the UI when data ("state" to be more technical) changes.
However, as more and more hooks got introduced, syntactical rules were added to make sure they perform in the most optimal way. Essentially, this was a deviation from the original purpose of React, which is a unidirectional flow with explicit mutations.
To call out a few:
Filling out the dependency arrays correctly
Memoizing the right values or callbacks for rendering optimization
Consciously avoiding prop drilling
And possibly a few more that, if done the wrong way, could cause serious performance issues, i.e., everything just re-renders. This is a slight deviation from the original purpose of just writing components to build UIs.
The use of signals is a good example of how adopting Reactive programming primitives can help remove all this complexity and help improve developer experience by shifting focus on the right things without having to explicitly follow a set of syntactical rules for gaining performance.
What Is a Signal?
A signal is one of the key primitives of Reactive programming. Syntactically, signals are very similar to state in React. However, the reactive capabilities of a signal are what give it the edge.
```javascript
const [state, setState] = useState(0);        // state  -> value,  setState  -> setter
const [signal, setSignal] = createSignal(0);  // signal -> getter, setSignal -> setter
```
At this point, they look pretty much the same—except that useState returns a value and createSignal returns a getter function.
How is a signal better than a state?
Once useState returns a value, the library generally doesn’t concern itself with how the value is used. It’s the developer who has to decide where to use that value, and who has to explicitly make sure that any effects, memos, or callbacks that want to subscribe to changes to that value have it mentioned in their dependency lists, in addition to memoizing it to avoid unnecessary re-renders. A lot of additional effort.
```javascript
function ParentComponent() {
  const [state, setState] = useState(0);

  // Explicitly memoize and make sure dependencies are accurate
  const stateVal = useMemo(() => {
    return doSomeExpensiveStateCalculation(state);
  }, [state]);

  // Explicitly call out the subscription to state
  useEffect(() => {
    sendDataToServer(state);
  }, [state]);

  return (
    <div>
      <ChildComponent stateVal={stateVal} />
    </div>
  );
}
```
createSignal, however, returns a getter function, since signals are reactive in nature. To break it down further, signals keep track of who is interested in the state’s changes, and when changes occur, they notify these subscribers.
To gain this subscriber information, signals keep track of the context in which these state getters, which are essentially functions, are called. Invoking the getter creates a subscription.
This is super helpful as the library is now, by itself, taking care of the subscribers who are subscribing to the state’s changes and notifying them without the developer having to explicitly call it out.
```javascript
// The effect only runs when `state` changes - an automatic subscription
createEffect(() => {
  updateDataElsewhere(state());
});
```
The contexts (not to be confused with the React Context API) that invoke the getter are the only ones the library will notify, which means memoizing, explicitly filling out large dependency arrays, and fixing unnecessary re-renders can all be avoided. This removes the need for many additional hooks meant for this purpose, such as useRef, useCallback, and useMemo, and eliminates a lot of re-renders.
This greatly enhances the developer experience and shifts focus back on building components for the UI rather than spending that extra 10% of developer efforts in abiding by strict syntactical rules for performance optimization.
```javascript
function ParentComponent() {
  const [state, setState] = createSignal(0);

  // No need to memoize explicitly
  const stateVal = doSomeExpensiveStateCalculation(state());

  // Fires only when state changes - the effect is automatically added as a subscriber
  createEffect(() => {
    sendDataToServer(state());
  });

  return (
    <div>
      <ChildComponent stateVal={stateVal} />
    </div>
  );
}
```
Conclusion
It might look like there’s a very biased stance toward using signals and reactive programming in general. However, that’s not the case.
React is a high-performance, optimized library—even though there are some gaps or misses in using your state in an optimum way, which leads to unnecessary re-renders, it’s still really fast. After years of using React a certain way, frontend developers are used to visualizing a certain flow of data and re-rendering, and replacing that entirely with a reactive programming mindset is not natural. React is still the de facto choice for building user interfaces, and it will continue to be with every iteration and new feature added.
Reactive programming, in addition to performance enhancements, also makes the developer experience much simpler by boiling everything down to three major primitives: signals, memos, and effects. This helps you focus more on building components for UIs rather than worrying about dealing explicitly with performance optimization.
Signals are increasingly popular and are part of many modern web frameworks, such as Solid.js, Preact, Qwik, and Vue.js.
“Hope this email finds you well” is how 2020-2021 has been in a nutshell. Since we’ve all been working remotely since last year, actively collaborating with teammates became one notch harder, from activities like brainstorming a topic on a whiteboard to building documentation.
Having tools powered by collaborative systems has become a necessity. To explore this, following the principle of “build fast, fail fast”, I started building a collaborative editor using existing open-source tools, which can eventually be extended for needs across different projects.
Conflicts, as they say, are inevitable when multiple users work on the same document and constantly modify it, especially if it’s the same block of content. Ultimately, the end-user experience is defined by how such conflicts are resolved.
There are various conflict resolution mechanisms, but two of the most commonly discussed ones are Operational Transformation (OT) and Conflict-Free Replicated Data Type (CRDT). So, let’s briefly talk about those first.
Operational Transformation
The order of operations matters in OT. Each user has their own local copy of the document, and mutations are atomic, such as insert V at index 4 or delete X at index 2. If the order of these operations changes, the end result will be different, which is why all operations are synchronized through a central server. The central server can alter the indices of operations before forwarding them to the clients. For example, if User2 performs a delete(0) operation but the OT server sees that User1 has concurrently made an insert at index 0, User2’s operation needs to be changed to delete(1) before being applied to User1’s copy.
OT with a central server is typically easier to implement. Plain-text OT in its basic form has only three defined operations: insert, delete, and apply.
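The insert/delete transformation from the example above can be sketched in a few lines. This is a toy illustration, not a complete OT implementation (real systems also handle ties, deletes against deletes, and rich-text attributes):

```javascript
// Transform a delete against a concurrent insert: if the insert happened
// at or before the delete's index, the delete must shift right by the
// length of the inserted text.
function transformDelete(deleteOp, concurrentInsert) {
  if (concurrentInsert.index <= deleteOp.index) {
    return { type: 'delete', index: deleteOp.index + concurrentInsert.text.length };
  }
  return deleteOp; // insert happened after the deleted position; no shift needed
}

// User2 wants delete(0), but User1 concurrently ran insert(0, 'A'):
const transformed = transformDelete(
  { type: 'delete', index: 0 },
  { type: 'insert', index: 0, text: 'A' }
);
console.log(transformed); // { type: 'delete', index: 1 }
```

This is exactly the job of the central OT server: rewrite incoming operations against the ones it has already accepted before broadcasting them.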
“Fully distributed OT and adding rich text operations are very hard, and that’s why there’s a million papers.”
CRDT
Instead of performing operations directly on characters like in OT, CRDT uses a complex data structure to which it can then add/update/remove properties to signify transformation, enabling scope for commutativity and idempotency. CRDTs guarantee eventual consistency.
There are different algorithms, but in general, CRDT has two requirements: globally unique characters and globally ordered characters. Basically, each object gets a globally unique reference instead of a positional index, and the ordering is based on the neighboring objects. Fractional indices can be used to assign an index to an object.
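The fractional-index idea can be sketched in a few lines. This is a toy illustration; real CRDTs also attach a unique site/client ID to each position to break ties between concurrent inserts, which is omitted here:

```javascript
// Each character gets a position strictly between its neighbors,
// so inserting never shifts the indices of existing characters.
function positionBetween(left, right) {
  return (left + right) / 2;
}

// Start with 'A' at 1.0 and 'C' at 2.0; insert 'B' between them.
const doc = [
  { char: 'A', pos: 1.0 },
  { char: 'C', pos: 2.0 },
];
doc.push({ char: 'B', pos: positionBetween(1.0, 2.0) }); // pos 1.5
doc.sort((a, b) => a.pos - b.pos);
console.log(doc.map((o) => o.char).join('')); // 'ABC'
```

Because 'B' is identified by its own position rather than by "index 1", concurrent edits elsewhere in the document never invalidate it.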
As all objects have their own unique reference, the delete operation becomes idempotent. Assigning fractional indices is one way to give objects unique references on insertion and update.
There are two types of CRDT: one is state-based, where the whole state (or a delta) is shared between instances and merged continuously; the other is operation-based, where only individual operations are sent between replicas. If you want to dive deeper into CRDTs, here’s a nice resource.
For our purposes, we chose CRDT since it can also support peer-to-peer networks. If you want to jump straight to the code, you can visit the repo here.
Tools used for this project:
As our goal was for a quick implementation, we targeted off-the-shelf tools for editor and backend to manage collaborative operations.
Quill.js is an API-driven WYSIWYG rich text editor built for compatibility and extensibility. We chose Quill as our editor because of how easily it plugs into an application and the availability of extensions.
Yjs is a framework that provides shared editing capabilities by exposing its different shared data types (Array, Map, Text, etc.) that are synced automatically. It’s also network agnostic, so changes are synced whenever a client comes online. We used it because it’s a CRDT implementation, and it conveniently had readily available bindings for Quill.js.
Prerequisites:
To keep it simple, we’ll set up a client and server both in the same code base. Initialize a project with npm init and install the below dependencies:
```shell
npm i quill quill-cursors webpack webpack-cli webpack-dev-server y-quill y-websocket yjs
```
Quill is the WYSIWYG rich text editor we will use.
quill-cursors is an extension that helps us to display cursors of other connected clients to the same editor room.
webpack, webpack-cli, and webpack-dev-server are developer utilities; webpack is the bundler that creates a deployable bundle for your application.
The y-quill module provides bindings between Yjs and Quill.js using the shared type Y.Text. For more information, you can check out the module’s source on GitHub.
y-websocket provides a WebsocketProvider to communicate with a Yjs server in a client-server manner to exchange awareness information and data.
Yjs is the CRDT framework that orchestrates conflict resolution between multiple clients.
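Next comes the webpack config. The following is a minimal sketch; the exact entry path, output names, and dev-server port are assumptions, not the project's exact values:

```javascript
// webpack.config.js - a minimal sketch (paths and port are illustrative).
const path = require('path');

module.exports = {
  mode: 'development',
  entry: './index.js',            // starting point of the frontend project
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',        // referenced from index.html
  },
  devServer: {
    static: path.resolve(__dirname, './'),
    port: 8080,                   // runs on `npm start`
  },
};
```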
This is a basic webpack config in which we specify the starting point of our frontend project, i.e., the index.js file. Webpack uses that file to build your project’s internal dependency graph. The output property defines where and how the generated bundles should be saved, and the devServer config defines the parameters for the local dev server, which runs when you execute `npm start`.
We’ll first create an index.html file to define the basic skeleton:
The index.html has a pretty basic structure. In <head>, we’ve provided the path of the bundled JS file that webpack will create, and the CSS theme for the Quill editor. In <body>, we’ve just created a button to connect/disconnect from the backend and a placeholder div where the Quill editor will be mounted.
Here, we’ve just made the imports, registered quill-cursors extension, and added an event listener for window load:
```javascript
import Quill from 'quill';
import * as Y from 'yjs';
import { QuillBinding } from 'y-quill';
import { WebsocketProvider } from 'y-websocket';
import QuillCursors from 'quill-cursors';

// Register the QuillCursors module to show multiple cursors on the editor.
Quill.register('modules/cursors', QuillCursors);

window.addEventListener('load', () => {
  // We'll add more blocks as we continue
});
```
Let’s initialize the Yjs document, socket provider, and load the document:
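The original snippet is not reproduced here, so the following is a rough sketch of this step (placed inside the load listener from above). The websocket URL, room name, and element IDs are assumptions for illustration; point them at your own y-websocket server and markup:

```javascript
// A sketch of wiring everything together (URL, room name, and IDs are assumptions).
const ydoc = new Y.Doc();                       // the shared Yjs document
const provider = new WebsocketProvider(
  'ws://localhost:1234',                        // y-websocket server
  'collab-demo-room',                           // clients in the same room share state
  ydoc
);
const ytext = ydoc.getText('quill');            // shared text type backing the editor

const editor = new Quill('#editor', {
  theme: 'snow',
  modules: { cursors: true },                   // enabled by the quill-cursors module
});

// Bind the Yjs text type to the Quill editor; cursors come from provider.awareness.
const binding = new QuillBinding(ytext, editor, provider.awareness);

// Hook up the connect/disconnect button from index.html.
document.getElementById('y-connect-btn').addEventListener('click', () => {
  provider.shouldConnect ? provider.disconnect() : provider.connect();
});
```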
Conflict resolution approaches are not new, but with the rise of remote culture, it is important to have good collaborative systems in place to enhance productivity.
Although this example only covered rich text editing capabilities, we can extend existing resources to build more features and structures, like tabular data, graphs, charts, etc. Yjs shared types can be used to define your own data format based on how your custom editor represents data internally.
Redux has greatly helped in reducing the complexities of state management. Its one-way data flow is easier to reason about, and it also provides a powerful mechanism to include middlewares, which can be chained together to do our bidding. One of the most common use cases for middleware is to make async calls in the application. Middlewares like redux-thunk, redux-saga, redux-observable, etc. are a few examples. All of these come with their own learning curve and are best suited for tackling different scenarios.
But what if our use case is simple enough that we don’t want the added complexity a middleware brings? Can we somehow implement the most common use case of making async API calls using only Redux and JavaScript?
The answer is yes! This blog will explain how to implement async action calls in Redux without any middleware.
So let us start by creating a simple React project using create-react-app.
We will also use react-redux in addition to redux to make our lives a little easier, and to mock the APIs we will use https://jsonplaceholder.typicode.com/.
To keep things simple, we will implement just two API calls.
Create a new file called api.js. This is the file in which we will keep the fetch calls to the endpoints.
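A sketch of what api.js might contain, using the jsonplaceholder mock API mentioned earlier (the exact endpoint paths and function names are illustrative):

```javascript
// api.js - fetch calls to the mock API. In the real project these
// functions would be exported; names and paths are assumptions.
const BASE_URL = 'https://jsonplaceholder.typicode.com';

const fetchPosts = () =>
  fetch(`${BASE_URL}/posts`).then((res) => {
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    return res.json();
  });

const fetchUsers = () =>
  fetch(`${BASE_URL}/users`).then((res) => {
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    return res.json();
  });
```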
Each API call has three base actions associated with it: REQUEST, SUCCESS, and FAIL. Each of our APIs will be in one of these three states at any given time, and depending on these states we can decide how to show our UI. For example, when an API is in the REQUEST state we can have the UI show a loader, and when it is in the FAIL state we can show a custom UI to tell the user that something has gone wrong.
So we create three constants of REQUEST, SUCCESS and FAIL for each API call which we will be making. In our case the constants.js file will look something like this:
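The original snippet is not reproduced here; a plausible reconstruction of the constants and the corresponding initial state shape might look like this (the posts/users API names are assumptions):

```javascript
// constants.js - three action types per API call (a sketch; in the real
// project these would be exported).
const FETCH_POSTS_REQUEST = 'FETCH_POSTS_REQUEST';
const FETCH_POSTS_SUCCESS = 'FETCH_POSTS_SUCCESS';
const FETCH_POSTS_FAIL = 'FETCH_POSTS_FAIL';

const FETCH_USERS_REQUEST = 'FETCH_USERS_REQUEST';
const FETCH_USERS_SUCCESS = 'FETCH_USERS_SUCCESS';
const FETCH_USERS_FAIL = 'FETCH_USERS_FAIL';

// Initial state: each API call gets its own slice with an isLoading flag.
const initialState = {
  posts: { isLoading: false, data: [], error: null },
  users: { isLoading: false, data: [], error: null },
};
```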
As can be seen from the above code, each API’s data lives in its own object inside the state object. The isLoading key tells us if the API is in the REQUEST state.
Now that we have our store defined, let us see how we will manipulate the state with the different phases that an API call can be in. Below is our reducers.js file.
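The original file is not reproduced here; a sketch of what such a reducer might look like for one call follows (action type names like FETCH_POSTS_REQUEST are assumptions; the users reducer would be analogous):

```javascript
// reducers.js - handle the three phases of the posts API call.
function postsReducer(state = { isLoading: false, data: [], error: null }, action) {
  switch (action.type) {
    case 'FETCH_POSTS_REQUEST':
      return { ...state, isLoading: true, error: null };   // show a loader
    case 'FETCH_POSTS_SUCCESS':
      return { ...state, isLoading: false, data: action.payload };
    case 'FETCH_POSTS_FAIL':
      return { ...state, isLoading: false, error: action.payload };
    default:
      return state;
  }
}
```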
By giving each individual API call its own variable to denote the loading phase we can now easily implement something like multiple loaders in the same screen according to which API call is in which phase.
Now, to actually implement the async behaviour in the actions, we just need a normal JavaScript function that takes dispatch as its first argument. We pass dispatch to the function because it needs to dispatch actions to the store. Normally a component has access to dispatch, but since we want an external function to take control over dispatching, we need to hand that control to it.
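A sketch of such a function (action type names are assumptions, and `fetchPosts` here is a stub standing in for the real call in api.js):

```javascript
// Stub standing in for api.js's fetchPosts (an assumption for this sketch).
const fetchPosts = () => Promise.resolve([{ id: 1, title: 'hello' }]);

// A plain JavaScript "async action": it receives `dispatch` as its first
// argument and fires the three phase actions itself - no middleware needed.
function loadPosts(dispatch) {
  dispatch({ type: 'FETCH_POSTS_REQUEST' });
  return fetchPosts()
    .then((data) => dispatch({ type: 'FETCH_POSTS_SUCCESS', payload: data }))
    .catch((err) => dispatch({ type: 'FETCH_POSTS_FAIL', payload: err.message }));
}

// In a component (with react-redux), you would simply call: loadPosts(dispatch)
```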
This is how we do async calls without middlewares in redux. This is a much simpler approach than using a middleware and the learning curve associated with it. If this approach covers all your use cases then by all means use it.
Conclusion
This type of approach really shines when you have to make a simple enough application, like a demo of sorts, where API calls are all the side effects you need. In larger and more complicated applications there are a few inconveniences with this approach. First, we have to pass dispatch around, which seems kind of yucky. We also have to remember which calls require dispatch and which do not.
Node.js has become the most popular framework for web development, surpassing Ruby on Rails and Django in terms of popularity. The growing popularity of full-stack development, along with the performance benefits of asynchronous programming, has led to the rise of Node’s popularity. ExpressJS is a minimalistic, unopinionated, and the most popular web framework built for Node, and it has become the de facto framework for many projects. Note — This article is about building a RESTful API server with ExpressJS. I won’t be delving into a templating library like Handlebars to manage the views.
A quick search on Google will lead you to a ton of articles agreeing with what I just said, which could validate the theory. Your next step would be to go through a couple of videos about ExpressJS on YouTube, try hello world with a boilerplate template, choose a few recommended middlewares for Express (Helmet, Multer, etc.), an ORM (Mongoose if you are using MongoDB, or Sequelize if you are using a relational DB), and start building the APIs. Wow, that was so fast!
The problem starts to appear after a few weeks, when your code gets larger and more complex and you realise that there is no standard coding practice followed across the client and server code, refactoring or updating the code breaks something else, versioning of the APIs becomes difficult, and callbacks have made your life hell (you are smart if you are using Promises, but have you heard of async-await?).
Do you think your code is not so idiot-proof anymore? Don’t worry! You aren’t the only one who thinks this way.
Let me break the suspense and list down the technologies and libraries used in our idiot-proof code before you get restless.
Node 8.11.3: This is the latest LTS release from Node. We are using all the ES6 features along with async-await. We have the latest version of ExpressJs (4.16.3).
Typescript: It adds an optional static typing interface to JavaScript and also gives us familiar constructs like classes (ES6 also provides class as a construct), which makes it easy to maintain a large codebase.
Swagger: It provides a specification to easily design, develop, test and document RESTful interfaces. Swagger also provides many open source tools like codegen and editor that makes it easy to design the app.
TSLint: It performs static code analysis on Typescript for maintainability, readability and functionality errors.
Prettier: It is an opinionated code formatter which maintains a consistent style throughout the project. It only takes care of styling, like indentation (2 or 4 spaces) and whether the arguments remain on the same line or move to the next line when the line length exceeds 80 characters.
Husky: It allows you to add git hooks (pre-commit, pre-push) which can trigger TSLint, Prettier or Unit tests to automatically format the code and to prevent the push if the lint or the tests fail.
Before you move to the next section I would recommend going through the links to ensure that you have a sound understanding of these tools.
Now I’ll talk about some of the challenges we faced in some of our older projects and how we addressed these issues in the newer projects with the tools/technologies listed above.
Formal API definition
A problem that everyone can relate to is the lack of formal documentation in the project. Swagger addresses a part of this problem with their OpenAPI specification which defines a standard to design REST APIs which can be discovered by both machines and humans. As a practice, we first design the APIs in swagger before writing the code. This has 3 benefits:
It helps us to focus only on the design without having to worry about the code, scaffolder, naming conventions etc. Our API designs are consistent with the implementation because of this focused approach.
We can leverage tools like swagger-express-mw to internally wire the routes in the API doc to the controller, validate request and response object from their definitions etc.
Collaboration between teams becomes very easy, simple and standardised because of the Swagger specification.
Code Consistency
We wanted our code to look consistent across the stack (UI and backend), and we use ESLint to enforce this consistency. Example – Node traditionally used “require” while the UI-based frameworks used “import”-based syntax to load modules. We decided to follow the ES6 style across the project, and these rules are defined with ESLint.
Note — We have made slight adjustments to the TSLint rules for the backend and the frontend to make it easy for the developers. For example, we allow up to 120 characters per line in React, as some of our DOM-related code gets lengthy very easily.
Code Formatting
This is as important as maintaining code consistency in the project. It’s easy to read code that follows a consistent format: indentation, spaces, line breaks, etc. Prettier does a great job at this. We have also integrated Prettier with Typescript to highlight formatting errors along with linting errors. IDEs like VS Code also have a Prettier plugin that supports features like auto-format to make this easy.
Strict Typing
Typescript can be leveraged to the best only if the application follows strict typing. We try to enforce it as much as possible with exceptions made in some cases (mostly when a third party library doesn’t have a type definition). This has the following benefits:
Static code analysis works better when your code is strongly typed. We discover about 80–90% of the issues before compilation itself using the plugins mentioned above.
Refactoring and enhancements become very simple with Typescript. We first update the interface or the function definition and then follow the errors thrown by the Typescript compiler to refactor the code.
Git Hooks
Husky’s “pre-push” hook runs TSLint to ensure that we don’t push code with linting issues. If you follow TDD (the way it’s supposed to be done), then you can also run unit tests before pushing the code. We decided to go with pre-hooks because:
- Not everyone has CI from the very first day. With a git hook, we at least have some code quality checks from day one.
- Running lint and unit tests on the dev’s system leaves your CI with more resources to run integration and other complex tests which are not possible in a local environment.
- You force the developer to fix issues at the earliest, which results in better code quality and faster code merges and releases.
Async-await
We were using promises in our project for all the asynchronous operations. Promises would often lead to a long chain of then-error blocks, which is not very comfortable to read and often resulted in bugs when chains got very long (it goes without saying that Promises are still much better than the callback pattern). Async-await provides a very clean syntax for writing asynchronous operations that looks just like sequential code. We have seen a drastic improvement in code quality, fewer bugs, and better readability after moving to async-await.
Hope this article gave you some insights into tools and libraries that you can use to build a scalable ExpressJS app.
This blog post explores the performance cost of inline functions in a React application. Before we begin, let’s try to understand what inline function means in the context of a React application.
What is an inline function?
Simply put, an inline function is a function that is defined and passed down inside the render method of a React component.
Let’s understand this with a basic example of what an inline function might look like in a React application:
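A minimal sketch of such a component (the component and state names are illustrative, assuming a class component with React in scope):

```javascript
class ClickCounter extends React.Component {
  state = { clicks: 0 };

  render() {
    return (
      // The handler is defined inline, inside render, as part of the JSX.
      <button onClick={() => this.setState({ clicks: this.state.clicks + 1 })}>
        Clicked {this.state.clicks} times
      </button>
    );
  }
}
```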
The onClick prop, in the example above, is being passed as an inline function that calls this.setState. The function is defined within the render method, often inline with JSX. In the context of React applications, this is a very popular and widely used pattern.
Let’s begin by listing some common patterns and techniques where inline functions are used in a React application:
Render prop: A component prop that expects a function as a value. This function must return a JSX element, hence the name. Render prop is a good candidate for inline functions.
DOM event handlers: DOM event handlers often make a call to setState or invoke some effect in the React application such as sending data to an API server.
Custom functions or event handlers passed to children: Oftentimes, a child component requires a custom event handler to be passed down as props. An inline function is usually used in this scenario.
Bind in constructor: One of the most common patterns is to define the function within the class component and then bind context to the function in constructor. We only need to bind the current context if we want to use this keyword inside the handler function.
Bind in render: Another common pattern is to bind the context inline when the function is passed down. Eventually, this gets repetitive and hence the first approach is more popular.
There are several other approaches that the React dev community has come up with, like using a helper method to bind all functions automatically in the constructor.
After understanding inline functions through these examples and taking a look at a few alternatives, let’s see why inline functions are so popular and widely used.
Why use inline function
Inline function definitions sit right where they are invoked or passed down. This means inline functions are easier to write, especially when the body of the function consists of just a few instructions, such as calling setState. This works well within loops too.
For example, when rendering a list and assigning a DOM event handler to each list item, passing down an inline function feels much more intuitive. For the same reason, inline functions also make code more organized and readable.
Inline arrow functions preserve context, which means developers can use this without having to worry about the current execution context or explicitly binding a context to the function.
Inline functions make values from the parent scope available within the function definition. This results in more intuitive code, and developers need to pass down fewer parameters. Let’s understand this with an example.
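A sketch (names are illustrative, class-component style as used earlier):

```javascript
class Counter extends React.Component {
  state = { count: 0 };

  render() {
    const { count } = this.state;
    return (
      <div>
        {/* `count` from the surrounding scope is available inside the
            inline handlers without being passed as a parameter. */}
        <button onClick={() => this.setState({ count: count + 1 })}>Increment</button>
        <button onClick={() => console.log('Current count:', count)}>Show count</button>
      </div>
    );
  }
}
```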
Here, the value of count is readily available to the onClick event handlers. This behavior is called closing over: the inline handlers form closures over count.
For these reasons, React developers make use of inline functions heavily. That said, inline function has also been a hot topic of debate because of performance concerns. Let’s take a look at a few of these arguments.
Arguments against inline functions
A new function is defined every time the render method is called. It results in frequent garbage collection, and hence performance loss.
There is an ESLint rule, jsx-no-bind, that advises against using inline functions. The idea behind this rule is that when an inline function is passed down to a child component, React uses reference checks to decide whether to re-render that component. Since the inline function is recreated on every render, its reference never matches the previous one, which can cause the child component to re-render again and again.
Suppose the ListItem component implements the shouldComponentUpdate method, where it checks the onClick prop reference. Since inline functions are created every time a component re-renders, the ListItem component will receive a new function every time, pointing to a different location in memory. The comparison check in shouldComponentUpdate therefore fails and tells React to re-render ListItem even though the inline function’s behavior hasn’t changed. This results in unnecessary DOM updates and eventually reduces the performance of the application.
Performance concerns revolving around the Function.prototype.bind method: when not using arrow functions, the inline function being passed down must be bound to a context if it uses the this keyword. The practice of calling .bind before passing down an inline function used to raise performance concerns, but modern JavaScript engines have fixed this. For older browsers, Function.prototype.bind can be supplemented with a performance polyfill.
Now that we’ve summarized a few arguments in favor of inline functions and a few arguments against it, let’s investigate and see how inline functions really perform.
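The original component is not reproduced here, so the following is a hypothetical reconstruction for the discussion that follows: three inline handlers, of which only one or two are rendered at a time depending on the condition `timeThen > timeNow`. All names are assumptions:

```javascript
class TimeGate extends React.Component {
  state = { opened: false };

  render() {
    const { timeThen, timeNow } = this.props;
    // Only the handlers in the rendered branch are actually created.
    return timeThen > timeNow ? (
      <div>
        <button onClick={() => this.setState({ opened: true })}>Open</button>
        <button onClick={() => this.setState({ opened: false })}>Close</button>
      </div>
    ) : (
      <button onClick={() => console.log('expired at', timeThen)}>Report</button>
    );
  }
}
```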
Premature optimization can often lead to bad code. For instance, let’s try to get rid of all the inline function definitions in the component above and move them to the constructor because of performance concerns.
We’d then have to define 3 custom event handlers in the class definition and bind context to all three functions in the constructor.
This would increase the initialization time of the component significantly as opposed to inline function declarations where only one or two functions are defined and used at a time based on the result of condition timeThen > timeNow.
Concerns around render props: A render prop is a method that returns a React element and is used to share state among React components.
Render props are meant to be invoked on each render since they share state between parent components and enclosed React elements. Inline functions are a good candidate for use in render prop and won’t cause any performance concern.
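For illustration, a render-prop usage might look like this (ListView and the item fields are assumed names):

```javascript
// ListView is assumed to invoke its `render` prop once per item.
<ListView
  items={items}
  render={(item) => (
    <div>
      <label>{item.name}</label>
    </div>
  )}
/>
```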
Here, the render prop of the ListView component returns a label enclosed in a div. Since the enclosed component can never know what the value of the item variable is, it can never be a PureComponent or have a meaningful implementation of shouldComponentUpdate(). This eliminates the concerns around the use of inline functions in render props and, in fact, promotes it in most cases.
In my experience, inline render props can sometimes be harder to maintain, especially when the render prop returns a larger, more complicated component. In such cases, breaking the component down further, or passing a separate method down as the render prop, has worked well for me.
Concerns around PureComponents and shouldComponentUpdate(): Pure components and most implementations of shouldComponentUpdate do a shallow comparison of props and state using strict equality. These act as performance enhancers by letting React know when or when not to trigger a render based on changes to state and props. Since inline functions are created on every render, passing one to a pure component, or to a component that implements shouldComponentUpdate, can lead to an unnecessary render, because the inline function's reference has changed.
To overcome this, consider skipping checks on all function props in shouldComponentUpdate(). This assumes that inline functions passed to the component differ only in reference, not in behavior. If a function passed down does differ in behavior, the check will cause a missed render and eventually lead to bugs in the component's state and effects.
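One way to sketch that check (a hypothetical shouldComponentUpdate-style comparison, not React internals) is to treat any two function-valued props as equal:

```javascript
// Shallow comparison that skips function props. This assumes functions
// passed down differ only by reference, never by behavior, which is a
// risky assumption, as noted above.
function shouldUpdate(prevProps, nextProps) {
  const keys = new Set([...Object.keys(prevProps), ...Object.keys(nextProps)]);
  for (const key of keys) {
    const prev = prevProps[key];
    const next = nextProps[key];
    if (typeof prev === 'function' && typeof next === 'function') continue; // skip
    if (prev !== next) return true;
  }
  return false;
}

console.log(shouldUpdate({ id: 1, onClick: () => {} }, { id: 1, onClick: () => {} })); // false
console.log(shouldUpdate({ id: 1, onClick: () => {} }, { id: 2, onClick: () => {} })); // true
```
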
Conclusion
A common rule of thumb is to measure the performance of the app and only optimize where needed. The performance impact of inline functions, often categorized as a micro-optimization, is always a tradeoff between code readability, performance gain, and code organization that must be thought through carefully on a case-by-case basis; premature optimization should be avoided.
In this blog post, we observed that inline functions don't necessarily carry a large performance cost. They are widely used because of the ease of writing, reading, and organizing them, especially when the function definitions are short and simple.
It is amazing how the software industry has evolved. Back in the day, software meant a simple program. Some of the earliest software, such as the guidance programs for the Apollo lunar modules and the programs run on the Manchester Baby, were basic stored programs. Software was primarily used for research and mathematical purposes.
The invention of personal computers and the rise of the Internet changed the software world. Desktop applications like word processors, spreadsheets, and games flourished. Websites gradually emerged; at first, simple pages were delivered to the client as static documents for viewing. By the mid-1990s, with Netscape introducing the client-side scripting language JavaScript and Macromedia bringing in Flash, the browser became more powerful, allowing websites to become richer and more interactive. In the late 1990s, the Java platform introduced Servlets, and thus the web application was born. Nevertheless, these applications were still relatively simple. Engineers didn't put much emphasis on structuring them and mostly built unstructured monolithic applications.
The advent of disruptive technologies like cloud computing and Big Data paved the way for more intricate, convoluted web and native mobile applications. From e-commerce and video streaming to social media and photo editing, we had applications doing some of the most complicated data processing and storage tasks. The traditional monolithic approach now posed several challenges in terms of scalability, team collaboration, and integration/deployment, and often led to huge, messy "big ball of mud" codebases.
To untangle this ball of software, a number of service-oriented architectures emerged. The most promising of them was Microservices: breaking an application into smaller chunks that can be developed, deployed, and tested independently while still working as a single cohesive unit. Its benefits of scalability and ease of deployment by multiple teams proved a panacea for most of these architectural problems. A few front-end architectures also came up, such as MVC, MVVM, and Web Components, but none of them were fully able to reap the benefits of Microservices.
Micro Frontends first came up in the ThoughtWorks Technology Radar, where the technique was assessed, trialed, and eventually adopted after significant benefits were observed. It is a Microservices approach to front-end web development, in which independently deliverable front-end applications are composed into a whole.
Together with Microservices, Micro Frontends break the last monolith to create a complete micro-architecture design pattern for web applications. The application is composed entirely of loosely coupled vertical slices of business functionality rather than horizontal layers; we can call these verticals "Microapps". The concept is not new: it appeared earlier as Scaling with Microservices and Vertical Decomposition, which presented the idea of every vertical being responsible for a single business domain, with its own presentation layer, persistence layer, and a separate database. From the development perspective, every vertical is implemented by exactly one team, and no code is shared among the different systems.
Fig: Micro Frontends with Microservices (Micro-architecture)
Why Micro Frontends?
Like a microservice architecture, a Micro Frontend architecture has a whole slew of advantages when compared to monolithic architectures.
Ease of Upgrades – Micro Frontends create strict bounded contexts in the application. Applications can be updated in a more incremental and isolated fashion, without worrying about the risk of breaking another part of the application.
Scalability – Horizontal scaling is easy for Micro Frontends; keeping each Micro Frontend stateless makes it easier still.
Ease of Deployability – Each Micro Frontend has its own CI/CD pipeline that builds, tests, and deploys it to production. It doesn't matter if another team is mid-feature, has just pushed a bug fix, or is in the middle of a cutover or refactoring: as long as only one team works on a given Micro Frontend, there should be no risk in pushing changes to it.
Team Collaboration and Ownership – The Scrum Guide says that the "optimal Development Team size is small enough to remain nimble and large enough to complete significant work within a Sprint". Micro Frontends are perfect for multiple cross-functional teams, each of which can completely own one stack (Micro Frontend) of an application, from UX to database design. In the case of an e-commerce site, the Product team and the Payment team can work on the app concurrently without stepping on each other's toes.
Micro Frontend Integration Approaches
There is a multitude of ways to implement Micro Frontends. Whichever approach you choose, it is recommended to take a runtime integration route instead of build-time integration, as the latter requires re-compiling and re-releasing every single Micro Frontend in order to release a change to any one of them.
We shall learn some of the prominent Micro Frontend approaches by building a simple Pet Store e-commerce site. The site has the following aspects (or Microapps, if you will): Home or Search, Cart, Checkout, Product, and Contact Us. We shall only be working on the front end of the site; you can assume that each Microapp has a dedicated microservice in the backend. You can view the project demo here and the code repository here. Each integration approach has a branch in the repo that you can check out to view.
Single Page Frontends –
The simplest way (though not the most elegant) to implement Micro Frontends is to treat each Micro Frontend as a single page.
Fig: Single Page Micro Frontends: Each HTML file is a frontend.
<!DOCTYPE html>
<html lang="zxx">
<head>
  <title>The MicroFrontend - eCommerce Template</title>
</head>
<body>
  <header class="header-section header-normal">
    <!-- Header is repeated in each frontend, which is difficult to maintain -->
    ....
    ....
  </header>
  <main>
  </main>
  <footer>
    <!-- Footer is repeated in each frontend, which means we have to make multiple changes across all frontends -->
  </footer>
  <script>
    // Cross-cutting features like notification and authentication are all replicated in every frontend
  </script>
</body>
</html>
It is one of the purest ways of doing Micro Frontends because no container or stitching element binds the frontends together into an application. Each Micro Frontend is a standalone app, with every dependency encapsulated in it and no coupling to the others. The flip side of this approach is that each frontend duplicates cross-cutting concerns like headers and footers, which adds redundancy and maintenance burden.
JavaScript Rendering Components (or Web Components / Custom Elements) –
As we saw above, the single-page Micro Frontend architecture has its share of drawbacks. To overcome these, we should opt for an architecture with a container element that builds the context of the app, handles cross-cutting concerns like authentication, and stitches all the Micro Frontends together to create a cohesive application.
// A virtual class from which all micro frontends extend
class MicroFrontend {
  beforeMount() {
    // do things before the micro frontend mounts
  }
  onChange() {
    // do things when the attributes of a micro frontend change
  }
  render() {
    // HTML of the micro frontend
    return '<div></div>';
  }
  onDismount() {
    // do things after the micro frontend dismounts
  }
}
class Cart extends MicroFrontend {
  beforeMount() {
    // get the previously saved cart from the backend
  }
  render() {
    return `<!-- Page -->
      <div class="page-area cart-page spad">
        <div class="container">
          <div class="cart-table">
            <table>
              <thead>
      .....`;
  }
  addItemToCart() { ... }
  deleteItemFromCart() { ... }
  applyCouponToCart() { ... }
  onDismount() {
    // save the cart so the user can come back to it afterwards
  }
}
<!DOCTYPE html>
<html lang="zxx">
<head>
  <title>PetStore - because Pets love pampering</title>
  <meta charset="UTF-8" />
  <link rel="stylesheet" href="css/style.css" />
</head>
<body>
  <!-- Header section -->
  <header class="header-section">
    ....
  </header>
  <!-- Header section end -->
  <main id="microfrontend">
    <!-- This is where the micro frontend gets rendered by the utility renderMicroFrontend.js -->
  </main>
  <!-- Footer section -->
  <footer class="footer-section">
    ....
  </footer>
  <!-- Footer section end -->
  <script src="frontends/MicroFrontend.js"></script>
  <script src="frontends/Home.js"></script>
  <script src="frontends/Cart.js"></script>
  <script src="frontends/Checkout.js"></script>
  <script src="frontends/Product.js"></script>
  <script src="frontends/Contact.js"></script>
  <script src="routes.js"></script>
  <script src="renderMicroFrontend.js"></script>
</body>
</html>
// utility renderMicroFrontend.js
function renderMicroFrontend(pathname) {
  const microFrontend = routes[pathname || window.location.hash];
  const root = document.getElementById('microfrontend');
  root.innerHTML = microFrontend ? new microFrontend().render() : new Home().render();
  $(window).scrollTop(0);
}

$(window).bind('hashchange', function (e) {
  renderMicroFrontend(window.location.hash);
});
renderMicroFrontend(window.location.hash);

// utility routes.js (a map from hash route to Micro Frontend class)
const routes = {
  '#': Home,
  '': Home,
  '#home': Home,
  '#cart': Cart,
  '#checkout': Checkout,
  '#product': Product,
  '#contact': Contact,
};
As you can see, this approach is pretty neat: the shared behavior lives in a separate MicroFrontend class, and all other Micro Frontends extend from it. Notice how all the functionality related to a Microapp is encapsulated in its respective Micro Frontend. This ensures that concurrent work on one Micro Frontend doesn't break the others.
Everything will work in a similar paradigm when it comes to Web Components and Custom Elements.
React
With client-side JavaScript frameworks being so popular, it is impossible to leave React out of any front-end discussion. React being a component-based JS library, much of what we discussed above also applies to it. Here, I am going to discuss some of the technicalities and challenges of building Micro Frontends with React.
Styling
Since there should be minimal sharing of code between Micro Frontends, styling React components can be challenging, given the global and cascading nature of CSS. We should make sure styles are scoped to a specific Micro Frontend without spilling over into the others. Inline CSS, CSS-in-JS libraries like Radium, and CSS Modules can all be used with React.
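Even without a CSS-in-JS library, a simple naming convention can approximate this isolation. A sketch (the scoped helper and the class names are made up for illustration, in the spirit of BEM-style prefixing):

```javascript
// Prefix every class name with its micro frontend's name so Cart's
// ".title" can never collide with Checkout's ".title" in the global
// cascade (a BEM-like convention, here as a tiny hypothetical helper).
const scoped = (frontendName) => (className) => `${frontendName}__${className}`;

const cartClass = scoped('cart');
const checkoutClass = scoped('checkout');

console.log(cartClass('title'));     // 'cart__title'
console.log(checkoutClass('title')); // 'checkout__title'
```
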
Redux
Using React with Redux is something of a norm in today's front-end world. The convention is to use Redux as a single global store for the entire app, enabling cross-application communication. But a Micro Frontend should be self-contained, with no outside dependencies; hence each Micro Frontend should have its own Redux store, moving us towards a multiple-Redux-store architecture.
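The idea can be sketched with a tiny createStore stand-in (a real app would import the Redux library; the reducers, action types, and state shapes below are made up for illustration):

```javascript
// Minimal stand-in for Redux's createStore — just enough to show
// per-micro-frontend stores (not the real Redux library).
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action);
      listeners.forEach((listener) => listener());
      return action;
    },
    subscribe(listener) {
      listeners.push(listener);
    },
  };
}

// Each micro frontend owns its own store and state shape.
const cartStore = createStore(
  (state, action) =>
    action.type === 'ADD_ITEM' ? { items: [...state.items, action.item] } : state,
  { items: [] }
);
const checkoutStore = createStore(
  (state, action) => (action.type === 'NEXT_STEP' ? { step: state.step + 1 } : state),
  { step: 1 }
);

cartStore.dispatch({ type: 'ADD_ITEM', item: 'Dog food' });
console.log(cartStore.getState());     // { items: ['Dog food'] }
console.log(checkoutStore.getState()); // { step: 1 } — untouched by Cart's dispatch
```
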
Other Noteworthy Integration Approaches –
Server-Side Rendering – One can use a server to assemble the Micro Frontend templates before dispatching them to the browser. Server Side Includes (SSI) techniques can be used here too.
iframes – Each Micro Frontend can be an iframe. iframes also offer a good degree of isolation: styles and global variables don't interfere with each other.
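The server-side approach above can be sketched as simple template splicing. In this sketch, the SSI-style include syntax, fragment names, and markup are all illustrative, not a real SSI implementation:

```javascript
// Sketch of server-side composition: the server replaces SSI-style
// include directives in a layout with each micro frontend's HTML fragment.
function assemble(layout, fragments) {
  return layout.replace(
    /<!--#include virtual="(\w+)" -->/g,
    (match, name) => fragments[name] ?? match // leave unknown includes untouched
  );
}

const layout =
  '<body><!--#include virtual="header" --><main><!--#include virtual="cart" --></main></body>';

const page = assemble(layout, {
  header: '<header>PetStore</header>',
  cart: '<div class="cart-page">Cart</div>',
});

console.log(page);
// <body><header>PetStore</header><main><div class="cart-page">Cart</div></main></body>
```
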
Summary
Together with Microservices, Micro Frontends promise to bring a lot of benefits when it comes to structuring a complex application and simplifying its development, deployment, and maintenance.
But there is a wonderful saying: "there is no one-size-fits-all approach that anyone can offer you; the same hot water that softens a carrot hardens an egg". Micro Frontends are no silver bullet for your architectural problems and come with their own share of downsides. With more repositories, more tools, more build/deploy pipelines, more servers, and more domains to maintain, Micro Frontends can increase the complexity of an app. They can make cross-application communication difficult to establish, and they can lead to duplicated dependencies and an increase in application size.
Your decision to adopt this architecture will depend on many factors, such as the size of your organization, the complexity of your application, and whether you have a new or a legacy codebase. It is advisable to apply the technique gradually and review its efficacy over time.