By migrating their monolithic IPTV solution and integrating it with a microservices-based OTT platform, the client achieved a scalable, cost-efficient, and future-ready system.
The integration reduced costs, simplified operations, and streamlined workflows, all while maintaining exceptional service quality. The modernization also enhanced the platform’s marketability and unlocked new revenue opportunities, positioning the client as a leader in offering innovative solutions to media and entertainment providers.
Results Delivered
Cost Savings: Unified operations and database migration significantly reduced expenses
Enhanced Scalability: Processed sensitive data for millions of users efficiently with modular microservices
Improved Marketability: Adding the NPVR feature as SaaS unlocked new revenue streams
Operational Excellence: Streamlined redundant tasks and enhanced monitoring for a better customer experience
Reliability: Achieved five-nines (99.999%) uptime with zero downtime since deployment
A major telecom operator in Western Europe, serving over 6 million subscribers, underwent a digital transformation that transcended typical system upgrades. By reimagining their Voucher and Reload Management System, originally built on a monolithic architecture, our client not only resolved their immediate operational challenges but also gained a dynamic, scalable platform that positioned them for sustained growth.
The modernization process shows how strategic technological innovation can reshape an organization’s operational capabilities, turning constraints into opportunities. By leveraging the benefits of microservices architecture, our approach allowed the client to overcome legacy constraints and reduce costs.
Results Delivered
100% uptime since launch and zero customer support incidents
Successful management of 6+ million customer data records
Eliminated database licensing costs, saving resources for growth initiatives
Enhanced security due to robust architecture and technology stack
Future-ready system supporting seamless scalability and operational efficiency
The digital transformation journey positioned our client as a technology leader in the legal sector. By creating a scalable foundation for future growth, the client significantly reduced operational complexity and improved service delivery capabilities. This transformation not only enhanced their competitive edge but also ensured long-term sustainability and success in the rapidly evolving legal tech landscape.
Operational Efficiency
• 65% reduction in maintenance costs through a consolidated technology stack
• Increased deployment frequency from once every few weeks to multiple times a day
• 99.999% system availability
• 70% decrease in incident resolution time

Customer Value
• 45% faster document processing
• 80% reduction in onboarding time for new clients
• 90% improvement in feature delivery time

Financial Performance
• 50% reduction in infrastructure costs
• 60% decrease in operational overhead

Innovation Highlights: The transformation introduced several groundbreaking features:
• Smart contract analysis with AI assistance
• Automated compliance checking across jurisdictions
• Real-time document collaboration with version control
• Integrated e-signature and workflow automation
• Advanced analytics and reporting dashboard
“Embrace failures. Chaos and failures are your friends, not enemies.” A microservice ecosystem is going to fail at some point. The question is not whether you will fail, but whether you will notice when you do. The difference is between an outage that affects all of your users because every service is down, and one that affects only a few users while you fix it at your own pace.
Chaos Engineering is the practice of intentionally introducing faults and failures into your microservice architecture to test the resilience and stability of your system, and Istio can be a great tool for it. Let’s have a look at how Istio makes it easy.
For more information on how to set up Istio and what virtual services and gateways are, please have a look at the following blog: how to set up Istio on GKE.
Fault Injection With Istio
Fault injection is a testing method that introduces errors into your microservice architecture to ensure it can withstand error conditions. Istio lets you inject errors at the HTTP layer instead of delaying packets or killing pods at the network layer. This way, you can generate various HTTP error codes and test how your services react under those conditions.
Generating HTTP 503 Error
Here, two pods are running two different versions of the recommendation service, set up by following the recommended tutorial while installing the sample application.
Currently, the traffic on the recommendation service is automatically load balanced between those two pods.
Now let’s apply fault injection using a virtual service that returns an HTTP 503 error code for 30% of the traffic to the pods above.
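A sketch of such a virtual service, saved as recommendation-fault.yaml; the recommendation host name follows the sample application, so adjust it to your setup:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - fault:
      abort:
        # Return HTTP 503 for 30% of the requests
        percentage:
          value: 30.0
        httpStatus: 503
    route:
    - destination:
        host: recommendation
```

Apply it with kubectl apply -f recommendation-fault.yaml.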
To test whether it is working, check the output of a curl against the customer service microservice endpoint.
You will see a 503 error on approximately 30% of the requests reaching the recommendation service.
To restore normal operation, please delete the above virtual service using:
kubectl delete -f recommendation-fault.yaml
Delay
The most common failure we see in production is not a service that is down, but a service that is slow. Sometimes your application doesn’t respond in time and creates chaos across the whole ecosystem. To inject network latency as a chaos experiment, you can create another virtual service. Let’s have a look at how to simulate that behavior.
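A sketch of a delay-injecting virtual service; the 7-second delay on 50% of requests is an illustrative choice:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - fault:
      delay:
        # Add a fixed 7s delay to 50% of the requests
        percentage:
          value: 50.0
        fixedDelay: 7s
    route:
    - destination:
        host: recommendation
```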
Now, if you hit the URL of endpoints of the above service in a loop, you will see the delays in some of the requests.
Retry
For some production services, we expect that instead of failing instantly, a request should be retried N times to get the desired output. Only if all retries fail should the request be considered failed.
For that mechanism, you can insert retries on those services as follows:
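A sketch of a retry policy on the recommendation virtual service; the per-try timeout is an illustrative choice:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
    retries:
      # Retry up to 3 times, giving each attempt 2 seconds
      attempts: 3
      perTryTimeout: 2s
```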
Now any request coming to the recommendation service will be attempted three times before being considered failed.
Timeout
In the real world, applications face most failures due to timeouts, whether from heavy load or from other latency in serving a request. Your application should have proper timeouts defined before declaring any request failed. You can use Istio to simulate a timeout mechanism and give your application a limited amount of time to respond before giving up.
Wait only for N seconds before failing and giving up.
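A sketch with an illustrative 5-second timeout on the recommendation route:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
    # Fail the request if no response arrives within 5 seconds
    timeout: 5s
```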
Istio lets you inject faults at the HTTP layer of your application and thereby improve its resilience and stability. But the application must handle the failures and take an appropriate course of action. Chaos Engineering is only effective for verifying that your application can tolerate failures; there is no point in testing for chaos if you already know your application is broken.
GraphQL has revolutionized how a client queries a server. With the thin layer of GraphQL middleware, the client has the ability to query the data more comprehensively than what’s provided by the usual REST APIs.
One of the key principles of GraphQL involves having a single data graph of the implementing services that will allow the client to have a unified interface to access more data and services through a single query. Having said that, it can be challenging to follow this principle for an enterprise-level application on a single, monolith GraphQL server.
The Need for Federated Services
James Baxley III, the Engineering Manager at Apollo, in his talk here, puts forward the rationale behind choosing an independently managed federated set of services very well.
To summarize his point, let’s consider a very complex enterprise product. Such a product would have multiple teams responsible for maintaining its different modules. Now, if we’re considering implementing a GraphQL layer at the backend, it would only make sense to follow the one-graph principle of GraphQL: to maximize the value of GraphQL, we should have a single unified data graph operating at the data layer of the product. With that, it is easier for a client to query a single graph and get all the data, without having to query different graphs for different portions of the data.
However, it would be challenging to have all of the huge enterprise data graphs’ layer logic residing on a single codebase. In addition, we want teams to be able to independently implement, maintain, and ship different schemas of the data graph on their own release cycles.
Though there is only one graph, the implementation of that graph should be federated across multiple teams.
Now, let’s consider a massive enterprise e-commerce platform as an example. The different schemas of the e-commerce platform look something like:
Fig: E-commerce platform set of schemas
Considering the above example, it would be a chaotic task to maintain the graph implementation logic of all these schemas on a single code base. Another overhead that this would bring is having to scale a huge monolith that’s implementing all these services.
Thus, one solution is a federation of services behind a single distributed data graph. Each service can be implemented independently by an individual team, each maintaining its own release cycle and iterating on its own service. A federated set of services still follows the one-graph principle of GraphQL, allowing the client to query a single endpoint to fetch any part of the data graph.
To further demonstrate the example above, let’s say the client asks for the top five products, their reviews, and the vendors selling them. In a typical monolith GraphQL server, this query would involve writing a resolver that is a mesh of the data sources of these individual schemas, and teams would have to collaborate on a single implementation. Now consider a federated approach with separate services implementing products, reviews, and vendors. Each service is responsible for resolving only the part of the data graph covered by its own schema and data source. This makes it far more streamlined for the different teams managing different schemas to collaborate.
Another advantage would be handling the scaling of individual services rather than maintaining a compute-heavy monolith for a huge data graph. For example, the products service is used the most on the platform, and the vendors service is scarcely used. In case of a monolith approach, the scaling would’ve had to take place on the overall server. This is eliminated with federated services where we can independently maintain and scale individual services like the products service.
Federated Implementation of GraphQL Services
A monolith GraphQL server that implements a lot of services for different schemas can be challenging to scale. Instead of implementing the complete data graph on a single codebase, the responsibilities of different parts of the data graph can be split across multiple composable services. Each one will contain the implementation of only the part of the data graph it is responsible for. Apollo Federation allows this division of services and follows a declarative programming model to allow splitting of concerns.
Architecture Overview
This article will not cover the basics of GraphQL, such as writing resolvers and schemas. If you’re not acquainted with the basics of GraphQL and setting up a basic GraphQL server using Apollo, I would highly recommend reading about it here. Then, you can come back here to understand the implementation of federated services using Apollo Federation.
Apollo Federation has two principal parts to it:
A collection of services that distinctly define separate GraphQL schemas
A gateway that builds the federated data graph and acts as a single front door, composing queries across the different services
Fig: Apollo Federation Architecture
Separation of Concerns
The usual way to implement federated services is to split an existing monolith based on the schemas already defined. Although this seems like a clear approach, it quickly causes problems when multiple schemas are involved.
To illustrate, this is a typical way to split services from a monolith based on the existing defined Schemas:
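For instance, with hypothetical User and Tweet schemas, the naive split keeps every field of a schema in “its” service:

```graphql
# User service
type User {
  id: ID!
  username: String!
  tweets: [Tweet] # but the User service has no access to the Tweet datastore
}

# Tweet service
type Tweet {
  id: ID!
  text: String!
  creator: User # but the Tweet service has no access to the User datastore
}
```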
In the example above, although the tweets field belongs to the User schema, it wouldn’t make sense to populate this field in the User service. The tweets field of a User should be declared and resolved in the Tweet service itself. Similarly, it wouldn’t be right to resolve the creator field inside the Tweet service.
The reason behind this approach is the separation of concerns. The User service might not even have access to the Tweet datastore to be able to resolve the tweets field of a user. On the other hand, the Tweet service might not have access to the User datastore to resolve the creator field of the Tweet schema.
Considering the above schemas, each service should resolve only the fields of the schemas it is responsible for.
Implementation
To illustrate Apollo Federation, we’ll consider a Node.js server built with TypeScript. The packages used are provided by the Apollo libraries.
npm i --save apollo-server @apollo/federation @apollo/gateway
Some additional libraries to help run the services in parallel:
npm i --save nodemon ts-node concurrently
Let’s go ahead and write the structure for the gateway service first. Let’s create a file gateway.ts:
Note that serviceList is an empty array for now, since we’ve yet to implement the individual services. In addition, we pass the subscriptions: false option to the Apollo Server config because Apollo Federation does not currently support subscriptions.
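A minimal sketch of what gateway.ts might look like, assuming the ApolloGateway serviceList API from the packages installed above; the port and log message are illustrative:

```typescript
import { ApolloServer } from 'apollo-server';
import { ApolloGateway } from '@apollo/gateway';

const gateway = new ApolloGateway({
  // Empty for now; the implementing services are added later
  serviceList: [],
});

const server = new ApolloServer({
  gateway,
  // Apollo Federation does not support subscriptions yet
  subscriptions: false,
});

server.listen({ port: 4000 }).then(({ url }) => {
  console.log(`Gateway ready at ${url}`);
});
```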
Next, let’s add the User service in a separate file user.ts using:
The @key directive tells other services that the User schema is, in fact, an entity that can be extended within other individual services. The fields argument lets other services uniquely identify individual instances of the User schema based on the id.
The Query and the Mutation types need to be extended by all implementing services according to the Apollo Federation documentation since they are always defined on a gateway level.
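Sketching the User service schema with these directives; the username field and the createUser mutation name are assumptions:

```graphql
type User @key(fields: "id") {
  id: ID!
  username: String!
}

extend type Query {
  users: [User]
  user(id: ID!): User
}

extend type Mutation {
  createUser(username: String!): User
}
```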
As a side note, the User model imported with import User from './datasources/models/User'; is essentially a Mongoose model for MongoDB that handles all the CRUD operations for the User entity in a MongoDB database. In addition, the mongoStore() function is responsible for establishing a connection to the MongoDB database server.
The User model implementation internally in Mongoose ORM looks something like this:
In the Query type, the users and the user(id: ID!) queries fetch a list or the details of individual users.
In the resolvers, we define a __resolveReference function, which returns an instance of the User entity to any other implementing service that holds only a reference id of a User and needs the full instance. The ref parameter is an object like { id: 'userEntityId' } containing the id of a User instance, passed down from other services that need the reference resolved. Internally, we fire a Mongoose findOne query to return the matching User from the users database.
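To illustrate, here is a plain-JavaScript sketch of the resolvers, with a hypothetical in-memory users array standing in for the Mongoose User model:

```javascript
// In-memory stand-in for the Mongoose User model (hypothetical data)
const users = [
  { id: '1', username: '@elonmusk' },
  { id: '2', username: '@billgates' },
];

const resolvers = {
  User: {
    // Called by the gateway when another service hands over a User reference
    __resolveReference(ref) {
      // In the real service this is a Mongoose findOne on the users database
      return users.find((u) => u.id === ref.id) || null;
    },
  },
  Query: {
    users: () => users,
    user: (parent, { id }) => users.find((u) => u.id === id) || null,
  },
};

console.log(resolvers.User.__resolveReference({ id: '2' }));
// → { id: '2', username: '@billgates' }
```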
At the end of the file, we make sure the service runs on a unique port (4001), which we pass as an option when starting the Apollo server. That concludes the User service.
Next, let’s add the tweet service by creating a file tweet.ts using:
touch tweet.ts
The following code goes as a part of the tweet service:
The Tweet schema has a text field, which is the content of the tweet, a unique id, and a creator field, which is of the User entity type and resolves into the details of the user that created the tweet.
We extend the User entity schema in this service, which has the id field with an @external directive. This helps the Tweet service understand that based on the given id field of the User entity schema, the instance of the User entity needs to be derived from another service (user service in this case).
As we discussed previously, the tweets field of the extended User schema for the user entity should be resolved in the Tweet service since all the resolvers and access to the data sources with respect to the Tweets entity resides in this service.
The Query and Mutation types of the Tweet service are pretty straightforward: we have tweets and tweet(id: ID!) queries to resolve a list of tweets or an individual instance of the Tweet entity.
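Putting the pieces above together, the Tweet service schema might be sketched as follows; the createTweet mutation name and its arguments are assumptions:

```graphql
type Tweet @key(fields: "id") {
  id: ID!
  text: String!
  creator: User
}

extend type User @key(fields: "id") {
  id: ID! @external
  tweets: [Tweet]
}

extend type Query {
  tweets: [Tweet]
  tweet(id: ID!): Tweet
}

extend type Mutation {
  createTweet(text: String!, creatorId: ID!): Tweet
}
```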
To resolve the creator field of the Tweet entity, the Tweet service needs to tell the gateway that this field will be resolved by the User service. Hence, we pass the id of the User and a __typename for the gateway to be able to call the right service to resolve the User entity instance. In the User service earlier, we wrote a __resolveReference resolver, which will resolve the reference of a User based on an id.
Now, we need to resolve the tweets field of the User entity extended in the Tweet service. We need to write a resolver where we get the parent user entity reference in the first argument of the resolver using which we can fire a Mongoose ORM query to return all the tweets created by the user given its id.
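A plain-JavaScript sketch of the Tweet service’s resolvers, with an in-memory tweets array standing in for the Mongoose model; the creatorId field name is an assumption:

```javascript
// In-memory stand-in for the Mongoose Tweet model (hypothetical data)
const tweets = [
  { id: 't1', text: 'I own Tesla', creatorId: '1' },
  { id: 't2', text: 'I own Microsoft', creatorId: '2' },
];

const resolvers = {
  Tweet: {
    // Hand the gateway a reference; the User service resolves the full entity
    creator(tweet) {
      return { __typename: 'User', id: tweet.creatorId };
    },
  },
  User: {
    // Resolve the tweets field of the extended User entity in this service
    tweets(userRef) {
      // In the real service: Tweet.find({ creatorId: userRef.id })
      return tweets.filter((t) => t.creatorId === userRef.id);
    },
  },
};

console.log(resolvers.User.tweets({ id: '1' }).map((t) => t.text));
// → [ 'I own Tesla' ]
```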
At the end of the file, similar to the User service, we make sure the Tweet service runs on a different port by adding the port: 4002 option to the Apollo server config. That concludes both our implementing services.
Now that we have our services ready, let’s update our gateway.ts file to reflect the added services:
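The updated gateway configuration might look like this sketch; the service names are assumptions, and the ports match the ones the services listen on:

```typescript
const gateway = new ApolloGateway({
  serviceList: [
    { name: 'user', url: 'http://localhost:4001' },
    { name: 'tweet', url: 'http://localhost:4002' },
  ],
});
```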
The concurrently library helps run three separate scripts in parallel. The server:* scripts spin up a dev server, using nodemon to watch and reload the server on changes and ts-node to execute TypeScript under Node.
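The scripts section of package.json might be sketched as follows; the individual script names are assumptions, while the server:* pattern follows the text above:

```json
{
  "scripts": {
    "start": "concurrently \"npm:server:*\"",
    "server:gateway": "nodemon --exec ts-node gateway.ts",
    "server:user": "nodemon --exec ts-node user.ts",
    "server:tweet": "nodemon --exec ts-node tweet.ts"
  }
}
```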
Let’s spin up our server:
npm start
On visiting http://localhost:4000, you should see the GraphQL query playground served by the Apollo gateway.
Querying and Mutation from the Client
Initially, let’s fire some mutations to create two users and some tweets by those users.
Mutations
Fire the following mutation in the GraphQL playground to create a user with the username “@elonmusk”; it returns the id of the created user:
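The mutation might look something like this; the createUser name and argument shape are assumptions:

```graphql
mutation {
  createUser(username: "@elonmusk") {
    id
    username
  }
}
```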
We will create another user named “@billgates” the same way and take note of the returned id.
Now that we have two users, let’s fire some mutations to create tweets by them. Here is a simple mutation that creates a tweet by the user “@elonmusk”:
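A sketch of such a mutation, where the createTweet name and arguments are assumptions and creatorId is the id returned when creating the user:

```graphql
mutation {
  createTweet(
    text: "I own Tesla"
    creatorId: "<id returned by createUser>"
  ) {
    id
    text
  }
}
```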
Another mutation of the same shape creates a tweet by the user “@billgates”.
After adding a couple of those, we are good to fire our queries, which will allow the gateway to compose the data by resolving fields through different services.
Queries
Initially, let’s list all the tweets along with their creator, which is of type User. The query will look something like:
{
  tweets {
    text
    creator {
      username
    }
  }
}
When the gateway encounters a query asking for tweet data, it forwards that query to the Tweet service, since the Tweet service extends the Query type and has the tweets query defined in it.
On encountering the creator field of the tweet schema, which is of the type User, the creator resolver within the Tweet service is invoked. This is essentially just passing a __typename and an id, which tells the gateway to resolve this reference from another service.
In the User service, we have a __resolveReference function, which returns the complete instance of a user given its id passed from the Tweet service. It also serves any other implementing service that needs a User entity reference resolved.
On firing the query, the response should look something like:
{
  "data": {
    "tweets": [
      { "text": "I own Tesla", "creator": { "username": "@elonmusk" } },
      { "text": "I own SpaceX", "creator": { "username": "@elonmusk" } },
      { "text": "I own PayPal", "creator": { "username": "@elonmusk" } },
      { "text": "I own Microsoft", "creator": { "username": "@billgates" } },
      { "text": "I own XBOX", "creator": { "username": "@billgates" } }
    ]
  }
}
Now, let’s try it the other way round. Let’s list all users and add the field tweets that will be an array of all the tweets created by that user. The query should look something like:
{
  users {
    username
    tweets {
      text
    }
  }
}
When the gateway encounters the query of type users, it passes down that query to the user service. The User service is responsible for resolving the username field of the query.
On encountering the tweets field of the users query, the gateway checks if any other implementing service has extended the User entity and has a resolver written within the service to resolve any additional fields of the type User.
The Tweet service has extended the type User and has a resolver for the User type to resolve the tweets field, which will fetch all the tweets created by the user given the id of the user.
On firing the query, the response should be something like:
{
  "data": {
    "users": [
      {
        "username": "@elonmusk",
        "tweets": [
          { "text": "I own Tesla" },
          { "text": "I own SpaceX" },
          { "text": "I own PayPal" }
        ]
      },
      {
        "username": "@billgates",
        "tweets": [
          { "text": "I own Microsoft" },
          { "text": "I own XBOX" }
        ]
      }
    ]
  }
}
Conclusion
Scaling an enterprise data graph on a monolithic GraphQL service brings many challenges. Being able to distribute the data graph across implementing services that can be individually maintained and scaled, using Apollo Federation, quells most of those concerns.
There are further advantages of federated services. Considering our example above, we could have two different kinds of datastores for the User and the Tweet services. While the User data could reside in a NoSQL database like MongoDB, the Tweet data could live in a SQL database like Postgres or MySQL. This would be easy to implement, since each service is responsible for resolving references only for the type it owns.
Final Thoughts
One of the key advantages of having different services that can be maintained individually is the ability to deploy each service separately. In addition, this also enables deployment of different services independently to different platforms such as Firebase, Lambdas, etc.
A single monolith GraphQL server deployed on an instance or a single serverless platform can have some challenges with respect to scaling an instance or handling high concurrency as mentioned above.
By splitting out the services, we could have a separate serverless function for each implementing service that can be maintained or scaled individually and also a separate function on which the gateway can be deployed.
One popular use of GraphQL federation can be seen in this Netflix Technology Blog post, where they explain how they solved a bottleneck with the GraphQL APIs in Netflix Studio. They created a federated GraphQL microservices architecture, along with a schema store, using Apollo Federation. This solution gave them a unified schema with distributed ownership and implementation.
It is amazing how the software industry has evolved. Back in the day, software was a simple program. Some of the first software applications, like the Apollo mission landing modules and the Manchester Baby, were basic stored programs. Software was primarily used for research and mathematical purposes.
The invention of personal computers and the prominence of the Internet changed the software world. Desktop applications like word processors, spreadsheets, and games grew. Websites gradually emerged; back then, simple pages were delivered to the client as static documents for viewing. By the mid-1990s, with Netscape introducing the client-side scripting language JavaScript and Macromedia bringing in Flash, the browser became more powerful, allowing websites to become richer and more interactive. In 1999, the Java language introduced Servlets, and thus the web application was born. Nevertheless, these applications were still relatively simple, and engineers didn’t put much emphasis on structuring them, mostly building unstructured monolithic applications.
The advent of disruptive technologies like cloud computing and big data paved the way for more intricate, convoluted web and native mobile applications. From e-commerce and video streaming to social media and photo editing, applications were doing some of the most complicated data processing and storage tasks. The traditional monolithic way now posed several challenges in terms of scalability, team collaboration, and integration/deployment, and often led to huge, messy “big ball of mud” codebases.
To untangle this ball of software, a number of service-oriented architectures emerged. The most promising of them was microservices: breaking an application into smaller chunks that can be developed, deployed, and tested independently while working as a single cohesive unit. Its benefits of scalability and ease of deployment by multiple teams solved most of the architectural problems. A few front-end architectures also came up, such as MVC, MVVM, and Web Components, to name a few, but none of them was fully able to reap the benefits of microservices.
Micro Frontends first came up in the ThoughtWorks Technology Radar, where the technique was assessed, trialed, and eventually adopted after significant benefits were noticed. It is a microservice approach to front-end web development, where independently deliverable front-end applications are composed into a whole.
Together with microservices, Micro Frontends break the last monolith to create a complete micro-architecture design pattern for web applications, composed entirely of loosely coupled vertical slices of business functionality rather than horizontal layers. We can term these verticals “Microapps”. The concept is not new and appeared earlier in Scaling with Microservices and Vertical Decomposition, which presented the idea of every vertical being responsible for a single business domain, with its own presentation layer, persistence layer, and separate database. From the development perspective, every vertical is implemented by exactly one team, and no code is shared among the different systems.
Fig: Micro Frontends with Microservices (Micro-architecture)
Why Micro Frontends?
A Micro Frontend architecture has a whole slew of advantages compared to a monolithic front end.
Ease of Upgrades – Micro Frontends build strict bounded contexts in the application. Applications can be updated in a more incremental and isolated fashion, without worrying about the risk of breaking another part of the application.
Scalability – Horizontal scaling is easy for Micro Frontends. Each Micro Frontend has to be stateless for easier scalability.
Ease of Deployability – Each Micro Frontend has its own CI/CD pipeline that builds, tests, and deploys it to production. It doesn’t matter if another team is working on a feature, has pushed a bug fix, or is in the middle of a cutover or refactoring; there should be no risk in pushing changes to a Micro Frontend as long as only one team works on it.
Team Collaboration and Ownership – The Scrum Guide says that the “Optimal Development Team size is small enough to remain nimble and large enough to complete significant work within a Sprint”. Micro Frontends are perfect for multiple cross-functional teams that can completely own a stack (a Micro Frontend) of an application, from UX to database design. On an e-commerce site, the Product team and the Payment team can work on the app concurrently without stepping on each other’s toes.
Micro Frontend Integration Approaches
There is a multitude of ways to implement Micro Frontends. It is recommended that any approach take a runtime integration route instead of build-time integration, as the latter forces you to re-compile and release every single Micro Frontend in order to release a change in any one of them.
We shall learn some of the prominent approaches to Micro Frontends by building a simple pet store e-commerce site. The site has the following aspects (or Microapps, if you will): Home or Search, Cart, Checkout, Product, and Contact Us. We shall only be working on the front-end aspect of the site; you can assume that each Microapp has a dedicated microservice in the backend. You can view the project demo here and the code repository here. Each integration approach has a branch in the repo that you can check out.
Single Page Frontends –
The simplest way (but not the most elegant) to implement Micro Frontends is to treat each Micro Frontend as a single page.
Fig: Single Page Micro Frontends: Each HTML file is a frontend.
<!DOCTYPE html>
<html lang="zxx">
<head>
  <title>The MicroFrontend - eCommerce Template</title>
</head>
<body>
  <header class="header-section header-normal">
    <!-- Header is repeated in each frontend, which is difficult to maintain -->
    ....
  </header>
  <main>
  </main>
  <footer>
    <!-- Footer is repeated in each frontend, which means we have to make multiple changes across all frontends -->
  </footer>
  <script>
    // Cross-cutting features like notification and authentication are replicated in all frontends
  </script>
</body>
</html>
It is one of the purest ways of doing Micro Frontends because no container or stitching element binds the front ends together into an application. Each Micro Frontend is a standalone app with each dependency encapsulated in it and no coupling with the others. The flipside of this approach is that each frontend has a lot of duplication in terms of cross-cutting concerns like headers and footers, which adds redundancy and maintenance burden.
JavaScript Rendering Components (Or Web Components, Custom Element)-
As we saw above, single-page Micro Frontend architecture has its share of drawbacks. To overcome these, we should opt for an architecture that has a container element that builds the context of the app and the cross-cutting concerns like authentication, and stitches all the Micro Frontends together to create a cohesive application.
// A virtual class from which all micro-frontends would extend
class MicroFrontend {
  beforeMount() {
    // do things before the micro front-end mounts
  }
  onChange() {
    // do things when the attributes of a micro front-end change
  }
  render() {
    // html of the micro frontend
    return '<div></div>';
  }
  onDismount() {
    // do things after the micro front-end dismounts
  }
}
class Cart extends MicroFrontend {
  beforeMount() {
    // get previously saved cart from backend
  }
  render() {
    return `<!-- Page -->
      <div class="page-area cart-page spad">
        <div class="container">
          <div class="cart-table">
            <table>
              <thead>
              .....
    `;
  }
  addItemToCart() { ... }
  deleteItemFromCart() { ... }
  applyCouponToCart() { ... }
  onDismount() {
    // save Cart for the user to get back to afterwards
  }
}
<!DOCTYPE html>
<html lang="zxx">
<head>
  <title>PetStore - because Pets love pampering</title>
  <meta charset="UTF-8" />
  <link rel="stylesheet" href="css/style.css" />
</head>
<body>
  <!-- Header section -->
  <header class="header-section">
    ....
  </header>
  <!-- Header section end -->
  <main id="microfrontend">
    <!-- This is where the Micro Frontend gets rendered by the utility renderMicroFrontend.js -->
  </main>
  <!-- Footer section -->
  <footer class="header-section">
    ....
  </footer>
  <!-- Footer section end -->
  <script src="frontends/MicroFrontend.js"></script>
  <script src="frontends/Home.js"></script>
  <script src="frontends/Cart.js"></script>
  <script src="frontends/Checkout.js"></script>
  <script src="frontends/Product.js"></script>
  <script src="frontends/Contact.js"></script>
  <script src="routes.js"></script>
  <script src="renderMicroFrontend.js"></script>
</body>
</html>
function renderMicroFrontend(pathname) {
  const microFrontend = routes[pathname || window.location.hash];
  const root = document.getElementById('microfrontend');
  root.innerHTML = microFrontend ? new microFrontend().render() : new Home().render();
  $(window).scrollTop(0);
}
$(window).bind('hashchange', function (e) {
  renderMicroFrontend(window.location.hash);
});
renderMicroFrontend(window.location.hash);

utility routes.js (a map of the hash route to the Micro Frontend class):

const routes = {
  '#': Home,
  '': Home,
  '#home': Home,
  '#cart': Cart,
  '#checkout': Checkout,
  '#product': Product,
  '#contact': Contact,
};
As you can see, this approach is pretty neat: it encapsulates a base class for Micro Frontends, and all the others extend from it. Notice how all the functionality related to a micro app is encapsulated in its respective Micro Frontend. This ensures that concurrent work on one Micro Frontend doesn't mess up the others.
The same paradigm applies when it comes to Web Components and Custom Elements.
React
With client-side JavaScript frameworks being so popular, it is impossible to leave React out of any front-end discussion. Since React is a component-based JS library, much of what was discussed above also applies to it. I am going to discuss some of the technicalities and challenges of building Micro Frontends with React.
Styling
Since there should be minimal sharing of code between Micro Frontends, styling React components can be challenging, considering the global and cascading nature of CSS. We should make sure styles are scoped to a specific Micro Frontend without spilling over into the others. Inline CSS, CSS-in-JS libraries like Radium, and CSS Modules can all be used with React.
Redux
Using React with Redux is more or less the norm in today's front-end world. The convention is to use Redux as a single global store for the entire app, enabling cross-application communication. But a Micro Frontend should be self-contained, with no external dependencies. Hence each Micro Frontend should have its own Redux store, moving towards a multiple-store architecture.
Other Noteworthy Integration Approaches
Server-side Rendering – One can use a server to assemble Micro Frontend templates before dispatching them to the browser. SSI (Server Side Includes) techniques can be used too.
iframes – Each Micro Frontend can live in its own iframe. iframes also offer a good degree of isolation in terms of styling, and global variables don't interfere with each other.
Summary
Together with Microservices, Micro Frontends promise to bring a lot of benefits when it comes to structuring a complex application and simplifying its development, deployment, and maintenance.
But there is a wonderful saying: "there is no one-size-fits-all approach that anyone can offer you. The same hot water that softens a carrot hardens an egg". Micro Frontends are no silver bullet for your architectural problems and come with their own share of downsides. With more repositories, more tools, more build/deploy pipelines, more servers, and more domains to maintain, Micro Frontends can increase the complexity of an app. They can make cross-application communication difficult to establish, and can also lead to duplicated dependencies and an increase in application size.
Your decision to implement this architecture will depend on many factors, like the size of your organization and the complexity of your application. Whether you have a new or legacy codebase, it is advisable to apply the technique gradually and review its efficacy over time.
There are some new players in town for server programming and this time it’s all about Google. Golang has rapidly been gaining popularity ever since Google started using it for their own production systems. And since the inception of Microservice Architecture, people have been focusing on modern data communication solutions like gRPC along with Protobuf. In this post, I will walk you through each of these briefly.
Golang
Golang, or Go, is an open-source, general-purpose programming language by Google. It has been gaining popularity recently, for all the good reasons. It may come as a surprise to most people that the language is almost 10 years old and has been production-ready for almost 7 years, according to Google.
Golang is designed to be simple, modern, easy to understand, and quick to grasp. Its creators designed it so that an average programmer can have a working knowledge of the language within a weekend, and I can attest that they definitely succeeded. Speaking of the creators, the team includes veterans such as Ken Thompson, who helped create Unix and the B language (the predecessor of C), so we can be assured these people know what they are doing.
That’s all good but why do we need another language?
For most of the use cases, we actually don’t. In fact, Go doesn’t solve any new problems that haven’t been solved by some other language/tool before. But it does try to solve a specific set of relevant problems that people generally face in an efficient, elegant, and intuitive manner. Go’s primary focus is the following:
First-class support for concurrency
An elegant, modern language that is very simple at its core
Very good performance
First-class support for the tooling required for modern software development
I’m going to briefly explain how Go provides all of the above. You can read more about the language and its features in detail from Go’s official website.
Concurrency
Concurrency is one of the primary concerns of most server applications, and it should be a primary concern of the language, considering modern multicore processors. Go introduces a concept called the 'goroutine'. A goroutine is analogous to a lightweight user-space thread. The reality is more complicated, as several goroutines multiplex onto a single OS thread, but that expression should give you the general idea. Goroutines are light enough that you can spin up a million of them simultaneously, as each starts with a very small stack; in fact, that's recommended. Any function or method in Go can be used to spawn a goroutine: you just write `go myAsyncTask()` to spawn one from the `myAsyncTask` function. The following is an example:
```go
// This function performs the given tasks concurrently by spawning a goroutine
// for each of those tasks.
func performAsyncTasks(tasks []Task) {
	for _, task := range tasks {
		// This spawns a separate goroutine to carry out the task.
		// The call is non-blocking.
		go task.Execute()
	}
}
```
Yes, it's that easy, and it is meant to be that way: Go is a simple language, and you are expected to spawn a goroutine for every independent async task without worrying much about it. Go's runtime automatically takes care of running the goroutines in parallel if multiple cores are available. But how do these goroutines communicate? The answer is channels.
'Channel' is also a language primitive, meant for communication among goroutines. You can pass anything through a channel to another goroutine: a primitive Go type, a Go struct, or even another channel. A channel is essentially a blocking queue, bidirectional by default (it can be restricted to one direction too). If you want goroutines to wait for a certain condition to be met before continuing, you can implement cooperative blocking of goroutines with the help of channels.
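To make channels concrete, here is a small, self-contained sketch; the `square` worker and the channel names are my own illustration, not from any library:

```go
package main

import "fmt"

// square reads numbers from 'jobs', squares them, and sends the
// results on 'results'. It runs as its own goroutine.
func square(jobs <-chan int, results chan<- int) {
	for n := range jobs {
		results <- n * n
	}
	close(results)
}

func main() {
	jobs := make(chan int, 3)    // buffered channel of work
	results := make(chan int, 3) // buffered channel of results

	// Spawn the worker goroutine; this call does not block.
	go square(jobs, results)

	for i := 1; i <= 3; i++ {
		jobs <- i
	}
	close(jobs) // signal that no more work is coming

	// Receiving from 'results' blocks until the worker has sent a value,
	// which is how the two goroutines synchronize.
	for r := range results {
		fmt.Println(r)
	}
}
```

The `range` over `results` ends only when the worker closes the channel, so `main` waits for the worker without any explicit locks.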
These two primitives give a lot of flexibility and simplicity in writing asynchronous or parallel code. Other helper libraries like a goroutine pool can be easily created from the above primitives. One basic example is:
```go
package executor

import (
	"log"
	"sync/atomic"
)

// The Executor struct is the main executor for tasks.
// 'maxWorkers' represents the maximum number of simultaneous goroutines.
// 'ActiveWorkers' tells the number of active goroutines spawned by the Executor at a given time.
// 'Tasks' is the channel on which the Executor receives the tasks.
// 'Reports' is the channel on which the Executor publishes every task's report.
// 'signals' is a channel that can be used to control the executor. Right now, only the
// termination signal is supported, which is essentially the client sending '1' on this channel.
type Executor struct {
	maxWorkers    int64
	ActiveWorkers int64
	Tasks         chan Task
	Reports       chan Report
	signals       chan int
}

// NewExecutor creates a new Executor.
// 'maxWorkers' tells the maximum number of simultaneous goroutines.
// The 'signals' channel can be used to control the Executor.
func NewExecutor(maxWorkers int, signals chan int) *Executor {
	chanSize := 1000
	if maxWorkers > chanSize {
		chanSize = maxWorkers
	}
	executor := Executor{
		maxWorkers: int64(maxWorkers),
		Tasks:      make(chan Task, chanSize),
		Reports:    make(chan Report, chanSize),
		signals:    signals,
	}
	go executor.launch()
	return &executor
}

// launch starts the main loop, polling on all the relevant channels and handling the
// different messages.
func (executor *Executor) launch() int {
	reports := make(chan Report, executor.maxWorkers)
	for {
		select {
		case signal := <-executor.signals:
			if executor.handleSignals(signal) == 0 {
				return 0
			}
		case r := <-reports:
			executor.addReport(r)
		default:
			if executor.ActiveWorkers < executor.maxWorkers && len(executor.Tasks) > 0 {
				task := <-executor.Tasks
				atomic.AddInt64(&executor.ActiveWorkers, 1)
				go executor.launchWorker(task, reports)
			}
		}
	}
}

// handleSignals is called whenever anything is received on the 'signals' channel.
// It performs the relevant task according to the received signal (request) and then responds,
// indicating whether the request was respected (0) or rejected (1).
func (executor *Executor) handleSignals(signal int) int {
	if signal == 1 {
		log.Println("Received termination request...")
		if executor.Inactive() {
			log.Println("No active workers, exiting...")
			executor.signals <- 0
			return 0
		}
		executor.signals <- 1
		log.Println("Some tasks are still active...")
	}
	return 1
}

// launchWorker is called whenever a new Task is received and the Executor can spawn more
// workers. Each worker is launched on a new goroutine. It performs the given task and
// publishes the report on the Executor's internal reports channel.
func (executor *Executor) launchWorker(task Task, reports chan<- Report) {
	report := task.Execute()
	if len(reports) < cap(reports) {
		reports <- report
	} else {
		log.Println("Executor's report channel is full...")
	}
	atomic.AddInt64(&executor.ActiveWorkers, -1)
}

// AddTask is used to submit a new task to the Executor in a non-blocking way. The client can
// submit a new task using the Executor's Tasks channel directly, but that will block if the
// channel is full. Note that this method doesn't add the given task if the Tasks channel is
// full, and it is up to the client to try again later.
func (executor *Executor) AddTask(task Task) bool {
	if len(executor.Tasks) == cap(executor.Tasks) {
		return false
	}
	executor.Tasks <- task
	return true
}

// addReport is used by the Executor to publish the reports in a non-blocking way. If the
// client is not reading the Reports channel, or is slower than the Executor publishing the
// reports, the Reports channel is going to get full. In that case this method will not block,
// and that report will not be added.
func (executor *Executor) addReport(report Report) bool {
	if len(executor.Reports) == cap(executor.Reports) {
		return false
	}
	executor.Reports <- report
	return true
}

// Inactive checks if the Executor is idle. This happens when there are no pending tasks,
// no active workers, and no reports to publish.
func (executor *Executor) Inactive() bool {
	return executor.ActiveWorkers == 0 && len(executor.Tasks) == 0 && len(executor.Reports) == 0
}
```
Simple Language
Unlike a lot of other modern languages, Golang doesn't have many features. In fact, a compelling case can be made that the language is too restrictive in its feature set, and that's intentional. It is not designed around a programming paradigm like Java, nor to support multiple programming paradigms like Python. It is bare-bones structured programming: just the essential features, and not a single thing more.
After looking at the language, you may feel that it doesn't follow any particular philosophy or direction; every feature seems to be included to solve a specific problem and nothing more. For example, it has methods and interfaces but no classes; the compiler produces a statically linked binary, yet the language has a garbage collector; it has strict static typing but doesn't support generics. The language has a thin runtime but doesn't support exceptions.
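As a quick sketch of that "methods and interfaces but no classes" point, here is how Go gets polymorphism without any class hierarchy (the `Shape` types below are my own illustration):

```go
package main

import "fmt"

// Shape is an interface; any type with an Area() method satisfies it
// implicitly. There is no 'implements' keyword and no class hierarchy.
type Shape interface {
	Area() float64
}

// Rect is a plain struct; Area is a method attached to it.
type Rect struct{ w, h float64 }

func (r Rect) Area() float64 { return r.w * r.h }

type Circle struct{ radius float64 }

func (c Circle) Area() float64 { return 3.14159 * c.radius * c.radius }

// totalArea works on anything that satisfies Shape.
func totalArea(shapes []Shape) float64 {
	total := 0.0
	for _, s := range shapes {
		total += s.Area()
	}
	return total
}

func main() {
	shapes := []Shape{Rect{w: 2, h: 3}, Circle{radius: 1}}
	fmt.Println(totalArea(shapes)) // ≈ 9.14159
}
```

Interfaces are satisfied structurally, so `Rect` and `Circle` never mention `Shape` at all; that is the whole object model.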
The main idea here is that a developer should spend the least amount of time expressing an idea or algorithm as code, without stopping to think "What's the best way to do this in language X?", and the result should be easy for others to understand. It's still not perfect; it does feel limiting from time to time, and some essential features like generics and exceptions are being considered for Go 2.
Performance
Single-threaded execution performance is not a good metric for judging a language, especially one focused on concurrency and parallelism. Still, Golang posts impressive benchmark numbers, beaten only by hardcore systems programming languages like C, C++, and Rust, and it is still improving. The performance is actually very impressive considering it's a garbage-collected language, and it is good enough for almost every use case.
The adoption of a new tool or language depends directly on its developer experience, and the adoption of Go speaks well of its tooling. The same design ideas apply here: the tooling is minimal but sufficient. Everything is driven by the `go` command and its subcommands, all from the command line.
There is no separate package manager for the language like pip or npm. But you can get any community package by just doing:

go get github.com/velotiotech/WebCrawler/executor
Yes, it works. You can pull packages directly from GitHub or anywhere else; they are just source files.
But what about package.json? There is no equivalent, because you don't need to declare all your dependencies in a single file. You can directly use:
```go
import "github.com/xlab/pocketsphinx-go/sphinx"
```
in your source file itself, and when you do `go build` it will automatically `go get` it for you. This binds the dependency declaration to the source itself.
As you can see by now, the tooling is simple and minimal, yet sufficient and elegant. There is first-class support for both unit tests and benchmarks, with flame graphs too. Like the feature set, it also has its downsides: `go get` doesn't support versions, and you are locked to the import URL in your source file. The ecosystem is evolving, though, and other tools have come up for dependency management.
Golang was originally designed to solve the problems Google faced with their massive codebases and the imperative need for efficient concurrent applications. It makes writing applications and libraries that utilize the multicore nature of modern processors very easy. And it never gets in a developer's way: it's a simple, modern language that never tries to become anything more than that.
Protobuf (Protocol Buffers)
Protobuf, or Protocol Buffers, is a binary communication format by Google, used to serialize structured data. A communication format, kind of like JSON? Yes. It's more than 10 years old, and Google has been using it for a while now.
But don’t we have JSON and it’s so ubiquitous…
Just like Golang, Protobuf doesn't really solve anything new. It just solves existing problems more efficiently and in a modern way, though unlike Golang, it is not necessarily more elegant than the existing solutions. Here are the focus points of Protobuf:
It's a binary format, unlike text-based JSON and XML, and hence vastly more space-efficient.
First-class and sophisticated support for schemas.
First-class support for generating parsing and consumer code in various languages.
Binary format and speed
So are Protobufs really that fast? The short answer is yes. According to Google, they are 3 to 10 times smaller and 20 to 100 times faster than XML. That's no surprise: it is a binary format, so the serialized data is not human-readable.
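The space savings are easy to see even without protobuf itself. The following sketch, my own illustration using only Go's standard library, encodes the same three integers once as JSON text and once as fixed-width binary:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"encoding/json"
	"fmt"
)

// encodeJSON serializes the values as human-readable text,
// complete with field names, brackets, and commas.
func encodeJSON(values []int32) []byte {
	data, _ := json.Marshal(map[string][]int32{"values": values})
	return data
}

// encodeBinary serializes the same values as raw little-endian bytes:
// 4 bytes per int32, with no field names or punctuation.
func encodeBinary(values []int32) []byte {
	var buf bytes.Buffer
	binary.Write(&buf, binary.LittleEndian, values)
	return buf.Bytes()
}

func main() {
	values := []int32{123456, -42, 7}
	j := encodeJSON(values)
	b := encodeBinary(values)
	fmt.Printf("JSON:   %d bytes: %s\n", len(j), j)
	fmt.Printf("binary: %d bytes\n", len(b)) // always 3 * 4 = 12 bytes
}
```

Protobuf's actual wire format is cleverer than this (varints, field tags), but the underlying reason it beats text formats on size is the same.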
Protobufs take a more planned approach. You define `.proto` files which are kind of the schema files but are much more powerful. You essentially define how you want your messages to be structured, which fields are optional or required, their data types etc. After that the protobuf compiler will generate the data access classes for you. You can use these classes in your business logic to facilitate communication.
Looking at a `.proto` file related to a service will also give you a very clear idea of the specifics of the communication and the features that are exposed. A typical .proto file looks like this:
```protobuf
message Person {
  required string name = 1;
  required int32 id = 2;
  optional string email = 3;

  enum PhoneType {
    MOBILE = 0;
    HOME = 1;
    WORK = 2;
  }

  message PhoneNumber {
    required string number = 1;
    optional PhoneType type = 2 [default = HOME];
  }

  repeated PhoneNumber phone = 4;
}
```
Fun fact: Jon Skeet, the king of Stack Overflow, is one of the main contributors to the project.
gRPC
gRPC, as you may have guessed, is a modern RPC (Remote Procedure Call) framework. It is a batteries-included framework with built-in support for load balancing, tracing, health checking, and authentication. It was open-sourced by Google in 2015 and has been gaining popularity ever since.
An RPC framework…? What about REST…?
SOAP with WSDL was used for a long time for communication between different systems in a Service-Oriented Architecture. At the time, contracts were strictly defined, and systems were big and monolithic, exposing a large number of such interfaces.
Then came the concept of 'browsing', where the server and client don't need to be tightly coupled. A client should be able to browse service offerings even if the two were coded independently. If the client asked for information about a book, the service might also offer a list of related books so the client can keep browsing. The REST paradigm was essential to this, as it allows the server and client to communicate freely, without strict restrictions, using a few primitive verbs.
In this model, the service behaves like part of a monolithic system: along with what was actually requested, it does any number of other things to give the client the intended 'browsing' experience. But that is not always the use case, is it?
Enter the Microservices
There are many reasons to adopt a Microservices Architecture, the most prominent being that a monolithic system is very hard to scale. When designing a big system with a Microservices Architecture, each business or technical requirement is intended to be carried out as a cooperative composition of several primitive 'micro' services.
These services don’t need to be comprehensive in their responses. They should perform specific duties with expected responses. Ideally, they should behave like pure functions for seamless composability.
Using REST as the communication paradigm for such services doesn't buy us much. Exposing a REST API does give a service a lot of expressive power, but if that expressive power is neither required nor intended, we can use a paradigm that focuses on other factors instead.
gRPC intends to improve upon the following technical aspects over traditional HTTP requests:
HTTP/2 by default with all its goodies.
Protobuf as the wire format, since machines are doing the talking.
Dedicated support for streaming calls thanks to HTTP/2.
Pluggable auth, tracing, load balancing and health checking because you always need these.
As it's an RPC framework, we again have concepts like service definitions and an Interface Description Language, which may feel alien to people who weren't around before REST. This time, though, it feels a lot less clumsy, because gRPC uses Protobuf for both.
Protobuf is designed in such a way that it can be used as a communication format as well as a protocol specification tool without introducing anything new. A typical gRPC service definition looks like this:
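A minimal, hypothetical definition, modeled on the canonical "helloworld" example from the official gRPC documentation, might look like this:

```protobuf
syntax = "proto3";

// A hypothetical greeting service.
service Greeter {
  // A simple unary RPC: one request message in, one response message out.
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```

Running the Protobuf compiler with the gRPC plugin over this file yields client stubs and server interfaces in your language of choice.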
You just write a `.proto` file for your service, describing each interface name, what it expects, and what it returns as Protobuf messages. The Protobuf compiler then generates both the client-side and server-side code: clients can call it directly, and the server side can implement those APIs to fill in the business logic.
Conclusion
Golang, along with gRPC and Protobuf, is an emerging stack for modern server programming. Golang simplifies building concurrent and parallel applications, while gRPC with Protobuf enables efficient communication with a pleasant developer experience.