GitHub Actions jobs run on GitHub-hosted cloud runners by default; however, sometimes we want to run jobs in our own customized, private environment where we have full control. That is where a self-hosted runner comes in.
If you want a basic understanding of running self-hosted runners on a Kubernetes cluster, this blog is for you.
We’ll be focusing on running GitHub Actions on a self-hosted runner on Kubernetes.
An example use case would be an automation in GitHub Actions that executes MySQL queries against a MySQL database running in a private network (i.e., a MySQL DB that is not publicly accessible).
A self-hosted runner normally requires provisioning and configuring a virtual machine instance; here, we are running it on Kubernetes instead. The actions-runner-controller project makes it possible to run self-hosted runners on a Kubernetes cluster.
This blog aims to try out self-hosted runners on Kubernetes and covers:
Deploying MySQL Database on minikube, which is accessible only within Kubernetes Cluster.
Deploying self-hosted action runners on the minikube.
Running GitHub Action on minikube to execute MySQL queries on MySQL Database.
Steps for completing this tutorial:
Create a GitHub repository
Create a private repository on GitHub. I am creating it with the name velotio/action-runner-poc.
By default, actions-runner-controller uses cert-manager for certificate management of its admission webhook, so we have to make sure cert-manager is installed on the Kubernetes cluster before we install actions-runner-controller.
Run the below helm commands to install cert-manager on minikube.
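A typical installation looks like the following sketch; the chart version is omitted here, so check the cert-manager documentation for the current stable release before installing:

```shell
# Add the Jetstack chart repository and install cert-manager with its CRDs
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
```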
Verify the installation using "kubectl --namespace cert-manager get all". If everything is okay, you will see output like the below:
Setting Up Authentication for Hosted Runners
There are two ways for actions-runner-controller to authenticate with the GitHub API (only 1 can be configured at a time, however):
Using a GitHub App (not supported for enterprise-level runners due to lack of support from GitHub.)
Using a PAT (personal access token)
To keep this blog simple, we are going with PAT.
To authenticate with the GitHub API, actions-runner-controller can use a PAT, which it also uses to register self-hosted runners.
Go to Account > Settings > Developer settings > Personal access tokens. Click "Generate new token". Under scopes, select "Full control of private repositories".
Click on the “Generate token” button.
Copy the generated token and run the below commands to create a Kubernetes secret, which will be used by action-runner-controller deployment.
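As a sketch (the secret name controller-manager and key github_token are the defaults expected by the actions-runner-controller chart; replace <YOUR_PAT> with the token you just generated):

```shell
# Create the namespace and store the PAT as a secret
kubectl create namespace actions-runner-system
kubectl create secret generic controller-manager \
  --namespace actions-runner-system \
  --from-literal=github_token=<YOUR_PAT>

# Install the controller itself (if not already installed)
helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller
helm upgrade --install --namespace actions-runner-system \
  actions-runner-controller actions-runner-controller/actions-runner-controller
```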
Verify that actions-runner-controller installed properly using the below command:
kubectl --namespace actions-runner-system get all
Create a Repository Runner
Create a RunnerDeployment Kubernetes object, which will create a self-hosted runner named k8s-action-runner for the GitHub repository velotio/action-runner-poc.
Please update the repo name from "velotio/action-runner-poc" to your own repository name.
To create the RunnerDeployment object, create the file runner.yaml as follows:
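A minimal runner.yaml might look like the following (the apiVersion is the CRD group installed by actions-runner-controller; the namespace and replica count are illustrative):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: k8s-action-runner
  namespace: actions-runner-system
spec:
  replicas: 2
  template:
    spec:
      repository: velotio/action-runner-poc
```

Apply it with kubectl create -f runner.yaml.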
Check that the pod is running using the below command:
kubectl get pod -n actions-runner-system | grep -i "k8s-action-runner"
If everything goes well, you should see two action runners on Kubernetes, and the same runners registered on GitHub. Check under Settings > Actions > Runners in your repository.
Check the pod with kubectl get po -n actions-runner-system
Install a MySQL Database on the Kubernetes cluster
Create the file mysql-svc-deploy.yaml with the below content:
Here, we have set MYSQL_ROOT_PASSWORD to "password".
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Create the service and deployment
kubectl create -f mysql-svc-deploy.yaml -n mysql
Verify that the MySQL database is running
kubectl get po -n mysql
Create a GitHub repository secret to store MySQL password
We will use the MySQL password in the GitHub Actions workflow file, and as a good practice, we should not use it in plain text. So we will store the MySQL password in GitHub secrets and reference that secret in our workflow file.
Create a secret in the GitHub repository named "MYSQL_PASS" with the value "password".
Create a GitHub workflow file
GitHub workflows are written in YAML syntax. Each workflow lives in a separate YAML file stored in the .github/workflows/ directory. So, create a .github/workflows/ directory in your repository and add a file .github/workflows/mysql_workflow.yaml as follows:
---
name: Example 1
on:
  push:
    branches: [ main ]
jobs:
  build:
    name: Build-job
    runs-on: self-hosted
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: MySQL Query
        env:
          PASS: ${{ secrets.MYSQL_PASS }}
        run: |
          docker run -v ${GITHUB_WORKSPACE}:/var/lib/docker --rm mysql:5.6 sh -c "mysql -u root -p$PASS -h mysql.mysql.svc.cluster.local < /var/lib/docker/test.sql"
If you check the docker run command in the mysql_workflow.yaml file, we are referring to the .sql file, i.e., test.sql. So, create a test.sql file in your repository as follows:
use mysql;
CREATE TABLE IF NOT EXISTS Persons (
    PersonID int,
    LastName varchar(255),
    FirstName varchar(255),
    Address varchar(255),
    City varchar(255)
);
SHOW TABLES;
In test.sql, we run MySQL statements such as CREATE TABLE.
Push changes to your repository main branch.
If everything is fine, you will be able to see that the GitHub action is getting executed in a self-hosted runner pod. You can check it under the “Actions” tab of your repository.
You can check the workflow logs to see the output of the SHOW TABLES command used in test.sql and verify whether the Persons table was created.
As businesses move their data to the public cloud, one of the most pressing issues is how to keep it safe from illegal access.
Using a tool like HashiCorp Vault gives you greater control over your sensitive credentials and fulfills cloud security regulations.
In this blog, we’ll walk you through HashiCorp Vault High Availability Setup.
HashiCorp Vault
HashiCorp Vault is an open-source tool that provides a secure, reliable way to store and distribute sensitive information like API keys, access tokens, passwords, etc. Vault provides high-level policy management, secret leasing, audit logging, and automatic revocation to protect this information via a UI, CLI, or HTTP API.
High Availability
Vault can run in a High Availability mode to protect against outages by running multiple Vault servers. When running in HA mode, Vault servers have two additional states, i.e., active and standby. Within a Vault cluster, only a single instance will be active, handling all requests, and all standby instances redirect requests to the active instance.
Integrated Storage Raft
The Integrated Storage backend is used to maintain Vault’s data. Unlike other storage backends, Integrated Storage does not operate from a single source of data. Instead, all the nodes in a Vault cluster will have a replicated copy of Vault’s data. Data gets replicated across all the nodes via the Raft Consensus Algorithm.
Raft is officially supported by Hashicorp.
Architecture
Prerequisites
This setup requires Vault, Sudo access on the machines, and the below configuration to create the cluster.
Install Vault v1.6.3+ent or later on all nodes in the Vault cluster
In this example, we have 3 CentOS VMs provisioned using VMware.
Setup
1. Verify the Vault version on all the nodes using the below command (in this case, we have 3 nodes node1, node2, node3):
vault --version
2. Configure SSL certificates
Note: Vault should always be used with TLS in production to provide secure communication between clients and the Vault server. It requires a certificate file and key file on each Vault host.
We can generate SSL certs for the Vault Cluster on the Master and copy them on the other nodes in the cluster.
If interested, you can view the systemd unit file, then enable and start the service:
cat /etc/systemd/system/vault.service
systemctl enable vault.service
systemctl start vault.service
systemctl status vault.service
9. Check Vault status on all nodes:
vault status
10. Initialize Vault with the following command on vault node 1 only. Store unseal keys securely.
[user@node1 vault.d]$ vault operator init -key-shares=1 -key-threshold=1
Unseal Key 1: HPY/g5OiT8ivD6L4Bqfjx9L1We2MVb4WZAqKZk6zFf8=
Initial Root Token: hvs.j4qTq1IZP9nscILMtN2p9GE0
Vault initialized with 1 key shares and a key threshold of 1. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 1 of these keys to unseal it
before it can start servicing requests.
Vault does not store the generated root key. Without at least 1 keys to
reconstruct the root key, Vault will remain permanently sealed!
It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.
11. Set the Vault token environment variable so the vault CLI can authenticate to the server. Use the following command, replacing <initial-root-token> with the value generated in the previous step.
export VAULT_TOKEN=<initial-root-token>
echo "export VAULT_TOKEN=$VAULT_TOKEN" >> /root/.bash_profile
### Repeat this step for the other 2 servers.
12. Unseal Vault1 using the unseal key generated in step 10. Notice the Unseal Progress key-value change as you present each key. After meeting the key threshold, the status of the key value for Sealed should change from true to false.
[user@node1 vault.d]$ vault operator unseal HPY/g5OiT8ivD6L4Bqfjx9L1We2MVb4WZAqKZk6zFf8=
Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  false
Total Shares            1
Threshold               1
Version                 1.11.0
Build Date              2022-06-17T15:48:44Z
Storage Type            raft
Cluster Name            POC
Cluster ID              109658fe-36bd-7d28-bf92-f095c77e860c
HA Enabled              true
HA Cluster              https://node1.int.us-west-1-dev.central.example.com:8201
HA Mode                 active
Active Since            2022-06-29T12:50:46.992698336Z
Raft Committed Index    36
Raft Applied Index      36
13. Unseal Vault2 (Use the same unseal key generated in step 10 for Vault1):
[user@node2 vault.d]$ vault operator unseal HPY/g5OiT8ivD6L4Bqfjx9L1We2MVb4WZAqKZk6zFf8=
Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  true
Total Shares            1
Threshold               1
Unseal Progress         0/1
Unseal Nonce            n/a
Version                 1.11.0
Build Date              2022-06-17T15:48:44Z
Storage Type            raft
HA Enabled              true
[user@node2 vault.d]$ vault status
Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  true
Total Shares            1
Threshold               1
Version                 1.11.0
Build Date              2022-06-17T15:48:44Z
Storage Type            raft
Cluster Name            POC
Cluster ID              109658fe-36bd-7d28-bf92-f095c77e860c
HA Enabled              true
HA Cluster              https://node1.int.us-west-1-dev.central.example.com:8201
HA Mode                 standby
Active Node Address     https://node1.int.us-west-1-dev.central.example.com:8200
Raft Committed Index    37
Raft Applied Index      37
14. Unseal Vault3 (Use the same unseal key generated in step 10 for Vault1):
[user@node3 ~]$ vault operator unseal HPY/g5OiT8ivD6L4Bqfjx9L1We2MVb4WZAqKZk6zFf8=
Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  true
Total Shares            1
Threshold               1
Unseal Progress         0/1
Unseal Nonce            n/a
Version                 1.11.0
Build Date              2022-06-17T15:48:44Z
Storage Type            raft
HA Enabled              true
[user@node3 ~]$ vault status
Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  false
Total Shares            1
Threshold               1
Version                 1.11.0
Build Date              2022-06-17T15:48:44Z
Storage Type            raft
Cluster Name            POC
Cluster ID              109658fe-36bd-7d28-bf92-f095c77e860c
HA Enabled              true
HA Cluster              https://node1.int.us-west-1-dev.central.example.com:8201
HA Mode                 standby
Active Node Address     https://node1.int.us-west-1-dev.central.example.com:8200
Raft Committed Index    39
Raft Applied Index      39
15. Check the cluster’s raft status with the following command:
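Assuming VAULT_ADDR and VAULT_TOKEN are set, the raft peer set can be listed with:

```shell
vault operator raft list-peers
```

The output lists each node's address, state (leader or follower), and voter status.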
16. Currently, node1 is the active node. We can experiment to see what happens if node1 steps down from its active node duty.
In the terminal where VAULT_ADDR is set to: https://node1.int.us-west-1-dev.central.example.com, execute the step-down command.
$ vault operator step-down   # equivalent of stopping the node or stopping the systemctl service
Success! Stepped down: https://node2.int.us-west-1-dev.central.example.com:8200
In the terminal where VAULT_ADDR is set to https://node2.int.us-west-1-dev.central.example.com:8200, examine the raft peer set.
The Vault servers are now operational in High Availability mode. We can test request forwarding by writing a secret from either the active or a standby Vault instance and seeing it succeed. We can also shut down the active Vault instance (sudo systemctl stop vault) to simulate a system failure and watch a standby instance assume leadership.
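As a quick sketch of that test (the mount path and secret name are illustrative), run the following from a standby node; the request is forwarded to the active node transparently:

```shell
# Enable a KV secrets engine (once, from any node)
vault secrets enable -path=secret kv

# Write and read back a test secret from a standby node
vault kv put secret/ha-test message="hello"
vault kv get secret/ha-test
```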
Intelligent Revenue Cycle Management (iRCM) is driving a dramatic revolution in the healthcare industry. This innovative approach incorporates advanced technologies like Artificial Intelligence (AI), Machine Learning (ML), and other critical tools to revolutionize revenue cycles. AI and ML are at the forefront of this transformation, playing critical roles in optimizing Revenue Cycles. Beyond the commonly employed AI applications for eligibility/benefits verification and payment estimations, there is a broader scope for AI in RCM.
A research study by Change Healthcare found that approximately two-thirds of healthcare facilities and health systems are leveraging AI in their revenue cycle management (RCM) processes. Among these organizations, 72% use AI for eligibility/benefits verification, and 64% employ it for payment estimations (likely due to the No Surprises Act). However, the role of AI in RCM extends beyond these areas.
The 2022 State of Revenue Integrity survey by the National Association of Healthcare Revenue Integrity highlights other AI-enabled functions in RCM, including charge description master (CDM) maintenance, charge capture, denials management, payer contract management, physician credentialing, and claim auditing, among others.
Shedding Light on the Power of AI and Advanced Technologies in iRCM
In addition to AI, technologies such as natural language processing (NLP), robotic process automation (RPA), and advanced analytics improve Intelligent Revenue Cycle Management (iRCM). NLP enables the extraction of essential insights from unstructured data sources, assisting with precise coding, documentation, and compliance. RPA automates repetitive and rule-based processes, liberating resources and lowering administrative burdens. Advanced analytics provides data-driven insights, empowering healthcare organizations to make informed decisions, identify areas for improvement, and drive continuous optimization.
The integration of artificial intelligence and other technologies into iRCM represents a paradigm change in healthcare financial management. It opens new avenues for healthcare organizations to increase revenue capture, save costs, streamline processes, and improve patient satisfaction.
iRCM: Unleashing the Advantages of Advanced Technologies for A Healthcare System’s Financial Success
iRCM leverages modern technologies to enhance financial success in healthcare. iRCM streamlines and optimizes the revenue cycle processes, leading to improved revenue capture, reduced costs, and enhanced financial outcomes for healthcare organizations. Here’s how Intelligent Revenue Cycle Management unleashes the advantages of advanced technologies like AI, ML, and RPA for healthcare’s financial success:
AI in iRCM
Claims Processing: AI algorithms can analyze and handle enormous amounts of claims data, automate claim validation, and detect trends or abnormalities that may result in denials or mistakes. This can save manual work, enhance accuracy, and shorten the claims processing cycle.
Prioritization and Workflow Optimization: Artificial intelligence can intelligently prioritize tasks based on urgency, complexity, or revenue impact. It can automate work and optimize workflow management, ensuring that high-priority matters are addressed quickly and efficiently.
Intelligent Coding and Documentation: AI techniques like Natural Language Processing (NLP) are vital in analyzing medical records, extracting relevant information, identifying medical concepts, and assisting in coding. AI models trained on large datasets provide intelligent code recommendations, streamlining workflows in revenue cycle management for healthcare organizations.
ML in iRCM:
Predictive Analytics: ML algorithms can analyze historical data to identify trends, patterns, and correlations related to revenue cycle performance. By leveraging this information, organizations can predict potential issues, optimize processes, and make data-driven decisions to improve financial outcomes.
Denial Management: ML can help learn from past denials and identify patterns contributing to claim rejections. This enables proactive measures to be taken, such as improving the documentation or addressing common billing errors, to minimize future denials.
RPA in iRCM:
Automated Data Entry and Extraction: RPA can automate repetitive manual tasks, like data entry from paper documents or extracting information from records. This reduces errors, saves time, and improves efficiency in revenue cycle processes.
Claim Status and Follow-up: RPA bots can automatically retrieve claim status from payer portals, update the system with real-time information, and initiate follow-up actions based on predefined rules. This streamlines the claims management process and enhances revenue capture.
Eligibility Verification: RPA can automate the verification of patient insurance eligibility by extracting relevant information from various sources and validating it against payer databases. This ensures accurate billing and minimizes claim denials.
Revolutionize Your Revenue Cycle: Embrace Intelligent Revenue Cycle Management Powered by Advanced Technologies
iRCM (Intelligent Revenue Cycle Management) represents a cutting-edge approach to optimizing financial success in the healthcare industry. By harnessing the power of advanced technologies like AI, ML, and RPA, iRCM revolutionizes revenue cycle management processes. R Systems, a renowned provider of intelligent Revenue Cycle Management Services, offers comprehensive solutions integrating AI algorithms for intelligent coding and documentation, ML models for predictive analytics, and RPA for streamlined automation.
With R Systems’ innovative approach, healthcare organizations can unlock the full potential of their revenue cycle, ensuring accurate coding, minimizing claim denials, and maximizing revenue capture. By embracing R Systems’ expertise and leveraging intelligent technologies, healthcare providers can experience efficient operations, improved financial outcomes, and elevated standards of patient care in their revenue cycle management.
We will cover the security concepts of Kafka and walk through the implementation of encryption, authentication, and authorization for a Kafka cluster.
This article will explain how to configure SASL_SSL (SASL: Simple Authentication and Security Layer) security for your Kafka cluster and how to protect data in transit. SASL_SSL is a communication type in which clients use authentication mechanisms like PLAIN, SCRAM, etc., and the server uses SSL certificates to establish secure communication. We will use the SCRAM authentication mechanism here to establish mutual authentication between the client and server. We'll also discuss authorization and ACLs, which are important for securing your cluster.
Prerequisites
A running Kafka cluster and a basic understanding of security components.
Need for Kafka Security
The primary reason is to prevent unlawful activities aimed at misuse, modification, disruption, or disclosure of data. To understand a secure Kafka cluster, we need to know three terms:
Authentication – A security method by which servers verify the identity of clients before granting them access to information or services.
Authorization – Implemented along with authentication, authorization lets servers control what an identified client can access. Basically, it grants the limited access that is sufficient for the client.
Encryption – It is the process of transforming data to make it distorted and unreadable without a decryption key. Encryption ensures that no other client can intercept and steal or read data.
Here is the quick start guide by Apache Kafka, so check it out if you still need to set up Kafka.
We'll not cover the theoretical aspects here, but you can find plenty of sources on how these three components work internally. For now, we'll focus on the implementation and how security works in Kafka.
This image illustrates SSL communication between the Kafka client and server.
We are going to implement the steps in the below order:
Create a Certificate Authority
Create a Truststore & Keystore
Certificate Authority – A trusted entity that issues SSL certificates. A CA is an independent third party that issues certificates for use by others and validates the credentials of a person or organization before issuing one.
Truststore – A truststore contains certificates from other parties with which you want to communicate or certificate authorities that you trust to identify other parties. In simple words, a list of CAs that can validate the certificate signed by the trusted CA.
KeyStore – A KeyStore contains private keys and certificates with their corresponding public keys. Keystores can have one or more CA certificates depending upon what’s needed.
For the Kafka server, we need a server certificate, and here the KeyStore comes into the picture, since it stores the server certificate. The server certificate should be signed by a Certificate Authority (CA): a certificate signing request (CSR) is generated from the KeyStore, and in response, the CA sends back a signed certificate that is stored in the KeyStore.
We will create our own certificate authority for demonstration purposes. If you don’t want to create a private certificate authority, there are many certificate providers you can go with, like IdenTrust and GoDaddy. Since we are creating one, we need to tell our Kafka client to trust our private certificate authority using the Trust Store.
This block diagram shows you how all the components communicate with each other and their role to generate the final certificate.
So, let’s create our Certificate Authority. Run the below command in your terminal:
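One common way to do this with openssl is shown below; the file names, validity period, and the -passout passphrase are illustrative (omit -passout to be prompted for a passphrase interactively):

```shell
# Generate a CA private key (encrypted with a passphrase) and a
# self-signed public certificate valid for one year
openssl req -new -x509 -newkey rsa:2048 \
  -keyout private_key_name -out public_certificate_name \
  -days 365 -subj "/CN=Kafka-Demo-CA" -passout pass:changeit
```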
It will ask for a passphrase; keep it safe for future use. After successfully executing the command, we should have two files, named private_key_name and public_certificate_name here.
Now, let’s create a KeyStore and trust store for brokers; we need both because brokers also interact internally with each other. Let’s understand with the help of an example: Broker A wants to connect with Broker B, so Broker A acts as a client and Broker B as a server. We are using the SASL_SSL protocol, so A needs SASL credentials, and B needs a certificate for authentication. The reverse is also possible where Broker B wants to connect with Broker A, so we need both a KeyStore and a trust store for authentication.
Now let’s create a trust store. Execute the below command in the terminal, and it should ask for the password. Save the password for future use:
keytool -keystore <truststore_name.jks> -alias <alias_name> -import -file <public_certificate_name>
Here, we are using the .jks extension for the file, which stands for Java KeyStore. You can also use Public-Key Cryptography Standards #12 (PKCS#12) instead of .jks; that's totally up to you. public_certificate_name is the same certificate we generated when creating the CA.
For the KeyStore configuration, run the below command and store the password:
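A typical keytool invocation is sketched below; the keystore name, alias, passwords, and the -dname fields are illustrative (drop -dname to be prompted for "First and Last Name" and the other fields interactively):

```shell
# Generate the broker's key pair in a new Java KeyStore
keytool -genkeypair -keystore server.keystore.jks -alias localhost \
  -keyalg RSA -keysize 2048 -validity 365 \
  -storepass <keystore_password> -keypass <keystore_password> \
  -dname "CN=localhost, OU=Dev, O=Example, L=Pune, ST=MH, C=IN"
```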
This creates the KeyStore file in the current working directory. The "First and Last Name" question requires a fully qualified domain name, because some certificate authorities, such as VeriSign, expect this property to be an FQDN. Not all CAs require one, but I recommend using a fully qualified domain name for portability. All other information should be valid; if the information cannot be verified, a certificate authority such as VeriSign will not sign the CSR generated for that record. I'm using localhost as the domain name here.
Keystore has an entry with alias_name. It contains the private key and information needed for generating a CSR. Now let’s create a signing certificate request, so it will be used to get a signed certificate from Certificate Authority.
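Assuming the keystore created in the previous step, a CSR can be generated like this (file names are illustrative):

```shell
# Generate a certificate signing request from the keystore entry
keytool -keystore server.keystore.jks -alias localhost \
  -certreq -file cert-request.csr -storepass <keystore_password>
```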
So, we have generated a certificate signing request using the KeyStore (the KeyStore name and alias name should be the same as before). It will ask for the KeyStore password, so enter the same one used while creating the KeyStore.
Now, execute the below command. It will ask for a password; enter the CA passphrase, and we will have a signed certificate:
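With our private CA, the request can be signed using openssl; the file names here are illustrative and match the CA files created earlier:

```shell
# Sign the CSR with the CA key; this produces the signed server certificate
openssl x509 -req -CA public_certificate_name -CAkey private_key_name \
  -in cert-request.csr -out cert-signed.crt \
  -days 365 -CAcreateserial
```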
Finally, we need to add the CA's public certificate and the signed certificate to the KeyStore, so run the below commands. They will add both certificates to the KeyStore.
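Two imports are needed: first the CA certificate (so the chain can be validated), then the signed server certificate under the same alias used to generate the CSR. File names and aliases are illustrative:

```shell
# Import the CA's public certificate
keytool -keystore server.keystore.jks -alias CARoot \
  -import -file public_certificate_name -storepass <keystore_password> -noprompt

# Import the signed server certificate under the original alias
keytool -keystore server.keystore.jks -alias localhost \
  -import -file cert-signed.crt -storepass <keystore_password>
```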
As of now, we have generated all the security files for the broker. For internal broker communication, we are using SASL_SSL (see security.inter.broker.protocol in server.properties). Now we need to create a broker username and password using the SCRAM method. For more details, click here.
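A sketch of creating SCRAM credentials for the inter-broker user with kafka-configs; the user name and password are examples, and newer Kafka versions accept --bootstrap-server in place of --zookeeper:

```shell
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'SCRAM-SHA-256=[password=broker-secret],SCRAM-SHA-512=[password=broker-secret]' \
  --entity-type users --entity-name admin
```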
NOTE: If you are using an external jaas config file, then remove the ScramLoginModule line and set this environment variable before starting broker. “export KAFKA_OPTS=-Djava.security.auth.login.config={path/to/broker.conf}”
Now, if we run Kafka, the broker should come up on port 9092 without any failure. If you have multiple brokers, the same config file can be replicated among them, but the port should be different for each broker.
Producers and consumers need a username and a password to access the broker, so let’s create their credentials and update respective configurations.
Create a producer user and update producer.properties inside the bin directory, so execute the below command in your terminal.
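As a sketch (the user name, password, and file paths are examples; adjust them to your setup):

```shell
# Create SCRAM credentials for the producer
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'SCRAM-SHA-256=[password=producer-secret]' \
  --entity-type users --entity-name producer_user

# Append SASL_SSL client settings to producer.properties
cat >> config/producer.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="producer_user" password="producer-secret";
ssl.truststore.location=/path/to/producer.truststore.jks
ssl.truststore.password=<truststore_password>
EOF
```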
We need a trust store file for our clients (producer and consumer), but as we already know how to create a trust store, this is a small task for you. It is suggested that producers and consumers should have separate trust stores because when we move Kafka to production, there could be multiple producers and consumers on different machines.
As of now, we have implemented encryption and authentication for Kafka brokers. To verify that our producer and consumer are working properly with SCRAM credentials, run the console producer and consumer on some topics.
Authorization is not implemented yet. Kafka uses access control lists (ACLs) to specify which users can perform which actions on specific resources or groups of resources. Each ACL has a principal, a permission type, an operation, a resource type, and a name.
The default authorizer provided by Kafka is AclAuthorizer; Confluent also provides the Confluent Server Authorizer, which is quite different from AclAuthorizer. An authorizer is a server plugin used by Kafka to authorize actions. Specifically, the authorizer controls whether operations should be authorized based on the principal and the resource being accessed.
Format of ACLs – Principal P is [Allowed/Denied] Operation O from Host H on any Resource R matching ResourcePattern RP
Execute the below command to create an ACL with writing permission for the producer:
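A sketch of granting write access to the producer user on a topic; the principal, topic name, and the admin client config file are examples:

```shell
bin/kafka-acls.sh --bootstrap-server localhost:9092 \
  --command-config config/admin.properties \
  --add --allow-principal User:producer_user \
  --operation Write --topic test-topic
```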
Setting authorizer.class.name=kafka.security.authorizer.AclAuthorizer in server.properties indicates that the AclAuthorizer class is used for authorization.
# consumer group id
group.id=<consumer_group_name>
A consumer group id is mandatory; if we do not specify a group, the consumer will not be able to access data from topics. So, provide a group id when starting a consumer.
Let's test the producer and consumer one by one: run the console producer, then run the console consumer in another terminal; both should run without errors.
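For example (the topic name is illustrative, and the group must match one the consumer's ACLs allow):

```shell
# Terminal 1: console producer with the SASL_SSL producer config
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 \
  --topic test-topic --producer.config config/producer.properties

# Terminal 2: console consumer with its own config and a group id
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic test-topic --from-beginning \
  --consumer.config config/consumer.properties --group <consumer_group_name>
```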
Voila!! Your Kafka is secured.
Summary
In a nutshell, we have secured our Kafka cluster using the SASL_SSL mechanism and learned how to create ACLs and grant different permissions to different users.
Apache Kafka is the wild west without security. By default, there is no encryption, authentication, or access control list. Any client can communicate with the Kafka broker using the PLAINTEXT port. Access using this port should be restricted to trusted clients only. You can use network segmentation and/or authentication ACLs to restrict access to trusted IP addresses in these cases. If none of these are used, the cluster is wide open and available to anyone. A basic knowledge of Kafka authentication, authorization, encryption, and audit trails is required to safely move a system into production.
All architectures share one common goal: to manage the complexity of our application. We may not need to worry about it on a smaller project, but it becomes a lifesaver on larger ones. The purpose of Clean Architecture is to minimize code complexity by keeping implementation details from leaking into the business logic.
We must first understand a few things to implement the Clean Architecture in an Android project.
Entities: Encapsulate enterprise-wide critical business rules. An entity can be an object with methods or data structures and functions.
Use cases: Orchestrate the flow of data to and from the entities.
Controllers, gateways, presenters: A set of adapters that convert data from the use cases and entities format to the most convenient way to pass the data to the upper level (typically the UI).
UI, external interfaces, DB, web, devices: The outermost layer of the architecture, generally composed of frameworks such as database and web frameworks.
Here is one rule of thumb we need to follow. First, look at the direction of the arrows in the diagram. Entities do not depend on use cases, use cases do not depend on controllers, and so on. An outer, lower-level layer may depend on an inner, higher-level one, but never the reverse. The dependencies between the layers must point inwards.
Advantages of Clean Architecture:
Strict architecture—hard to make mistakes
Business logic is encapsulated, easy to use, and easy to test
Enforcement of dependencies through encapsulation
Allows for parallel development
Highly scalable
Easy to understand and maintain
Testing is facilitated
Let's understand this using a small case study of an Android project, which gives more practical than theoretical knowledge.
A pragmatic approach
A typical Android project needs to separate concerns between the UI, the business logic, and the data model, so taking "the theory" into account, we decided to split the project into three modules:
Domain Layer: contains the definitions of the business logic of the app, the data models, the abstract definition of repositories, and the definition of the use cases.
Domain Module
Data Layer: This layer provides the abstract definition of all the data sources. Any application can reuse this without modifications. It contains repositories and data sources implementations, the database definition and its DAOs, the network APIs definitions, some mappers to convert network API models to database models, and vice versa.
Data Module
Presentation layer: This is the layer that mainly interacts with the UI. It's Android-specific and contains fragments, view models, adapters, activities, composables, and so on. It also includes a service locator to manage dependencies.
Presentation Module
Marvel’s comic characters App
To elaborate on all the above concepts related to Clean Architecture, we are creating an app that lists Marvel’s comic characters using Marvel’s developer API. The app shows a list of Marvel characters, and clicking on each character will show details of that character. Users can also bookmark their favorite characters. It seems like nothing complicated, right?
Before proceeding further into the sample, it’s good to have an idea of the following frameworks because the example is wholly based on them.
Jetpack Compose – Android’s recommended modern toolkit for building native UI.
Retrofit 2 – A type-safe HTTP client for Android for Network calls.
ViewModel – A class responsible for preparing and managing the data for an activity or a fragment.
Kotlin – Kotlin is a cross-platform, statically typed, general-purpose programming language with type inference.
To get a list of characters, we use Marvel’s developer API, which returns the list of Marvel characters.
http://gateway.marvel.com/v1/public/characters
The domain layer
In the domain layer, we define the data model, the use cases, and the abstract definition of the character repository. The API returns a list of characters, with some info like name, description, and image links.
data class CharacterEntity(
    val id: Long,
    val name: String,
    val description: String,
    val imageUrl: String,
    val bookmarkStatus: Boolean
)
interface MarvelDataRepository {
    suspend fun getCharacters(dataSource: DataSource): Flow<List<CharacterEntity>>
    suspend fun getCharacter(characterId: Long): Flow<CharacterEntity>
    suspend fun toggleCharacterBookmarkStatus(characterId: Long): Boolean
    suspend fun getComics(dataSource: DataSource, characterId: Long): Flow<List<ComicsEntity>>
}
As we said before, the data layer must implement the abstract definition of the domain layer, so we need to put the repository’s concrete implementation in this layer. To do so, we can define two data sources, a “local” data source to provide persistence and a “remote” data source to fetch the data from the API.
class MarvelDataRepositoryImpl(
    private val marvelRemoteService: MarvelRemoteService,
    private val charactersDao: CharactersDao,
    private val comicsDao: ComicsDao,
    private val ioDispatcher: CoroutineDispatcher = Dispatchers.IO
) : MarvelDataRepository {

    override suspend fun getCharacters(dataSource: DataSource): Flow<List<CharacterEntity>> = flow {
        emitAll(
            when (dataSource) {
                is DataSource.Cache -> getCharactersCache()
                    .map { list ->
                        if (list.isEmpty()) {
                            getCharactersNetwork()
                        } else {
                            list.toDomain()
                        }
                    }
                    .flowOn(ioDispatcher)
                is DataSource.Network -> flowOf(getCharactersNetwork())
                    .flowOn(ioDispatcher)
            }
        )
    }

    private suspend fun getCharactersNetwork(): List<CharacterEntity> =
        marvelRemoteService.getCharacters().body()?.data?.results?.let { remoteData ->
            if (remoteData.isNotEmpty()) {
                charactersDao.upsert(remoteData.toCache())
            }
            remoteData.toDomain()
        } ?: emptyList()

    private fun getCharactersCache(): Flow<List<CharacterCache>> =
        charactersDao.getCharacters()

    override suspend fun getCharacter(characterId: Long): Flow<CharacterEntity> =
        charactersDao.getCharacterFlow(id = characterId).map { it.toDomain() }

    override suspend fun toggleCharacterBookmarkStatus(characterId: Long): Boolean {
        val status = charactersDao.getCharacter(characterId)?.bookmarkStatus?.not() ?: false
        return charactersDao.toggleCharacterBookmarkStatus(id = characterId, status = status) > 0
    }

    override suspend fun getComics(
        dataSource: DataSource,
        characterId: Long
    ): Flow<List<ComicsEntity>> = flow {
        emitAll(
            when (dataSource) {
                is DataSource.Cache -> getComicsCache(characterId = characterId)
                    .map { list ->
                        if (list.isEmpty()) {
                            getComicsNetwork(characterId = characterId)
                        } else {
                            list.toDomain()
                        }
                    }
                is DataSource.Network -> flowOf(getComicsNetwork(characterId = characterId))
                    .flowOn(ioDispatcher)
            }
        )
    }

    private suspend fun getComicsNetwork(characterId: Long): List<ComicsEntity> =
        marvelRemoteService.getComics(characterId = characterId)
            .body()?.data?.results?.let { remoteData ->
                if (remoteData.isNotEmpty()) {
                    comicsDao.upsert(remoteData.toCache(characterId = characterId))
                }
                remoteData.toDomain()
            } ?: emptyList()

    private fun getComicsCache(characterId: Long): Flow<List<ComicsCache>> =
        comicsDao.getComics(characterId = characterId)
}
Since we defined a data source to manage persistence, in this layer we also need to define the database; here we use the Room database. In addition, it’s good practice to create some mappers to map the API response to the corresponding database entity.
fun List<Characters>.toCache() = map { character -> character.toCache() }

fun Characters.toCache() = CharacterCache(
    id = id ?: 0,
    name = name ?: "",
    description = description ?: "",
    imageUrl = thumbnail?.let { "${it.path}.${it.extension}" } ?: ""
)

fun List<Characters>.toDomain() = map { character -> character.toDomain() }

fun Characters.toDomain() = CharacterEntity(
    id = id ?: 0,
    name = name ?: "",
    description = description ?: "",
    imageUrl = thumbnail?.let { "${it.path}.${it.extension}" } ?: "",
    bookmarkStatus = false
)
In the presentation layer, we need a UI component such as a fragment, activity, or composable to display the list of characters; here, we use the widely adopted MVVM approach. The view model takes the use cases in its constructor and invokes the corresponding use case according to user actions (get a character, characters & comics, etc.).
Each use case will invoke the appropriate method in the repository.
class CharactersListViewModel(
    private val getCharacters: GetCharactersUseCase,
    private val toggleCharacterBookmarkStatus: ToggleCharacterBookmarkStatus
) : ViewModel() {

    private val _characters = MutableStateFlow<UiState<List<CharacterViewState>>>(UiState.Loading())
    val characters: StateFlow<UiState<List<CharacterViewState>>> = _characters

    init {
        _characters.value = UiState.Loading()
        getAllCharacters()
    }

    private fun getAllCharacters(forceRefresh: Boolean = false) {
        getCharacters(forceRefresh)
            .catch { error ->
                error.printStackTrace()
                when (error) {
                    is UnknownHostException,
                    is ConnectException,
                    is SocketTimeoutException -> _characters.value = UiState.NoInternetError(error)
                    else -> _characters.value = UiState.ApiError(error)
                }
            }
            .map { list -> _characters.value = UiState.Loaded(list.toViewState()) }
            .launchIn(viewModelScope)
    }

    fun refresh(showLoader: Boolean = false) {
        if (showLoader) {
            _characters.value = UiState.Loading()
        }
        getAllCharacters(forceRefresh = true)
    }

    fun bookmarkCharacter(characterId: Long) {
        viewModelScope.launch {
            toggleCharacterBookmarkStatus(characterId = characterId)
        }
    }
}
As you can see, each layer communicates only with the closest one, keeping the inner layers independent of the outer ones. This way, we can quickly test each module separately, and the separation of concerns helps developers collaborate on the different modules of the project.
With the evolving architectural design of web applications, microservices have become a successful trend in architecting the application landscape. Along with these advancements in application architecture, transport protocols such as REST and gRPC keep improving in efficiency and speed. Containerizing microservice applications also helps greatly with agile development and high-speed delivery.
In this blog, I will try to showcase how simple it is to build a cloud-native application on the microservices architecture using Go.
We will break the solution into multiple steps. We will learn how to:
1) Build a set of containerized microservices, each with a very specific set of independent tasks related only to its own logical component.
2) Use go-kit as the framework for developing and structuring the components of each service.
3) Build APIs that will use HTTP (REST) and Protobuf (gRPC) as the transport mechanisms, PostgreSQL for databases and finally deploy it on Azure stack for API management and CI/CD.
Note: Deployment, setting up the CI-CD and API-Management on Azure or any other cloud is not in the scope of the current blog.
Prerequisites:
A beginner’s level of understanding of web services, Rest APIs and gRPC
GoLand/ VS Code
Properly installed and configured Go. If not, check it out here
Set up a new project directory under the GOPATH
Understanding of the standard Golang project. For reference, visit here
PostgreSQL client installed
Go kit
What are we going to do?
We will develop a simple web application working on the following problem statement:
A global publishing company that publishes books and journals wants to develop a service to watermark their documents. A document (book or journal) has a title, author, and watermark property.
The watermark operation can be in Started, InProgress and Finished status
The specific set of users should be able to do the watermark on a document
Once the watermark is done, the document can never be re-marked
Example of a document:
{ "content": "book", "title": "The Dark Code", "author": "Bruce Wayne", "topic": "Science" }
For a detailed understanding of the requirement, please refer to this.
Architecture:
In this project, we will have 3 microservices: Authentication Service, Database Service and the Watermark Service. We have a PostgreSQL database server and an API-Gateway.
Authentication Service:
The application is supposed to have a role-based and user-based access control mechanism. This service authenticates the user according to their specific role and returns only HTTP status codes: 200 when the user is authorized and 401 for unauthorized users.
APIs:
/user/access, Method: GET, Secured: True, payload: user: <name>. It takes the user name as input, and the auth service returns the roles and privileges assigned to it.
/authenticate, Method: GET, Secured: True, payload: user: <name>, operation: <op>. It authenticates the user for the passed operation, checking whether the operation is accessible for the user’s role.
/healthz, Method: GET, Secured: True. It returns the status of the service.
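The decision the auth service makes is simple to sketch. Below is a minimal, self-contained illustration, with a hypothetical in-memory permission table standing in for the user database (the real service would read roles from PostgreSQL); the map contents and function names are illustrative, not taken from the repo:

```go
package main

import (
	"fmt"
	"net/http"
)

// rolePermissions is a hypothetical in-memory stand-in for the user database;
// the real service would query PostgreSQL for the user's roles instead.
var rolePermissions = map[string]map[string]bool{
	"alice": {"watermark-book": true},
	"bob":   {"watermark-journal": true},
}

// authorized reports whether the user's role permits the requested operation.
func authorized(user, operation string) bool {
	return rolePermissions[user][operation]
}

// statusFor mirrors the contract above: 200 for authorized users, 401 otherwise.
func statusFor(user, operation string) int {
	if authorized(user, operation) {
		return http.StatusOK
	}
	return http.StatusUnauthorized
}

func main() {
	fmt.Println(statusFor("alice", "watermark-book")) // 200
	fmt.Println(statusFor("eve", "watermark-book"))   // 401
}
```

An HTTP handler wrapping statusFor would simply write the returned code to the response, which is all the /authenticate endpoint contract requires.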
Database Service:
We will need databases for our application to store the user, their roles and the access privileges to that role. Also, the documents will be stored in the database without the watermark. It is a requirement that any document cannot have a watermark at the time of creation. A document is said to be created successfully only when the data inputs are valid and the database service returns the success status.
We will use a separate database for each of the two services that consume one. This design is not strictly necessary; it simply follows the “Single Database per Service” rule of microservice architecture.
APIs:
/get, Method: GET, Secured: True, payload: filters: []filter{“field-name”: “value”}. It returns the list of documents matching the specified filters.
/update, Method: POST, Secured: True, payload: title: <id>, document: {“field”: “value”, …}. It updates the document for the given title-id.
/add, Method: POST, Secured: True, payload: document: {“field”: “value”, …}. It adds the document and returns the title-id.
/remove, Method: POST, Secured: True, payload: title: <id>. It removes the document entry matching the passed title-id.
/healthz, Method: GET, Secured: True. It returns the status of the service.
Watermark Service:
This is the main service that performs the API calls to watermark the passed document. Every time a user wants to watermark a document, they pass the TicketID in the watermark API request along with the appropriate Mark. The service internally calls the database Update API with the provided request and returns the status of the watermark process: initially “Started”, then after some time “InProgress”, and finally “Finished” if the call was valid, or “Error” if the request was not valid.
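The status lifecycle described above, together with the requirement that a finished document can never be re-marked, is easy to capture in code. Here is a small sketch of the transition rule; the type and constant names mirror the blog's description and are assumptions, not the repo's exact internal.Status definitions:

```go
package main

import (
	"errors"
	"fmt"
)

// Status models the watermark lifecycle described above; the names are
// assumed from the blog's description, not the repo's exact types.
type Status int

const (
	None Status = iota // no watermark operation requested yet (assumed zero state)
	Started
	InProgress
	Finished
	Error
)

// canWatermark enforces two rules from the problem statement: a finished
// document can never be re-marked, and an in-flight job is not restarted.
func canWatermark(s Status) error {
	switch s {
	case Finished:
		return errors.New("document already watermarked; it can never be re-marked")
	case Started, InProgress:
		return errors.New("watermark operation already in progress")
	default:
		return nil
	}
}

func main() {
	fmt.Println(canWatermark(None))     // <nil>
	fmt.Println(canWatermark(Finished)) // non-nil error
}
```

The real service would run this check before calling the database Update API, returning “Error” to the caller whenever the check fails.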
APIs:
/get, Method: GET, Secured: True, payload: filters: []filter{“field-name”: “value”}. It returns the list of documents matching the specified filters.
/status, Method: GET, Secured: True, payload: ticket: <id>. It returns the status of the watermark operation on the document for the passed ticket-id.
/addDocument, Method: POST, Secured: True, payload: document: {“field”: “value”, …}. It adds the document and returns the title-id.
/watermark, Method: POST, Secured: True, payload: title: <id>, mark: “string”. It is the main watermark operation API, which accepts the mark string.
/healthz, Method: GET, Secured: True. It returns the status of the service.
Operations and Flow:
Watermark Service APIs are the only ones that will be used by the user/actor to request watermark or add the document. Authentication and Database service APIs are the private ones that will be called by other services internally. The only URL accessible to the user is the API Gateway URL.
The user will access the API Gateway URL with the required user name, the ticket-id and the mark with which the user wants the document to apply watermark
The user should not know about the authentication or database services
Once the request is made by the user, it will be accepted by the API Gateway. The gateway will validate the request along with the payload
An API forwarding rule, which routes the traffic of a specific request to a service, should be defined in the gateway. Once validated, the request is forwarded to the service according to that rule.
We will define an API forwarding rule where the request made for any watermark will be first forwarded to the authentication service which will authenticate the request, check for authorized users and return the appropriate status code.
The authentication service will look up the requesting user in the user database, along with its roles and permissions, and send the response accordingly
Once the request has been authorized by the service, it will be forwarded back to the actual watermark service
The watermark service then performs the appropriate operation: putting the watermark on the document, adding a new document entry, or handling any other request
The operation from the watermark service of Get, Watermark or AddDocument will be performed by calling the database CRUD APIs and forwarded to the user
If the request is AddDocument, the service should return the “TicketID”; if it is a watermark request, it should return the status of the operation
Note:
Each user will have some specific roles, based on which the access controls will be identified for the user. For the sake of simplicity, the roles will be based on the type of document only, not the specific name of the book or journal
Getting Started:
Let’s start by creating a folder for our application in the $GOPATH. This will be the root folder containing our set of services.
Project Layout:
The project will follow the standard Golang project layout. If you want the full working code, please refer here
api: Stores the versions of the APIs swagger files and also the proto and pb files for the gRPC protobuf interface.
cmd: This will contain the entry point (main.go) files for all the services, and for any other container images, if any
docs: This will contain the documentation for the project
config: All the sample files or any specific configuration files should be stored here
deploy: This directory will contain the deployment files used to deploy the application
internal: This is the conventional internal package recognized by the Go compiler. It contains all the packages that need to remain private, importable only from within the directory tree rooted at its parent. The packages in this directory are shared across the project
pkg: This directory will have the complete executing code of all the services in separate packages.
tests: It will have all the integration and E2E tests
vendor: This directory stores all the third-party dependencies locally so that the version doesn’t mismatch later
We are going to use the Go kit framework for developing the set of services. The official Go kit examples of services are very good, though the documentation is not that great.
Watermark Service:
1. Under the Go kit framework, a service should always be represented by an interface.
Create a package named watermark in the pkg folder. Create a new service.go file in that package. This file is the blueprint of our service.
package watermark

import (
	"context"

	"github.com/velotiotech/watermark-service/internal"
)

type Service interface {
	// Get the list of all documents
	Get(ctx context.Context, filters ...internal.Filter) ([]internal.Document, error)
	Status(ctx context.Context, ticketID string) (internal.Status, error)
	Watermark(ctx context.Context, ticketID, mark string) (int, error)
	AddDocument(ctx context.Context, doc *internal.Document) (string, error)
	ServiceStatus(ctx context.Context) (int, error)
}
2. As per the functions defined in the interface, we will need five endpoints to handle requests for the above methods. If you are wondering why we are using the context package, please refer here. Contexts let microservices handle multiple concurrent requests cleanly; we don’t lean on them heavily in this blog, but accepting a context is simply the idiomatic way to write service methods.
3. Implementing our service:
package watermark

import (
	"context"
	"net/http"
	"os"

	"github.com/velotiotech/watermark-service/internal"

	"github.com/go-kit/kit/log"
	"github.com/lithammer/shortuuid/v3"
)

type watermarkService struct{}

func NewService() Service { return &watermarkService{} }

func (w *watermarkService) Get(_ context.Context, filters ...internal.Filter) ([]internal.Document, error) {
	// query the database using the filters and return the list of documents
	// return an error if the filter (key) is invalid, and also if no item is found
	doc := internal.Document{
		Content: "book",
		Title:   "Harry Potter and Half Blood Prince",
		Author:  "J.K. Rowling",
		Topic:   "Fiction and Magic",
	}
	return []internal.Document{doc}, nil
}

func (w *watermarkService) Status(_ context.Context, ticketID string) (internal.Status, error) {
	// query the database using the ticketID and return the document info
	// return an error if the ticketID is invalid or no document exists for that ticketID
	return internal.InProgress, nil
}

func (w *watermarkService) Watermark(_ context.Context, ticketID, mark string) (int, error) {
	// update the database entry with a non-empty watermark field
	// first check that the watermark status is not already in the InProgress,
	// Started, or Finished state; if it is, return an invalid-request error
	// return an error if no item is found for the ticketID
	return http.StatusOK, nil
}

func (w *watermarkService) AddDocument(_ context.Context, doc *internal.Document) (string, error) {
	// add the document entry in the database by calling the database service
	// return an error if the doc is invalid and/or the database rejects the entry
	newTicketID := shortuuid.New()
	return newTicketID, nil
}

func (w *watermarkService) ServiceStatus(_ context.Context) (int, error) {
	logger.Log("Checking the Service health...")
	return http.StatusOK, nil
}

var logger log.Logger

func init() {
	logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
	logger = log.With(logger, "ts", log.DefaultTimestampUTC)
}
We have defined the new type watermarkService, an empty struct that implements the service interface defined above. This struct implementation is hidden from the rest of the world.
NewService() is created as the constructor of our “object”. This is the only function available outside this package to instantiate the service.
4. Now we will create the endpoints package which will contain two files. One is where we will store all types of requests and responses. The other file will be endpoints which will have the actual implementation of the requests parsing and calling the appropriate service function.
– Create a file named reqJSONMap.go. We will define all the requests and responses struct with the fields in this file such as GetRequest, GetResponse, StatusRequest, StatusResponse, etc. Add the necessary fields in these structs which we want to have input in a request or we want to pass the output in the response.
In this file, we have a struct Set, which is the collection of all the endpoints, along with a constructor for it. We also have internal constructor functions, such as MakeGetEndpoint() and MakeStatusEndpoint(), which return functions implementing go-kit’s generic endpoint.Endpoint type.
In order to expose the Get, Status, Watermark, ServiceStatus and AddDocument APIs, we need to create endpoints for all of them. These functions handle the incoming requests and call the specific service methods
5. Adding the transport methods to expose the services. Our services will be exposed over HTTP as REST APIs and over gRPC using protobuf.
Create a separate package of transport in the watermark directory. This package will hold all the handlers, decoders and encoders for a specific type of transport mechanism
6. Create a file http.go: This file will have the transport functions and handlers for HTTP with a separate path as the API routes.
This file maps the JSON payloads to their requests and responses. It contains the HTTP handler constructor, which registers the API routes to the specific handler functions (endpoints), together with the decoders and encoders of requests and responses, in a server object per route. The decoders and encoders simply translate requests and responses into the desired form for processing; in our case, we use the JSON encoder and decoder to convert them into the appropriate request and response structs.
We have the generic encoder for the response output, which is a simple JSON encoder.
7. Create another file in the same transport package with the name grpc.go. Similar to above, the name of the file is self-explanatory. It is the map of protobuf payload to their requests and responses. We create a gRPC handler constructor which will create the set of grpcServers and registers the appropriate endpoint to the decoders and encoders of the request and responses
– Before moving on to the implementation, we have to create a proto file that defines our service interface and the request/response structs, so that the protobuf (.pb) files can be generated and used as the interface over which services communicate.
– Create package pb in the api/v1 package path. Create a new file watermarksvc.proto. Firstly, we will create our service interface, which represents the remote functions to be called by the client. Refer to this for syntax and deep understanding of the protobuf.
We translate the Go service interface into a service definition in the proto file. We also recreate the request and response structs in the proto file so that they can be understood by the RPCs defined in the service.
Note: Creating the proto files and generating the pb files using protoc is not the scope of this blog. We have assumed that you already know how to create a proto file and generate a pb file from it. If not, please refer protobuf and protoc gen
I have also created a script to generate the pb file, which just needs the path with the name of the proto file.
#!/usr/bin/env sh
# Install proto3 from source
#   brew install autoconf automake libtool
#   git clone https://github.com/google/protobuf
#   ./autogen.sh ; ./configure ; make ; make install
#
# Update protoc Go bindings via
#   go get -u github.com/golang/protobuf/{proto,protoc-gen-go}
#
# See also
#   https://github.com/grpc/grpc-go/tree/master/examples

REPO_ROOT="${REPO_ROOT:-$(cd "$(dirname "$0")/../.." && pwd)}"
PB_PATH="${REPO_ROOT}/api/v1/pb"
PROTO_FILE=${1:-"watermarksvc.proto"}

echo "Generating pb files for ${PROTO_FILE} service"
protoc -I="${PB_PATH}" "${PB_PATH}/${PROTO_FILE}" --go_out=plugins=grpc:"${PB_PATH}"
8. Now, once the pb file is generated in api/v1/pb/watermark package, we will create a new struct grpcserver, grouping all the endpoints for gRPC. This struct should implement pb.WatermarkServer which is the server interface referred by the services.
To implement these services, we define functions such as func (g *grpcServer) Get(ctx context.Context, r *pb.GetRequest) (*pb.GetReply, error). Each such function takes the request param, runs the ServeGRPC() function, and returns the response; we implement the remaining functions the same way.
These functions are the actual Remote Procedures to be called by the service.
We will also need to add the decode and encode functions for the request and response structs from protobuf structs. These functions will map the proto Request/Response struct to the endpoint req/resp structs. For example: func decodeGRPCGetRequest(_ context.Context, grpcReq interface{}) (interface{}, error). This will assert the grpcReq to pb.GetRequest and use its fields to fill the new struct of type endpoints.GetRequest{}. The decoding and encoding functions should be implemented similarly for the other requests and responses.
9. Finally, we just have to create the entry point (main) files in cmd for each service. We have already mapped the appropriate routes to the endpoints that call the service functions, and mapped the proto service server to the endpoints via the ServeGRPC() functions; now we call the HTTP and gRPC server constructors here and start them.
Create a package watermark in the cmd directory and create a file watermark.go which will hold the code to start and stop the HTTP and gRPC server for the service
package main

import (
	"fmt"
	"net"
	"net/http"
	"os"
	"os/signal"
	"syscall"

	pb "github.com/velotiotech/watermark-service/api/v1/pb/watermark"
	"github.com/velotiotech/watermark-service/pkg/watermark"
	"github.com/velotiotech/watermark-service/pkg/watermark/endpoints"
	"github.com/velotiotech/watermark-service/pkg/watermark/transport"

	"github.com/go-kit/kit/log"
	kitgrpc "github.com/go-kit/kit/transport/grpc"
	"github.com/oklog/oklog/pkg/group"
	"google.golang.org/grpc"
)

const (
	defaultHTTPPort = "8081"
	defaultGRPCPort = "8082"
)

func main() {
	var (
		logger   log.Logger
		httpAddr = net.JoinHostPort("localhost", envString("HTTP_PORT", defaultHTTPPort))
		grpcAddr = net.JoinHostPort("localhost", envString("GRPC_PORT", defaultGRPCPort))
	)

	logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
	logger = log.With(logger, "ts", log.DefaultTimestampUTC)

	var (
		service     = watermark.NewService()
		eps         = endpoints.NewEndpointSet(service)
		httpHandler = transport.NewHTTPHandler(eps)
		grpcServer  = transport.NewGRPCServer(eps)
	)

	var g group.Group
	{
		// The HTTP listener mounts the Go kit HTTP handler we created.
		httpListener, err := net.Listen("tcp", httpAddr)
		if err != nil {
			logger.Log("transport", "HTTP", "during", "Listen", "err", err)
			os.Exit(1)
		}
		g.Add(func() error {
			logger.Log("transport", "HTTP", "addr", httpAddr)
			return http.Serve(httpListener, httpHandler)
		}, func(error) {
			httpListener.Close()
		})
	}
	{
		// The gRPC listener mounts the Go kit gRPC server we created.
		grpcListener, err := net.Listen("tcp", grpcAddr)
		if err != nil {
			logger.Log("transport", "gRPC", "during", "Listen", "err", err)
			os.Exit(1)
		}
		g.Add(func() error {
			logger.Log("transport", "gRPC", "addr", grpcAddr)
			// We add the Go kit gRPC interceptor to our gRPC service, as it
			// is used by e.g. the zipkin tracing middleware.
			baseServer := grpc.NewServer(grpc.UnaryInterceptor(kitgrpc.Interceptor))
			pb.RegisterWatermarkServer(baseServer, grpcServer)
			return baseServer.Serve(grpcListener)
		}, func(error) {
			grpcListener.Close()
		})
	}
	{
		// This function just sits and waits for ctrl-C.
		cancelInterrupt := make(chan struct{})
		g.Add(func() error {
			c := make(chan os.Signal, 1)
			signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
			select {
			case sig := <-c:
				return fmt.Errorf("received signal %s", sig)
			case <-cancelInterrupt:
				return nil
			}
		}, func(error) {
			close(cancelInterrupt)
		})
	}
	logger.Log("exit", g.Run())
}

func envString(env, fallback string) string {
	e := os.Getenv(env)
	if e == "" {
		return fallback
	}
	return e
}
Let’s walk through the above code. First, we use fixed default ports for the servers to listen on: 8081 for the HTTP server and 8082 for the gRPC server. Then, in these code stubs, we create the service, the endpoints of the service backend, and the HTTP and gRPC servers.
service     = watermark.NewService()
eps         = endpoints.NewEndpointSet(service)
grpcServer  = transport.NewGRPCServer(eps)
httpHandler = transport.NewHTTPHandler(eps)
Now the next step is interesting. We create a variable of type group.Group from oklog. If you are new to this term, please refer here. Group helps you elegantly manage a group of goroutines. We create three goroutines: one for the HTTP server, a second for the gRPC server, and the last one to watch for cancel interrupts, as shown in the main function above.
Similarly, we will start a gRPC server and a cancel interrupt watcher. Great!! We are done here. Now, let’s run the service.
go run ./cmd/watermark/watermark.go
The server has started locally. Now, just open Postman or run curl against one of the endpoints. We ran the HTTP server to check the service status.
We have successfully created a service and ran the endpoints.
Further:
I always like to round a project out with the maintenance pieces around it: a proper README, .gitignore, .dockerignore, Makefile, Dockerfiles, golangci-lint config files, CI/CD config files, etc.
I have created a separate Dockerfile for each of the three services in path /images/.
I have created a multi-stage Dockerfile to build the service binary and run it. We copy the appropriate code directories into the Docker image and build it all in one stage, then create a new image in the same file and copy the binary into it from the previous stage. The Dockerfiles for the other services are created similarly.
In the Dockerfile, we have set the CMD to go run watermark; this command is the entry point of the container. I have also created a Makefile which has two main targets: build-image to build the image and build-push to push it.
Note: I am keeping this blog concise as it is difficult to cover all the things. The code in the repo that I have shared in the beginning covers most of the important concepts around services. I am still working and continue committing improvements and features.
Let’s see how we can deploy:
We will see how to deploy all these services on a container orchestration tool (e.g., Kubernetes). This assumes you have worked with Kubernetes before, at least at a beginner’s level.
In deploy dir, create a sample deployment having three containers: auth, watermark and database. Since for each container, the entry point commands are already defined in the dockerfiles, we don’t need to send any args or cmd in the deployment.
We will also need a Service to route external request traffic, via a LoadBalancer or NodePort type service. To make it work for now, we can create a NodePort type service to expose the watermark-service.
Another important and very interesting part is deploying the API Gateway. This requires at least some knowledge of a cloud provider stack. I have used the Azure stack to deploy an API Gateway using the resource called “API Management” in the Azure plane. Refer to the rules config files for the Azure APIM API gateway:
Further, only a proper CI/CD setup remains, which is one of the most essential parts of a project after development. I would like to discuss all this deployment-related work in more detail, but that is not in the scope of the current blog. Maybe I will post another blog for it.
Wrapping up:
We have learned how to build a complete project with three microservices in Golang using one of the best distributed-system development frameworks: Go kit. We also used a PostgreSQL database via GORM, which is used heavily in the Go community. We did not stop at development: we also covered, at least in outline, the development lifecycle of the project by understanding what, how, and where to deploy.
We created one microservice completely from scratch. Go kit makes it very simple to express the relationships between endpoints, service implementations, and the communication/transport mechanisms. Now, go ahead and try to create the other services from the problem statement.
A serverless architecture is a way to implement and run applications, services, or microservices without needing to manage infrastructure. Your application still runs on servers, but all of the server management is done by AWS. We no longer need to provision, scale, or maintain servers to run our applications, databases, and storage systems. These managed services let developers focus on their applications instead of building the underlying infrastructure from scratch.
Why Serverless
More focus on development rather than managing servers.
Cost Effective.
Application which scales automatically.
Quick application setup.
Services for Serverless
Multiple cloud vendors provide services for implementing a serverless architecture, though we will be exploring mostly services from AWS. Following are the services we can use, depending on the application's requirements.
Lambda: It is used to write business logic / schedulers / functions.
S3: It is mostly used for storing objects but it also gives the privilege to host WebApps. You can host a static website on S3.
API Gateway: It is used for creating, publishing, maintaining, monitoring and securing REST and WebSocket APIs at any scale.
Cognito: It provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a username and password or through third parties such as Facebook, Amazon, or Google.
DynamoDB: It is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
Three-tier Serverless Architecture
So, let’s take a use case in which you want to develop a three-tier serverless application. The three-tier architecture is a popular pattern for user-facing applications. The tiers that comprise the architecture are the presentation tier, the logic tier, and the data tier. The presentation tier represents the components that users directly interact with, such as a web page or mobile app UI. The logic tier contains the code required to translate user actions at the presentation tier into the functionality that drives the application’s behaviour. The data tier consists of your storage media (databases, file systems, object stores) that hold the data relevant to the application. The figure shows a simple three-tier application.
Figure: Simple Three-Tier Architectural Pattern
Presentation Tier
The presentation tier represents the view part of the application. Here you can use S3 to host a static website. On a static website, individual web pages include static content, and they may also contain client-side scripting.
The following is a quick procedure to configure an Amazon S3 bucket for static website hosting in the S3 console.
To configure an S3 bucket for static website hosting
1. Log in to the AWS Management Console and open the S3 console at
2. In the Bucket name list, choose the name of the bucket that you want to enable static website hosting for.
3. Choose Properties.
4. Choose Static Website Hosting
Once you enable your bucket for static website hosting, browsers can access all of your content through the Amazon S3 website endpoint for your bucket.
5. Choose Use this bucket to host.
A. For Index Document, type the name of your index document, which is typically named index.html. When you configure an S3 bucket for website hosting, you must specify an index document, which S3 returns when requests are made to the root domain or any of the subfolders.
B. (Optional) For 4XX errors, you can provide your own custom error document that gives additional guidance to your users. Type the name of the file that contains the custom error document; if an error occurs, S3 returns this document.
C. (Optional) If you want to add advanced redirection rules, in the Edit redirection rules text box, use XML to describe the rules. E.g.:
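For example, a rule of the following shape redirects requests for one key prefix to another; the prefixes here are placeholders, not values from this tutorial:

```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>docs/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyPrefixWith>documents/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```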
7. Add a bucket policy to the website bucket that grants everyone access to the objects in the bucket. When you configure an S3 bucket as a website, you must make the objects that you want to serve publicly readable. To do so, you write a bucket policy that grants everyone the s3:GetObject permission. The following bucket policy grants everyone access to the objects in the example-bucket bucket.
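That policy has the following shape; the bucket name example-bucket comes from the text above, so substitute your own bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```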
Note: If you choose Disable Website Hosting, S3 removes the website configuration from the bucket, so the bucket is no longer accessible from the website endpoint, but it is still available at the REST endpoint.
Logic Tier
The logic tier represents the brains of the application, and it is where using the two core serverless services, API Gateway and Lambda, can be so revolutionary. The features of these two services allow you to build a serverless production application that is highly scalable, available, and secure. Your application could use hundreds of servers; however, by leveraging this pattern, you do not have to manage a single one. In addition, by using these managed services together, you get the following benefits:
No operating system to choose, secure or manage.
No servers to right-size or monitor.
No risk to your cost by over-provisioning.
No risk to your performance by under-provisioning.
API Gateway
API Gateway is a fully managed service for defining, deploying, and maintaining APIs. Anyone can integrate with the APIs using standard HTTPS requests. However, it has specific features and qualities that give it an edge as your logic tier.
Integration with Lambda
API Gateway gives your application a simple way to leverage AWS Lambda directly over HTTPS requests. API Gateway forms the bridge that connects your presentation tier and the functions you write in Lambda. After defining the client/server relationship using your API, the contents of the client's HTTPS requests are passed to the Lambda function for execution. The contents include request metadata, request headers, and the request body.
API Performance Across the Globe
Each deployment of API Gateway includes an Amazon CloudFront distribution under the covers. Amazon CloudFront is a content delivery web service that uses Amazon's global network of edge locations as connection points for clients integrating with your API. This helps drive down the total response latency of your API. Through its use of multiple edge locations across the world, Amazon CloudFront also provides capabilities to combat distributed denial of service (DDoS) attack scenarios.
You can improve the performance of specific API requests by using API Gateway to store responses in an optional in-memory cache. This not only provides performance benefits for repeated API requests, but also reduces backend executions, which can reduce overall cost.
Let’s dive into each step
1. Create a Lambda Function. Log in to the AWS Console, head over to the Lambda service, and click on “Create a function”.
A. Choose the first option, “Author from scratch”.
B. Enter the function name.
C. Select a runtime, e.g., Python 2.7.
D. Click on “Create Function”.
Once your function is ready, you will see a basic function generated in the language you chose. E.g.:
import json

def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
2. Testing Lambda Function
Click on the “Test” button at the top right corner, where we need to configure a test event. As we are not sending any events, just give the event a name, keep the “Hello World” template as it is, and click “Create”.
Now, when you hit the “Test” button again, it runs through testing the function we created earlier and returns the configured value.
Create & Configure API Gateway connecting to Lambda
We are done with creating the Lambda function, but how do we invoke the function from the outside world? We need an endpoint, right?
Go to API Gateway, click on “Get Started”, and agree to create the example API; however, we will not use that API. Instead, create a “New API”. Give it a name, keeping the “Endpoint Type” regional for now.
Create the API and you will land on the “Resources” page of the created API Gateway. Go through the following steps:
A. Click on “Actions”, then click on “Create Method”. Select the GET method for our function, then click the tick mark on the right side of “GET” to set it up.
B. Choose “Lambda Function” as the integration type.
C. Choose the region where we created the function earlier.
D. Enter the name of the Lambda function we created.
E. Save the method; it will ask you to confirm “Add Permission to Lambda Function”. Agree to that, and it is done.
F. Now we can test our setup. Click on “Test” to run the API. It should give the response text we saw on the Lambda test screen.
Now, to get the endpoint, we need to deploy the API. On the Actions dropdown, click on Deploy API under API Actions. Fill in the details of the deployment and hit Deploy.
After that, we will get our HTTPS endpoint.
On the above screen, you can see settings like caching, throttling, and logging, which can be configured. Save the changes and browse the invoke URL, from which we will get the response we were earlier getting from Lambda. And with that, the logic tier of our serverless application is done.
Data Tier
By using Lambda as your logic tier, you have a number of data storage options for your data tier. These options fall into two broad categories: Amazon VPC-hosted data stores and IAM-enabled data stores. Lambda can integrate with both securely.
Amazon VPC Hosted Data Stores
Amazon RDS
Amazon ElastiCache
Amazon Redshift
IAM-Enabled Data Stores
Amazon DynamoDB
Amazon S3
Amazon Elasticsearch Service
You can use any of these for storage, but DynamoDB is one of the best suited for serverless applications.
Why DynamoDB ?
It is a NoSQL DB that is fully managed by AWS.
It provides fast and predictable performance with seamless scalability.
DynamoDB lets you offload the administrative burden of operating and scaling a distributed system.
It offers encryption at rest, which eliminates the operational burden and complexity involved in protecting sensitive data.
You can scale your tables' throughput capacity up or down without downtime or performance degradation.
It provides On-Demand backups as well as enable point in time recovery for your DynamoDB tables.
DynamoDB can delete expired items from a table automatically, helping you reduce storage usage and the cost of storing data that is no longer relevant.
Following is the sample script for DynamoDB with Python which you can use with lambda.
from __future__ import print_function  # Python 2/3 compatibility
import boto3
import json
import decimal
from boto3.dynamodb.conditions import Key, Attr
from botocore.exceptions import ClientError

# Helper class to convert a DynamoDB item to JSON.
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            if o % 1 > 0:
                return float(o)
            else:
                return int(o)
        return super(DecimalEncoder, self).default(o)

dynamodb = boto3.resource("dynamodb", region_name='us-west-2',
                          endpoint_url="http://localhost:8000")
table = dynamodb.Table('Movies')

title = "The Big New Movie"
year = 2015

try:
    response = table.get_item(
        Key={
            'year': year,
            'title': title
        }
    )
except ClientError as e:
    print(e.response['Error']['Message'])
else:
    item = response['Item']
    print("GetItem succeeded:")
    print(json.dumps(item, indent=4, cls=DecimalEncoder))
Note: To run the above script successfully, you need to attach a policy to your Lambda role. In this case, you need to attach a policy for the DynamoDB operations, and for CloudWatch if you want to store your logs. Following is a policy you can attach to your role for the DB executions.
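A policy along the following lines would cover the DynamoDB read in the script plus CloudWatch logging. The table name Movies comes from the script above; narrow or broaden the actions and resources to match your own operations:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/Movies"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```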
You can implement the following popular architecture patterns using API Gateway and Lambda as your logic tier, Amazon S3 for the presentation tier, and DynamoDB as your data tier. For each example, we will only use AWS services that do not require users to manage their own infrastructure.
Mobile Backend
1. Presentation Tier: A mobile application running on each user’s smartphone.
2. Logic Tier: API Gateway & Lambda. The logic tier is globally distributed by the Amazon CloudFront distribution created as part of each API Gateway API. A set of Lambda functions can be specific to user/device identity management and authentication, managed by Amazon Cognito, which provides integration with IAM for temporary user access credentials as well as with popular third-party identity providers. Other Lambda functions can define the core business logic for your mobile back end.
3. Data Tier: The various data storage services can be leveraged as needed; options are given above in data tier.
Amazon S3 Hosted Website
1. Presentation Tier: Static website content hosted on S3, distributed by Amazon CloudFront. Hosting static website content on S3 is a cost-effective alternative to hosting content on server-based infrastructure. However, for a website to offer rich features, the static content often must integrate with a dynamic back end.
2. Logic Tier: API Gateway & Lambda. Static web content hosted in S3 can directly integrate with API Gateway, which can be CORS compliant.
3. Data Tier: The various data storage services can be leveraged based on your requirement.
Serverless Costing
At the top of the AWS invoice, we can see the total cost of the AWS services. The bill covers 2.1 million API requests and all of the infrastructure required to support them.
Following is the list of services with their costing.
Note: You can get your own cost estimates from the AWS Calculator using the following links:
The three-tier architecture pattern encourages the best practice of creating application components that are easy to maintain, develop, decouple, and scale. Which serverless services you use will vary based on your application's requirements.
In this blog, we will try to understand Istio and its YAML configurations. You will also learn why Istio is great for managing traffic and how to set it up using Google Kubernetes Engine (GKE). I’ve also shed some light on deploying Istio in various environments and applications like intelligent routing, traffic shifting, injecting delays, and testing the resiliency of your application.
What is Istio?
Istio’s website describes it as “an open platform to connect, manage, and secure microservices”.
As a network of microservices, known as a ‘service mesh’, grows in size and complexity, it can become tougher to understand and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring, and often more complex operational requirements such as A/B testing, canary releases, rate limiting, access control, and end-to-end authentication. Istio claims to provide a complete end-to-end solution to these problems.
Why Istio?
Provides automatic load balancing for various protocols like HTTP, gRPC, WebSocket, and TCP traffic. It means you can cater to the needs of web services and also frameworks like Tensorflow (it uses gRPC).
To control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions.
To gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues etc.
Let’s explore the architecture of Istio.
Istio’s service mesh is split logically into two components:
Data plane – a set of intelligent proxies (Envoy) deployed as sidecars alongside each microservice; they control communication between microservices.
Control plane – manages and configures proxies to route traffic. It also enforces policies.
Envoy – Istio uses an extended version of Envoy, an L7 proxy and communication bus designed for large, modern service-oriented architectures, written in C++. It manages inbound and outbound traffic for the service mesh.
Enough of theory, now let us setup Istio to see things in action. A notable point is that Istio is pretty fast. It’s written in Go and adds a very tiny overhead to your system.
Setup Istio on GKE
You can either set up Istio via the command line or via the UI. We have used the command-line installation for this blog.
Sample Book Review Application
Following this link, you can easily deploy the sample Bookinfo application.
The Bookinfo application is broken into four separate microservices:
productpage. The productpage microservice calls the details and reviews microservices to populate the page.
details. The details microservice contains book information.
reviews. The reviews microservice contains book reviews. It also calls the ratings microservice.
ratings. The ratings microservice contains book ranking information that accompanies a book review.
There are 3 versions of the reviews microservice:
Version v1 doesn’t call the ratings service.
Version v2 calls the ratings service and displays each rating as 1 to 5 black stars.
Version v3 calls the ratings service and displays each rating as 1 to 5 red stars.
The end-to-end architecture of the application is shown below.
If everything goes well, you will have a web app like this (served at http://GATEWAY_URL/productpage):
Let’s take a case where 50% of traffic is routed to v1 and the remaining 50% to v3.
This is how the config file looks (/path/to/istio-0.2.12/samples/bookinfo/kube/route-rule-reviews-50-v3.yaml):
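For reference, a 50/50 weighted rule in the Istio 0.2.x syntax looks roughly like this (a sketch modeled on the bundled sample, not copied from it):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-50-v3
spec:
  destination:
    name: reviews
  route:
    # half of the traffic goes to pods labeled version: v1
    - labels:
        version: v1
      weight: 50
    # the other half goes to pods labeled version: v3
    - labels:
        version: v3
      weight: 50
```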
Istio provides a simple Domain-specific language (DSL) to control how API calls and layer-4 traffic flow across various services in the application deployment.
In the above configuration, we are adding a “route rule”. It means we will be routing the incoming traffic to the given destination. The destination is the name of the service to which the traffic is being routed, and the route labels identify the specific service instances that will receive the traffic.
In this Kubernetes deployment of Istio, the route label “version: v1” and “version: v3” indicates that only pods containing the label “version: v1” and “version: v3” will receive 50% traffic each.
Now multiple route rules could be applied to the same destination. The order of evaluation of rules corresponding to a given destination, when there is more than one, can be specified by setting the precedence field of the rule.
The precedence field is an optional integer value, 0 by default. Rules with higher precedence values are evaluated first. If there is more than one rule with the same precedence value the order of evaluation is undefined.
When is precedence useful? Whenever the routing for a particular service is purely weight-based, it can be specified in a single rule and precedence does not matter. When content-based routing is combined with weight-based routing, however, multiple rules apply to the same destination, and precedence determines the order in which they are evaluated.
Once a rule is found that applies to the incoming request, it will be executed and the rule-evaluation process will terminate. That’s why it’s very important to carefully consider the priorities of each rule when there is more than one.
In short, it means route label “version: v1” is given preference over route label “version: v3”.
Intelligent Routing Using Istio
We will demonstrate an example in which we will be aiming to get more control over routing the traffic coming to our app. Before reading ahead, make sure that you have installed Istio and book review application.
First, we will set a default version for all microservices.
Then wait a few seconds for the rules to propagate to all pods before attempting to access the application. This sets the default route to the v1 version (which doesn’t call the rating service). Now suppose we want a specific user, say Velotio, to see the v2 version. We write a YAML file (test-velotio.yaml):
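A sketch of what that rule could look like in the Istio 0.2.x syntax; the cookie regex matches the user set at login, and the precedence value ensures it wins over the default route (treat the exact values as illustrative):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-test-velotio
spec:
  destination:
    name: reviews
  precedence: 2        # evaluated before the default (precedence 0/1) rule
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=velotio)(;.*)?$"
  route:
    - labels:
        version: v2
```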
Now, if any other user logs in, they won’t see any ratings (they will see the v1 version), but when the “velotio” user logs in, they will see the v2 version!
This is how we can intelligently do content-based routing. We used Istio to send 100% of the traffic to the v1 version of each of the Bookinfo services. We then set a rule to selectively send traffic to version v2 of the reviews service based on a header (i.e., a user cookie) in the request.
Traffic Shifting
Now Let’s take a case in which we have to shift traffic from an old service to a new service.
We can use Istio to gradually transfer traffic from one microservice to another. For example, we could move 10%, 20%, 25%, and so on up to 100% of the traffic. For the simplicity of this blog, we will move traffic from reviews:v1 to reviews:v3 in two steps: 40%, then 100%.
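The first step (40% to reviews:v3) could be expressed as a weighted rule like this; again, a hedged sketch in the Istio 0.2.x syntax rather than the exact sample file:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-shift-40
spec:
  destination:
    name: reviews
  route:
    - labels:
        version: v1
      weight: 60
    - labels:
        version: v3
      weight: 40   # second step: raise this to 100 and drop the v1 entry
```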
Now, refresh the productpage in your browser, and you should see red-colored star ratings approximately 40% of the time. Once that is stable, we transfer all of the traffic to v3.
Inject Delays and Test the Resiliency of Your Application
Here we will check fault injection using HTTP delay. To test our Bookinfo application microservices for resiliency, we will inject a 7s delay between the reviews:v2 and ratings microservices, for user “Jason”. Since the reviews:v2 service has a 10s timeout for its calls to the ratings service, we expect the end-to-end flow to continue without any errors.
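In the Istio 0.2.x syntax, that fault-injection rule could be sketched as follows; the field values follow the text above, and the cookie regex is illustrative:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ratings-test-delay
spec:
  destination:
    name: ratings
  precedence: 2
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=jason)(;.*)?$"
  httpFault:
    delay:
      percent: 100     # delay every matching request
      fixedDelay: 7s   # the 7-second delay from the text
  route:
    - labels:
        version: v1
```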
> istioctl get routerule ratings-test-delay -o yaml
Now we allow several seconds to account for rule propagation delay to all pods. Log in as user “Jason”. If the application’s front page was set to correctly handle delays, we expect it to load within approximately 7 seconds.
Conclusion
In this blog, we only explored the routing capabilities of Istio. We found that Istio gives us a good amount of control over routing, fault injection, etc., in microservices. Istio has a lot more to offer, like load balancing and security. We encourage you to toy around with Istio and tell us about your experiences.
After spending a couple of years in JavaScript development, I’ve realized how incredibly important design patterns are, in modern JavaScript (ES6). And I’d love to share my experience and knowledge on the subject, hoping you’d make this a critical part of your development process as well.
Note: All the examples covered in this post are implemented with ES6 features, but you can also integrate the design patterns with ES5.
At Velotio, we always follow best practices to achieve highly maintainable and more robust code. And we are strong believers of using design patterns as one of the best ways to write clean code.
In the post below, I’ve listed the most useful design patterns I’ve implemented so far and how you can implement them too:
1. Module
The module pattern simply allows you to keep units of code cleanly separated and organized.
Modules promote encapsulation, which means the variables and functions are kept private inside the module body and can’t be overwritten.
// usage
import { sum } from 'modules/sum';
const result = sum(20, 30); // 50
ES6 also allows us to export the module as default. The following example gives you a better understanding of this.
// All the variables and functions which are not exported are private within
// the module and cannot be used outside. Only the exported members are public
// and can be used by importing them.

// Here businessList is a private member of the city module
const businessList = new WeakMap();

// City uses the businessList member as it's in the same module
class City {
  constructor() {
    businessList.set(this, ['Pizza Hut', 'Dominos', 'Street Pizza']);
  }

  // public method to access the private 'businessList'
  getBusinessList() {
    return businessList.get(this);
  }

  // public method to add a business to 'businessList'
  addBusiness(business) {
    businessList.get(this).push(business);
  }
}

// export the City class as the default module
export default City;
// usage
import City from 'modules/city';
const city = new City();
city.getBusinessList();
There is a great article written on the features of ES6 modules here.
2. Factory
Imagine creating a notification management application that currently only supports notifications through email, so most of the code lives inside the EmailNotification class. Now there is a new requirement to support push notifications. To implement PushNotification, you have to do a lot of work because your application is tightly coupled to EmailNotification, and you would repeat the same work for every future channel.
To solve this complexity, we will delegate the object creation to another object called factory.
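Here is a minimal sketch of that factory; the class and method names are illustrative, not from a real notification library:

```javascript
// Concrete notification types share a common `send` interface
class EmailNotification {
  send(message) {
    return `email sent: ${message}`;
  }
}

class PushNotification {
  send(message) {
    return `push sent: ${message}`;
  }
}

// The factory hides which concrete class gets instantiated, so callers
// depend only on the factory and the `send` interface
class NotificationFactory {
  create(type) {
    switch (type) {
      case 'push':
        return new PushNotification();
      case 'email':
      default:
        return new EmailNotification();
    }
  }
}

// usage
const factory = new NotificationFactory();
const notification = factory.create('push');
notification.send('New order received'); // 'push sent: New order received'
```

Adding a new channel now means adding one class and one `case`; none of the calling code changes.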
3. Observer
The observer pattern maintains a list of subscribers so that whenever an event occurs, it notifies them. An observer can also remove a subscriber if the subscriber no longer wishes to be notified.
On YouTube, the channels we're subscribed to notify us whenever a new video is uploaded.
// Publisher
class Video {
  constructor(observable, name, content) {
    this.observable = observable;
    this.name = name;
    this.content = content;
    // publish the 'video-uploaded' event
    this.observable.publish('video-uploaded', {
      name,
      content,
    });
  }
}

// Subscriber
class User {
  constructor(observable) {
    this.observable = observable;
    this.intrestedVideos = [];
    // subscribe with the event name and the callback function
    this.observable.subscribe('video-uploaded', this.addVideo.bind(this));
  }

  addVideo(video) {
    this.intrestedVideos.push(video);
  }
}

// Observer
class Observable {
  constructor() {
    this.handlers = {};
  }

  subscribe(event, handler) {
    this.handlers[event] = this.handlers[event] || [];
    this.handlers[event].push(handler);
  }

  publish(event, eventData) {
    const eventHandlers = this.handlers[event];
    if (eventHandlers) {
      for (let i = 0, l = eventHandlers.length; i < l; ++i) {
        eventHandlers[i].call({}, eventData);
      }
    }
  }
}

// usage
const observable = new Observable();
const user = new User(observable);
const video = new Video(observable, 'ES6 Design Patterns', videoFile);
4. Mediator
The mediator pattern provides a unified interface through which different components of an application can communicate with each other.
If a system appears to have too many direct relationships between components, it may be time to have a central point of control that components communicate through instead.
The mediator promotes loose coupling.
A real-time analogy could be a traffic light signal that handles which vehicles can go and stop, as all the communications are controlled from a traffic light.
Let’s create a chatroom (mediator) through which the participants can register themselves. The chatroom is responsible for handling the routing when the participants chat with each other.
// each participant is represented by a Participant object
class Participant {
  constructor(name) {
    this.name = name;
    this.chatroom = null;
  }

  getParticipantDetails() {
    return this.name;
  }

  // delegate sending to the mediator (chatroom)
  send(message, to) {
    this.chatroom.send(message, this, to);
  }

  // receive messages routed by the chatroom
  receive(message, from) {
    console.log(`${from.name} to ${this.name}: ${message}`);
  }
}

// Mediator
class Chatroom {
  constructor() {
    this.participants = {};
  }

  register(participant) {
    this.participants[participant.name] = participant;
    participant.chatroom = this;
  }

  send(message, from, to) {
    if (to) {
      // single message
      to.receive(message, from);
    } else {
      // broadcast message to everyone except the sender
      for (const key in this.participants) {
        if (this.participants[key] !== from) {
          this.participants[key].receive(message, from);
        }
      }
    }
  }
}

// usage
// Create two participants
const john = new Participant('John');
const snow = new Participant('Snow');

// Register the participants to the Chatroom
const chatroom = new Chatroom();
chatroom.register(john);
chatroom.register(snow);

// Participants now chat with each other through the chatroom
john.send('Hey, Snow!');
john.send('Are you there?');
snow.send('Hey man', john);
snow.send('Yes, I heard that!');
5. Command
In the command pattern, an operation is wrapped as a command object and passed to the invoker object. The invoker object passes the command to the corresponding object, which executes the command.
The command pattern decouples the objects executing the commands from the objects issuing them, encapsulating actions as objects. It maintains a stack of commands: whenever a command is executed, it is pushed onto the stack. To undo a command, it pops the command from the stack and performs the reverse action.
You can consider a calculator as a command that performs addition, subtraction, division and multiplication, and each operation is encapsulated by a command object.
// The list of operations that can be performed
const addNumbers = (num1, num2) => num1 + num2;
const subNumbers = (num1, num2) => num1 - num2;
const multiplyNumbers = (num1, num2) => num1 * num2;
const divideNumbers = (num1, num2) => num1 / num2;

// CalculatorCommand is initialized with an execute function, an undo function,
// and the value
class CalculatorCommand {
  constructor(execute, undo, value) {
    this.execute = execute;
    this.undo = undo;
    this.value = value;
  }
}

// Here we are creating factory functions for the command objects
const DoAddition = (value) => new CalculatorCommand(addNumbers, subNumbers, value);
const DoSubtraction = (value) => new CalculatorCommand(subNumbers, addNumbers, value);
const DoMultiplication = (value) => new CalculatorCommand(multiplyNumbers, divideNumbers, value);
const DoDivision = (value) => new CalculatorCommand(divideNumbers, multiplyNumbers, value);

// AdvancedCalculator maintains the list of executed commands so that the
// last command can be undone
class AdvancedCalculator {
  constructor() {
    this.current = 0;
    this.commands = [];
  }

  execute(command) {
    this.current = command.execute(this.current, command.value);
    this.commands.push(command);
  }

  undo() {
    const command = this.commands.pop();
    this.current = command.undo(this.current, command.value);
  }

  getCurrentValue() {
    return this.current;
  }
}

// usage
const advCal = new AdvancedCalculator();

// invoke commands (DoAddition etc. are factory functions, so no `new` here)
advCal.execute(DoAddition(50)); // 50
advCal.execute(DoSubtraction(25)); // 25
advCal.execute(DoMultiplication(4)); // 100
advCal.execute(DoDivision(2)); // 50

// undo the last command
advCal.undo();
advCal.getCurrentValue(); // 100
6. Facade
The facade pattern is used when we want to provide a higher level of abstraction and hide the complexity of a large codebase behind a simple interface.
A great example of this pattern is used in the common DOM manipulation libraries like jQuery, which simplifies the selection and events adding mechanism of the elements.
Though it seems simple on the surface, there is an entire complex logic implemented when performing the operation.
The following Account Creation example gives you clarity about the facade pattern:
// Here AccountManager is responsible for creating a new account of type
// Savings or Current with a unique account number
let currentAccountNumber = 0;

class AccountManager {
  createAccount(type, details) {
    const accountNumber = AccountManager.getUniqueAccountNumber();
    let account;
    if (type === 'current') {
      account = new CurrentAccounts();
    } else {
      account = new SavingsAccount();
    }
    return account.addAccount({ accountNumber, details });
  }

  static getUniqueAccountNumber() {
    return ++currentAccountNumber;
  }
}

// class Accounts maintains the list of all accounts created
class Accounts {
  constructor() {
    this.accounts = [];
  }

  addAccount(account) {
    this.accounts.push(account);
    return this.successMessage(account);
  }

  getAccount(accountNumber) {
    return this.accounts.find((account) => account.accountNumber === accountNumber);
  }

  successMessage(account) {}
}

// CurrentAccounts extends Accounts to provide a more specific success message
// on successful account creation
class CurrentAccounts extends Accounts {
  constructor() {
    super();
    if (CurrentAccounts.exists) {
      return CurrentAccounts.instance;
    }
    CurrentAccounts.instance = this;
    CurrentAccounts.exists = true;
    return this;
  }

  successMessage({ accountNumber, details }) {
    return `Current Account created with ${details}. ${accountNumber} is your account number.`;
  }
}

// Likewise, SavingsAccount extends Accounts to provide a more specific
// success message on successful account creation
class SavingsAccount extends Accounts {
  constructor() {
    super();
    if (SavingsAccount.exists) {
      return SavingsAccount.instance;
    }
    SavingsAccount.instance = this;
    SavingsAccount.exists = true;
    return this;
  }

  successMessage({ accountNumber, details }) {
    return `Savings Account created with ${details}. ${accountNumber} is your account number.`;
  }
}

// usage
// Here we are hiding the complexities of creating an account
const accountManager = new AccountManager();
const currentAccount = accountManager.createAccount('current', { name: 'John Snow', address: 'pune' });
const savingsAccount = accountManager.createAccount('savings', { name: 'Petter Kim', address: 'mumbai' });
7. Adapter
The adapter pattern converts the interface of a class to another expected interface, making two incompatible interfaces work together.
A typical use case for the adapter pattern: you need to display data from a 3rd-party library as a bar chart, but the format of the 3rd-party API response differs from the format the bar chart expects. An adapter can convert the 3rd-party API response into Highcharts' bar representation.
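Here is a minimal sketch of such an adapter. The 3rd-party API shape and the Highcharts-style target format below are assumptions for illustration, not a real library's API:

```javascript
// Hypothetical 3rd-party API: returns sales figures keyed by month name.
const thirdPartyApi = {
  fetchSales() {
    return { Jan: 120, Feb: 90, Mar: 150 };
  },
};

// Hypothetical chart: expects { categories: [...], series: [{ data: [...] }] },
// roughly the shape a Highcharts bar chart consumes.
class BarChart {
  render({ categories, series }) {
    return `Bars for ${categories.join(', ')}: ${series[0].data.join(', ')}`;
  }
}

// The adapter converts the API response into the format the chart expects,
// so the two incompatible interfaces can work together.
class ChartAdapter {
  constructor(api) {
    this.api = api;
  }

  getChartData() {
    const sales = this.api.fetchSales();
    return {
      categories: Object.keys(sales),
      series: [{ data: Object.values(sales) }],
    };
  }
}

// usage
const chart = new BarChart();
const adapter = new ChartAdapter(thirdPartyApi);
console.log(chart.render(adapter.getChartData()));
// → Bars for Jan, Feb, Mar: 120, 90, 150
```

Neither the API nor the chart knows about the other; only the adapter knows both formats, which keeps the conversion logic in one place.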
This has been a brief introduction to design patterns in modern JavaScript (ES6). The subject is massive, but hopefully this article has shown you the benefits of applying these patterns when writing code.
Thinking about using GraphQL but unsure where to start?
This is a concise tutorial based on our experience using GraphQL. You will learn how to use GraphQL in a Flutter app, including how to create a query, a mutation, and a subscription using the graphql_flutter plugin. Once you’ve mastered the fundamentals, you can move on to designing your own workflow.
Key topics and takeaways:
* GraphQL
* What is graphql_flutter?
* Setting up graphql_flutter and GraphQLProvider
* Queries
* Mutations
* Subscriptions
GraphQL
Tired of calling multiple endpoints to populate data for a single screen? Wish you had more control over the data returned by an endpoint? Wouldn't it be nice if a single call returned only the data fields you actually need?
Follow along to learn how to do this with GraphQL. GraphQL’s goal was to change the way data is supplied from the backend, and it allows you to specify the data structure you want.
Let’s imagine that we have the table model in our database that looks like this:
Movie {
title
genre
rating
year
}
These fields represent the properties of the Movie model:
title is the name of the movie,
genre describes what kind of movie it is,
rating represents viewers' ratings,
year states when it was released.
We can get movies like this using REST:
GET localhost:8080/movies
[
  {
    "title": "The Godfather",
    "genre": "Drama",
    "rating": 9.2,
    "year": 1972
  }
]
As you can see, whether or not we need them, REST returns all of the properties of each movie. In our frontend, we may just need the title and genre properties, yet all of them were returned.
We can avoid redundancy by using GraphQL. We can specify the properties we wish to be returned using GraphQL, for example:
query movies {
  Movie {
    title
    genre
  }
}
We’re informing the server that we only require the movie table’s title and genre properties. It provides us with exactly what we require:
{
  "data": [
    {
      "title": "The Godfather",
      "genre": "Drama"
    }
  ]
}
GraphQL is a backend technology, whereas Flutter is a frontend SDK for developing mobile apps. We get the data displayed on the mobile app from a backend when we use mobile apps.
It’s simple to create a Flutter app that retrieves data from a GraphQL backend. Simply make an HTTP request from the Flutter app, then use the returned data to set up and display the UI.
The new graphql_flutter plugin includes APIs and widgets for retrieving and using data from GraphQL backends.
What is graphql_flutter?
graphql_flutter, as the name suggests, is a GraphQL client for Flutter. It exports widgets and providers for retrieving data from GraphQL backends, such as:
HttpLink — This is used to specify the backend’s endpoint or URL.
GraphQLClient — This class is used to retrieve a query or mutation from a GraphQL endpoint as well as to connect to a GraphQL server.
GraphQLCache — We use this class to cache our queries and mutations. It takes a store option, which specifies the type of store used during caching.
GraphQLProvider — This widget wraps the graphql_flutter widgets, allowing them to perform queries and mutations. The GraphQL client is given to this widget, and all widgets in the provider's tree have access to that client.
Query — This widget is used to perform a backend GraphQL query.
Mutation — This widget is used to modify a GraphQL backend.
Subscription — This widget allows you to create a subscription.
Setting up graphql_flutter and GraphQLProvider
Create a Flutter project:
flutter create flutter_graphql
cd flutter_graphql
Next, install the graphql_flutter package:
flutter pub add graphql_flutter
The code above will set up the graphql_flutter package. This will include the graphql_flutter package in the dependencies section of your pubspec.yaml file:
dependencies:
  graphql_flutter: ^5.0.0
To use the widgets, we must import the package as follows:
import 'package:graphql_flutter/graphql_flutter.dart';
Before we can start making GraphQL queries and mutations, we must first wrap our root widget in GraphQLProvider. A GraphQLClient instance must be provided to the GraphQLProvider’s client property.
GraphQLProvider(
  client: GraphQLClient(...),
)
The GraphQLClient includes the GraphQL server URL as well as a caching mechanism.
HttpLink holds the URL of the GraphQL server. The GraphQLClient receives the HttpLink instance via its link property, which points at the GraphQL endpoint.
The cache passed to GraphQLClient specifies the caching mechanism to be used. GraphQLCache persists cached entries using an in-memory store.
A GraphQLClient instance is passed to a ValueNotifier. A ValueNotifier holds a single value and notifies its listeners when that value changes. graphql_flutter uses this to notify its widgets when the data from a GraphQL endpoint changes, which keeps the UI responsive.
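Putting these pieces together, the client setup looks roughly like this. This is a sketch: the endpoint URL is a placeholder, and the exact store type may vary with your graphql_flutter version:

```dart
// The endpoint URL below is a placeholder; replace it with your server's URL.
final HttpLink httpLink = HttpLink('https://api.example.com/graphql');

// Wrap the client in a ValueNotifier so graphql_flutter's widgets are
// notified when the client's data changes.
final ValueNotifier<GraphQLClient> client = ValueNotifier(
  GraphQLClient(
    link: httpLink,
    // Cache queries and mutations in an in-memory store.
    cache: GraphQLCache(store: InMemoryStore()),
  ),
);
```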
We’ll now encase our MaterialApp widget in GraphQLProvider:
void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return GraphQLProvider(
      client: client,
      child: MaterialApp(
        title: 'GraphQL Demo',
        theme: ThemeData(primarySwatch: Colors.blue),
        home: MyHomePage(title: 'GraphQL Demo'),
      ),
    );
  }
}
Queries
We’ll use the Query widget to create a query with the graphql_flutter package.
class MyHomePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Query(
      options: QueryOptions(
        document: gql(readCounters),
        variables: {
          'counterId': 23,
        },
        pollInterval: Duration(seconds: 10),
      ),
      builder: (QueryResult result, {VoidCallback refetch, FetchMore fetchMore}) {
        if (result.hasException) {
          return Text(result.exception.toString());
        }
        if (result.isLoading) {
          return Text('Loading');
        }
        // result.data can be either a Map or a List
        List counters = result.data['counter'];
        return ListView.builder(
          itemCount: counters.length,
          itemBuilder: (context, index) {
            return Text(counters[index]['name']);
          },
        );
      },
    );
  }
}
The Query widget encloses the ListView widget, which will display the list of counters to be retrieved from our GraphQL server. As a result, the Query widget must wrap the widget where the data fetched by the Query widget is to be displayed.
The Query widget cannot be the tree’s topmost widget. It can be placed wherever you want as long as the widget that will use its data is underneath or wrapped by it.
In addition, two properties have been passed to the Query widget: options and builder.
The option property is where the query configuration is passed to the Query widget. This options prop is a QueryOptions instance. The QueryOptions class exposes properties that we use to configure the Query widget.
The query string, i.e., the query to be executed by the Query widget, is passed in via the document property. We passed in the readCounters string here:
final String readCounters = r"""
  query readCounters($counterId: Int!) {
    counter {
      name
      id
    }
  }
""";
The variables property is used to send query variables to the Query widget. Here we passed 'counterId': 23, which will be substituted for $counterId in the readCounters query string.
The pollInterval specifies how often the Query widget polls or refreshes the query data. The timer is set to 10 seconds, so the Query widget will perform HTTP requests to refresh the query data every 10 seconds.
builder
The builder property is a function. It is called when the Query widget sends an HTTP request to the GraphQL server endpoint, and it receives the data from the query, a function to re-fetch the data, and a fetchMore function used for pagination, i.e., for fetching additional results.
The builder function returns widgets that are listed below the Query widget. The result argument is a QueryResult instance. The QueryResult class has properties that can be used to determine the query’s current state and the data returned by the Query widget.
If the query encounters an error, QueryResult.hasException is set.
If the query is still in progress, QueryResult.isLoading is set. We can use this property to show our users a UI progress bar to let them know that something is on its way.
The data returned by the GraphQL endpoint is stored in QueryResult.data.
Mutations
Let’s look at how to make mutation queries with the Mutation widget in graphql_flutter.
The Mutation widget, like the Query widget, accepts some properties.
options is a MutationOptions class instance. This is the location of the mutation string and other configurations.
The mutation string is set using the document property. An addCounter mutation is passed to document here; the Mutation widget will execute it.
update is called when the cache needs to be refreshed. The update function receives the current cache and the outcome of the mutation, and we refresh the cache based on that result.
onCompleted is called once the mutation has completed on the GraphQL endpoint, and it receives the mutation result. builder returns the widget for the Mutation widget's tree; it is invoked with a RunMutation instance, runMutation, and a QueryResult instance, result.
The Mutation widget's mutation is executed by calling runMutation; whenever it is called, the mutation runs. The mutation variables are passed as parameters to the runMutation function. Here, runMutation is invoked with the counterId variable set to 21.
When the Mutation’s mutation is finished, the builder is called, and the Mutation rebuilds its tree. runMutation and the mutation result are passed to the builder function.
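The properties described above might be wired up as follows. This is a minimal sketch rather than the article's original listing: the addCounter mutation string, its field names, and the button UI are assumptions.

```dart
// Assumed mutation string; \$counterId is escaped so Dart does not interpolate it.
final String addCounter = """
  mutation addCounter(\$counterId: Int!) {
    addCounter(counterId: \$counterId) {
      name
      id
    }
  }
""";

Mutation(
  options: MutationOptions(
    document: gql(addCounter),
    // Refresh the cache based on the mutation result.
    update: (cache, result) {
      // e.g., write the new counter into the cache here.
    },
    // Called once the mutation has completed on the endpoint.
    onCompleted: (dynamic resultData) {
      print(resultData);
    },
  ),
  builder: (RunMutation runMutation, QueryResult result) {
    return ElevatedButton(
      child: const Text('Add counter'),
      // runMutation triggers the mutation; the variables map is
      // substituted into the mutation string.
      onPressed: () => runMutation({'counterId': 21}),
    );
  },
);
```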
Subscriptions
Subscriptions in GraphQL are similar to an event system that listens on a WebSocket and calls a function whenever an event is emitted into the stream.
The client connects to the GraphQL server via a WebSocket. The event is passed to the WebSocket whenever the server emits an event from its end. So this is happening in real-time.
The graphql_flutter plugin in Flutter uses WebSockets and Dart streams to open and receive real-time updates from the server.
Let’s look at how we can use our Flutter app’s Subscription widget to create a real-time connection. We’ll start by creating our subscription string:
final counterSubscription = '''
  subscription counterAdded {
    counterAdded {
      name
      id
    }
  }
''';
When we add a new counter to our GraphQL server, this subscription will notify us in real-time.
The Subscription widget has several properties, as we can see:
options holds the Subscription widget’s configuration.
document holds the subscription string.
builder returns the Subscription widget’s widget tree.
The builder function is called with the subscription result. The result has the following properties:
If the Subscription widget encounters an error while polling the GraphQL server for updates, result.hasException is set.
If polling from the server is active, result.isLoading is set.
The provided helper widget ResultAccumulator is used to collect subscription results, according to graphql_flutter’s pub.dev page.
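A Subscription widget wired up with the pieces above might look like this. This is a sketch based on the plugin's documented usage; the counterAdded data shape is an assumption carried over from the subscription string:

```dart
Subscription(
  options: SubscriptionOptions(
    document: gql(counterSubscription),
  ),
  builder: (QueryResult result) {
    if (result.hasException) {
      return Text(result.exception.toString());
    }
    if (result.isLoading) {
      return const Text('Waiting for new counters...');
    }
    // Each event emitted by the server arrives here in real time.
    return Text('New counter: ${result.data?['counterAdded']['name']}');
  },
);
```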
Conclusion
This blog intends to help you understand what makes GraphQL so powerful, how to use it in Flutter, and how to take advantage of the reactive nature of graphql_flutter. You can now take the first steps in building your applications with GraphQL!