Single Sign On (SSO) makes it simple for users to begin using an application. Support for SSO is crucial for enterprise apps, as many corporate security policies mandate that all applications use certified SSO mechanisms. While the SSO experience is straightforward, the standards behind it are anything but. It’s easy to get lost in the jargon: SAML, OAuth 1.0, 1.0a, and 2.0, OpenID, OpenID Connect, JWT, and tokens of every kind (refresh tokens, access tokens, bearer tokens, authorization tokens). Standards documentation is too precise to allow generalization, and vendor literature can make you believe it’s too difficult to do it yourself.
I’ve built SSO for many applications in the past. Knowing your target market, norms, and platform is crucial.
Single Sign On
Single Sign On is an authentication method that lets users securely authenticate with multiple applications using just one set of login credentials.
This allows applications to avoid the hassle of storing and managing user information like passwords, and it also cuts down on troubleshooting login-related issues. With SSO configured, applications ask the SSO provider (Okta, Google, Salesforce, Microsoft) to verify the user’s identity.
Types of SSO
Security Assertion Markup Language (SAML)
OpenID Connect (OIDC)
OAuth (specifically OAuth 2.0 nowadays)
Federated Identity Management (FIM)
Security Assertion Markup Language – SAML
SAML (Security Assertion Markup Language) is an open standard that enables identity providers (IdPs) to pass authentication and authorization data to service providers (SPs). This means you can use one set of credentials to log in to many different websites. It’s considerably easier to manage a single login per user than to handle separate logins for email, CRM software, Active Directory, and other systems.
For standardized interactions between the identity provider and service providers, SAML transactions employ Extensible Markup Language (XML). SAML is the link between a user’s identity authentication and authorization to use a service.
In our example implementation, we will be using SAML 2.0 as the standard for the authentication flow.
Technical details
A Service Provider (SP) is the entity that provides the service, which is in the form of an application. Examples: Google Workspace apps such as Gmail, Drive, and Meet.
An Identity Provider (IdP) is the entity that provides identities, including the ability to authenticate a user. The user profile is typically stored in the Identity Provider and includes additional information about the user, such as first name, last name, job code, phone number, and address. Depending on the application, some service providers might require a very simple profile (username, email), while others may need a richer set of user data (department, job code, address, location, and so on). Examples: Active Directory, Okta’s built-in IdP, Salesforce IdP, Google.
The SAML sign-in flow initiated by the Identity Provider is referred to as an Identity Provider Initiated (IdP-initiated) sign-in. In this flow, the Identity Provider produces a SAML response that is routed to the Service Provider to assert the user’s identity, rather than the SAML flow being triggered by a redirect from the Service Provider. When the Service Provider initiates the SAML sign-in process, it is referred to as an SP-initiated sign-in. This is typically triggered when end-users try to access a protected resource, such as when the browser tries to load a page from a protected network share.
Configuration details
Certificate – To validate the signature, the SP must receive the IdP’s public certificate. On the SP side, the certificate is kept and used anytime a SAML response is received.
Assertion Consumer Service (ACS) Endpoint – Sometimes referred to as the SP sign-in URL, this is the endpoint supplied by the SP for posting SAML responses. The SP must send this information to the IdP.
IdP Sign-in URL – This is the endpoint where SAML requests are posted on the IdP side. This information must be obtained by the SP from the IdP.
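To make the ACS step concrete, here is a minimal TypeScript sketch (endpoint paths and field names are illustrative, not from a specific product): the SP receives the SAML response base64-encoded in a form field and must decode it before validating the XML signature against the stored IdP certificate, a step that requires a proper SAML library and is not shown.

```typescript
// Minimal sketch, not production code: a SAML response arrives at the ACS
// endpoint as a base64-encoded XML document in the SAMLResponse form field.
function decodeSamlResponse(samlResponseField: string): string {
  return Buffer.from(samlResponseField, "base64").toString("utf8");
}

// A toy assertion round-tripped through base64 for illustration.
const xml = "<samlp:Response><saml:Assertion/></samlp:Response>";
const posted = Buffer.from(xml, "utf8").toString("base64");
console.log(decodeSamlResponse(posted) === xml); // true
```

After decoding, the SP verifies the XML signature with the IdP’s public certificate (the certificate exchange described above) before trusting any attribute in the assertion.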
OpenID Connect – OIDC
The OIDC protocol is built on the OAuth 2.0 framework. OIDC authenticates the identity of a specific user, while OAuth 2.0 allows two applications to trust each other and exchange data.
So, while the main flow appears to be the same, the labels are different.
How are SAML and OIDC similar?
The basic login flow for both is the same.
1. A user tries to log into the application directly.
2. The program sends the user’s login request to the IdP via the browser.
3. The user logs in to the IdP or confirms that they are already logged in.
4. The IdP verifies that the user has permission to use the program that initiated the request.
5. Information about the user is sent from the IdP to the user’s browser.
6. Their data is subsequently forwarded to the application.
7. The application verifies that they have permission to use the resources.
8. The user has been granted access to the program.
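In OIDC terms, step 2 of this flow is a browser redirect to the IdP’s authorization endpoint. A sketch of building that redirect URL in TypeScript (all URLs and client values below are placeholders, not a real provider):

```typescript
// Build an OIDC authorization-code-flow request URL. Endpoint, client id,
// and redirect URI are illustrative placeholders.
function buildAuthorizationUrl(
  authEndpoint: string,
  clientId: string,
  redirectUri: string,
  state: string
): string {
  const url = new URL(authEndpoint);
  url.searchParams.set("response_type", "code"); // authorization code flow
  url.searchParams.set("client_id", clientId);
  url.searchParams.set("redirect_uri", redirectUri);
  url.searchParams.set("scope", "openid profile email"); // "openid" marks this as OIDC
  url.searchParams.set("state", state); // CSRF protection
  return url.toString();
}

const authUrl = buildAuthorizationUrl(
  "https://idp.example.com/authorize",
  "my-client-id",
  "https://app.example.com/callback",
  "xyz123"
);
console.log(authUrl);
```

The `openid` scope is what distinguishes an OIDC authentication request from a plain OAuth 2.0 authorization request.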
Difference between SAML and OIDC
1. SAML transmits user data in XML, while OpenID Connect transmits data in JSON.
2. SAML calls the data it sends an assertion. OIDC calls the data it sends claims.
3. In SAML, the application or system the user is trying to get into is referred to as the Service Provider. In OIDC, it’s called the Relying Party.
SAML vs. OIDC
1. OpenID Connect is becoming increasingly popular. Because it interacts with RESTful API endpoints, it is easier to build than SAML and is easily available through APIs. This also implies that it is considerably more compatible with mobile apps.
2. You won’t often have a choice between SAML and OIDC when configuring Single Sign On (SSO) for an application through an identity provider like OneLogin. If you do have a choice, it is important to understand not only the differences between the two, but also which one is more likely to be sustained over time. OIDC appears to be the clear winner at this time because developers find it much easier to work with as it is more versatile.
Use Cases
1. SAML and OIDC:
– Log in with Salesforce: SAML authentication where Salesforce was used as the IdP and the web application as the SP.
Key Reason:
All users are centrally managed in Salesforce, so SAML was the preferred choice for authentication.
– Log in with Okta: OIDC authentication where Okta was used as the IdP and the web application as the SP.
Key Reason:
Okta is already used for user provisioning and de-provisioning of all internal users and employees, and Okta’s Active Directory (AD) integration enables the organization to connect Okta with any on-premises AD.
In both implementations, user provisioning and de-provisioning take place on the IdP side.
SP-initiated (From web application)
IdP-initiated (From Okta Active Directory)
2. Only OIDC login flow:
OIDC Authentication where Google, Salesforce, Office365, and Okta are used as IdP and the web application as SP.
Why not use OAuth for SSO
1. OAuth 2.0 is not a protocol for authentication. It explicitly states this in its documentation.
2. With authentication, you’re basically trying to figure out who the user is, when they authenticated, and how they authenticated. These questions are usually answered with SAML assertions rather than access tokens and permission grants.
OIDC vs. OAuth 2.0
OAuth 2.0 is a framework that allows a user of a service to grant third-party application access to the service’s data without revealing the user’s credentials (ID and password).
OpenID Connect is a framework on top of OAuth 2.0 where a third-party application can obtain a user’s identity information which is managed by a service. OpenID Connect can be used for SSO.
In the OAuth flow, the Authorization Server returns an Access Token only. In the OpenID Connect flow, the Authorization Server returns both an Access Token and an ID Token. The ID Token is a JSON Web Token (JWT): a specially formatted string from which the Client can extract information such as your ID, your name, when you logged in, when the ID Token expires, and whether the token has been tampered with.
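As a quick illustration, the claims in an ID Token can be read by base64url-decoding its middle segment; real code must additionally verify the signature with the IdP’s key, which is not shown here. The token built below is a toy, unsigned example:

```typescript
// A JWT is three base64url segments: header.payload.signature.
// Decoding the payload exposes the claims; it does NOT verify authenticity.
function decodeJwtPayload(idToken: string): any {
  const payload = idToken.split(".")[1];
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

// Build a toy, unsigned token just to demonstrate the decoding.
const claims = { sub: "user-42", name: "Jane Doe", exp: 1700000000 };
const toyToken = [
  Buffer.from(JSON.stringify({ alg: "none" })).toString("base64url"),
  Buffer.from(JSON.stringify(claims)).toString("base64url"),
  "", // empty signature segment for this toy example
].join(".");

console.log(decodeJwtPayload(toyToken).sub); // user-42
```

In production, use a JWT library to validate the signature and the `exp`, `iss`, and `aud` claims before trusting anything in the token.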
Federated Identity Management (FIM)
Identity Federation, also known as federated identity management, is a system that allows users from different companies to utilize the same verification method for access to apps and other resources.
In short, it’s what allows you to sign in to Spotify with your Facebook account.
Single Sign On (SSO) is a subset of federated identity management.
SSO generally enables users to use a single set of credentials to access multiple systems within a single organization, while FIM enables users to access systems across different organizations.
How does FIM work?
Users authenticate to their home security domain to log in to their home network.
After authenticating to their home domain, the user attempts to connect to a remote application that employs identity federation.
Instead of the remote application authenticating the user itself, the user is prompted to authenticate with their home authentication server.
The user’s home authentication server asserts the user’s identity to the remote application, and the user is permitted to access it.
A user logs in to their home domain once; remote apps in other domains can then grant access to the user without an additional login process.
Applications:
Auth0: Auth0 uses OpenID Connect and OAuth 2.0 to authenticate users and get their permission to access protected resources. It lets developers design and deploy applications and APIs that handle authentication and authorization through the OIDC/OAuth 2.0 protocols with minimal effort.
AWS Cognito
User pools – In Amazon Cognito, a user pool is a user directory. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito, or federate through a third-party identity provider (IdP). All members of the user pool have a directory profile that you can access using an SDK, whether they sign in directly or through a third party.
Identity pools – An identity pool allows your users to get temporary AWS credentials for services like Amazon S3 and DynamoDB.
Conclusion:
I hope you found the summary of my SSO research beneficial. The optimum implementation approach is determined by your unique situation, technological architecture, and business requirements.
Thinking about using GraphQL but unsure where to start?
This is a concise tutorial based on our experience using GraphQL. You will learn how to use GraphQL in a Flutter app, including how to create a query, a mutation, and a subscription using the graphql_flutter plugin. Once you’ve mastered the fundamentals, you can move on to designing your own workflow.
Key topics and takeaways:
* GraphQL
* What is graphql_flutter?
* Setting up graphql_flutter and GraphQLProvider
* Queries
* Mutations
* Subscriptions
GraphQL
Looking to call multiple endpoints to populate data for a single screen? Wish you had more control over the data returned by an endpoint, so that a single call returns only the data fields you actually need?
Follow along to learn how to do this with GraphQL. GraphQL’s goal was to change the way data is supplied from the backend, and it allows you to specify the data structure you want.
Let’s imagine that we have the table model in our database that looks like this:
Movie {
title
genre
rating
year
}
These fields represent the properties of the Movie Model:
title is the name of the movie,
genre describes what kind of movie it is,
rating represents viewers’ interest,
year states when it was released
We can get movies like this using REST:
GET localhost:8080/movies
[
  {
    "title": "The Godfather",
    "genre": "Drama",
    "rating": 9.2,
    "year": 1972
  }
]
As you can see, whether or not we need them, REST returns all of the properties of each movie. In our frontend, we may just need the title and genre properties, yet all of them were returned.
We can avoid redundancy by using GraphQL. We can specify the properties we wish to be returned using GraphQL, for example:
query movies {
  Movie {
    title
    genre
  }
}
We’re informing the server that we only require the movie table’s title and genre properties. It provides us with exactly what we require:
{
  "data": {
    "Movie": [
      {
        "title": "The Godfather",
        "genre": "Drama"
      }
    ]
  }
}
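Whatever the client, the wire format is the same: a single POST with the query text in a JSON body. A minimal TypeScript sketch of that request shape (the endpoint URL is a placeholder):

```typescript
// Build the request any GraphQL client sends over the wire: one POST whose
// JSON body carries the query (and, optionally, variables).
function buildGraphqlRequest(query: string, variables?: object) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  };
}

const req = buildGraphqlRequest("query movies { Movie { title genre } }");
// e.g. fetch("http://localhost:8080/graphql", req)
console.log(req.body);
```

This is all a GraphQL "endpoint" is: one URL, with the shape of the response controlled by the query in the body rather than by the route.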
GraphQL is a backend technology, whereas Flutter is a frontend SDK for developing mobile apps. We get the data displayed on the mobile app from a backend when we use mobile apps.
It’s simple to create a Flutter app that retrieves data from a GraphQL backend. Simply make an HTTP request from the Flutter app, then use the returned data to set up and display the UI.
What is graphql_flutter?
The new graphql_flutter plugin includes APIs and widgets that make it simple to retrieve and use data from a GraphQL backend.
graphql_flutter, as the name suggests, is a GraphQL client for Flutter. It exports widgets and providers for retrieving data from GraphQL backends, such as:
HttpLink — This is used to specify the backend’s endpoint or URL.
GraphQLClient — This class is used to retrieve a query or mutation from a GraphQL endpoint as well as to connect to a GraphQL server.
GraphQLCache — We use this class to cache our queries and mutations. It has an options store where we pass the type of store to it during its caching operation.
GraphQLProvider — This widget encapsulates the graphql flutter widgets, allowing them to perform queries and mutations. This widget is given to the GraphQL client to use. All widgets in this provider’s tree have access to this client.
Query — This widget is used to perform a backend GraphQL query.
Mutation — This widget is used to modify a GraphQL backend.
Subscription — This widget allows you to create a subscription.
Setting up graphql_flutter and GraphQLProvider
Create a Flutter project:
flutter create flutter_graphql
cd flutter_graphql
Next, install the graphql_flutter package:
flutter pub add graphql_flutter
The code above will set up the graphql_flutter package. This will include the graphql_flutter package in the dependencies section of your pubspec.yaml file:
dependencies:
  graphql_flutter: ^5.0.0
To use the widgets, we must import the package as follows:
import 'package:graphql_flutter/graphql_flutter.dart';
Before we can start making GraphQL queries and mutations, we must first wrap our root widget in GraphQLProvider. A GraphQLClient instance must be provided to the GraphQLProvider’s client property.
GraphQLProvider( client: GraphQLClient(...))
The GraphQLClient includes the GraphQL server URL as well as a caching mechanism.
HttpLink is used to generate the URL for the GraphQL server. The GraphQLClient receives the instance of the HttpLink in the form of a link property, which contains the URL of the GraphQL endpoint.
The cache passed to GraphQLClient specifies the cache mechanism to be used. To persist or store caches, the InMemoryCache instance makes use of an in-memory database.
A GraphQLClient instance is passed to a ValueNotifier. This ValueNotifier holds a single value and notifies its listeners when that value changes. graphql_flutter uses it to notify its widgets when the data from a GraphQL endpoint changes, which keeps the UI responsive.
We’ll now encase our MaterialApp widget in GraphQLProvider:
void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return GraphQLProvider(
      client: client,
      child: MaterialApp(
        title: 'GraphQL Demo',
        theme: ThemeData(primarySwatch: Colors.blue),
        home: MyHomePage(title: 'GraphQL Demo'),
      ),
    );
  }
}
Queries
We’ll use the Query widget to create a query with the graphql_flutter package.
class MyHomePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Query(
      options: QueryOptions(
        document: gql(readCounters),
        variables: {
          'counterId': 23,
        },
        pollInterval: Duration(seconds: 10),
      ),
      builder: (QueryResult result,
          {VoidCallback refetch, FetchMore fetchMore}) {
        if (result.hasException) {
          return Text(result.exception.toString());
        }
        if (result.isLoading) {
          return Text('Loading');
        }
        // result.data can hold either a Map or a List
        List counters = result.data['counter'];
        return ListView.builder(
          itemCount: counters.length,
          itemBuilder: (context, index) {
            return Text(counters[index]['name']);
          },
        );
      },
    );
  }
}
The Query widget encloses the ListView widget, which will display the list of counters to be retrieved from our GraphQL server. As a result, the Query widget must wrap the widget where the data fetched by the Query widget is to be displayed.
The Query widget cannot be the tree’s topmost widget. It can be placed wherever you want as long as the widget that will use its data is underneath or wrapped by it.
In addition, two properties have been passed to the Query widget: options and builder.
The option property is where the query configuration is passed to the Query widget. This options prop is a QueryOptions instance. The QueryOptions class exposes properties that we use to configure the Query widget.
The query string or the query to be conducted by the Query widget is set or sent in via the document property. We passed in the readCounters string here:
final String readCounters = """
query readCounters(\$counterId: Int!) {
  counter {
    name
    id
  }
}
""";
The variables attribute is used to send query variables to the Query widget. Here, 'counterId': 23 is passed; it replaces $counterId in the readCounters query string.
The pollInterval specifies how often the Query widget polls or refreshes the query data. The timer is set to 10 seconds, so the Query widget will perform HTTP requests to refresh the query data every 10 seconds.
builder
The builder property is a function. It is called when the Query widget makes an HTTP request to the GraphQL server endpoint. The Query widget invokes the builder with the data returned by the query, a function to re-fetch the data, and a fetchMore function used for pagination (fetching more data).
The builder function returns widgets that are listed below the Query widget. The result argument is a QueryResult instance. The QueryResult class has properties that can be used to determine the query’s current state and the data returned by the Query widget.
If the query encounters an error, QueryResult.hasException is set.
If the query is still in progress, QueryResult.isLoading is set. We can use this property to show our users a UI progress bar to let them know that something is on its way.
The data returned by the GraphQL endpoint is stored in QueryResult.data.
Mutations
Let’s look at how to make mutation queries with the Mutation widget in graphql_flutter.
The Mutation widget, like the Query widget, accepts some properties.
options is a MutationOptions class instance. This is the location of the mutation string and other configurations.
The mutation string is set using a document. An addCounter mutation has been passed to the document in this case. The Mutation widget will handle it.
When we want to update the cache, we call update. The update function receives the previous cache (cache) and the outcome of the mutation. Anything returned by the update becomes the cache’s new value. Based on the results, we’re refreshing the cache.
onCompleted is called once the mutation on the GraphQL endpoint has completed, and it receives the mutation result. The builder function returns the widget tree under the Mutation widget; it is invoked with a RunMutation instance (runMutation) and a QueryResult instance (result).
The Mutation widget’s mutation is executed using runMutation; calling it triggers the mutation. The mutation variables are passed as parameters to the runMutation function. Here, runMutation is invoked with the counterId variable, 21.
When the Mutation’s mutation is finished, the builder is called, and the Mutation rebuilds its tree. runMutation and the mutation result are passed to the builder function.
Subscriptions
Subscriptions in GraphQL are similar to an event system that listens on a WebSocket and calls a function whenever an event is emitted into the stream.
The client connects to the GraphQL server via a WebSocket. The event is passed to the WebSocket whenever the server emits an event from its end. So this is happening in real-time.
The graphql_flutter plugin in Flutter uses WebSockets and Dart streams to open and receive real-time updates from the server.
Let’s look at how we can use our Flutter app’s Subscription widget to create a real-time connection. We’ll start by creating our subscription string:
final counterSubscription ='''subscription counterAdded { counterAdded { name id }}''';
When we add a new counter to our GraphQL server, this subscription will notify us in real-time.
The Subscription widget has several properties, as we can see:
options holds the Subscription widget’s configuration.
document holds the subscription string.
builder returns the Subscription widget’s widget tree.
The builder function is called with the subscription result, which has the following properties:
If the Subscription widget encounters an error while polling the GraphQL server for updates, result.hasException is set.
If polling from the server is active, result.isLoading is set.
The provided helper widget ResultAccumulator is used to collect subscription results, according to graphql_flutter’s pub.dev page.
Conclusion
This blog intends to help you understand what makes GraphQL so powerful, how to use it in Flutter, and how to take advantage of the reactive nature of graphql_flutter. You can now take the first steps in building your applications with GraphQL!
GraphQL has revolutionized how a client queries a server. With the thin layer of GraphQL middleware, the client has the ability to query the data more comprehensively than what’s provided by the usual REST APIs.
One of the key principles of GraphQL involves having a single data graph of the implementing services that will allow the client to have a unified interface to access more data and services through a single query. Having said that, it can be challenging to follow this principle for an enterprise-level application on a single, monolith GraphQL server.
The Need for Federated Services
James Baxley III, the Engineering Manager at Apollo, in his talk here, puts forward the rationale behind choosing an independently managed federated set of services very well.
To summarize his point, let’s consider a very complex enterprise product. This product would essentially have multiple teams responsible for maintaining different modules of the product. Now, if we’re considering implementing a GraphQL layer at the backend, it would only make sense to follow the one graph principle of GraphQL: this says that to maximize the value of GraphQL, we should have a single unified data graph that’s operating at the data layer of this product. With that, it will be easier for a client to query a single graph and get all the data without having to query different graphs for different data portions.
However, it would be challenging to have the entire logic of a huge enterprise data graph residing in a single codebase. In addition, we want teams to be able to independently implement, maintain, and ship different schemas of the data graph on their own release cycles.
Though there is only one graph, the implementation of that graph should be federated across multiple teams.
Now, let’s consider a massive enterprise e-commerce platform as an example. The different schemas of the e-commerce platform look something like:
Fig:- E-commerce platform set of schemas
Considering the above example, it would be a chaotic task to maintain the graph implementation logic of all these schemas on a single code base. Another overhead that this would bring is having to scale a huge monolith that’s implementing all these services.
Thus, one solution is a federation of services for a single distributed data graph. Each service can be implemented independently by individual teams while maintaining their own release cycles and having their own iterations of their services. Also, a federated set of services still follows the one graph principle of GraphQL, which allows the client to query a single endpoint to fetch any part of the data graph.
To further demonstrate the example above, let’s say the client asks for the top five products, their reviews, and the vendors selling them. In a typical monolith GraphQL server, this query would involve writing a resolver that is a mesh of the data sources of these individual schemas, and teams would have to collaborate on a single implementation. Now consider a federated approach with separate services implementing products, reviews, and vendors. Each service is responsible for resolving only the part of the data graph covered by its own schema and data source. This makes it far easier for the different teams managing different schemas to collaborate.
Another advantage is handling the scaling of individual services rather than maintaining a compute-heavy monolith for a huge data graph. For example, suppose the products service is used the most on the platform while the vendors service is scarcely used. With a monolith approach, scaling would have to happen on the overall server. Federated services eliminate this: we can independently maintain and scale individual services like the products service.
Federated Implementation of GraphQL Services
A monolith GraphQL server that implements a lot of services for different schemas can be challenging to scale. Instead of implementing the complete data graph on a single codebase, the responsibilities of different parts of the data graph can be split across multiple composable services. Each one will contain the implementation of only the part of the data graph it is responsible for. Apollo Federation allows this division of services and follows a declarative programming model to allow splitting of concerns.
Architecture Overview
This article will not cover the basics of GraphQL, such as writing resolvers and schemas. If you’re not acquainted with the basics of GraphQL and setting up a basic GraphQL server using Apollo, I would highly recommend reading about it here. Then, you can come back here to understand the implementation of federated services using Apollo Federation.
Apollo Federation has two principal parts to it:
A collection of services that distinctly define separate GraphQL schemas
A gateway that builds the federated data graph and acts as a forefront to distinctly implement queries for different services
Fig:- Apollo Federation Architecture
Separation of Concerns
The usual way of going about implementing federated services would be by splitting an existing monolith based on the existing schemas defined. Although this way seems like a clear approach, it will quickly cause problems when multiple Schemas are involved.
To illustrate, this is a typical way to split services from a monolith based on the existing defined Schemas:
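A plausible reconstruction of such a naive split (type and field names here are assumptions based on the discussion that follows): each service declares every field of its own schema, including fields whose data actually lives in the other service’s datastore.

```typescript
// Hypothetical naive split: each service keeps cross-service fields in its
// own schema, even though it cannot resolve them from its own datastore.
const naiveUserServiceSDL = `
  type User {
    id: ID!
    username: String!
    tweets: [Tweet]   # but Tweet data lives in the Tweet service's store
  }
`;

const naiveTweetServiceSDL = `
  type Tweet {
    id: ID!
    text: String!
    creator: User     # but User data lives in the User service's store
  }
`;

console.log(naiveUserServiceSDL.includes("tweets"),
            naiveTweetServiceSDL.includes("creator"));
```
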
In the example above, although the tweets field belongs to the User schema, it wouldn’t make sense to populate this field in the User service. The tweets field of a User should be declared and resolved in the Tweet service itself. Similarly, it wouldn’t be right to resolve the creator field inside the Tweet service.
The reason behind this approach is the separation of concerns. The User service might not even have access to the Tweet datastore to be able to resolve the tweets field of a user. On the other hand, the Tweet service might not have access to the User datastore to resolve the creator field of the Tweet schema.
Considering the above schemas, each service is responsible for resolving the respective field of each Schema it is responsible for.
Implementation
To illustrate Apollo Federation, we’ll be considering a Node.js server built with TypeScript. The packages used are provided by the Apollo libraries.
npm i --save apollo-server @apollo/federation @apollo/gateway
Some additional libraries to help run the services in parallel:
npm i --save nodemon ts-node concurrently
Let’s go ahead and write the structure for the gateway service first. Let’s create a file gateway.ts:
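A minimal sketch of the configuration gateway.ts works with, based on the surrounding description; only plain objects are shown here, which the real file passes to ApolloGateway and ApolloServer (from '@apollo/gateway' and 'apollo-server'). The shapes and ports are assumptions.

```typescript
// Sketch of the options gateway.ts would pass to ApolloGateway/ApolloServer.
interface ServiceEndpoint {
  name: string; // label for the implementing service
  url: string;  // where that service's Apollo server listens
}

// Empty for now: the user and tweet services are implemented later.
const serviceList: ServiceEndpoint[] = [];

// Apollo Federation does not support subscriptions, so they are disabled
// on the gateway's ApolloServer instance.
const serverOptions = { subscriptions: false };

console.log(serviceList.length, serverOptions.subscriptions);
```
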
Note the serviceList is an empty array for now since we’ve yet to implement the individual services. In addition, we pass the subscriptions: false option to the apollo server config because currently, Apollo Federation does not support subscriptions.
Next, let’s add the User service in a separate file user.ts using:
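A sketch of what the schema portion of user.ts could contain, consistent with the surrounding description; field names such as username are assumptions inferred from the mutations used later in the article. In the real file this SDL string is handed to the federation schema builder.

```typescript
// Hypothetical SDL for the User service. @key marks User as a federated
// entity; Query and Mutation are extended, since they live at the gateway.
const userTypeDefs = `
  type User @key(fields: "id") {
    id: ID!
    username: String!
  }

  extend type Query {
    users: [User]
    user(id: ID!): User
  }

  extend type Mutation {
    createUser(username: String!): User
  }
`;

console.log(userTypeDefs.includes("@key"));
```
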
The @key directive helps other services understand that the User schema is, in fact, an entity that can be extended within other individual services. The fields argument lets other services uniquely identify individual instances of the User entity based on the id.
The Query and Mutation types need to be extended by all implementing services, according to the Apollo Federation documentation, since they are always defined at the gateway level.
As a side note, the User model imported via import User from './datasources/models/User'; is essentially a Mongoose ORM model for MongoDB that handles all the CRUD operations of a User entity in a MongoDB database. In addition, the mongoStore() function is responsible for establishing a connection to the MongoDB database server.
The User model implementation internally in Mongoose ORM looks something like this:
In the Query type, the users and the user(id: ID!) queries fetch a list or the details of individual users.
In the resolvers, we define a __resolveReference function responsible for returning an instance of the User entity to other implementing services that hold only a reference id of a User and need the full instance. The ref parameter is an object { id: 'userEntityId' } containing the id of a User instance, passed down from other services that need to resolve the reference. Internally, we fire a Mongoose findOne query to return the User from the users database based on the reference id. To illustrate the resolver:
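A sketch of the User service’s reference resolver; an in-memory Map stands in for the Mongoose User model here, and the sample ids and usernames are made up for illustration.

```typescript
// Sketch of the User service's reference resolver (in-memory stand-in for
// the Mongoose model described in the text).
type UserRecord = { id: string; username: string };

const usersStore = new Map<string, UserRecord>([
  ["1", { id: "1", username: "@elonmusk" }],
]);

const userResolvers = {
  User: {
    // Called by the gateway when another service references a User by id;
    // real code would run User.findOne({ _id: ref.id }) instead.
    __resolveReference(ref: { id: string }): UserRecord | undefined {
      return usersStore.get(ref.id);
    },
  },
};

console.log(userResolvers.User.__resolveReference({ id: "1" })?.username); // @elonmusk
```
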
At the end of the file, we make sure the service is running on a unique port number 4001, which we pass as an option while running the apollo server. That concludes the User service.
Next, let’s add the tweet service by creating a file tweet.ts using:
touch tweet.ts
The following code goes as a part of the tweet service:
The Tweet schema has the text field, which is the content of the tweet, a unique id of the tweet, and a creator field, which is of the User entity type and resolves into the details of the user that created the tweet:
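A sketch of what the Tweet service’s schema could look like, matching this description; field names are assumptions.

```typescript
// Hypothetical SDL for the Tweet service: Tweet is its own entity, and the
// User entity is extended here with the tweets field.
const tweetTypeDefs = `
  type Tweet @key(fields: "id") {
    id: ID!
    text: String!
    creator: User
  }

  extend type User @key(fields: "id") {
    id: ID! @external
    tweets: [Tweet]
  }

  extend type Query {
    tweets: [Tweet]
    tweet(id: ID!): Tweet
  }
`;

console.log(tweetTypeDefs.includes("@external"));
```
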
We extend the User entity schema in this service, which has the id field with an @external directive. This helps the Tweet service understand that based on the given id field of the User entity schema, the instance of the User entity needs to be derived from another service (user service in this case).
As we discussed previously, the tweets field of the extended User schema for the user entity should be resolved in the Tweet service since all the resolvers and access to the data sources with respect to the Tweets entity resides in this service.
The Query and Mutation types of the Tweet service are pretty straightforward; we have a tweets and a tweet(id: ID!) queries to resolve a list or resolve an individual instance of the Tweet entity.
To resolve the creator field of the Tweet entity, the Tweet service needs to tell the gateway that this field will be resolved by the User service. Hence, we pass the id of the User and a __typename for the gateway to be able to call the right service to resolve the User entity instance. In the User service earlier, we wrote a __resolveReference resolver, which will resolve the reference of a User based on an id.
Now, we need to resolve the tweets field of the User entity extended in the Tweet service. We write a resolver that receives the parent user entity reference as its first argument, using which we can fire a Mongoose query to return all the tweets created by that user, given its id.
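The two resolvers described here can be sketched as follows; in-memory data stands in for Mongoose, and the author field name is an assumption for the example.

```typescript
// Illustrative sketch of the Tweet service resolvers. In-memory data stands
// in for the Mongoose model; the `author` field name is an assumption.
type TweetDoc = { id: string; text: string; author: string };

const tweets: TweetDoc[] = [
  { id: 't1', text: 'I own Tesla', author: '1' },
  { id: 't2', text: 'I own Microsoft', author: '2' },
];

export const resolvers = {
  Tweet: {
    // Return only a reference; the gateway asks the User service to resolve it
    // via its __resolveReference function.
    creator(tweet: TweetDoc): { __typename: string; id: string } {
      return { __typename: 'User', id: tweet.author };
    },
  },
  User: {
    // Resolve the `tweets` field added to the extended User entity.
    tweets(user: { id: string }): TweetDoc[] {
      // The real service runs: Tweet.find({ author: user.id })
      return tweets.filter((t) => t.author === user.id);
    },
  },
};
```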
At the end of the file, similar to the User service, we make sure the Tweet service runs on a different port by adding the port: 4002 option to the Apollo server config. That concludes both our implementing services.
Now that we have our services ready, let’s update our gateway.ts file to reflect the added services:
The concurrently library helps run the 3 separate scripts in parallel. The server:* scripts spin up a dev server using nodemon to watch and reload the server on changes, and ts-node to execute the TypeScript code in Node.
Let’s spin up our server:
npm start
On visiting http://localhost:4000, you should see the GraphQL query playground served by the Apollo server:
Querying and Mutation from the Client
Initially, let’s fire some mutations to create two users and some tweets by those users.
Mutations
Fire the following mutations in the GraphQL playground. The first creates a user with the username "@elonmusk" and returns the id of the user.
We will create another user, "@billgates", and take note of the returned id.
Now that we have two users, let's fire mutations to create some tweets: first a tweet by "@elonmusk", then another by "@billgates".
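Assuming mutation fields along the lines of createUser(username: String!) and createTweet(text: String!, userId: ID!) — the exact signatures depend on the schema defined in the services — the playground mutations might look like:

```graphql
# Field and argument names are assumptions based on the services described.
mutation CreateUser {
  createUser(username: "@elonmusk") {
    id
  }
}

mutation CreateTweet {
  createTweet(text: "I own Tesla", userId: "<id returned by CreateUser>") {
    id
    text
  }
}
```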
After adding a couple of those, we are good to fire our queries, which will allow the gateway to compose the data by resolving fields through different services.
Queries
First, let's list all the tweets along with their creator, which is of type User. The query will look something like:
{
  tweets {
    text
    creator {
      username
    }
  }
}
When the gateway encounters a query asking for tweet data, it forwards that query to the Tweet service since the Tweet service that extends the Query type has a tweet query defined in it.
On encountering the creator field of the tweet schema, which is of the type User, the creator resolver within the Tweet service is invoked. This is essentially just passing a __typename and an id, which tells the gateway to resolve this reference from another service.
In the User service, we have a __resolveReference function, which returns the complete instance of a user given its id passed from the Tweet service. It also serves any other implementing service that needs a User entity reference resolved.
On firing the query, the response should look something like:
{
  "data": {
    "tweets": [
      { "text": "I own Tesla", "creator": { "username": "@elonmusk" } },
      { "text": "I own SpaceX", "creator": { "username": "@elonmusk" } },
      { "text": "I own PayPal", "creator": { "username": "@elonmusk" } },
      { "text": "I own Microsoft", "creator": { "username": "@billgates" } },
      { "text": "I own XBOX", "creator": { "username": "@billgates" } }
    ]
  }
}
Now, let’s try it the other way round. Let’s list all users and add the field tweets that will be an array of all the tweets created by that user. The query should look something like:
{
  users {
    username
    tweets {
      text
    }
  }
}
When the gateway encounters the query of type users, it passes down that query to the user service. The User service is responsible for resolving the username field of the query.
On encountering the tweets field of the users query, the gateway checks if any other implementing service has extended the User entity and has a resolver written within the service to resolve any additional fields of the type User.
The Tweet service has extended the type User and has a resolver for the User type to resolve the tweets field, which will fetch all the tweets created by the user given the id of the user.
On firing the query, the response should be something like:
{
  "data": {
    "users": [
      {
        "username": "@elonmusk",
        "tweets": [
          { "text": "I own Tesla" },
          { "text": "I own SpaceX" },
          { "text": "I own PayPal" }
        ]
      },
      {
        "username": "@billgates",
        "tweets": [
          { "text": "I own Microsoft" },
          { "text": "I own XBOX" }
        ]
      }
    ]
  }
}
Conclusion
Scaling an enterprise data graph on a monolithic GraphQL service brings many challenges. The ability to split our data graph into implementing services that can be individually maintained and scaled using Apollo Federation helps quell those concerns.
There are further advantages to federated services. Considering our example above, we could have two different kinds of datastores for the User and the Tweet service. While the User data could reside in a NoSQL database like MongoDB, the Tweet data could live in a SQL database like Postgres or MySQL. This would be easy to implement, since each service is responsible for resolving references only for the type it owns.
Final Thoughts
One of the key advantages of having services that can be maintained individually is the ability to deploy each one separately, even to different platforms such as Firebase or AWS Lambda.
A single monolith GraphQL server deployed on an instance or a single serverless platform can have some challenges with respect to scaling an instance or handling high concurrency as mentioned above.
By splitting out the services, we could have a separate serverless function for each implementing service that can be maintained or scaled individually and also a separate function on which the gateway can be deployed.
One popular use of GraphQL federation can be seen in this Netflix Technology blog post, which explains how they solved a bottleneck with the GraphQL APIs in Netflix Studio. They created a federated GraphQL microservices architecture, along with a schema store, using Apollo Federation. This solution gave them a unified schema with distributed ownership and implementation.
If you landed here directly and want to know how to set up the Jenkins master-slave architecture, please visit this post on Setting up the Jenkins Master-Slave Architecture.
The source code that we are using here is also a continuation of the code that was written in this GitHub Packer-Terraform-Jenkins repository.
Creating Jenkinsfile
We will create Jenkinsfiles to execute jobs from our Jenkins master.
I will create two Jenkinsfiles. Ideally, your Jenkinsfile lives in the source code repo, but it can also be passed directly in the job.
There are two ways of writing a Jenkinsfile – scripted and declarative. You can find numerous comparisons online. We will write one of each so that we can get a feel for both.
Jenkinsfile for Angular App (Scripted)
As mentioned before, we will highlight both formats of writing a Jenkinsfile. For the Angular app, we will write a scripted one, though it could easily be written in declarative format too.
We will be running this inside a docker container. Thus, the tests are also going to get executed in a headless manner.
Here is the Jenkinsfile for reference.
Here we are trying to leverage Docker volume to keep updating our source code on bare metal and use docker container for the environments.
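A condensed scripted Jenkinsfile along the lines described above might look like this; the image tags, volume paths, and exact commands are illustrative, not the post's original file.

```groovy
// Scripted Jenkinsfile sketch — stage names follow the steps described in
// this post; image tags and paths are assumptions.
node {
    stage('Clean workspace') {
        cleanWs()
    }
    stage('Main build') {
        checkout scm
        // Run install, lint, test, and build inside a Node container,
        // mounting the workspace so artifacts land on the agent.
        docker.image('node:12').inside("-v ${env.WORKSPACE}:/app") {
            sh 'npm install'
            sh 'npm run lint'
            sh 'npm run test -- --browsers ChromeHeadless --watch=false'
            sh 'npm run build'
        }
    }
    stage('Deploy') {
        // Serve the built dist/ folder from an Nginx container.
        sh "docker run -d --name angular-app -p 80:80 -v ${env.WORKSPACE}/dist:/usr/share/nginx/html nginx"
    }
}
```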
Dissecting Node App’s Jenkinsfile
We use cleanWs() to clear the workspace.
Next is the main build, in which we define our complete build process.
We pull the required images.
The steps we will execute are highlighted below.
Checkout SCM: check out our code from Git.
We then start the Node container, inside which we run npm install and npm run lint.
Get test dependency: here we download chrome.json, which is used in the next step when starting the container.
Here we test our app. The specific changes needed to run the tests are mentioned below.
Build: finally, we build the app.
Deploy: once CI is complete, we start CD. CD could be a blog post of its own, but we want to highlight what a basic deployment would do.
Here we use an Nginx container to host our application.
If the container does not exist, the pipeline creates one and uses the "dist" folder for deployment.
If the Nginx container exists, the pipeline asks for user input on whether to recreate the container.
If you choose not to recreate it, don't worry: since we are using Nginx, it will do a hot reload with the new changes.
The Angular application used here was created with the standard generate command from the CLI. Although install and build run without trouble on bare metal, a few tweaks are required to run the tests in a container.
In karma.conf.js, update browsers with ChromeHeadless.
Next, in protractor.conf.js, update browserName to chrome and add the headless Chrome options.
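The relevant fragments, assuming a default Angular CLI project, might look like this:

```javascript
// karma.conf.js (fragment)
browsers: ['ChromeHeadless'],

// protractor.conf.js (fragment)
capabilities: {
  browserName: 'chrome',
  chromeOptions: {
    args: ['--headless', '--no-sandbox', '--disable-gpu']
  }
},
```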
That’s it! And We have our CI pipeline setup for Angular based application.
Jenkinsfile for .Net App (Declarative)
For a .NET application, we have to set up MSBuild and MSDeploy. In the blog post mentioned above, we already set up MSBuild, and we will shortly discuss how to set up MSDeploy.
To do the Windows deployment, we have two options: either set up MSBuild in Jenkins Global Tool Configuration or use the full path of MSBuild on the slave machine.
Passing the path is fairly simple, so here we will discuss how to use Global Tool Configuration in a Jenkinsfile.
First, get the path of MSBuild from your server. Depending on the version, the binaries live under either the Current directory or a version-numbered directory.
As we are using MSBuild 2017, our MSBuild path is:
Now you have your configuration ready to be used in Jenkinsfile.
Jenkinsfile to build and test the app is given below.
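A declarative Jenkinsfile along the lines described here might look like this; the tool name 'MSBuild2017', agent label, and solution paths are assumptions for the sketch.

```groovy
// Declarative Jenkinsfile sketch — tool name, label, and paths illustrative.
pipeline {
    agent { label 'windows' }
    stages {
        stage('Clean') {
            steps { cleanWs() }
        }
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Nuget Restore') {
            steps { bat 'nuget restore PrimeService.sln' }
        }
        stage('Build') {
            // Uses the MSBuild configured in Global Tool Configuration.
            steps { bat "\"${tool 'MSBuild2017'}\" PrimeService.sln /p:Configuration=Release" }
        }
        stage('UnitTest') {
            steps { bat 'dotnet test PrimeService.Tests' }
        }
    }
}
```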
As seen above, the structure of the declarative syntax is almost the same as that of the scripted one. Opt for whichever syntax you find easier to read.
Dissecting Dotnet App’s Jenkinsfile
In this case too, we clean the workspace as the first step.
Checkout: this is the same as before.
Nuget Restore: we download the required dependency packages for both PrimeService and PrimeService.Tests.
Build: we build the .NET app using the MSBuild tool we configured before writing the Jenkinsfile.
UnitTest: here we used dotnet test, although we could have used MSTest as well; we just wanted to highlight how easy the dotnet CLI makes it. We could even use dotnet build for the build step.
Deploy: deploy to the IIS server. Creating the IIS server is covered below.
From the examples above, you get a sense of what a Jenkinsfile looks like and how it can be used to create jobs. These files cover basic job creation, but they can be extended to everything old-style job creation could do.
Creating IIS Server
Unlike our Angular application, where we just had to pull another image and we were good to go, here we will use Packer to create our IIS server. We will automate the creation process and use the server to host applications.
Here is a Powershell script for IIS for reference.
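A minimal script along those lines could be the following; the feature names assume a standard Windows Server, and the ASP.NET dependency is an example, not a requirement of the post.

```powershell
# Install IIS with the management console.
Install-WindowsFeature -Name Web-Server -IncludeManagementTools

# Add ASP.NET support — an assumed application dependency for illustration.
Install-WindowsFeature -Name Web-Asp-Net45

# Confirm the IIS service (W3SVC) is present and running.
Get-Service -Name W3SVC
```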
We won't be deploying any application on it, as we created a sample PrimeNumber app. But in the real world, you might be deploying a web-based application and will need IIS. We have covered the basic idea of installing IIS along with any dependencies that might be required.
Conclusion
In this post, we have covered deploying Windows and Linux based applications using Jenkinsfile in both scripted and declarative format.
In the first part, getting started with Kubernetes operators (Helm based), and the second part, getting started with Kubernetes operators (Ansible based), of this Introduction to Kubernetes operators blog series, we learned various concepts related to Kubernetes operators and created a Helm based operator and an Ansible based operator, respectively. In this final part, we will build a Golang based operator. With Helm based operators, we executed a Helm chart when changes were made to the custom object type of our application; similarly, with an Ansible based operator, we executed an Ansible role. With a Golang based operator, we write the code for the action to perform (the reconcile logic) whenever the state of our custom object changes. This makes Golang based operators the most powerful and flexible of the three, and also the most complex to build.
What Will We Build?
The database server we deployed as part of our book store app in the previous blogs didn't have any persistent volume attached, so we would lose data if the pod restarted. To avoid this, we will attach a persistent volume on the host (the K8s worker nodes) and run our database as a StatefulSet rather than a Deployment. We will also add a feature to expand the persistent volume associated with the MongoDB pod.
Building the Operator
1. Set up the project:
operator-sdk new bookstore-operator --dep-manager=dep
INFO[0000] Generating api version blog.velotio.com/v1alpha1 for kind BookStore.
INFO[0000] Created pkg/apis/blog/group.go
INFO[0001] Created pkg/apis/blog/v1alpha1/bookstore_types.go
INFO[0001] Created pkg/apis/addtoscheme_blog_v1alpha1.go
INFO[0001] Created pkg/apis/blog/v1alpha1/register.go
INFO[0001] Created pkg/apis/blog/v1alpha1/doc.go
INFO[0001] Created deploy/crds/blog.velotio.com_v1alpha1_bookstore_cr.yaml
INFO[0009] Created deploy/crds/blog.velotio.com_bookstores_crd.yaml
INFO[0009] Running deepcopy code-generation for Custom Resource group versions: [blog:[v1alpha1], ]
INFO[0010] Code-generation complete.
INFO[0010] Running OpenAPI code-generation for Custom Resource group versions: [blog:[v1alpha1], ]
INFO[0011] Created deploy/crds/blog.velotio.com_bookstores_crd.yaml
INFO[0011] Code-generation complete.
INFO[0011] API generation complete.
The above command creates the bookstore-operator folder in our $GOPATH/src. We set --dep-manager to dep, which signifies we want to use dep for managing dependencies; by default the SDK uses Go modules. As we have seen earlier, the operator SDK creates all the necessary folder structure for us inside the bookstore-operator folder.
2. Add the custom resource definition
operator-sdk add api --api-version=blog.velotio.com/v1alpha1 --kind=BookStore
The above command creates the CRD and CR for the BookStore type. It also creates the Golang structs (pkg/apis/blog/v1alpha1/bookstore_types.go) for the BookStore type, registers the custom type with the scheme (pkg/apis/blog/v1alpha1/register.go), and generates deep-copy methods. Here we can see that all the generic tasks are done by the operator framework itself, allowing us to focus on building the object and the controller. Later, we will update the spec of the BookStore type to include two custom types, BookApp and BookDB.
INFO[0000] Generating controller version blog.velotio.com/v1alpha1 for kind BookStore.
INFO[0000] Created pkg/controller/bookstore/bookstore_controller.go
INFO[0000] Created pkg/controller/add_bookstore.go
INFO[0000] Controller generation complete.
The above command adds the bookstore controller (pkg/controller/bookstore/bookstore_controller.go) to the project and also adds it to the manager.
If we take a look at the add function in the bookstore_controller.go file, we can see that a new controller is created there and added to the manager so that the manager can start the controller when it (the manager) comes up. The add(mgr manager.Manager, r reconcile.Reconciler) function is called by the public function Add(mgr manager.Manager), which also creates a new reconciler object and passes it to add, where the controller is associated with the reconciler. In the add function we also set the type of object (BookStore) that the controller will watch.
// Watch for changes to primary resource BookStore
err = c.Watch(&source.Kind{Type: &blogv1alpha1.BookStore{}}, &handler.EnqueueRequestForObject{})
if err != nil {
	return err
}
This ensures that for any event related to any object of BookStore type, a reconcile request (a namespace/name key) is sent to the Reconcile method associated with the reconciler object (ReconcileBookStore).
4. Build the reconcile logic
The reconcile logic is implemented inside the Reconcile method of the custom type's reconciler object, which implements the reconcile loop.
As part of our reconcile logic, we will do the following:
Create the bookstore app deployment if it doesn’t exist.
Create the bookstore app service if it doesn’t exist.
Create the Mongodb statefulset if it doesn’t exist.
Create the Mongodb service if it doesn’t exist.
Ensure deployments and services match their desired configurations like the replica count, image tag, service port, size of the PV associated with the Mongodb statefulset etc.
There are three possible events for the BookStore object:
The object got created: whenever an object of kind BookStore is created, we create all the K8s resources mentioned above.
The object got updated: when the object is updated, we update all the K8s resources associated with it.
The object got deleted: when the object is deleted, we don't need to do anything. While creating the K8s objects, we set the BookStore object as their owner, which ensures that all the K8s objects associated with it get automatically deleted when we delete the object.
On receiving the reconcile request, the first step is to look up the object.
If the object is not found, we assume it was deleted and don't requeue the request, considering the reconcile successful.
If any error occurs during the reconcile, we return it; whenever we return a non-nil error, the controller requeues the request.
In the reconcile logic we call the BookStore method which creates or updates all the k8s objects associated with the BookStore objects based on whether the object has been created or updated.
The implementation of the above method is a bit hacky but gives an idea of the flow. In the above function, we can see that we are setting the BookStore type as an owner for all the resources controllerutil.SetControllerReference(c, bookStoreDep, r.scheme) as we had discussed earlier. If we look at the owner reference for these objects we would see something like this.
The approach to deploy and verify the working of the bookstore application is similar to what we did in the previous two blogs the only difference being that now we have deployed the Mongodb as a stateful set and even if we restart the pod we will see that the information that we stored will still be available.
6. Verify volume expansion
For updating the volume associated with the MongoDB instance, we first need to update the size of the volume we specified while creating the bookstore object. In the example above, I had set it to 2GB; let's update it to 3GB and update the bookstore object.
Once the bookstore object is updated, if we describe the MongoDB PVC, we will see that it still has a 2GB PV, but the conditions will look something like this:
Conditions:
  Type                     Status  LastProbeTime                    LastTransitionTime               Reason  Message
  ----                     ------  -------------                    ------------------               ------  -------
  FileSystemResizePending  True    Mon, 01 Jan 0001 00:00:00 +0000  Mon, 30 Sep 2019 15:07:01 +0530          Waiting for user to (re-)start a pod to finish file system resize of volume on node.
It is clear from the message that we need to restart the pod for resizing of volume to reflect. Once we delete the pod it will get restarted and the PVC will get updated to reflect the expanded volume size.
Golang based operators are built mostly for stateful applications like databases. The operator can automate complex operational tasks and let us run such applications with ease. At the same time, building and maintaining one can be quite complex, and we should build one only when we are fully convinced that our requirements can't be met with any other type of operator. Operators are an interesting and emerging area in Kubernetes, and I hope this blog series on getting started with them helps readers learn the basics.
In the first part of this blog series, getting started with Kubernetes operators (Helm based), we learned the basics of operators and built a Helm based operator. In this blog post, we will try out an Ansible based operator. Ansible is a very popular tool used by organizations across the globe for configuration management, deployment, and automation of other operational tasks. This makes Ansible an ideal tool for building operators, since with operators we also intend to eliminate or minimize the manual intervention required while running and managing our applications on Kubernetes. Ansible based operators allow us to use Ansible playbooks and roles to manage our application on Kubernetes.
Operator Maturity Model
Image source: Github
Before we start building the operator, let's spend some time understanding the operator maturity model. The model gives an idea of the kinds of application management capabilities different types of operators can have. As we can see in the diagram above, the model describes five generic phases of maturity/capability for operators. The minimum expectation from an operator is that it should be able to deploy/install and upgrade an application, and that is provided by all operator types. Helm based operators are the simplest of them all, as Helm is a chart manager and supports only installs and upgrades. Ansible based operators can be more mature, as Ansible has modules to perform a wide variety of operational tasks; we can use these modules in the Ansible roles and playbooks our operator runs and make them handle more complex applications or use cases. With Golang based operators, we write the operational logic ourselves, so we have the liberty to customize it to our requirements.
Building an Ansible Based Operator
1. Let’s first install the operator sdk
go get -d github.com/operator-framework/operator-sdk
cd $GOPATH/src/github.com/operator-framework/operator-sdk
git checkout master
make dep
make install
Now we will have the operator-sdk binary in the $GOPATH/bin folder.
2. Setup the project
operator-sdk new bookstore-operator --api-version=blog.velotio.com/v1alpha1 --kind=BookStore --type=ansible
In the above command, we set the operator type to ansible because we want an Ansible based operator. It creates a folder structure as shown below:
bookstore-operator/
|- build/        # Contains the Dockerfile to build the operator image.
|- deploy/       # Contains the CRD, CR, and manifest files for deploying the operator.
|- roles/        # Contains the Ansible role the operator executes.
|- molecule/     # molecule is used for testing the Ansible roles.
|- watches.yaml  # Specifies the resource the operator watches (maintains the state of).
Inside the roles folder, it creates an Ansible role name `bookstore`. This role is bootstrapped with all the directories and files which are part of the standard ansible roles.
Here we can see that the operator is going to watch events related to objects of the BookStore kind and execute the Ansible role bookstore. Drawing parallels with our Helm based operator, the behavior in both cases is similar; the only difference is that the Helm based operator executed the specified Helm chart in response to events on the object it was watching, while here we execute an Ansible role.
With Ansible based operators, we can also have the operator execute an Ansible playbook rather than an Ansible role.
3. Building the bookstore Ansible role
Now we need to modify the bookstore Ansible roles created for us by the operator-framework.
First, we will update the custom resource (CR) file (blog_v1alpha1_bookstore_cr.yaml) available at deploy/crds/. In this CR we configure all the values we want to pass to the bookstore Ansible role. By default the CR contains only the size field; we will update it to include the other fields we need in our role. To keep things simple, we will include just some basic variables like the image name and tag in our spec.
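The updated CR might look something like this; the field names beyond size are illustrative choices for this sketch.

```yaml
# Illustrative CR — field names beyond `size` are assumptions.
apiVersion: blog.velotio.com/v1alpha1
kind: BookStore
metadata:
  name: example-bookstore
spec:
  size: 1
  image: bookstore-app
  tag: latest
  pullPolicy: IfNotPresent
```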
The Ansible operator passes the key-value pairs listed in the spec of the CR as variables to Ansible. The operator converts the variable names to snake_case before running Ansible, so when we use the variables in our role, we refer to them in snake_case.
Next, we need to create the tasks the bookstore role will execute. By default, an Ansible role executes the tasks defined at /tasks/main.yml. To define our deployment, we will leverage the k8s module of Ansible and create a Kubernetes Deployment and Service for our app as well as for MongoDB.
In the file above, we can see that the pullPolicy field defined in our CR spec is used as pull_policy in our tasks. Here we have used inline definitions to create our K8s objects, since our app is quite simple. For large applications, creating objects using separate definition files would be a better approach.
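For illustration, a task creating the app Deployment with the k8s module might look like the fragment below; names and labels are assumptions, while meta.namespace and the snake_case variables are supplied by the Ansible operator as described above.

```yaml
# roles/bookstore/tasks/main.yml (fragment) — names and labels illustrative.
- name: Create the bookstore app deployment
  k8s:
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: bookstore-app
        namespace: '{{ meta.namespace }}'
      spec:
        replicas: "{{ size }}"
        selector:
          matchLabels:
            app: bookstore
        template:
          metadata:
            labels:
              app: bookstore
          spec:
            containers:
              - name: bookstore
                image: "{{ image }}:{{ tag }}"
                imagePullPolicy: "{{ pull_policy }}"
```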
4. Build the bookstore-operator image
The Dockerfile for building the operator image is already in our build folder; we need to run the build command from the root folder of our operator project to build the image.
You can use your own Docker repository instead of 'akash125/bookstore-operator'.
5. Run the bookstore-operator
As we have our operator image ready, we can now go ahead and run it. The deployment file (operator.yaml under the deploy folder) for the operator was created as part of our project setup; we just need to set the image for this deployment to the one we built in the previous step.
After updating the image in the operator.yaml we are ready to deploy the operator.
Note: The role created might have more permissions than actually required for the operator, so it is always a good idea to review it and trim down the permissions in production setups.
Verify that the operator pod is in running state.
Here, two containers have been started as part of the operator deployment: one is the operator and the other is ansible. The ansible container exists only to make the logs available to stdout in Ansible format.
6. Deploy the bookstore app
Now we have the bookstore-operator running in our cluster we just need to create the custom resource for deploying our bookstore app.
Before we can create the bookstore CR, we need to register its CRD.
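Assuming the default file names the SDK generated under deploy/crds (they may differ slightly between SDK versions), that amounts to:

```
# Register the CRD first, then create the custom resource.
kubectl apply -f deploy/crds/blog_v1alpha1_bookstore_crd.yaml
kubectl apply -f deploy/crds/blog_v1alpha1_bookstore_cr.yaml
```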
In this blog post, we learned how to create an Ansible based operator using the operator framework. Ansible based operators are a great way to combine the power of Ansible and Kubernetes: they allow us to deploy our applications using Ansible roles and playbooks, and we can pass parameters to them (control them) using custom K8s resources. If Ansible is heavily used across your organization and you are migrating to Kubernetes, then Ansible based operators are an ideal choice for managing deployments. In the next blog, we will learn about Golang based operators.
These days, most web applications are driven by JavaScript frameworks that include front-end and back-end development. So, we need to have a robust QA automation framework that covers APIs as well as end-to-end tests (E2E tests). These tests check the user flow over a web application and confirm whether it meets the requirement.
Full-stack QA testing is critical in stabilizing APIs and UI, ensuring a high-quality product that satisfies user needs.
To test UI and APIs independently, we can use several tools and frameworks, like Selenium, Postman, Rest-Assured, Nightwatch, Katalon Studio, and Jest, but this article focuses on Cypress.
We will cover how we can do full stack QA testing using Cypress.
What exactly is Cypress?
Cypress is a free, open-source, locally installed Test Runner and Dashboard Service for recording your tests. It is a frontend and backend test automation tool built for the next generation of modern web applications.
It is useful for developers as well as QA engineers to test real-life applications developed in React.js, Angular.js, Node.js, Vue.js, and other front-end technologies.
How does Cypress Work Functionally?
Cypress is executed in the same run loop as your application. Behind Cypress is a Node.js server process.
Most testing tools operate by running outside of the browser and executing remote commands across the network. Cypress does the opposite, while at the same time working outside of the browser for tasks that require higher privileges.
Cypress takes snapshots of your application and enables you to time travel back to the state it was in when commands ran.
Why Use Cypress Over Other Automation Frameworks?
Cypress is a JavaScript test automation solution for web applications.
This all-in-one testing framework provides a chai assertion library with mocking and stubbing all without Selenium. Moreover, it supports the Mocha test framework, which can be used to develop web test automations.
Key Features of Cypress:
Mocking – By mocking the server response, it has the ability to test edge cases.
Time Travel – It takes snapshots as your tests run, allowing users to go back and forth in time during test scenarios.
Flake Resistant – It automatically waits for commands and assertions before moving on.
Spies, Stubs, and Clocks – It can verify and control the behavior of functions, server responses, or timers.
Real Time Reloads – It automatically reloads whenever you make changes to your tests.
Consistent Results – It gives consistent and reliable tests that aren’t flaky.
Network Traffic Control – Easily control, stub, and test edge cases without involving your server.
Automatic Waiting – It automatically waits for commands and assertions without ever adding waits or sleeps to your tests. No more async hell.
Screenshots and Videos – View screenshots taken automatically on failure, or videos of your entire test suite when it has run smoothly.
Debuggability – Readable error messages help you to debug quickly.
Fig:- How Cypress works
Installation and Configuration of the Cypress Framework
Let's create a sample test under cypress/integration/examples/tests in the spec file e2e_test.spec.js.
Test files should follow the naming convention test_name.spec.js.
To run the Cypress test, use the following command:
$ npx cypress run --spec "cypress/integration/examples/tests/e2e_test.spec.js"
This is how the folder structure will look:
Fig:- Cypress Framework Outline
REST API Testing Using Cypress
It’s important to test APIs along with E2E UI tests, and it can also be helpful to stabilize APIs and prepare data to interact with third-party servers.
Cypress provides the functionality to make an HTTP request.
Using Cypress’s Request() method, we can validate GET, POST, PUT, and DELETE API Endpoints.
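For example, a spec validating a hypothetical users endpoint with cy.request() might look like the sketch below. It runs inside the Cypress runner; the endpoint URL, payload, and expected status are assumptions.

```javascript
// Illustrative API spec — endpoint, payload, and status code are assumed.
describe('Users API', () => {
  it('creates a user and reads the list back', () => {
    // POST a new user and assert on the response status.
    cy.request('POST', 'https://api.example.com/users', { name: 'Jane' })
      .then((response) => {
        expect(response.status).to.eq(201);
      });

    // GET the collection and assert the body is non-empty.
    cy.request('GET', 'https://api.example.com/users')
      .its('body')
      .should('not.be.empty');
  });
});
```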
End-to-End Testing Using Cypress
With Cypress end-to-end testing, you can replicate user behavior on your application and cross-check whether everything works as expected. In this section, we'll look at useful ways to write E2E tests on the front end using Cypress.
Here is an example of how to write E2E test in Cypress:
describe('Testing Google Search', () => {
  // Test Case 1: passes
  it('I can search for Valid Content on Google', () => {
    cy.visit('https://www.google.com');
    cy.get("input[title='Search']").type('Cypress').type('{enter}');
    cy.contains('https://www.cypress.io');
  });

  // Test Case 2: fails (wrong URL)
  it('I can navigate to Wrong URL', () => {
    cy.visit('http://localhost:8080');
    cy.get("input[title='Search']").type('Cypress').type('{enter}');
    cy.contains('https://www.cypress.io');
  });
});
Cross Browser Testing Using Cypress
Cypress can run tests across the latest releases of multiple browsers. It currently has support for Chrome and Firefox (beta).
Cypress supports the following browsers:
Google Chrome
Firefox (beta)
Chromium
Edge
Electron
Browsers can be specified via the --browser flag when using the run command to launch Cypress. npm scripts can be used as shortcuts in package.json to launch Cypress with a specific browser more conveniently.
To run tests on browsers:
$ npx cypress run --browser chrome --spec "cypress/integration/examples/tests/e2e_test.spec.js"
Here is an example of a package.json file to show how to define the npm script:
"scripts": {
  "cy:run:chrome": "cypress run --browser chrome",
  "cy:run:firefox": "cypress run --browser firefox"
}
Cypress Reporting
Reporter options can be specified in the cypress.json configuration file or via CLI options. Cypress supports the following reporting capabilities:
Mocha built-in reporting – as Cypress is built on top of Mocha, Mocha’s built-in reporters (such as the default spec reporter) work out of the box.
JUnit and TeamCity – these third-party Mocha reporters are bundled into Cypress.
To install additional dependencies for report generation:
Installing Mochawesome:
$ npm install mochawesome
Or installing the JUnit reporter:
$ npm install mocha-junit-reporter
Here is an example of selecting a reporter and passing reporter options via the CLI (the same settings can also live in cypress.json):
$ npx cypress run --reporter junit --reporter-options "mochaFile=results/my-test-output.xml,toConsole=true"
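For the config-file route, a cypress.json fragment enabling Mochawesome might look roughly like this (the reporterOptions shown are common Mochawesome settings; adjust them to your project):

```json
{
  "reporter": "mochawesome",
  "reporterOptions": {
    "reportDir": "cypress/reports",
    "overwrite": false,
    "html": true,
    "json": true
  }
}
```

With this in place, a plain `npx cypress run` produces the Mochawesome report without any extra CLI flags.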
Fig:- Collapsed View of Mochawesome Report
Fig:- Expanded View of Mochawesome Report
Fig:- Mochawesome Report Settings
Additional Possibilities of Using Cypress
We’ve covered the most important aspects of the tool, but there are several other things Cypress can do that we could not explore in this article:
Continuous integration and continuous deployment with Jenkins
Behavior-driven development (BDD) using Cucumber
Automating applications with XHR
Test retries and retry ability
Custom commands
Environment variables
Plugins
Visual testing
Slack integration
Model-based testing
GraphQL API Testing
Limitations with Cypress
Cypress is a great tool with a great community supporting it. Although it is still young, it is being continuously developed and is quickly catching up with the other full-stack automation tools on the market.
So, before you decide to use Cypress, we would like to touch upon some of its limitations. These limitations are for version 5.2.0, the latest version of Cypress at the time of this article’s publishing.
Here are the current limitations of using Cypress:
It can’t use two browsers at the same time.
It doesn’t provide support for multi-tabs.
It only supports the JavaScript language for creating test cases.
It doesn’t currently provide support for browsers like Safari and IE.
It has limited support for iFrames.
Conclusion
Cypress is a great tool with a growing feature-set. It makes setting up, writing, running, and debugging tests easy for QA automation engineers. It also has a quicker learning cycle with a good, baked-in execution environment.
It is fully JavaScript/MochaJS-oriented, with new APIs designed to make scripting easier, and its test runner is flexible enough to absorb significant and unexpected changes in the application under test.
In this blog, we talked about how Cypress works functionally, performed end-to-end UI testing, and touched upon its limitations. We hope you learned more about using Cypress as a full-stack test automation tool.
Flutter and React Native are two of the most popular cross-platform development frameworks on the market. Both of these technologies enable you to develop applications for iOS and Android with a single codebase. However, they’re not entirely interchangeable.
Flutter allows developers to create Material Design-like applications with ease. React Native, on the other hand, has an active community of open source contributors, which means that it can be easily modified to meet almost any standard.
In this blog, we have compared both of these technologies based on popularity, performance, learning curve, community support, and developer mindshare to help you decide which one you can use for your next project.
But before digging into the comparison, let’s have a brief look at both these technologies:
About React Native
React Native has gained the attention of many developers for its ease of use with JS code. Facebook developed the framework to bring React to cross-platform application development and introduced React Native at its first React.js conference in 2015.
React Native enables developers to create high-end mobile apps with the help of JavaScript. This eventually comes in handy for speeding up the process of developing mobile apps. The framework also makes use of the impressive features of JavaScript while maintaining excellent performance. React Native is highly feature-rich and allows you to create dynamic animations and gestures which are usually unavailable in the native platform.
React Native has been adopted by many companies as their preferred technology.
For example:
Facebook
Instagram
Skype
Shopify
Tesla
Salesforce
About Flutter
Flutter is an open-source mobile development kit that makes it easy for developers to build high-quality applications for Android and iOS. It has a library with widgets to create the user interface of the application independent of the platform on which it is supported.
Flutter has extended the reach of mobile app development by enabling developers to build apps on any platform without being restrained by mobile development limitations. The framework started as an internal project at Google back in 2015, with its first stable release in 2018.
Since its inception, Google aimed to provide a simplistic, usable programming language for building sophisticated apps and wanted to carry out Dart’s goal of replacing JavaScript as the next-generation web programming language.
Let’s see which apps are built using Flutter:
Google Ads
eBay
Alibaba
BMW
Philips Hue
React Native vs. Flutter – An overall comparison
Design Capabilities
React Native is based on React.js, one of the most popular JavaScript libraries for building user interfaces. It is often used with Redux, which provides a solid basis for creating predictable web applications.
Flutter, on the other hand, is Google’s newer mobile UI framework. It uses the Dart language, which is compiled to native code for iOS and Android apps.
Both React Native and Flutter can be used to create applications with beautiful graphics and smooth animations.
React Native
In the React Native framework, UI elements look native to both iOS and Android platforms. These UI elements make it easier for developers to build apps because they only have to write them once. In addition, many of these components also render natively on each platform. The user experiences an interface that feels more natural and seamless while maintaining the capability to customize the app’s look and feel.
The framework allows developers to use JavaScript and React-style components for cross-platform development. While React Native allows you to build native apps, it does not mean that your app will look the same on both iOS and Android.
Flutter
Flutter is a toolkit for creating high-performance, high-fidelity mobile apps for iOS and Android. Flutter works with existing code, is used by developers and organizations worldwide, and is free and open source. By default, Flutter offers a standard, platform-neutral style.
Flutter has its own widgets library, which includes Material Design Components and Cupertino.
The Material package contains widgets that look like they belong on Android devices. The Cupertino package contains widgets that look like they belong on iOS devices. By default, a Flutter application uses Material widgets. If you want to use Cupertino widgets, then import the Cupertino library and change your app’s theme to CupertinoTheme.
Community
Flutter and React Native have a very active community of developers. Both frameworks have extensive support and documentation and an active GitHub repository, which means they are constantly being maintained and updated.
With the Flutter community, we can even find exciting tools such as Flutter Inspector or Flutter WebView Plugin. In the case of React Native, Facebook has been investing heavily in this framework. Besides the fact that the development process is entirely open-source, Facebook has created various tools to make the developer’s life easier.
Also, the more updates and versions come out, the more interest and appreciation the developer community shows. Let’s see how both frameworks stack up when it comes to community engagement.
For React Native
The Facebook community is the most significant contributor to the React Native framework, followed by the community members themselves.
React Native has garnered over 1,162 contributors on GitHub since its launch in 2015. The number of commits (or changes) to the framework has increased over time. It increased from 1,183 commits in 2016 to 1,722 commits in 2017.
This increase indicates that more and more developers are interested in improving React Native.
Moreover, there are over 19.8k live projects where developers share their experiences to resolve existing issues. The official React Native website offers tutorials for beginners who want to get started quickly with developing applications for Android and iOS while also providing advanced users with the necessary documentation.
Also, there are a few other platforms where you can ask your question to the community, meet other React Native developers, and gain new contacts:
For Flutter
The Flutter community is smaller than React Native’s. The main reason is that Flutter is relatively new and is not yet widely used in production apps. But it’s not hard to see that its popularity is growing day by day. Flutter has excellent documentation with examples, articles, and tutorials that you can find online. It also has direct contact with its users through channels such as Stack Overflow and Google Groups.
The community of Flutter is growing at a steady pace with around 662 contributors. The total count of projects being forked by the community is 13.7k, where anybody can seek help for development purposes.
Here are some platforms to connect with other developers in the Flutter community:
Learning Curve
The learning curve of Flutter is steeper than React Native’s. However, you can learn both frameworks within a reasonable time frame. So, let’s discuss what would be required to learn React Native and Flutter.
React Native
The language of React Native is JavaScript. Any person who knows how to write JS will be able to utilize this framework. But, it’s different from building web applications. So if you are a mobile developer, you need to get the hang of things that might take some time.
However, React Native is relatively easy to learn for newbies. For starters, it offers a variety of resources, both online and offline. On the React website, users can find the documentation, guides, FAQs, and learning resources.
Flutter
Flutter has a somewhat steeper learning curve than React Native. It helps to know some basic concepts of native Android or iOS development, and experience with Java or Kotlin (Android) or Objective-C or Swift (iOS) makes the transition easier. Dart can be a challenge if you’re accustomed to languages without type casts and generics. However, once you’ve learned how to use it, it can speed up your development process.
Flutter provides great documentation of its APIs that you can refer to. Since the framework is still new, some information might not be updated yet.
Team size
The central aspect of choosing between React Native and Flutter is the team size. To set a realistic expectation on the cost, you need to consider the type of application you will develop.
React Native
Technically, React Native’s core library can be implemented by a single developer. This developer will have to build all native modules by himself, which is not an easy task. However, the required team size for React Native depends on the complexity of the mobile app you want to build. If you plan to create a simple mobile app, such as a mobile-only website, then one developer will be enough. However, if your project requires complex UI and animations, then you will need more skillful and experienced developers.
Flutter
Team size is a very important factor for Flutter app development. The number of people in your team will depend on the requirements and the type of app you need to develop.
Flutter makes it easy to reuse existing code that you might already have, or share code with other apps you are building. You can even call into existing Java or Kotlin code if needed, though Dart is the primary language.
UI component
When developing a cross-platform app, keep in mind that not all platforms behave identically. You will need a framework whose core UI elements behave consistently on each platform, and whose API gives access to native modules when needed.
React Native
There are two aspects to implementing React Native in your app development. The first one is writing the apps in JavaScript. This is the easiest part since it’s somewhat similar to writing web apps. The second aspect is the integration of third-party modules that are not part of the core framework.
The reason third-party modules are required is that React Native does not support all native functionalities. For instance, if you want to implement a native alert box, you need to bridge to the UIAlertController API from Apple’s SDK.
This makes the React Native framework somewhat dependent on third-party libraries. There are lots of third-party libraries for React Native, and you can use them in your project to add native app features that are not available in React Native itself, most commonly maps, camera, sharing, and other native features.
Flutter
Flutter offers rich GUI components called widgets. A widget can be anything from simple text fields, buttons, switches, sliders, etc., to complex layouts that include multiple pages with split views, navigation bars, tab bars, etc., that are present in modern mobile apps.
The Flutter toolkit is cross-platform and it has its own widgets, but it still needs third-party libraries to create applications. It also depends on the Android SDK and the iOS SDK for compilation and deployment. Developers can use any third-party library they want as long as it does not have any restrictions on open source licensing. Developers are also allowed to create their own libraries for Flutter app development.
Testing Framework and Support
React Native and Flutter have been used to develop many high-quality mobile applications. Of course, in any technology, a well-developed testing framework is essential.
Based on this, we can see that both React Native and Flutter have a relatively mature testing framework.
React Native
React Native uses the same UI components and APIs as a web application written in React.js. This means you can use the same frameworks and libraries for both platforms. Testing a React Native application can be more complex than a traditional web-based application because it relies heavily on the device itself. If you’re using a hybrid JavaScript approach, you can use tools like WebdriverIO or Appium to run the same tests across different browsers. Still, if you’re going native, you need to make sure you choose a tool with solid native support.
Flutter
Flutter has developed a testing framework that helps ensure your application is high quality. It is based on the premise of these three pillars: unit tests, widget tests, and integration tests. As you build out your Flutter applications, you can combine all three types of tests to ensure that your application works perfectly.
Programming language
One of the most important benefits of using Flutter or React Native to develop your mobile app is the single programming language. This reduces the time required to hire developers and allows you to complete projects faster.
React Native
React Native bridges the gap between native and JavaScript environments, allowing developers to build mobile apps that run across platforms using JavaScript. It makes mobile app development faster, as it requires only one language, JavaScript, to create a cross-platform mobile app. This gives web developers a significant advantage over native application developers: they already know JavaScript and can build a mobile app prototype in a couple of days, with no need to learn Java or Swift. They can even use the same JavaScript libraries they use at work, like Redux and ImmutableJS.
Flutter
Flutter provides tools to create native mobile apps for both Android and iOS. Furthermore, it allows you to reuse code between the platforms because it supports code sharing using libraries written in Dart.
You can also choose between two different ways of creating layouts for Flutter apps. The first one is similar to CSS, while the second one is more like HTML. Both are very powerful and simple to use. By default, you should use widgets built by the Flutter team, but if needed, you can also create your own custom widgets or modify existing ones.
Tooling and DX
While using either Flutter or React Native for mobile app development, it is likely that your development team will also be responsible for the CI/CD pipeline used to release new versions of your app.
CI/CD support for Flutter and React Native is very similar at the moment. Both frameworks have good support for continuous integration (CI), continuous delivery (CD), and continuous deployment (CD). Both offer a first-class experience for building, testing, and deploying apps.
React Native
The React Native framework has existed for some time now and is pretty mature. However, it still lacks documentation around continuous integration (CI) and continuous delivery (CD) solutions. Considering the maturity of the framework, we might expect to see more investment here.
Expo, meanwhile, is a development environment and build tool for React Native. It lets you develop and run React Native apps on your computer just as you would any other web app.
Expo turns a React Native app into a single JavaScript bundle, which is then published to one of the app stores using Expo’s tools. It provides all the necessary tooling—like bundling, building, and hot reloading—and manages the technical details of publishing to each app store. Expo provides the tooling and environment so that you can develop and test your app in a familiar way, while it also takes care of deploying to production.
Flutter
Flutter’s core open-source project is mature, so the next step is to develop a rich ecosystem around it. The good news is that Flutter ships with a first-class command-line interface and integrates with Xcode, Android Studio, IntelliJ IDEA, and other fully-featured IDEs. This means Flutter can easily integrate with continuous integration/continuous deployment tools; examples include Bitrise and Codemagic. These tools are free to start with, though they offer paid plans for more features.
Here is an example of a to-do list app built with React Native and Flutter.
As you can see, both Flutter and React Native are excellent cross-platform app development tools that will be able to offer you high-quality apps for iOS and Android. The choice between React Native vs Flutter will depend on the complexity of the app that you are looking to create, your team size, and your needs for the app. Still, all in all, both of these frameworks are great options to consider to develop cross-platform native mobile applications.
Containers are a disruptive technology being adopted by startups and enterprises alike. Whenever a new infrastructure technology comes along, two areas require a lot of innovation – storage and networking. Anyone adopting containers will have faced challenges in these two areas.
Flannel is an overlay network that helps to connect containers across multiple hosts. This blog provides an overview of container networking followed by details of Flannel.
What is Docker?
Docker is the world’s leading software container platform. Developers use Docker to eliminate “works on my machine” problems when collaborating on software with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster, more securely and with repeatability for both Linux and Windows Server apps.
Need for Container networking
Containers need to talk to the external world.
Containers should be reachable from the external world so that the external world can use the services that containers provide.
Containers need to talk to the host machine. An example can be getting memory usage of the underlying host.
There should be inter-container connectivity in the same host and across hosts. An example is a LAMP stack running Apache, MySQL and PHP in different containers across hosts.
How does Docker’s original networking work?
Docker uses host-private networking. It creates a virtual bridge, called docker0 by default, and allocates a subnet from one of the private address blocks defined in RFC1918 for that bridge. For each container that Docker creates, it allocates a virtual ethernet device (called veth) which is attached to the bridge. The veth is mapped to appear as eth0 in the container, using Linux namespaces. The in-container eth0 interface is given an IP address from the bridge’s address range.
Drawbacks of Docker networking
Docker containers can talk to other containers only if they are on the same machine (and thus the same virtual bridge). Containers on different machines cannot reach each other – in fact they may end up with the exact same network ranges and IP addresses. This limits the system’s effectiveness on cloud platforms.
In order for Docker containers to communicate across nodes, they must be allocated ports on the machine’s own IP address, which are then forwarded or proxied to the containers. This means containers must either coordinate which ports they use very carefully or be allocated ports dynamically. This approach also breaks down when a container dies: the replacement container gets a new IP, invalidating the proxy rules.
Real world expectations from Docker
Enterprises expect Docker containers to be used in production-grade systems, where each component of the application can run in a different container across different grades of underlying hardware. Not all application components are the same; some are resource-intensive. It makes sense to run such resource-intensive components on compute-heavy physical servers and the rest on cost-saving cloud virtual machines. Enterprises also expect Docker containers to be replicated on demand, with the application load distributed across the replicas. This is where Google’s Kubernetes project fits in.
What is Kubernetes?
Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. It provides portability for an application to run on public, private, hybrid, and multi-cloud setups. It gives extensibility as it is modular, pluggable, hookable, and composable, and it self-heals via auto-placement, auto-restart, auto-replication, and auto-scaling of application containers. Kubernetes does not itself provide a way for containers across nodes to communicate; instead, it assumes that each pod has a unique, routable IP inside the cluster. To facilitate inter-container connectivity across nodes, any networking solution based on a pure Layer-3, VxLAN, or UDP model can be used. Flannel is one such solution, providing an overlay network with both UDP- and VxLAN-based backends.
Flannel: a solution for networking for Kubernetes
Flannel is a basic overlay network that works by assigning a range of subnet addresses (usually IPv4 with a /24 or /16 subnet mask). An overlay network is a computer network that is built on top of another network. Nodes in the overlay network can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network.
While flannel was originally designed for Kubernetes, it is a generic overlay network that can be used as a simple alternative to existing software-defined networking solutions. More specifically, flannel gives each host an IP subnet (/24 by default) from which the Docker daemon is able to allocate IPs to individual containers. Each such address corresponds to a single container, so containers across different hosts get unique, routable IPs.
It works by first configuring an overlay network, with an IP range and the size of the subnet for each host. For example, one could configure the overlay to use 10.1.0.0/16 and each host to receive a /24 subnet. Host A could then receive 10.1.15.1/24 and host B could get 10.1.20.1/24. Flannel uses etcd to maintain a mapping between allocated subnets and real host IP addresses. For the data path, flannel uses UDP to encapsulate IP datagrams to transmit them to the remote host.
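The per-host subnet carving described above can be sketched with a small, self-contained Go helper. This is purely an illustration of the addressing scheme, not flannel’s actual allocation code; the function name and the third-octet trick are assumptions for the sketch.

```go
package main

import (
	"fmt"
	"net"
)

// hostSubnet is a hypothetical helper illustrating how an overlay network
// like flannel might carve per-host /24 subnets out of a cluster-wide /16.
// It assumes an IPv4 /16 network; flannel's real allocator is etcd-backed.
func hostSubnet(network string, hostIndex int) (string, error) {
	_, ipnet, err := net.ParseCIDR(network)
	if err != nil {
		return "", err
	}
	ip := ipnet.IP.To4()
	// Pick the Nth /24 inside the /16 by setting the third octet.
	ip[2] = byte(hostIndex)
	return fmt.Sprintf("%s/24", ip), nil
}

func main() {
	a, _ := hostSubnet("10.1.0.0/16", 15) // host A's subnet
	b, _ := hostSubnet("10.1.0.0/16", 20) // host B's subnet
	fmt.Println(a) // 10.1.15.0/24
	fmt.Println(b) // 10.1.20.0/24
}
```

Hosts A and B end up with the disjoint 10.1.15.0/24 and 10.1.20.0/24 ranges from the text’s example; flannel records exactly this kind of subnet-to-host mapping in etcd.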
As a result, complex, multi-host systems such as Hadoop can be distributed across multiple Docker container hosts, using Flannel as the underlying fabric, resolving a deficiency in Docker’s native container address mapping system.
Integrating Flannel with Kubernetes
A Kubernetes cluster consists of a master node and multiple minion nodes. Each minion node gets its own subnet through the flannel service, and Docker must be configured to use the subnet created by flannel. The master runs an etcd server, and the flannel service on each minion uses that etcd server to register its containers’ IPs; the etcd server stores a key-value mapping of each container and its IP. kube-apiserver uses the etcd server to get the IP mappings and assign service IPs accordingly. Kubernetes creates iptables rules through kube-proxy, which allocate static endpoints and load balancing. If a minion node goes down or a pod restarts, the pod gets a new local IP, but the service IP created by Kubernetes remains the same, enabling Kubernetes to route traffic to the correct set of pods.
Alternatives to Flannel
Flannel is not the only solution for this; other options like Calico and Weave are available. Weave is the closest competitor, as it provides a similar set of features. Flannel has an edge in ease of configuration, and some benchmarks have found Weave to be slower than Flannel.
Custom Resource Definitions (CRDs) are a powerful feature introduced in Kubernetes 1.7 that enables users to add their own custom objects to a Kubernetes cluster and use them like any other native Kubernetes objects. In this blog post, we will see how to add a custom resource to a Kubernetes cluster using the command line as well as the Golang client library, along the way learning how to programmatically interact with a Kubernetes cluster.
What is a Custom Resource Definition (CRD)?
In the Kubernetes API, every resource is an endpoint that stores API objects of a certain kind. For example, the built-in service resource contains a collection of service objects. The standard Kubernetes distribution ships with many built-in API objects/resources. CRDs come into the picture when we want to introduce our own objects into the Kubernetes cluster to fulfill our requirements. Once we create a CRD in Kubernetes, we can use it like any other native Kubernetes object, thus leveraging all the features of Kubernetes such as its CLI, security, API services, RBAC, etc.
The custom resource created is also stored in the etcd cluster with proper replication and lifecycle management. CRD allows us to use all the functionalities provided by a Kubernetes cluster for our custom objects and saves us the overhead of implementing them on our own.
How to register a CRD using command line interface (CLI)
Step-1: Create a CRD definition file sslconfig-crd.yaml
Here we are creating a custom resource definition for an object of kind SslConfig, which allows us to store the SSL configuration for a domain. As we can see under the validation section, the cert, key, and domain fields are mandatory for creating objects of this kind; alongside these, we can store other information such as the provider of the certificate. The name metadata that we specify must be spec.names.plural+”.”+spec.group.
An API group (blog.velotio.com here) is a collection of API objects that are logically related to each other. We have also specified a version for our custom objects (spec.version); since the definition of the object is expected to change/evolve in the future, it’s better to start with alpha so that users of the object know the definition might change later. For the scope, we have specified Namespaced; by default, a custom resource is cluster-scoped.
Along with the mandatory fields cert, key, and domain, we have also stored the information of the provider (the certifying authority) of the cert.
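The sslconfig-crd.yaml file itself is not reproduced above, so here is a sketch of what it might look like based on the description: the group, kind, scope, and required fields come from the text, while details such as the plural name and the v1alpha1 version string are inferred (the version from the Go package used later) and the apiVersion matches the apiextensions API of the Kubernetes 1.7 era.

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # must be spec.names.plural + "." + spec.group
  name: sslconfigs.blog.velotio.com
spec:
  group: blog.velotio.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: sslconfigs
    singular: sslconfig
    kind: SslConfig
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required:
            - cert
            - key
            - domain
          properties:
            cert:
              type: string
            key:
              type: string
            domain:
              type: string
            provider:
              type: string
```

A file along these lines can then be registered with `kubectl create -f sslconfig-crd.yaml`.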
How to register a CRD programmatically using client-go
The client-go project provides packages with which we can easily create a Go client and access the Kubernetes cluster. To create a client, we first need to establish a connection with the API server. How we connect depends on whether our code runs within the cluster (in the Kubernetes cluster itself) or outside the cluster (locally).
If the code is running outside the cluster then we need to provide either the path of the config file or URL of the Kubernetes proxy server running on the cluster.
var (
	// Set during build
	version string

	proxyURL = flag.String("proxy", "", `If specified, it is assumed that a
kubectl proxy server is running on the given url and creates a proxy client.
In case it is not given InCluster kubernetes setup will be used`)
)

if *proxyURL != "" {
	config, err = clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		&clientcmd.ClientConfigLoadingRules{},
		&clientcmd.ConfigOverrides{
			ClusterInfo: clientcmdapi.Cluster{
				Server: *proxyURL,
			},
		}).ClientConfig()
	if err != nil {
		glog.Fatalf("error creating client configuration: %v", err)
	}
}
When the code is to be run as part of the cluster, we can simply use rest.InClusterConfig().
Once the connection is established, we can use it to create a clientset. For accessing Kubernetes objects, the clientset from the client-go project is generally used, but for CRD-related operations we need the clientset from the apiextensions-apiserver project.
In the create CRD function, we first build the definition of our custom object and then pass it to the create method, which creates it in the cluster. Just as when creating the definition via the CLI, here too we set parameters like version, group, kind, etc.
Once our definition is ready we can create objects of its type just like we did earlier using the CLI. First we need to define our object.
Kubernetes API conventions suggest that each object must have two nested object fields that govern its configuration: the object spec and the object status. Objects must also have metadata associated with them. The custom objects we define here comply with these standards. It is also recommended to create a list type for every type; thus we have also created an SslConfigList struct.
Now we need to write a function which will create a custom client which is aware of the new resource that we have created.
Once we have registered our custom resource definition with the Kubernetes cluster, we can create objects of its type using the Kubernetes CLI as we did earlier. But to create controllers for these objects, or to develop custom functionality around them, we need to build a client library through which we can access them from the Go API. For native Kubernetes objects, such a library is provided for each object.
We can add more methods like watch, update status, etc.; their implementation will be similar to the methods defined above. To see the methods available for various Kubernetes objects like pod, node, etc., refer to the v1 package.
Putting all things together
Now, in our main function, we will bring everything together.
```go
package main

import (
	"flag"
	"fmt"
	"time"

	"blog.velotio.com/crd-blog/v1alpha1"
	"github.com/golang/glog"
	apiextension "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

var (
	// Set during build
	version string

	proxyURL = flag.String("proxy", "",
		`If specified, it is assumed that a kubectl proxy server is running on the
given url and creates a proxy client. In case it is not given, the InCluster
kubernetes setup will be used`)
)

func main() {
	flag.Parse()

	var err error
	var config *rest.Config
	if *proxyURL != "" {
		config, err = clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			&clientcmd.ClientConfigLoadingRules{},
			&clientcmd.ConfigOverrides{
				ClusterInfo: clientcmdapi.Cluster{
					Server: *proxyURL,
				},
			}).ClientConfig()
		if err != nil {
			glog.Fatalf("error creating client configuration: %v", err)
		}
	} else {
		if config, err = rest.InClusterConfig(); err != nil {
			glog.Fatalf("error creating client configuration: %v", err)
		}
	}

	kubeClient, err := apiextension.NewForConfig(config)
	if err != nil {
		glog.Fatalf("Failed to create client: %v", err)
	}

	// Create the CRD
	err = v1alpha1.CreateCRD(kubeClient)
	if err != nil {
		glog.Fatalf("Failed to create crd: %v", err)
	}

	// Wait for the CRD to be created before we use it.
	time.Sleep(5 * time.Second)

	// Create a new clientset which includes our CRD schema
	crdclient, err := v1alpha1.NewClient(config)
	if err != nil {
		panic(err)
	}

	// Create a new SslConfig object
	sslConfig := &v1alpha1.SslConfig{
		ObjectMeta: meta_v1.ObjectMeta{
			Name:   "sslconfigobj",
			Labels: map[string]string{"mylabel": "test"},
		},
		Spec: v1alpha1.SslConfigSpec{
			Cert:   "my-cert",
			Key:    "my-key",
			Domain: "*.velotio.com",
		},
		Status: v1alpha1.SslConfigStatus{
			State:   "created",
			Message: "Created, not processed yet",
		},
	}

	// Create the SslConfig object we built above in the k8s cluster
	resp, err := crdclient.SslConfigs("default").Create(sslConfig)
	if err != nil {
		fmt.Printf("error while creating object: %v\n", err)
	} else {
		fmt.Printf("object created: %v\n", resp)
	}

	obj, err := crdclient.SslConfigs("default").Get(sslConfig.ObjectMeta.Name)
	if err != nil {
		glog.Infof("error while getting the object %v\n", err)
	}
	fmt.Printf("SslConfig Objects Found:\n%v\n", obj)

	select {}
}
```
If we now run our code, the custom resource definition and an object of its type will be created in the Kubernetes cluster, just as with the CLI. The Docker image akash125/crdblog was built from the code discussed above; it can be pulled directly from Docker Hub and run in a Kubernetes cluster. Once it is running, we can verify the result using the CLI as we did earlier, or by checking the logs of the pod running the image. The complete code is available here.
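The CLI verification might look like the following, assuming the plural resource name `sslconfigs` from the definition created earlier:

```shell
# List registered definitions; our CRD should appear in the output.
kubectl get crd

# List objects of the new type; sslconfigobj should be present.
kubectl get sslconfigs

# Inspect the object created from Go, including its spec and status.
kubectl describe sslconfig sslconfigobj
```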
Conclusion and future work
We learned how to create a custom resource definition and its objects using both the Kubernetes command-line interface and the Go client. We also learned how to access a Kubernetes cluster programmatically, which lets us build some really cool tooling on top of Kubernetes. We can now also create custom controllers for our resources, which continuously watch the cluster for life-cycle events on our objects and take the desired action accordingly. To read more about CRDs, refer to the following links: