- Infrastructure Costs cut by 30-34% monthly, optimizing resource utilization and generating substantial savings.
- Customer Onboarding Time reduced from 50 to 4 days, significantly accelerating the client’s ability to onboard new customers.
- Site Provisioning Time for existing customers reduced from weeks to a few hours, streamlining operations and improving customer satisfaction.
- Downtime affecting customers was reduced to under 30 minutes, with critical issues resolved within 1 hour and most proactively addressed before customer notification.
Tag: Cloud Migration
-
Transforming Infrastructure at Scale with Azure Cloud
-
Automated Containerization and Migration of On-premise Applications to Cloud Platforms
Containerized applications are becoming more popular every year, and enterprises are adopting container technology as they modernize their IT systems. Migrating your applications from VMs or physical machines to containers brings multiple advantages: optimal resource utilization, faster deployment times, easy replication and cloning, and less vendor lock-in. Container orchestration platforms like Kubernetes, Google Container Engine (GKE) and Amazon EC2 Container Service (Amazon ECS) enable quick deployment and easy management of your containerized applications. But to use these platforms, you must either migrate your legacy applications to containers or rewrite/redeploy them from scratch with a containerization approach. Rearchitecting your applications for containers is preferable, but is that feasible for complex legacy applications? Can your deployment team enumerate every detail of your application's deployment process? Do you have the patience to author a Dockerfile for each component of a complex application stack?
Automated migrations!
Velotio has been helping customers with automated migration of VMs and bare-metal servers to various container platforms. We have developed automation to convert these migrated applications as containers on various container deployment platforms like GKE, Amazon ECS and Kubernetes. In this blog post, we will cover one such migration tool developed at Velotio which will migrate your application running on a VM or physical machine to Google Container Engine (GKE) by running a single command.
Migration tool details
We have named our migration tool A2C (Anything to Container). It can migrate applications running on any Unix or Windows operating system.
The migration tool requires the following information about the server to be migrated:
- IP of the server
- SSH User, SSH Key/Password of the application server
- Configuration file containing data paths for application/database/components (more details below)
- Desired name of your Docker image (the image that will be created for your application)
- GKE Container Cluster details
To store persistent data, volumes can be defined in the container definition. Data written to a volume path remains persistent even if the container is killed or crashes. A volume is essentially a filesystem path on the host machine running your container, on NFS, or in cloud storage. The container mounts that path from the host, so data changes are written to the host filesystem instead of the container’s filesystem. Our migration tool supports data volumes defined in the configuration file: it automatically creates disks for the defined volumes and copies data from your application server to these disks in a consistent way.
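To make the mechanics concrete, here is roughly what a volume mount looks like with plain Docker; the host path, password and image below are illustrative, not output of the A2C tool:

```shell
# Mount a host directory into the container at /var/lib/mysql.
# /srv/app-data and the mysql:5.7 image are hypothetical examples.
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=example \
  -v /srv/app-data:/var/lib/mysql \
  mysql:5.7
# Writes to /var/lib/mysql inside the container land in /srv/app-data on the
# host, so the data survives even if the container is killed or crashes.
```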
The configuration file we have been talking about is basically a YAML file containing filesystem level information about your application server. A sample of this file can be found below:
```yaml
includes:
  - /
volumes:
  - var/log/httpd
  - var/log/mariadb
  - var/www/html
  - var/lib/mysql
excludes:
  - mnt
  - var/tmp
  - etc/fstab
  - proc
  - tmp
```
The configuration file contains 3 sections: includes, volumes and excludes:
- Includes contains filesystem paths on your application server which you want to add to your container image.
- Volumes contain filesystem paths on your application server which stores your application data. Generally, filesystem paths containing database files, application code files, configuration files, log files are good candidates for volumes.
- The excludes section contains filesystem paths which you don’t want to make part of the container. This may include temporary filesystem paths like /proc and /tmp, and also NFS-mounted paths. Ideally, you would include everything by giving “/” in the includes section and exclude specifics in the excludes section.
The Docker image name given as input to the migration tool is the Docker registry path where the image will be stored, followed by the name and tag of the image. A Docker registry is like GitHub for Docker images: a place to store all your images, with different versions of the same image stored under version-specific tags. GKE also provides a Docker registry; since in this demo we are migrating to GKE, we will store our image in the GKE registry.
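For reference, pushing an image to a Google registry by hand looks roughly like this (the project ID, image name and tag are placeholders; A2C performs this step for you):

```shell
# Authenticate docker with Google Container Registry (current gcloud tooling).
gcloud auth configure-docker
# Tag the local image with the registry path, name and version tag, then push.
docker tag migrate-lamp us.gcr.io/my-project/migrate-lamp:v1
docker push us.gcr.io/my-project/migrate-lamp:v1
```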
The GKE container cluster details given as input to the migration tool include the GKE project name, container cluster name and region. A container cluster can be created in GKE to host container applications; we have a separate set of scripts to perform cluster creation, and it can also be done easily through the GKE UI. For now, we will assume we have a 3-node cluster created in GKE, which we will use to host our application.
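For completeness, a 3-node cluster like the one assumed here can be created with a couple of gcloud commands; the project, cluster name and zone below are illustrative:

```shell
# Create a 3-node container cluster, then fetch kubectl credentials for it.
gcloud container clusters create a2c-demo \
  --project my-project --zone us-central1-b --num-nodes 3
gcloud container clusters get-credentials a2c-demo \
  --project my-project --zone us-central1-b
```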
Tasks performed under migration
Our migration tool (A2C), performs the following set of activities for migrating the application running on a VM or physical machine to GKE Container Cluster:
1. Install the A2C migration tool with all its dependencies on the target application server
2. Create a docker image of the application server, based on the filesystem level information given in the configuration file
3. Capture metadata from the application server like configured services information, port usage information, network configuration, external services, etc.
4. Push the docker image to GKE container registry
5. Create a disk in Google Cloud for each volume path defined in the configuration file and prepopulate the disks with data from the application server
6. Create a deployment spec for the container application in the GKE container cluster, which will open the required ports, configure required services, add multi-container dependencies, attach the prepopulated disks to containers, etc.
7. Deploy the application, after which your application will be running as containers in GKE with the application software in a running state. The new application URLs will be given as output.
8. Load balancing and HA will be configured for your application.
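The steps above can be sketched as plain commands; the real tool automates all of this, and every name, path and project ID below is hypothetical:

```shell
# 1. Push the A2C agent and config to the application server.
scp a2c-agent lamp_data_handler.yml root@APP_SERVER:/tmp/
# 2-3. Build a docker image from the server filesystem and capture metadata.
ssh root@APP_SERVER '/tmp/a2c-agent build -c /tmp/lamp_data_handler.yml'
# 4. Push the image to the GKE container registry.
docker push us.gcr.io/my-project/migrate-lamp
# 5. Create and prepopulate one disk per volume path.
gcloud compute disks create migrate-lamp-0 --size 10GB --zone us-central1-b
# 6-8. Apply the generated deployment and service specs.
kubectl apply -f /tmp/gcp-deployment.yaml
kubectl apply -f /tmp/gcp-service.yaml
```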
Demo
For demonstration purposes, we will deploy a LAMP stack (Apache + PHP + MySQL) on a CentOS 7 VM and run the migration utility for the VM, which will migrate the application to our GKE cluster. After the migration, we will show our application running on GKE, preconfigured with the same data as on our VM.
Step 1
We set up a LAMP stack using Apache, PHP and MySQL on a CentOS 7 VM in GCP. The PHP application can be used to list, add, delete or edit user data, which is stored in a MySQL database. We added some data through the application, and the UI showed the following:

Step 2
Now we run the A2C migration tool, which will migrate this application stack running on a VM into a container and auto-deploy it to GKE.
```shell
# ./migrate.py -c lamp_data_handler.yml -d "tcp://35.202.201.247:4243" -i migrate-lamp -p glassy-chalice-XXXXX -u root -k ~/mykey -l a2c-host --gcecluster a2c-demo --gcezone us-central1-b 130.211.231.58
Pushing converter binary to target machine
Pushing data config to target machine
Pushing installer script to target machine
Running converter binary on target machine
[130.211.231.58] out: creating docker image
[130.211.231.58] out: image created with id 6dad12ba171eaa8615a9c353e2983f0f9130f3a25128708762228f293e82198d
[130.211.231.58] out: Collecting metadata for image
[130.211.231.58] out: Generating metadata for cent7
[130.211.231.58] out: Building image from metadata
Pushing the docker image to GCP container registry
Initiate remote data copy
Activated service account credentials for: [glassy-chaliceXXXXX@appspot.gserviceaccount.com]
for volume var/log/httpd
Creating disk migrate-lamp-0
Disk Created Successfully
transferring data from source
for volume var/log/mariadb
Creating disk migrate-lamp-1
Disk Created Successfully
transferring data from source
for volume var/www/html
Creating disk migrate-lamp-2
Disk Created Successfully
transferring data from source
for volume var/lib/mysql
Creating disk migrate-lamp-3
Disk Created Successfully
transferring data from source
Connecting to GCP cluster for deployment
Created service file /tmp/gcp-service.yaml
Created deployment file /tmp/gcp-deployment.yaml
Deploying to GKE
```
```shell
$ kubectl get pod
NAME                            READY   STATUS              RESTARTS   AGE
migrate-lamp-3707510312-6dr5g   0/1     ContainerCreating   0          58s

$ kubectl get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
migrate-lamp   1         1         1            0           1m

$ kubectl get service
NAME           CLUSTER-IP     EXTERNAL-IP     PORT(S)                                    AGE
kubernetes     10.59.240.1    <none>          443/TCP                                    23h
migrate-lamp   10.59.248.44   35.184.53.100   3306:31494/TCP,80:30909/TCP,22:31448/TCP   53s
```
You can access your application using the above connection details!
Step 3
Access the LAMP stack on GKE using the IP 35.184.53.100 on the default port 80, as was done on the source machine.

Here is the Docker image being created in GKE Container Registry:

We can also see that disks named migrate-lamp-x were created as part of this automated migration.

A load balancer was also provisioned in GCP as part of the migration process.

The following service and deployment files were created by our migration tool to deploy the application on GKE:
```yaml
# cat /tmp/gcp-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: migrate-lamp
  name: migrate-lamp
spec:
  ports:
  - name: migrate-lamp-3306
    port: 3306
  - name: migrate-lamp-80
    port: 80
  - name: migrate-lamp-22
    port: 22
  selector:
    app: migrate-lamp
  type: LoadBalancer
```

```yaml
# cat /tmp/gcp-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: migrate-lamp
  name: migrate-lamp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: migrate-lamp
  template:
    metadata:
      labels:
        app: migrate-lamp
    spec:
      containers:
      - image: us.gcr.io/glassy-chalice-129514/migrate-lamp
        name: migrate-lamp
        ports:
        - containerPort: 3306
        - containerPort: 80
        - containerPort: 22
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/log/httpd
          name: migrate-lamp-var-log-httpd
        - mountPath: /var/www/html
          name: migrate-lamp-var-www-html
        - mountPath: /var/log/mariadb
          name: migrate-lamp-var-log-mariadb
        - mountPath: /var/lib/mysql
          name: migrate-lamp-var-lib-mysql
      volumes:
      - gcePersistentDisk:
          fsType: ext4
          pdName: migrate-lamp-0
        name: migrate-lamp-var-log-httpd
      - gcePersistentDisk:
          fsType: ext4
          pdName: migrate-lamp-2
        name: migrate-lamp-var-www-html
      - gcePersistentDisk:
          fsType: ext4
          pdName: migrate-lamp-1
        name: migrate-lamp-var-log-mariadb
      - gcePersistentDisk:
          fsType: ext4
          pdName: migrate-lamp-3
        name: migrate-lamp-var-lib-mysql
```
Conclusion
Migrations are always hard for IT and development teams. At Velotio, we have been helping customers migrate to cloud and container platforms using streamlined processes and automation. Feel free to reach out to us at contact@rsystems.com to learn more about our cloud and container adoption/migration offerings.
-
Taking Amazon’s Elastic Kubernetes Service for a Spin
With the introduction of Elastic Kubernetes Service at AWS re:Invent last year, AWS finally threw its hat into the booming space of managed Kubernetes services. In this blog post, we will learn the basic concepts of EKS, launch an EKS cluster and deploy a multi-tier application on it.
What is Elastic Kubernetes Service (EKS)?
Kubernetes works on a master-slave architecture, where the master is also referred to as the control plane. If the master goes down, it brings the entire cluster down, so ensuring high availability of the master is absolutely critical: it can be a single point of failure. Keeping the master highly available while also managing all the worker nodes is a cumbersome task in itself, so organizations generally prefer a managed Kubernetes cluster, which lets them focus on the most important task of running their applications rather than managing the cluster. Other cloud providers like Google Cloud and Azure already had managed Kubernetes services, named GKE and AKS respectively. With EKS, Amazon has now also rolled out its managed Kubernetes offering to provide a seamless way to run Kubernetes workloads.
Key EKS concepts:
EKS takes full advantage of the fact that it runs on AWS: instead of creating Kubernetes-specific features from scratch, AWS has reused or plugged in existing AWS services to achieve Kubernetes-specific functionality. Here is a brief overview:
IAM integration: Amazon EKS integrates IAM authentication with Kubernetes RBAC (the role-based access control system native to Kubernetes) with the help of the Heptio Authenticator, a tool that uses AWS IAM credentials to authenticate to a Kubernetes cluster. We can directly attach an RBAC role to an IAM entity, which saves the pain of managing another set of credentials at the cluster level.

Container Interface: AWS has developed an open source CNI plugin which takes advantage of the fact that multiple network interfaces can be attached to a single EC2 instance, and that these interfaces can have multiple secondary private IPs associated with them. These secondary IPs give pods running on EKS real IP addresses from the VPC CIDR pool. This improves latency for inter-pod communication, as traffic flows without any overlay.

ELB Support: We can use any of the AWS ELB offerings (Classic, Network, Application) to route traffic to services running on the worker nodes.
Auto scaling: The number of worker nodes in the cluster can grow and shrink using the EC2 auto scaling service.
Route 53: With the help of the ExternalDNS project and AWS Route 53, we can manage DNS entries for the load balancers that get created when we create an Ingress object or a Service of type LoadBalancer in our EKS cluster. This way the DNS names are always in sync with the load balancers, and we don’t have to give them separate attention.
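As an illustration, ExternalDNS keys off a hostname annotation on the Service; the service name and hostname below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # ExternalDNS creates a Route 53 record pointing at the ELB for this Service.
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: my-app
```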
Shared responsibility for the cluster: Responsibility for an EKS cluster is shared between AWS and the customer. AWS takes care of the most critical part, managing the control plane (API server and etcd database), while the customer manages the worker nodes. Amazon EKS automatically runs Kubernetes with three masters across three Availability Zones to protect against a single point of failure. Control plane nodes are monitored and replaced if they fail, and are patched and updated automatically. This ensures high availability of the cluster and makes it extremely simple to migrate existing workloads to EKS.

Prerequisites for launching an EKS cluster:
1. IAM role to be assumed by the cluster: Create an IAM role that allows EKS to manage a cluster on your behalf. Choose EKS as the service that will assume this role and attach the AWS managed policies ‘AmazonEKSClusterPolicy’ and ‘AmazonEKSServicePolicy’ to it.

2. VPC for the cluster: We need to create the VPC where our cluster will reside, with subnets, internet gateways and other components configured. We can use an existing VPC if we wish, or create one using the CloudFormation script provided by AWS or the Terraform script. The scripts take the CIDR block of the VPC and three subnets as arguments.
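Step 1 can also be done from the CLI; here is a sketch (the role name is illustrative):

```shell
# Trust policy letting the EKS service assume the role.
cat > eks-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "eks.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
aws iam create-role --role-name eks-service-role \
  --assume-role-policy-document file://eks-trust.json
# Attach the two AWS managed policies EKS requires.
aws iam attach-role-policy --role-name eks-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eks-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy
```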
Launching an EKS cluster:
1. Using the web console: With the prerequisites in place, we can go to the EKS console and launch an EKS cluster. When launching, we need to provide a name for the EKS cluster, choose the Kubernetes version to use, provide the IAM role we created in step one, and choose a VPC. Once we choose a VPC, we also need to select the subnets where we want our worker nodes to be launched (by default, all subnets in the VPC are selected) and provide a security group, which is applied to the elastic network interfaces (ENIs) that EKS creates to allow the control plane to communicate with the worker nodes.
NOTE: A couple of things to note here: the subnets must be in at least two different availability zones, and the security group we provided is later updated when we create the worker node cluster, so it is better not to use this security group with any other entity, or to be completely sure of the changes happening to it.

2. Using the AWS CLI:
```shell
aws eks create-cluster --name eks-blog-cluster \
  --role-arn arn:aws:iam::XXXXXXXXXXXX:role/eks-service-role \
  --resources-vpc-config subnetIds=subnet-0b8da2094908e1b23,subnet-01a46af43b2c5e16c,securityGroupIds=sg-03fa0c02886c183d4
{
    "cluster": {
        "status": "CREATING",
        "name": "eks-blog-cluster",
        "certificateAuthority": {},
        "roleArn": "arn:aws:iam::XXXXXXXXXXXX:role/eks-service-role",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-0b8da2094908e1b23",
                "subnet-01a46af43b2c5e16c"
            ],
            "vpcId": "vpc-0364b5ed9f85e7ce1",
            "securityGroupIds": [
                "sg-03fa0c02886c183d4"
            ]
        },
        "version": "1.10",
        "arn": "arn:aws:eks:us-east-1:XXXXXXXXXXXX:cluster/eks-blog-cluster",
        "createdAt": 1535269577.147
    }
}
```
In the response, we see that the cluster is in the CREATING state. It will take a few minutes before it becomes available. We can check the status using the command below:
```shell
aws eks describe-cluster --name=eks-blog-cluster
```
Configure kubectl for EKS:
We know that in Kubernetes we interact with the control plane by making requests to the API server, most commonly via the kubectl command-line utility. As our cluster is now ready, we need to install kubectl.
1. Install the kubectl binary
```shell
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
```
Give executable permission to the binary.
```shell
chmod +x ./kubectl
```
Move the kubectl binary to a folder in your system’s $PATH.
```shell
sudo cp ./kubectl /bin/kubectl && export PATH=$HOME/bin:$PATH
```
As discussed earlier, EKS uses the AWS IAM Authenticator for Kubernetes to allow IAM authentication for your Kubernetes cluster, so we need to download and install that as well.
2. Install aws-iam-authenticator
```shell
curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator
```
Give executable permission to the binary.
```shell
chmod +x ./aws-iam-authenticator
```
Move the aws-iam-authenticator binary to a folder in your system’s $PATH.
```shell
sudo cp ./aws-iam-authenticator /bin/aws-iam-authenticator
```
3. Create the kubeconfig file
First create the directory.
```shell
mkdir -p ~/.kube
```
Open a config file in the folder created above.
```shell
sudo vi ~/.kube/config-eks-blog-cluster
```
Paste the code below into the file.
```yaml
clusters:
- cluster:
    server: https://DBFE36D09896EECAB426959C35FFCC47.sk1.us-east-1.eks.amazonaws.com
    certificate-authority-data: "...................."
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "eks-blog-cluster"
```
Replace the values of server and certificate-authority-data with the values from your cluster and certificate, and also update the cluster name in the args section. You can get these values from the web console or by using the command:
```shell
aws eks describe-cluster --name=eks-blog-cluster
```
Save and exit.
Add that file path to your KUBECONFIG environment variable so that kubectl knows where to look for your cluster configuration.
```shell
export KUBECONFIG=$KUBECONFIG:~/.kube/config-eks-blog-cluster
```
To verify that kubectl is now properly configured:
```shell
kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   50m
```
Launch and configure worker nodes:
Now we need to launch worker nodes before we can start deploying apps. We can create the worker node cluster using the CloudFormation script provided by AWS or the Terraform script. The scripts take the following parameters:
- ClusterName: Name of the Amazon EKS cluster we created earlier.
- ClusterControlPlaneSecurityGroup: Id of the security group we used in EKS cluster.
- NodeGroupName: Name for the worker node auto scaling group.
- NodeAutoScalingGroupMinSize: Minimum number of worker nodes that you always want in your cluster.
- NodeAutoScalingGroupMaxSize: Maximum number of worker nodes that you want in your cluster.
- NodeInstanceType: Type of worker node you wish to launch.
- NodeImageId: AWS provides an Amazon EKS-optimized AMI to be used for worker nodes. Currently, EKS is available in only two AWS regions, Oregon and N. Virginia, with AMI IDs ami-02415125ccd555295 and ami-048486555686d18a0 respectively.
- KeyName: Name of the key you will use to ssh into the worker node.
- VpcId: Id of the VPC that we created earlier.
- Subnets: Subnets from the VPC we created earlier.

To enable worker nodes to join your cluster, we need to download, edit and apply the AWS authenticator config map.
Download the config map:
```shell
curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/aws-auth-cm.yaml
```
Open it in an editor:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```
Edit the value of rolearn with the ARN of your worker nodes’ role. This value is available in the output of the scripts you ran. Save the change and then apply it:
```shell
kubectl apply -f aws-auth-cm.yaml
```
Now you can check whether the nodes have joined the cluster:
```shell
kubectl get nodes
NAME                         STATUS   ROLES    AGE   VERSION
ip-10-0-2-171.ec2.internal   Ready    <none>   12s   v1.10.3
ip-10-0-3-58.ec2.internal    Ready    <none>   14s   v1.10.3
```
Deploying an application:
As our cluster is now completely ready, we can start deploying applications on it. We will deploy a simple books API application which connects to a MongoDB database and allows users to store, list and delete book information.
1. MongoDB Deployment YAML
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodb
spec:
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - name: mongodbport
          containerPort: 27017
          protocol: TCP
```
2. Test Application Deployment YAML
```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: test-app
        image: akash125/pyapp
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
```
3. MongoDB Service YAML
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
    name: mongodbport
  selector:
    app: mongodb
```
4. Test Application Service YAML
```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: LoadBalancer
  ports:
  - name: test-service
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app: test-app
```
Services
```shell
$ kubectl create -f mongodb-service.yaml
$ kubectl create -f testapp-service.yaml
```
Deployments
```shell
$ kubectl create -f mongodb-deployment.yaml
$ kubectl create -f testapp-deployment.yaml

$ kubectl get services
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes        ClusterIP      172.20.0.1      <none>          443/TCP        12m
mongodb-service   ClusterIP      172.20.55.194   <none>          27017/TCP      4m
test-service      LoadBalancer   172.20.188.77   a7ee4f4c3b0ea   80:31427/TCP   3m
```
In the EXTERNAL-IP column of test-service we see the DNS name of a load balancer (truncated above); we can now access the application from outside the cluster using this DNS name.
To Store Data:
```shell
curl -X POST -d '{"name":"A Game of Thrones (A Song of Ice and Fire)", "author":"George R.R. Martin","price":343}' http://a7ee4f4c3b0ea11e8b0f912f36098e4d-672471149.us-east-1.elb.amazonaws.com/books
{"id":"5b8fab49fa142b000108d6aa","name":"A Game of Thrones (A Song of Ice and Fire)","author":"George R.R. Martin","price":343}
```
To Get Data:
```shell
curl -X GET http://a7ee4f4c3b0ea11e8b0f912f36098e4d-672471149.us-east-1.elb.amazonaws.com/books
[{"id":"5b8fab49fa142b000108d6aa","name":"A Game of Thrones (A Song of Ice and Fire)","author":"George R.R. Martin","price":343}]
```
We can also put the URL used in the curl commands above directly into a browser and get the same response.

Now our application is deployed on EKS and can be accessed by the users.
Comparison between GKE, ECS and EKS:
Cluster creation: Creating a GKE or ECS cluster is far simpler than creating an EKS cluster, with GKE being the simplest of the three.
Cost: With both GKE and ECS, we pay only for the infrastructure visible to us, i.e., servers, volumes, ELBs etc.; there is no cost for master nodes or other cluster management services. With EKS, there is a charge of $0.20 per hour for the control plane.
Add-ons: GKE provides the option of using Calico as the network plugin, which helps in defining network policies to control inter-pod communication (by default, all pods in k8s can communicate with each other).
Serverless: An ECS cluster can be created using Fargate, the Containers as a Service (CaaS) offering from AWS. EKS is also expected to support Fargate soon.
In terms of availability and scalability, all three services are on par with each other.
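For context, the EKS control-plane charge in the cost comparison adds up as follows; this is a back-of-the-envelope sketch, and the rate may have changed since this was written:

```python
# Rough monthly cost of the EKS control plane at the $0.20/hour rate
# quoted above (verify against current AWS pricing before relying on it).
HOURLY_RATE = 0.20          # USD per hour per EKS cluster
hours = 24 * 30             # approximate hours in a month

monthly_cost = HOURLY_RATE * hours
print(f"~${monthly_cost:.2f}/month per cluster")  # ~$144.00/month per cluster
```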
Conclusion:
In this blog post, we learned the basic concepts of EKS, launched our own EKS cluster and deployed an application. EKS is a much-awaited service from AWS, especially for folks who were already running Kubernetes workloads on AWS, as they can now easily migrate to EKS and get a fully managed Kubernetes control plane. EKS is expected to be adopted by many organisations in the near future.