Category: Services

  • How a DTC Leader Cut Migration Effort by 75% with AI-Led Refactoring from PHP to Java 

    Migration Efficiency
    AI-Led Refactoring: Automated code transformation using GenAI, ASTs, and prompt chaining.
    Modular Decomposition: Extracted clean service boundaries from tangled PHP code.
    Faster Delivery: Accelerated migration timelines with minimal manual effort.

    Business Value
    Seamless Transition: Migrated to cloud-native Java microservices with no business disruption.
    Better Maintainability: Decoupled architecture simplified future updates and releases.
    Innovation Ready: Enabled faster feature rollouts via CI/CD integration.

    Engineering Impact
    High Code Accuracy: Intelligent validation reduced errors and rework.
    Performance Boost: Event-driven architecture improved scalability and resilience.
    DevOps Ready: CI/CD pipelines delivered out of the box.

    Innovation Highlights
    GenAI + AST Parsing: Combined syntax trees with AI for precise code generation.
    RAG-Based Agents: Tailored migration logic using domain-aware prompts.
    Cron to Events: Replaced legacy jobs with scalable, event-based flows.

  • From Scattered Data to Unified Decisions: Custom Dashboards Save 60% Reporting Time with Actionable KPIs

    Modernizing Enterprise Reporting

    • From Excel to Intelligence: Replaced disconnected spreadsheets and legacy tools with automated Power BI dashboards.
    • Smart Structuring: Built in 13-period calendars and historical logic to support trend analysis and performance tracking.
    • Role-Based Views: Dashboards were tailored for executives, operations, finance, supply chain, and partner teams.

    Unifying Data Across the Value Chain

    • Single Source of Truth: Consolidated fragmented Excel/CSV data into centralized, queryable dashboards.
    • Automation at the Core: Enabled auto-refresh cycles, removing manual data pulls and update delays.
    • Standardized Metrics: Eliminated formula inconsistencies, enabling consistent KPIs across departments.

    Driving Operational Efficiency

    • 60% Time Savings: Reduced manual reporting effort, freeing up teams for strategic analysis.
    • Instant Insights: Enabled faster comparisons across plants, orders, budgets, and product lines.
    • Smarter Collaboration: Aligned metrics helped departments make joint, data-driven decisions.

    Transforming the User Experience

    • Interactive by Design: Filters, slicers, and info buttons made dashboards easy to navigate and actionable.
    • Self-Serve Access: Non-technical users could explore, share, and customize insights independently.
    • Scalable Visualization Suite: Over 15 dashboards provided deep insights without overwhelming users.

    Strategic Business Enablement

    • Accelerated Decision-Making: Executives accessed real-time KPIs to guide capacity, revenue, and quality decisions.
    • Improved Accountability: Transparent views into partner milestones and operational KPIs strengthened governance.
    • Future-Ready Architecture: Built-in flexibility supports integration of new data, reports, and business metrics.

  • Automating Insurance Reporting: Real-Time Dashboards with Power BI for Deeper Insights

    Reporting Modernization

    • From Static to Real-Time: Replaced Excel-based reports with automated Power BI dashboards.
    • Smart Data Modeling: Introduced a 13-period calendar and built-in logic to support historical trend analysis.
    • Multi-Tenant Access: Enabled role- and filter-based views for district, claim type, and coverage line.

    Data Unification & Automation

    • Centralized Data Handling: Integrated ShareDrive to unify scattered Excel/CSV files into one processing pipeline.
    • Automated Refresh Cycles: Power BI scheduler ensured continuous data updates without manual effort.
    • Eliminated Excel Complexity: Removed dependency on VLOOKUPs and error-prone formulas.

    Business Impact

    • Faster Insights: Enabled instant comparisons across fiscal periods and policy types.
    • Reduced Manual Load: Freed up reporting teams from repetitive tasks and maintenance-heavy spreadsheets.
    • Improved Data Confidence: Delivered accurate, standardized KPIs with every refresh.

    User Experience Transformation

    • Interactive Dashboards: Included filters, slicers, and info buttons for intuitive exploration.
    • Self-Service Access: Business users could access and customize reports without technical support.
    • Scalable Visualizations: Over 15 dashboard pages provided detailed yet digestible reporting views.

    Strategic Value Delivered

    • Decision Velocity: Leadership gained timely, data-backed views for operational and risk-based decisions.
    • Enhanced Compliance Visibility: Clearer, on-demand access to liability and worker compensation reports.
    • Foundation for Expansion: Flexible architecture allows easy integration of new datasets or metrics.

  • Ditch the Dinosaur Code: Rewriting the Legacy Layer with GenAI, AST, DFG, CFG, and RAG


    From Insight to Action: What This POV Delivers

    • A precision-first approach to legacy modernization using GenAI, ASTs, DFGs, CFGs, and RAG, enabling code transformation without full rewrites.
    • A deep-dive into how metadata-driven pipelines can unlock structural, semantic, and contextual understanding of legacy systems.
    • Technical clarity on building GenAI-assisted migration workflows, from parsing and prompt chaining to human-in-the-loop verification.
    • A clear perspective on reengineering the full SDLC, from ideation to operations, with modular, AI-native patterns.
    • A blueprint for teams looking to scale modernization with zero downtime, reduced developer effort, and continuous optimization.

  • From Video to Evaluation: Automating Quiz Creation and Grading with Generative AI

    Operational Efficiency

    • Automated Quiz Creation: Quizzes generated within minutes of video upload.
    • AI-Powered Grading: Rubric-based evaluation with LLMs reduced manual effort.
    • Faster Feedback: Accelerated review cycles improved learning responsiveness.

    Customer Value

    • Interactive Learning: Passive videos turned into engaging assessments.
    • Instructor Time Savings: Over 70% reduction in quiz and grading workload.
    • Scalable Delivery: Consistent quality across growing learner base.

    Financial Performance

    • Lower Costs: Reduced manual assessment overhead.
    • Improved ROI: Higher engagement led to better course outcomes.
    • Operational Gains: Efficient scaling with no added manual resources.

    Innovation Highlights

    • Multi-Model Quiz Engine: GPT-3.5, Llama 3, and Mistral for diverse question formats.
    • Smart Video Segmentation: BERTopic for Bloom’s taxonomy alignment.
    • Hybrid Grading: Combined AI scoring with structured rubrics.

  • Linux Internals of Kubernetes Networking

    Introduction

    This blog is a hands-on guide designed to help you understand Kubernetes networking concepts by following along. We’ll use K3s, a lightweight Kubernetes distribution, to explore how networking works within a cluster.

    System Requirements

    Before getting started, ensure your system meets the following requirements:

    • A Linux-based system (Ubuntu, CentOS, or equivalent).
    • At least 2 CPU cores and 4 GB of RAM.
    • Basic familiarity with Linux commands.

    Installing K3s

    To follow along with this guide, we first need to install K3s—a lightweight Kubernetes distribution designed for ease of use and optimized for resource-constrained environments.

    Install K3s

    You can install K3s by running the following command in your terminal:

    curl -sfL https://get.k3s.io | sh -

    This script will:

    1. Download and install the K3s server.
    2. Set up the necessary dependencies.
    3. Start the K3s service automatically after installation.

    Verify K3s Installation

    After installation, you can check the status of the K3s service to make sure everything is running correctly:

    systemctl status k3s

    If everything is correct, you should see that the K3s service is active and running.

    Set Up kubectl

    K3s comes bundled with its own kubectl binary. To use it, you can either:

    Use the K3s binary directly:

    k3s kubectl get pods -A

    Or set up the kubectl config file by exporting the Kubeconfig path:

    export KUBECONFIG="/etc/rancher/k3s/k3s.yaml"
    sudo chown -R $USER $KUBECONFIG
    kubectl get pods -A

    Understanding Kubernetes Networking

    In Kubernetes, networking plays a crucial role in ensuring seamless communication between pods, services, and external resources. In this section, we will dive into the network configuration and explore how pods communicate with one another.

    Viewing Pods and Their IP Addresses

    To check the IP addresses assigned to the pods, use the following kubectl command:

    kubectl get pods -A -o wide

    This will show you a list of all the pods across all namespaces, including their corresponding IP addresses. Each pod is assigned a unique IP address within the cluster.

    You’ll notice that the IP addresses are assigned by Kubernetes and typically belong to the range specified by the network plugin (such as Flannel, Calico, or the default CNI). K3s uses Flannel as its default CNI, with a cluster CIDR of 10.42.0.0/16 and a /24 slice carved out per node (10.42.0.0/24 on the first node). These IPs allow communication within the cluster.
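    As a quick cross-check, every pod IP should fall inside the node’s pod CIDR. Here is a minimal sketch using Python’s ipaddress module; the pod IPs listed are examples from this walkthrough, and yours will differ:

    ```python
    import ipaddress

    # Pod CIDR used by Flannel on this node (10.42.0.0/24 in this walkthrough).
    pod_cidr = ipaddress.ip_network("10.42.0.0/24")

    # Example pod IPs as reported by kubectl.
    pod_ips = ["10.42.0.8", "10.42.0.9", "10.42.0.10"]

    for ip in pod_ips:
        print(ip, ipaddress.ip_address(ip) in pod_cidr)  # all True
    ```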

    Observing Network Configuration Changes

    Upon starting K3s, it sets up several network interfaces and configurations on the host machine. These configurations are key to how Kubernetes networking operates. Let’s examine the changes using the ip utility.

    Show All Network Interfaces

    Run the following command to list all network interfaces:

    ip link show

    This will show all the network interfaces.

    • lo, enp0s3, and enp0s9 are the network interfaces that belong to the host.
    • The flannel.1 interface is created by the Flannel CNI for communication between pods that exist on different nodes.
    • The cni0 interface is created by the bridge CNI plugin for communication between pods that exist on the same node.
    • Each vethXXXXXXXX@ifY interface is created by the bridge CNI plugin; it connects a pod to the cni0 bridge.

    Show IP Addresses

    To display the IP addresses assigned to the interfaces:

    ip -c -o addr show

    You should see the IP addresses of all the network interfaces. Among the K3s-related interfaces, only cni0 and flannel.1 have IP addresses; the vethXXXXXXXX interfaces have only MAC addresses, which will be explained in a later section of this blog.

    Pod-to-Pod Communication and Bridge Networks

    The diagram illustrates how container networking works within a Kubernetes (K3s) node, showing the key components that enable pods to communicate with each other and the outside world. Let’s break down this networking architecture:

    At the top level, we have the host interface (enp0s9) with IP 192.168.2.224, which is the node’s physical network interface connected to the external network. This is the node’s gateway to the outside world.

    Inside the node sits the cni0 bridge (IP: 10.42.0.1/24), which acts like a virtual switch. This bridge serves as the internal network hub for all pods running on the node; traffic between the pods and the outside world is routed between cni0 and enp0s9.

    Each pod runs in its own network namespace, with its own separate network stack, including its own network interfaces and routing tables. The eth0 interface inside each pod, as shown in the diagram above, carries the pod’s IP address. This eth0 is one end of a virtual ethernet (veth) pair; the other end lives in the host’s network namespace and attaches the pod to the cni0 bridge.

    Exploring Network Namespaces in Detail

    Kubernetes uses network namespaces to isolate networking for each pod, ensuring that pods have separate networking environments and do not interfere with each other. 

    A network namespace is a Linux kernel feature that provides network isolation for a group of processes. Each namespace has its own network interfaces, IP addresses, routing tables, and firewall rules. Kubernetes uses this feature to ensure that each pod has its own isolated network environment.

    In Kubernetes:

    • Each pod has its own network namespace.
    • Each container within a pod shares the same network namespace.

    Inspecting Network Namespaces

    To inspect the network namespaces, follow these steps:

    If you installed K3s as described in this blog, it uses the containerd runtime by default; the commands for finding the container PID will differ if you run K3s with Docker or another container runtime.

    Identify the container runtime and get the list of running containers.

    sudo crictl ps

    Get the container ID from the output and use it to find the process ID:

    sudo crictl inspect <container-id> | grep pid

    Check the network namespace associated with the container:

    sudo ls -l /proc/<container-pid>/ns/net

    You can use nsenter to enter the network namespace for further exploration.
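    The file at /proc/&lt;pid&gt;/ns/net identifies the namespace: two processes share a network namespace exactly when those links resolve to the same inode. A small Python check of this idea (it needs a Linux /proc filesystem but no cluster; a process and its parent normally share a namespace, whereas two pods would report different inodes):

    ```python
    import os

    def netns_id(pid: int) -> int:
        """Return the inode identifying the network namespace of a process."""
        return os.stat(f"/proc/{pid}/ns/net").st_ino

    # This process and its parent normally live in the same namespace.
    print(netns_id(os.getpid()))
    print(netns_id(os.getpid()) == netns_id(os.getppid()))
    ```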

    Executing Into Network Namespaces

    To explore the network settings of a pod’s namespace, you can use the nsenter command.

    sudo nsenter --net=/proc/<container-pid>/ns/net
    ip addr show

    Script to exec into network namespace

    You can use the following script to get the container process ID and exec into the pod network namespace directly.

    POD_ID=$(sudo crictl pods --name <pod_name> -q)
    CONTAINER_ID=$(sudo crictl ps --pod $POD_ID -q)
    sudo nsenter -t $(sudo crictl inspect $CONTAINER_ID | jq -r .info.pid) -n ip addr show

    Veth Interfaces and Their Connection to Bridge

    Inside the pod’s network namespace, you should see the pod’s interfaces (lo and eth0) and the IP address assigned to the pod (10.42.0.8 in this example). Looking closely, we see eth0@if13, which means eth0 is paired with interface 13 (the corresponding index may differ on your system). The eth0 interface inside the pod is a virtual ethernet (veth) interface; veths are always created in interconnected pairs. Here, one end of the pair is eth0 and the other is interface 13. But where does interface 13 live? It exists in the host network, connecting the pod’s network to the host network via the bridge (cni0 in this case).

    ip link show | grep 13

    Here you see veth82ebd960@if2, which denotes that this veth is paired with interface number 2 in the pod’s network namespace. You can verify that the veth is attached to bridge cni0 as follows; the veth of each pod is attached to the bridge, which enables communication between pods on the same node.

    brctl show

    Demonstrating Pod-to-Pod Communication

    Deploy Two Pods

    Deploy two busybox pods to test communication:

    kubectl run pod1 --image=busybox --restart=Never -- sleep infinity
    kubectl run pod2 --image=busybox --restart=Never -- sleep infinity

    Get the IP Addresses of the Pods

    kubectl get pods pod1 pod2 -o wide

    Pod1 IP : 10.42.0.9

    Pod2 IP : 10.42.0.10

    Ping Between Pods and Observe the Traffic Between Two Pods

    Before we ping from Pod1 to Pod2, we will use tcpdump to set up a watch on cni0 and on the veth pairs of Pod1 and Pod2 that are connected to cni0.

    Open three terminals and set up the tcpdump listeners: 

    # Terminal 1 – Watch traffic on the cni0 bridge

    sudo tcpdump -i cni0 icmp

    # Terminal 2 – Watch traffic on veth3a94f27 (Pod1’s veth pair)

    sudo tcpdump -i veth3a94f27 icmp

    # Terminal 3 – Watch traffic on veth18eb7d52 (Pod2’s veth pair)

    sudo tcpdump -i veth18eb7d52 icmp

    Exec into Pod1 and ping Pod2:

    kubectl exec -it pod1 -- ping -c 4 <pod2-IP>

    Watch results on veth3a94f27 (Pod1’s veth pair):

    Watch results on cni0:

    Watch results on veth18eb7d52 (Pod2’s veth pair):

    Observing the timestamps for each request and reply on different interfaces, we get the flow of request/reply, as shown in the diagram below.

    Deeper Dive into the Journey of Network Packets from One Pod to Another

    We have already seen the flow of request/reply between two pods via veth interfaces connected to each other in a bridge network. In this section, we will discuss the internal details of how a network packet reaches from one pod to another.

    Packet Leaving Pod1’s Network

    Inside Pod1’s network namespace, the packet originates from eth0 (Pod1’s internal interface) and is sent out via its virtual ethernet interface pair in the host network. The destination address of the packet is 10.42.0.10, which lies within the CIDR range 10.42.0.0 – 10.42.0.255, so it matches the pod-network route in the pod’s routing table.
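    The route selection described above (the most specific matching prefix wins) can be sketched as follows; the two routes mirror a typical K3s pod routing table, though the exact entries on your node may differ:

    ```python
    import ipaddress

    # Simplified routing table as seen inside Pod1's network namespace:
    # a default route plus the pod-CIDR route out of eth0.
    routes = [
        ("0.0.0.0/0", "default via 10.42.0.1"),
        ("10.42.0.0/24", "dev eth0"),
    ]

    def lookup(dest: str) -> str:
        """Return the action of the most specific route matching dest."""
        dest_addr = ipaddress.ip_address(dest)
        matches = [
            (ipaddress.ip_network(prefix), action)
            for prefix, action in routes
            if dest_addr in ipaddress.ip_network(prefix)
        ]
        # Longest-prefix match: the highest prefixlen wins.
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(lookup("10.42.0.10"))  # pod-to-pod traffic stays on eth0
    print(lookup("8.8.8.8"))     # everything else takes the default route
    ```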

    The packet exits Pod1’s namespace and enters the host namespace via the connected veth pair that exists in the host network. The packet arrives at bridge cni0 since it is the master of all the veth pairs that exist in the host network.

    Once the packet reaches cni0, it gets forwarded to the correct veth pair connected to Pod2.

    Packet Forwarding from cni0 to Pod2’s Network

    When the packet reaches cni0, its job is to forward the packet to Pod2. The cni0 bridge acts as a Layer 2 switch here, simply forwarding the frame to the destination veth. The bridge maintains a forwarding database and dynamically learns the mapping between a destination MAC address and its corresponding veth device.

    You can view forwarding database information with the following command:

    bridge fdb show

    In this screenshot, the output of the forwarding database has been limited to just the MAC address of Pod2’s eth0:

    1. First column: MAC address of Pod2’s eth0
    2. dev vethX: The network interface this MAC address is reachable through
    3. master cni0: Indicates this entry belongs to cni0 bridge
    4. Flags that may appear:
      • permanent: Static entry, manually added or system-generated
      • self: MAC address belongs to the bridge interface itself
      • No flag: The entry is dynamically learned.

    Dynamic MAC Learning Process

    When Pod1 generates a packet carrying an ICMP request, it is packed into a Layer 2 frame whose source MAC is the MAC address of Pod1’s eth0 interface. To obtain the destination MAC address, eth0 broadcasts an ARP request containing the destination interface’s IP address.

    This ARP request is received by all interfaces connected to the bridge, but only Pod2’s eth0 interface responds with its MAC address. The destination MAC address is then added to the frame, and the packet is sent to the cni0 bridge.

    When this frame reaches the cni0 bridge, the bridge opens the frame and records the source MAC against the source interface (the host-side veth pair of Pod1’s eth0) in the forwarding table.

    Now the bridge has to forward the frame to the interface behind which the destination lies (i.e., the host-side veth pair of Pod2). If the forwarding table has an entry for Pod2’s veth, the bridge forwards the frame out that interface; otherwise, it floods the frame to all veths connected to the bridge, which still reaches Pod2.

    When Pod2 sends the reply to Pod1, the reverse path is followed. The frame leaves Pod2’s eth0 and reaches cni0 via the host-side veth pair of Pod2’s eth0. The bridge records the source MAC address (in this case, Pod2’s eth0) and the device it is reachable through in the forwarding database, then forwards the reply to Pod1, completing the request and response cycle.
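    The learn-then-forward behavior described above can be sketched as a tiny simulation. The MAC addresses and port names below are made up for illustration; a real bridge also ages out entries, which this sketch omits:

    ```python
    class LearningBridge:
        """Minimal model of cni0's forwarding database (FDB)."""

        def __init__(self, ports):
            self.ports = ports  # veth interfaces attached to the bridge
            self.fdb = {}       # learned MAC -> port mapping

        def handle_frame(self, src_mac, dst_mac, in_port):
            # Learn: remember which port the source MAC was seen on.
            self.fdb[src_mac] = in_port
            # Forward: a known destination goes out one port; unknown floods.
            if dst_mac in self.fdb:
                return [self.fdb[dst_mac]]
            return [p for p in self.ports if p != in_port]

    bridge = LearningBridge(["veth-pod1", "veth-pod2", "veth-pod3"])

    # Pod1's first frame to Pod2: destination unknown, so the bridge floods.
    out = bridge.handle_frame("MAC-pod1", "MAC-pod2", "veth-pod1")
    print(out)  # ['veth-pod2', 'veth-pod3']

    # Pod2's reply: the bridge has learned MAC-pod1, so it forwards directly.
    out = bridge.handle_frame("MAC-pod2", "MAC-pod1", "veth-pod2")
    print(out)  # ['veth-pod1']
    ```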

    Summary and Key Takeaways

    In this guide, we explored the foundational elements of Linux that play a crucial role in Kubernetes networking using K3s. Here are the key takeaways:

    • Network Namespaces ensure pod isolation.
    • Veth Interfaces connect pods to the host network and enable inter-pod communication.
    • Bridge Networks facilitate pod-to-pod communication on the same node.

    I hope you gained a deeper understanding of how Linux internals are used in Kubernetes network design and how they play a key role in pod-to-pod communication within the same node.

  • Taming the OpenStack Beast – A Fun & Easy Guide!

    So, you’ve heard about OpenStack, but it sounds like a mythical beast only cloud wizards can tame? Fear not! No magic spells or enchanted scrolls are needed—we’re breaking it down in a simple, engaging, and fun way.

    Ever felt like managing cloud infrastructure is like trying to tame a wild beast? OpenStack might seem intimidating at first, but with the right approach, it’s more like training a dragon —challenging but totally worth it!

    By the end of this guide, you’ll not only understand OpenStack but also be able to deploy it like a pro using Kolla-Ansible. Let’s dive in! 🚀

    🤔 What Is OpenStack?

    Imagine you’re running an online store. Instead of buying an entire warehouse upfront, you rent shelf space, scaling up or down based on demand. That’s exactly how OpenStack works for computing!

    OpenStack is an open-source cloud platform that lets companies build, manage, and scale their own cloud infrastructure—without relying on expensive proprietary solutions.

    Think of it as LEGO blocks for cloud computing—but instead of plastic bricks, you’re assembling compute, storage, and networking components to create a flexible and powerful cloud. 🧱🚀

    🤷‍♀️ Why Should You Care?

    OpenStack isn’t just another cloud platform—it’s powerful, flexible, and built for the future. Here’s why you should care:

    It’s Free & Open-Source – No hefty licensing fees, no vendor lock-in—just pure, community-driven innovation. Whether you’re a student, a startup, or an enterprise, OpenStack gives you the freedom to build your own cloud, your way.

    Trusted by Industry Giants – If OpenStack is good enough for NASA, PayPal, and CERN (yes, the guys running the Large Hadron Collider ), it’s definitely worth your time! These tech powerhouses use OpenStack to manage mission-critical workloads, proving its reliability at scale.

    Super Scalable – Whether you’re running a tiny home lab or a massive enterprise deployment, OpenStack grows with you. Start with a few nodes and scale to thousands as your needs evolve—without breaking a sweat.

    Perfect for Hands-On Learning – Want real-world cloud experience? OpenStack is a playground for learning cloud infrastructure, automation, and networking. Setting up your own OpenStack lab is like a DevOps gym—you’ll gain hands-on skills that are highly valued in the industry.

    ️🏗️ OpenStack Architecture in Simple Terms – The Avengers of Cloud Computing

    OpenStack is a modular system. Think of it as assembling an Avengers team, where each component has a unique superpower, working together to form a powerful cloud infrastructure. Let’s break down the team:

    🦾 Nova (Iron Man) – The Compute Powerhouse

    Just like Iron Man powers up in his suit, Nova is the core component that spins up and manages virtual machines (VMs) in OpenStack. It ensures your cloud has enough compute power and efficiently allocates resources to different workloads.

    • Acts as the brain of OpenStack, managing instances on physical servers.
    • Works with different hypervisors like KVM, Xen, and VMware to create VMs.
    • Supports auto-scaling, so your applications never run out of power.

    ️🕸️ Neutron (Spider-Man) – The Web of Connectivity

    Neutron is like Spider-Man, ensuring all instances are connected via a complex web of virtual networking. It enables smooth communication between your cloud instances and the outside world.

    • Provides network automation, floating IPs, and load balancing.
    • Supports custom network configurations like VLANs, VXLAN, and GRE tunnels.
    • Just like Spidey’s web shooters, it’s flexible, allowing integration with SDN controllers like Open vSwitch and OVN.

    💪 Cinder (Hulk) – The Strength Behind Storage

    Cinder is OpenStack’s block storage service, acting like the Hulk’s immense strength, giving persistent storage to VMs. When VMs need extra storage, Cinder delivers!

    • Allows you to create, attach, and manage persistent block storage.
    • Works with backend storage solutions like Ceph, NetApp, and LVM.
    • If a VM is deleted, the data remains safe—just like Hulk’s memory, despite all the smashing.

    📸 Glance (Black Widow) – The Memory Keeper

    Glance is OpenStack’s image service, storing and managing operating system images, much like how Black Widow remembers every mission.

    • Acts as a repository for VM images, including Ubuntu, CentOS, and custom OS images.
    • Enables fast booting of instances by storing pre-configured templates.
    • Works with storage backends like Swift, Ceph, or NFS.

    🔑 Keystone (Nick Fury) – The Security Gatekeeper

    Keystone is the authentication and identity service, much like Nick Fury, who ensures that only authorized people (or superheroes) get access to SHIELD.

    • Handles user authentication and role-based access control (RBAC).
    • Supports multiple authentication methods, including LDAP, OAuth, and SAML.
    • Ensures that users and services only access what they are permitted to see.

    🧙‍♂️ Horizon (Doctor Strange) – The All-Seeing Dashboard

    Horizon provides a web-based UI for OpenStack, just like Doctor Strange’s ability to see multiple dimensions.

    • Gives a graphical interface to manage instances, networks, and storage.
    • Allows admins to control the entire OpenStack environment visually.
    • Supports multi-user access with dashboards customized for different roles.

    🚀 Additional Avengers (Other OpenStack Services)

    • Swift (Thor’s Mjolnir) – Object storage, durable and resilient like Thor’s hammer.
    • Heat (Wanda Maximoff) – Automates cloud resources like magic.
    • Ironic (Vision) – Bare metal provisioning, a bridge between hardware and cloud.

    Each of these heroes (services) communicates through APIs, working together to make OpenStack a powerful cloud platform.


    ️🛠️ How This Helps in Installation

    Understanding these services will make it easier to set up OpenStack. During installation, configure each component based on your needs:

    • If you need VMs, you focus on Nova, Glance, and Cinder.
    • If networking is key, properly configure Neutron.
    • Secure access? Keystone is your best friend.

    Now that you know the Avengers of OpenStack, you’re ready to start your cloud journey. Let’s get our hands dirty with some real-world OpenStack deployment using Kolla-Ansible.

    ️🛠️ Hands-on: Deploying OpenStack with Kolla-Ansible

    So, you’ve learned the Avengers squad of OpenStack—now it’s time to assemble your own OpenStack cluster! 💪

    🔍 Pre-requisites: What You Need Before We Begin

    Before we start, let’s make sure you have everything in place:

    🖥️ Hardware Requirements (Minimum for a Test Setup)

    • 1 Control Node + 1 Compute Node (or more for better scaling).
    • At least 8GB RAM, 4 vCPUs, 100GB Disk per node (More = Better).
    • Ubuntu 22.04 LTS (Recommended) or CentOS 9 Stream.
    • Internet Access (for downloading dependencies).

    🔧 Software & Tools Needed

    Python 3.10+ – Because Python runs the world.

    Ansible 8-9 (ansible-core 2.15-2.16) – Automating OpenStack deployment.

    Docker & Docker Compose – Because we’re running OpenStack in containers!

    Kolla-Ansible – The magic tool for OpenStack deployment.

    Step-by-Step: Setting Up OpenStack with Kolla-Ansible

    1️⃣ Set Up Your Environment

    First, update your system and install dependencies:

    sudo apt update && sudo apt upgrade -y
    sudo apt-get install python3-dev libffi-dev gcc libssl-dev python3-selinux python3-setuptools python3-venv -y

    python3 -m venv kolla-venv
    echo "source ~/kolla-venv/bin/activate" >> ~/.bashrc
    source ~/kolla-venv/bin/activate

    Install Ansible & Docker:

    sudo apt install python3-pip -y
    pip install -U pip
    pip install 'ansible-core>=2.15,<2.17' ansible

    2️⃣ Install Kolla-Ansible

    pip install git+https://opendev.org/openstack/kolla-ansible@master

    3️⃣ Prepare Configuration Files

    Copy default configurations to /etc/kolla:

    sudo mkdir -p /etc/kolla
    sudo chown $USER:$USER /etc/kolla
    cp -r /usr/local/share/kolla-ansible/etc/kolla/* /etc/kolla/
    cp -r /usr/local/share/kolla-ansible/ansible/inventory /etc/kolla/

    Generate passwords for OpenStack services:

    kolla-genpwd

    Before deploying OpenStack, let’s configure some essential settings in globals.yml. This file defines how OpenStack services are installed and interact with your infrastructure.

    Run the following command to edit the file:

    nano /etc/kolla/globals.yml

    Here are a few key parameters you must configure:

    kolla_base_distro – Defines the OS used for deployment (e.g., ubuntu or centos).

    kolla_internal_vip_address – Set this to a free IP in your network. It acts as the virtual IP for OpenStack services. Example: 192.168.1.100.

    network_interface – Set this to your main network interface (e.g., eth0). Kolla-Ansible will use this interface for internal communication. (Check using ip -br a)

    enable_horizon – Set to yes to enable the OpenStack web dashboard (Horizon).

    Once configured, save and exit the file. These settings ensure that OpenStack is properly installed in your environment.
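    Putting those settings together, the relevant portion of globals.yml might look like the following config fragment; the VIP and interface name are examples, so adjust them for your network:

    ```yaml
    kolla_base_distro: "ubuntu"
    kolla_internal_vip_address: "192.168.1.100"
    network_interface: "eth0"
    enable_horizon: "yes"
    ```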

    4️⃣ Bootstrap the Nodes (Prepare Servers for Deployment)

    kolla-ansible -i /etc/kolla/inventory/all-in-one bootstrap-servers

    5️⃣ Deploy OpenStack! (The Moment of Truth)

    kolla-ansible -i /etc/kolla/inventory/all-in-one deploy

    This step takes some time (~30 minutes), so grab some ☕ and let OpenStack build itself.

    6️⃣ Access Horizon (Web Dashboard)

    Once deployment is complete, check the OpenStack dashboard:

    kolla-ansible post-deploy

    Now, find your login details:

    cat /etc/kolla/admin-openrc.sh

    Source the credentials and log in:

    source /etc/kolla/admin-openrc.sh
    openstack service list

    Open your browser and try accessing: http://<your-server-ip>/dashboard/ or https://<your-server-ip>/dashboard/

    Use the username and the password from admin-openrc.sh.

    Troubleshooting Common Issues

    Deploying OpenStack isn’t always smooth sailing. Here are some common issues and how to fix them:

    Kolla-Ansible Fails at Bootstrap

    Solution: Run `kolla-ansible -i /etc/kolla/inventory/all-in-one prechecks` to check for missing dependencies before deployment.

    Containers Keep Restarting or Failing

    Solution: Run docker ps -a | grep Exit to check failed containers. Then inspect logs with:

    docker ps --format 'table {{.ID}}\t{{.Names}}\t{{.Status}}'
    docker logs $(docker ps -q --filter "status=exited")
    journalctl -u docker.service --no-pager | tail -n 50

    Horizon Dashboard Not Accessible

    Solution: Ensure enable_horizon: yes is set in globals.yml and restart services with:

    kolla-ansible -i /etc/kolla/inventory/all-in-one reconfigure

    Missing OpenStack CLI Commands

    Solution: Source the OpenStack credentials file before using the CLI:

    source /etc/kolla/admin-openrc.sh

    By tackling these common issues, you’ll have a much smoother OpenStack deployment experience.

    🎉 Congratulations, You Now Have Your Own Cloud!

    Now that your OpenStack deployment is up and running, you can start launching instances, creating networks, and exploring the endless possibilities.

    What’s Next?

    ✅ Launch your first VM using OpenStack CLI or Horizon!

    ✅ Set up floating IPs and networks to make instances accessible.

    ✅ Experiment with Cinder storage and Neutron networking.

    ✅ Explore Heat for automation and Swift for object storage.

    Final Thoughts

    Deploying OpenStack manually can be a nightmare, but Kolla-Ansible makes it much easier. You now have your own containerized OpenStack cloud running with a fraction of the usual manual effort.

  • From Specs to Self-Healing Systems – GenAI’s Full-Stack Impact on the SDLC

    From Insight to Action: What This POV Delivers

    • A strategic lens on GenAI’s end-to-end impact across the SDLC, from intelligent requirements capture to self-healing production systems.
    • Clarity on how traditional engineering roles are evolving and what new skills and responsibilities are emerging in a GenAI-first environment.
    • A technical understanding of GenAI-driven architecture, code generation, and testing—including real-world patterns, tools, and model behaviors.
    • Insights into building model-aware, feedback-driven engineering pipelines that adapt and evolve continuously post-deployment.
    • A forward-looking view of how to modernize your tech stack with PromptOps, policy-as-code, and AI-powered governance built into every layer.

  • Beyond One-Size-Fits-All: Inside the Era of AI-Personalized Learning

    In this POV, you’ll discover how AI is redefining education through personalized, adaptive learning experiences. Learn how intelligent systems like OptimaAI AptivLearn are reshaping engagement for every stakeholder.

    1. The Shift to Intelligent Learning

    • Traditional digitization isn’t enough—education needs real-time adaptability.
    • AI transforms static platforms into responsive, personalized learning environments.
    • The global EdTech market is moving towards immersive, emotionally aware ecosystems.

    2. Impact Across Personas

    • Educators: Gain real-time insights, reduce admin workload, and dynamically adjust instruction.
    • Learners: Experience adaptive paths, voice-enabled support, and gamified engagement.
    • Administrators & Parents: Access predictive dashboards, behavioral insights, and 24/7 visibility.

    3. The OptimaAI AptivLearn Advantage

    • Delivers a unified, AI-powered ecosystem tailored to each stakeholder.
    • Enables hyper-personalized content, real-time feedback, and intelligent nudging.
    • Seamlessly integrates with existing LMS, SIS, and analytics tools to future-proof learning.

  • Mastering TV App Development: Building Seamless Experiences with EnactJS and WebOS

    As the world of smart TVs evolves, delivering immersive and seamless viewing experiences is more crucial than ever. At Velotio Technologies, we take pride in our proven expertise in crafting high-quality TV applications that redefine user engagement. Over the years, we have built multiple TV apps across diverse platforms, and our mastery of cutting-edge JavaScript frameworks, like EnactJS, has consistently set us apart.

    Our experience extends to WebOS Open Source Edition (OSE), a versatile and innovative platform for smart device development. WebOS OSE’s seamless integration with EnactJS allows us to deliver native-quality apps for smart TVs, complete with advanced features like D-pad navigation, real-time communication with system APIs, and modular UI components.

    This blog delves into how we harness the power of WebOS OSE and EnactJS to build scalable, performant TV apps. Learn how Velotio’s expertise in JavaScript frameworks and WebOS technologies drives innovation, creating seamless, future-ready solutions for smart TVs and beyond.

    This blog begins by showcasing the unique features and capabilities of WebOS OSE and EnactJS. We then dive into the technical details of my development journey — building a TV app with a web-based UI that communicates with proprietary C++ modules. From designing the app’s architecture to overcoming platform-specific challenges, this guide is a practical resource for developers venturing into WebOS app development.

    What Makes WebOS OSE and EnactJS Stand Out?

    • Native-quality apps with web technologies: Develop lightweight, responsive apps using familiar HTML, CSS, and JavaScript.
    • Optimized for TV and beyond: EnactJS offers seamless D-pad navigation and localization for Smart TVs, along with modularity for diverse platforms like automotive and IoT.
    • Real-time integration with system APIs: Use Luna Bus to enable bidirectional communication between the UI and native services.
    • Scalability and customization: Component-based architecture allows easy scaling and adaptation of designs for different use cases.
    • Open source innovation: WebOS OSE provides an open, adaptable platform for developing cutting-edge applications.

    What Does This Guide Cover?

    The rest of this blog details my development experience, offering insights into the architecture, tools, and strategies for building TV apps:

    • R&D and Designing the Architecture
    • Choosing EnactJS for UI Development
    • Customizing UI Components for Flexibility
    • Navigation Strategy for TV Apps
    • Handling Emulation and Simulation Gaps
    • Setting Up the Development Machine for the Simulator
    • Setting Up the Development Machine for the Emulator
    • Real-Time Updates (Subscription) with Luna Bus Integration
    • Packaging, Deployment, and App Updates

    R&D and Designing the Architecture

    The app had to connect a web-based interface (HTML, CSS, JS) to proprietary C++ services interacting with system-level processes. This setup is uncommon for WebOS OSE apps, posing two core challenges:

    1. Limited documentation: Resources for WebOS app development were scarce.
    2. WebAssembly infeasibility: Converting the C++ module to WebAssembly would restrict access to system-level processes.

    Solution: An Intermediate C++ Service capable of interacting with both the UI and other C++ modules

    To bridge these gaps, I implemented an intermediate C++ service to:

    • Communicate between the UI and the proprietary C++ service.
    • Use Luna Bus APIs to send and receive messages.

    This approach not only solved the integration challenges but also laid a scalable foundation for future app functionality.
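    To make the bridging role concrete, here is a plain-JavaScript sketch of the pattern; the real implementation is a C++ service on the Luna Bus, and the method name and payload shapes below are hypothetical:

```javascript
// Hypothetical sketch of the bridging role the intermediate service plays.
// In the real app this logic lives in a C++ Luna Bus service; here it is
// modeled as a plain-JS message router so the pattern is easy to see.
class IntermediateService {
  constructor(backend) {
    this.backend = backend; // stands in for the proprietary C++ module
    this.handlers = {};
  }
  // Register a Luna-style method, e.g. "getStatus"
  register(method, handler) {
    this.handlers[method] = handler;
  }
  // Called when the UI sends a request over the bus
  handleRequest(method, payload) {
    const handler = this.handlers[method];
    if (!handler) {
      return { returnValue: false, errorText: `Unknown method: ${method}` };
    }
    return { returnValue: true, ...handler(this.backend, payload) };
  }
}

// Example wiring: the UI asks for system status; the service forwards the
// call to the backend and wraps the result in a Luna-style response.
const backend = { readStatus: () => ({ state: 'ready' }) };
const service = new IntermediateService(backend);
service.register('getStatus', (b) => b.readStatus());

const res = service.handleRequest('getStatus', {});
// res → { returnValue: true, state: 'ready' }
```

    Because the UI only ever sees the intermediate service's methods, the proprietary module behind it can change without touching the frontend.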

    Architecture

    The WebApp architecture employs MVVM (Model-View-ViewModel), Component-Based Architecture (CBA), and Atomic Design principles to achieve modularity, reusability, and maintainability.

    App Architecture Highlights:

    • WebApp frontend: Web-based UI using EnactJS.
    • External native service: Intermediate C++ service (w/ Client SDK) interacting with the UI via Luna Bus.

    (Figure: Block diagram of the app architecture)

    Choosing EnactJS for UI Development

    With the integration architecture in place, I focused on UI development. The D-pad compatibility required for smart TVs narrowed the choice of frameworks to EnactJS, a React-based framework optimized for WebOS apps.

    Why EnactJS?

    • Built-in TV compatibility: Supports remote navigation out-of-the-box.
    • React-based syntax: Familiar for front-end developers.

    Customizing UI Components for Flexibility

    EnactJS’s default components had restrictive customization options and lacked the flexibility for the desired app design.

    Solution: A Custom Design Library

    I reverse-engineered EnactJS’s building blocks (e.g., Buttons, Toggles, Popovers) and created my own atomic components aligned with the app’s design.

    This approach helped in two key ways:

    1. Scalability: The design system allowed me to build complex screens using predefined components quickly.
    2. Flexibility: Complete control over styling and functionality.

    Navigation Strategy for TV Apps

    In the absence of any recommended navigation tool for WebOS, I employed a straightforward navigation model using condition-based routing:

    1. High-level flow selection: Determining the current process (e.g., Home, Settings).
    2. Step navigation: Tracking the user’s current step within the selected flow.

    This condition-based routing minimized complexity and avoided adding unnecessary dependencies like react-router.
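    As a rough sketch, the two-level model reduces to a pure function from {flow, step} to a screen; the flow and screen names below are illustrative, not the app's actual ones:

```javascript
// Minimal sketch of condition-based routing: no router library, just a
// pure function from navigation state to the screen to render.
// Flow and screen names are illustrative.
function resolveScreen({ flow, step }) {
  switch (flow) {
    case 'home':
      return 'HomeScreen';
    case 'settings':
      // Step navigation: track the user's position inside the flow.
      return ['SettingsMenu', 'NetworkSettings', 'AboutDevice'][step] || 'SettingsMenu';
    default:
      return 'HomeScreen';
  }
}

// The UI keeps {flow, step} in state and re-renders whenever either changes.
console.log(resolveScreen({ flow: 'settings', step: 1 })); // → "NetworkSettings"
```

    Because the function is pure, back navigation is just decrementing `step`, and every screen transition stays trivially testable.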

    Handling Emulation and Simulation Gaps

    The WebOS OSE simulator was straightforward to use and compatible with Mac and Linux. However, testing the native C++ services required a Linux-based emulator.

    The Problem: Slow Build Times Cause Slow Development

    Building and deploying code on the emulator had long cycles, drastically slowing development.

    Solution: Mock Services

    To mitigate this, I built a JavaScript-based mock service to replicate the native C++ functionality:

    • On Mac, I used the mock service for rapid UI iterations on the Simulator.
    • On Linux, I swapped the mock service with the real native service for final testing on the Emulator.

    This separation of development and testing environments streamlined the process, saving hours during the UI and flow development.
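    The swap itself can be a single switch over the environment. A minimal sketch, assuming a hypothetical getStatus method; the real Luna-backed client is stubbed out here, since the bus only exists on WebOS:

```javascript
// Sketch of the mock-vs-real service swap. The mock mirrors the native
// service's request/response contract, so the UI code is identical on both
// the Simulator (Mac) and the Emulator (Linux).
const mockService = {
  // Returns canned data instantly, so UI iteration never waits on a build.
  getStatus: () => Promise.resolve({ returnValue: true, state: 'ready', mocked: true })
};

const realService = {
  // On the Emulator this would go over the Luna Bus; stubbed here because
  // the bus is unavailable outside WebOS.
  getStatus: () => Promise.reject(new Error('Luna Bus not available in this sketch'))
};

// One switch decides which implementation the whole UI sees.
function createService(useMock) {
  return useMock ? mockService : realService;
}

const service = createService(process.env.USE_MOCK !== 'false');
service.getStatus().then((status) => console.log(status.state)); // → "ready" with the mock
```

    Keeping the two implementations behind one factory means switching environments never touches UI code, only the flag.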

    Setting Up the Development Machine for the Simulator

    To set up your machine for WebApp development with a simulator, install the webOS Studio VSCode extension along with Git, Python3, NVM, and Node.js.

    Install WebOS OSE CLI (ares) and configure the TV profile using ares-config. Then, clone the repository, install the dependencies, and run the WebApp in watch mode with npm run watch.

    With the webOS Studio extension installed in VSCode, set up the WebOS TV 24 Simulator via the Package Manager or manually. Finally, deploy and test the app on the simulator using the extension and inspect logs directly from the virtual remote interface.

    Note: Ensure the profile is set to TV because the simulator only works with the TV profile.

    ares-config --profile tv

    Setting Up the Development Machine for the Emulator

    To set up your development machine for WebApp and Native Service development with an emulator, ensure you have a Linux machine and WebOS OSE CLI.

    Install essential tools like Git, GCC, Make, CMake, Python3, NVM, and VirtualBox.

    Build the WebOS Native Development Kit (NDK) using the build-webos repository, which may take 8–10 hours.

    Configure the emulator in VirtualBox and add it as a target device using ares-setup-device. Clone the repositories, build the WebApp and Native Service, package them into an IPK, install it on the emulator using ares-install, and launch the app with ares-launch.

    Registering the Emulator as a Target Device for the ares Commands

    This step is required before you can install the IPK to the emulator.

    Note: To find the IP address of the WebOS Emulator, go to Settings -> Network -> Wired Connection.

    ares-setup-device --add target -i "host=192.168.1.1" -i "port=22" -i "username=root" -i "default=true"

    Real-Time Updates (Subscription) with Luna Bus Integration

    One feature required real-time updates from the C++ module to the UI. While the Luna Bus API provided a means to establish a subscription, I encountered challenges with:

    • Lifecycle Management: Re-subscriptions would fail due to improper cleanup.

    Solution: Custom Subscription Management

    I designed a custom logic layer for stable subscription management, ensuring seamless, real-time updates without interruptions.
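    A simplified sketch of that logic layer; the transport is injected (and faked below) so the cleanup pattern is visible without a real Luna Bus, and in the actual app this would wrap the Luna subscription client:

```javascript
// Sketch of custom subscription management: always cancel the previous
// subscription before opening a new one, so re-subscriptions can't leak
// or collide. The `transport` stands in for the Luna Bus client and must
// expose subscribe(method, onMessage) -> { cancel() }.
class SubscriptionManager {
  constructor(transport) {
    this.transport = transport;
    this.active = null;
  }
  subscribe(method, onMessage) {
    this.unsubscribe(); // proper cleanup: the missing step that broke re-subscriptions
    this.active = this.transport.subscribe(method, onMessage);
  }
  unsubscribe() {
    if (this.active) {
      this.active.cancel();
      this.active = null;
    }
  }
}

// Tiny fake transport that counts open subscriptions, for illustration.
const fakeTransport = {
  open: 0,
  subscribe() {
    this.open += 1;
    return { cancel: () => { this.open -= 1; } };
  }
};

const subs = new SubscriptionManager(fakeTransport);
subs.subscribe('getStatus', () => {});
subs.subscribe('getStatus', () => {}); // re-subscribe: the old one is cancelled first
console.log(fakeTransport.open); // → 1
```

    Tying unsubscribe into the component lifecycle (e.g. on unmount) then guarantees at most one live subscription per feature.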

    Packaging, Deployment, and App Updates

    Packaging

    Build a dist of the Enact app, compile the native service, and then use the ares-package command to create an IPK containing both the dist and the native service build.

    npm run pack
    
    cd com.example.app.controller
    mkdir BUILD
    cd BUILD
    source /usr/local/webos-sdk-x86_64/environment-setup-core2-64-webos-linux
    cmake ..
    make
    
    ares-package -n app/dist webos/com.example.app.controller/pkg_x86_64

    Deployment

    The external native service will need to be packaged with the UI code to get an IPK, which can then be installed on the WebOS platform manually.

    ares-install com.example.app_1.0.0_all.ipk -d target
    ares-launch com.example.app -d target

    App Updates

    App updates need to be delivered as Firmware-Over-the-Air (FOTA) updates based on libostree.

    WebOS OSE 2.0.0+ supports Firmware-Over-the-Air (FOTA) using libostree, a “git-like” system for managing Linux filesystem upgrades. It enables atomic version upgrades without reflashing by storing sysroots and tracking filesystem changes efficiently. The setup involves preparing a remote repository on a build machine, configuring webos-local.conf, and building a webos-image. Devices upgrade via commands to fetch and deploy rootfs revisions. Writable filesystem support (hotfix mode) allows temporary or persistent changes. Rollback requires manually reconfiguring boot deployment settings. Supported only on physical devices like Raspberry Pi 4, not emulators, FOTA simplifies platform updates while conserving disk space.

    Key Learnings and Recommendations

    1. Mock Early, Test Real: Use mock services for UI development and switch to real services only during final integration.
    2. Build for Reusability: Custom components and a modular architecture saved time during iteration.
    3. Plan for Roadblocks: Niche platforms like WebOS require self-reliance and patience due to limited community support.

    Conclusion: Mastering WebOS Development — A Journey of Innovation

    Building a WebOS TV app was a rewarding challenge. With WebOS OSE and EnactJS, developers can create native-quality apps using familiar web technologies. WebOS OSE stands out for its high performance, seamless integration, and robust localization support, making it ideal for TV app development and beyond (automotive, IoT, and robotics). Pairing it with EnactJS, a React-based framework, simplifies the process with D-pad compatibility and optimized navigation for TV experiences.

    This project showed just how powerful WebOS and EnactJS can be in building apps that bridge web-based UIs and C++ backend services. Leveraging tools like Luna Bus for real-time updates, creating a custom design system, and extending EnactJS’s flexibility allowed for a smooth and scalable development process.

    The biggest takeaway is that developing for niche platforms like WebOS requires persistence, creativity, and the right approach. When you face roadblocks and there’s limited help available, try to come up with your own creative solutions, and persist! Keep iterating, learning, and embracing the journey, and you’ll be able to unlock exciting possibilities.