Category: Tech Talks

  • Thinking Like Your AI Agent

    The AI landscape is experiencing a fundamental shift. We’ve moved beyond the era of simple prompt-response cycles into something far more sophisticated: agentic systems that can perceive, plan, and execute complex multi-step workflows with minimal human oversight. But building effective AI agents requires more than just connecting LLMs to APIs – it demands thinking like the agent itself. This article draws insights from an internal Tech Talk presented by two R Systems experts, Saksham Pandey and Sakshi Alegaonkar, who shared their hands-on experience building autonomous AI systems.

    The Evolution: Why Agents Matter

    Traditional generative AI operates in a predictable pattern: an input prompt is pattern-matched into an output response. These systems excel at single-step tasks like summarization or classification, but they’re fundamentally reactive and constrained by their training data. Agentic AI flips this paradigm. Instead of generating static responses, agents receive goals, perceive their environment, plan actions, and execute tasks across multiple steps. They don’t just predict – they act.

    Generative AI vs. Agentic AI

    Dimension        | Generative AI                  | Agentic AI
    Architecture     | Single LLM, pattern prediction | Multi-LLM + tools + memory systems
    Decision Making  | None (prompt-dependent)        | Autonomous planning and adaptation
    Tool Integration | Limited or none                | Extensive API, database, and plugin ecosystem
    Learning         | Static post-training           | Dynamic adaptation through feedback loops
    Collaboration    | Isolated responses             | Multi-agent coordination and workflow management


    The technical implications are profound. Agentic systems require orchestration layers, state management, tool abstractions, and sophisticated prompt engineering that goes far beyond simple few-shot examples.
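    To make the orchestration point concrete, here is a minimal sketch of an agent loop in Python. None of this comes from the talk itself: the names (`plan_next_step`, `TOOLS`, `run_agent`) are hypothetical, and the planner is stubbed where a real system would call an LLM.

    ```python
    # Minimal agent loop: perceive the current state, plan the next action,
    # act through a tool registry, and feed the result back into the state.
    # All names are illustrative; the planner is stubbed.

    from dataclasses import dataclass, field


    @dataclass
    class AgentState:
        goal: str
        history: list = field(default_factory=list)  # past (action, result) pairs
        done: bool = False


    def plan_next_step(state: AgentState) -> dict:
        # In a real agent this is an LLM call that returns a structured action,
        # e.g. {"tool": "search_db", "args": {"query": "..."}}. Stubbed here.
        return {"tool": "finish", "args": {}}


    TOOLS = {
        "search_db": lambda **args: "query results",  # stand-ins for real APIs
        "send_email": lambda **args: "sent",
    }


    def run_agent(goal: str, max_steps: int = 10) -> AgentState:
        state = AgentState(goal=goal)
        for _ in range(max_steps):            # always bound autonomous loops
            action = plan_next_step(state)    # plan: choose the next tool call
            if action["tool"] == "finish":
                state.done = True
                break
            result = TOOLS[action["tool"]](**action["args"])  # act
            state.history.append((action, result))           # perceive / remember
        return state
    ```

    Even this toy loop shows the pieces the paragraph above names: an orchestration layer (the loop), state management (`AgentState`), and a tool abstraction (`TOOLS`).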

    Architectural Thinking: The Agent Mindset

    Building effective agents starts with decomposing complex workflows into specialized components. Consider the architecture of a mental health wellness bot – a sophisticated agentic system designed to provide therapeutic support through voice interaction.

    The Four-Agent Architecture

    1. Detection Agent

    • Core Function: Condition identification through conversational analysis
    • Technical Implementation: Patient metadata integration, conversation history analysis
    • Prompt Strategy: Empathetic engagement patterns designed to encourage disclosure

    2. Severity Assessment Agent

    • Core Function: Clinical evaluation using standardized methodologies
    • Technical Implementation: Integration with tools like PHQ-9 and GAD-7 assessment protocols
    • Prompt Strategy: Structured questionnaire administration with scoring algorithms

    3. Recommendation Engine

    • Core Function: Resource matching based on condition and severity profiles
    • Technical Implementation: Course database queries, therapist directory integration
    • Prompt Strategy: Multi-factor recommendation logic considering location, availability, and specialization

    4. Appointment Agent

    • Core Function: Scheduling facilitation and calendar management
    • Technical Implementation: Calendar API integration, location services, availability checking
    • Prompt Strategy: Options presentation and booking workflow coordination
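    The talk described these agents conceptually rather than in code, but a rough sketch helps show how the four pieces chain together. Everything below is hypothetical: each function stands in for a full agent with its own LLM, prompts, and tools, and the threshold values are only indicative (PHQ-9 scores range from 0 to 27, with 20+ indicating severe symptoms).

    ```python
    # Hypothetical orchestration of the four agents described above.
    # Each "agent" is reduced to a stub function for illustration.

    def detection_agent(transcript: str) -> str:
        """Identify a probable condition from the conversation."""
        return "depression"  # stub for an LLM classification call


    def severity_agent(condition: str, transcript: str) -> int:
        """Administer a standardized questionnaire (e.g. PHQ-9) and return a score."""
        return 12  # stub: 0-27 scale; 10-14 indicates moderate depression


    def recommendation_agent(condition: str, score: int) -> list[str]:
        return [f"course: coping with {condition}", "therapist directory match"]


    def appointment_agent(recommendations: list[str]) -> str:
        return "booked: Tuesday 10:00"  # stub for a calendar API call


    def wellness_pipeline(transcript: str) -> dict:
        condition = detection_agent(transcript)
        score = severity_agent(condition, transcript)
        if score >= 20:  # safety mechanism: escalate emergencies to a human
            return {"action": "escalate_to_human", "condition": condition}
        recs = recommendation_agent(condition, score)
        return {"condition": condition, "score": score,
                "recommendations": recs, "appointment": appointment_agent(recs)}
    ```

    Note how the severity score drives the conditional routing, including the human-escalation path that the safety mechanisms below require.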

    System Integration: The Technical Stack

    The architecture demonstrates sophisticated system-level thinking:

    • Voice Interface Layer: Bidirectional speech-to-text and text-to-speech processing through WebSocket connections
    • Orchestration Layer: Workflow management with conditional routing based on classification and assessment results
    • Data Persistence: Patient metadata storage and retrieval for context continuity
    • Safety Mechanisms: Emergency condition detection with escalation protocols

    Design Principles: When to Build Agents

    Not every use case justifies the complexity of agentic architecture. Effective agent design requires evaluating four critical dimensions:

    • Task Complexity Analysis: Agents excel in ambiguous, multi-step scenarios where traditional prompt engineering falls short. If your workflow requires planning, state management, or iterative refinement, consider agentic approaches.
    • Business Impact Assessment: The development overhead of multi-agent systems demands clear ROI justification. Target high-impact use cases where automation delivers measurable business value.
    • Technical Readiness Evaluation: Ensure your infrastructure can support the complexity. Multi-agent systems require robust error handling, monitoring, and orchestration capabilities.
    • Error Sensitivity Consideration: In high-stakes domains like healthcare or finance, agent decisions carry significant consequences. Design appropriate safeguards and human oversight mechanisms.

    The Future: What’s Coming Next

    The trajectory of agentic development points toward three key innovations:

    • Resource-Aware Agents: Tomorrow’s agents will operate within defined computational budgets – monitoring token usage, API costs, and processing time in real-time. This shift enables scalable deployment across resource-constrained environments.
    • Self-Evolving Toolsets: Current agents consume existing tools. Future systems will build and optimize their own tools based on task requirements and performance feedback, creating adaptive toolchains that improve over time.
    • Distributed Agent Networks: Multi-agent collaboration will evolve beyond simple task delegation to sophisticated coordination protocols with clear roles, responsibilities, and communication patterns, enabling agents to tackle distributed challenges at unprecedented scale.

    Implementation Insights: Technical Considerations

    Building effective agents requires attention to several technical nuances:

    • Prompt Architecture: Move beyond single prompts to prompt chains and conditional branching. Each agent needs specialized instructions that account for its specific tools and objectives.
    • State Management: Agents must maintain context across interactions. Implement robust state persistence and retrieval mechanisms to enable coherent multi-step workflows.
    • Tool Abstraction: Create clean interfaces between agents and external systems. Well-designed tool abstractions enable agents to work with diverse APIs without coupling to specific implementations.
    • Error Recovery: Autonomous systems fail in unexpected ways. Build comprehensive error handling, fallback mechanisms, and graceful degradation strategies.
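    As one illustration of the last two points, here is a hedged sketch: a uniform `Tool` protocol that decouples agents from concrete APIs, plus a wrapper that retries and then degrades gracefully. Both names are invented for this example.

    ```python
    # Tool abstraction + error recovery, sketched. Agents program against the
    # Tool protocol, never against a concrete API; the wrapper adds retries
    # and a fallback so one flaky tool cannot crash the whole workflow.

    from typing import Protocol


    class Tool(Protocol):
        name: str

        def run(self, **kwargs) -> str: ...


    def call_with_recovery(tool: Tool, fallback: str, retries: int = 2, **kwargs) -> str:
        for attempt in range(retries + 1):
            try:
                return tool.run(**kwargs)
            except Exception:   # autonomous systems fail in unexpected ways
                continue        # retry, then fall through to the fallback
        return fallback         # graceful degradation instead of a crash
    ```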

    Conclusion: The Agentic Mindset

    The transition from generative AI to agentic systems represents a fundamental shift in how we architect intelligent systems. Success requires thinking like your agent: understanding its constraints, designing for its strengths, and building with empathy for both the agent’s capabilities and the user’s needs. The future belongs to autonomous, intelligent, and collaborative AI systems. The question isn’t whether agents will transform our technical landscape – it’s whether we’re ready to think like them.

    ___________________

    This article is based on one of our regular internal tech talks, where team members from across our global offices share their expertise and insights with colleagues. These sessions are part of our commitment to fostering a culture of continuous learning and knowledge sharing – whether you’re a junior engineer with a fresh perspective or a senior architect with years of experience, everyone has something valuable to contribute. If you’re interested in joining a team that values both personal growth and collective expertise, explore our open roles.

  • Understanding Kubernetes: From Apple Pie to Container Orchestration

    Most technical presentations dive straight into complex concepts and configurations, leaving beginners drowning in jargon before understanding why any of it matters. But what if someone took a different approach? This article is adapted from an internal tech talk where Mihai Scornea, Junior Software Engineer at R Systems, tackled one of the most complex topics in modern development with an ambitious goal: explain Kubernetes in one hour without relying on syntax and technical terms.

    The Universe Before the Apple Pie

    Carl Sagan once said, “If you wish to make an apple pie from scratch, you must first invent the universe.” This might sound like a super exaggerated example, but it’s perfectly true. If you start with absolutely nothing, you don’t even have the fabric of reality to work with.

    Imagine trying to explain what an apple pie is to an alien from another dimension. They don’t know what flour, sugar, or apples are. The rules of physics could be completely different from ours. You would truly have to explain our entire universe to them, and you’d have to do it in their language.

    This is exactly the challenge we face when trying to explain Kubernetes to someone. We humans simply aren’t built to easily understand such complex concepts – not immediately, at least. So, to help people understand Kubernetes, I’ll start from the very beginning.

    Kubernetes is a solution to a problem. Explained by itself, it doesn’t make much sense. But if we first understand the problem it fixes, then we can truly see how Kubernetes works.

    The Developer’s Dilemma

    Picture this common scenario: You write a program that works beautifully on your machine, then you send it to the client. But the client’s machine is vastly different from yours: it might not have the right operating system or it could be missing crucial libraries.

    For example, your program might require Java 21. Since Java is cross-platform, the client can run the program on Windows even though you developed it on Linux – but only if they have Java 21 installed, and the client doesn’t know how to install it. It becomes a problem. This is a simple example, but there are programs that require dozens of things installed and configured just right to work. The fact that the program works on the developer’s machine isn’t that useful – what matters is that the client can use it. Unfortunately, we can’t just ship the developer’s laptop with every program.

    Enter Virtual Machines

    Some very smart people asked themselves: “What if we could?” They figured out a way to simulate a computer inside another computer, and virtual machines were born. All you need is software called a hypervisor that simulates virtual hardware using physical hardware. The client just needs a powerful computer and the hypervisor installed, and now we can fully simulate the developer’s computer on the client’s computer.

    Instead of developing directly on their physical machine, the developer can install a hypervisor, test software on a virtual machine, make sure it works, then save the virtual machine as a file and send it to the client. The client runs it in their own hypervisor, and it comes with every library and dependency already installed.

    But virtual machines aren’t perfect. What if we could make something lighter, faster, and more modular?

    The Hotel Metaphor

    Let me tie these computer concepts to something more familiar. Imagine our computer is a hotel – but not a regular hotel. This is a hotel where guests can bring their own food recipes to the waiters when they go to the restaurant.

    The code of a computer program is very similar to a cooking recipe. You have:

    • Kitchen space and tables (RAM) – where chefs work and place ingredients
    • Empty bowls (variables) – containers for storing values
    • Food processing like mixing two ingredients together (CPU operations) – taking values and performing operations
    • The chef’s hands (CPU registers) – temporarily holding data during operations
    • Extra tools like frying pans (libraries) – additional functionality needed for recipes
    • The freezer (storage/hard drives) – where data is permanently stored
    • Waiters (the kernel) – who read recipes line by line and coordinate everything

    In a normal computer setup, there are no rooms in this hotel. All guests hang out in the same public area, and the hotel already has frying pans and spatulas ready (libraries are pre-installed). The problem arises when guests want different versions of tools – some might want a newer frying pan than others. Programs might require different versions of dependencies, and conflicts emerge.

    The Container Solution

    Building a separate virtual machine for every program is like building an entire hotel for every guest. It’s expensive – you need separate staff, separate buildings, and it takes a lot of time and resources. It’s as if we had a hotel chain to manage.

    Smart people analyzed this problem and realized that guests really only fight over a few things. The rest of the hotel infrastructure (disk space, CPU, RAM, and the kernel) stays pretty much the same in all cases. So, they came up with a brilliant idea: what if every guest simply thought they were the only guest? What if we gave them individual rooms with room service?

    This is what a container runtime like Docker does. Docker builds walls around guests and acts as room service. Whatever a guest wants gets sent down to the waiters (the kernel), who instruct the chefs (hardware). The kernel does its work and returns output back to the room through the Docker engine.

    The guests have no idea what’s happening outside their rooms – they all think they have their own hotel. There’s a catch: every guest must bring all the tools necessary for their recipe. The hotel no longer provides any tools at all, just access to CPU and RAM. These rooms that hold programs and their dependencies are called containers.

    The guests can also have access to the storage system and network of the host computer. They obtain access to storage through something called volumes and to the network interface through a form of port forwarding. For volumes, the host computer can “share” a folder on its file system and mount it at a location inside the container. The container thinks that folder is in its own room as it modifies files in it when, in reality, it is modifying files on the host computer.
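    For readers who want to see the syntax behind this, here is a small sketch using the Docker SDK for Python (the image, ports, and host path are made up): it publishes a container port to the host – the port forwarding above – and mounts a host folder into the container as a volume.

    ```python
    # Port forwarding and volumes with the Docker SDK for Python.
    # Host paths, ports, and the image are illustrative.

    import docker

    client = docker.from_env()

    container = client.containers.run(
        "nginx:latest",
        detach=True,
        ports={"80/tcp": 8080},  # host port 8080 -> container port 80
        volumes={"/home/me/site": {"bind": "/usr/share/nginx/html", "mode": "ro"}},
    )
    print(container.short_id)
    ```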

    From Containers to Kubernetes

    Containers solve the dependency problem beautifully, but new challenges emerge:

    1. Resource exhaustion: If you need many containers, you might fill up your host machine
    2. Scaling complexity: The solution is to get more computers and run containers on them, but this requires manual tracking of everything
    3. High availability: If you want multiple containers of the same type across machines for redundancy, you need complex routing and load balancing
    4. Management overhead: You could manage two or three machines manually, but what about hundreds?

    This is the problem that Kubernetes solves. Kubernetes excels at coordinating many computers to run containers and managing everything about them exactly the way you want. With Kubernetes, you can have 1000 computers or more, and they’ll all work toward your goal.

    Kubernetes: The Orchestration Layer

    Think of Kubernetes as a sophisticated hotel management system that coordinates multiple hotels (computers) to provide seamless service to guests (containers).

    Core Components of the Control Plane

    The Kubernetes control plane has many components working together to manage where containers run and how the networking between them works. Luckily, each one has a job similar to someone working in a hotel (or, in our case, a hotel complex with multiple buildings), so they can all be described in familiar terms:

    The Container Runtime: This is the component that runs the actual containers on our machines. It basically behaves like Docker: it can be told to run new containers or delete existing ones, and it will do so. It also handles the other aspects like port forwarding and mounting volumes, just like Docker. This is our room service and also our housekeeping.

    The Kubelet: This is also part of our hotel’s housekeeping – or rather, the manager of the housekeeping. A kubelet runs in every single hotel, and its job is to tell the Container Runtime which containers should move in and out of the hotel. The Container Runtime then runs these containers.

    The kube-api-server: This is the receptionist of our hotel complex. Every single transfer of information about how the hotel manages its guests goes through the receptionist, and every other component talks only to the receptionist. Even we, when we want to make a phone call and book a room for our container, talk to this receptionist. The receptionist stores all information about the hotel in a guestbook called the ETCD database.

    The ETCD database: This is the guestbook that our kube-api-server receptionist writes information to, and reads from whenever the other employees ask for information.

    The Kubectl command line interface: This is the phone line we can use to book rooms for our containers in the hotel. We can use commands like “kubectl get pods” to get a list of all the rooms occupied in the hotel. This phone line talks directly to the kube-api-server receptionist.

    The kube-scheduler: This is like a reservation planner. When we ask the kube-api-server for a room in a hotel, the kube-scheduler is responsible for finding a suitable hotel with a room big enough for our guest. Some containers require more resources than certain computers in the cluster have available, so they will be scheduled on a machine that has enough. The kube-scheduler tells the kube-api-server where the containers should be placed, and the kube-api-server notes it down in the ETCD database. When the kubelets check in via the kube-api-server, the kubelet responsible for the building assigned to the container makes sure to deploy it.

    The kube-controller-manager: This one performs logical operations on the data stored in the ETCD database. For example, let’s say we call the kube-api-server receptionist using the kubectl phone line and tell them we want to move a team of 11 football player containers into the hotel (equivalent to applying a replica set of 11 identical containers). The kube-api-server will note this down in the ETCD database, and the kube-controller-manager will keep comparing the desired team size with the number of players actually checked in, asking the kube-api-server to arrange new rooms whenever one goes missing.

    The CoreDNS: This is our information desk. As you will see later, multiple pods of the same type can be placed on the same floor to make them easy to reach. CoreDNS can tell us which floor the pods we want are assigned to. Each floor has an IP address instead of a floor number. For example, we can ask “where is the football-players floor?” and it will tell us “floor 10.96.0.42”. We can then ask the elevator operator for that floor, and they will make sure we reach the rooms we need.

    The kube-proxy: This is our elevator operator. It always makes sure we reach the right rooms when we ask for a particular floor. In practice, when we define such a floor (a Kubernetes service), the kube-proxy is responsible for creating all the networking rules necessary for us to reach a room on the floor we want when all we have is the floor’s IP. When we access a floor, we are randomly routed to one of the pods (containers) on that floor.

    The CNI plugin: Even though each floor has an IP address, each room has one as well, and they are all on a network called the internal Kubernetes overlay network. The CNI plugin is responsible for assigning IP addresses to the rooms and making sure they can all reach each other by these IP addresses. It is the building planner.
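    To see the receptionist in action, here is a brief sketch with the official Kubernetes Python client. It asks the kube-api-server for the occupied rooms in the default namespace, which is essentially what “kubectl get pods” does under the hood.

    ```python
    # Everything goes through the receptionist: this snippet uses the official
    # Kubernetes Python client to ask the kube-api-server for the pods
    # ("rooms") in the default namespace.

    from kubernetes import client, config

    config.load_kube_config()   # read credentials the same way kubectl does
    v1 = client.CoreV1Api()     # typed wrapper around the kube-api-server REST API

    for pod in v1.list_namespaced_pod(namespace="default").items:
        print(pod.metadata.name, pod.status.pod_ip, pod.status.phase)
    ```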

    Core Components of Things Deployed on Kubernetes

    Pods: The smallest unit in Kubernetes – an enclosure that can house one or more containers. Unlike individual containers, pods have their own IP address in the internal Kubernetes network, and the containers within a pod can share this IP address and storage volumes. People often use the terms pod and container interchangeably; strictly speaking, though, a pod can contain one or more containers.

    Services: Stable IP addresses tied to pods with particular labels. When you make a request to a service, it routes your request to one of the matching pods, providing built-in load balancing. These are the floors of the hotel mentioned earlier. Moreover, all hotels have glass bridges between their floors so that going to one floor gives us access to all the pods on that floor from all hotels.

    Replica Sets: Ensure you always have the desired number of identical pods running. If one crashes, the replica set automatically creates a replacement. This is like booking rooms for a football team of 11 players with identical needs, as mentioned earlier.

    Deployments: Control replica sets and enable zero-downtime updates through rolling updates – imagine changing an airplane’s engines mid-flight, one by one. Combined with services, deployments guarantee smooth version upgrades: we can replace an entire hotel floor with new versions of the pods seamlessly. Gradually scaling down one replica set while scaling up another like this is called a rolling update.

    The Magic of Rolling Updates

    Here’s where Kubernetes truly shines. Imagine you have all your pods managed by a replica set and you want to update them. One method is to delete all the pods and start an updated replica set, but this creates downtime – users get 404 errors, and complaints follow.

    Kubernetes deployments can perform rolling updates instead. During a rolling update:

    1. Kubernetes slowly removes pods from the old version
    2. Simultaneously adds pods with the new version
    3. Traffic gets served by both versions during the transition
    4. Users might get mixed responses but never see errors
    5. The update completes seamlessly with zero downtime
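    In practice, triggering such a rolling update can be as simple as changing the deployment’s container image; Kubernetes then performs the five steps above on its own. A sketch with the official Python client follows – the deployment name “web” and the image are hypothetical.

    ```python
    # Trigger a rolling update by patching a deployment's container image.
    # Kubernetes gradually replaces old pods with new ones, with no downtime.

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    patch = {"spec": {"template": {"spec": {
        "containers": [{"name": "web", "image": "example/web:2.0"}]}}}}

    apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)
    # Watch the transition with: kubectl rollout status deployment/web
    ```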

    This is the magic trick that modern applications use to never go down, even during updates.

    Why This Matters

    Kubernetes abstracts away the complexity of managing distributed systems. Instead of manually configuring load balancers, tracking which containers run where, and orchestrating updates across multiple machines, you declare what you want, and Kubernetes makes it happen.

    It’s the difference between manually conducting a symphony with hundreds of musicians versus having an automated system that ensures every musician plays their part perfectly, in harmony, without you needing to coordinate each individual note.

    Conclusion

    Understanding Kubernetes doesn’t require memorizing syntax or technical specifications. It requires understanding the problems it solves:

    • Dependency conflicts → Containers provide isolation
    • Manual scaling → Kubernetes automates container orchestration
    • Complex deployments → Rolling updates eliminate downtime
    • Resource management → Kubernetes optimally distributes workloads


    Like Carl Sagan’s apple pie, once you understand the universe that Kubernetes operates in (the problems of modern software deployment), the solution becomes not just comprehensible, but elegant.

    The next time someone asks you about Kubernetes, don’t start with pods and services. Start with the developer who needs their program to work everywhere, the client who can’t install dependencies, and the hotel that found a way to give every guest their own perfect room while sharing the same building.

    That’s Kubernetes: the universe that makes the modern software apple pie possible.

    ___________________

    This article is based on one of our regular internal tech talks, where team members from across our global offices share their expertise and insights with colleagues. These sessions are part of our commitment to fostering a culture of continuous learning and knowledge sharing – whether you’re a junior engineer with a fresh perspective or a senior architect with years of experience, everyone has something valuable to contribute. If you’re interested in joining a team that values both personal growth and collective expertise, explore our open roles.

  • OpenShift 101: Enterprise Kubernetes Made Easy

    In a recent internal tech talk, our Junior DevOps Engineer Marin Armas took us on a fascinating journey through the evolution of application deployment – from the chaotic days of manual FTP uploads to the elegant simplicity of modern container orchestration. His presentation, “OpenShift 101: Enterprise Kubernetes Made Easy,” offered valuable insights into why OpenShift has become such a game-changer for development teams looking to harness the power of Kubernetes without the overwhelming complexity. Here’s Marin’s perspective on how we got here and why OpenShift might just be the solution you’ve been looking for.

    The world of application deployment has undergone a remarkable transformation over the past decade. What once required manual processes, inconsistent environments, and endless troubleshooting has evolved into a streamlined, automated experience that empowers development teams to focus on what they do best: building great software.

    The Evolution of Deployment: From Chaos to Containers

    In the early days of software deployment, teams faced a familiar set of challenges that seemed almost impossible to tackle. Manual deployments through FTP uploads and custom scripts were the norm, creating environments where development and production systems rarely matched. This inconsistency led to the dreaded “it works on my machine” syndrome, where applications would behave differently across environments, causing friction between development and operations teams.

    Then Docker swooped in like a superhero. Containers changed everything by solving the consistency problem in the most elegant way possible – if it works in a container on your laptop, it’ll work in a container in production.

    Containers brought several key advantages that immediately resonated with development teams. They provided environment consistency, eliminating the guesswork between local development and production deployment. The path from local development to production became beautifully straightforward, and dependency management was simplified since all dependencies were packaged within the container itself.

    Kubernetes: Power with Complexity

    As organizations began adopting containers at scale, the need for orchestration increased. Managing hundreds of containers manually was not feasible, and this challenge led to the rise of Kubernetes as the de facto standard for container orchestration.

    Kubernetes brought impressive capabilities to the table: it could manage hundreds of containers simultaneously, automatically restart crashed applications, handle traffic distribution and load balancing, and provide powerful orchestration tools that made complex deployments possible. For teams dealing with microservices architectures or large-scale applications, Kubernetes was a quantum leap in operational capability.

    However, Kubernetes also introduced its own set of challenges. The platform’s power came with significant complexity, particularly around configuration management. Teams found themselves drowning in YAML files, trying to navigate a system that lacked built-in user interfaces or intuitive tooling. The learning curve was steep, and many developers needed substantial support to get started effectively.

    OpenShift: Bridging the Gap

    This is where OpenShift enters the story as a game-changer for teams seeking the power of Kubernetes without the overwhelming complexity. OpenShift can be best described as “Kubernetes, but easy” – it takes the robust orchestration capabilities of Kubernetes and wraps them in an enterprise-friendly package with built-in tools, intuitive interfaces, and streamlined workflows.

    OpenShift transforms the Kubernetes experience by providing a complete platform rather than just a tool. It includes a simple web interface that makes cluster management accessible to developers who may not be Kubernetes experts. The platform comes with integrated CI/CD capabilities, comprehensive monitoring tools, and built-in security features, creating a ready-to-use environment that’s enterprise-friendly from day one.

    What Makes OpenShift Special

    The true value of OpenShift lies in its end-to-end approach to the application lifecycle. The platform enables a seamless progression from source code to a running application: Git repositories integrate directly with build pipelines, Source-to-Image (S2I) automatically assembles containers from code, Routes expose services with built-in HTTPS, and integrated monitoring tools provide visibility into application performance.

    Under the hood, OpenShift maintains the powerful Kubernetes core while adding enterprise-grade enhancements. It uses CRI-O as a secure container runtime, implements operators for automated lifecycle management, and provides OAuth login integration with HTTPS routes for secure access. This combination ensures that teams get the benefits of Kubernetes while maintaining the security and reliability standards that enterprises require.

    OpenShift in Practice

    The transformation in daily workflows is genuinely remarkable. Applications can be deployed directly from Git repositories with minimal configuration. The platform automatically builds containers using S2I technology, eliminating the need for manual Dockerfile management. Applications are exposed securely through built-in HTTPS routes, and comprehensive monitoring provides real-time insights into performance and health.

    Perhaps most importantly, OpenShift handles scaling and recovery automatically. Applications can be scaled up or down instantly based on demand, and the platform automatically recovers from pod failures without manual intervention. This level of automation reduces operational overhead significantly while improving application reliability.
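    Because OpenShift keeps the standard Kubernetes API underneath, the usual clients work against it too. As a hedged illustration, this sketch scales a hypothetical deployment to five replicas with the official Kubernetes Python client; the web console and “oc scale deployment/web --replicas=5” achieve the same result.

    ```python
    # OpenShift exposes the standard Kubernetes API, so standard clients work.
    # This scales a (hypothetical) deployment named "web" to five replicas.

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    apps.patch_namespaced_deployment_scale(
        name="web", namespace="my-project",
        body={"spec": {"replicas": 5}})
    ```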

    The Impact on Teams and Workflows

    The adoption of OpenShift has profound implications for development teams and their workflows. By taking away much of the complexity associated with Kubernetes, OpenShift enables developers to focus on building features rather than managing infrastructure. The integrated tooling reduces context switching, and the streamlined deployment process accelerates time-to-market for new features and applications.

    For organizations implementing similar solutions, starting with hands-on experimentation is crucial. OpenShift provides several options for getting started, including the OpenShift Sandbox for immediate experimentation and OpenShift Local for development environments. These resources allow teams to explore the platform’s capabilities without significant upfront investment.

    Challenges and Considerations

    While OpenShift significantly simplifies the Kubernetes experience, successful implementation still requires careful planning and execution. Teams benefit from thorough testing in non-production environments and continuous monitoring to identify and address issues promptly. Ongoing training and support are essential to help team members adapt to new tools and processes effectively.

    Collaboration between different departments – development, operations, security, and business stakeholders – becomes even more critical when implementing platform solutions like OpenShift. The platform’s capabilities can transform how teams work, but realizing these benefits requires organizational alignment and clear communication about goals and expectations.

    Conclusion

    The journey from FTP deployments to modern container orchestration shows how technology can evolve to be more powerful and more accessible at the same time. By providing the power of Kubernetes with the accessibility of a managed platform, OpenShift enables organizations to embrace modern deployment practices without overwhelming their teams with complexity.

    For teams beginning their journey with container orchestration, OpenShift offers a compelling entry point that grows with organizational needs. The platform’s combination of powerful features, intuitive interfaces, and enterprise-grade capabilities makes it an excellent choice for organizations looking to modernize their deployment practices while maintaining operational excellence.

    ___________________

    This article is based on one of our regular internal tech talks, where team members from across our global offices share their expertise and insights with colleagues. These sessions are part of our commitment to fostering a culture of continuous learning and knowledge sharing – whether you’re a junior engineer with a fresh perspective or a senior architect with years of experience, everyone has something valuable to contribute. If you’re interested in joining a team that values both personal growth and collective expertise, explore our open roles.