Blog

  • Revolutionizing AML Operations with Advanced Technology Integration

    Achieved a 70% increase in productivity while reducing operational costs by 40%

    Enhanced fraud detection accuracy by 80%, with alert coverage rising from 60% to 95%

    Reduced alert backlog by 85%, maintaining a system uptime of 99.9%

    Unified real-time risk assessment across three banking platforms

    Automated processes for alert consolidation, case monitoring, and SAR tracking

  • Rebuilt for Scale. Engineered for Savings.

    Overview

    Fleet Management, Reinvented

    When legacy systems choke innovation and fragmented solutions drain resources, it’s time for a reset. One fleet-tech pioneer chose transformation and achieved 55% lower operational costs, real-time optimization, and seamless scalability across geographies and customer segments.

    Explore how modern architecture, intelligent automation, and embedded analytics helped them deliver resilient, revenue-driving solutions to an increasingly complex fleet landscape.

    What You’ll Uncover in This Case Study:

    • How multitenant SaaS reduced infrastructure and support costs, while making room for agile upgrades.
    • Why abandoning legacy tech was the smartest move toward system uptime and innovation.
    • How embedded Power BI gave leadership real-time insights to reduce fuel waste and improve asset utilization.
    • What secure, flexible authentication looks like in a multi-client fleet environment, and how it drives trust.
    • The architecture that scaled effortlessly across locations, clients, and service types, ready for tomorrow’s mobility needs.

    Ready to empower your fleet operations with the right technology foundation?

    Download the case study and get a blueprint for sustainable growth, efficiency, and resilience in the fleet tech ecosystem.

  • Protecting Your Mobile App: Effective Methods to Combat Unauthorized Access

    Introduction: The Digital World’s Hidden Dangers

    Imagine you’re running a popular mobile app that offers rewards to users. Sounds exciting, right? But what if a few clever users find a way to cheat the system for more rewards? This is exactly the challenge many app developers face today.

    In this blog, we’ll describe a real-world story of how we fought back against digital tricksters and protected our app from fraud. It’s like a digital detective story, but instead of solving crimes, we’re stopping online cheaters.

    Understanding How Fraudsters Try to Trick the System

    The Sneaky World of Device Tricks

    Let’s break down how users may try to outsmart mobile apps:

    One way is through device ID manipulation. What is this? Think of a device ID like a unique fingerprint for your phone. Normally, each phone has its own special ID that helps apps recognize it. But some users have found ways to change this ID, kind of like wearing a disguise.

    Real-world example: Imagine you’re at a carnival with a ticket that lets you ride each ride once. A fraudster might try to change their appearance to get multiple rides. In the digital world, changing a device ID is similar—it lets users create multiple accounts and get more rewards than they should.

    How Do People Create Fake Accounts?

    Users have become super creative in making multiple accounts:

    • Using special apps that create virtual phone environments
    • Playing with email addresses
    • Using temporary email services

    A simple analogy: It’s like someone trying to enter a party multiple times by wearing different costumes and using slightly different names. The goal? To get more free snacks or entry benefits.

    The Detective Work: How to Catch These Digital Tricksters

    Tracking User Behavior

    Modern tracking tools are like having a super-smart security camera that doesn’t just record but actually understands what’s happening. Here are some powerful tools you can explore:

    LogRocket: Your App’s Instant Replay Detective

    LogRocket records and replays user sessions, capturing every interaction, error, and performance hiccup. It’s like having a video camera inside your app, helping developers understand exactly what users experience in real time.

    Quick snapshot:

    • Captures user interactions
    • Tracks performance issues
    • Provides detailed session replays
    • Helps identify and fix bugs instantly

    Mixpanel: The User Behavior Analyst

    Mixpanel is a smart analytics platform that breaks down user behavior, tracking how people use your app, where they drop off, and what features they love most. It’s like having a digital detective who understands your users’ journey.

    Key capabilities:

    • Tracks user actions
    • Creates behavior segments
    • Measures conversion rates
    • Provides actionable insights

    What They Do:

    • Notice unusual account creation patterns
    • Detect suspicious activities
    • Prevent potential fraud before it happens

    Email Validation: The First Line of Defense

    How it works:

    • Recognize similar email addresses
    • Prevent creating multiple accounts with slightly different emails
    • Block tricks like:
      • a.bhi629@gmail.com
      • abhi.629@gmail.com

    Real-life comparison: It’s like a smart mailroom that knows “John Smith” and “J. Smith” are the same person, preventing duplicate mail deliveries.
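
    As a sketch of such normalization (Gmail ignores dots in the local part and discards anything after a `+`; other providers have different alias rules, so treat this as illustrative, not exhaustive):

```python
def normalize_email(address: str) -> str:
    """Reduce an email address to a canonical form so that dot/plus
    aliases of the same Gmail inbox collide on the same key.
    Illustrative sketch only -- real providers differ in alias rules."""
    local, _, domain = address.strip().lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0]   # drop "+tag" suffixes
        local = local.replace(".", "")   # Gmail ignores dots in the local part
    return f"{local}@{domain}"

# Both aliases from the example above map to the same canonical key:
# normalize_email("a.bhi629@gmail.com") == normalize_email("abhi.629@gmail.com")
```

    Storing the canonical form alongside the raw address lets the signup flow reject a new account whose normalized email already exists.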

    Advanced Protection Strategies

    Device ID Tracking

    Key Functions:

    • Store unique device information
    • Check if a device has already claimed rewards
    • Prevent repeat bonus claims

    Simple explanation: Imagine a bouncer at a club who remembers everyone who’s already entered and stops them from sneaking in again.
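
    A minimal sketch of this bouncer logic (in-memory only; a real implementation would persist device IDs server-side and rely on a hardware-backed identifier):

```python
class RewardGuard:
    """Track which devices have already claimed a reward and
    allow each device exactly one claim. Illustrative sketch:
    production code would store device IDs in a database."""

    def __init__(self):
        self._claimed = set()

    def try_claim(self, device_id: str) -> bool:
        """Return True only on the first claim from a given device."""
        if device_id in self._claimed:
            return False          # the bouncer remembers this device
        self._claimed.add(device_id)
        return True
```

    The reward endpoint calls `try_claim` before crediting a bonus, so a repeat request from the same device ID is rejected.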

    Stopping Fake Device Environments

    Some users try to create fake device environments using apps like:

    • Parallel Space
    • Multiple account creators
    • Game cloners

    Protection method: The app identifies and blocks these applications, just like a security system that recognizes fake ID cards.

    Root Device Detection

    What is a Rooted Device? It’s like a phone that’s been modified to give users complete control, bypassing normal security restrictions.

    Detection techniques:

    • Check for special root access files
    • Verify device storage
    • Run specific detection commands

    Analogy: It’s similar to checking if a car has been illegally modified to bypass speed limits.
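
    One common signal, sketched below, is checking for files that root-management tools typically leave behind (the path list is illustrative and non-exhaustive; production root detection combines several signals such as build tags and writable system partitions):

```python
import os

# Paths where the `su` binary or root-management apps commonly live.
# Illustrative, non-exhaustive list.
COMMON_ROOT_PATHS = [
    "/system/bin/su",
    "/system/xbin/su",
    "/sbin/su",
    "/system/app/Superuser.apk",
]

def looks_rooted(paths=COMMON_ROOT_PATHS) -> bool:
    """Flag a device as potentially rooted if any known root artifact exists."""
    return any(os.path.exists(p) for p in paths)
```

    On a positive result the app can degrade gracefully, for example by disabling reward claims rather than crashing outright.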

    Extra Security Layers

    Android Version Requirements

    Upgrading to newer Android versions provides additional security:

    • Better detection of modified devices
    • Stronger app protection
    • More restricted file access

    Simple explanation: It’s like upgrading your home’s security system to a more advanced model that can detect intruders more effectively.

    Additional Protection Methods

    • Data encryption
    • Secure internet communication
    • Location verification
    • Encrypted local storage

    Think of these as multiple locks on your digital front door, each providing an extra layer of protection.

    Real-World Implementation Challenges

    Why is This Important?

    Every time a fraudster successfully tricks the system:

    • The app loses money
    • Genuine users get frustrated
    • Trust in the platform decreases

    Business impact: Imagine running a loyalty program where some people find ways to get 10 times more rewards than others. Not fair, right?

    Practical Tips for App Developers

    • Always stay updated with the latest security trends
    • Regularly audit your app’s security
    • Use multiple protection layers
    • Be proactive, not reactive
    • Learn from each attempted fraud

    Common Misconceptions About App Security

    Myth: “My small app doesn’t need advanced security.” Reality: Every app, regardless of size, can be a target.

    Myth: “Security is a one-time setup.” Reality: Security is an ongoing process of learning and adapting.

    Learning from Real Experiences

    These examples come from actual developers at Velotio Technologies, who faced these challenges head-on. Their approach wasn’t about creating an unbreakable system but about making fraud increasingly difficult and expensive.

    The Human Side of Technology

    Behind every security feature is a human story:

    • Developers protecting user experiences
    • Companies maintaining trust
    • Users expecting fair treatment

    Looking to the Future

    Technology will continue evolving, and so, too, will fraud techniques. The key is to:

    • Stay curious
    • Keep learning
    • Never assume you know everything

    Final Thoughts: Your App, Your Responsibility

    Protecting your mobile app isn’t just about implementing complex technical solutions; it’s about a holistic approach that encompasses understanding user behavior, creating fair experiences, and building trust. Here’s a deeper look into these critical aspects:

    Understanding User Behavior:

    Understanding how users interact with your app is crucial. By analyzing user behavior, you can identify patterns that may indicate fraudulent activity. For instance, if a user suddenly starts claiming rewards at an unusually high rate, it could signal potential abuse.
    Utilize analytics tools to gather data on user interactions. This data can help you refine your app’s design and functionality, ensuring it meets genuine user needs while also being resilient against misuse.

    Creating Fair Experiences:

    Clearly communicate your app’s rewards, account creation, and user behavior policies. Transparency helps users understand the rules and reduces the likelihood of attempts to game the system.
    Consider implementing a user agreement that outlines acceptable behavior and the consequences of fraudulent actions.

    Building Trust:

    Maintain open lines of communication with your users. Regular updates about security measures, app improvements, and user feedback can help build trust and loyalty.
    Use newsletters, social media, and in-app notifications to keep users informed about changes and enhancements.
    Provide responsive customer support to address user concerns promptly. If users feel heard and valued, they are less likely to engage in fraudulent behavior.

    Implement a robust support system that allows users to report suspicious activities easily and receive timely assistance.

    Remember: Every small protection measure counts.

    Call to Action

    Are you an app developer? Start reviewing your app’s security today. Don’t wait for a fraud incident to take action.

    Want to learn more?

    • Follow security blogs
    • Attend tech conferences
    • Connect with security experts
    • Never stop learning

  • Boost Production Efficiency with Smarter Material Management

    Discover how a leading consumer goods manufacturer achieved 50% faster raw material delivery to their production lines.

    Struggling with delays and bottlenecks in your production process? Learn how our tailored solutions helped a global manufacturer:

    • Minimize material handling delays.
    • Eliminate workflow bottlenecks.
    • Enhance productivity and streamline operations.

    By optimizing raw material transport, they achieved measurable results in efficiency and production timelines.

  • Secure DevOps for Healthcare: 60% Fewer Vulnerabilities, 90% Faster Remediation

    Embedded Security Across the SDLC – Proactive DevSecOps Integration

    • Integrated Microsoft Defender into Azure DevOps and GitHub pipelines for continuous, automated security monitoring.
    • Embedded risk detection scans at the pull-request stage, ensuring vulnerabilities were caught before release.
    • Established unified security dashboards for centralized oversight across hybrid cloud environments.

    Automated Compliance & Rapid Response – Security Without Slowing Delivery

    • Automated HIPAA and SOC2 compliance checks within CI/CD workflows, reducing manual audit overhead.
    • Built incident response playbooks to block compromised code releases and accelerate remediation workflows.
    • Reduced remediation cycles by 90%, enabling developers to focus on innovation without sacrificing security.

    Strategic Outcomes – Stronger Posture, Faster Delivery

    • Achieved a 60% reduction in vulnerabilities across fragmented DevOps environments.
    • Boosted developer productivity by embedding “security by default” into pipelines.
    • Delivered a future-ready DevOps ecosystem that balances regulatory compliance, patient data safety, and rapid software delivery.

  • Transforming Infrastructure at Scale with Azure Cloud

    • Infrastructure Costs cut by 30-34% monthly, optimizing resource utilization and generating substantial savings.
    • Customer Onboarding Time reduced from 50 to 4 days, significantly accelerating the client’s ability to onboard new customers.
    • Site Provisioning Time for existing customers reduced from weeks to a few hours, streamlining operations and improving customer satisfaction.
    • Downtime affecting customers was reduced to under 30 minutes, with critical issues resolved within 1 hour and most proactively addressed before customer notification.

  • Elevating Paratransit Services: The Power of Scalable SaaS Solutions

    A Case Study on Harnessing Technology to Enhance Service Delivery and Customer Experience

    Client Overview: 

    The client, a leading passenger transportation company, provides intelligent transport systems and software solutions for the public transport sector as well as for demand response and special student transport.

    Challenge / Business Need:

    • Multiple versions of products made maintenance difficult.
    • Outdated technology stack was hard to maintain.
    • Multitenancy was needed to minimize costs for small customers.

    Solution Provided by R Systems:

    • Re-architected and developed the scheduling and dispatching paratransit application
    • Implemented newer components as .NET Core APIs running in an Azure App Services environment
    • Used Identity Server 4 to provide the flexibility to choose between Azure AD and SQL-based authentication
    • Incorporated Power BI for report generation
    • Used Event Grid, Event Bus, and Azure Functions, with data stored in Cosmos DB, to exchange instructions

    Outcome / Results:

    • REVENUE GROWTH:
      • Maximized Asset Utilization: By implementing scalable SaaS, the company optimized the use of existing assets, resulting in incremental revenue growth.
      • Enhanced Customer Satisfaction: Improved system performance and reliability led to higher customer satisfaction, encouraging repeat business and referrals.
    • COST EFFICIENCY:
      • Cost Efficiency through Multitenancy: Transitioning to a multitenant SaaS platform significantly reduced infrastructure and maintenance expenses.
      • Lower Operational Costs: Optimized database access minimized operational expenses, contributing to overall cost savings.
      • Significant Cost Savings: Up to a 55% reduction in operational costs with the advanced SaaS-based platform.
    • RISK MITIGATION:
      • Enhanced Resilience: The scalable SaaS solution provided robust disaster recovery and business continuity capabilities, minimizing downtime and ensuring service reliability.
      • Improved Security: Advanced security measures and regular updates mitigated the risk of data breaches and compliance issues.
    • INNOVATION ENABLEMENT:
      • Scalable Solutions for Growth: The need for a scalable solution was driven by the company’s growth and demand for more flexible and efficient operations.
      • Increased Focus on Innovation: Automation and streamlined processes allowed the company to concentrate on strategic initiatives, driving further business growth. 

    Technology Stack

    Conclusion:

    This case study highlights the successful modernization and migration of the client’s products to a SaaS platform, resulting in significant cost savings and improved efficiency.   

    Call to Action: 

    Streamline your paratransit operations with our SaaS approach. Contact us now to get started.

  • Automated Testing in Telecom: Challenges and How AI Can Help

    Author: Razvan Rusu

    Gen AI is a very powerful tool that simplifies complex tasks in many areas including the technology field. This article tries to answer the question: Can Gen AI reduce the complexity of testing in telecom? 

    The short answer is Yes, in multiple ways, but AI won’t do all the work for us.

    Mobile telephony is an easy-to-use service with a lot of complexity behind the scenes. Making a phone call is trivial, but this simple operation involves numerous systems and dozens of messages, exchanged from the initial device authorization to the end of the call.

    There are a few reasons why there are so many systems and messages involved:

    1. Security

      The communication takes place over an unsecured medium (wireless). Authorization and setting of the encryption keys must be performed before any call/data session. Encryption makes sure nobody can listen to your conversation or see your data transfer. Authorization, on the other hand, makes sure your phone can’t be cloned, which would allow another malicious device to receive or make calls as if it were your phone.

    2. Standardization

      The standardization for GSM is done by 3GPP (https://www.3gpp.org/about-us). The main driver for this standardization is interoperability between operators and between various vendors. An NE (Network Element) that is part of the GSM core network will work the same way for an operator in the United States as for an operator in Indonesia.

      This standardization has some obvious advantages (roaming, for instance—a service we couldn’t live without these days), but it also has some drawbacks. The architecture was split into multiple systems (Network Elements) with clearly defined functionality and message flows. All mobile operators must use these Network Elements in the same way. None of them can decide they don’t like how things are working and choose to handle calls differently, like for instance having a single system performing all the logic. Everyone must stick to what the standard specifies.

    3. Mobility and multiple generations of GSM (2G/3G/4G/5G), which must coexist

      We can make calls over a 2G/3G connection or over a 4G/5G connection, depending on the coverage provided by the mobile operator in the area where we are located. The type of connectivity used is not within our control, and we expect consistent behavior for our calls. For instance, we expect to be informed if the called party has been ported to another mobile operator, and we expect to be charged the same way regardless of the connection used for making that call. Even more, a call may start on 4G coverage as a VoLTE call and continue as a 3G call once the 4G coverage is lost. The caller shouldn't notice this transition, as for them it is the same call. However, for the mobile operator, switching from 4G to 3G is a big change that involves multiple systems and messages.

    The Challenge 

    Testing a mobile service is as easy as making and answering a phone call. Or so it seems.

    Testing using mobile phones has a few advantages:

    1. It doesn’t require any special equipment/system; no investment is needed, as normal GSM phones can be used.
    2. It doesn’t require specialized testing personnel. Anyone can use a phone, and the complexity of the systems involved in making a call is not visible during testing.
    3. It provides end-to-end testing, validating the user experience.

    This testing method appears to be simple and very effective, so it has been adopted by many mobile operators. It has even been automated, either with specialized equipment or by remotely controlling mobile phones, and there are many solutions available for this type of automation.

    If this method is effective, automated, and end-to-end, what more could be required? Well, let’s take a closer look at what this method does not cover. First of all, it checks only the edges of the solution. Did we notify all the systems that should have been notified about that call? We can’t say because this is not part of the test. 

    To make a parallel with testing an online shop: testing if the Place Order function works properly is done solely on the result page seen by the user. Whether the warehouse or the invoicing system was notified about that order is not checked. This would be unacceptable for testing an online shop. So why is it acceptable for mobile operators? We’ll discuss this a bit later.

    The second big drawback of this mobile phone testing method is the limitation imposed by the device used for these tests. Several types of tests can’t be executed:

    • Roaming tests. The test phone is typically located in an office within the country of the mobile operator, so all calls/events initiated from that phone will be national. As a funny side note, I was discussing this problem with the test lead of a mobile operator. She mentioned that when they need to test changes impacting roaming flows, they sometimes drive to the nearest border. It's a one-and-a-half-hour drive, and they must be close to the border at midnight when the maintenance window starts. It's not something they like or want to do, but there's no other way they can test roaming scenarios.
    • Tests using the reference/test network instead of the live network. In these cases, the device must use the testing infrastructure, which may only be available in dedicated test sites, sometimes even requiring the terminal to be isolated in a Faraday cage.
    • International and premium destinations. For international calls, someone needs to answer the call at the other end, which is difficult to do when the device is not under your control. Premium numbers are expensive to call or text, so they are typically skipped in manual or automated testing.
    • Long calls. If you have an offering with 2000 national minutes included, testing what happens after these minutes are depleted requires 2000 minutes of testing (about 33 hours). This makes it impossible to run such tests nightly, since they would not finish in time for the following day's testing.

    A new question arises: With all these problems, what makes this testing method so widely adopted? The answer lies in the complexity of the systems involved and the difficulty of having a test team with the required specialized technical knowledge. When running acceptance tests for Network Elements, mobile operators rely on the supplier of that NE. The supplier’s engineers possess the deep technical knowledge, and the mobile operator typically only observes and validates the process, without performing any actual testing themselves.

    At the same time, mobile operators focus on testing new functionalities, such as a new voice plan, or a new data offering (e.g. free access to Instagram and TikTok). Regression testing is only seen as a nice-to-have.

    The Solution

    There isn’t a simple solution. If one existed, it would have been already used by mobile operators. However, this doesn’t mean there is no solution. Since it’s a complex problem, the best approach is to split it. Isolate the complex technical parts from the business-driven parts. 

    The technical parts hardly ever change in terms of the systems involved and the message flows; they must be compliant with the 3GPP standards, so there isn't a lot of room for creativity. What changes from test to test are the attributes/parameters of the messages. If you have a parametrized module that sends the messages and validates the responses, all you need to do is call that module with the right parameter values. You don't need to know the protocols involved or the specific messages that will be exchanged; the module handles this complexity for you. This allows the QA team to run proper and complete testing without requiring deep technical knowledge.

    For instance, let's consider the example above: a new voice plan where calls are charged differently. When placing a call, a CAP session triggers a Diameter Ro session towards the OCS for 2G calls, or a SIP session triggers a Diameter session for VoLTE (4G) calls. If you have a module that receives as parameters the originating party (A#), the called party (B#), and the duration of the call, the QA team doesn't need to know CAP, SIP, or Diameter, even though the test suite makes use of these protocols.
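
    Such a module's interface can be sketched as follows (the function name, result fields, and the one-unit-per-second tariff are illustrative assumptions, not an actual product API):

```python
from dataclasses import dataclass

@dataclass
class CallResult:
    connected: bool
    charged_units: int

def simulate_call(a_number: str, b_number: str, duration_s: int) -> CallResult:
    """Hypothetical facade over a telco-specific test module.
    Internally this would drive the CAP/SIP signalling and the Diameter Ro
    session towards the OCS; the QA engineer only supplies A#, B#, and the
    call duration. The charging logic here is a stub: one unit per second
    of a connected national call."""
    # ... CAP/SIP/Diameter message exchanges would happen here ...
    return CallResult(connected=True, charged_units=duration_s)
```

    The QA engineer calls `simulate_call(a_number, b_number, 300)` and checks the returned charge, never touching CAP, SIP, or Diameter directly.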

    This separation allows the QA team to focus on testing functionality while simulating and validating the flows and data exchanged at telco-specific protocols. Testing becomes a bit more complicated than making a phone call, but not significantly so. The modules need to be called with the right parameters and their output needs to be validated. This can be done by an orchestrator (for instance a Shell/Python script) that takes input text files in CSV format and outputs the result in CSV format. The CSV format has several advantages:

    • It is human-readable
    • It has a very clear structure
    • It can be edited in widely used applications like Excel, where data validation can be added to reduce the risk of human error
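
    A minimal orchestrator along these lines might look like this (the column names and PASS/FAIL verdict format are assumptions for illustration; `call_module` stands in for the parametrized telco module described above):

```python
import csv
import io

def run_suite(input_csv: str, call_module) -> str:
    """Read test rows from a CSV string, invoke the parametrized call
    module, compare the actual charge against the expected one, and
    emit a result CSV with a PASS/FAIL verdict per row."""
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    writer.writerow(["a_number", "b_number", "duration_s",
                     "expected_units", "verdict"])
    for row in csv.DictReader(io.StringIO(input_csv)):
        charged = call_module(row["a_number"], row["b_number"],
                              int(row["duration_s"]))
        verdict = "PASS" if charged == int(row["expected_units"]) else "FAIL"
        writer.writerow([row["a_number"], row["b_number"],
                         row["duration_s"], row["expected_units"], verdict])
    return out.getvalue()
```

    Because both input and output are plain CSV, the same files can be reviewed in Excel and versioned alongside the test suite.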

    Having the test data (input data and expected results) in files opens the door to automation. The test execution can be easily integrated into a CI/CD pipeline. However, there is one additional thing to be considered before declaring the tests automated. The test scenarios need to be executed repeatedly and produce consistent results. They must be idempotent and repeatable to be added to an automated test suite. The steps of an idempotent test are:

    1. Setup/configure required data for the test.
    2. Execute the test steps.
    3. Validate the results.
    4. Delete/restore the data modified at step 1.
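
    The four steps above can be captured in a small wrapper that guarantees the cleanup step always runs, keeping the test idempotent (the callback-based shape is one possible design, sketched here for illustration):

```python
def run_idempotent(setup, execute, validate, teardown):
    """Run one test as the four idempotent steps: teardown always runs,
    so repeated executions start from the same state."""
    state = setup()                # 1. configure required data
    try:
        outcome = execute(state)   # 2. execute the test steps
        return validate(outcome)   # 3. validate the results
    finally:
        teardown(state)            # 4. delete/restore the data modified in step 1
```

    With this shape, a failing validation still restores the modified data, so the scenario can be re-run in the next nightly pipeline without manual cleanup.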

    How can AI help

    The success of Generative AI has created a lot of hype, and enterprises are increasingly adopting Gen AI across their organizations. ChatGPT and GitHub Copilot have proven able to generate pieces of code and have become very useful tools for software developers.

    Can Gen AI be used effectively in testing? Certainly it can, and there are two main areas where it can help. (Note: the use cases presented below are not theoretical; they have been successfully implemented.)

    1. Test case generation

      This is considered the Holy Grail of Gen AI in testing: take as input a test plan, or, even better, the specification document, and generate the test suite. While Gen AI is not yet at this point, it can, just as in software development, be used by QA engineers to develop test cases faster. The complexity isolation described above is very useful when generating test cases with AI.

      Expecting Gen AI to generate the right messages, in the right order, and with the right parameters according to 3GPP is unrealistic. And even if it could, the benefit would be limited, as new business requirements don't modify the 3GPP specifications. However, asking Gen AI to generate CSV files in a specific format from requirements expressed in natural language is a realistic expectation. For instance, you can give Gen AI the following prompt: "Verify that a national call of 5 minutes deducts 300 units from the NationalSeconds balance" or "A call of 2 minutes to +49123456789 should charge 0.012 EUR from the monetary balance".

      With some clever prompt engineering, Gen AI will generate CSV lines in the right format. This allows the QA team to focus on what they want to test rather than how the test is going to be conducted. Another benefit is significantly reducing the ramp-up effort required for new team members.

    2. Troubleshooting support

      There are situations where it's crucial to understand the specific details of what went wrong in a test case, especially during regression testing. A failure most likely means something is wrong, preventing the new release from being deployed into production, so the issue must be investigated.

      If the problem is related to the business logic introduced by the new release, it may be easier to identify the cause.  On the other hand, issues related to telco-specific protocols used during regression testing pose greater challenges, especially when the QA team lacks deep knowledge of these protocols.

      Another scenario where detailed telco understanding is crucial is when developing telco-specific modules. If the QA engineer writes a test that fails, is the failure a test problem or an application problem? The 3GPP standard and the application specifications should provide clarity in such cases. However, in practice, this isn’t always the case. Have you ever tried to read a 3GPP document? To put it mildly, it’s not the most easily readable documentation. The complexity arises because each document references another, which references another, and so on. This complexity, while justified by the technical intricacies of telco standards, can be daunting for newcomers to the field.

      Besides the standards and the project/system-specific documentation, another important source of information for the QA team is the history of tickets previously reported for that project/system. Since, in the telco world, a system is used for many years (often more than 10), these tickets provide valuable information. However, the sheer volume of tickets can be overwhelming, making it difficult, if not impossible, for a QA engineer to determine if a current problem has been previously reported.  As a result, new tickets are frequently created, further increasing the number of tickets and decreasing the likelihood of identifying similar or identical issues.

      Gen AI proves to be very useful for this problem. All we need is to create a custom knowledge base that includes:

      • Standards and protocol specifications (3GPP docs)
      • Product and project documentation
      • Tickets reported during the product/project lifecycle (from the ticketing system, e.g. JIRA)

      This way, Gen AI can quickly provide relevant information about a particular situation, indicating which parts of the documents are applicable. This saves hours or even days of digging through standards. Identifying existing tickets similar to the current failure is also extremely valuable, as these tickets include details on how the problem was solved, which might be applicable to the current situation.

      Asking questions in natural language makes the adoption of such a solution nearly instantaneous.

    Bottom Line

    Even though using Gen AI in testing is not yet mainstream, it has already been proven to facilitate the testing process. Thus, I anticipate a gradual but continuous adoption of Gen AI in testing overall, and specifically in telecom testing.