Category: Engineering blogs

  • GitHub CI/CD vs. Xcode Cloud: A Comprehensive Comparison for iOS Platform

    Source: https://faun.pub/

    Introduction

In iOS app development, continuous integration and continuous deployment (CI/CD) have become indispensable for efficient, reliable software delivery. Developers are constantly seeking the most effective CI/CD solutions to streamline their workflows and ship high-quality iOS applications. Two prominent contenders in this arena are GitHub CI/CD and Xcode Cloud. In this article, we will examine both platforms in detail, comparing their features, benefits, and limitations to help you make an informed decision for your iOS development projects.

    GitHub CI/CD

GitHub CI/CD (delivered through GitHub Actions) is an extension of the popular source code management platform GitHub. It offers a versatile and flexible CI/CD workflow for iOS applications, enabling developers to automate building, testing, and deployment. Here are some key aspects of GitHub CI/CD:

1. Workflow Configuration: GitHub CI/CD employs a YAML-based configuration file, allowing developers to define complex workflows. This provides granular control over the CI/CD pipeline, enabling the automation of multiple tasks such as building, testing, code analysis, and deployment.
2. Wide Range of Integrations: GitHub CI/CD integrates with various third-party tools and services, such as Slack, Jira, and SonarCloud, enhancing collaboration and ensuring efficient communication among team members. This extensibility lets developers incorporate their preferred tools directly into the CI/CD pipeline.
3. Scalability and Customizability: GitHub CI/CD supports parallelism, allowing multiple jobs to run concurrently. This significantly reduces overall build and test time, especially for large-scale projects. Additionally, developers can leverage custom scripts and actions to tailor the CI/CD pipeline to their specific requirements.
4. Community Support: GitHub boasts a vast community of developers who actively contribute to the CI/CD ecosystem, so developers can access a wealth of resources, tutorials, and shared workflows, expediting the adoption of CI/CD best practices.
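As a sketch of the YAML-based configuration described in point 1, a minimal GitHub Actions workflow for an iOS project might look like the following. The scheme name and simulator destination are placeholders, not from any real project; adapt them to yours.

```yaml
name: iOS CI

on: [push, pull_request]

jobs:
  build-and-test:
    runs-on: macos-latest   # GitHub-hosted macOS runner with Xcode preinstalled
    steps:
      - uses: actions/checkout@v4

      # Build and run unit tests; "MyApp" is a placeholder scheme name
      - name: Build and test
        run: |
          xcodebuild test \
            -scheme MyApp \
            -destination 'platform=iOS Simulator,name=iPhone 15'
```

Adding more jobs to this file (for example, a lint job alongside the test job) is what enables the parallel execution described in point 3.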

    Xcode Cloud

    Xcode Cloud is a cloud-based CI/CD solution designed specifically for iOS and macOS app development. Integrated into Apple’s Xcode IDE, Xcode Cloud provides an end-to-end development experience with seamless integration into the Apple ecosystem. Let’s explore the distinguishing features of Xcode Cloud:

1. Native Integration with Xcode: Xcode Cloud is tightly integrated with the Xcode IDE, offering a seamless development experience for iOS and macOS apps. This integration simplifies setup and configuration, enabling developers to trigger CI/CD workflows directly from Xcode.
2. Automated Testing and UI Testing: Xcode Cloud includes powerful testing capabilities, allowing developers to run automated unit tests and UI tests effortlessly. The platform provides comprehensive test reports with detailed insights, enabling developers to identify and resolve issues quickly.
3. Device Testing and Distribution: Xcode Cloud enables developers to leverage Apple’s extensive device testing infrastructure for concurrent testing across multiple simulators and physical devices. Moreover, it facilitates the distribution of beta builds for internal and external testing, making it easier to gather user feedback before the final release.
4. Seamless Code Signing and App Store Connect Integration: Xcode Cloud simplifies code signing, a critical aspect of iOS app development, by managing certificates and provisioning profiles automatically. It integrates with App Store Connect, automating the app submission and release process.

    Comparison

Now, let’s compare GitHub CI/CD and Xcode Cloud across several key dimensions:

    Ecosystem and Integration

    • GitHub CI/CD: Offers extensive integrations with third-party tools and services, allowing developers to integrate with various services beyond the Apple ecosystem.
    • Xcode Cloud: Excels in its native integration with Xcode and the Apple ecosystem, providing a seamless experience for iOS and macOS developers. It leverages Apple’s testing infrastructure and simplifies code signing and distribution within the Apple ecosystem.

    Flexibility and Customizability

    • GitHub CI/CD: Provides more flexibility and customizability through its YAML-based configuration files, enabling developers to define complex workflows and integrate various tools according to their specific requirements.
    • Xcode Cloud: Focuses on streamlining the development experience within Xcode, limiting customization options compared to GitHub CI/CD.

    Scalability and Parallelism

    • GitHub CI/CD: Offers robust scalability with support for parallel job execution, making it suitable for large-scale projects that require efficient job execution in parallel.
    • Xcode Cloud: Scalability is limited to Apple’s device testing infrastructure, which may not provide the same level of scalability for non-Apple platforms or projects with extensive parallel job execution requirements.

    Community and Resources

    • GitHub CI/CD: Benefits from a large and vibrant community, offering extensive resources, shared workflows, and active community support. Developers can leverage the knowledge and experience shared by the community.
    • Xcode Cloud: As a newer offering, Xcode Cloud is still building its community ecosystem. It may have a smaller community compared to GitHub CI/CD, resulting in fewer shared workflows and resources. However, developers can still rely on Apple’s developer forums and support channels for assistance.

    Pricing

    • GitHub CI/CD: GitHub offers both free and paid plans. The pricing depends on the number of parallel jobs and additional features required. The paid plans provide more scalability and advanced features.
• Xcode Cloud: Xcode Cloud requires membership in the Apple Developer Program, which carries an annual fee, and is priced in tiers based on compute hours. The specific pricing details are available on Apple’s official website.

    Performance

    • GitHub CI/CD: The performance of GitHub CI/CD depends on the underlying infrastructure and resources allocated to the CI/CD pipeline. It provides scalability and parallelism options for faster job execution.
    • Xcode Cloud: Xcode Cloud leverages Apple’s testing infrastructure, which is designed for iOS and macOS app development. It offers optimized performance and reliability for testing and distribution processes within the Apple ecosystem.

    Conclusion

Choosing between GitHub CI/CD and Xcode Cloud for your iOS development projects depends on your specific needs and priorities. If you value native integration with Xcode and the Apple ecosystem, seamless code signing, and distribution, Xcode Cloud provides a comprehensive solution. On the other hand, if flexibility, customizability, and an extensive ecosystem of integrations are crucial, GitHub CI/CD offers a powerful CI/CD platform for iOS apps. Consider your project’s unique requirements and evaluate the features and limitations of each platform to make an informed decision that aligns with your development workflow and goals.

  • Agile Estimation and Planning: Driving Success in Software Projects

    Agile software development has revolutionized the way projects are planned and executed. In Agile, estimation and planning are crucial to ensure successful project delivery. This blog post will delve into Agile estimation techniques specific to software projects, including story points, velocity, and capacity planning. We will explore how these techniques contribute to effective planning in Agile environments, enabling teams to deliver value-driven solutions efficiently.

    Understanding Agile Estimation:

    Agile estimation involves assessing work effort, complexity, and duration in a collaborative and iterative manner. Traditional time-based estimation is replaced by relative sizing, allowing flexibility and adaptability. Story points, a popular estimation unit, represent user stories’ relative effort or complexity. They facilitate prioritization and comparison, aiding in effective backlog management.

    The Importance of Agile Estimation:

    Accurate estimation is fundamental to successful project planning. Agile estimation differs from traditional approaches, focusing on relative sizing rather than precise time-based estimations. This allows teams to account for uncertainty and complexity, promoting transparency and collaboration.

    1. Better Decision Making: By understanding the relative effort and complexity of user stories or tasks, teams can make informed decisions about prioritization, resource allocation, and trade-offs.
    2. Enhanced Predictability: Agile estimation enables teams to predict how much work they can complete within a given time, facilitating reliable planning and stakeholder management.
    3. Improved Team Collaboration: Estimation in Agile is a collaborative process that involves the entire team, and it fosters open discussions, shared understanding, and collective ownership of project goals.

    Story Points: The Currency of Agile Estimation:

    Story points are a popular estimation technique used in Agile projects, and they provide a relative measure of effort and complexity for user stories or tasks. Unlike time-based estimates, story points focus on the inherent complexity and the effort required to complete the work. The Fibonacci sequence (1, 2, 3, 5, 8, etc.) or T-shirt sizes (XS, S, M, L, XL) are common scales for assigning story points.

    1. Benefits of Story Points: Story points offer several advantages over time-based estimation:
    • Relative Sizing: Story points enable teams to compare and prioritize tasks based on their relative effort rather than precise time frames. This approach avoids the pitfalls of underestimation or overestimation caused by fixed-time estimates.
    • Encourages Collaboration: Story point estimation involves the entire team, promoting healthy discussions, knowledge sharing, and alignment of expectations.
    • Focuses on Complexity: Story points emphasize the complexity of work, considering factors such as risk, uncertainty, and technical challenges.
2. Estimation Techniques: Agile teams utilize various techniques to assign story points, such as Planning Poker, in which team members collectively discuss and debate the effort required for each user story. The goal is to reach a consensus and arrive at a shared understanding of the work’s complexity.
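As a toy illustration of relative sizing (the function name and scale are our own, not from any Agile tool), a helper that snaps a raw effort guess onto a Fibonacci-style scale could look like this:

```kotlin
import kotlin.math.abs

// A Fibonacci-style story point scale, as commonly used in Planning Poker.
val storyPointScale = listOf(1, 2, 3, 5, 8, 13, 21)

// Snap a raw effort guess to the nearest value on the scale.
fun toStoryPoints(rawEstimate: Double): Int =
    storyPointScale.minByOrNull { abs(it - rawEstimate) } ?: 1

fun main() {
    println(toStoryPoints(4.0))  // 3 (ties resolve to the first minimum)
    println(toStoryPoints(9.5))  // 8
}
```

The widening gaps in the scale deliberately discourage false precision for larger items, which is the point of relative sizing.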

    Velocity: Harnessing Team Performance:

Velocity is a powerful metric, reported by most Agile project management tools, that measures a team’s average output in story points completed during a specific time frame, usually a sprint or iteration. It serves as a baseline for future planning and helps teams assess their performance.

    1. Benefits of Velocity Tracking: Tracking velocity provides several advantages:
    • Predictability: By analyzing past velocity, teams can forecast how much work they will likely complete in subsequent iterations. This enables them to set realistic goals and manage stakeholder expectations.
    • Resource Allocation: Velocity aids in effective resource management, allowing teams to distribute work evenly and avoid overloading or underutilizing team members.
    • Continuous Improvement: Monitoring velocity over time enables teams to identify trends, bottlenecks, and opportunities for improvement. It facilitates a culture of continuous learning and adaptation.
2. Factors Influencing Velocity: Several factors can influence a team’s velocity, including team composition, skills, experience, availability, and external dependencies. Understanding these factors helps teams adjust their planning and make data-driven decisions.
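The velocity arithmetic behind this kind of forecasting can be sketched in a few lines; the sprint numbers below are invented for illustration:

```kotlin
import kotlin.math.ceil

// Average story points completed per sprint over recent iterations.
fun averageVelocity(completedPoints: List<Int>): Double = completedPoints.average()

// Forecast how many sprints the remaining backlog will take at that pace.
fun sprintsRemaining(backlogPoints: Int, velocity: Double): Int =
    ceil(backlogPoints / velocity).toInt()

fun main() {
    val velocity = averageVelocity(listOf(21, 24, 18))  // 63 / 3 = 21.0
    println(sprintsRemaining(100, velocity))            // ceil(100 / 21.0) = 5
}
```

In practice teams forecast with a velocity range (e.g., best and worst recent sprints) rather than a single average, to communicate uncertainty to stakeholders.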

    Capacity Planning: Balancing Resources and Workload:

    Capacity planning is the process of determining the team’s available resources and their ability to take on work. It involves balancing the team’s capacity with the estimated effort required for the project.

    1. Resource Assessment: Capacity planning begins by evaluating the team’s composition, skill sets, and availability. Understanding each team member’s capacity helps project managers allocate work effectively and ensure an even distribution of tasks.
    2. Managing Dependencies: Capacity planning also considers external dependencies, such as stakeholder availability, vendor dependencies, or third-party integrations. By considering these factors, teams can mitigate risks and avoid unnecessary delays.
    3. Agile Tools for Capacity Planning: Agile project management tools offer features to assist with capacity planning, allowing teams to visualize and allocate work based on the team’s availability. This helps prevent overcommitment and promotes a sustainable work pace.
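As a rough sketch of the capacity math described above (the focus factor and availability figures are assumptions, not from any specific tool):

```kotlin
// A team member with days available in the coming sprint (illustrative).
data class Member(val name: String, val availableDays: Double)

// Raw person-days scaled by a focus factor that discounts meetings,
// support duty, and interruptions; 0.6–0.8 is a common assumption.
fun sprintCapacityDays(team: List<Member>, focusFactor: Double): Double =
    team.sumOf { it.availableDays } * focusFactor

fun main() {
    val team = listOf(Member("Ana", 9.0), Member("Ben", 10.0), Member("Cy", 6.0))
    println(sprintCapacityDays(team, 0.75))  // 25.0 * 0.75 = 18.75
}
```

Comparing this capacity figure against the estimated effort of candidate backlog items is what prevents the overcommitment mentioned above.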

    Effective Planning in Agile Environments:

    Successful Agile planning requires adopting best practices that align with Agile principles and values. Some essential practices include:

Refining the Backlog:

Regularly groom and refine the product backlog to ensure user stories are well-defined, appropriately prioritized, and estimated. This lets the team plan with greater clarity and respond effectively to changing requirements; continuous refinement also helps identify dependencies, risks, and opportunities for improvement.

Collaborative Estimation:

    Encourage collaboration and involvement of the entire team in the estimation process. Techniques like Planning Poker foster discussions and consensus-building, leveraging the diverse perspectives and expertise within the team. Collaborative estimation ensures shared understanding and buy-in, leading to more accurate estimates.

Iterative Refinement: Continuously Improving Estimation Accuracy:

    Agile estimation is not a one-time activity but an ongoing process of refinement. Teams learn from experience and continuously improve their estimation accuracy. Conduct retrospectives at the end of each sprint to reflect on the planning and estimation process. Identify areas for improvement and experiment with different techniques or approaches. Encourage feedback from the team and incorporate lessons learned into future planning efforts.

    Case-Studies:

    Following are real-world examples and case studies that highlight the benefits of Agile estimation and planning in various software projects:

    Spotify: Scaling Agile with Squads, Tribes, and Guilds:

    Spotify, a renowned music streaming platform, adopted Agile methodologies to manage their growing engineering teams. They introduced the concept of squads, which are small, cross-functional teams responsible for delivering specific features. Each squad estimates and plans their work using Agile techniques such as story points and velocity. This approach allows Spotify to maintain flexibility, foster collaboration, and continuously deliver new features and improvements.

Salesforce: Agile Planning for Enhanced Customer Satisfaction:

    Salesforce, a cloud-based CRM software provider, implemented Agile estimation and planning techniques to enhance customer satisfaction and product delivery. They adopted a backlog-driven approach, where requirements were gathered in a prioritized backlog. Agile teams estimated the backlog items using relative sizing techniques, such as Planning Poker. By involving stakeholders in the estimation process, Salesforce improved transparency, set realistic expectations, and delivered value incrementally to their customers.

NASA’s Mars Rover Curiosity: Agile in High-Stakes Space Exploration:

    The software development process for NASA’s Mars Rover Curiosity mission applied Agile principles to ensure the successful exploration of the red planet. The team used Agile estimation techniques to estimate the effort required for each feature, focusing on iterations and continuous integration. Agile planning allowed them to adapt to changing requirements and allocate resources effectively. The iterative development approach enabled frequent feedback loops and ensured the software met the mission’s evolving needs.

    GitHub: Agile Planning in a Collaborative Development Environment:

    GitHub, a leading platform for software development collaboration, employs Agile estimation and planning practices to manage its extensive project portfolio. They break down work into small, manageable user stories and estimate them using T-shirt sizing or affinity estimation techniques. By visualizing project progress on Kanban boards and leveraging metrics like lead time and cycle time, GitHub ensures efficient planning, prioritization, and continuous improvement across their development teams.

Zappos: Agile Planning in E-Commerce:

    Zappos, an online shoe and clothing retailer, embraced Agile methodologies to optimize their software development and improve customer experience. Zappos efficiently plans and prioritizes features that align with customer needs and business goals by leveraging user story mapping and release planning techniques. Agile estimation helps them determine the effort required for each feature, facilitating resource allocation and ensuring timely releases and updates.

    Common Challenges and Pitfalls in Agile Estimation and Planning:

    Implementing Agile estimation and planning practices can improve project delivery by fostering collaboration, adaptability, and transparency. However, teams may encounter specific challenges or pitfalls during the implementation process. By being aware of these potential issues, teams can better anticipate and address them, improving the overall success of Agile projects. Here are some common challenges and pitfalls to watch out for:

    Unrealistic Expectations:

    One of the most significant challenges is setting realistic expectations about the accuracy of estimates and the ability to plan for uncertainties. Agile embraces change, and it is essential to communicate to stakeholders that estimates are not fixed commitments but rather the best guess based on the available information at a given time.

    Insufficient Stakeholder Involvement:

    Agile estimation and planning rely on active involvement and collaboration among all stakeholders, including the development team, product owners, and business representatives. Lack of stakeholder engagement can lead to misaligned expectations, inadequate requirements, and poor decision-making during the estimation and planning process.

    Incomplete or Unclear Requirements:

    Agile estimation and planning heavily depend on a clear understanding of project requirements. If requirements are vague, ambiguous, or incomplete, estimating accurately and planning effectively becomes challenging. Teams should strive to have well-defined user stories or product backlog items before estimation and planning activities commence.

    Overcommitting or Undercommitting:

Agile encourages self-organizing teams to determine their capacity and commit to a realistic amount of work for each iteration or sprint. Overcommitting can lead to burnout, quality issues, and missed deadlines, while undercommitting can result in inefficient resource utilization and a lack of progress. Balancing workload and capacity requires careful consideration, continuous feedback, and a focus on sustainable delivery.

    Resistance to Change:

    Agile adoption often requires a shift in mindset and culture within the organization. Resistance to change from team members, stakeholders, or management can impede the successful implementation of Agile estimation and planning practices. Addressing resistance through education, training, and highlighting the benefits and value of Agile approaches is vital.

    By acknowledging these common challenges and pitfalls, teams can anticipate and proactively mitigate potential issues. Agile estimation and planning are iterative processes that benefit from continuous learning, collaboration, and adaptability. By addressing these challenges head-on, teams can enhance their ability to deliver successful projects while maintaining transparency, agility, and stakeholder satisfaction.

    Conclusion:

    Remember that Agile planning is a continuous and adaptive process, emphasizing collaboration, value delivery, and flexibility. In the ever-evolving world of software development, Agile estimation and planning serve as the compass that guides teams toward successful project outcomes. By harnessing the power of estimation techniques tailored for Agile environments, teams can navigate through uncertainties, prioritize work effectively, and optimize their delivery process, ultimately driving customer satisfaction and project success.

  • Unlocking Cross-Platform Development with Kotlin Multiplatform Mobile (KMM)

    In the fast-paced and ever-changing world of software development, the task of designing applications that can smoothly operate on various platforms has become a significant hurdle. Developers frequently encounter a dilemma where they must decide between constructing distinct codebases for different platforms or opting for hybrid frameworks that come with certain trade-offs.

Kotlin Multiplatform (KMP) is a feature of the Kotlin language and toolchain that simplifies cross-platform development by bridging the gap between platforms. This game-changing technology has emerged as a powerful solution for creating cross-platform applications.

    Kotlin Multiplatform Mobile (KMM) is a subset of KMP that provides a specific framework and toolset for building cross-platform mobile applications using Kotlin. KMM is developed by JetBrains to simplify the process of building mobile apps that can run seamlessly on multiple platforms.

    In this article, we will take a deep dive into Kotlin Multiplatform Mobile, exploring its features and benefits and how it enables developers to write shared code that runs natively on multiple platforms.

    What is Kotlin Multiplatform Mobile (KMM)?

    With KMM, developers can share code between Android and iOS platforms, eliminating the need for duplicating efforts and maintaining separate codebases. This significantly reduces development time and effort while improving code consistency and maintainability.

    KMM offers support for a wide range of UI frameworks, libraries, and app architectures, providing developers with flexibility and options. It can seamlessly integrate with existing Android projects, allowing for the gradual adoption of cross-platform development. Additionally, KMM projects can be developed and tested using familiar build tools, making the transition to KMM as smooth as possible.

    KMM vs. Other Platforms

Compared with other popular cross-platform mobile development platforms, such as Flutter, React Native, and Xamarin, KMM’s distinguishing trait is that it shares business logic in Kotlin while each platform keeps a fully native UI.


    Advantages of Utilizing Kotlin Multiplatform (KMM) in Projects

    Code sharing: Encourages code reuse and reduces duplication, leading to faster development.

Faster time-to-market: Accelerates mobile app development by reducing the amount of platform-specific code that must be written and maintained.

    Consistency: Ensures consistency across platforms for better user experience.

    Collaboration between Android and iOS teams: Facilitates collaboration between Android and iOS development teams to improve efficiency.

    Access to Native APIs: Allows developers to access platform-specific APIs and features.

    Reduced maintenance overhead: Shared codebase makes maintenance easier and more efficient.

    Existing Kotlin and Android ecosystem: Provides access to libraries, tools, and resources for developers.

    Gradual adoption: Facilitates cross-platform development by sharing modules and components.

    Performance and efficiency: Generates optimized code for each platform, resulting in efficient and performant applications.

    Community and support: Benefits from active community, resources, tutorials, and support.

    Limitations of Using KMM in Projects

Limited platform-specific APIs: Shared code cannot call platform-specific APIs directly; reaching them requires expect/actual declarations or platform-specific modules.

Platform-dependent setup and tooling: Although the shared code is platform-agnostic, project setup and tooling (Xcode, Gradle, CocoaPods) remain platform-dependent.

    Limited interoperability with existing platform code: Interoperability between Kotlin Multiplatform and existing platform code can be challenging.

Development and debugging experience: Code sharing works well, but the development and debugging experience differs between platforms and can lag behind single-platform tooling.

    Limited third-party library support: There aren’t many ready-to-use libraries available, so developers must implement from scratch or look for alternatives.

    Setting Up Environment for Cross-Platform Development in Android Studio

    Developing Kotlin Multiplatform Mobile (KMM) apps as an Android developer is relatively straightforward. You can use Android Studio, the same IDE that you use for Android app development. 

    To get started, we will need to install the KMM plugin through the IDE plugin manager, which is a simple step. The advantage of using Android Studio for KMM development is that we can create and run iOS apps from within the same IDE. This can help streamline the development process, making it easier to build and test apps across multiple platforms.

    In order to enable the building and running of iOS apps through Android Studio, it’s necessary to have Xcode installed on your system. Xcode is an Integrated Development Environment (IDE) used for iOS programming.

    To ensure that all dependencies are installed correctly for our Kotlin Multiplatform Mobile (KMM) project, we can use kdoctor. This tool can be installed via brew by running the following command in the command-line:

    $ brew install kdoctor 

    Note: If you don’t have Homebrew yet, please install it.

Once we have all the necessary tools installed on our system, including Android Studio, Xcode, the JDK, the Kotlin Multiplatform Mobile plugin, and the Kotlin plugin, we can run kdoctor in the Android Studio terminal or in any command-line tool by entering the following command:

    $ kdoctor 

    This will confirm that all required dependencies are properly installed and configured for our KMM project.

kdoctor will perform comprehensive checks and provide a detailed report with the results. If any dependency is missing or misconfigured, the report will flag the corresponding issue.

If kdoctor reports a locale warning, create a ~/.zprofile file and add the locale exports to it:

$ touch ~/.zprofile

$ echo 'export LANG=en_US.UTF-8' >> ~/.zprofile

$ echo 'export LC_ALL=en_US.UTF-8' >> ~/.zprofile

    After making the above necessary changes to our environment, we can run kdoctor again to verify that everything is set up correctly. Once kdoctor confirms that all dependencies are properly installed and configured, we are done.

    Building Biometric Face & Fingerprint Authentication Application

    Let’s explore Kotlin Multiplatform Mobile (KMM) by creating an application for face and fingerprint authentication. Here our aim is to leverage KMM’s potential by developing shared code for both Android and iOS platforms. This will promote code reuse and reduce redundancy, leading to optimized code for each platform.

    Set Up an Android project

    To initiate a new project, we will launch Android Studio, select the Kotlin Multiplatform App option from the New Project template, and click on “Next.”

    We will add the fundamental application information, such as the name of the application and the project’s location, on the following screen.

Lastly, we choose how the iOS app will consume the shared code and click on “Next.”

For the iOS app, the dependency can be managed either through the Regular framework option (recommended) or through the CocoaPods dependency manager.

    After clicking the “Finish” button, the KMM project is created successfully and ready to be utilized.

    After finishing the Gradle sync process, we can execute both the iOS and Android apps by simply clicking the run button located in the toolbar.

A KMM project is organized into three directories: shared, androidApp, and iosApp.

    androidApp: It contains Android app code and follows the typical structure of a standard Android application.

    iosApp: It contains iOS application code, which can be opened in Xcode using the .xcodeproj file.

    shared: It contains code and resources that are shared between the Android (androidApp) and iOS (iosApp) platforms. It allows developers to write platform-independent logic and components that can be reused across both platforms, reducing code duplication and improving development efficiency.

    Launch the iOS app and establish a connection with the framework.

    Before proceeding with iOS app development, ensure that both Xcode and Cocoapods are installed on your system.

Open the root project folder of the KMM application (KMM_Biometric_App) created in Android Studio and navigate to the iosApp folder. Within the iosApp folder, locate the .xcodeproj file and double-click it to open the project in Xcode.

    After launching the iosApp in Xcode, the next step is to establish a connection between the framework and the iOS application. To do this, you will need to access the iOS project settings by double-clicking on the project name. Once you are in the project settings, navigate to the Build Phases tab and select the “+” button to add a new Run Script Phase.


    Add the following script:

cd "$SRCROOT/.."
./gradlew :shared:embedAndSignAppleFrameworkForXcode

    Move the Run Script phase before the Compile Sources phase.

    Navigate to the All build settings on the Build Settings tab and locate the Search Paths section. Within this section, specify the Framework Search Path:

    $(SRCROOT)/../shared/build/xcode-frameworks/$(CONFIGURATION)/$(SDK_NAME)

    In the Linking section of the Build Settings tab, specify the Other Linker flags:

    $(inherited) -framework shared

    Compile the project in Xcode. If all the settings are configured correctly, the project should build successfully.

    Implement Biometric Authentication in the Android App

    To enable Biometric Authentication, we will utilize the BiometricPrompt component available in the Jetpack Biometric library. This component simplifies the process of implementing biometric authentication, but it is only compatible with Android 6.0 (API level 23) and later versions. If we require support for earlier Android versions, we must explore alternative approaches.

    Biometric Library:

implementation("androidx.biometric:biometric-ktx:1.2.0-alpha05")

    To add the Biometric Dependency for Android development, we must include it in the androidMain of sourceSets in the build.gradle file located in the shared folder. This step is specific to Android development.

// shared/build.gradle.kts

// …
sourceSets {
    val androidMain by getting {
        dependencies {
            implementation("androidx.biometric:biometric-ktx:1.2.0-alpha05")
        }
        // …
    }
}
// …

    Next, we will generate the FaceAuthenticator class within the commonMain folder, which will allow us to share the Biometric Authentication business logic between the Android and iOS platforms.

    // shared/commonMain/FaceAuthenticator

    expect class FaceAuthenticator {
       fun isDeviceHasBiometric(): Boolean
       fun authenticateWithFace(callback: (Boolean) -> Unit)
    }

    In shared code, the “expect” keyword signifies an expected behavior or interface. It indicates a declaration that is expected to be implemented differently on each platform. By using “expect,” you establish a contract or API that the platform-specific implementations must satisfy.

    The “actual” keyword is utilized to provide the platform-specific implementation for the expected behavior or interface defined with the “expect” keyword. It represents the concrete implementation that varies across different platforms. By using “actual,” you supply the code that fulfills the contract established by the “expect” declaration.

There are three types of authenticators, defined at the level of granularity supported by BiometricManager and BiometricPrompt.

    Multiple authenticators, such as BIOMETRIC_STRONG | DEVICE_CREDENTIAL | BIOMETRIC_WEAK, can be represented as a single integer by combining their types using bitwise OR.

    BIOMETRIC_STRONG: Any biometric (e.g., fingerprint, iris, or face) on the device that meets or exceeds the requirements for Class 3 (formerly Strong), as defined by the Android CDD.

    BIOMETRIC_WEAK: Any biometric (e.g., fingerprint, iris, or face) on the device that meets or exceeds the requirements for Class 2 (formerly Weak), as defined by the Android CDD.

    DEVICE_CREDENTIAL: Authentication using a screen lock credential—the user’s PIN, pattern, or password.
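To make the bitwise-OR combination described above concrete, here is a small plain-Java sketch. The constant values (0x000F, 0x00FF, 0x8000) mirror those documented for BiometricManager.Authenticators; the class name is ours for illustration only:

```java
public class AuthenticatorFlags {
    // Values mirrored from androidx.biometric.BiometricManager.Authenticators
    public static final int BIOMETRIC_STRONG = 0x000F;
    public static final int BIOMETRIC_WEAK = 0x00FF;   // superset of STRONG
    public static final int DEVICE_CREDENTIAL = 0x8000;

    public static void main(String[] args) {
        // Combine authenticator types into a single int with bitwise OR
        int allowed = BIOMETRIC_STRONG | DEVICE_CREDENTIAL;
        System.out.println(Integer.toHexString(allowed)); // prints "800f"

        // Check membership with bitwise AND
        System.out.println((allowed & DEVICE_CREDENTIAL) != 0); // prints "true"
    }
}
```

Note that because BIOMETRIC_WEAK is a superset of BIOMETRIC_STRONG, `allowed & BIOMETRIC_STRONG` is non-zero whenever a strong biometric is allowed.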

    Now let’s create an actual implementation of FaceAuthenticator class in the androidMain folder of the shared folder.

    // shared/androidMain/FaceAuthenticator

    actual class FaceAuthenticator(context: FragmentActivity) {
       actual fun isDeviceHasBiometric(): Boolean {
           // code to check biometric available
       }
    
       actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
           // code to authenticate using biometric
       }
    }

Add the following code to the isDeviceHasBiometric() function to determine whether the device supports biometric authentication.

actual class FaceAuthenticator(context: FragmentActivity) {

    var activity: FragmentActivity = context

    @RequiresApi(Build.VERSION_CODES.R)
    actual fun isDeviceHasBiometric(): Boolean {
        val biometricManager = BiometricManager.from(activity)
        when (biometricManager.canAuthenticate(BIOMETRIC_STRONG or BIOMETRIC_WEAK)) {
            BiometricManager.BIOMETRIC_SUCCESS -> {
                Log.d("FaceAuthenticator", "App can authenticate using biometrics.")
                return true
            }

            BiometricManager.BIOMETRIC_ERROR_NO_HARDWARE -> {
                Log.e("FaceAuthenticator", "No biometric features available on this device.")
                return false
            }

            BiometricManager.BIOMETRIC_ERROR_HW_UNAVAILABLE -> {
                Log.e("FaceAuthenticator", "Biometric features are currently unavailable.")
                return false
            }

            BiometricManager.BIOMETRIC_ERROR_NONE_ENROLLED -> {
                // Prompt the user to create credentials that the app accepts
                Log.e("FaceAuthenticator", "No biometrics enrolled; launching enrollment.")
                val enrollIntent = Intent(Settings.ACTION_BIOMETRIC_ENROLL).apply {
                    putExtra(
                        Settings.EXTRA_BIOMETRIC_AUTHENTICATORS_ALLOWED,
                        BIOMETRIC_STRONG or BIOMETRIC_WEAK
                    )
                }
                startActivityForResult(activity, enrollIntent, 100, null)
            }

            BiometricManager.BIOMETRIC_ERROR_SECURITY_UPDATE_REQUIRED -> {
                Log.e(
                    "FaceAuthenticator",
                    "A security vulnerability has been discovered; the sensor is unavailable until a security update addresses the issue."
                )
            }

            BiometricManager.BIOMETRIC_ERROR_UNSUPPORTED -> {
                Log.e(
                    "FaceAuthenticator",
                    "The user can't authenticate because the specified options are incompatible with the current Android version."
                )
            }

            BiometricManager.BIOMETRIC_STATUS_UNKNOWN -> {
                Log.e(
                    "FaceAuthenticator",
                    "Unable to determine whether the user can authenticate."
                )
            }
        }
        return false
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // code to authenticate using biometric
    }
}

In the provided code snippet, an instance of BiometricManager is created, and its canAuthenticate() method is invoked to determine whether the user can authenticate with an authenticator that satisfies the specified requirements. Pass the same bitwise combination of authenticator types here that you later pass to setAllowedAuthenticators() when building the prompt.

    To perform biometric authentication, insert the following code into the authenticateWithFace() method.

actual class FaceAuthenticator(context: FragmentActivity) {

    var activity: FragmentActivity = context

    @RequiresApi(Build.VERSION_CODES.R)
    actual fun isDeviceHasBiometric(): Boolean {
        // ... same implementation as in the previous listing ...
    }

    @RequiresApi(Build.VERSION_CODES.P)
    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {

        // Create prompt info to set the dialog details.
        // Note: DEVICE_CREDENTIAL must not be combined with setNegativeButtonText()
        val promptInfo = BiometricPrompt.PromptInfo.Builder()
            .setTitle("Authentication using biometric")
            .setSubtitle("Authenticate using face/fingerprint")
            .setAllowedAuthenticators(BIOMETRIC_STRONG or BIOMETRIC_WEAK)
            .setNegativeButtonText("Cancel")
            .build()

        // Create a BiometricPrompt to receive authentication callback results
        val biometricPrompt = BiometricPrompt(activity, activity.mainExecutor,
            object : BiometricPrompt.AuthenticationCallback() {
                override fun onAuthenticationError(
                    errorCode: Int,
                    errString: CharSequence,
                ) {
                    super.onAuthenticationError(errorCode, errString)
                    Toast.makeText(activity, "Authentication error: $errString", Toast.LENGTH_SHORT)
                        .show()
                    callback(false)
                }

                override fun onAuthenticationSucceeded(
                    result: BiometricPrompt.AuthenticationResult,
                ) {
                    super.onAuthenticationSucceeded(result)
                    Toast.makeText(activity, "Authentication succeeded!", Toast.LENGTH_SHORT).show()
                    callback(true)
                }

                override fun onAuthenticationFailed() {
                    super.onAuthenticationFailed()
                    Toast.makeText(activity, "Authentication failed", Toast.LENGTH_SHORT).show()
                    callback(false)
                }
            })

        // Authenticate using the biometric prompt
        biometricPrompt.authenticate(promptInfo)
    }

}

In the code above, the BiometricPrompt.PromptInfo.Builder gathers the arguments displayed on the system-provided biometric dialog.

    The setAllowedAuthenticators() function enables us to indicate the authenticators that are permitted for biometric authentication.

// Create prompt Info to set prompt details
val promptInfo = BiometricPrompt.PromptInfo.Builder()
    .setTitle("Authentication using biometric")
    .setSubtitle("Authenticate using face/fingerprint")
    .setAllowedAuthenticators(BIOMETRIC_STRONG or BIOMETRIC_WEAK)
    .setNegativeButtonText("Cancel")
    .build()

It is not possible to combine .setAllowedAuthenticators(BIOMETRIC_WEAK or DEVICE_CREDENTIAL) with .setNegativeButtonText("Cancel") in a BiometricPrompt.PromptInfo.Builder instance: when DEVICE_CREDENTIAL is among the allowed authenticators, the system replaces the negative button with its own option to fall back to the screen lock, so setting a negative button yourself raises an error.

When only biometric authenticators are allowed, as in .setAllowedAuthenticators(BIOMETRIC_WEAK or BIOMETRIC_STRONG), a negative button text such as "Cancel" is required, giving the user a way to dismiss the prompt.
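For completeness, a PromptInfo that also allows the device screen lock includes DEVICE_CREDENTIAL and omits setNegativeButtonText(). This is a minimal sketch using the same builder API as above; the system supplies its own cancel/fallback affordance in this mode:

```kotlin
// PromptInfo allowing biometrics or the device screen lock.
// Note: no setNegativeButtonText() here -- it must not be set
// when DEVICE_CREDENTIAL is among the allowed authenticators.
val credentialPromptInfo = BiometricPrompt.PromptInfo.Builder()
    .setTitle("Authentication using biometric")
    .setSubtitle("Authenticate using face/fingerprint or screen lock")
    .setAllowedAuthenticators(BIOMETRIC_STRONG or DEVICE_CREDENTIAL)
    .build()
```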

    The BiometricPrompt object facilitates biometric authentication and provides an AuthenticationCallback to handle the outcomes of the authentication process, indicating whether it was successful or encountered a failure.

    val biometricPrompt = BiometricPrompt(activity, activity.mainExecutor,
                object : BiometricPrompt.AuthenticationCallback() {
                    override fun onAuthenticationError(
                        errorCode: Int,
                        errString: CharSequence,
                    ) {
                        super.onAuthenticationError(errorCode, errString)
                        Toast.makeText(activity, "Authentication error: $errString", Toast.LENGTH_SHORT)
                            .show()
                        callback(false)
                    }
    
                    override fun onAuthenticationSucceeded(
                        result: BiometricPrompt.AuthenticationResult,
                    ) {
                        super.onAuthenticationSucceeded(result)
                        Toast.makeText(activity, "Authentication succeeded!", Toast.LENGTH_SHORT).show()
                        callback(true)
                    }
    
                    override fun onAuthenticationFailed() {
                        super.onAuthenticationFailed()
                        Toast.makeText(activity, "Authentication failed", Toast.LENGTH_SHORT).show()
                        callback(false)
                    }
                })
    
            //Authenticate using biometric prompt
            biometricPrompt.authenticate(promptInfo)

    Now, we have completed the coding of the shared code for Android in the androidMain folder. To utilize this code, we can create a new file named LoginActivity.kt within the androidApp folder.

    // androidApp/LoginActivity

class LoginActivity : AppCompatActivity() {

    @RequiresApi(Build.VERSION_CODES.R)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_login)

        val authenticate = findViewById<Button>(R.id.authenticate_button)
        authenticate.setOnClickListener {
            val faceAuthenticatorImpl = FaceAuthenticator(this)
            if (faceAuthenticatorImpl.isDeviceHasBiometric()) {
                faceAuthenticatorImpl.authenticateWithFace { success ->
                    if (success) {
                        Log.d("LoginActivity", "Authentication Successful")
                    } else {
                        Log.d("LoginActivity", "Authentication Failed")
                    }
                }
            }
        }
    }
}

Implement Biometric Authentication in the iOS App

For authentication on iOS, we use a dedicated framework: the Local Authentication framework.

    The Local Authentication framework provides a way to integrate biometric authentication (such as Touch ID or Face ID) and device passcode authentication into your app. This framework allows you to enhance the security of your app by leveraging the biometric capabilities of the device or the device passcode.

Now, let’s create the actual implementation of the FaceAuthenticator class in the iosMain folder of the shared module.

    // shared/iosMain/FaceAuthenticator

actual class FaceAuthenticator {
   actual fun isDeviceHasBiometric(): Boolean {
       // code to check biometric availability
   }

   actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
       // code to authenticate using biometrics
   }
}

    Add the following code to the isDeviceHasBiometric() function to determine whether the device supports biometric authentication or not.

actual class FaceAuthenticator {

    actual fun isDeviceHasBiometric(): Boolean {
        // Check if biometric or passcode authentication is available
        val context = LAContext()
        return memScoped {
            val error = alloc<ObjCObjectVar<NSError?>>()
            context.canEvaluatePolicy(LAPolicyDeviceOwnerAuthentication, error = error.ptr)
        }
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // code to authenticate using biometrics
    }
}

    In the above code, LAContext class is part of the Local Authentication framework in iOS. It represents a context for evaluating authentication policies and handling biometric or passcode authentication. 

    LAPolicy represents different authentication policies that can be used with the LAContext class. The LAPolicy enum defines the following policies:

    .deviceOwnerAuthenticationWithBiometrics

    This policy allows the user to authenticate using biometric authentication, such as Touch ID or Face ID. If the device supports biometric authentication and the user has enrolled their biometric data, the authentication prompt will appear for biometric verification.

    .deviceOwnerAuthentication 

    This policy allows the user to authenticate using either biometric authentication (if available) or the device passcode. If biometric authentication is supported and the user has enrolled their biometric data, the prompt will appear for biometric verification. Otherwise, the device passcode will be used for authentication.

    We have used the LAPolicyDeviceOwnerAuthentication policy constant, which authenticates either by biometry or the device passcode.

    We have used the canEvaluatePolicy(_:error:) method to check if the device supports biometric authentication and if the user has added any biometric information (e.g., Touch ID or Face ID).

    To perform biometric authentication, insert the following code into the authenticateWithFace() method.

    // shared/iosMain/FaceAuthenticator

actual class FaceAuthenticator {

    actual fun isDeviceHasBiometric(): Boolean {
        // ... same implementation as in the previous listing ...
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // Authenticate using biometrics or the device passcode
        val context = LAContext()
        val reason = "Authenticate using face"

        if (isDeviceHasBiometric()) {
            // Perform authentication; the reply block runs asynchronously
            context.evaluatePolicy(
                LAPolicyDeviceOwnerAuthentication,
                localizedReason = reason
            ) { success: Boolean, nsError: NSError? ->
                if (!success) {
                    println(nsError?.localizedDescription ?: "Failed to authenticate")
                }
                callback(success)
            }
        } else {
            // Neither biometrics nor a passcode is available
            callback(false)
        }
    }

}

The primary purpose of LAContext is to evaluate authentication policies, such as biometric or device passcode authentication. The main method for this is evaluatePolicy(_:localizedReason:reply:).

This method triggers an authentication request and returns the result in the completion block. The localizedReason parameter is a message that explains why authentication is required and is shown during the authentication process.

    When using evaluatePolicy(_:localizedReason:reply:), we may have the option to fall back to device passcode authentication or cancel the authentication process. We can handle these scenarios by inspecting the LAError object passed in the error parameter of the completion block:

if let error = error as? LAError {
    switch error.code {
    case .userFallback:
        // User tapped the fallback button; present a passcode entry UI
        break
    case .userCancel:
        // User canceled the authentication
        break
    default:
        // Handle other error cases as needed
        break
    }
}

    That concludes the coding of the shared code for iOS in the iosMain folder. We can utilize this by creating LoginView.swift in the iosApp folder.

struct LoginView: View {
    @State private var isFaceAuthenticated = false
    private let faceAuthenticator = FaceAuthenticator()

    var body: some View {
        Button(action: {
            if faceAuthenticator.isDeviceHasBiometric() {
                faceAuthenticator.authenticateWithFace { isSuccess in
                    isFaceAuthenticated = isSuccess.boolValue
                    print("Result is \(isFaceAuthenticated)")
                }
            }
        }) {
            Text("Authenticate")
                .padding()
                .background(Color.blue)
                .foregroundColor(.white)
                .cornerRadius(10)
        }
    }
}

    This ends our implementation of biometric authentication using the KMM application that runs smoothly on both Android and iOS platforms. If you’re interested, you can find the code for this project on our GitHub repository. We would love to hear your thoughts and feedback on our implementation.

    Conclusion

    It is important to acknowledge that while KMM offers numerous advantages, it may not be suitable for every project. Applications with extensive platform-specific requirements or intricate UI components may still require platform-specific development. Nonetheless, KMM can still prove beneficial in such scenarios by facilitating the sharing of non-UI code and minimizing redundancy.

    On the whole, Kotlin Multiplatform Mobile is an exciting framework that empowers developers to effortlessly create cross-platform applications. It provides an efficient and adaptable solution for building robust and high-performing mobile apps, streamlining development processes, and boosting productivity. With its expanding ecosystem and strong community support, KMM is poised to play a significant role in shaping the future of mobile app development.

  • Unlocking Seamless Communication: BLE Integration with React Native for Device Connectivity

    In today’s interconnected world, where smart devices have become an integral part of our daily lives, the ability to communicate with Bluetooth Low Energy (BLE) enabled devices opens up a myriad of possibilities for innovative applications. In this blog, we will explore the exciting realm of communicating with BLE-enabled devices using React Native, a popular cross-platform framework for mobile app development. Whether you’re a seasoned React Native developer or just starting your journey, this blog will equip you with the knowledge and skills to establish seamless communication with BLE devices, enabling you to create powerful and engaging user experiences. So, let’s dive in and unlock the potential of BLE communication in the world of React Native!

    BLE (Bluetooth Low Energy)

    Bluetooth Low Energy (BLE) is a wireless communication technology designed for low-power consumption and short-range connectivity. It allows devices to exchange data and communicate efficiently while consuming minimal energy. BLE has gained popularity in various industries, from healthcare and fitness to home automation and IoT applications. It enables seamless connectivity between devices, allowing for the development of innovative solutions. With its low energy requirements, BLE is ideal for battery-powered devices like wearables and sensors. It offers simplified pairing, efficient data transfer, and supports various profiles for specific use cases. BLE has revolutionized the way devices interact, enabling a wide range of connected experiences in our daily lives.

    Here is a comprehensive overview of how mobile applications establish connections and facilitate communication with BLE devices.

    What will we be using?

    react-native - 0.71.6
    react - 18.0.2
    react-native-ble-manager - 10.0.2

    Note: We are assuming you already have the React Native development environment set up on your system; if not, please refer to the React Native guide for instructions on setting up the RN development environment.

    What are we building?

    Together, we will construct a sample mobile application that showcases the integration of Bluetooth Low Energy (BLE) technology. This app will search for nearby BLE devices, establish connections with them, and facilitate seamless message exchanges between the mobile application and the chosen BLE device. By embarking on this project, you will gain practical experience in building an application that leverages BLE capabilities for effective communication. Let’s commence this exciting journey of mobile app development and BLE connectivity!

    Setup

    Before setting up the react-native-ble manager, let’s start by creating a React Native application using the React Native CLI. Follow these steps:

    Step 1: Ensure that you have Node.js and npm (Node Package Manager) installed on your system.

    Step 2: Open your command prompt or terminal and navigate to the directory where you want to create your React Native project.

    Step 3: Run the following command to create a new React Native project:

    npx react-native@latest init RnBleManager

    Step 4: Wait for the project setup to complete. This might take a few minutes as it downloads the necessary dependencies.

    Step 5: Once the setup is finished, navigate into the project directory:

    cd RnBleManager

    Step 6: Congratulations! You have successfully created a new React Native application using the React Native CLI.

    Now you are ready to set up the react-native-ble manager and integrate it into your React Native project.

    Installing react-native-ble-manager

    If you use NPM -
    npm i --save react-native-ble-manager
    
    With Yarn -
    yarn add react-native-ble-manager

    In order to enable Android applications to utilize Bluetooth and location services for detecting and communicating with BLE devices, it is essential to incorporate the necessary permissions within the Android platform.

Add these permissions to the AndroidManifest.xml file located at android/app/src/main/AndroidManifest.xml.
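The permission list itself is not reproduced here, so the following is a sketch inferred from the permissions this article requests at runtime later on (ACCESS_FINE_LOCATION, BLUETOOTH_SCAN, BLUETOOTH_CONNECT, BLUETOOTH_ADVERTISE); check the react-native-ble-manager documentation against your target SDK before copying:

```xml
<!-- Location access is required for BLE scanning on older Android versions -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />

<!-- Android 12+ (API 31) Bluetooth permissions -->
<uses-permission android:name="android.permission.BLUETOOTH_SCAN" />
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
<uses-permission android:name="android.permission.BLUETOOTH_ADVERTISE" />

<!-- Legacy Bluetooth permissions for Android 11 and below -->
<uses-permission android:name="android.permission.BLUETOOTH" android:maxSdkVersion="30" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" android:maxSdkVersion="30" />
```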

    Integration

    At this stage, having successfully created a new React Native application, installed the react-native-ble-manager, and configured it to function seamlessly on Android, it’s time to proceed with integrating the react-native-ble-manager into your React Native application. Let’s dive into the integration process to harness the power of BLE functionality within your app.

    BleConnectionManager

    To ensure that our application can access the BLE connection state and facilitate communication with the BLE device, we will implement BLE connection management in the global state. This will allow us to make the connection management accessible throughout the entire codebase. To achieve this, we will create a ContextProvider called “BleConnectionContextProvider.” By encapsulating the BLE connection logic within this provider, we can easily share and access the connection state and related functions across different components within the application. This approach will enhance the efficiency and effectiveness of managing BLE connections. Let’s proceed with implementing the BleConnectionContextProvider to empower our application with seamless BLE communication capabilities.

    This context provider will possess the capability to access and manage the current BLE state, providing a centralized hub for interacting with the BLE device. It will serve as the gateway to establish connections, send and receive data, and handle various BLE-related functionalities. By encapsulating the BLE logic within this context provider, we can ensure that all components within the application have access to the BLE device and the ability to communicate with it. This approach simplifies the integration process and facilitates efficient management of the BLE connection and communication throughout the entire application.

    Let’s proceed with creating a context provider equipped with essential state management functionalities. This context provider will effectively handle the connection and scanning states, maintain the BLE object, and manage the list of peripherals (BLE devices) discovered during the application’s scanning process. By implementing this context provider, we will establish a robust foundation for seamlessly managing BLE connectivity and communication within the application.

    NOTE: Although not essential for the example at hand, implementing global management of the BLE connection state allows us to demonstrate its universal management capabilities.

    ....
    BleManager.disconnect(BLE_SERVICE_ID)
      .then(() => {
        dispatch({ type: "disconnected", payload: { peripheral } })
      })
      .catch((error) => {
        // Failure code
        console.log(error);
      });
    ....

    Prior to integrating the BLE-related components, it is crucial to ensure that the mobile app verifies whether the:

    1. Location permissions are granted and enabled
    2. Mobile device’s Bluetooth is enabled

To accomplish this, we will implement a small method called requestBlePermissions that requests all the necessary permissions from the user. We will then call this method as soon as our context provider initializes, within the useEffect hook in the BleConnectionContextProvider. Doing so ensures that the required permissions are obtained by the mobile app before proceeding with the integration of BLE functionalities.

    import {PermissionsAndroid, Platform} from "react-native"
    import BleManager from "react-native-ble-manager"
    
      const requestBlePermissions = async (): Promise<boolean> => {
        if (Platform.OS === "android" && Platform.Version < 23) {
          return true
        }
        try {
          const status = await PermissionsAndroid.requestMultiple([
            PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION,
            PermissionsAndroid.PERMISSIONS.BLUETOOTH_CONNECT,
            PermissionsAndroid.PERMISSIONS.BLUETOOTH_SCAN,
            PermissionsAndroid.PERMISSIONS.BLUETOOTH_ADVERTISE,
          ])
          return (
            status[PermissionsAndroid.PERMISSIONS.BLUETOOTH_CONNECT] == "granted" &&
            status[PermissionsAndroid.PERMISSIONS.BLUETOOTH_SCAN] == "granted" &&
            status[PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION] == "granted"
          )
        } catch (e) {
      console.error("Location Permissions Denied ", e)
          return false
        }
      }
    
    // effects
    useEffect(() => {
      const initBle = async () => {
        await requestBlePermissions()
        BleManager.enableBluetooth()
      }
      
      initBle()
    }, [])

After granting all the required permissions and enabling Bluetooth, the next step is to start the BleManager. To accomplish this, please add the following line of code after the enableBluetooth() call in the aforementioned useEffect:

    // initialize BLE module
    BleManager.start({ showAlert: false })

    By including this code snippet, the BleManager will be initialized, facilitating the smooth integration of BLE functionality within your application.

    Now that we have obtained the necessary permissions, enabled Bluetooth, and initiated the Bluetooth manager, we can proceed with implementing the functionality to scan and detect BLE peripherals. 

    We will now incorporate the code that enables scanning for BLE peripherals. This will allow us to discover and identify nearby BLE devices. Let’s dive into the implementation of this crucial step in our application’s BLE integration process.

    To facilitate scanning and stopping the scanning process for BLE devices, as well as handle various events related to the discovered peripherals, scan stop, and BLE disconnection, we will create a method along with the necessary event listeners.

In addition, state management is essential to effectively handle the connection and scanning states, as well as maintain the list of scanned devices. To accomplish this, let’s incorporate the following code into the BleConnectionContextProvider. This will ensure seamless management of the aforementioned states and facilitate efficient tracking of scanned devices.

    Let’s proceed with implementing these functionalities to ensure smooth scanning and handling of BLE devices within our application.

    export const BLE_NAME = "SAMPLE_BLE"
    export const BLE_SERVICE_ID = "5476534d-1213-1212-1212-454e544f1212"
    export const BLE_READ_CHAR_ID = "00105354-0000-1000-8000-00805f9b34fb"
    export const BLE_WRITE_CHAR_ID = "00105352-0000-1000-8000-00805f9b34fb"
    
    export const BleContextProvider = ({
      children,
    }: {
      children: React.ReactNode
    }) => {
      // variables
      const BleManagerModule = NativeModules.BleManager
      const bleEmitter = new NativeEventEmitter(BleManagerModule)
      const { setConnectedDevice } = useBleStore()
    
      // State management
      const [state, dispatch] = React.useReducer(
        (prevState: BleState, action: any) => {
          switch (action.type) {
            case "scanning":
              return {
                ...prevState,
                isScanning: action.payload,
              }
            case "connected":
              return {
                ...prevState,
                connectedBle: action.payload.peripheral,
                isConnected: true,
              }
            case "disconnected":
              return {
                ...prevState,
                connectedBle: undefined,
                isConnected: false,
              }
            case "clearPeripherals": {
              // create a fresh Map instead of mutating the previous state
              return {
                ...prevState,
                peripherals: new Map(),
              }
            }
            case "addPerpheral": {
              // copy the Map so React can detect the state change
              const peripherals = new Map(prevState.peripherals)
              peripherals.set(action.payload.id, action.payload.peripheral)
              return {
                ...prevState,
                peripherals: peripherals,
              }
            }
            default:
              return prevState
          }
        },
        initialState
      )
    
      // methods
      const getPeripheralName = (item: any) => {
        if (item.advertising) {
          if (item.advertising.localName) {
            return item.advertising.localName
          }
        }
    
        return item.name
      }
    
      // start to scan peripherals
      const startScan = () => {
        // skip if a scan is currently in progress
        console.log("Start scanning ", state.isScanning)
        if (state.isScanning) {
          return
        }
    
        dispatch({ type: "clearPeripherals" })
    
        // then re-scan it
        BleManager.scan([], 10, false)
          .then(() => {
            console.log("Scanning...")
            dispatch({ type: "scanning", payload: true })
          })
          .catch((err) => {
            console.error(err)
          })
      }
    
      const connectBle = (peripheral: any, callback?: (name: string) => void) => {
        if (peripheral && peripheral.name && peripheral.name == BLE_NAME) {
          BleManager.connect(peripheral.id)
            .then((resp) => {
              dispatch({ type: "connected", payload: { peripheral } })
              // callback from the caller
              callback && callback(peripheral.name)
              setConnectedDevice(peripheral)
            })
            .catch((err) => {
              console.log("failed connecting to the device", err)
            })
        }
      }
    
      // handle discovered peripheral
      const handleDiscoverPeripheral = (peripheral: any) => {
        console.log("Got ble peripheral", getPeripheralName(peripheral))
    
        if (peripheral.name && peripheral.name == BLE_NAME) {
          dispatch({
            type: "addPerpheral",
            payload: { id: peripheral.id, peripheral },
          })
        }
      }
    
      // handle stop scan event
      const handleStopScan = () => {
        console.log("Scan is stopped")
        dispatch({ type: "scanning", payload: false })
      }
    
      // handle disconnected peripheral
      const handleDisconnectedPeripheral = (data: any) => {
        console.log("Disconnected from " + data.peripheral)
    
        //
        dispatch({ type: "disconnected" })
      }
    
      const handleUpdateValueForCharacteristic = (data: any) => {
        console.log(
          "Received data from: " + data.peripheral,
          "Characteristic: " + data.characteristic,
          "Data: " + toStringFromBytes(data.value)
        )
      }
    
      // effects
      useEffect(() => {
        const initBle = async () => {
          await requestBlePermissions()
          BleManager.enableBluetooth()
        }
    
        initBle()
    
        // add ble listeners on mount
        const BleManagerDiscoverPeripheral = bleEmitter.addListener(
          "BleManagerDiscoverPeripheral",
          handleDiscoverPeripheral
        )
        const BleManagerStopScan = bleEmitter.addListener(
          "BleManagerStopScan",
          handleStopScan
        )
        const BleManagerDisconnectPeripheral = bleEmitter.addListener(
          "BleManagerDisconnectPeripheral",
          handleDisconnectedPeripheral
        )
        const BleManagerDidUpdateValueForCharacteristic = bleEmitter.addListener(
          "BleManagerDidUpdateValueForCharacteristic",
          handleUpdateValueForCharacteristic
        )
        // remove listeners on unmount
        return () => {
          BleManagerDiscoverPeripheral.remove()
          BleManagerStopScan.remove()
          BleManagerDisconnectPeripheral.remove()
          BleManagerDidUpdateValueForCharacteristic.remove()
        }
      }, [])
    
    // render
      return (
        <BleContext.Provider
          value={{
            ...state,
            startScan: startScan,
            connectBle: connectBle,
          }}
        >
          {children}
        </BleContext.Provider>
      )
    }

    NOTE: Take note of the properties of the BLE device we intend to search for and connect to, namely BLE_NAME, BLE_SERVICE_ID, BLE_READ_CHAR_ID, and BLE_WRITE_CHAR_ID. Knowing these properties beforehand is crucial: they let you restrict the search to specific BLE devices and connect to the desired BLE service and characteristics for reading and writing data.

    For instance, take a look at the handleDiscoverPeripheral method. In this method, we filter the discovered peripherals based on their device name, matching it with the predefined BLE_NAME we mentioned earlier. As a result, this approach allows us to obtain a list of devices that specifically match the given name, narrowing down the search to the desired devices only. 

    Additionally, you have the option to scan peripherals using the service IDs of the Bluetooth devices. This means you can specify specific service IDs to filter the discovered peripherals during the scanning process. By doing so, you can focus the scanning on Bluetooth devices that provide the desired services, enabling more targeted and efficient scanning operations.

    Excellent! We now have all the necessary components in place for scanning and connecting to the desired BLE device. Let’s proceed by adding the user interface (UI) elements that will allow users to initiate the scan, display the list of scanned devices, and enable connection to the selected device. By implementing these UI components, we will create a seamless user experience for scanning, device listing, and connection within our application.

    Discovering and Establishing Connections with BLE Devices

    Let’s create a new UI component/Page that will handle scanning, listing, and connecting to the BLE device. This page will have:

    • A Scan button to call the scan function
    • A simple FlatList to list the selected BLE devices and
    • A method to connect to the selected BLE device when the user clicks on any BLE item row from the list

    Create HomeScreen.tsx in the src folder and add the following code: 

    import React, {useCallback, useEffect, useMemo} from 'react';
    import {
      ActivityIndicator,
      Alert,
      Button,
      FlatList,
      StyleSheet,
      Text,
      TouchableOpacity,
      View,
    } from 'react-native';
    import {useBleContext} from './BleContextProvider';
    
    interface HomeScreenProps {}
    
    const HomeScreen: React.FC<HomeScreenProps> = () => {
      const {
        isConnected,
        isScanning,
        peripherals,
        connectedBle,
        startScan,
        connectBle,
      } = useBleContext();
    
      // Effects
      const scannedbleList = useMemo(() => {
        const list = [];
        if (connectedBle) list.push(connectedBle);
        if (peripherals) list.push(...Array.from(peripherals.values()));
        return list;
      }, [peripherals, isScanning, connectedBle]);
    
      useEffect(() => {
        if (!isConnected) {
          startScan && startScan();
        }
      }, []);
    
      // Methods
      const getRssi = (rssi: number) => {
        return !!rssi
          ? Math.pow(10, (-69 - rssi) / (10 * 2)).toFixed(2) + ' m'
          : 'N/A';
      };
    
      const onBleConnected = (name: string) => {
        Alert.alert('Device connected', `Connected to ${name}.`, [
          {
            text: 'Ok',
            onPress: () => {},
            style: 'default',
          },
        ]);
      };
      const BleListItem = useCallback((item: any) => {
        // define name and rssi
        return (
          <TouchableOpacity
            style={{
              flex: 1,
              flexDirection: 'row',
              justifyContent: 'space-between',
              padding: 16,
              backgroundColor: '#2A2A2A',
            }}
            onPress={() => {
              connectBle && connectBle(item.item, onBleConnected);
            }}>
            <Text style={{textAlign: 'left', marginRight: 8, color: 'white'}}>
              {item.item.name}
            </Text>
            <Text style={{textAlign: 'right', color: 'white'}}>{getRssi(item.item.rssi)}</Text>
          </TouchableOpacity>
        );
      }, []);
    
      const ItemSeparator = useCallback(() => {
        return <View style={styles.divider} />;
      }, []);
    
      // render
      // Ble List and scan button
      return (
        <View style={styles.container}>
          {/* Loader when app is scanning */}
          {isScanning ? (
            <ActivityIndicator size={'small'} />
          ) : (
            <>
              {/* Ble devices List View */}
              {scannedbleList && scannedbleList.length > 0 ? (
                <>
                  <Text style={styles.listHeader}>Discovered BLE Devices</Text>
                  <FlatList
                    data={scannedbleList}
                    renderItem={({item}) => <BleListItem item={item} />}
                    ItemSeparatorComponent={ItemSeparator}
                  />
                </>
              ) : (
                <View style={styles.emptyList}>
                  <Text style={styles.emptyListText}>
                    No Bluetooth devices discovered. Please tap Scan to search for
                    BLE devices.
                  </Text>
                </View>
              )}
    
              {/* Scan button */}
              <View style={styles.btnContainer}>
                <Button
                  title="Scan"
                  color={'black'}
                  disabled={isConnected || isScanning}
                  onPress={() => {
                    startScan && startScan();
                  }}
                />
              </View>
            </>
          )}
        </View>
      );
    };
    
    const styles = StyleSheet.create({
      container: {
        flex: 1,
        flexDirection: 'column',
      },
      listHeader: {
        padding: 8,
        color: 'black',
      },
      emptyList: {
        flex: 1,
        justifyContent: 'center',
        alignItems: 'center',
      },
      emptyListText: {
        padding: 8,
        textAlign: 'center',
        color: 'black',
      },
      btnContainer: {
        marginTop: 10,
        marginHorizontal: 16,
        bottom: 10,
        alignItems: 'flex-end',
      },
      divider: {
        height: 1,
        width: '100%',
        marginHorizontal: 8,
        backgroundColor: '#1A1A1A',
      },
    });
    
    export default HomeScreen;
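
    A quick note on the getRssi helper above: it estimates distance from signal strength using the log-distance path-loss model, distance = 10^((measuredPower - RSSI) / (10 * N)). The measured power of -69 dBm at one meter and the environmental factor N = 2 are assumptions baked into the snippet; a standalone sketch that makes them explicit:

    ```typescript
    // Estimate distance (in meters) from RSSI using the log-distance
    // path-loss model. measuredPower (-69 dBm at 1 m) and the environmental
    // factor n = 2 are assumptions; calibrate both for your hardware.
    const rssiToDistance = (rssi: number, measuredPower = -69, n = 2): number =>
      Math.pow(10, (measuredPower - rssi) / (10 * n))
    ```

    With these assumed constants, an RSSI equal to the measured power maps to roughly one meter, and every 20 dBm drop multiplies the estimate by ten.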

    Now, open App.tsx and replace its entire contents with the following code. Here we remove the default boilerplate that the React Native CLI generated when creating the project and instead wrap the app in our BleContextProvider, rendering HomeScreen inside it.

    import React from 'react';
    import {SafeAreaView, StatusBar, useColorScheme, View} from 'react-native';
    
    import {Colors} from 'react-native/Libraries/NewAppScreen';
    import {BleContextProvider} from './BleContextProvider';
    import HomeScreen from './HomeScreen';
    
    function App(): JSX.Element {
      const isDarkMode = useColorScheme() === 'dark';
    
      const backgroundStyle = {
        backgroundColor: isDarkMode ? Colors.darker : Colors.lighter,
      };
    
      return (
        <SafeAreaView style={backgroundStyle}>
          <StatusBar
            barStyle={isDarkMode ? 'light-content' : 'dark-content'}
            backgroundColor={backgroundStyle.backgroundColor}
          />
          <BleContextProvider>
            <View style={{height: '100%', width: '100%'}}>
              <HomeScreen />
            </View>
          </BleContextProvider>
        </SafeAreaView>
      );
    }
    
    export default App;

    Running the application on an Android device: Upon launching the app, you will be presented with an empty list message accompanied by a scan button. Simply tap the scan button to retrieve a list of available BLE peripherals within the range of your mobile device. By selecting a specific BLE device from the list, you can establish a connection with it.

    Awesome! Now we are able to scan, detect, and connect to BLE devices, but there is more to it than just connecting. We can write data to and read the required information from BLE devices, and based on that information, mobile applications or backend services can perform several other operations.

    For example, if you are wearing a connected BLE device that monitors your blood pressure every hour, and a reading goes beyond a threshold, the app can trigger a call to a doctor or family member so that precautionary measures can be taken as soon as possible.

    Communicating with BLE devices

    For seamless communication with a BLE device, the mobile app must possess precise knowledge of the services and characteristics associated with the device. A BLE device typically presents multiple services, each comprising various distinct characteristics. These services and characteristics can be collaboratively defined and shared by the team responsible for manufacturing the BLE device.

    In BLE communication, comprehending the characteristics and their properties is crucial, as they serve distinct purposes. Certain characteristics facilitate writing data to the BLE device, while others enable reading data from it. Gaining a comprehensive understanding of these characteristics and their properties is vital for effectively interacting with the BLE device and ensuring seamless communication.
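
    To make this concrete, a BLE device's GATT layout can be pictured as services containing characteristics, each with its own properties. The sketch below is a simplified illustrative model (not the library's API), reusing the IDs defined earlier:

    ```typescript
    // Simplified, illustrative model of a GATT hierarchy: a device exposes
    // services, and each service exposes characteristics whose properties
    // say whether you can read, write, or subscribe to notifications.
    interface BleCharacteristic {
      uuid: string
      properties: { read: boolean; write: boolean; notify: boolean }
    }

    interface BleService {
      uuid: string
      characteristics: BleCharacteristic[]
    }

    const sampleService: BleService = {
      uuid: "5476534d-1213-1212-1212-454e544f1212", // BLE_SERVICE_ID
      characteristics: [
        {
          uuid: "00105354-0000-1000-8000-00805f9b34fb", // BLE_READ_CHAR_ID
          properties: { read: true, write: false, notify: true },
        },
        {
          uuid: "00105352-0000-1000-8000-00805f9b34fb", // BLE_WRITE_CHAR_ID
          properties: { read: false, write: true, notify: false },
        },
      ],
    }
    ```

    In practice this layout comes from the device's firmware team; the mobile app only needs to know which UUIDs to target for reading and writing.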

    Reading data from BLE device when BLE sends data

    Once the mobile app successfully establishes a connection with the BLE device, it initiates the retrieval of available services. It activates the listener to begin receiving notifications from the BLE device. This process takes place within the callback of the “connect BLE” method, ensuring that the app seamlessly retrieves the necessary information and starts listening for important updates from the connected BLE device.

    const connectBle = (peripheral: any, callback?: (name: string) => void) => {
        if (peripheral && peripheral.name && peripheral.name == BLE_NAME) {
          BleManager.connect(peripheral.id)
            .then((resp) => {
              dispatch({ type: "connected", payload: { peripheral } })
              // callback from the caller
              callback && callback(peripheral.name)
              setConnectedDevice(peripheral)
    
              // retrieve services and start read notification
              BleManager.retrieveServices(peripheral.id).then((resp) => {
                BleManager.startNotification(
                  peripheral.id,
                  BLE_SERVICE_ID,
                  BLE_READ_CHAR_ID
                )
                  .then(console.log)
                  .catch(console.error)
              })
            })
            .catch((err) => {
              console.log("failed connecting to the device", err)
            })
        }
      }

    Consequently, the application will promptly receive notifications whenever the BLE device writes data to the designated characteristic within the specified service.

    Reading and writing data to BLE from a mobile device

    To establish communication between the mobile app and the BLE device, we will implement new methods within BleContextProvider. These methods will facilitate reading data from and writing data to the BLE device. By exposing these methods through the provider's context value, we ensure that the app has a reliable means of interacting with the BLE device and can exchange information as required.

    interface BleState {
      isConnected: boolean
      isScanning: boolean
      peripherals: Map<string, any>
      list: Array<any>
      connectedBle: Peripheral | undefined
      startScan?: () => void
      connectBle?: (peripheral: any, callback?: (name: string) => void) => void
      readFromBle?: (id: string) => void
      writeToBle?: (
        id: string,
        content: string,
        count: number,
        buttonNumber: ButtonNumber,
        callback?: (count: number, buttonNumber: ButtonNumber) => void
      ) => void
    }
    
    export const BleContextProvider = ({
      children,
    }: {
      children: React.ReactNode
    }) => {
        ....
        
      const writeToBle = (
        id: string,
        content: string,
        count: number,
        buttonNumber: ButtonNumber,
        callback?: (count: number, buttonNumber: ButtonNumber) => void
      ) => {
        BleManager.retrieveServices(id).then((response) => {
          BleManager.writeWithoutResponse(
            id,
            BLE_SERVICE_ID,
            BLE_WRITE_CHAR_ID,
            toByteArray(content)
          )
            .then((res) => {
              callback && callback(count, buttonNumber)
            })
            .catch((err) => console.log("Error writing to BLE device - ", err))
        })
      }
    
      const readFromBle = (id: string) => {
        BleManager.retrieveServices(id).then((response) => {
          BleManager.read(id, BLE_SERVICE_ID, BLE_READ_CHAR_ID)
            .then((resp) => {
              console.log("Read from BLE", toStringFromBytes(resp))
            })
            .catch((err) => {
              console.error("Error Reading from BLE", err)
            })
        })
      }
      ....
    
      // render
      return (
        <BleContext.Provider
          value={{
            ...state,
            startScan: startScan,
            connectBle: connectBle,
            writeToBle: writeToBle,
            readFromBle: readFromBle,
          }}
        >
          {children}
        </BleContext.Provider>
      )
    }

    NOTE: Before every write, read, or start-notification call, you need to call the retrieveServices method first.
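
    The write and read snippets above rely on two helpers, toByteArray and toStringFromBytes, whose definitions are not shown. A minimal sketch, assuming simple single-byte (ASCII/Latin-1) payloads, could look like this; real firmware may require a proper UTF-8 or binary encoding instead:

    ```typescript
    // Convert a string to the byte array BLE characteristics expect,
    // and back again. Assumes single-byte characters; swap in a real
    // UTF-8 encoder/decoder if your payloads need it.
    const toByteArray = (content: string): number[] =>
      Array.from(content).map((ch) => ch.charCodeAt(0))

    const toStringFromBytes = (bytes: number[]): string =>
      bytes.map((b) => String.fromCharCode(b)).join("")
    ```

    In a real project, these would live in a shared utilities module and be imported wherever BLE payloads are encoded or decoded.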

    Disconnecting BLE connection

    Once you are done with the BLE services, you can close the connection using the disconnect method provided by the library.

    ....
    // disconnect expects the peripheral id (not the service id)
    BleManager.disconnect(peripheral.id)
      .then(() => {
        dispatch({ type: "disconnected" })
      })
      .catch((error) => {
        // Failure code
        console.log(error)
      })
    ....

    Additionally, the React Native BLE Manager library offers various other methods that can enhance the application’s functionality. These include the createBond method, which facilitates the pairing of the BLE device with the mobile app, the stopNotification method, which ceases receiving notifications from the device, and the readRSSI method, which retrieves the received signal strength indicator (RSSI) of the device. For a more comprehensive understanding of the library and its capabilities, I recommend exploring further details on the React Native BLE Manager library documentation here: https://www.npmjs.com/package/react-native-ble-manager

    Conclusion

    We delved into the fascinating world of communicating with BLE (Bluetooth Low Energy) using the React Native BLE Manager library. Then we explored the power of BLE technology and how it can be seamlessly integrated into React Native applications to enable efficient and low-power communication between devices.

    Using the React Native BLE Manager library, we explored essential functionalities such as scanning for nearby BLE devices, establishing connections, discovering services and characteristics, and exchanging data. We also dived into more advanced features like managing connections and handling notifications for a seamless user experience.

    It’s important to remember that BLE technology is continually evolving, and there may be additional libraries and frameworks available for BLE communication in the React Native ecosystem. As you progress on your journey, I encourage you to explore other resources, keep up with the latest advancements, and stay connected with the vibrant community of developers working with BLE and React Native.

    I hope this blog post has inspired you to explore the immense potential of BLE communication in your React Native applications. By harnessing the power of BLE, you can create innovative, connected experiences that enhance the lives of your users and open doors to new possibilities.

    Thank you for taking the time to read through this blog!

  • ARMed to Entertain: Why the Consumer Electronics Industry loves the ARM microcontroller

    Introduction

    We live in a world where convenience is king. Millions of electronic devices work in tandem to simplify our lives. The brain in these devices is the microcontroller. Today, we’re going to talk about the ARM microcontroller, which is the heart and soul of consumer electronic devices like smartphones, tablets, multimedia players, and wearable devices.

    To start off, there are two main processor architecture designs: RISC (reduced instruction set computer) and CISC (complex instruction set computer). ARM is the poster child for RISC; in fact, RISC is part of its name, Advanced RISC Machine.

    Its highly optimized and power-efficient architecture makes it indispensable in today’s world. Let’s look at its design in more detail.

    A Powerful Brain for Embedded Systems

    A mobile phone or tablet is a shining example of an extremely portable computing device.

    It’s a great way to keep your life organized, communicate with practically anyone, consume media content, and enjoy unlimited games and entertainment. These capabilities just keep improving over time.

    But there is a silent struggle between applications and the hardware they run on. We have all experienced that annoying lag on our smartphones, not to mention the battery giving up on us when we need it the most. Luckily, ARM is packed with features to help us manage this.

    Let’s Talk Simplicity

    An ‘assembly instruction set’ is the language understood by the ARM controller. Its design plays a crucial role in enabling us to perform a task in an efficient and optimized manner. ARM has a reduced instruction set (RISC). This does not mean there are fewer instructions available for use; it means a single instruction does less work, i.e., a small atomic task.

    As an example, consider adding two numbers: in a RISC design, this involves separate instructions for loading the operands, adding them, and storing the result, whereas a CISC design could handle all of this in a single instruction. A simple instruction set does not require complex hardware design. This enables an ARM controller to use fewer transistors and take up less silicon area, which reduces power consumption, critical for battery-operated devices, along with corresponding savings in cost. But RISC controllers need a greater number of instructions to execute a task as compared to CISC, and the compiler design for generating machine code from higher-level languages such as C is more complex as a result.

    Hence one needs to write optimized code to extract the best performance from ARM.

    Dealing with the Energy Vampire

    An hour of intense gaming drains your battery and leaves you scrambling for a wall charger or power bank. This is because a lot of computations are done in specially designed hardware units in ARM, which need extra power. These units barely consume any power when your device is idle. This means there is a direct relation between the intensity of computations and energy consumption.

    Every microcontroller needs a clock pulse, which is comparable to the heartbeat of the controller. It governs the speed at which instructions are executed and helps the controller keep time while performing tasks or governing the rate at which peripherals are run. The commencement and duration of any action that a processor may perform can be expressed in terms of clock cycles. A lower clock rate reduces the power consumption, which is critical for embedded devices but unfortunately also leads to a drop in performance. An instruction pipeline helps to boost performance and throughput while enabling a lower clock rate to be used. This can be compared to the functioning of a turbocharger in a car engine, where the real saving is in the benefits of using a smaller capacity engine but boosting it to match one that is larger and more powerful.

    With careful programming, we can increase the instruction throughput to do a lot more in a single clock cycle. Such judicious use of the system clock preserves battery life, reducing the need to charge the battery frequently.

    Busy as a Bee

    Another critical feature that speeds up execution is the instruction pipeline. It introduces parallelism in the execution of instructions. All instructions go through the fetch, decode, and execute stages which involve loading the instruction from program memory, understanding what task it performs, and finally, its execution. We have an instruction in each stage of the pipeline at any point in time. This increases throughput and speeds up code execution. Imagine you are at work, and each time you complete a task, your manager has a new one kept ready so that you are never idle. Yes, that would be the perfect analogy for the instruction pipeline. It reduces the wastage of clock cycles by ensuring there are always instructions fetched and available for execution.

    A Math Specialist

    A core part of computing involves transforming data and making decisions. Speed and accuracy are paramount in such situations. ARM has you covered with hardware units for arithmetic and logical instructions, enhanced DSP, and NEON technology for parallel processing of data. In short, all the bells and whistles needed to handle everything from music playback to powering drone platforms.  

    The NEON coprocessor is capable of executing multiple math operations simultaneously.

    It reduces the computational load on the main ARM controller. The design of these math units allows us to balance the tradeoff between computational speed and accuracy. Depending on the application requirement, we may choose to perform 4×16-bit multiply operations in parallel via NEON instead of four 32-bit multiply operations sequentially in the ARM ALU (arithmetic and logic unit). The precision of the final result is reduced due to the use of 16-bit operands in NEON, but the gain in computational speed is significant. The ability to provide such multimedia acceleration is what makes ARM the main choice for portable audio, video, and gaming applications.

    Conclusion‍

    We see that the system designers have attempted to balance performance, power consumption, and cost to produce a powerful embedded computing machine. As portability and efficiency demands increase, we can see ARM’s influence continue to expand.

    An application, if designed appropriately to leverage all of ARM’s features, can provide stunning performance without draining the battery.

    It takes a special level of skill to tune an application in “assembly language,” but the final result exceeds expectations. The next time you see a tiny wearable device delivering unbelievable performance, you know who the hidden star of the show is.   

  • A Guide to End-to-End API Test Automation with Postman and GitHub Actions

    Objective

    • The blog intends to provide a step-by-step guide on how to automate API testing using Postman. It also demonstrates how we can create a pipeline for periodically running the test suite.
    • Further, it explains how the report can be stored in a central S3 bucket, finally sending the execution status back to a designated Slack channel, informing stakeholders and enabling them to obtain detailed information about the quality of the API.

    Introduction to Postman

    • To speed up the API testing process and improve the accuracy of our APIs, we are going to automate the API functional tests using Postman.
    • Postman is a great tool when trying to dissect RESTful APIs.
    • It offers a sleek user interface to create our functional tests to validate our API’s functionality.
    • Furthermore, the collection of tests will be integrated with GitHub Actions to set up a CI/CD platform that will be used to automate this API testing workflow.

    Getting started with Postman

    Setting up the environment

    • Click on the “New” button on the top left corner. 
    • Select “Environment” as the building block.
    • Give the desired name to the environment file.

    Create a collection

    • Click on the “New” button on the top left corner.
    • Select “Collection” as the building block.
    • Give the desired name to the Collection.

    ‍Adding requests to the collection

    • Configure the Requests under test in folders as per requirement.
    • Enter the API endpoint in the URL field.
    • Set the Auth credentials necessary to run the endpoint. 
    • Set the header values, if required.
    • Enter the request body, if applicable.
    • Send the request by clicking on the “Send” button.
    • Verify the response status and response body.

    Creating TESTS

    • Click on the “Tests” tab.
    • Write the test scripts in JavaScript using the Postman test API.
    • Run the tests by clicking on the “Send” button and validate the execution of the tests written.
    • Alternatively, the prebuilt snippets given by Postman can also be used to create the tests.
    • In case some test data needs to be created, the “Pre-request Script“ tab can be used.
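
    For illustration, a typical test script in the “Tests” tab might look like the following. It uses Postman's built-in pm scripting object, so it runs only inside Postman (or Newman), not as a standalone script; the response field names here are assumptions for a hypothetical endpoint:

    ```javascript
    // Runs in Postman's sandbox after the response arrives.
    pm.test("Status code is 200", function () {
      pm.response.to.have.status(200);
    });

    pm.test("Response body has the expected fields", function () {
      const body = pm.response.json();
      // "id" and "status" are illustrative field names
      pm.expect(body).to.have.property("id");
      pm.expect(body.status).to.eql("active");
    });
    ```

    Each pm.test block shows up as a separate pass/fail entry in the collection run report.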

    Running the Collection

    • Click on the ellipsis menu beside the collection created.
    • Select the environment created in Step 1.
    • Click on the “Run Collection” button.
    • Alternatively, the collection and the environment file can be exported and run via the Newman CLI.

    Collaboration

    The original collection and the environment file can be exported and shared with others by clicking on the “Export” button. These collections and environments can be version controlled using a system such as Git.

    • While working in a team, members raise pull requests for their changes against the original collection and environment via forking: first, create a fork.
    • Make the necessary changes to the collection and click on “Create a pull request.”
    • Validate the changes, then approve and merge them into the main collection.

    Integrating with CI/CD

    Creating a pipeline with GitHub Actions

    GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline.

    You can create workflows that build and test every pull request to your repository, or deploy merged pull requests to production. To create a pipeline, follow the steps below:

    • Create a .yml file inside the .github/workflows folder at the root of the repository.
    • The same file can also be created via the GitHub web interface.
    • Configure the necessary actions/steps for the pipeline.

    Workflow File

    • Add a trigger to run the workflow.
    • The schedule event triggers the workflow at a specific time interval using a CRON expression.
    • The push and pull_request events trigger the workflow for each push and pull request on the develop branch.
    • The workflow_dispatch event allows the workflow to be run manually from the GitHub Actions tab.
    • Create a job to run the Postman collection.
    • Check out the code from the current repository, and create a directory to store the results.
    • Install Node.js.
    • Install Newman and the necessary dependencies.
    • Run the collection.
    • Upload the Newman report into the results directory.
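
    Putting the steps above together, here is a sketch of such a workflow file. The collection, environment, and directory names are assumptions for illustration, not part of the original setup:

```yaml
name: Postman API tests

on:
  schedule:
    - cron: "0 6 * * *"      # run daily at 06:00 UTC
  push:
    branches: [develop]
  pull_request:
    branches: [develop]
  workflow_dispatch:          # allow manual runs from the Actions tab

jobs:
  run-postman-collection:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Create results directory
        run: mkdir -p results

      - uses: actions/setup-node@v3
        with:
          node-version: 18

      - name: Install Newman and dependencies
        run: npm install -g newman newman-reporter-htmlextra

      - name: Run the collection
        run: newman run collection.json -e environment.json -r cli,htmlextra --reporter-htmlextra-export results/report.html

      - name: Upload Newman report
        uses: actions/upload-artifact@v3
        with:
          name: newman-report
          path: results
```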

    Generating an Allure report and hosting it on S3

    • Along with the default report that Newman provides, Allure reporting can also be used to get a dashboard of the results.
    • To generate the Allure report, install the Allure dependencies given in the installation step above.
    • Once that is done, add the Allure report generation steps to your .yml file.
    • Create a bucket in S3, which you will use for storing the reports.
    • Create an IAM role for the bucket.
    • The aws-actions/configure-aws-credentials@v1 action is used to configure your AWS credentials.
    • Allure generates two separate folders, eventually combining them to create the dashboard.

    Use the code snippet in the deploy section to upload the contents of the folder to your S3 bucket.
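
    A sketch of that deploy step is below; the region and bucket name are hypothetical and should be replaced with your own:

```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1                     # hypothetical region

- name: Upload Allure report to S3
  run: aws s3 sync allure-report s3://my-allure-reports-bucket   # hypothetical bucket
```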

    • Once done, you should be able to see the Allure dashboard hosted on the Static Website URL for your bucket.

    Send Slack notification with the Status of the job

    • When a job is executed in a CI/CD pipeline, it’s important to keep the team members informed about its status.
    • The following GitHub Actions step sends a notification to a Slack channel with the status of the job.
    • It uses the “notify-slack-action” GitHub Action, which is defined in the “ravsamhq/notify-slack-action” repository.
    • The “if: always()” condition ensures this step always executes, regardless of whether the previous steps in the workflow succeeded or failed.
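
    A sketch of that step follows; the action’s exact inputs can vary by version, so treat the field names below as assumptions to verify against the action’s documentation:

```yaml
- name: Notify Slack
  uses: ravsamhq/notify-slack-action@v2
  if: always()          # run whether the job passed or failed
  with:
    status: ${{ job.status }}
    notification_title: "Postman collection run: {status_message}"
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
```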
  • Getting the Best Out of FLAC on ARMv7: Performance Optimization Tips

    Overview

    FLAC stands for Free Lossless Audio Codec, an audio format similar to MP3 but lossless. This means audio is compressed in FLAC without any loss in quality. It is generally used when we have to encode audio without compromising quality.

    FLAC is an open-source codec (software or hardware that compresses or decompresses digital audio) and is free to use.

    We chose to deploy the FLAC encoder on an ARMv7 embedded platform.

    ARMv7 is a version of the ARM processor architecture; it is used in a wide range of devices, including smartphones, tablets, and embedded systems.

    Let’s dive into how to optimize FLAC’s performance specifically for the ARMv7 architecture. This will give you valuable insight into why optimizing FLAC matters.

    So, tighten your seat belts, and let’s get started.

    Why Do We Need to Optimize FLAC?

    Optimizing FLAC’s performance will make it faster, so it will encode/decode (compress/decompress) audio faster. The points below explain why we need fast codecs.

    • Suppose you’re using one of your favorite music streaming apps, and suddenly, you encounter glitches or pauses in your listening experience.
    • How would you react to the above? A poor user experience will cause this app to lose users to the competition.
    • There can be many reasons for that glitch to happen, possibly a network problem, a server problem, or maybe the audio codec.
    • The app’s audio codec may not be fast enough for your device to deliver the music without any glitches. That’s the reason we need fast codecs. It is a critical component within our control.
    • FLAC is a widely used HiRes audio codec because of its lossless nature.

    Optimizing FLAC for ARMv7

    WHY Optimize for the ARM Platform?

    • Most music devices use ARM-based processors: mobiles, tablets, car systems, FM radios, wireless headphones, and speakers. 
    • They use ARM because of its small chip size, low energy consumption (good for battery-powered devices), and lower susceptibility to heating.

    Optimization Techniques

    FLAC source code is written in the C programming language. So, there are two ways to optimize.

    1. Since the FLAC source code is written in C, we can rearrange it or write it in a way that executes faster. Let’s call this the C Optimization Technique.
    2. We can convert some parts of the FLAC source code into machine-specific assembly language. Let’s call this technique ARM Assembly Optimization, as we are optimizing for ARMv7.

    In my experience, assembly optimization gives better results. 

    To discuss optimization techniques, first, we need to identify where codec performance typically lags.

    • Usually, a general codec uses complex algorithms that involve many complex mathematical operations. 
    • Loops are also one of the parts where codecs generally spend more time.
    • Also, we need to access the main memory (RAM) frequently for the above calculations, which is a penalty in performance.
    • Therefore, before optimizing FLAC, we have to keep the above things in mind. Our main goal should be to make mathematical calculations, loops, and memory access faster.

    C Optimization

    There are many ways in which we can approach C optimizations. Most methods are generalized and can be applied to any C source code.

    Loop Optimizations

    As discussed earlier, loops are one of the parts where a codec generally spends more time. We can optimize loops in C itself.

    There are two widely used methods to optimize the loop in C.

    Loop Unrolling – 
    • Loops have three parts: initialization, condition checking, and increment.
    • In the loop, every time we have to test for conditions to exit and increment the counter. 
    • This condition check disrupts the flow of execution and imposes a significant performance penalty when working on a large data set.
    • Loop unrolling reduces branching overhead by working on a larger data chunk before the condition check.

    Let’s try to understand by an example:

    /* Original loop with n iterations. Assuming n is a multiple of 4 */
    for (int i = 0; i < n; i++) {
        sum += a[i] * b[i];
    }
    
    
    /* Loop unrolling by 4 */
    for (int i = 0; i < n; i += 4) {
        sum += a[i]   * b[i];
        sum += a[i+1] * b[i+1];
        sum += a[i+2] * b[i+2];
        sum += a[i+3] * b[i+3];
    }

    As you can see, after unrolling by 4, we test the exit condition and increment the counter n/4 times instead of n times.

    Loop Fusion –

    When we use the same data structure in two loops, then instead of executing two loops, we can combine them. That removes the overhead of one loop, so the code executes faster. But we need to ensure the number of loop iterations is the same and that the operations are independent of each other.

    Let’s see an example.

    /* Loop 1 */
    for(i = 0; i < n; i++)
    {
      prod *= a[i]*5;
    }
    
    
    /* Loop 2 */
    for(i = 0; i < n; i++)
    {
      sum += a[i];
    }
    
    
    /* Merging two loops to remove the overhead of one loop */
    for(i = 0; i < n; i++)
    {
      prod *= a[i]*5;
      sum += a[i];
    }

    As you can see in the above code, we use the array a[ ] in both loops, so we can merge them; the condition check and increment then execute n times instead of 2n.

    Memory Optimizations for Arm Architecture

    Memory access can significantly impact performance in C since multiple processor cycles are consumed for memory accesses. ARM cannot operate on data stored in memory; it needs to be transferred to the register bank first. This highlights the need to streamline the flow of data to the ARM CPU for processing.

    We can also utilize cache memory, which is much faster than main memory, to help minimize this performance penalty.

    To make memory access faster, data can be rearranged to sequential accesses, which consume fewer cycles. By optimizing memory access, we can improve overall performance in FLAC.

    Fig-1 Cache memory lies between the main memory and the processor

    Below are some tips for using the data cache more efficiently.

    • Preload the frequently used data into the cache memory.
    • Group related data together, as sequential memory accesses are faster.
    • Similarly, try to access array values sequentially instead of randomly.
    • Use arrays instead of linked lists wherever possible for sequential memory access.

    Let’s understand the above by an example:

    for(i = 0; i < n; i++)
    {
      for(j = 0; j < m; j++)
      {
        /* Inefficient: a[j][i] is not accessed sequentially in memory */
        sum += a[j][i];
      }
    }
    
    
    /* After interchanging the loops */
    for(j = 0; j < m; j++)
    {
      for(i = 0; i < n; i++)
      {
        /* Efficient: a[j][i] is now accessed sequentially */
        sum += a[j][i];
      }
    }

    As we can see in the above example, loop interchange significantly reduces cache misses, with the optimized code experiencing a cache-miss rate of only 0.1923%. This accumulates over time into a performance improvement of 20% on ARMv7 for an array a[1000][900].

    Assembly Optimizations

    First, we need to understand why assembly optimizations are required.

    • In C optimization, we can access limited hardware features.
    • In ARM Assembly, we can leverage the processor features to the full extent, which will further help in the fast execution of code.
    • We have a Neon Co-processor, Floating Point Unit, and EDSP unit in ARMv7, which accelerate mathematical operations. We can explicitly access such hardware only via assembly language.
    • Compilers convert C code to assembly code, but may not always generate efficient code for certain functions. Writing those functions directly in assembly can lead to further optimization.

    The below points explain why the compiler doesn’t generate efficient assembly for some functions.

    • The first obvious reason is that compilers are designed to convert any C code to assembly without changing the meaning of the code. The compiler does not understand the algorithms or calculations being used.
    • The person who understands the algorithm can, of course, write better assembly than the compiler.
    • An experienced assembly programmer can modify the code to leverage specific hardware features to speed up performance.

    Now let me explain the most widely used hardware units in ARM, which accelerate mathematical operations.

    NEON – 
    • The NEON co-processor is an additional computational unit to which the ARM processor can offload mathematical calculations.
    • It is just like a subconscious mind (co-processor) in our brain (processor), which helps ease the workload.
    • NEON does parallel processing; it can perform up to 16 additions, subtractions, etc., in just a single instruction. 
    Fig-2 Instead of adding 4 variables one by one, NEON adds them in parallel simultaneously
    • FLOATING POINT UNIT – This hardware unit is used to perform operations on floating point numbers. Typical operations it supports are addition, subtraction, multiplications, divisions, square roots, etc.
    • EDSP (Enhanced Digital Signal Processing) – This hardware unit supports fast multiplications, multiply-accumulates, and vector operations.
       Fig-3 ARMv7 CPU, NEON, EDSP, FPU, and Cache under ARM Core

    Approaching Optimizations

    First, we need to identify which functions to optimize. We can find them by profiling FLAC. 

    Profiling is a technique for learning which section of code takes more time to execute and which functions are getting called frequently. Then we can optimize that section of code or function. 

    Below are some tips you can follow for an idea of which optimization technique to use.

    • For performance-critical functions, ARM Assembly should be considered the first option for optimization, as it typically provides better performance than C optimization because we can directly leverage hardware features.
    • When there is no scope for using the hardware units that primarily deal with mathematical operations, we can go for C optimizations.
    • To determine if assembly code can be improved, we can check the compiler’s assembly output.
    • If there is scope for improvement, we can write code directly in assembly for better utilization of hardware features, such as Neon and FPU.

    Results 

    After applying the above techniques to the FLAC Encoder, we saw an improvement of 22.1% in encoding time. As you can see in the table below, we used a combination of assembly and C optimizations.

    Fig-4 Graphical visualization of average encoding time vs Sampling frequency before and after optimization.

    Conclusion

    FLAC is a lossless audio codec used to preserve quality for HiRes audio applications. Optimizations that target the platform on which the codec is deployed help in providing a great user experience by drastically improving the speed at which audio can be compressed or decompressed. The same techniques can apply to other codecs by identifying and optimizing performance-critical functions.

    The optimization techniques we have used are bit-exact, i.e., after optimization you will get the same audio output as before.

    However, it is important to note that although we can trade bit-exactness for speed, it should be done judiciously, as it can negatively impact the perceived audio quality.

    Looking to the future, with ongoing research into new compression algorithms and hardware, it is likely that we will see new and innovative ways to optimize audio codecs for better performance and quality.

  • The Art of Release Management: Keys to a Seamless Rollout

    Overview

    A little taste of philosophy: Just like how life is unpredictable, so too are software releases. No matter the time and energy invested in planning a release, things go wrong unexpectedly, leaving us (the software team and business) puzzled. 

    Through this blog, I will walk you through:

    1. Cures: the actions (or reactions!) from the first touchpoint of a software release gone haywire, scrutinizing it per user role in the software team. 
    2. Preventions: Later, I will introduce you to a framework that I devised after being part of numerous software release hiccups, which eventually led me to strategize and correct the methodology for executing smoother releases. 

    Software release hiccups: cures

    Production issues are painful. They suck out the energy and impact the software teams and, eventually, the business on different levels. 

    No system has ever been built foolproof, and there will always be occasions when things go wrong. 

    “It’s not what happens to you but how you react to it that matters.”

    – Epictetus

    I have broken down the cures for a software release gone wrong into three phases: 

    1: Discovery phase

    Getting into the right mindset

    Just after the release, you start receiving alerts or user complaints about the issues they are facing with accessing the application. 

    This is the trickiest phase of them all. When a release goes wrong, it is a basic human emotion to find someone to blame or get defensive. But remember, the user is always right.

    And this is the time for acceptance that there indeed is a problem with the application.

    Keeping the focus on the problem that needs to be resolved helps achieve a quicker and more efficient resolution. 

    As a Business Analyst/Product/Project Manager, you can:

    Handle the communications:

    • Keep the stakeholders updated at all the stages of problem-solving
    • Emails, root cause analysis [RCA] initiation
    • Product level executive decisions [rollback, feature flags, etc.]

    As an engineer, you can:

    • Check the logs, because logs don’t lie
    • If the log data is insufficient, check at the code level 

    As a QA, you can:

    • Replicate the issue (obviously!)
    • See what test cases missed the scenario and why
    • Was it an edge case?
    • Was it an environment-specific issue?

    Even though I have stated separate actions per role above, most of these are interchangeable. More eyes and ears help ensure a swift recovery from a bad release. 

    2: Mitigation phase

    Finding the most efficient solutions to the problem at hand

    Once you have discovered the whys and whats of the problem, it is time to move on to the how phase. This is a crucial phase, as the clock is ticking and the business is hurting. Everyone expects a resolution, and sooner rather than later. 

    As a Business Analyst/Product/Project Manager, you can:

    • Have team session/s to come up with the best possible solutions. 
    • Multiple solutions help to gauge the trade-offs and to make a wiser decision.
    • PMs can help with making logical business decisions and analyzing the impacts from the business POV.
    • Communicate the solutions and trade-offs, if needed, with stakeholders to have more visibility on the mindsets.

    As an engineer, you can:

    • Check technical feasibility vs. complexity in terms of time vs. code repercussions to help with the decision-making with the solution.
    • Raise red flags upfront, keeping in mind which parts of the current problem must be addressed to avoid recurrence. 
    • Avoid quick fixes as much as possible, even when there is pressure for getting the solutions in place.

    As a QA, you can:

    • Focus on what might break with the proposed solution. 
    • Make sure to run the test cases or modify the existing ones to accommodate the new changes.
    • Replicate the final environment and scenarios in the sandbox as much as possible.

    3: Follow-ups and tollgates

    Stop, check and go 

    Tollgates help us identify slippages and seal them tight for the future. Every phase of the software release brings new learnings, and it is mostly about adapting and course correction, taking the best course of action as a team, for the team. 

    Following are some of the tollgates within the release process: 

    Unit Tests

    • Are all the external dependencies accounted for within the test scenarios?
    • Maybe the root cause case wasn’t considered at all, so it was not initially tested?
    • Was velocity too high, causing unit tests to be partially ignored?
    • Avoid the world of quick fixes and workarounds as much as possible.

    User Acceptance Testing [UAT]

    • Is the sandbox environment different than the actual live environment?
    • Have similar configurations for servers so that we are not welcomed by surprises after a release.
    • User error
    • Some issues may have slipped through due to human error.
    • Data quality issue
    • The type of data in the sandbox vs. the live environment is different, which prevents the issues from being caught in the sandbox.

    Software release hiccups: Preventions

    Prevention is better than cure; yes, for sure, that sounds cool! 

    Now that we have seen how to tackle the releases gone wild, let me take you through the prevention part of the process. 

    Though we understand the importance of having processes and tools to set us up for a smoother release, their value is only highlighted when a release goes grim. That’s when the checklists get their spotlight, along with how well the team adheres to its set processes. 

    Well, the following is not a checklist, per se, but a framework for us to identify the problems early in the software release and minimize them to some degree. 

    The D.I.A.P.E.R Framework

    So that you don’t have to do a clean-up later!

    This essentially is a set of six activities that should be in place as you are designing your software.

    Design

    This is not the UI/UX of the app and relates to how the application logs should be maintained. 

    Structured logs

    • Keep logs in a readable and consistent format that can be monitored for errors.

    Centralized logging

    • Logs in one place and accessible to all the devs, which can be queried easily for advanced metrics.
    • This removes the dependency on specific people within the team. The logs are not needed by everyone, but the point is multiple people having access to them helps within the team.

    Invest

    • Invest time in setting up processes
    • Software development
    • Release process/checklist
    • QA/UAT sign-offs
    • Invest money in getting the right tools which would cater to the needs
    • Monitoring
    • Alerting
    • Task management

    Alerts

    Setting up an alert mechanism automatically raises flags for the team. Also, not everyone needs to be on these alerts, so make a logical decision about who would benefit from the alert system.

    • Setup alerts
    • Email
    • Incident management software
    • Identify stakeholders who need to receive these alerts

    Prepare

    • Define strategies for who takes action when things go wrong. This helps avoid chaotic situations, and the rest of the team can work on the solution instead.
    • Ex.: Identify color codes for different severities (just like we have in hospitals).
    • Plan of Action for each severity
    • Not all situations are as severe as we think. Hence, it is important to set what action is needed for each of the severities.
    • Ops and dev teams should be tightly intertwined.

    Evaluate

    Whenever we see a problem, we usually tend to jump to solutions. In my experience, it has always helped me to take some time and identify the answers to the following: 

    • What is the issue?
    • The focus: problem
    • How severe?
    • Severity level, as defined in the previous step
    • Who needs to be involved?
    • Not everyone within the team needs to be involved immediately to fix the problem; identifying who needs to be involved saves time for the rest of us. 

    Resolve

    There is a problem at hand, and the business and stakeholders expect a solution. As previously mentioned, keeping a cool head in this phase is of utmost importance.

    • Propose the best possible solution based on
    • Technical feasibility
    • Time
    • Cost
    • Business impact

    Always have multiple solutions to gauge the trade-offs; some take less time but involve rework in the future. Make a logical decision based on the application and the nature of the problem. 

    Takeaways

    • In the discovery phase of the problem, keep the focus on the problem
    • Keep a crisp communication with the stakeholders, making them aware of the severity of the problem and assuring them about a steady solution.
    • In the mitigation phases, identify who needs to be involved in the problem resolution
    • Come up with multiple solutions to pick the most logical and efficient solution out of the lot.
    • Have tollgates in places to catch slippages at multiple levels. 
    • D.I.A.P.E.R framework
    • Design structured and centralized logs.
    • Invest time in setting up the process and invest money in getting the right tools for the team.
    • Alerts: Have a notification system in place, which shall raise flags when things go beyond a certain benchmark.
    • Prepare strategies for different severity levels and assign color codes for the course of action for each level of threat.
    • Evaluate the problem and the action via the who, what, and how.
    • Resolution of the problem, which is cost and time efficient and aligns with the business goals/needs. 

    Remember that we are building the software for the people with the help of people within the team. Things go wrong even in the most elite systems with sophisticated setups. 

    Do not be harsh on yourself or others within the team. Adapt, learn, and keep shipping! 

  • Why Signals Could Be the Future of Modern Web Frameworks

    Introduction

    When React was introduced, it had an edge over the other libraries and frameworks of that era because of a very interesting concept called one-way data binding, or in simpler words, a unidirectional flow of data, introduced as part of the Virtual DOM.

    It made for a fantastic developer experience where one didn’t have to think about how the updates flow in the UI when data (”state” to be more technical) changes.

    However, as more and more hooks got introduced, syntactical rules emerged to make sure they perform in the most optimal way. This was essentially a deviation from the original purpose of React, which is a unidirectional flow with explicit mutations.

    To call out a few:

    • Filling out the dependency arrays correctly
    • Memoizing the right values or callbacks for rendering optimization
    • Consciously avoiding prop drilling

    And possibly a few more that, if done the wrong way, could cause serious performance issues, i.e., everything just re-renders. A slight deviation from the original purpose of just writing components to build UIs.

    The use of signals is a good example of how adopting Reactive programming primitives can help remove all this complexity and help improve developer experience by shifting focus on the right things without having to explicitly follow a set of syntactical rules for gaining performance.

    What Is a Signal?

    A signal is one of the key primitives of Reactive programming. Syntactically, signals are very similar to states in React. However, the reactive capabilities of a signal are what give it the edge.

    const [state, setState] = useState(0);
    // state -> value
    // setState -> setter
    const [signal, setSignal] = createSignal(0);
    // signal -> getter 
    // setSignal -> setter

    At this point, they look pretty much the same, except that useState returns a value and createSignal returns a getter function.

    How is a signal better than a state?

    Once useState returns a value, the library generally doesn’t concern itself with how the value is used. It’s the developer who has to decide where to use that value, and who has to explicitly make sure that any effects, memos, or callbacks that want to subscribe to changes to that value have it mentioned in their dependency lists, in addition to memoizing the value to avoid unnecessary re-renders. A lot of additional effort.

    function ParentComponent() {
      const [state, setState] = useState(0);
      const stateVal = useMemo(() => {
        return doSomeExpensiveStateCalculation(state);
      }, [state]); // Explicitly memoize and make sure dependencies are accurate
      
      useEffect(() => {
        sendDataToServer(state);
      }, [state]); // Explicitly call out subscription to state
      
      return (
        <div>
          <ChildComponent stateVal={stateVal} />
        </div>
      );
    }

    createSignal, however, returns a getter function, since signals are reactive in nature. To break it down further, signals keep track of who is interested in the state’s changes, and when changes occur, they notify these subscribers.

    To gain this subscriber information, signals keep track of the context in which the state getters, which are essentially functions, are called. Invoking the getter creates a subscription.

    This is super helpful as the library is now, by itself, taking care of the subscribers who are subscribing to the state’s changes and notifying them without the developer having to explicitly call it out.

    createEffect(() => {
      updateDataElsewhere(state());
    }); // effect only runs when `state` changes - an automatic subscription

    The contexts (not to be confused with the React Context API) that invoke the getter are the only ones the library will notify. This means memoizing, explicitly filling out large dependency arrays, and fixing unnecessary re-renders can all be avoided, which in turn avoids many of the additional hooks meant for this purpose, such as useRef, useCallback, and useMemo, as well as a lot of re-renders.

    This greatly enhances the developer experience and shifts focus back on building components for the UI rather than spending that extra 10% of developer efforts in abiding by strict syntactical rules for performance optimization.

    function ParentComponent() {
      const [state, setState] = createSignal(0);
      const stateVal = doSomeExpensiveStateCalculation(state()); // no need to memoize explicitly
    
      createEffect(() => {
        sendDataToServer(state());
      }); // will only be fired if state changes - the effect is automatically added as a subscriber
    
      return (
        <div>
          <ChildComponent stateVal={stateVal} />
        </div>
      );
    }

    Conclusion

    It might look like there’s a very biased stance toward using signals and reactive programming in general. However, that’s not the case.

    React is a high-performance, optimized library—even though there are some gaps or misses in using your state in an optimum way, which leads to unnecessary re-renders, it’s still really fast. After years of using React a certain way, frontend developers are used to visualizing a certain flow of data and re-rendering, and replacing that entirely with a reactive programming mindset is not natural. React is still the de facto choice for building user interfaces, and it will continue to be with every iteration and new feature added.

    Reactive programming, in addition to performance enhancements, also makes the developer experience much simpler by boiling down to three major primitives: Signal, Memo, and Effects. This helps focus more on building components for UIs rather than worrying about dealing explicitly with performance optimization.

    Signals are becoming increasingly popular and are a part of many modern web frameworks, such as Solid.js, Preact, Qwik, and Vue.js.

  • Apache Flink – A Solution for Real-Time Analytics

    In today’s world, data is being generated at an unprecedented rate. Every click, every tap, every swipe, every tweet, every post, every like, every share, every search, and every view generates a trail of data. Businesses are struggling to keep up with the speed and volume of this data, and traditional batch-processing systems cannot handle the scale and complexity of this data in real-time.

    This is where streaming analytics comes into play, providing faster insights and more timely decision-making. Streaming analytics is particularly useful for scenarios that require quick reactions to events, such as financial fraud detection or IoT data processing. It can handle large volumes of data and provide continuous monitoring and alerts in real-time, allowing for immediate action to be taken when necessary.

    Stream processing or real-time analytics is a method of analyzing and processing data as it is generated, rather than in batches. It allows for faster insights and more timely decision-making. Popular open-source stream processing engines include Apache Flink, Apache Spark Streaming, and Apache Kafka Streams. In this blog, we are going to talk about Apache Flink and its fundamentals and how it can be useful for streaming analytics. 

    Introduction

    Apache Flink is an open-source stream processing framework first introduced in 2014. Flink has been designed to process large amounts of data streams in real-time, and it supports both batch and stream processing. It is built on top of the Java Virtual Machine (JVM) and is written in Java and Scala.

    Flink is a distributed system that can run on a cluster of machines, and it has been designed to be highly available, fault-tolerant, and scalable. It supports a wide range of data sources and provides a unified API for batch and stream processing, which makes it easy to build complex data processing applications.

    Advantages of Apache Flink

    Real-time analytics is the process of analyzing data as it is generated. It requires a system that can handle large volumes of data in real-time and provide insights into the data as soon as possible. Apache Flink has been designed to meet these requirements and has several advantages over other real-time data processing systems.

    1. Low Latency: Flink processes data streams in real-time, which means it can provide insights into the data almost immediately. This makes it an ideal solution for applications that require low latency, such as fraud detection and real-time recommendations.
    2. High Throughput: Flink has been designed to handle large volumes of data and can scale horizontally to handle increasing volumes of data. This makes it an ideal solution for applications that require high throughput, such as log processing and IoT applications.
    3. Flexible Windowing: Flink provides a flexible windowing API that enables the creation of complex windows for processing data streams. This enables the creation of windows based on time, count, or custom triggers, which makes it easy to create complex data processing applications.
    4. Fault Tolerance: Flink is designed to be highly available and fault-tolerant. It can recover from failures quickly and can continue processing data even if some of the nodes in the cluster fail.
    5. Compatibility: Flink is compatible with a wide range of data sources, including Kafka, Hadoop, and Elasticsearch. This makes it easy to integrate with existing data processing systems.

    Flink Architecture

    Apache Flink processes data streams in a distributed manner. The Flink cluster consists of several nodes, each of which is responsible for processing a portion of the data. The nodes coordinate with each other over the network, while data typically enters the cluster through a messaging system such as Apache Kafka.

    The Flink cluster processes data streams in parallel by dividing the data into small chunks, or partitions, and processing them independently. Each partition is processed by a task, which is a unit of work that runs on a node in the cluster.

    Flink provides several APIs for building data processing applications, including the DataStream API, the DataSet API, and the Table API. The diagram below illustrates what a Flink cluster looks like.

    Apache Flink Cluster
    • Flink application runs on a cluster.
    • A Flink cluster has a job manager and a bunch of task managers.
    • A job manager is responsible for effective allocation and management of computing resources. 
    • Task managers are responsible for the execution of a job.

    Flink Job Execution

    1. Client system submits job graph to the job manager
    • A client system prepares and sends a dataflow/job graph to the job manager.
    • It can be your Java/Scala/Python Flink application or the CLI.
    • The client is not part of the runtime or program execution; it only prepares and submits the dataflow.
    • After submitting the job, the client can either disconnect and operate in detached mode or remain connected to receive progress reports in attached mode.

    Given below is an illustration of what the job graph converted from the code looks like:

    Job Graph
    2. The job graph is converted to an execution graph by the job manager
    • The execution graph is a parallel version of the job graph. 
    • For each job vertex, it contains an execution vertex per parallel subtask. 
    • An operator with a parallelism of 100 will consist of a single job vertex and 100 execution vertices.

    Given below is an illustration of what an execution graph looks like:

    Execution Graph
    3. The job manager submits the parallel instances of the execution graph to the task managers
    • Execution resources in Flink are defined through task slots.
    • Each task manager has one or more task slots, each of which can run one pipeline of parallel tasks.
    • A pipeline consists of multiple successive tasks.
    Parallel instances of execution graph being submitted to task slots

    Flink Program

    Flink programs look like regular programs that transform DataStreams. Each program consists of the same basic parts:

    • Obtain an execution environment 

    The ExecutionEnvironment is the context in which a program is executed. This is how an execution environment is set up in Flink code:

    ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(); // if program is running on local machine
    ExecutionEnvironment env = new CollectionEnvironment(); // if source is collections
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment(); // will do the right thing based on context

    • Connect to data stream

    We can use an instance of the execution environment to connect to the data source, which can be a file system, a streaming application, or a collection. This is how we can connect to a data source in Flink:

    DataSet<String> data = env.readTextFile("file:///path/to/file"); // to read from a file
    DataSet<User> users = env.fromCollection( /* get elements from a Java Collection */ ); // to read from a collection
    DataStream<User> users = streamEnv.addSource( /* streaming application or database */ ); // addSource is available on a StreamExecutionEnvironment

    • Perform Transformations

    We can perform transformations on the events/data that we receive from the data sources.
    A few of the data transformation operations are map, filter, keyBy, flatMap, etc.

    • Specify where to send the data

    Once we have performed the transformations/analytics on the data flowing through the stream, we can specify where to send it.
    The destination can be a filesystem, a database, or another data stream.

     dataStream.sinkTo(/*streaming application or database api */);

    Flink Transformations

    1. Map: Takes one element at a time from the stream, performs some transformation on it, and emits one element of any type as output.

      Given below is an example of Flink’s map operator:

    stream.map(new MapFunction<Integer, String>() {
        @Override
        public String map(Integer integer) {
            // numberToWords converts the number's digits to words
            return "input -> " + integer + " : output -> "
                    + numberToWords(integer.toString().toCharArray());
        }
    }).print();
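    The `numberToWords` helper used above is not shown in the snippet; a minimal, self-contained sketch of such a helper (hypothetical, not part of Flink) could simply spell out each digit:

```java
// Hypothetical helper for the map example: spells out each digit of a number,
// e.g. "123" -> "one two three". Not part of Flink; purely illustrative.
public class NumberToWords {
    private static final String[] WORDS = {
        "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"
    };

    public static String numberToWords(char[] digits) {
        StringBuilder sb = new StringBuilder();
        for (char d : digits) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(WORDS[d - '0']);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(numberToWords("123".toCharArray())); // prints "one two three"
    }
}
```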

    2. Filter: Evaluates a boolean function for each element and retains those for which the function returns true.

    Given below is an example of Flink’s filter operator:

    stream.filter(new FilterFunction<Integer>() {
        @Override
        public boolean filter(Integer integer) throws Exception {
            return integer % 2 != 0; // keep only odd numbers
        }
    }).print();

    3. Reduce: A “rolling” reduce on a keyed data stream. Combines the current element with the last reduced value and emits the new value.

    Given below is an example of Flink’s reduce operator:

    DataStream<Integer> stream = env.fromCollection(data);
    stream.countWindowAll(3)
          .reduce(new ReduceFunction<Integer>() {
              @Override
              public Integer reduce(Integer integer, Integer t1) throws Exception {
                  return integer + t1; // rolling sum within each window of three elements
              }
          }).print();

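    For intuition about what the example computes: countWindowAll(3) groups the stream into consecutive batches of three elements, and the reduce folds each batch into a sum. A plain-Java sketch of that behavior (illustrative only, no Flink dependency):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: simulates countWindowAll(3).reduce(sum) over a finite stream.
public class CountWindowSketch {
    public static List<Integer> windowedSums(List<Integer> stream, int windowSize) {
        List<Integer> out = new ArrayList<>();
        int acc = 0, count = 0;
        for (int value : stream) {
            acc += value;              // rolling reduce within the current window
            count++;
            if (count == windowSize) { // window is full: emit and reset
                out.add(acc);
                acc = 0;
                count = 0;
            }
        }
        return out;                    // incomplete trailing windows are not emitted
    }

    public static void main(String[] args) {
        System.out.println(windowedSums(List.of(1, 2, 3, 4, 5, 6), 3)); // prints [6, 15]
    }
}
```

    For an input of 1, 2, 3, 4, 5, 6 this emits 6 and then 15, one result per completed window, which mirrors what the Flink pipeline above prints.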

    4. KeyBy:
    • Logically partitions a stream into disjoint partitions. 
    • All records with the same key are assigned to the same partition. 
    • Internally, keyBy() is implemented with hash partitioning.

    The figure below illustrates how the keyBy operator works in Flink.
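    As a simplified model of that hash partitioning (Flink actually hashes keys into key groups using murmur hashing, so this only illustrates the invariant, not the real algorithm): the partition index is derived from the key's hash modulo the parallelism, so equal keys always land on the same subtask.

```java
// Simplified sketch of keyBy-style hash partitioning. Real Flink hashes keys
// into key groups; this only demonstrates the invariant: same key -> same partition.
public class HashPartitionSketch {
    public static int partitionFor(Object key, int parallelism) {
        // floorMod keeps the result in [0, parallelism) even for negative hash codes
        return Math.floorMod(key.hashCode(), parallelism);
    }

    public static void main(String[] args) {
        int p1 = partitionFor("user-42", 4);
        int p2 = partitionFor("user-42", 4);
        System.out.println(p1 == p2); // equal keys always map to the same partition
    }
}
```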

    Fault Tolerance

    • Flink combines stream replay and checkpointing to achieve fault tolerance. 
    • At a checkpoint, each operator’s corresponding state and the specific point in each input stream are marked.
    • Whenever a checkpoint is taken, a snapshot of the state of all the operators is saved in the state backend, which is generally the job manager’s memory or configurable durable storage.
    • Whenever there is a failure, operators are reset to the most recent state in the state backend, and event processing is resumed.
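    The checkpoint-and-restore cycle described above can be sketched as a toy model in plain Java (this is not Flink's implementation): operator state is periodically copied to a state backend, and on failure the operator is reset to the last snapshot while the source replays the stream from that point.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of checkpoint/restore: a counting operator snapshots its state,
// "fails", and resumes from the last completed checkpoint.
public class CheckpointSketch {
    long count = 0;                                          // operator state
    final Map<Integer, Long> stateBackend = new HashMap<>(); // checkpointId -> snapshot

    void process(long events) { count += events; }

    void checkpoint(int id) { stateBackend.put(id, count); } // save snapshot

    void restore(int id) { count = stateBackend.get(id); }   // reset to snapshot

    public static void main(String[] args) {
        CheckpointSketch op = new CheckpointSketch();
        op.process(10);
        op.checkpoint(1);             // snapshot taken with count == 10
        op.process(5);                // progress that is lost on failure
        op.restore(1);                // failure: reset to checkpoint 1
        op.process(5);                // source replays the lost events
        System.out.println(op.count); // prints 15
    }
}
```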

    Checkpointing

    • Checkpointing is implemented using stream barriers.
    • Barriers are injected into the data stream at the source, e.g., Kafka or Kinesis.
    • Barriers flow with the records as part of the data stream.

    Refer to the diagram below to understand how checkpoint barriers flow with the events:

    Checkpoint Barriers
    Saving Snapshots
    • Operators snapshot their state at the point in time when they have received all snapshot barriers from their input streams, and before emitting the barriers to their output streams.
    • Once a sink operator (the end of a streaming DAG) has received the barrier n from all of its input streams, it acknowledges that snapshot n to the checkpoint coordinator. 
    • After all sinks have acknowledged a snapshot, it is considered completed.
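    Barrier alignment in miniature (again a toy model, not Flink code): an operator with multiple input channels takes its snapshot only once barrier n has arrived on every input.

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of barrier alignment: snapshot only after the barrier has been
// received on all input channels, then reset for the next barrier.
public class BarrierAlignmentSketch {
    final int numInputs;
    final Set<Integer> barriersSeen = new HashSet<>();
    boolean snapshotTaken = false;

    BarrierAlignmentSketch(int numInputs) { this.numInputs = numInputs; }

    void onBarrier(int inputChannel) {
        barriersSeen.add(inputChannel);
        if (barriersSeen.size() == numInputs) {
            snapshotTaken = true;  // all inputs delivered the barrier: snapshot state
            barriersSeen.clear();  // ready for the next barrier
        }
    }

    public static void main(String[] args) {
        BarrierAlignmentSketch op = new BarrierAlignmentSketch(2);
        op.onBarrier(0);
        System.out.println(op.snapshotTaken); // prints false: still waiting on channel 1
        op.onBarrier(1);
        System.out.println(op.snapshotTaken); // prints true: barrier seen on both inputs
    }
}
```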

    The below diagram illustrates how checkpointing is achieved in Flink with the help of barrier events, state backends, and checkpoint table.

    Checkpointing

    Recovery

    • Flink selects the latest completed checkpoint upon failure. 
    • The system then re-deploys the entire distributed dataflow.
    • Each operator is given the state that was snapshotted as part of the checkpoint.
    • The sources are set to start reading the stream from the position given in the snapshot.
    • For example, in Apache Kafka, that means telling the consumer to start fetching messages from an offset given in the snapshot.

    Scalability  

    A Flink job can be scaled up or down as required.

    This can be done manually by:

    1. Triggering a savepoint (manually triggered checkpoint)
    2. Adding/Removing nodes to/from the cluster
    3. Restarting the job from savepoint
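    With the Flink CLI, the manual steps above might look roughly like this (a sketch: the job id, savepoint path, parallelism, and jar name are placeholders):

```shell
# 1. Trigger a savepoint for the running job, then stop it
./bin/flink savepoint <jobId> hdfs:///flink/savepoints
./bin/flink cancel <jobId>

# 2. Add or remove task manager nodes from the cluster

# 3. Restart the job from the savepoint with the new parallelism (-p)
./bin/flink run -s hdfs:///flink/savepoints/savepoint-xxxx -p 8 my-job.jar
```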

    OR 

    Automatically by Reactive Scaling

    • The configuration of a job in Reactive Mode ensures that it utilizes all available resources in the cluster at all times.
    • Adding a Task Manager will scale up your job, and removing resources will scale it down. 
    • Reactive Mode restarts a job on a rescaling event, restoring it from the latest completed checkpoint.
    • The only downside is that it works only in standalone mode.
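    Per the Flink documentation, Reactive Mode is enabled with a single configuration entry in flink-conf.yaml (and the job must be deployed as a standalone application cluster):

```yaml
# flink-conf.yaml — enable the reactive scheduler (standalone deployments only)
scheduler-mode: reactive
```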

    Alternatives  

    • Spark Streaming: It is an open-source distributed computing engine that has added streaming capabilities, but Flink is optimized for low-latency processing of real-time data streams and supports more complex processing scenarios.
    • Apache Storm: It is another open-source stream processing system that has a steeper learning curve than Flink and uses a different architecture based on spouts and bolts.
    • Apache Kafka Streams: It is a lightweight stream processing library built on top of Kafka, but it is not as feature-rich as Flink or Spark, and is better suited for simpler stream processing tasks.

    Conclusion  

    In conclusion, Apache Flink is a powerful solution for real-time analytics. With its ability to process data in real-time and support for streaming data sources, it enables businesses to make data-driven decisions with minimal delay. The Flink ecosystem also provides a variety of tools and libraries that make it easy for developers to build scalable and fault-tolerant data processing pipelines.

    One of the key advantages of Apache Flink is its support for event-time processing, which allows it to handle delayed or out-of-order data in a way that accurately reflects the sequence of events. This makes it particularly useful for use cases such as fraud detection, where timely and accurate data processing is critical.

    Additionally, Flink’s support for multiple programming languages, including Java, Scala, and Python, makes it accessible to a broad range of developers. And with its seamless integration with popular big data platforms like Hadoop and Apache Kafka, it is easy to incorporate Flink into existing data infrastructure.

    In summary, Apache Flink is a powerful and flexible solution for real-time analytics, capable of handling a wide range of use cases and delivering timely insights that drive business value.

    References