
  • Setting up Mutual TLS Authentication and Authorization on Amazon MSK

    Overview

    We will cover how to set up mutual TLS authentication and authorization on Amazon MSK.

    Amazon MSK is a fully managed service that makes it easy to build and run applications that use Apache Kafka to process streaming data. You can enable TLS client authentication for connections from your applications to your Amazon MSK brokers and ZooKeeper nodes, and control client authorization with Apache Kafka ACLs.

    Prerequisites

    • Terraform: For creating a private CA and MSK Cluster
    • AWS CLI: For creating TLS certificates (the user must have permissions to create a private CA, issue certificates, and create an MSK cluster)

    Set up TLS authentication and authorization

    To use client authentication with TLS on MSK, you need to create the following resources:

    • AWS Private CA
    • MSK cluster with TLS encryption enabled
    • Client certificates

    Create AWS Private CA

    AWS Private CA can be either in the same AWS account as your cluster or in a different account. For information about private CAs, see Creating and Managing an AWS Private CA. In this setup, we will use Terraform to create a private CA.

    Steps to create Private CA

    1. Run the Terraform code below to create the Private CA.
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 4.0"
        }
      }
    }
    resource "aws_acmpca_certificate_authority" "root_ca" {
      certificate_authority_configuration {
        key_algorithm     = "RSA_4096"
        signing_algorithm = "SHA512WITHRSA"
        subject {
          # Update the attributes as per your need
          common_name         = "exp-msk-ca"
          country             = "US"
          locality            = "Seattle"
          organization        = "Example Corp"
          organizational_unit = "Sales"
          state               = "WA"
        }
      }
      type = "ROOT"
    }

    2. Once the private CA is created, install the CA certificate from the AWS console.

    Steps to install the certificate.

    • If you are not already on the CA’s details page, open the AWS Private CA console at https://console.aws.amazon.com/acm-pca/home. On the private certificate authorities page, choose a root CA that you have created whose certificate status is Pending or Active.
    • Choose Actions, then Install CA certificate to open the Install root CA certificate page.
    • Under Specify the root CA certificate parameters, specify the following certificate parameters:
    • Validity — Specifies the expiration date and time for the CA certificate. The AWS Private CA default validity period for a root CA certificate is ten years.
    • Signature algorithm — Specifies the signing algorithm to use when the root CA issues new certificates (for example, SHA256 RSA). Available options vary according to the AWS Region where you are creating the CA. For more information, see Compatible signing algorithms, Supported cryptographic algorithms, and SigningAlgorithm in CertificateAuthorityConfiguration.
    • Review your settings to make sure they’re correct, then choose Confirm and install.        
    • The details page for the CA displays the status of the installation (success or failure) at the top. If the installation was successful, the newly completed root CA displays a status of Active in the General pane.

    Create an MSK cluster that supports TLS client authentication

    Note: We highly recommend using an independent AWS Private CA for each MSK cluster when you use mutual TLS to control access. Doing so ensures that TLS certificates signed by a private CA authenticate with only a single MSK cluster.

    Run the Terraform code below to create the MSK cluster.

    Note: Update attributes as per the requirement and configurations.

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 4.0"
        }
      }
    }
    module "kafka" {
      source = "cloudposse/msk-apache-kafka-cluster/aws"
      # Cloud Posse recommends pinning every module to a specific version
      version                       = "2.3.0"
      name                          = "test-msk-cluster" #Change MSK cluster name as per your need
      vpc_id                        = "<VPC_ID>" 
      subnet_ids                    = ["SUBNET1a","SUBNET2b"] # Minimum 2 subnets required.
      kafka_version                 = "3.4.0" # Apache Kafka version supported by MSK
      broker_per_zone               = 1 #Number of broker per availability zone
      broker_instance_type          = "kafka.t3.small" #MSK instance types
      broker_volume_size            = 10 #Broker disk size
      certificate_authority_arns    = ["<CA_ARN>"]  #arn of the CA that you have created in the earlier step
      client_tls_auth_enabled       = true
      encryption_in_cluster         = true 
      client_broker                 = "TLS" # Enables TLS encryption
      enhanced_monitoring           = "PER_TOPIC_PER_BROKER"
      cloudwatch_logs_enabled       = false # Enable if you need cloudwatch logs
      jmx_exporter_enabled          = false # Enable if you need jmx metrics
      node_exporter_enabled         = false # Enable if you need node metrics
      associated_security_group_ids = [aws_security_group.kafka_sg.id]
      allowed_security_group_ids    = [aws_security_group.kafka_sg.id]
      create_security_group         = false
    }
    #-----------------------End--------------------#
    resource "aws_security_group" "kafka_sg" {
      name        = "test-msk-cluster-sg" #Change the name as per your need 
      description = "Security Group for kafka cluster"
      vpc_id      = "<VPC_ID>"
      egress {
        from_port        = 0
        to_port          = 0
        protocol         = "-1"
        cidr_blocks      = ["0.0.0.0/0"]
        ipv6_cidr_blocks = ["::/0"]
      }
      ingress {
        from_port        = 2181
        to_port          = 2181
        protocol         = "tcp"
        cidr_blocks      = ["0.0.0.0/0"]
        ipv6_cidr_blocks = ["::/0"]
      }
      ingress {
        from_port        = 9094
        to_port          = 9094
        protocol         = "tcp"
        cidr_blocks      = ["0.0.0.0/0"]
        ipv6_cidr_blocks = ["::/0"]
      }
      # Enable if you need to add tags to MSK cluster
      #tags = var.tags
      # Enable if you need cloudwatch logs
      # depends_on = [
      #   aws_cloudwatch_log_group.cw_log_group
      #]
    }
    # Required for cloudwatch logs
    # resource "aws_cloudwatch_log_group" "cw_log_group" {
    #   name = "blog-msk-cluster"
    #   #tags = var.tags
    # }
    
    output "bootstrap_url" {
      value       = module.kafka.bootstrap_brokers_tls
      description = "Comma separated list of one or more DNS names (or IP addresses) and TLS port pairs for access to the Kafka cluster using TLS"
    }

    It will take 15-20 minutes to create the MSK cluster. 

    Note: Save the bootstrap URL from the Terraform output (terraform output bootstrap_url), since it will be used to communicate with the MSK cluster through the Kafka CLI or SDKs.

    Create TLS certificates using previously created AWS Private CA

    We will create two certificates: one for admin access and one for client access. Each certificate requires a common name (CN); the CN is used as the principal when granting permissions through Kafka ACLs.

    Create admin TLS certificate

    Steps to create TLS certificate

    1. Generate CSR and key.
    openssl req -newkey rsa:2048 -keyout key.pem -out cert.csr -batch -nodes -subj '/CN=admin'

    2. Issue a certificate using the previously created private CA (replace <CA_ARN> with the ARN of the AWS Private CA that you created).
    certArn=$(aws acm-pca issue-certificate --region <region> \
      --certificate-authority-arn "<CA_ARN>" --csr fileb://cert.csr \
      --signing-algorithm 'SHA256WITHRSA' --validity Value=180,Type='DAYS' \
      --query 'CertificateArn' --output text)

    3. Retrieve the certificate issued in the previous step.
    aws acm-pca get-certificate --region <region> \
      --certificate-authority-arn "<CA_ARN>" --certificate-arn "${certArn}" \
      --output text | sed 's/\t/\n/g' > cert.pem
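    The sed step reflows the AWS CLI's tab-separated text output into newline-separated PEM lines. As a minimal local illustration of just that transformation (placeholder sample data, no AWS call, GNU sed assumed):

```shell
# sed 's/\t/\n/g' replaces every tab in its input with a newline, which is
# what turns the flattened --output text response into a multi-line PEM file.
printf 'HEADER\tBODY\tFOOTER\n' | sed 's/\t/\n/g'
```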

    4. Export the certificate in PKCS12 format.
    openssl pkcs12 -export -in cert.pem -inkey key.pem -name ssl-configurator \
      -password pass: -out admin.p12
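    Before copying the keystore to a client machine, it can help to sanity-check that an empty-password PKCS12 file reads back correctly. The sketch below is a local stand-in only: it uses a throwaway self-signed certificate and hypothetical demo-* file names instead of the PCA-issued cert.pem/key.pem.

```shell
# Throwaway key and self-signed certificate (stand-ins for key.pem/cert.pem).
openssl req -x509 -newkey rsa:2048 -keyout demo-key.pem -out demo-cert.pem \
  -days 1 -nodes -subj '/CN=demo'
# Export to PKCS12 with an empty password, mirroring the admin.p12 step above.
openssl pkcs12 -export -in demo-cert.pem -inkey demo-key.pem \
  -name ssl-configurator -password pass: -out demo.p12
# A readable keystore prints its certificate back out.
openssl pkcs12 -in demo.p12 -password pass: -nokeys -clcerts | grep 'BEGIN CERTIFICATE'
```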

    Create client TLS certificate

    1. Generate CSR and key.
    openssl req -newkey rsa:2048 -keyout key.pem -out cert.csr -batch -nodes -subj '/CN=client'

    2. Issue a certificate using the previously created private CA (replace <CA_ARN> with the ARN of the AWS Private CA that you created).
    certArn=$(aws acm-pca issue-certificate --region <region> \
      --certificate-authority-arn "<CA_ARN>" --csr fileb://cert.csr \
      --signing-algorithm 'SHA256WITHRSA' --validity Value=180,Type='DAYS' \
      --query 'CertificateArn' --output text)

    3. Retrieve the certificate issued in the previous step.
    aws acm-pca get-certificate --region <region> \
      --certificate-authority-arn "<CA_ARN>" --certificate-arn "${certArn}" \
      --output text | sed 's/\t/\n/g' > cert.pem

    4. Export the certificate in PKCS12 format.
    openssl pkcs12 -export -in cert.pem -inkey key.pem -name ssl-configurator \
      -password pass: -out client.p12

    Set up a client machine to interact with the MSK cluster

    1. Create an Amazon EC2 instance to use as a client machine. For simplicity, create this instance in the same VPC you used for the cluster. See Step 3: Create a client machine for an example of how to create such a client machine.
    2. Copy previously created certificates admin.p12 and client.p12 into the client machine.
    3. Install Java 8+ on the client machine.
    4. Download the Kafka binaries and extract them:
    https://archive.apache.org/dist/kafka/3.5.0/kafka_2.13-3.5.0.tgz
    5. Create admin and client configuration files for authentication and authorization.
    cat <<EOF > admin.properties
    bootstrap.servers=<BOOTSTRAP_URL>
    security.protocol=SSL
    ssl.keystore.location=./admin.p12
    ssl.keystore.type=PKCS12
    ssl.keystore.password=
    EOF
    cat <<EOF > client.properties
    bootstrap.servers=<BOOTSTRAP_URL>
    security.protocol=SSL
    ssl.keystore.location=./client.p12
    ssl.keystore.type=PKCS12
    ssl.keystore.password=
    EOF

    Test Authentication and Authorization using ACLs

    Create Admin ACLs for granting admin access to clusters, topics, and groups

    By default, the MSK cluster allows access to everyone if no ACL is found for a resource. The Admin ACL created here will be the first ACL. The admin user (“User:CN=admin”) will then use it to grant permissions to the client user (“User:CN=client”).

    ACL for managing cluster operations (Admin ACL).
    ./kafka_2.13-3.5.0/bin/kafka-acls.sh \
      --add \
      --allow-principal "User:CN=admin" \
      --operation All \
      --cluster \
      --bootstrap-server "<BOOTSTRAP_URL>" \
      --command-config admin.properties

    ACL for managing topics permissions (Admin ACL).
    ./kafka_2.13-3.5.0/bin/kafka-acls.sh \
      --add \
      --allow-principal "User:CN=admin" \
      --operation All \
      --topic "*" \
      --bootstrap-server "<BOOTSTRAP_URL>" \
      --command-config admin.properties

    ACL for managing group permissions (Admin ACL).
    ./kafka_2.13-3.5.0/bin/kafka-acls.sh \
      --add \
      --allow-principal "User:CN=admin" \
      --operation All \
      --group "*" \
      --bootstrap-server "<BOOTSTRAP_URL>" \
      --command-config admin.properties

    Create a topic.

    ./kafka_2.13-3.5.0/bin/kafka-topics.sh --bootstrap-server "<BOOTSTRAP_URL>" \
      --create --topic test-topic --command-config admin.properties

    List topics and check that the topic was created.

    ./kafka_2.13-3.5.0/bin/kafka-topics.sh --bootstrap-server "<BOOTSTRAP_URL>" \
      --list --command-config admin.properties

    Grant write permission on the topic so that the client (producer) can publish messages to it (use the admin user to grant access to the client).

    ./kafka_2.13-3.5.0/bin/kafka-acls.sh \
      --add \
      --allow-principal "User:CN=client" \
      --operation Write \
      --topic "test-topic" \
      --bootstrap-server "<BOOTSTRAP_URL>" \
      --command-config admin.properties

    Publish messages to the topic using the client user.

    for x in {1..10}; do echo "message $x"; done | \
      ./kafka_2.13-3.5.0/bin/kafka-console-producer.sh \
      --bootstrap-server "<BOOTSTRAP_URL>" \
      --producer.config client.properties \
      --topic test-topic \
      --producer-property enable.idempotence=false

    Consume messages.

    Note: If you try to consume messages from the topic using a consumer group, you will get a group authorization error since the client user is not authorized to access groups.

    ./kafka_2.13-3.5.0/bin/kafka-console-consumer.sh \
      --bootstrap-server "<BOOTSTRAP_URL>" \
      --topic test-topic \
      --max-messages 2 \
      --consumer-property enable.auto.commit=false \
      --consumer-property group.id=consumer-test \
      --from-beginning \
      --consumer.config client.properties

    Grant group permission to the client user.

    ./kafka_2.13-3.5.0/bin/kafka-acls.sh \
      --add \
      --allow-principal "User:CN=client" \
      --operation Read \
      --resource-pattern-type prefixed \
      --group 'consumer-' \
      --bootstrap-server "<BOOTSTRAP_URL>" \
      --command-config admin.properties

    After providing group access, the client user should be able to consume messages from the topic using a consumer group.

    This way, you can manage client access to the topics and groups.

    Additional Commands 

    List ACLs.
    ./kafka_2.13-3.5.0/bin/kafka-acls.sh \
      --bootstrap-server "<BOOTSTRAP_URL>" \
      --list \
      --command-config admin.properties

    Delete ACL (the create and delete ACL commands are identical except for the --add/--remove argument).
    ./kafka_2.13-3.5.0/bin/kafka-acls.sh \
      --remove \
      --allow-principal "User:CN=admin" \
      --operation All \
      --cluster \
      --bootstrap-server "<BOOTSTRAP_URL>" \
      --command-config admin.properties

    Conclusion

    Amazon MSK eases the effort of managing self-hosted Kafka clusters. Users can scale Kafka brokers and storage as necessary. MSK supports TLS encryption and lets users create TLS connections from their applications to Amazon MSK brokers and ZooKeeper nodes with the help of AWS Private CA, which enables users to issue certificates for authentication.

  • Integrating Augmented Reality in a Flutter App to Enhance User Experience

    In recent years, augmented reality (AR) has emerged as a cutting-edge technology that has revolutionized various industries, including gaming, retail, education, and healthcare. Its ability to blend digital information with the real world has opened up a new realm of possibilities. One exciting application of AR is integrating it into mobile apps to enhance the user experience.

    In this blog post, we will explore how to leverage Flutter, a powerful cross-platform framework, to integrate augmented reality features into mobile apps and elevate the user experience to new heights.

    Understanding Augmented Reality:

    Before we dive into the integration process, let’s briefly understand what augmented reality is. Augmented reality is a technology that overlays computer-generated content onto the real world, enhancing the user’s perception and interaction with their environment. Unlike virtual reality (VR), which creates a fully simulated environment, AR enhances the real world by adding digital elements such as images, videos, and 3D models.

    The applications of augmented reality are vast and span across different industries. In gaming, AR has transformed mobile experiences by overlaying virtual characters and objects onto the real world. It has also found applications in areas such as marketing and advertising, where brands can create interactive campaigns by projecting virtual content onto physical objects or locations. AR has also revolutionized education by offering immersive learning experiences, allowing students to visualize complex concepts and interact with virtual models.

    In the upcoming sections, we will explore the steps to integrate augmented reality features into mobile apps using Flutter.

    What is Flutter?

    Flutter is an open-source UI (user interface) toolkit developed by Google for building natively compiled applications for mobile, web, and desktop platforms from a single codebase. It allows developers to create visually appealing and high-performance applications with a reactive and customizable user interface.

    The core language used in Flutter is Dart, which is also developed by Google. Dart is a statically typed, object-oriented programming language that comes with modern features and syntax. It is designed to be easy to learn and offers features like just-in-time (JIT) compilation during development and ahead-of-time (AOT) compilation for optimized performance in production.

    Flutter provides a rich set of customizable UI widgets that enable developers to build beautiful and responsive user interfaces. These widgets can be composed and combined to create complex layouts and interactions, giving developers full control over the app’s appearance and behavior.

    Why Choose Flutter for AR Integration?

    Flutter, backed by Google, is a versatile framework that enables developers to build beautiful and performant cross-platform applications. Its rich set of UI components and fast development cycle make it an excellent choice for integrating augmented reality features. By using Flutter, developers can write a single codebase that runs seamlessly on both Android and iOS platforms, saving time and effort.

    Flutter’s cross-platform capabilities enable developers to write code once and deploy it on multiple platforms, including iOS, Android, web, and even desktop (Windows, macOS, and Linux).

    The Flutter ecosystem is supported by a vibrant community, offering a wide range of packages and plugins that extend its capabilities. These packages cover various functionalities such as networking, database integration, state management, and more, making it easy to add complex features to your Flutter applications.

    Steps to Integrate AR in a Flutter App:

    Step 1: Set Up Flutter Project:

    Assuming that you already have Flutter installed on your system, create a new Flutter project or open an existing one to start integrating AR features. If not, follow https://docs.flutter.dev/get-started/install to set up Flutter.

    Step 2: Add ar_flutter_plugin dependency:

    Update the pubspec.yaml file of your Flutter project and add the following line under the dependencies section:

    dependencies:
      ar_flutter_plugin: ^0.7.3

    This step ensures that your Flutter project has the necessary dependencies to integrate augmented reality using the ar_flutter_plugin package.
    Run `flutter pub get` to fetch the package.

    Step 3: Initializing the AR View:

    Create a new Dart file for the AR screen. Import the required packages at the top of the file:

    Define a new class called ARScreen that extends StatefulWidget and State. This class represents the AR screen and handles the initialization and rendering of the AR view:

    import 'package:ar_flutter_plugin/ar_flutter_plugin.dart';
    import 'package:ar_flutter_plugin/datatypes/config_planedetection.dart';
    import 'package:ar_flutter_plugin/datatypes/hittest_result_types.dart';
    import 'package:ar_flutter_plugin/datatypes/node_types.dart';
    import 'package:ar_flutter_plugin/managers/ar_anchor_manager.dart';
    import 'package:ar_flutter_plugin/managers/ar_location_manager.dart';
    import 'package:ar_flutter_plugin/managers/ar_object_manager.dart';
    import 'package:ar_flutter_plugin/managers/ar_session_manager.dart';
    import 'package:ar_flutter_plugin/models/ar_anchor.dart';
    import 'package:ar_flutter_plugin/models/ar_hittest_result.dart';
    import 'package:ar_flutter_plugin/models/ar_node.dart';
    import 'package:flutter/material.dart';
    import 'package:vector_math/vector_math_64.dart' show Vector3, Vector4;

    class ARScreen extends StatefulWidget {
      const ARScreen({Key? key}) : super(key: key);

      @override
      _ARScreenState createState() => _ARScreenState();
    }

    class _ARScreenState extends State<ARScreen> {
      ARSessionManager? arSessionManager;
      ARObjectManager? arObjectManager;
      ARAnchorManager? arAnchorManager;

      List<ARNode> nodes = [];
      List<ARAnchor> anchors = [];

      @override
      void dispose() {
        super.dispose();
        arSessionManager!.dispose();
      }

      @override
      Widget build(BuildContext context) {
        return Scaffold(
            appBar: AppBar(
              title: const Text('Anchors & Objects on Planes'),
            ),
            body: Stack(children: [
              ARView(
                onARViewCreated: onARViewCreated,
                planeDetectionConfig: PlaneDetectionConfig.horizontalAndVertical,
              ),
              Align(
                alignment: FractionalOffset.bottomCenter,
                child: Row(
                    mainAxisAlignment: MainAxisAlignment.spaceEvenly,
                    children: [
                      ElevatedButton(
                          onPressed: onRemoveEverything,
                          child: const Text("Remove Everything")),
                    ]),
              )
            ]));
      }

    Step 4: Add AR functionality:

    Create a method onARViewCreated for the ARView's onARViewCreated callback. You can add the required AR functionality in this method, such as loading 3D models or handling interactions. In our demo, we will add 3D models in AR on tap:

    void onARViewCreated(
        ARSessionManager arSessionManager,
        ARObjectManager arObjectManager,
        ARAnchorManager arAnchorManager,
        ARLocationManager arLocationManager) {
      this.arSessionManager = arSessionManager;
      this.arObjectManager = arObjectManager;
      this.arAnchorManager = arAnchorManager;

      this.arSessionManager!.onInitialize(
            showFeaturePoints: false,
            showPlanes: true,
            customPlaneTexturePath: "Images/triangle.png",
            showWorldOrigin: true,
          );
      this.arObjectManager!.onInitialize();

      // Register the tap handler defined in the next step.
      this.arSessionManager!.onPlaneOrPointTap = onPlaneOrPointTapped;
    }

    After this, create a method onPlaneOrPointTapped for handling interactions.

    Future<void> onPlaneOrPointTapped(
          List<ARHitTestResult> hitTestResults) async {
        var singleHitTestResult = hitTestResults.firstWhere(
            (hitTestResult) => hitTestResult.type == ARHitTestResultType.plane);
        var newAnchor =
            ARPlaneAnchor(transformation: singleHitTestResult.worldTransform);
        bool? didAddAnchor = await arAnchorManager!.addAnchor(newAnchor);
        if (didAddAnchor!) {
          anchors.add(newAnchor);
          // Add node to anchor
          var newNode = ARNode(
              type: NodeType.webGLB,
              uri:
    "https://github.com/KhronosGroup/glTF-Sample-Models/raw/master/2.0/Duck/glTF-Binary/Duck.glb",
              scale: Vector3(0.2, 0.2, 0.2),
              position: Vector3(0.0, 0.0, 0.0),
              rotation: Vector4(1.0, 0.0, 0.0, 0.0));
          bool? didAddNodeToAnchor = await arObjectManager!
              .addNode(newNode, planeAnchor: newAnchor);
          if (didAddNodeToAnchor!) {
            nodes.add(newNode);
          } else {
            arSessionManager!.onError("Adding Node to Anchor failed");
          }
        } else {
          arSessionManager!.onError("Adding Anchor failed");
        }
      }

    Finally, create a method for onRemoveEverything to remove all the elements on the screen.

    Future<void> onRemoveEverything() async {
      for (var anchor in anchors) {
        arAnchorManager!.removeAnchor(anchor);
      }
      anchors = [];
      nodes = [];
    }

    Step 5: Run the AR screen:

    In your app’s main entry point, set the ARScreen as the home screen:

    void main() {
      runApp(MyApp());
    }
    
    class MyApp extends StatelessWidget {
      @override
      Widget build(BuildContext context) {
        return MaterialApp(
          home: ARScreen(),
        );
      }
    }

    With this in place, we can observe the AR functionality: a Duck 3D model is loaded whenever the user taps the screen. The plane is auto-detected, and once that is done, we can add a model to it. We also have a button to remove everything that is on the plane at the given moment.

    Benefits of AR Integration:

    • Immersive User Experience: Augmented reality adds an extra dimension to user interactions, creating immersive and captivating experiences. Users can explore virtual objects within their real environment, leading to increased engagement and satisfaction.
    • Interactive Product Visualization: AR allows users to visualize products in real-world settings before making a purchase. They can view how furniture fits in their living space, try on virtual clothes, or preview architectural designs. This interactive visualization enhances decision-making and improves customer satisfaction.
    • Gamification and Entertainment: Augmented reality opens up opportunities for gamification and entertainment within apps. You can develop AR games, quizzes, or interactive storytelling experiences, providing users with unique and enjoyable content.
    • Marketing and Branding: By incorporating AR into your Flutter app, you can create innovative marketing campaigns and branding experiences. AR-powered product demonstrations, virtual try-ons, or virtual showrooms help generate excitement around your brand and products.

    Conclusion:

    Integrating augmented reality into a Flutter app brings a new level of interactivity and immersion to the user experience. Flutter’s interoperability with AR frameworks like ARCore and ARKit empowers developers to create captivating and innovative mobile applications. By following the steps outlined in this blog post, you can unlock the potential of augmented reality and deliver exceptional user experiences that delight and engage your audience. Embrace the possibilities of AR in Flutter and embark on a journey of exciting and immersive app development.

  • Unveiling the Magic of Kubernetes: Exploring Pod Priority, Priority Classes, and Pod Preemption

    Introduction:

    Generally, during the deployment of a manifest, we observe that some pods get scheduled successfully, while a few critical pods encounter scheduling issues. Therefore, we must schedule the critical pods ahead of other pods. While exploring, we discovered a built-in solution for this: Pod Priority and Priority Classes. So, in this blog, we’ll talk about Priority Classes and Pod Priority, and how we can implement them for our use case.

    Pod Priority:

    It is used to prioritize one pod over another based on its importance. Pod Priority is particularly useful when critical pods cannot be scheduled due to limited resources.

    Priority Classes:

    This Kubernetes object defines the priority of pods. Priority is set as an integer value; the higher the value, the higher the pod’s priority.

    Understanding Priority Values:

    Priority Classes in Kubernetes are associated with priority values that range from 0 to 1000000000, with a higher value indicating greater importance.

    These values act as a guide for the scheduler when allocating resources. 

    Pod Preemption:

    Preemption is enabled by default when we create a priority class (its preemptionPolicy defaults to PreemptLowerPriority). The purpose of Pod Preemption is to evict lower-priority pods in order to make room for higher-priority pods to be scheduled.

    Example Scenario: The Enchanted Shop

    Let’s dive into a scenario featuring “The Enchanted Shop,” a Kubernetes cluster hosting an online store. The shop has three pods, each with a distinct role and priority:

    Priority Class:

    • Create High priority class: 
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: high-priority
    value: 1000000

    • Create Medium priority class:
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: medium-priority
    value: 500000

    • Create Low priority class:
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: low-priority
    value: 100000

    Pods:

    • Checkout Pod (High Priority): This pod is responsible for processing customer orders and must receive top priority.

    Create the Checkout Pod with a high-priority class:

    apiVersion: v1
    kind: Pod
    metadata:
      name: checkout-pod
      labels:
        app: checkout
    spec:
      priorityClassName: high-priority
      containers:
      - name: checkout-container
        image: nginx

    • Product Recommendations Pod (Medium Priority):

    This pod provides personalized product recommendations to customers and holds moderate importance.

    Create the Product Recommendations Pod with a medium priority class:

    apiVersion: v1
    kind: Pod
    metadata:
      name: product-rec-pod
      labels:
        app: product-recommendations
    spec:
      priorityClassName: medium-priority
      containers:
      - name: product-rec-container
        image: nginx

    • Shopping Cart Pod (Low Priority):

    This pod manages customers’ shopping carts and has a lower priority compared to the others.

    Create the Shopping Cart Pod with a low-priority class:

    apiVersion: v1
    kind: Pod
    metadata:
      name: shopping-cart-pod
      labels:
        app: shopping-cart
    spec:
      priorityClassName: low-priority
      containers:
      - name: shopping-cart-container
        image: nginx

    With these pods and their respective priority classes, Kubernetes will allocate resources based on their importance, ensuring smooth operation even during peak loads.

    Commands to Witness the Magic:

    • Verify Priority Classes:

    kubectl get priorityclasses

    Note: Kubernetes includes two predefined Priority Classes: system-cluster-critical and system-node-critical. These classes are specifically designed to prioritize the scheduling of critical components, ensuring they are always scheduled first.

    • Check Pod Priority:

    kubectl get pods -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priority

    Conclusion:

    In Kubernetes, you have the flexibility to define how your pods are scheduled. This ensures that your critical pods receive priority over lower-priority pods during the scheduling process. To dig deeper into the concepts of Pod Priority, Priority Classes, and Pod Preemption, refer to the official Kubernetes documentation.

  • What’s New with Material 3 in Flutter: Discussing the Key Updates with an Example

    At Google I/O 2021, Google unveiled Material You, the next evolution of Material Design, along with Android 12. This update introduced Material Design 3 (M3), bringing a host of significant changes and improvements to the Material Design system. For Flutter developers, adopting Material 3 offers a seamless and consistent design experience across multiple platforms. In this article, we will delve into the key changes of Material 3 in Flutter and explore how it enhances the app development process.

    1. Dynamic Color:

    One of the notable features of Material 3 is dynamic color, which enables developers to apply consistent colors throughout their apps. By leveraging the Material Theme Builder web app or the Figma plugin, developers can visualize and create custom color schemes based on a given seed color. The dynamic color system ensures that colors from different tonal palettes are applied consistently across the UI, resulting in a harmonious visual experience.

    2. Typography:

    Material 3 simplifies typography by categorizing it into five key groups: Display, Headline, Title, Body, and Label. This categorization makes using different sizes within each group easier, catering to devices with varying screen sizes. The scaling of typography has also become consistent across the groups, offering a more streamlined and cohesive approach to implementing typography in Flutter apps.

    3. Shapes:

    Material 3 introduces a wider range of shapes, including squared, rounded, and rounded rectangular shapes. Previously circular elements, such as the FloatingActionButton (FAB), have now transitioned to a rounded rectangular shape. Additionally, widgets like Card, Dialog, and BottomSheet feature a more rounded appearance in Material 3. These shape enhancements give developers more flexibility in designing visually appealing and modern-looking user interfaces.

4. Elevation:

    In Material Design 2, elevated components had shadows that varied based on their elevation values. Material 3 takes this a step further by introducing the surfaceTintColor color property. This property applies a color to the surface of elevated components, with the intensity varying based on the elevation value. By incorporating surfaceTintColor, elevated components remain visually distinguishable even without shadows, resulting in a more polished and consistent UI.

    Let’s go through each of them in detail.

    Dynamic Color

    Dynamic color in Flutter enables you to apply consistent colors throughout your app. It includes key and neutral colors from different tonal palettes, ensuring a harmonious UI experience. You can use tools like Material Theme Builder or Figma plugin to create a custom color scheme to visualize and generate dynamic colors. By providing a seed color in your app’s theme, you can easily create an M3 ColorScheme. For example, adding “colorSchemeSeed: Colors.green” to your app will result in a lighter green color for elements like the FloatingActionButton (FAB), providing a customized look for your app.

theme: ThemeData(
  // primarySwatch: Colors.blue,
  useMaterial3: true,
  colorSchemeSeed: Colors.green,
),

    Note:
When using colorSchemeSeed in Flutter, note that if your app's theme also defines a primarySwatch, you will encounter an assertion error: colorSchemeSeed and primarySwatch cannot be used together. To avoid this issue, remove (or comment out) the primarySwatch when you set colorSchemeSeed in your Flutter app.


    Typography

    In Material 3, the naming of typography has been made simpler by dividing it into five main groups: 

    1. Display 
    2. Headline 
    3. Title 
    4. Body 
    5. Label

Each group has a more descriptive role, making it easier to use different font sizes within a specific typography group. For example, instead of using names like bodyText1, bodyText2, and caption, Material 3 introduces names like bodyLarge, bodyMedium, and bodySmall. This improved naming system is particularly helpful when designing typography for devices with varying screen sizes.
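For instance, the renamed roles can be read straight off the theme's TextTheme; a small sketch (the surrounding widget tree is assumed):

```dart
// Material 3 text roles via Theme.of(context).textTheme (illustrative sketch)
Widget buildLabels(BuildContext context) {
  final textTheme = Theme.of(context).textTheme;
  return Column(
    children: [
      Text('Large display', style: textTheme.displayLarge),
      Text('Headline', style: textTheme.headlineMedium),
      Text('Body copy', style: textTheme.bodyLarge), // formerly bodyText1
      Text('Small label', style: textTheme.labelSmall), // roughly replaces caption
    ],
  );
}
```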

    Shapes

    Material 3 introduces an expanded selection of shapes, including square, rounded, and rounded rectangular shapes. The Floating Action Button (FAB), which used to be circular, now has a rounded rectangular shape. Material buttons have transitioned from rounded rectangular to pill-shaped. Additionally, widgets such as Card, Dialog, and BottomSheet have adopted a more rounded appearance in Material 3.

    Elevation

    In Material 2, elevated components were accompanied by shadows, with the size of the shadow increasing as the elevation increased. Material 3 brings a new feature called surfaceTintColor. When applied to elevated components, the surface of these components takes on the specified color, with the intensity varying based on the elevation value. This property is now available for all elevated widgets in Flutter, alongside elevation and shadow properties.
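As a small sketch (the color and elevation values here are illustrative), surfaceTintColor can be set alongside elevation on widgets such as Card:

```dart
// The surface tint grows stronger as elevation increases (illustrative values)
Card(
  elevation: 6,
  surfaceTintColor: Colors.green,
  shadowColor: Colors.transparent, // rely on the tint instead of a shadow
  child: const Padding(
    padding: EdgeInsets.all(16),
    child: Text('Elevated, tinted card'),
  ),
)
```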

    Here’s an example Flutter app that demonstrates the key changes in Material 3 regarding dynamic color, typography, shapes, and elevation. This example app includes a simple screen with a colored container and text, showcasing the usage of these new features:

    //main.dart
    import 'package:flutter/material.dart';
    void main() {
      runApp(MyApp());
    }
    class MyApp extends StatelessWidget {
      @override
      Widget build(BuildContext context) {
        return MaterialApp(
          debugShowCheckedModeBanner: false,
          theme: ThemeData(
            useMaterial3: true,
            colorSchemeSeed: Colors.green,
          ),
          home: const MyHomePage(),
        );
      }
    }
    class MyHomePage extends StatelessWidget {
      const MyHomePage({Key? key}) : super(key: key);
      @override
      Widget build(BuildContext context) {
        return Scaffold(
          appBar: AppBar(
            title: Text(
              'Material 3 Key Changes',
              style: Theme.of(context).textTheme.headlineSmall,
            ),
            elevation: 8,
            shadowColor: Theme.of(context).shadowColor,
          ),
          body: Container(
            width: double.infinity,
            height: 200,
            color: Theme.of(context).colorScheme.secondary,
            padding: const EdgeInsets.all(16.0),
            child: Center(
              child: Text(
                'Hello, Material 3!',
                style: Theme.of(context).textTheme.bodyLarge?.copyWith(
                      color: Colors.white,
                    ),
              ),
            ),
          ),
          floatingActionButton: FloatingActionButton(
            onPressed: () {},
            child: const Icon(Icons.add),
          ),
        );
      }
    }

    Conclusion:

    Material 3 represents a significant update to the Material Design system in Flutter, offering developers a more streamlined and consistent approach to app design. The dynamic color feature allows for consistent colors throughout the UI, while the simplified typography and expanded shape options provide greater flexibility in creating visually engaging interfaces. Moreover, the enhancements in elevation ensure a cohesive and polished look for elevated components.

    As Flutter continues to evolve and adapt to Material 3, developers can embrace these key changes to create beautiful, personalized, and accessible designs across different platforms. The Flutter team has been diligently working to provide full support for Material 3, enabling developers to migrate their existing Material 2 apps seamlessly. By staying up to date with the progress of Material 3 implementation in Flutter, developers can leverage its features to enhance their app development process and deliver exceptional user experiences.

    Remember, Material 3 is an exciting opportunity for Flutter developers to create consistent and unified UI experiences, and exploring its key changes opens up new possibilities for app design.

  • Flame Engine : Unleashing Flutter’s Game Development Potential

    With Flutter, developers can leverage a single codebase to seamlessly build applications for diverse platforms, including Android, iOS, Linux, macOS, Windows, Google Fuchsia, and the web. The Flutter team remains dedicated to empowering developers of all backgrounds, ensuring effortless creation and publication of applications using this powerful multi-platform UI toolkit.
Flutter makes developing standard applications effortless. However, if your aim is to craft an extraordinary game with stunning graphics, captivating gameplay, lightning-fast loading times, and highly responsive interactions, Flame emerges as the perfect solution.
    This blog will provide you with an in-depth understanding of Flame. Through the features provided by Flame, you will embark on a journey to master the art of building a Flutter game from the ground up. You will gain invaluable insights into seamlessly integrating animations, configuring immersive soundscapes, and efficiently managing diverse game assets.

    1. Flame engine

    Flame is a cutting-edge 2D modular game engine designed to provide a comprehensive suite of specialized solutions for game development. Leveraging the powerful architecture of Flutter, Flame significantly simplifies the coding process, empowering you to create remarkable projects with efficiency and precision.

    1.1. Setup: 

    Run this command with Flutter:

    $ flutter pub add flame

    This will add a line like this to your package’s pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  flame: ^1.8.1

    Import it, and now, in your Dart code, you can use:

    import 'package:flame/flame.dart';

    1.2. Assets Structure: 

    Flame introduces a well-structured assets directory framework, enabling seamless utilization of these resources within your projects.
    To illustrate the concepts further, let’s delve into a practical example that showcases the application of the discussed principles:

Flame.images.load('card_sprites.png');
FlameAudio.play('shuffling.mp3');

    When utilizing image and audio assets in Flame, you can simply specify the asset name without the need for the full path, given that you place the assets within the suggested directories as outlined below.

    For better organization, you have the option to divide your audio folder into two distinct subfolders: music and sfx

    The music directory is intended for audio files used as background music, while the sfx directory is specifically designated for sound effects, encompassing shots, hits, splashes, menu sounds, and more.

To properly configure your project, it is crucial to list the above-mentioned directories in your pubspec.yaml file:
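With the images, music, and sfx directories described above, the pubspec.yaml entries might look like this (a sketch; adjust the paths to your own layout):

```yaml
flutter:
  assets:
    - assets/images/
    - assets/audio/music/
    - assets/audio/sfx/
```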

    1.3. Support to other platforms: 

    As Flame is built upon the robust foundation of Flutter, its platform support is inherently reliant on Flutter’s compatibility with various platforms. Therefore, the range of platforms supported by Flame is contingent upon Flutter’s own platform support.

Presently, Flame offers extensive support for desktop platforms such as Windows, macOS, and Linux, in addition to mobile platforms, including Android and iOS. Furthermore, Flame also facilitates game development for the web. It is important to note that Flame primarily focuses on stable channel support, ensuring a reliable and robust experience. While Flame may not provide direct assistance for the dev, beta, and master channels, it is expected that Flame should function effectively in these environments as well.

    1.3.1. Flutter web: 

    To optimize the performance of your web-based game developed with Flame, it is recommended to ensure that your game is utilizing the CanvasKit/Skia renderer. By leveraging the canvas element instead of separate DOM elements, this choice enhances web performance significantly. Therefore, incorporating the CanvasKit/Skia renderer within your Flame-powered game is instrumental in achieving optimal performance on the web platform.

    To run your game using Skia, use the following command:

    flutter run -d chrome --web-renderer canvaskit

    To build the game for production, using Skia, use the following:

    flutter build web --release --web-renderer canvaskit

    2. Implementation

    2.1 GameWidget: 

    To integrate a Game instance into the Flutter widget tree, the recommended approach is to utilize the GameWidget. This widget serves as the root of your game application, enabling seamless integration of your game. You can incorporate a Game instance into the widget tree by following the example provided below:

    void main() {
      runApp(
        GameWidget(game: MyGame()),
      );
    }

    By adopting this approach, you can effectively add your Game instance to the Flutter widget tree, ensuring proper execution and integration of your game within the Flutter application structure.

2.2 FlameGame:

When developing games in Flutter, you need a widget that can efficiently handle high refresh rates and rapid memory allocation and deallocation, and that provides more functionality than the Stateless and Stateful widgets. Flame offers the FlameGame class, which excels at providing these capabilities.

    By utilizing the FlameGame class, you can create games by adding components to it. This class automatically calls the update and render methods of all the components added to it. Components can be directly added to the FlameGame through the constructor using the named children argument, or they can be added from anywhere else using the add or addAll methods.

    To incorporate the FlameGame into the widget tree, you need to pass its object to the GameWidget. Refer to the example below for clarification:

class CardMatchGame extends FlameGame {
  @override
  Future<void> onLoad() async {
    // CardTable is a user-defined component (not shown here).
    await add(CardTable());
  }
}

void main() {
  runApp(
    GameWidget(
      game: CardMatchGame(),
    ),
  );
}

    2.3 Component:

This is the last piece of the puzzle: components are the smallest individual building blocks that make up the game, much like widgets but inside the game. All components can have other components as children, and every component inherits from the abstract class Component. These components serve as the fundamental entities responsible for rendering and interactivity within the game, and their hierarchical organization allows for flexible and modular construction of complex game systems in Flame. Each component has its own lifecycle.

    Component Lifecycle: 

     

    Figure 01

    2.3.1. onLoad:

    The onLoad method serves as a crucial component within the game’s lifecycle, allowing for the execution of asynchronous operations such as image loading. Positioned between the onGameResize and onMount callbacks, this method is strategically placed to ensure the necessary assets are loaded and prepared. In Figure 01 of the component lifecycle, onLoad is set as the initial method due to its one-time execution. It is within this method that all essential assets, including images, audio files, and tmx files, should be loaded. This ensures that these assets are readily available for utilization throughout the game’s progression.

    2.3.2. onGameResize:

    Invoked when new components are added to the component tree or when the screen undergoes resizing, the onGameResize method plays a vital role in handling these events. It is executed before the onMount callback, allowing for necessary adjustments to be made in response to changes in component structure or screen dimensions.

    2.3.3. onParentResize:

    This method is triggered when the parent component undergoes a change in size or whenever the current component is mounted within the component tree. By leveraging the onParentResize callback, developers can implement logic that responds to parent-level resizing events and ensures the proper rendering and positioning of the component.

    2.3.4. onMount:

    As the name suggests, the onMount method is executed each time a component is mounted into the game tree. This critical method offers an opportunity to initialize the component and perform any necessary setup tasks before it becomes an active part of the game.

    2.3.5. onRemove:

    The onRemove method facilitates the execution of code just before a component is removed from the game tree. Regardless of whether the component is removed using the parent’s remove method or the component’s own remove method, this method ensures that the necessary cleanup actions take place in a single execution.

    2.3.6. onChildrenChanged:

    The onChildrenChanged method is triggered whenever a change occurs in a child component. Whether a child is added or removed, this method provides an opportunity to handle the updates and react accordingly, ensuring the parent component remains synchronized with any changes in its children.
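Putting the callbacks above together, a component might override them like this (a sketch against the Flame 1.x Component API; the comments describe where each kind of work belongs):

```dart
import 'package:flame/components.dart';

class LoggingComponent extends Component {
  @override
  Future<void> onLoad() async {
    // Runs once: load images, audio, and other assets here.
  }

  @override
  void onGameResize(Vector2 size) {
    super.onGameResize(size);
    // React to screen-size changes; called before onMount.
  }

  @override
  void onMount() {
    super.onMount();
    // Runs each time the component joins the game tree.
  }

  @override
  void onRemove() {
    // Clean up just before the component leaves the tree.
    super.onRemove();
  }
}
```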

    2.3.7. Render & Update Loop:

    The Render method is responsible for generating the user interface, utilizing the available data to create the game screen. It provides developers with canvas objects, allowing them to draw the game’s visual elements. On the other hand, the Update method is responsible for modifying and updating this rendered UI. Changes such as resizing, repositioning, or altering the appearance of components are managed through the Update method. In essence, any changes observed in the size or position of a component can be attributed to the Update method, which ensures the dynamic nature of the game’s user interface.
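As an illustrative sketch, a component whose update() repositions it each frame and whose render() draws it might look like this (the size, speed, and bounds are arbitrary assumptions):

```dart
import 'dart:ui';
import 'package:flame/components.dart';

class BouncingBox extends PositionComponent {
  double _vx = 100; // horizontal speed in pixels per second (arbitrary)

  BouncingBox() : super(size: Vector2(40, 40));

  @override
  void update(double dt) {
    super.update(dt);
    x += _vx * dt; // reposition each frame based on elapsed time
    if (x < 0 || x > 300) _vx = -_vx; // bounce at illustrative bounds
  }

  @override
  void render(Canvas canvas) {
    super.render(canvas);
    // Draw the box at the component's own position.
    canvas.drawRect(
      Rect.fromLTWH(0, 0, size.x, size.y),
      Paint()..color = const Color(0xFFFFFFFF),
    );
  }
}
```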

    3. Sample Project

    To showcase the practical implementation of key classes like GameWidget, FlameGame, and essential Components within the Flame game engine, we will embark on the creation of a captivating action game. By engaging in this hands-on exercise, you will gain valuable insights and hands-on experience in utilizing Flame’s core functionalities and developing compelling games. Through this guided journey, you will unlock the knowledge and skills necessary to create engaging and immersive gaming experiences, while harnessing the power of Flame’s robust framework.

    Let’s start with:

    3.1. Packages & assets: 

    3.1.1. Create a project using the following command:

    flutter create flutter_game_poc

    3.1.2. Add these under dependencies of pubspec.yaml (and run command flutter pub get):

    flame: ^1.8.0

3.1.3. As mentioned earlier in the Assets Structure section, create a directory called assets in your project and include an images subdirectory within it. Download the assets from here and add both to this images directory.

    Figure 02 

    Figure 03

    In our game, we’ll use “Figure 02” as the background image and “Figure 03” as the avatar character who will be walking. If you have separate images for the avatar’s different walking frames, you can utilize a sprite generator tool to create a sprite sheet from those individual images.

    A sprite generator helps combine multiple separate images into a single sprite sheet, which enables efficient rendering and animation of the character in the game. You can find various sprite generator tools available online that can assist in generating a sprite sheet from your separate avatar images.

    By using a sprite sheet, you can easily manage and animate the character’s walking motion within the game, providing a smooth and visually appealing experience for the players.

    After uploading, your asset structure will look like this: 

    Figure 04

3.1.4. To use these assets, we have to register them in pubspec.yaml under the assets section:

assets:
  - assets/images/

    3.2. Supporting code: 

3.2.1. Create three directories (constants, overlays, and components) inside the lib directory.

    3.2.2. First, we will start with a constants directory where we have to create 4 files as follows:

       all_constants.dart. 

export 'assets_constants.dart';
export 'enum_constants.dart';
export 'key_constants.dart';

       assets_constants.dart. 

    class AssetConstants {
     static String backgroundImage = 'background.png';
     static String avatarImage = 'avatar_sprite.png';
    }

       enum_constants.dart. 

enum WalkingDirection { idle, up, down, left, right }

       key_constants.dart. 

    class KeyConstants {
     static String overlayKey = 'DIRECTION_BUTTON';
    }

3.2.3. In addition to the assets directory, we will create an overlays directory to include elements that need to be constantly visible to the user during the game. These elements typically include information such as the score, health, or action buttons.

    For our game, we will incorporate five control buttons that allow us to direct the gaming avatar’s movements. These buttons will remain visible on the screen at all times, facilitating player interaction and guiding the avatar’s actions within the game environment.

    Organizing these overlay elements in a separate directory makes it easier to manage and update the user interface components that provide vital information and interaction options to the player while the game is in progress.

    In order to effectively manage and control the position of all overlay widgets within our game, let’s create a dedicated controller. This controller will serve as a centralized entity responsible for orchestrating the placement and behavior of these overlay elements. Create a file named  overlay_controller.dart.

    All the files in the overlays directory are common widgets that extend Stateless widget.

    class OverlayController extends StatelessWidget {
     final WalkingGame game;
     const OverlayController({super.key, required this.game});
    
    
     @override
     Widget build(BuildContext context) {
       return Column(children: [
         Row(children: [ButtonOverlay(game: game)])
       ]);
     }
    }

3.2.4. In our game, all control buttons share a common design, featuring distinct icons and functionalities. To streamline the development process and maintain a consistent user interface, we will create a versatile widget called DirectionButton. This custom widget will handle the uniform UI design for all control buttons.

    Inside the overlays directory, create a directory called widgets and add a file called direction_button.dart in that directory. This file defines the shape and color of all control buttons. 

    class DirectionButton extends StatelessWidget {
     final IconData iconData;
     final VoidCallback onPressed;
    
    
     const DirectionButton(
         {super.key, required this.iconData, required this.onPressed});
    
    
     @override
     Widget build(BuildContext context) {
       return Container(
         height: 40,
         width: 40,
         margin: const EdgeInsets.all(4),
         decoration: const BoxDecoration(
             color: Colors.black45,
             borderRadius: BorderRadius.all(Radius.circular(10))),
         child: IconButton(
           icon: Icon(iconData),
           iconSize: 20,
           color: Colors.white,
           onPressed: onPressed,
         ),
       );
     }
    }

    class ButtonOverlay extends StatelessWidget {
     final WalkingGame game;
     const ButtonOverlay({Key? key, required this.game}) : super(key: key);
    
    
     @override
     Widget build(BuildContext context) {
       return SizedBox(
         height: MediaQuery.of(context).size.height,
         width: MediaQuery.of(context).size.width,
         child: Column(
           children: [
             Expanded(child: Container()),
             Row(
               children: [
                 Expanded(child: Container()),
                 DirectionButton(
                   iconData: Icons.arrow_drop_up,
                   onPressed: () {
                     game.direction = WalkingDirection.up;
                   },
                 ),
                 const SizedBox(height: 50, width: 50)
               ],
             ),
             Row(
               children: [
                 Expanded(child: Container()),
                 DirectionButton(
                   iconData: Icons.arrow_left,
                   onPressed: () {
                     game.direction = WalkingDirection.left;
                   },
                 ),
                 DirectionButton(
                   iconData: Icons.pause,
                   onPressed: () {
                     game.direction = WalkingDirection.idle;
                   },
                 ),
                 DirectionButton(
                   iconData: Icons.arrow_right,
                   onPressed: () {
                     game.direction = WalkingDirection.right;
                   },
                 ),
               ],
             ),
             Row(
               children: [
                 Expanded(child: Container()),
                 DirectionButton(
                   iconData: Icons.arrow_drop_down,
                   onPressed: () {
                     game.direction = WalkingDirection.down;
                   },
                 ),
                 const SizedBox(height: 50, width: 50),
               ],
             ),
           ],
         ),
       );
     }
    }

    3.3. Core logic: 

    Moving forward, we will leverage the code we have previously implemented, building upon the foundations we have laid thus far:

3.3.1. The first step is to create a component. As discussed earlier, all the individual elements in the game are considered components, so let's create one component that will be our gaming avatar. For the avatar's UI, we are going to use the asset shown in Figure 03.

For the avatar, we will use SpriteAnimationComponent, as we want this component to animate automatically.

    In the components directory, create a file called avatar_component.dart. This file will hold the logic of when and how our game avatar will move. 

    In the onLoad() method, we are loading the asset and using it to create animations, and in the update() method, we are using an enum to decide the walking animation.

    class AvatarComponent extends SpriteAnimationComponent with HasGameRef {
     final WalkingGame walkingGame;
     AvatarComponent({required this.walkingGame}) {
       add(RectangleHitbox());
     }
     late SpriteAnimation _downAnimation;
     late SpriteAnimation _leftAnimation;
     late SpriteAnimation _rightAnimation;
     late SpriteAnimation upAnimation;
     late SpriteAnimation _idleAnimation;
     final double _animationSpeed = .1;
    
    
     @override
     Future<void> onLoad() async {
       await super.onLoad();
    
    
       final spriteSheet = SpriteSheet(
         image: await gameRef.images.load(AssetConstants.avatarImage),
         srcSize: Vector2(2284 / 12, 1270 / 4),
       );
    
    
       _downAnimation =
           spriteSheet.createAnimation(row: 0, stepTime: _animationSpeed, to: 11);
       _leftAnimation =
           spriteSheet.createAnimation(row: 1, stepTime: _animationSpeed, to: 11);
       upAnimation =
           spriteSheet.createAnimation(row: 3, stepTime: _animationSpeed, to: 11);
       _rightAnimation =
           spriteSheet.createAnimation(row: 2, stepTime: _animationSpeed, to: 11);
       _idleAnimation =
           spriteSheet.createAnimation(row: 0, stepTime: _animationSpeed, to: 1);
       animation = _idleAnimation;
     }
    
    
     @override
     void update(double dt) {
       switch (walkingGame.direction) {
         case WalkingDirection.idle:
           animation = _idleAnimation;
           break;
         case WalkingDirection.down:
           animation = _downAnimation;
           if (y < walkingGame.mapHeight - height) {
             y += dt * walkingGame.characterSpeed;
           }
           break;
         case WalkingDirection.left:
           animation = _leftAnimation;
           if (x > 0) {
             x -= dt * walkingGame.characterSpeed;
           }
           break;
         case WalkingDirection.up:
           animation = upAnimation;
           if (y > 0) {
             y -= dt * walkingGame.characterSpeed;
           }
           break;
         case WalkingDirection.right:
           animation = _rightAnimation;
           if (x < walkingGame.mapWidth - width) {
             x += dt * walkingGame.characterSpeed;
           }
           break;
       }
       super.update(dt);
     }
    }

3.3.2. Our avatar is ready to walk now, but there is no map or world where he can do that. So, let's create a game and add a background to it.

Create a file named walking_game.dart in the lib directory and add the following code.

    class WalkingGame extends FlameGame with HasCollisionDetection {
 late double mapWidth = 2520;
 late double mapHeight = 1300;
     WalkingDirection direction = WalkingDirection.idle;
     final double characterSpeed = 80;
     final _world = World();
    
    
 // avatar sprite
     late AvatarComponent _avatar;
    
    
     // Background image
     late SpriteComponent _background;
     final Vector2 _backgroundSize = Vector2(2520, 1300);
    
    
     // Camera Components
     late final CameraComponent _cameraComponent;
    
    
     @override
     Future<void> onLoad() async {
       await super.onLoad();
    
    
       overlays.add(KeyConstants.overlayKey);
    
    
       _background = SpriteComponent(
         sprite: Sprite(
           await images.load(AssetConstants.backgroundImage),
           srcPosition: Vector2(0, 0),
           srcSize: _backgroundSize,
         ),
         position: Vector2(0, 0),
         size: Vector2(2520, 1300),
       );
       _world.add(_background);
    
    
       _avatar = AvatarComponent(walkingGame: this)
         ..position = Vector2(529, 128)
         ..debugMode = true
         ..size = Vector2(1145 / 24, 635 / 8);
    
    
       _world.add(_avatar);
    
    
       _cameraComponent = CameraComponent(world: _world)
         ..setBounds(Rectangle.fromLTRB(390, 200, mapWidth - 390, mapHeight - 200))
         ..viewfinder.anchor = Anchor.center
         ..follow(_avatar);
    
    
       addAll([_cameraComponent, _world]);
     }
    }

    First thing in onLoad(), you can see that we are adding an overlay using a key. You can learn more about this key in the main class.

    Next is to create background components using SpriteComponent and add it to the world component. For creating the background component, we are using SpriteComponent instead of SpriteAnimationComponent because we do not need any background animation in our game.

Then we add the AvatarComponent to the same world component where we added the background component. To keep the camera fixed on the AvatarComponent, we use one extra component, CameraComponent.

Lastly, we add both the world and the CameraComponent to our game using the addAll() method.

3.3.3. Finally, we have to create the main.dart file. In this example, we are wrapping a GameWidget with MaterialApp because we want to use some features of material themes like icons, etc., in this project. If you do not want to do that, you can pass GameWidget to the runApp() method directly.
    Here we are not only adding the WalkingGame into GameWidget but also adding an overlay, which will show the control buttons. The key mentioned here for the overlay is the same key we added in walking_game.dart file’s onLoad() method.

    void main() {
     WidgetsFlutterBinding.ensureInitialized();
     Flame.device.fullScreen();
     runApp(MaterialApp(
       home: Scaffold(
         body: GameWidget(
           game: WalkingGame(),
           overlayBuilderMap: {
             KeyConstants.overlayKey: (BuildContext context, WalkingGame game) {
               return OverlayController(game: game);
             }
           },
         ),
       ),
     ));
    }

After all this, our game will look like this, and with these five control buttons, we can tell our avatar to move and/or stop.

    4. Result

    For your convenience, the complete code for the project can be found here. Feel free to refer to this code repository for a comprehensive overview of the implementation details and to access the entirety of the game’s source code.

    5. Conclusion

    Flame game engine alleviates the burden of crucial tasks such as asset loading, managing refresh rates, and efficient memory management. By taking care of these essential functionalities, Flame allows developers to concentrate on implementing the core functionality and creating an exceptional game application.

    By leveraging Flame’s capabilities, you can maximize your productivity and create an amazing game application that resonates with players across various platforms, all while enjoying the benefits of a unified codebase.

    6. References

    1. https://docs.flutter.dev/
    2. https://pub.dev/packages/flame
    3. https://docs.flame-engine.org/latest
    4. https://medium.flutterdevs.com/flame-with-flutter-4c6c3bd8931c
    5. https://supabase.com/blog/flutter-real-time-multiplayer-game
    6. https://www.kodeco.com/27407121-building-games-in-flutter-with-flame-getting-started
    7. https://blog.codemagic.io/flutter-flame-game-development/
    8. https://codelabs.developers.google.com/codelabs/flutter-flame-game

  • GitHub CI/CD vs. Xcode Cloud: A Comprehensive Comparison for iOS Platform

    Source: https://faun.pub/

    Introduction

In the realm of iOS app development, continuous integration and continuous deployment (CI/CD) have become indispensable for ensuring efficient and seamless software development. Developers are constantly seeking the most effective CI/CD solutions to streamline their workflows and optimize the delivery of high-quality iOS applications. Two prominent contenders in this arena are GitHub CI/CD and Xcode Cloud. In this article, we will delve into the intricacies of these platforms, comparing their features, benefits, and limitations to help you make an informed decision for your iOS development projects.

    GitHub CI/CD

    GitHub CI/CD is an extension of the popular source code management platform GitHub. It offers a versatile and flexible CI/CD workflow for iOS applications, enabling developers to automate the building, testing, and deployment processes. Here are some key aspects of GitHub CI/CD:

    1. Workflow Configuration: GitHub CI/CD employs a YAML-based configuration file, allowing developers to define complex workflows. This provides granular control over the CI/CD pipeline, enabling the automation of multiple tasks such as building, testing, code analysis, and deployment.
    2. Wide Range of Integrations: GitHub CI/CD seamlessly integrates with various third-party tools and services, such as Slack, Jira, and SonarCloud, enhancing collaboration and ensuring efficient communication among team members. This extensibility enables developers to incorporate their preferred tools into the CI/CD pipeline.
    3. Scalability and Customizability: GitHub CI/CD supports parallelism, allowing the execution of multiple jobs concurrently. This feature significantly reduces the overall build and test time, especially for large-scale projects. Additionally, developers can leverage custom scripts and actions to tailor the CI/CD pipeline according to their specific requirements.
    4. Community Support: GitHub boasts a vast community of developers who actively contribute to the CI/CD ecosystem. This means that developers can access a wealth of resources, tutorials, and shared workflows, expediting the adoption of CI/CD best practices.
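
    As an illustration of the YAML-based configuration described above, a minimal GitHub Actions workflow for an iOS project might look like the following sketch (the project and scheme names are placeholders, not taken from a real project):

```yaml
# Hypothetical minimal iOS CI workflow; MyApp is a placeholder name.
name: ios-ci
on: [push]
jobs:
  build-and-test:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and test
        run: |
          xcodebuild test \
            -project MyApp.xcodeproj \
            -scheme MyApp \
            -destination 'platform=iOS Simulator,name=iPhone 15'
```

    A real pipeline would typically add further steps for caching, code signing, and deployment.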

    Xcode Cloud

    Xcode Cloud is a cloud-based CI/CD solution designed specifically for iOS and macOS app development. Integrated into Apple’s Xcode IDE, Xcode Cloud provides an end-to-end development experience with seamless integration into the Apple ecosystem. Let’s explore the distinguishing features of Xcode Cloud:

    1. Native Integration with Xcode: Xcode Cloud is tightly integrated with the Xcode IDE, offering a seamless development experience for iOS and macOS apps. This integration simplifies setup and configuration, enabling developers to trigger CI/CD workflows directly from Xcode.
    2. Automated Testing and UI Testing: Xcode Cloud includes powerful testing capabilities, allowing developers to run automated tests, unit tests, and UI tests effortlessly. The platform provides a comprehensive test report with detailed insights, enabling developers to identify and resolve issues quickly.
    3. Device Testing and Distribution: Xcode Cloud enables developers to leverage Apple’s extensive device testing infrastructure for concurrent testing across multiple simulators and physical devices. Moreover, it facilitates the distribution of beta builds for internal and external testing, making it easier to gather user feedback before the final release.
    4. Seamless Code Signing and App Store Connect Integration: Xcode Cloud simplifies code signing, a critical aspect of iOS app development, by managing certificates, profiles, and provisioning profiles automatically. It seamlessly integrates with App Store Connect, automating the app submission and release process.

    Comparison

    Now, let’s compare GitHub CI/CD and Xcode Cloud across several key dimensions:

    Ecosystem and Integration

    • GitHub CI/CD: Offers extensive integrations with third-party tools and services, allowing developers to integrate with various services beyond the Apple ecosystem.
    • Xcode Cloud: Excels in its native integration with Xcode and the Apple ecosystem, providing a seamless experience for iOS and macOS developers. It leverages Apple’s testing infrastructure and simplifies code signing and distribution within the Apple ecosystem.

    Flexibility and Customizability

    • GitHub CI/CD: Provides more flexibility and customizability through its YAML-based configuration files, enabling developers to define complex workflows and integrate various tools according to their specific requirements.
    • Xcode Cloud: Focuses on streamlining the development experience within Xcode, limiting customization options compared to GitHub CI/CD.

    Scalability and Parallelism

    • GitHub CI/CD: Offers robust scalability with support for parallel job execution, making it suitable for large-scale projects that require efficient job execution in parallel.
    • Xcode Cloud: Scalability is limited to Apple’s device testing infrastructure, which may not provide the same level of scalability for non-Apple platforms or projects with extensive parallel job execution requirements.

    Community and Resources

    • GitHub CI/CD: Benefits from a large and vibrant community, offering extensive resources, shared workflows, and active community support. Developers can leverage the knowledge and experience shared by the community.
    • Xcode Cloud: As a newer offering, Xcode Cloud is still building its community ecosystem. It may have a smaller community compared to GitHub CI/CD, resulting in fewer shared workflows and resources. However, developers can still rely on Apple’s developer forums and support channels for assistance.

    Pricing

    • GitHub CI/CD: GitHub offers both free and paid plans. The pricing depends on the number of parallel jobs and additional features required. The paid plans provide more scalability and advanced features.
    • Xcode Cloud: Apple offers Xcode Cloud as part of its broader Apple Developer Program, which has an annual subscription fee. The specific pricing details for Xcode Cloud are available on Apple’s official website.

    Performance

    • GitHub CI/CD: The performance of GitHub CI/CD depends on the underlying infrastructure and resources allocated to the CI/CD pipeline. It provides scalability and parallelism options for faster job execution.
    • Xcode Cloud: Xcode Cloud leverages Apple’s testing infrastructure, which is designed for iOS and macOS app development. It offers optimized performance and reliability for testing and distribution processes within the Apple ecosystem.

    Conclusion

    Choosing between GitHub CI/CD and Xcode Cloud for your iOS development projects depends on your specific needs and priorities. If you value native integration with Xcode and the Apple ecosystem, seamless code signing, and distribution, Xcode Cloud provides a comprehensive solution. On the other hand, if flexibility, customizability, and an extensive ecosystem of integrations are crucial, GitHub CI/CD offers a powerful CI/CD platform for iOS apps. Consider your project’s unique requirements and evaluate the features and limitations of each platform to make an informed decision that aligns with your development workflow and goals.

  • Agile Estimation and Planning: Driving Success in Software Projects

    Agile software development has revolutionized the way projects are planned and executed. In Agile, estimation and planning are crucial to ensure successful project delivery. This blog post will delve into Agile estimation techniques specific to software projects, including story points, velocity, and capacity planning. We will explore how these techniques contribute to effective planning in Agile environments, enabling teams to deliver value-driven solutions efficiently.

    Understanding Agile Estimation:

    Agile estimation involves assessing work effort, complexity, and duration in a collaborative and iterative manner. Traditional time-based estimation is replaced by relative sizing, allowing flexibility and adaptability. Story points, a popular estimation unit, represent user stories’ relative effort or complexity. They facilitate prioritization and comparison, aiding in effective backlog management.

    The Importance of Agile Estimation:

    Accurate estimation is fundamental to successful project planning. Agile estimation differs from traditional approaches, focusing on relative sizing rather than precise time-based estimations. This allows teams to account for uncertainty and complexity, promoting transparency and collaboration.

    1. Better Decision Making: By understanding the relative effort and complexity of user stories or tasks, teams can make informed decisions about prioritization, resource allocation, and trade-offs.
    2. Enhanced Predictability: Agile estimation enables teams to predict how much work they can complete within a given time, facilitating reliable planning and stakeholder management.
    3. Improved Team Collaboration: Estimation in Agile is a collaborative process that involves the entire team, and it fosters open discussions, shared understanding, and collective ownership of project goals.

    Story Points: The Currency of Agile Estimation:

    Story points are a popular estimation technique used in Agile projects, and they provide a relative measure of effort and complexity for user stories or tasks. Unlike time-based estimates, story points focus on the inherent complexity and the effort required to complete the work. The Fibonacci sequence (1, 2, 3, 5, 8, etc.) or T-shirt sizes (XS, S, M, L, XL) are common scales for assigning story points.

    1. Benefits of Story Points: Story points offer several advantages over time-based estimation:
    • Relative Sizing: Story points enable teams to compare and prioritize tasks based on their relative effort rather than precise time frames. This approach avoids the pitfalls of underestimation or overestimation caused by fixed-time estimates.
    • Encourages Collaboration: Story point estimation involves the entire team, promoting healthy discussions, knowledge sharing, and alignment of expectations.
    • Focuses on Complexity: Story points emphasize the complexity of work, considering factors such as risk, uncertainty, and technical challenges.
    2. Estimation Techniques: Agile teams utilize various techniques to assign story points, such as Planning Poker, in which team members collectively discuss and debate the effort required for each user story. The goal is to reach a consensus and arrive at a shared understanding of the work’s complexity.
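
    To make the idea concrete, here is a small Kotlin sketch (a hypothetical helper, not part of any Agile tool) that snaps a raw effort guess, such as the average of Planning Poker votes, to the nearest value on a Fibonacci story-point scale:

```kotlin
import kotlin.math.abs

// Hypothetical helper: snap a raw effort guess (e.g. the average of
// Planning Poker votes) to the nearest Fibonacci story-point value.
val fibonacciScale = listOf(1, 2, 3, 5, 8, 13, 21)

fun toStoryPoints(rawEstimate: Double): Int =
    fibonacciScale.minByOrNull { abs(it - rawEstimate) }!!
```

    A raw guess of 6.2, for instance, lands on 5, nudging the team to discuss whether the story is really a 5 or an 8.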

    Velocity: Harnessing Team Performance:

    Velocity is a powerful metric, typically tracked by Agile project management tools, that measures a team’s average output in story points completed during a specific time frame, usually a sprint or iteration. It serves as a baseline for future planning and helps teams assess their performance.

    1. Benefits of Velocity Tracking: Tracking velocity provides several advantages:
    • Predictability: By analyzing past velocity, teams can forecast how much work they will likely complete in subsequent iterations. This enables them to set realistic goals and manage stakeholder expectations.
    • Resource Allocation: Velocity aids in effective resource management, allowing teams to distribute work evenly and avoid overloading or underutilizing team members.
    • Continuous Improvement: Monitoring velocity over time enables teams to identify trends, bottlenecks, and opportunities for improvement. It facilitates a culture of continuous learning and adaptation.
    2. Factors Influencing Velocity: Several factors can influence a team’s velocity, including team composition, skills, experience, availability, and external dependencies. Understanding these factors helps teams adjust their planning and make data-driven decisions.
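
    The forecasting use of velocity can be sketched in a few lines of Kotlin (the sprint numbers below are hypothetical):

```kotlin
import kotlin.math.ceil

// Hypothetical data: story points completed in the last four sprints.
val pastVelocities = listOf(21, 18, 24, 19)
val averageVelocity = pastVelocities.average()   // mean story points per sprint

// Forecast how many sprints the remaining backlog will take at the current pace.
val remainingBacklog = 123
val sprintsNeeded = ceil(remainingBacklog / averageVelocity).toInt()
```

    With an average velocity of 20.5 points, a 123-point backlog forecasts to 6 sprints, a number the team can then sanity-check against stakeholder expectations.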

    Capacity Planning: Balancing Resources and Workload:

    Capacity planning is the process of determining the team’s available resources and their ability to take on work. It involves balancing the team’s capacity with the estimated effort required for the project.

    1. Resource Assessment: Capacity planning begins by evaluating the team’s composition, skill sets, and availability. Understanding each team member’s capacity helps project managers allocate work effectively and ensure an even distribution of tasks.
    2. Managing Dependencies: Capacity planning also considers external dependencies, such as stakeholder availability, vendor dependencies, or third-party integrations. By considering these factors, teams can mitigate risks and avoid unnecessary delays.
    3. Agile Tools for Capacity Planning: Agile project management tools offer features to assist with capacity planning, allowing teams to visualize and allocate work based on the team’s availability. This helps prevent overcommitment and promotes a sustainable work pace.
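
    A simple way to picture capacity planning is to scale the team’s usual sprint output by each member’s availability. The Kotlin sketch below uses made-up names and numbers:

```kotlin
// Hypothetical capacity check: scale the team's usual sprint output by each
// member's availability for the upcoming sprint (100 = fully available).
data class Member(val name: String, val availabilityPct: Int)

val team = listOf(Member("Ana", 100), Member("Ben", 50), Member("Chen", 80))
val fullCapacityPoints = 30 // points the fully staffed team usually completes

// Integer math keeps the example exact: 30 * 230 / 300 = 23 points.
val adjustedCapacity =
    fullCapacityPoints * team.sumOf { it.availabilityPct } / (team.size * 100)
```

    Committing to roughly 23 points instead of the usual 30 reflects the reduced availability and keeps the sprint plan realistic.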

    Effective Planning in Agile Environments:

    Successful Agile planning requires adopting best practices that align with Agile principles and values. Some essential practices include:

    Refining the Backlog:

    Regularly groom and refine the product backlog to ensure user stories are well defined, appropriately prioritized, and estimated. This allows the team to plan with more clarity and respond to changing requirements effectively; continuous refinement also helps identify dependencies, risks, and opportunities for improvement.

    Collaborative Estimation:

    Encourage collaboration and involvement of the entire team in the estimation process. Techniques like Planning Poker foster discussions and consensus-building, leveraging the diverse perspectives and expertise within the team. Collaborative estimation ensures shared understanding and buy-in, leading to more accurate estimates.

    Iterative Refinement: Continuously Improving Estimation Accuracy:

    Agile estimation is not a one-time activity but an ongoing process of refinement. Teams learn from experience and continuously improve their estimation accuracy. Conduct retrospectives at the end of each sprint to reflect on the planning and estimation process. Identify areas for improvement and experiment with different techniques or approaches. Encourage feedback from the team and incorporate lessons learned into future planning efforts.

    Case-Studies:

    The following are real-world examples and case studies that highlight the benefits of Agile estimation and planning in various software projects:

    Spotify: Scaling Agile with Squads, Tribes, and Guilds:

    Spotify, a renowned music streaming platform, adopted Agile methodologies to manage their growing engineering teams. They introduced the concept of squads, which are small, cross-functional teams responsible for delivering specific features. Each squad estimates and plans their work using Agile techniques such as story points and velocity. This approach allows Spotify to maintain flexibility, foster collaboration, and continuously deliver new features and improvements.

    Salesforce: Agile Planning for Enhanced Customer Satisfaction:

    Salesforce, a cloud-based CRM software provider, implemented Agile estimation and planning techniques to enhance customer satisfaction and product delivery. They adopted a backlog-driven approach, where requirements were gathered in a prioritized backlog. Agile teams estimated the backlog items using relative sizing techniques, such as Planning Poker. By involving stakeholders in the estimation process, Salesforce improved transparency, set realistic expectations, and delivered value incrementally to their customers.

    NASA’s Mars Rover Curiosity: Agile in High-Stakes Space Exploration:

    The software development process for NASA’s Mars Rover Curiosity mission applied Agile principles to ensure the successful exploration of the red planet. The team used Agile estimation techniques to estimate the effort required for each feature, focusing on iterations and continuous integration. Agile planning allowed them to adapt to changing requirements and allocate resources effectively. The iterative development approach enabled frequent feedback loops and ensured the software met the mission’s evolving needs.

    GitHub: Agile Planning in a Collaborative Development Environment:

    GitHub, a leading platform for software development collaboration, employs Agile estimation and planning practices to manage its extensive project portfolio. They break down work into small, manageable user stories and estimate them using T-shirt sizing or affinity estimation techniques. By visualizing project progress on Kanban boards and leveraging metrics like lead time and cycle time, GitHub ensures efficient planning, prioritization, and continuous improvement across their development teams.

    Zappos: Agile Planning in E-Commerce:

    Zappos, an online shoe and clothing retailer, embraced Agile methodologies to optimize their software development and improve customer experience. Zappos efficiently plans and prioritizes features that align with customer needs and business goals by leveraging user story mapping and release planning techniques. Agile estimation helps them determine the effort required for each feature, facilitating resource allocation and ensuring timely releases and updates.

    Common Challenges and Pitfalls in Agile Estimation and Planning:

    Implementing Agile estimation and planning practices can improve project delivery by fostering collaboration, adaptability, and transparency. However, teams may encounter specific challenges or pitfalls during the implementation process. By being aware of these potential issues, teams can better anticipate and address them, improving the overall success of Agile projects. Here are some common challenges and pitfalls to watch out for:

    Unrealistic Expectations:

    One of the most significant challenges is setting realistic expectations about the accuracy of estimates and the ability to plan for uncertainties. Agile embraces change, and it is essential to communicate to stakeholders that estimates are not fixed commitments but rather the best guess based on the available information at a given time.

    Insufficient Stakeholder Involvement:

    Agile estimation and planning rely on active involvement and collaboration among all stakeholders, including the development team, product owners, and business representatives. Lack of stakeholder engagement can lead to misaligned expectations, inadequate requirements, and poor decision-making during the estimation and planning process.

    Incomplete or Unclear Requirements:

    Agile estimation and planning heavily depend on a clear understanding of project requirements. If requirements are vague, ambiguous, or incomplete, estimating accurately and planning effectively becomes challenging. Teams should strive to have well-defined user stories or product backlog items before estimation and planning activities commence.

    Overcommitting or Undercommitting:

    Agile encourages self-organizing teams to determine their capacity and commit to a realistic amount of work for each iteration or sprint. Overcommitting can lead to burnout, quality issues, and missed deadlines, while undercommitting can result in inefficient resource utilization and a lack of progress. Balancing workload and capacity requires careful consideration, continuous feedback, and a focus on sustainable delivery.

    Resistance to Change:

    Agile adoption often requires a shift in mindset and culture within the organization. Resistance to change from team members, stakeholders, or management can impede the successful implementation of Agile estimation and planning practices. Addressing resistance through education, training, and highlighting the benefits and value of Agile approaches is vital.

    By acknowledging these common challenges and pitfalls, teams can anticipate and proactively mitigate potential issues. Agile estimation and planning are iterative processes that benefit from continuous learning, collaboration, and adaptability. By addressing these challenges head-on, teams can enhance their ability to deliver successful projects while maintaining transparency, agility, and stakeholder satisfaction.

    Conclusion:

    Remember that Agile planning is a continuous and adaptive process, emphasizing collaboration, value delivery, and flexibility. In the ever-evolving world of software development, Agile estimation and planning serve as the compass that guides teams toward successful project outcomes. By harnessing the power of estimation techniques tailored for Agile environments, teams can navigate through uncertainties, prioritize work effectively, and optimize their delivery process, ultimately driving customer satisfaction and project success.

  • Unlocking Cross-Platform Development with Kotlin Multiplatform Mobile (KMM)

    In the fast-paced and ever-changing world of software development, the task of designing applications that can smoothly operate on various platforms has become a significant hurdle. Developers frequently encounter a dilemma where they must decide between constructing distinct codebases for different platforms or opting for hybrid frameworks that come with certain trade-offs.

    Kotlin Multiplatform (KMP) is an extension of the Kotlin programming language that simplifies cross-platform development by bridging the gap between platforms. This game-changing technology has emerged as a powerful solution for creating cross-platform applications.

    Kotlin Multiplatform Mobile (KMM) is a subset of KMP that provides a specific framework and toolset for building cross-platform mobile applications using Kotlin. KMM is developed by JetBrains to simplify the process of building mobile apps that can run seamlessly on multiple platforms.

    In this article, we will take a deep dive into Kotlin Multiplatform Mobile, exploring its features and benefits and how it enables developers to write shared code that runs natively on multiple platforms.

    What is Kotlin Multiplatform Mobile (KMM)?

    With KMM, developers can share code between Android and iOS platforms, eliminating the need for duplicating efforts and maintaining separate codebases. This significantly reduces development time and effort while improving code consistency and maintainability.

    KMM offers support for a wide range of UI frameworks, libraries, and app architectures, providing developers with flexibility and options. It can seamlessly integrate with existing Android projects, allowing for the gradual adoption of cross-platform development. Additionally, KMM projects can be developed and tested using familiar build tools, making the transition to KMM as smooth as possible.

    KMM vs. Other Platforms

    Here’s a table comparing the KMM (Kotlin Multiplatform Mobile) framework with some other popular cross-platform mobile development platforms:

    Sharing Code Across Multiple Platforms:

    Advantages of Utilizing Kotlin Multiplatform (KMM) in Projects

    • Code sharing: Encourages code reuse and reduces duplication, leading to faster development.
    • Faster time-to-market: Accelerates mobile app delivery, since most features only need to be developed once.
    • Consistency: Ensures consistent behavior across platforms for a better user experience.
    • Collaboration between Android and iOS teams: Facilitates collaboration between Android and iOS development teams, improving efficiency.
    • Access to native APIs: Allows developers to access platform-specific APIs and features.
    • Reduced maintenance overhead: A shared codebase makes maintenance easier and more efficient.
    • Existing Kotlin and Android ecosystem: Provides access to established libraries, tools, and resources.
    • Gradual adoption: Shared modules and components can be introduced into existing apps incrementally.
    • Performance and efficiency: Generates optimized code for each platform, resulting in efficient, performant applications.
    • Community and support: Benefits from an active community, resources, tutorials, and support.

    Limitations of Using KMM in Projects

    • Limited platform-specific APIs: The common codebase cannot call platform-specific APIs directly; such access must go through platform source sets and expect/actual declarations.
    • Platform-dependent setup and tooling: The shared code is platform-agnostic, but project setup and tooling can be platform-dependent.
    • Limited interoperability with existing platform code: Interoperability between Kotlin Multiplatform and existing platform code can be challenging.
    • Development and debugging experience: Code is shared, but the development and debugging experience still differs between platforms.
    • Limited third-party library support: Fewer ready-to-use multiplatform libraries are available, so developers must sometimes implement functionality from scratch or look for alternatives.

    Setting Up Environment for Cross-Platform Development in Android Studio

    Developing Kotlin Multiplatform Mobile (KMM) apps as an Android developer is relatively straightforward. You can use Android Studio, the same IDE that you use for Android app development. 

    To get started, we will need to install the KMM plugin through the IDE plugin manager, which is a simple step. The advantage of using Android Studio for KMM development is that we can create and run iOS apps from within the same IDE. This can help streamline the development process, making it easier to build and test apps across multiple platforms.

    In order to enable the building and running of iOS apps through Android Studio, it’s necessary to have Xcode installed on your system. Xcode is an Integrated Development Environment (IDE) used for iOS programming.

    To ensure that all dependencies are installed correctly for our Kotlin Multiplatform Mobile (KMM) project, we can use kdoctor. This tool can be installed via Homebrew by running the following command on the command line:

    $ brew install kdoctor 

    Note: If you don’t have Homebrew yet, please install it.

    Once we have all the necessary tools installed on your system, including Android Studio, Xcode, JDK, Kotlin Multiplatform Mobile Plugin, and Kotlin Plugin, we can run kdoctor in the Android Studio terminal or on our command-line tool by entering the following command:

    $ kdoctor 

    This will confirm that all required dependencies are properly installed and configured for our KMM project.

    kdoctor performs comprehensive checks and produces a detailed report. If any required tool is missing or misconfigured, the report flags the corresponding issue so it can be fixed.

    To resolve the locale warning that kdoctor may report, create a ~/.zprofile file and export the locale variables:

    $ touch ~/.zprofile

    $ export LANG=en_US.UTF-8

    $ export LC_ALL=en_US.UTF-8

    After making the above necessary changes to our environment, we can run kdoctor again to verify that everything is set up correctly. Once kdoctor confirms that all dependencies are properly installed and configured, we are done.

    Building Biometric Face & Fingerprint Authentication Application

    Let’s explore Kotlin Multiplatform Mobile (KMM) by creating an application for face and fingerprint authentication. Here our aim is to leverage KMM’s potential by developing shared code for both Android and iOS platforms. This will promote code reuse and reduce redundancy, leading to optimized code for each platform.

    Set Up an Android project

    To initiate a new project, we will launch Android Studio, select the Kotlin Multiplatform App option from the New Project template, and click on “Next.”

    We will add the fundamental application information, such as the name of the application and the project’s location, on the following screen.

    Lastly, we select the recommended iOS framework distribution option, Regular framework, and click on “Next.”

    For the iOS app, we can switch the dependency between the Regular framework and the CocoaPods dependency manager.

    After clicking the “Finish” button, the KMM project is created successfully and ready to be utilized.

    After finishing the Gradle sync process, we can execute both the iOS and Android apps by simply clicking the run button located in the toolbar.

    The generated KMM project is organized into three directories: shared, androidApp, and iosApp.

    androidApp: It contains Android app code and follows the typical structure of a standard Android application.

    iosApp: It contains iOS application code, which can be opened in Xcode using the .xcodeproj file.

    shared: It contains code and resources that are shared between the Android (androidApp) and iOS (iosApp) platforms. It allows developers to write platform-independent logic and components that can be reused across both platforms, reducing code duplication and improving development efficiency.

    Launch the iOS app and establish a connection with the framework.

    Before proceeding with iOS app development, ensure that both Xcode and CocoaPods are installed on your system.

    Open the root project folder of the KMM application (KMM_Biometric_App) created with Android Studio and navigate to the iosApp folder. Within the iosApp folder, locate the .xcodeproj file and double-click it to open the project in Xcode.

    After launching the iosApp in Xcode, the next step is to establish a connection between the framework and the iOS application. To do this, you will need to access the iOS project settings by double-clicking on the project name. Once you are in the project settings, navigate to the Build Phases tab and select the “+” button to add a new Run Script Phase.

    Add the following script:

    cd "$SRCROOT/.."

    ./gradlew :shared:embedAndSignAppleFrameworkForXcode

    Move the Run Script phase before the Compile Sources phase.

    On the Build Settings tab, select All to show every setting, then locate the Search Paths section and specify the Framework Search Paths:

    $(SRCROOT)/../shared/build/xcode-frameworks/$(CONFIGURATION)/$(SDK_NAME)

    In the Linking section of the Build Settings tab, specify the Other Linker flags:

    $(inherited) -framework shared

    Compile the project in Xcode. If all the settings are configured correctly, the project should build successfully.

    Implement Biometric Authentication in the Android App

    To enable Biometric Authentication, we will utilize the BiometricPrompt component available in the Jetpack Biometric library. This component simplifies the process of implementing biometric authentication, but it is only compatible with Android 6.0 (API level 23) and later versions. If we require support for earlier Android versions, we must explore alternative approaches.

    Biometric Library:

    implementation("androidx.biometric:biometric-ktx:1.2.0-alpha05")

    To add the biometric dependency for Android, we must include it in the androidMain source set in the build.gradle.kts file located in the shared folder. This step is specific to Android development.

    // shared/build.gradle.kts

    // ...
    sourceSets {
        val androidMain by getting {
            dependencies {
                implementation("androidx.biometric:biometric-ktx:1.2.0-alpha05")
            }
            // ...
        }
    }
    // ...

    Next, we will create the FaceAuthenticator class within the commonMain folder, which will allow us to share the biometric authentication business logic between the Android and iOS platforms.

    // shared/commonMain/FaceAuthenticator

    expect class FaceAuthenticator {
       fun isDeviceHasBiometric(): Boolean
       fun authenticateWithFace(callback: (Boolean) -> Unit)
    }

    In shared code, the “expect” keyword signifies an expected behavior or interface. It indicates a declaration that is expected to be implemented differently on each platform. By using “expect,” you establish a contract or API that the platform-specific implementations must satisfy.

    The “actual” keyword is utilized to provide the platform-specific implementation for the expected behavior or interface defined with the “expect” keyword. It represents the concrete implementation that varies across different platforms. By using “actual,” you supply the code that fulfills the contract established by the “expect” declaration.

    There are 3 different types of authenticators, defined at a level of granularity supported by BiometricManager and BiometricPrompt.

    Multiple authenticators, such as BIOMETRIC_STRONG | DEVICE_CREDENTIAL | BIOMETRIC_WEAK, can be represented as a single integer by combining their types using bitwise OR.

    BIOMETRIC_STRONG: Any biometric (e.g., fingerprint, iris, or face) on the device that meets or exceeds the requirements for Class 3 (formerly Strong), as defined by the Android CDD.

    BIOMETRIC_WEAK: Any biometric (e.g., fingerprint, iris, or face) on the device that meets or exceeds the requirements for Class 2 (formerly Weak), as defined by the Android CDD.

    DEVICE_CREDENTIAL: Authentication using a screen lock credential—the user’s PIN, pattern, or password.
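    As a sketch of how these flags combine, the following pure-Kotlin example reproduces the hex values documented for androidx.biometric's BiometricManager.Authenticators constants (treat the exact values as an assumption; in real code you would reference the constants themselves):

```kotlin
// Authenticator constants as documented for androidx.biometric's
// BiometricManager.Authenticators; reproduced here only so the sketch
// runs without Android (treat the exact values as an assumption).
const val BIOMETRIC_STRONG = 0x000F
const val BIOMETRIC_WEAK = 0x00FF
const val DEVICE_CREDENTIAL = 0x8000

fun main() {
    // Several authenticator types collapse into a single Int via bitwise OR
    val allowed = BIOMETRIC_STRONG or DEVICE_CREDENTIAL

    // Membership is a bitwise AND test
    println((allowed and DEVICE_CREDENTIAL) != 0) // true: device credential allowed

    // The Class 2 (WEAK) mask contains the Class 3 (STRONG) bits, mirroring
    // the rule that a Class 3 biometric also satisfies a Class 2 requirement
    println((BIOMETRIC_WEAK and BIOMETRIC_STRONG) == BIOMETRIC_STRONG) // true
}
```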

    Now let’s create the actual implementation of the FaceAuthenticator class in the androidMain folder of the shared module.

    // shared/androidMain/FaceAuthenticator

    actual class FaceAuthenticator(context: FragmentActivity) {
       actual fun isDeviceHasBiometric(): Boolean {
           // code to check biometric available
       }
    
       actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
           // code to authenticate using biometric
       }
    }

    Add the following code to the isDeviceHasBiometric() function to determine whether the device supports biometric authentication or not.

    actual class FaceAuthenticator(context: FragmentActivity) {

        var activity: FragmentActivity = context

        @RequiresApi(Build.VERSION_CODES.R)
        actual fun isDeviceHasBiometric(): Boolean {
            val biometricManager = BiometricManager.from(activity)
            when (biometricManager.canAuthenticate(BIOMETRIC_STRONG or BIOMETRIC_WEAK)) {
                BiometricManager.BIOMETRIC_SUCCESS -> {
                    Log.d("FaceAuthenticator", "App can authenticate using biometrics.")
                    return true
                }

                BiometricManager.BIOMETRIC_ERROR_NO_HARDWARE -> {
                    Log.e("FaceAuthenticator", "No biometric features available on this device.")
                    return false
                }

                BiometricManager.BIOMETRIC_ERROR_HW_UNAVAILABLE -> {
                    Log.e("FaceAuthenticator", "Biometric features are currently unavailable.")
                    return false
                }

                BiometricManager.BIOMETRIC_ERROR_NONE_ENROLLED -> {
                    // No biometrics enrolled; prompt the user to create credentials the app accepts
                    Log.e("FaceAuthenticator", "No biometric credentials are enrolled.")
                    val enrollIntent = Intent(Settings.ACTION_BIOMETRIC_ENROLL).apply {
                        putExtra(
                            Settings.EXTRA_BIOMETRIC_AUTHENTICATORS_ALLOWED,
                            BIOMETRIC_STRONG or BIOMETRIC_WEAK
                        )
                    }
                    startActivityForResult(activity, enrollIntent, 100, null)
                }

                BiometricManager.BIOMETRIC_ERROR_SECURITY_UPDATE_REQUIRED -> {
                    Log.e(
                        "FaceAuthenticator",
                        "A security vulnerability has been discovered; the sensor is unavailable until a security update addresses the issue."
                    )
                }

                BiometricManager.BIOMETRIC_ERROR_UNSUPPORTED -> {
                    Log.e(
                        "FaceAuthenticator",
                        "The user can't authenticate because the specified options are incompatible with the current Android version."
                    )
                }

                BiometricManager.BIOMETRIC_STATUS_UNKNOWN -> {
                    Log.e(
                        "FaceAuthenticator",
                        "Unable to determine whether the user can authenticate."
                    )
                }
            }
            return false
        }

        actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
            // code to authenticate using biometric
        }

    }

    In the provided code snippet, an instance of BiometricManager is created, and the canAuthenticate() method is invoked to determine whether the user can authenticate with an authenticator that satisfies the specified requirements. Pass the same bitwise combination of types into canAuthenticate() that you later declare with the setAllowedAuthenticators() method.

    To perform biometric authentication, insert the following code into the authenticateWithFace() method.

    actual class FaceAuthenticator(context: FragmentActivity) {

        var activity: FragmentActivity = context

        @RequiresApi(Build.VERSION_CODES.R)
        actual fun isDeviceHasBiometric(): Boolean {
            val biometricManager = BiometricManager.from(activity)
            when (biometricManager.canAuthenticate(BIOMETRIC_STRONG or BIOMETRIC_WEAK)) {
                BiometricManager.BIOMETRIC_SUCCESS -> {
                    Log.d("FaceAuthenticator", "App can authenticate using biometrics.")
                    return true
                }

                BiometricManager.BIOMETRIC_ERROR_NO_HARDWARE -> {
                    Log.e("FaceAuthenticator", "No biometric features available on this device.")
                    return false
                }

                BiometricManager.BIOMETRIC_ERROR_HW_UNAVAILABLE -> {
                    Log.e("FaceAuthenticator", "Biometric features are currently unavailable.")
                    return false
                }

                BiometricManager.BIOMETRIC_ERROR_NONE_ENROLLED -> {
                    // No biometrics enrolled; prompt the user to create credentials the app accepts
                    Log.e("FaceAuthenticator", "No biometric credentials are enrolled.")
                    val enrollIntent = Intent(Settings.ACTION_BIOMETRIC_ENROLL).apply {
                        putExtra(
                            Settings.EXTRA_BIOMETRIC_AUTHENTICATORS_ALLOWED,
                            BIOMETRIC_STRONG or BIOMETRIC_WEAK
                        )
                    }
                    startActivityForResult(activity, enrollIntent, 100, null)
                }

                BiometricManager.BIOMETRIC_ERROR_SECURITY_UPDATE_REQUIRED -> {
                    Log.e(
                        "FaceAuthenticator",
                        "A security vulnerability has been discovered; the sensor is unavailable until a security update addresses the issue."
                    )
                }

                BiometricManager.BIOMETRIC_ERROR_UNSUPPORTED -> {
                    Log.e(
                        "FaceAuthenticator",
                        "The user can't authenticate because the specified options are incompatible with the current Android version."
                    )
                }

                BiometricManager.BIOMETRIC_STATUS_UNKNOWN -> {
                    Log.e(
                        "FaceAuthenticator",
                        "Unable to determine whether the user can authenticate."
                    )
                }
            }
            return false
        }

        @RequiresApi(Build.VERSION_CODES.P)
        actual fun authenticateWithFace(callback: (Boolean) -> Unit) {

            // Create prompt info to set the dialog details; note that a negative
            // button may only be set when DEVICE_CREDENTIAL is not allowed
            val promptInfo = BiometricPrompt.PromptInfo.Builder()
                .setTitle("Authentication using biometric")
                .setSubtitle("Authenticate using face/fingerprint")
                .setAllowedAuthenticators(BIOMETRIC_STRONG or BIOMETRIC_WEAK)
                .setNegativeButtonText("Cancel")
                .build()

            // Create a BiometricPrompt object to receive the authentication callbacks
            val biometricPrompt = BiometricPrompt(activity, activity.mainExecutor,
                object : BiometricPrompt.AuthenticationCallback() {
                    override fun onAuthenticationError(
                        errorCode: Int,
                        errString: CharSequence,
                    ) {
                        super.onAuthenticationError(errorCode, errString)
                        Toast.makeText(activity, "Authentication error: $errString", Toast.LENGTH_SHORT)
                            .show()
                        callback(false)
                    }

                    override fun onAuthenticationSucceeded(
                        result: BiometricPrompt.AuthenticationResult,
                    ) {
                        super.onAuthenticationSucceeded(result)
                        Toast.makeText(activity, "Authentication succeeded!", Toast.LENGTH_SHORT).show()
                        callback(true)
                    }

                    override fun onAuthenticationFailed() {
                        super.onAuthenticationFailed()
                        Toast.makeText(activity, "Authentication failed", Toast.LENGTH_SHORT).show()
                        callback(false)
                    }
                })

            // Authenticate using the biometric prompt
            biometricPrompt.authenticate(promptInfo)
        }

    }

    In the code above, the BiometricPrompt.PromptInfo.Builder gathers the arguments to be displayed on the biometric dialog provided by the system.

    The setAllowedAuthenticators() function enables us to indicate the authenticators that are permitted for biometric authentication.

    // Create prompt Info to set prompt details
    val promptInfo = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Authentication using biometric")
        .setSubtitle("Authenticate using face/fingerprint")
        .setAllowedAuthenticators(BIOMETRIC_STRONG or BIOMETRIC_WEAK)
        .setNegativeButtonText("Cancel")
        .build()

    It is not possible to combine .setAllowedAuthenticators(BIOMETRIC_WEAK or DEVICE_CREDENTIAL) with .setNegativeButtonText("Cancel") in a BiometricPrompt.PromptInfo.Builder instance: when device credential authentication is allowed, the system supplies its own fallback button, and calling build() with a negative button set throws an IllegalArgumentException.

    However, it is possible to combine .setAllowedAuthenticators(BIOMETRIC_WEAK or BIOMETRIC_STRONG) with .setNegativeButtonText("Cancel"). In fact, a negative button is required whenever DEVICE_CREDENTIAL is not among the allowed authenticators; tapping it dismisses the prompt, and the app can then offer its own fallback, such as a password screen.

    The BiometricPrompt object facilitates biometric authentication and provides an AuthenticationCallback to handle the outcomes of the authentication process, indicating whether it was successful or encountered a failure.

    val biometricPrompt = BiometricPrompt(activity, activity.mainExecutor,
                object : BiometricPrompt.AuthenticationCallback() {
                    override fun onAuthenticationError(
                        errorCode: Int,
                        errString: CharSequence,
                    ) {
                        super.onAuthenticationError(errorCode, errString)
                        Toast.makeText(activity, "Authentication error: $errString", Toast.LENGTH_SHORT)
                            .show()
                        callback(false)
                    }
    
                    override fun onAuthenticationSucceeded(
                        result: BiometricPrompt.AuthenticationResult,
                    ) {
                        super.onAuthenticationSucceeded(result)
                        Toast.makeText(activity, "Authentication succeeded!", Toast.LENGTH_SHORT).show()
                        callback(true)
                    }
    
                    override fun onAuthenticationFailed() {
                        super.onAuthenticationFailed()
                        Toast.makeText(activity, "Authentication failed", Toast.LENGTH_SHORT).show()
                        callback(false)
                    }
                })
    
            //Authenticate using biometric prompt
            biometricPrompt.authenticate(promptInfo)

    Now, we have completed the coding of the shared code for Android in the androidMain folder. To utilize this code, we can create a new file named LoginActivity.kt within the androidApp folder.

    // androidApp/LoginActivity

    class LoginActivity : AppCompatActivity() {
    
        @RequiresApi(Build.VERSION_CODES.R)
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_login)
    
            val authenticate = findViewById<Button>(R.id.authenticate_button)
            authenticate.setOnClickListener {
    
                val faceAuthenticatorImpl = FaceAuthenticator(this)
                if (faceAuthenticatorImpl.isDeviceHasBiometric()) {
                    faceAuthenticatorImpl.authenticateWithFace {
                      if (it) { Log.d("LoginActivity", "Authentication Successful") }
                      else { Log.d("LoginActivity", "Authentication Failed") }
                    }
                }
    
            }
        }
    }

    Implement Biometric Authentication In iOS App

    For authentication, iOS provides a dedicated framework: the Local Authentication framework.

    The Local Authentication framework provides a way to integrate biometric authentication (such as Touch ID or Face ID) and device passcode authentication into your app. This framework allows you to enhance the security of your app by leveraging the biometric capabilities of the device or the device passcode.

    Now, let’s create the actual implementation of the FaceAuthenticator class in the iosMain folder of the shared module.

    // shared/iosMain/FaceAuthenticator

    actual class FaceAuthenticator {
       actual fun isDeviceHasBiometric(): Boolean {
           // code to check biometric available
       }
    
       actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
           // code to authenticate using biometric
       }
    }

    Add the following code to the isDeviceHasBiometric() function to determine whether the device supports biometric authentication or not.

    actual class FaceAuthenticator {

        actual fun isDeviceHasBiometric(): Boolean {
            // Check if biometric or passcode authentication is available.
            // The NSError out-parameter must be allocated and used inside memScoped,
            // and canEvaluatePolicy expects a pointer to it (error.ptr).
            val context = LAContext()
            return memScoped {
                val error = alloc<ObjCObjectVar<NSError?>>()
                context.canEvaluatePolicy(LAPolicyDeviceOwnerAuthentication, error = error.ptr)
            }
        }

        actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
            // code to authenticate using biometric
        }
    }

    In the above code, LAContext class is part of the Local Authentication framework in iOS. It represents a context for evaluating authentication policies and handling biometric or passcode authentication. 

    LAPolicy represents different authentication policies that can be used with the LAContext class. The LAPolicy enum defines the following policies:

    .deviceOwnerAuthenticationWithBiometrics

    This policy allows the user to authenticate using biometric authentication, such as Touch ID or Face ID. If the device supports biometric authentication and the user has enrolled their biometric data, the authentication prompt will appear for biometric verification.

    .deviceOwnerAuthentication 

    This policy allows the user to authenticate using either biometric authentication (if available) or the device passcode. If biometric authentication is supported and the user has enrolled their biometric data, the prompt will appear for biometric verification. Otherwise, the device passcode will be used for authentication.

    We have used the LAPolicyDeviceOwnerAuthentication policy constant, which authenticates either by biometry or the device passcode.

    We have used the canEvaluatePolicy(_:error:) method to check if the device supports biometric authentication and if the user has added any biometric information (e.g., Touch ID or Face ID).

    To perform biometric authentication, insert the following code into the authenticateWithFace() method.

    // shared/iosMain/FaceAuthenticator

    actual class FaceAuthenticator {

        actual fun isDeviceHasBiometric(): Boolean {
            // Check if biometric or passcode authentication is available.
            // The NSError out-parameter must be allocated and used inside memScoped,
            // and canEvaluatePolicy expects a pointer to it (error.ptr).
            val context = LAContext()
            return memScoped {
                val error = alloc<ObjCObjectVar<NSError?>>()
                context.canEvaluatePolicy(LAPolicyDeviceOwnerAuthentication, error = error.ptr)
            }
        }

        actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
            // Authenticate using biometric or device passcode
            val context = LAContext()
            val reason = "Authenticate using face"

            if (isDeviceHasBiometric()) {
                // Perform face authentication; the reply block delivers the result
                context.evaluatePolicy(
                    LAPolicyDeviceOwnerAuthentication,
                    localizedReason = reason
                ) { success: Boolean, nsError: NSError? ->
                    callback(success)
                    if (!success) {
                        print(nsError?.localizedDescription ?: "Failed to authenticate")
                    }
                }
            } else {
                // Biometrics and passcode are unavailable; report failure exactly once
                callback(false)
            }
        }

    }

    The primary purpose of LAContext is to evaluate authentication policies, such as biometric authentication or device passcode authentication. The main method for this is 

    evaluatePolicy(_:localizedReason:reply:):

    This method triggers an authentication request, which is returned in the completion block. The localizedReason parameter is a message that explains why the authentication is required and is shown during the authentication process.

    When using evaluatePolicy(_:localizedReason:reply:), we may have the option to fall back to device passcode authentication or cancel the authentication process. We can handle these scenarios by inspecting the LAError object passed in the error parameter of the completion block:

    if let error = error as? LAError {
        switch error.code {
        case .userFallback:
            // User tapped the fallback button; present a passcode entry UI
            break
        case .userCancel:
            // User canceled the authentication
            break
        default:
            // Handle other error cases as needed
            break
        }
    }

    That concludes the coding of the shared code for iOS in the iosMain folder. We can utilize this by creating LoginView.swift in the iosApp folder.

    struct LoginView: View {
        // @State so the view updates when the authentication result changes
        @State private var isFaceAuthenticated = false
        private let faceAuthenticator = FaceAuthenticator()

        var body: some View {
            Button(action: {
                if faceAuthenticator.isDeviceHasBiometric() {
                    faceAuthenticator.authenticateWithFace { isSuccess in
                        isFaceAuthenticated = isSuccess.boolValue
                        print("Result is \(isFaceAuthenticated)")
                    }
                }
            }) {
                Text("Authenticate")
                    .padding()
                    .background(Color.blue)
                    .foregroundColor(.white)
                    .cornerRadius(10)
            }
        }
    }

    This ends our implementation of biometric authentication using the KMM application that runs smoothly on both Android and iOS platforms. If you’re interested, you can find the code for this project on our GitHub repository. We would love to hear your thoughts and feedback on our implementation.

    Conclusion

    It is important to acknowledge that while KMM offers numerous advantages, it may not be suitable for every project. Applications with extensive platform-specific requirements or intricate UI components may still require platform-specific development. Nonetheless, KMM can still prove beneficial in such scenarios by facilitating the sharing of non-UI code and minimizing redundancy.

    On the whole, Kotlin Multiplatform Mobile is an exciting framework that empowers developers to effortlessly create cross-platform applications. It provides an efficient and adaptable solution for building robust and high-performing mobile apps, streamlining development processes, and boosting productivity. With its expanding ecosystem and strong community support, KMM is poised to play a significant role in shaping the future of mobile app development.

  • Unlocking Seamless Communication: BLE Integration with React Native for Device Connectivity

    In today’s interconnected world, where smart devices have become an integral part of our daily lives, the ability to communicate with Bluetooth Low Energy (BLE) enabled devices opens up a myriad of possibilities for innovative applications. In this blog, we will explore the exciting realm of communicating with BLE-enabled devices using React Native, a popular cross-platform framework for mobile app development. Whether you’re a seasoned React Native developer or just starting your journey, this blog will equip you with the knowledge and skills to establish seamless communication with BLE devices, enabling you to create powerful and engaging user experiences. So, let’s dive in and unlock the potential of BLE communication in the world of React Native!

    BLE (Bluetooth Low Energy)

    Bluetooth Low Energy (BLE) is a wireless communication technology designed for low-power consumption and short-range connectivity. It allows devices to exchange data and communicate efficiently while consuming minimal energy. BLE has gained popularity in various industries, from healthcare and fitness to home automation and IoT applications. It enables seamless connectivity between devices, allowing for the development of innovative solutions. With its low energy requirements, BLE is ideal for battery-powered devices like wearables and sensors. It offers simplified pairing, efficient data transfer, and supports various profiles for specific use cases. BLE has revolutionized the way devices interact, enabling a wide range of connected experiences in our daily lives.

    Here is a comprehensive overview of how mobile applications establish connections and facilitate communication with BLE devices.

    What will we be using?

    react-native - 0.71.6
    react - 18.0.2
    react-native-ble-manager - 10.0.2

    Note: We are assuming you already have the React Native development environment set up on your system; if not, please refer to the React Native guide for instructions on setting up the RN development environment.

    What are we building?

    Together, we will construct a sample mobile application that showcases the integration of Bluetooth Low Energy (BLE) technology. This app will search for nearby BLE devices, establish connections with them, and facilitate seamless message exchanges between the mobile application and the chosen BLE device. By embarking on this project, you will gain practical experience in building an application that leverages BLE capabilities for effective communication. Let’s commence this exciting journey of mobile app development and BLE connectivity!

    Setup

    Before setting up the react-native-ble manager, let’s start by creating a React Native application using the React Native CLI. Follow these steps:

    Step 1: Ensure that you have Node.js and npm (Node Package Manager) installed on your system.

    Step 2: Open your command prompt or terminal and navigate to the directory where you want to create your React Native project.

    Step 3: Run the following command to create a new React Native project:

    npx react-native@latest init RnBleManager

    Step 4: Wait for the project setup to complete. This might take a few minutes as it downloads the necessary dependencies.

    Step 5: Once the setup is finished, navigate into the project directory:

    cd RnBleManager

    Step 6: Congratulations! You have successfully created a new React Native application using the React Native CLI.

    Now you are ready to set up the react-native-ble manager and integrate it into your React Native project.

    Installing react-native-ble-manager

    If you use NPM -
    npm i --save react-native-ble-manager
    
    With Yarn -
    yarn add react-native-ble-manager

    In order to enable Android applications to utilize Bluetooth and location services for detecting and communicating with BLE devices, it is essential to incorporate the necessary permissions within the Android platform.

    Add the required Bluetooth and location permissions to android/app/src/main/AndroidManifest.xml.
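    The exact entries depend on the Android versions you target. A sketch covering the runtime permissions this post requests later (BLUETOOTH_SCAN, BLUETOOTH_CONNECT, BLUETOOTH_ADVERTISE, and fine location), with the legacy entries needed on Android 11 and below, could look like this:

```xml
<!-- Legacy Bluetooth permissions, needed on Android 11 (API 30) and below -->
<uses-permission android:name="android.permission.BLUETOOTH" android:maxSdkVersion="30" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" android:maxSdkVersion="30" />

<!-- Runtime Bluetooth permissions introduced in Android 12 (API 31) -->
<uses-permission android:name="android.permission.BLUETOOTH_SCAN" />
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
<uses-permission android:name="android.permission.BLUETOOTH_ADVERTISE" />

<!-- Location is required for BLE scanning on Android 11 and below -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
```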

    Integration

    At this stage, having successfully created a new React Native application, installed the react-native-ble-manager, and configured it to function seamlessly on Android, it’s time to proceed with integrating the react-native-ble-manager into your React Native application. Let’s dive into the integration process to harness the power of BLE functionality within your app.

    BleConnectionManager

    To ensure that our application can access the BLE connection state and facilitate communication with the BLE device, we will implement BLE connection management in the global state. This will allow us to make the connection management accessible throughout the entire codebase. To achieve this, we will create a ContextProvider called “BleConnectionContextProvider.” By encapsulating the BLE connection logic within this provider, we can easily share and access the connection state and related functions across different components within the application. This approach will enhance the efficiency and effectiveness of managing BLE connections. Let’s proceed with implementing the BleConnectionContextProvider to empower our application with seamless BLE communication capabilities.

    This context provider will possess the capability to access and manage the current BLE state, providing a centralized hub for interacting with the BLE device. It will serve as the gateway to establish connections, send and receive data, and handle various BLE-related functionalities. By encapsulating the BLE logic within this context provider, we can ensure that all components within the application have access to the BLE device and the ability to communicate with it. This approach simplifies the integration process and facilitates efficient management of the BLE connection and communication throughout the entire application.

    Let’s proceed with creating a context provider equipped with essential state management functionalities. This context provider will effectively handle the connection and scanning states, maintain the BLE object, and manage the list of peripherals (BLE devices) discovered during the application’s scanning process. By implementing this context provider, we will establish a robust foundation for seamlessly managing BLE connectivity and communication within the application.

    NOTE: Although not essential for the example at hand, implementing global management of the BLE connection state allows us to demonstrate its universal management capabilities.

    ....
    BleManager.disconnect(peripheral.id)
      .then(() => {
        dispatch({ type: "disconnected", payload: { peripheral } })
      })
      .catch((error) => {
        // Failure code
        console.log(error);
      });
    ....

    Prior to integrating the BLE-related components, it is crucial to ensure that the mobile app verifies whether the:

    1. Location permissions are granted and enabled
    2. Mobile device’s Bluetooth is enabled

    To accomplish this, we will implement a small method called requestBlePermissions that requests all the necessary permissions from the user. We will call this method as soon as our context provider initializes, inside a useEffect hook in the BleConnectionContextProvider. This ensures the app holds the required permissions before any BLE functionality is used.

    import {PermissionsAndroid, Platform} from "react-native"
    import BleManager from "react-native-ble-manager"
    
      const requestBlePermissions = async (): Promise<boolean> => {
        if (Platform.OS === "android" && Platform.Version < 23) {
          return true
        }
        try {
          const status = await PermissionsAndroid.requestMultiple([
            PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION,
            PermissionsAndroid.PERMISSIONS.BLUETOOTH_CONNECT,
            PermissionsAndroid.PERMISSIONS.BLUETOOTH_SCAN,
            PermissionsAndroid.PERMISSIONS.BLUETOOTH_ADVERTISE,
          ])
          return (
            status[PermissionsAndroid.PERMISSIONS.BLUETOOTH_CONNECT] == "granted" &&
            status[PermissionsAndroid.PERMISSIONS.BLUETOOTH_SCAN] == "granted" &&
            status[PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION] == "granted"
          )
        } catch (e) {
          console.error("Permissions denied ", e)
          return false
        }
      }
    
    // effects
    useEffect(() => {
      const initBle = async () => {
        await requestBlePermissions()
        await BleManager.enableBluetooth()
      }
      
      initBle()
    }, [])

    After granting all the required permissions and enabling Bluetooth, the next step is to start the BleManager. To do so, add the following line after the enableBluetooth() call in the aforementioned useEffect:

    // initialize BLE module
    BleManager.start({ showAlert: false })

    By including this code snippet, the BleManager will be initialized, facilitating the smooth integration of BLE functionality within your application.

    Now that we have obtained the necessary permissions, enabled Bluetooth, and initiated the Bluetooth manager, we can proceed with implementing the functionality to scan and detect BLE peripherals. 

    We will now incorporate the code that enables scanning for BLE peripherals. This will allow us to discover and identify nearby BLE devices. Let’s dive into the implementation of this crucial step in our application’s BLE integration process.

    To facilitate scanning and stopping the scanning process for BLE devices, as well as handle various events related to the discovered peripherals, scan stop, and BLE disconnection, we will create a method along with the necessary event listeners.

    In addition, state management is essential to effectively handle the connection and scanning states, as well as maintain the list of scanned devices. To accomplish this, let’s incorporate the following code into the BleConnectionConextProvider. This will ensure seamless management of the aforementioned states and facilitate efficient tracking of scanned devices.

    Let’s proceed with implementing these functionalities to ensure smooth scanning and handling of BLE devices within our application.
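    Before wiring these into the provider, the discovery bookkeeping can be sketched framework-free. The snippet below is illustrative (the Peripheral shape mirrors what react-native-ble-manager reports, and the handler name is an assumption, not a library API); in the app this logic runs inside the reducer's peripheral handling:

```typescript
// Simplified sketch of peripheral bookkeeping, independent of react-native.
// Peripheral mirrors the shape react-native-ble-manager reports for a
// discovered device; handleDiscoverPeripheral is an illustrative name.
interface Peripheral {
  id: string
  name?: string
  advertising?: { localName?: string }
}

const peripherals = new Map<string, Peripheral>()

// Prefer the advertised local name, falling back to the device name
function getPeripheralName(item: Peripheral): string | undefined {
  return item.advertising?.localName ?? item.name
}

// Called for every discovered peripheral; de-duplicates by id so repeated
// advertisements of the same device do not grow the list
function handleDiscoverPeripheral(peripheral: Peripheral): void {
  peripherals.set(peripheral.id, peripheral)
}

handleDiscoverPeripheral({ id: "AA:BB", advertising: { localName: "Demo-BLE" } })
handleDiscoverPeripheral({ id: "AA:BB", name: "duplicate advert" }) // same id, overwritten
console.log(peripherals.size) // 1
```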

    export const BLE_NAME = "SAMPLE_BLE"
    export const BLE_SERVICE_ID = "5476534d-1213-1212-1212-454e544f1212"
    export const BLE_READ_CHAR_ID = "00105354-0000-1000-8000-00805f9b34fb"
    export const BLE_WRITE_CHAR_ID = "00105352-0000-1000-8000-00805f9b34fb"
    
    export const BleContextProvider = ({
      children,
    }: {
      children: React.ReactNode
    }) => {
      // variables
      const BleManagerModule = NativeModules.BleManager
      const bleEmitter = new NativeEventEmitter(BleManagerModule)
      const { setConnectedDevice } = useBleStore()
    
      // State management
      const [state, dispatch] = React.useReducer(
        (prevState: BleState, action: any) => {
          switch (action.type) {
            case "scanning":
              return {
                ...prevState,
                isScanning: action.payload,
              }
            case "connected":
              return {
                ...prevState,
                connectedBle: action.payload.peripheral,
                isConnected: true,
              }
            case "disconnected":
              return {
                ...prevState,
                connectedBle: undefined,
                isConnected: false,
              }
              case "clearPeripherals":
                return {
                  ...prevState,
                  peripherals: new Map(),
                }
              case "addPerpheral": {
                // copy the Map so React gets a new reference and re-renders
                const peripherals = new Map(prevState.peripherals)
                peripherals.set(action.payload.id, action.payload.peripheral)
                return {
                  ...prevState,
                  peripherals: peripherals,
                }
              }
            default:
              return prevState
          }
        },
        initialState
      )
    
      // methods
      const getPeripheralName = (item: any) => {
        if (item.advertising) {
          if (item.advertising.localName) {
            return item.advertising.localName
          }
        }
    
        return item.name
      }
    
      // start to scan peripherals
      const startScan = () => {
        // skip if a scan is currently in progress
        console.log("Start scanning ", state.isScanning)
        if (state.isScanning) {
          return
        }
    
        dispatch({ type: "clearPeripherals" })
    
        // then re-scan it
        BleManager.scan([], 10, false)
          .then(() => {
            console.log("Scanning...")
            dispatch({ type: "scanning", payload: true })
          })
          .catch((err) => {
            console.error(err)
          })
      }
    
      const connectBle = (peripheral: any, callback?: (name: string) => void) => {
        if (peripheral && peripheral.name && peripheral.name == BLE_NAME) {
          BleManager.connect(peripheral.id)
            .then((resp) => {
              dispatch({ type: "connected", payload: { peripheral } })
              // callback from the caller
              callback && callback(peripheral.name)
              setConnectedDevice(peripheral)
            })
            .catch((err) => {
              console.log("failed connecting to the device", err)
            })
        }
      }
    
      // handle discovered peripheral
      const handleDiscoverPeripheral = (peripheral: any) => {
        console.log("Got ble peripheral", getPeripheralName(peripheral))
    
        if (peripheral.name && peripheral.name == BLE_NAME) {
          dispatch({
            type: "addPerpheral",
            payload: { id: peripheral.id, peripheral },
          })
        }
      }
    
      // handle stop scan event
      const handleStopScan = () => {
        console.log("Scan is stopped")
        dispatch({ type: "scanning", payload: false })
      }
    
      // handle disconnected peripheral
      const handleDisconnectedPeripheral = (data: any) => {
        console.log("Disconnected from " + data.peripheral)
    
        dispatch({ type: "disconnected" })
      }
    
      const handleUpdateValueForCharacteristic = (data: any) => {
        console.log(
          "Received data from: " + data.peripheral,
          "Characteristic: " + data.characteristic,
          "Data: " + toStringFromBytes(data.value)
        )
      }
    
      // effects
      useEffect(() => {
        const initBle = async () => {
          await requestBlePermissions()
          BleManager.enableBluetooth()
        }
    
        initBle()
    
        // add ble listeners on mount
        const listeners = [
          bleEmitter.addListener(
            "BleManagerDiscoverPeripheral",
            handleDiscoverPeripheral
          ),
          bleEmitter.addListener("BleManagerStopScan", handleStopScan),
          bleEmitter.addListener(
            "BleManagerDisconnectPeripheral",
            handleDisconnectedPeripheral
          ),
          bleEmitter.addListener(
            "BleManagerDidUpdateValueForCharacteristic",
            handleUpdateValueForCharacteristic
          ),
        ]
    
        // remove the listeners on unmount to avoid duplicate handlers
        return () => {
          listeners.forEach((listener) => listener.remove())
        }
      }, [])
    
    // render
      return (
        <BleContext.Provider
          value={{
            ...state,
            startScan: startScan,
            connectBle: connectBle,
          }}
        >
          {children}
        </BleContext.Provider>
      )
    }
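    Because the reducer above is a pure function, the peripheral bookkeeping can be exercised in isolation, without any native BLE modules. Below is a trimmed sketch of the two peripheral-tracking cases (peripheralReducer and its types are illustrative names of our own, not part of the library); it copies the Map before updating it so React receives a new reference and re-renders:

    ```typescript
    // Trimmed sketch of the peripheral-tracking reducer cases.
    // Copying the Map (instead of mutating the previous state's Map in place)
    // hands React a new reference, so consumers re-render on discovery.
    interface PeripheralState {
      peripherals: Map<string, any>
    }

    type PeripheralAction =
      | { type: "clearPeripherals" }
      | { type: "addPerpheral"; payload: { id: string; peripheral: any } }

    export const peripheralReducer = (
      prevState: PeripheralState,
      action: PeripheralAction
    ): PeripheralState => {
      switch (action.type) {
        case "clearPeripherals":
          return { ...prevState, peripherals: new Map() }
        case "addPerpheral": {
          const peripherals = new Map(prevState.peripherals)
          peripherals.set(action.payload.id, action.payload.peripheral)
          return { ...prevState, peripherals }
        }
        default:
          return prevState
      }
    }
    ```

    Extracting the reducer like this also makes the discovery logic straightforward to unit-test.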

    NOTE: Take note of the properties of the BLE device we intend to search for and connect to, namely BLE_NAME, BLE_SERVICE_ID, BLE_READ_CHAR_ID, and BLE_WRITE_CHAR_ID. Knowing these properties beforehand is crucial, as they let you restrict the search to specific BLE devices and connect to the desired BLE service and characteristics for reading and writing data.

    For instance, take a look at the handleDiscoverPeripheral method. In this method, we filter the discovered peripherals based on their device name, matching it with the predefined BLE_NAME we mentioned earlier. As a result, this approach allows us to obtain a list of devices that specifically match the given name, narrowing down the search to the desired devices only. 

    Additionally, you have the option to scan peripherals using the service IDs of the Bluetooth devices. This means you can specify specific service IDs to filter the discovered peripherals during the scanning process. By doing so, you can focus the scanning on Bluetooth devices that provide the desired services, enabling more targeted and efficient scanning operations.
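    The name filter inside handleDiscoverPeripheral can also be pulled out into a small pure helper, which keeps the event handler thin and makes the filter easy to unit-test. A minimal sketch, where matchesTargetDevice is a hypothetical name of our own (not a library API):

    ```typescript
    // Hypothetical pure helper mirroring the filter in handleDiscoverPeripheral.
    // Prefers the advertised local name and falls back to the GAP device name.
    export const BLE_NAME = "SAMPLE_BLE"

    export const matchesTargetDevice = (peripheral: {
      name?: string
      advertising?: { localName?: string }
    }): boolean => {
      const name = peripheral.advertising?.localName ?? peripheral.name
      return name === BLE_NAME
    }
    ```

    To filter by service instead, pass the service UUIDs as the first argument of the scan call, for example BleManager.scan([BLE_SERVICE_ID], 10, false).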

    Excellent! We now have all the necessary components in place for scanning and connecting to the desired BLE device. Let’s proceed by adding the user interface (UI) elements that will allow users to initiate the scan, display the list of scanned devices, and enable connection to the selected device. By implementing these UI components, we will create a seamless user experience for scanning, device listing, and connection within our application.

    Discovering and Establishing Connections with BLE Devices

    Let’s create a new UI component/Page that will handle scanning, listing, and connecting to the BLE device. This page will have:

    • A Scan button to call the scan function
    • A simple FlatList to list the selected BLE devices and
    • A method to connect to the selected BLE device when the user clicks on any BLE item row from the list

    Create HomeScreen.tsx in the src folder and add the following code: 

    import React, {useCallback, useEffect, useMemo} from 'react';
    import {
      ActivityIndicator,
      Alert,
      Button,
      FlatList,
      StyleSheet,
      Text,
      TouchableOpacity,
      View,
    } from 'react-native';
    import {useBleContext} from './BleContextProvider';
    
    interface HomeScreenProps {}
    
    const HomeScreen: React.FC<HomeScreenProps> = () => {
      const {
        isConnected,
        isScanning,
        peripherals,
        connectedBle,
        startScan,
        connectBle,
      } = useBleContext();
    
      // Derived list: the connected device plus all scanned peripherals
      const scannedbleList = useMemo(() => {
        const list = [];
        if (connectedBle) list.push(connectedBle);
        if (peripherals) list.push(...Array.from(peripherals.values()));
        return list;
      }, [peripherals, connectedBle, isScanning]);
    
      useEffect(() => {
        if (!isConnected) {
          startScan && startScan();
        }
      }, []);
    
      // Methods
      // Rough distance estimate in meters from RSSI (log-distance path-loss
      // model with an assumed measured power of -69 dBm at 1 m and n = 2)
      const getRssi = (rssi: number) => {
        return !!rssi
          ? Math.pow(10, (-69 - rssi) / (10 * 2)).toFixed(2) + ' m'
          : 'N/A';
      };
    
      const onBleConnected = (name: string) => {
        Alert.alert('Device connected', `Connected to ${name}.`, [
          {
            text: 'Ok',
            onPress: () => {},
            style: 'default',
          },
        ]);
      };
      const BleListItem = useCallback((item: any) => {
        // define name and rssi
        return (
          <TouchableOpacity
            style={{
              flex: 1,
              flexDirection: 'row',
              justifyContent: 'space-between',
              padding: 16,
              backgroundColor: '#2A2A2A',
            }}
            onPress={() => {
              connectBle && connectBle(item.item, onBleConnected);
            }}>
            <Text style={{textAlign: 'left', marginRight: 8, color: 'white'}}>
              {item.item.name}
            </Text>
            <Text style={{textAlign: 'right'}}>{getRssi(item.item.rssi)}</Text>
          </TouchableOpacity>
        );
      }, []);
    
      const ItemSeparator = useCallback(() => {
        return <View style={styles.divider} />;
      }, []);
    
      // render
      // Ble List and scan button
      return (
        <View style={styles.container}>
          {/* Loader when app is scanning */}
          {isScanning ? (
            <ActivityIndicator size={'small'} />
          ) : (
            <>
              {/* Ble devices List View */}
              {scannedbleList && scannedbleList.length > 0 ? (
                <>
                  <Text style={styles.listHeader}>Discovered BLE Devices</Text>
                  <FlatList
                    data={scannedbleList}
                    renderItem={({item}) => <BleListItem item={item} />}
                    ItemSeparatorComponent={ItemSeparator}
                  />
                </>
              ) : (
                <View style={styles.emptyList}>
                  <Text style={styles.emptyListText}>
                    No Bluetooth devices discovered. Tap Scan to search for
                    nearby BLE devices.
                  </Text>
                </View>
              )}
    
              {/* Scan button */}
              <View style={styles.btnContainer}>
                <Button
                  title="Scan"
                  color={'black'}
                  disabled={isConnected || isScanning}
                  onPress={() => {
                    startScan && startScan();
                  }}
                />
              </View>
            </>
          )}
        </View>
      );
    };
    
    const styles = StyleSheet.create({
      container: {
        flex: 1,
        flexDirection: 'column',
      },
      listHeader: {
        padding: 8,
        color: 'black',
      },
      emptyList: {
        flex: 1,
        justifyContent: 'center',
        alignItems: 'center',
      },
      emptyListText: {
        padding: 8,
        textAlign: 'center',
        color: 'black',
      },
      btnContainer: {
        marginTop: 10,
        marginHorizontal: 16,
        bottom: 10,
        alignItems: 'flex-end',
      },
      divider: {
        height: 1,
        width: '100%',
        marginHorizontal: 8,
        backgroundColor: '#1A1A1A',
      },
    });
    
    export default HomeScreen;
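    The getRssi helper above turns signal strength into a rough distance estimate using the log-distance path-loss model, d = 10^((measuredPower − RSSI) / (10 · n)). Extracted as a standalone function (the −69 dBm measured power at 1 m and the environment factor n = 2 are assumptions; calibrate them for your device and surroundings):

    ```typescript
    // Log-distance path-loss model: estimate distance in meters from RSSI.
    // measuredPower is the expected RSSI at 1 m (assumed -69 dBm here);
    // n is the environmental attenuation factor (2 approximates free space).
    export const distanceFromRssi = (
      rssi: number,
      measuredPower = -69,
      n = 2
    ): number => Math.pow(10, (measuredPower - rssi) / (10 * n))
    ```

    With the defaults, an RSSI of −69 dBm maps to roughly 1 m and −89 dBm to roughly 10 m; treat these as coarse proximity hints rather than precise ranges.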

    Now, open App.tsx and replace its entire contents with the following code. Here we drop the default boilerplate that the React Native CLI generated when creating the project and instead render the BleContextProvider and HomeScreen.

    import React from 'react';
    import {SafeAreaView, StatusBar, useColorScheme, View} from 'react-native';
    
    import {Colors} from 'react-native/Libraries/NewAppScreen';
    import {BleContextProvider} from './BleContextProvider';
    import HomeScreen from './HomeScreen';
    
    function App(): JSX.Element {
      const isDarkMode = useColorScheme() === 'dark';
    
      const backgroundStyle = {
        backgroundColor: isDarkMode ? Colors.darker : Colors.lighter,
      };
    
      return (
        <SafeAreaView style={backgroundStyle}>
          <StatusBar
            barStyle={isDarkMode ? 'light-content' : 'dark-content'}
            backgroundColor={backgroundStyle.backgroundColor}
          />
          <BleContextProvider>
            <View style={{height: '100%', width: '100%'}}>
              <HomeScreen />
            </View>
          </BleContextProvider>
        </SafeAreaView>
      );
    }
    
    export default App;

    Running the application on an Android device: Upon launching the app, you will be presented with an empty list message accompanied by a scan button. Simply tap the scan button to retrieve a list of available BLE peripherals within the range of your mobile device. By selecting a specific BLE device from the list, you can establish a connection with it.

    Awesome! Now we are able to scan, detect, and connect to BLE devices, but there is more to it than just connecting. We can write to and read information from BLE devices, and based on that information, mobile applications or backend services can perform several other operations.

    For example, if you are wearing a connected BLE device that monitors your blood pressure every hour, and a reading goes beyond the threshold, it can trigger a call to a doctor or family member so they can check in and take precautionary measures as soon as possible.

    Communicating with BLE devices

    For seamless communication with a BLE device, the mobile app must possess precise knowledge of the services and characteristics associated with the device. A BLE device typically presents multiple services, each comprising various distinct characteristics. These services and characteristics can be collaboratively defined and shared by the team responsible for manufacturing the BLE device.

    In BLE communication, comprehending the characteristics and their properties is crucial, as they serve distinct purposes. Certain characteristics facilitate writing data to the BLE device, while others enable reading data from it. Gaining a comprehensive understanding of these characteristics and their properties is vital for effectively interacting with the BLE device and ensuring seamless communication.

    Reading data from BLE device when BLE sends data

    Once the mobile app successfully establishes a connection with the BLE device, it retrieves the available services and activates a listener to begin receiving notifications from the device. This takes place within the callback of the connectBle method, ensuring that the app seamlessly retrieves the necessary information and starts listening for important updates from the connected BLE device.

    const connectBle = (peripheral: any, callback?: (name: string) => void) => {
        if (peripheral && peripheral.name && peripheral.name == BLE_NAME) {
          BleManager.connect(peripheral.id)
            .then((resp) => {
              dispatch({ type: "connected", payload: { peripheral } })
              // callback from the caller
              callback && callback(peripheral.name)
              setConnectedDevice(peripheral)
    
              // retrieve services and start read notification
              BleManager.retrieveServices(peripheral.id).then((resp) => {
                BleManager.startNotification(
                  peripheral.id,
                  BLE_SERVICE_ID,
                  BLE_READ_CHAR_ID
                )
                  .then(console.log)
                  .catch(console.error)
              })
            })
            .catch((err) => {
              console.log("failed connecting to the device", err)
            })
        }
      }

    Consequently, the application will promptly receive notifications whenever the BLE device writes data to the designated characteristic within the specified service.

    Reading and writing data to BLE from a mobile device

    To establish communication between the mobile app and the BLE device, we will implement new methods within BleContextProvider. These methods will facilitate reading data from and writing data to the BLE device. By exposing these methods through BleContextProvider’s context value, we ensure that the app has a reliable means of interacting with the BLE device and can seamlessly exchange information as required.

    interface BleState {
      isConnected: boolean
      isScanning: boolean
      peripherals: Map<string, any>
      list: Array<any>
      connectedBle: Peripheral | undefined
      startScan?: () => void
      connectBle?: (peripheral: any, callback?: (name: string) => void) => void
      readFromBle?: (id: string) => void
      writeToble?: (
        id: string,
        content: string,
        count: number,
        buttonNumber: ButtonNumber,
        callback?: (count: number, buttonNumber: ButtonNumber) => void
      ) => void
    }
    
    export const BleContextProvider = ({
      children,
    }: {
      children: React.ReactNode
    }) => {
        ....
        
        const writeToBle = (
        id: string,
        content: string,
        count: number,
        buttonNumber: ButtonNumber,
        callback?: (count: number, buttonNumber: ButtonNumber) => void
      ) => {
        BleManager.retrieveServices(id).then((response) => {
          BleManager.writeWithoutResponse(
            id,
            BLE_SERVICE_ID,
            BLE_WRITE_CHAR_ID,
            toByteArray(content)
          )
            .then((res) => {
              callback && callback(count, buttonNumber)
            })
            .catch((res) => console.log("Error writing to BLE device - ", res))
        })
      }
    
      const readFromBle = (id: string) => {
        BleManager.retrieveServices(id).then((response) => {
          BleManager.read(id, BLE_SERVICE_ID, BLE_READ_CHAR_ID)
            .then((resp) => {
              console.log("Read from BLE", toStringFromBytes(resp))
            })
            .catch((err) => {
              console.error("Error Reading from BLE", err)
            })
        })
      }
      ....
    
      // render
      return (
        <BleContext.Provider
          value={{
            ...state,
            startScan: startScan,
            connectBle: connectBle,
            writeToble: writeToBle,
            readFromBle: readFromBle,
          }}
        >
          {children}
        </BleContext.Provider>
      )    
    }

    NOTE: Before every write, read, or startNotification call, you need to call the retrieveServices method first.
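    The read and write snippets above rely on toByteArray and toStringFromBytes, which are not part of react-native-ble-manager; the library exchanges characteristic payloads as plain number arrays. A minimal sketch of these helpers, assuming one byte per character (so only ASCII/Latin-1 text round-trips safely):

    ```typescript
    // Helpers assumed by the read/write snippets: react-native-ble-manager
    // sends and receives characteristic payloads as number[] byte arrays.
    // One byte per character -- use a proper UTF-8 codec for wider text.
    export const toByteArray = (content: string): number[] =>
      Array.from(content, (ch) => ch.charCodeAt(0) & 0xff)

    export const toStringFromBytes = (bytes: number[]): string =>
      bytes.map((b) => String.fromCharCode(b)).join("")
    ```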

    Disconnecting BLE connection

    Once you are done with the BLE services, you can close the connection using the library’s disconnect method, which takes the id of the connected peripheral.

    ....
    BleManager.disconnect(peripheral.id) // pass the peripheral id, not a service UUID
      .then(() => {
        dispatch({ type: "disconnected", payload: { peripheral } })
      })
      .catch((error) => {
        // Failure code
        console.log(error);
      });
    ....

    Additionally, the React Native BLE Manager library offers various other methods that can enhance the application’s functionality. These include the createBond method, which facilitates the pairing of the BLE device with the mobile app, the stopNotification method, which ceases receiving notifications from the device, and the readRSSI method, which retrieves the received signal strength indicator (RSSI) of the device. For a more comprehensive understanding of the library and its capabilities, I recommend exploring further details on the React Native BLE Manager library documentation here: https://www.npmjs.com/package/react-native-ble-manager

    Conclusion

    We delved into the fascinating world of communicating with BLE (Bluetooth Low Energy) using the React Native BLE Manager library. Then we explored the power of BLE technology and how it can be seamlessly integrated into React Native applications to enable efficient and low-power communication between devices.

    Using the React Native BLE Manager library, we explored essential functionalities such as scanning for nearby BLE devices, establishing connections, discovering services and characteristics, and exchanging data. We also dived into more advanced features like managing connections and handling notifications for a seamless user experience.

    It’s important to remember that BLE technology is continually evolving, and there may be additional libraries and frameworks available for BLE communication in the React Native ecosystem. As you progress on your journey, I encourage you to explore other resources, keep up with the latest advancements, and stay connected with the vibrant community of developers working with BLE and React Native.

    I hope this blog post has inspired you to explore the immense potential of BLE communication in your React Native applications. By harnessing the power of BLE, you can create innovative, connected experiences that enhance the lives of your users and open doors to new possibilities.

    Thank you for taking the time to read through this blog!

  • ARMed to Entertain: Why the Consumer Electronics Industry loves the ARM microcontroller

    Introduction

    We live in a world where convenience is king. Millions of electronic devices work in tandem to simplify our lives. The brain in these devices is the microcontroller. Today, we’re going to talk about the ARM microcontroller, which is the heart and soul of consumer electronic devices like smartphones, tablets, multimedia players, and wearable devices.

    To start off, there are two main processor architecture designs: RISC (reduced instruction set computer) and CISC (complex instruction set computer). ARM is the poster child for RISC; in fact, RISC is part of its very name, Advanced RISC Machine.

    Its highly optimized and power-efficient architecture makes it indispensable in today’s world. Let’s look at its design in more detail.

    A Powerful Brain for Embedded Systems

    A mobile or tablet is a shining example of an extremely portable computing device.

    It’s a great way to keep your life organized, communicate with practically anyone, consume media content, and enjoy unlimited games and entertainment. These capabilities just keep improving over time.

    But there is a silent struggle between applications and the hardware they run on. We all have experienced that annoying lag on our smartphones and not to mention the battery giving up on us when we need it the most. Luckily, ARM is packed with features to help us manage this. 

    Let’s Talk Simplicity

    An ‘assembly instruction set’ is the language understood by the ARM controller. Its design plays a crucial role in enabling us to perform a task in an efficient and optimized manner. ARM has a reduced instruction set (RISC). This does not denote there are fewer instructions available for use. It means a single instruction does less work, i.e., a small atomic task.

    As an example, let’s consider adding two numbers that would involve separate instructions for loading, adding, and storing the result using RISC design. Comparatively, a CISC design would have handled all of this in a single instruction. A simple instruction set does not require complex hardware design. This enables an ARM controller design to use fewer transistors and take up less silicon area. This reduces the power consumption, which is critical for battery-operated devices, along with corresponding savings in cost. But RISC controllers need a greater number of instructions to execute a task as compared to CISC. The compiler design for generating machine code from higher-level languages such as C is more complex in this case.

    Hence one needs to write optimized code to extract the best performance from ARM.

    Dealing with the Energy Vampire

    An hour of intense gaming drains your battery and leaves you scrambling for a wall charger or power bank. This is because a lot of computations are done in specially designed hardware units in ARM, which need extra power. These units barely consume any power when your device is idle. This means there is a direct relation between the intensity of computations and energy consumption.

    Every microcontroller needs a clock pulse, which is comparable to the heartbeat of the controller. It governs the speed at which instructions are executed and helps the controller keep time while performing tasks or governing the rate at which peripherals are run. The commencement and duration of any action that a processor may perform can be expressed in terms of clock cycles. A lower clock rate reduces the power consumption, which is critical for embedded devices but unfortunately also leads to a drop in performance. An instruction pipeline helps to boost performance and throughput while enabling a lower clock rate to be used. This can be compared to the functioning of a turbocharger in a car engine, where the real saving is in the benefits of using a smaller capacity engine but boosting it to match one that is larger and more powerful.

    With careful programming, we can increase the instruction throughput to do a lot more in a single clock cycle. Such judicious use of the system clock preserves battery life, reducing the need to charge the battery frequently.

    Busy as a Bee

    Another critical feature that speeds up execution is the instruction pipeline. It introduces parallelism in the execution of instructions. All instructions go through the fetch, decode, and execute stages which involve loading the instruction from program memory, understanding what task it performs, and finally, its execution. We have an instruction in each stage of the pipeline at any point in time. This increases throughput and speeds up code execution. Imagine you are at work, and each time you complete a task, your manager has a new one kept ready so that you are never idle. Yes, that would be the perfect analogy for the instruction pipeline. It reduces the wastage of clock cycles by ensuring there are always instructions fetched and available for execution.

    A Math Specialist

    A core part of computing involves transforming data and making decisions. Speed and accuracy are paramount in such situations. ARM has you covered with hardware units for arithmetic and logical instructions, enhanced DSP, and NEON technology for parallel processing of data. In short, all the bells and whistles needed to handle everything from music playback to powering drone platforms.  

    The NEON coprocessor is capable of executing multiple math operations simultaneously.

    It reduces the computational load on the main ARM controller. The design of these math units allows us to balance the tradeoff between computational speed and accuracy. Depending on the application requirements, we may choose to perform four 16-bit multiply operations in parallel via NEON rather than four 32-bit multiply operations sequentially in the ARM ALU (arithmetic and logic unit). The precision of the final result is reduced by the 16-bit operands, but the gain in computational speed is significant. The ability to provide such multimedia acceleration is what makes ARM the main choice for portable audio, video, and gaming applications.

    Conclusion

    We see that the system designers have attempted to balance performance, power consumption, and cost to produce a powerful embedded computing machine. As portability and efficiency demands increase, we can see ARM’s influence continue to expand.

    An application, if designed appropriately to leverage all of ARM’s features, can provide stunning performance without draining the battery.

    It takes a special level of skill to tune an application in “assembly language,” but the final result exceeds expectations. The next time you see a tiny wearable device delivering unbelievable performance, you know who the hidden star of the show is.