Tag: react native

  • Protecting Your Mobile App: Effective Methods to Combat Unauthorized Access

    Introduction: The Digital World’s Hidden Dangers

    Imagine you’re running a popular mobile app that offers rewards to users. Sounds exciting, right? But what if a few clever users find a way to cheat the system for more rewards? This is exactly the challenge many app developers face today.

    In this blog, we’ll describe a real-world story of how we fought back against digital tricksters and protected our app from fraud. It’s like a digital detective story, but instead of solving crimes, we’re stopping online cheaters.

    Understanding How Fraudsters Try to Trick the System

    The Sneaky World of Device Tricks

    Let’s break down how users may try to outsmart mobile apps:

    One way is through device ID manipulation. What is this? Think of a device ID like a unique fingerprint for your phone. Normally, each phone has its own special ID that helps apps recognize it. But some users have found ways to change this ID, kind of like wearing a disguise.

    Real-world example: Imagine you’re at a carnival with a ticket that lets you ride each ride once. A fraudster might try to change their appearance to get multiple rides. In the digital world, changing a device ID is similar—it lets users create multiple accounts and get more rewards than they should.

    How Do People Create Fake Accounts?

    Users have become super creative in making multiple accounts:

    • Using special apps that create virtual phone environments
    • Playing with email addresses
    • Using temporary email services

    A simple analogy: It’s like someone trying to enter a party multiple times by wearing different costumes and using slightly different names. The goal? To get more free snacks or entry benefits.

    The Detective Work: How to Catch These Digital Tricksters

    Tracking User Behavior

    Modern tracking tools are like having a super-smart security camera that doesn’t just record but actually understands what’s happening. Here are some powerful tools you can explore:

    LogRocket: Your App’s Instant Replay Detective

    LogRocket records and replays user sessions, capturing every interaction, error, and performance hiccup. It’s like having a video camera inside your app, helping developers understand exactly what users experience in real time.

    Quick snapshot:

    • Captures user interactions
    • Tracks performance issues
    • Provides detailed session replays
    • Helps identify and fix bugs instantly

    Mixpanel: The User Behavior Analyst

    Mixpanel is a smart analytics platform that breaks down user behavior, tracking how people use your app, where they drop off, and what features they love most. It’s like having a digital detective who understands your users’ journey.

    Key capabilities:

    • Tracks user actions
    • Creates behavior segments
    • Measures conversion rates
    • Provides actionable insights

    How these tools help with fraud detection:

    • Notice unusual account creation patterns
    • Detect suspicious activities
    • Prevent potential fraud before it happens

    Email Validation: The First Line of Defense

    How it works:

    • Recognize similar email addresses
    • Prevent creating multiple accounts with slightly different emails
    • Block tricks like:
      • a.bhi629@gmail.com
      • abhi.629@gmail.com

    Real-life comparison: It’s like a smart mailroom that knows “John Smith” and “J. Smith” are the same person, preventing duplicate mail deliveries.
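
A minimal sketch of this idea, assuming Gmail-style normalization (dots in the local part are ignored and anything after a `+` is a disposable suffix; other providers have different rules, so production code should apply them per domain):

```typescript
// Normalize an email so trivially different variants collapse to one key.
// Assumes Gmail-style rules; apply per-domain rules in production.
function normalizeEmail(email: string): string {
  const [localPart, domain] = email.toLowerCase().split("@")
  const withoutSuffix = localPart.split("+")[0]   // drop "+promo"-style suffixes
  const withoutDots = withoutSuffix.replace(/\./g, "") // ignore dots in local part
  return `${withoutDots}@${domain}`
}

// Both variants from the list above map to the same key:
normalizeEmail("a.bhi629@gmail.com")  // "abhi629@gmail.com"
normalizeEmail("abhi.629@gmail.com")  // "abhi629@gmail.com"
```

Storing the normalized form as a unique key makes the duplicate-account check a simple lookup.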

    Advanced Protection Strategies

    Device ID Tracking

    Key Functions:

    • Store unique device information
    • Check if a device has already claimed rewards
    • Prevent repeat bonus claims

    Simple explanation: Imagine a bouncer at a club who remembers everyone who’s already entered and stops them from sneaking in again.
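
The bouncer logic can be sketched as below. The in-memory set and function name are illustrative only; a real app would query a backend database keyed by device ID:

```typescript
// Illustrative in-memory store; in practice this is a backend lookup.
const claimedDeviceIds = new Set<string>()

// Records the device the first time it claims a reward and returns true;
// returns false on any repeat attempt from the same device ID.
function tryClaimReward(deviceId: string): boolean {
  if (claimedDeviceIds.has(deviceId)) {
    return false // this device has already claimed its bonus
  }
  claimedDeviceIds.add(deviceId)
  return true
}
```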

    Stopping Fake Device Environments

    Some users try to create fake device environments using apps like:

    • Parallel Space
    • Multiple account creators
    • Game cloners

    Protection method: The app identifies and blocks these applications, just like a security system that recognizes fake ID cards.

    Root Device Detection

    What is a Rooted Device? It’s like a phone that’s been modified to give users complete control, bypassing normal security restrictions.

    Detection techniques:

    • Check for special root access files
    • Verify device storage
    • Run specific detection commands

    Analogy: It’s similar to checking if a car has been illegally modified to bypass speed limits.
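
As a rough sketch, one common heuristic is checking for well-known root artifacts such as the `su` binary. The path list below is typical but not exhaustive, and `fileExists` is an assumed helper standing in for whatever file-system check your app uses (usually a native module), not part of any specific library:

```typescript
// Paths where root artifacts commonly appear on rooted Android devices.
const ROOT_INDICATOR_PATHS = [
  "/system/bin/su",
  "/system/xbin/su",
  "/sbin/su",
  "/system/app/Superuser.apk",
]

// `fileExists` is injected so the heuristic stays testable; in an app it
// would be backed by a native file-system check.
function isLikelyRooted(fileExists: (path: string) => boolean): boolean {
  return ROOT_INDICATOR_PATHS.some((path) => fileExists(path))
}
```

Determined attackers can hide these artifacts, so treat this as one signal among several rather than a definitive verdict.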

    Extra Security Layers

    Android Version Requirements

    Upgrading to newer Android versions provides additional security:

    • Better detection of modified devices
    • Stronger app protection
    • More restricted file access

    Simple explanation: It’s like upgrading your home’s security system to a more advanced model that can detect intruders more effectively.

    Additional Protection Methods

    • Data encryption
    • Secure internet communication
    • Location verification
    • Encrypted local storage

    Think of these as multiple locks on your digital front door, each providing an extra layer of protection.

    Real-World Implementation Challenges

    Why is This Important?

    Every time a fraudster successfully tricks the system:

    • The app loses money
    • Genuine users get frustrated
    • Trust in the platform decreases

    Business impact: Imagine running a loyalty program where some people find ways to get 10 times more rewards than others. Not fair, right?

    Practical Tips for App Developers

    • Always stay updated with the latest security trends
    • Regularly audit your app’s security
    • Use multiple protection layers
    • Be proactive, not reactive
    • Learn from each attempted fraud

    Common Misconceptions About App Security

    Myth: “My small app doesn’t need advanced security.” Reality: Every app, regardless of size, can be a target.

    Myth: “Security is a one-time setup.” Reality: Security is an ongoing process of learning and adapting.

    Learning from Real Experiences

    These examples come from actual developers at Velotio Technologies, who faced these challenges head-on. Their approach wasn’t about creating an unbreakable system but about making fraud increasingly difficult and expensive.

    The Human Side of Technology

    Behind every security feature is a human story:

    • Developers protecting user experiences
    • Companies maintaining trust
    • Users expecting fair treatment

    Looking to the Future

    Technology will continue evolving, and so, too, will fraud techniques. The key is to:

    • Stay curious
    • Keep learning
    • Never assume you know everything

    Final Thoughts: Your App, Your Responsibility

    Protecting your mobile app isn’t just about implementing complex technical solutions; it’s about a holistic approach that encompasses understanding user behavior, creating fair experiences, and building trust. Here’s a deeper look into these critical aspects:

    Understanding User Behavior:

    Understanding how users interact with your app is crucial. By analyzing user behavior, you can identify patterns that may indicate fraudulent activity. For instance, if a user suddenly starts claiming rewards at an unusually high rate, it could signal potential abuse.
    Utilize analytics tools to gather data on user interactions. This data can help you refine your app’s design and functionality, ensuring it meets genuine user needs while also being resilient against misuse.
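
For instance, a very simple rate check could flag accounts whose claims in a sliding window exceed a threshold. The threshold and window below are made-up numbers for illustration; real values should come from analyzing what genuine users actually do:

```typescript
// Flags a user whose reward claims in the last hour exceed a threshold.
// The default threshold is illustrative, not a recommendation.
function isClaimRateSuspicious(
  claimTimestampsMs: number[],
  nowMs: number,
  maxClaimsPerHour = 5
): boolean {
  const oneHourAgo = nowMs - 60 * 60 * 1000
  const recentClaims = claimTimestampsMs.filter((t) => t >= oneHourAgo)
  return recentClaims.length > maxClaimsPerHour
}
```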

    Creating Fair Experiences:

    Clearly communicate your app’s rewards, account creation, and user behavior policies. Transparency helps users understand the rules and reduces the likelihood of attempts to game the system.
    Consider implementing a user agreement that outlines acceptable behavior and the consequences of fraudulent actions.

    Building Trust:

    Maintain open lines of communication with your users. Regular updates about security measures, app improvements, and user feedback can help build trust and loyalty.
    Use newsletters, social media, and in-app notifications to keep users informed about changes and enhancements.
    Provide responsive customer support to address user concerns promptly. If users feel heard and valued, they are less likely to engage in fraudulent behavior.

    Implement a robust support system that allows users to report suspicious activities easily and receive timely assistance.

    Remember: Every small protection measure counts.

    Call to Action

    Are you an app developer? Start reviewing your app’s security today. Don’t wait for a fraud incident to take action.

    Want to learn more?

    • Follow security blogs
    • Attend tech conferences
    • Connect with security experts
    • Never stop learning
  • React Native: Session Replay with Microsoft Clarity

    Microsoft recently launched session replay support for iOS, covering both native iOS and React Native applications. We decided to see how it performs compared to competitors like LogRocket and UXCam.

    This blog discusses what session replay is, how it works, and its benefits for debugging applications and understanding user behavior. We will walk through the key features of session replay, integrate Microsoft Clarity into a React Native application, and benchmark its performance against popular competitors like LogRocket and UXCam.

    Key Features of Session Replay

    Session replay provides a visual playback of user interactions on your application. This allows developers to observe how users navigate the app, identify any issues they encounter, and understand user behavior patterns. Here are some of the standout features:

    • User Interaction Tracking: Record clicks, scrolls, and navigation paths for a comprehensive view of user activities.
    • Error Monitoring: Capture and analyze errors in real time to quickly diagnose and fix issues.
    • Heatmaps: Visualize areas of high interaction to understand which parts of the app are most engaging.
    • Anonymized Data: Ensure user privacy by anonymizing sensitive information during session recording.

    Integrating Microsoft Clarity with React Native

    Integrating Microsoft Clarity into your React Native application is a straightforward process. Follow these steps to get started:

    1. Sign Up for Microsoft Clarity:

    a. Visit the Microsoft Clarity website and sign up for a free account.

    b. Create a new project and obtain your Clarity tracking code.

    2. Install the Clarity SDK:

    Use npm or yarn to install the Clarity SDK in your React Native project:

    npm install @microsoft/react-native-clarity
    yarn add @microsoft/react-native-clarity

    3. Initialize Clarity in Your App:

    Import and initialize Clarity in your main application file (e.g., App.js):

    import { initialize } from '@microsoft/react-native-clarity';

    initialize('YOUR_CLARITY_PROJECT_ID');

    4. Verify Integration:

    a. Run your application and navigate through various screens to ensure Clarity is capturing session data correctly.

    b. Log into your Clarity dashboard to see the recorded sessions and analytics.

    Benchmarking Against Competitors

    To evaluate the performance of Microsoft Clarity, we’ll compare it against two popular session replay tools, LogRocket and UXCam, assessing them based on the following criteria:

    • Ease of Integration: How simple is integrating the tool into a React Native application?
    • Feature Set: What features does each tool offer for session replay and user behavior analysis?
    • Performance Impact: How does the tool impact the app’s performance and user experience?
    • Cost: What are the pricing models and how do they compare?

    Detailed Comparison

    Ease of Integration

    • Microsoft Clarity: The integration process is straightforward and well-documented, making it easy for developers to get started.
    • LogRocket: LogRocket also offers a simple integration process with comprehensive documentation and support.
    • UXCam: UXCam provides detailed guides and support for integration, but it may require additional configuration steps compared to Clarity and LogRocket.

    Feature Set

    • Microsoft Clarity: Offers robust session replay, heatmaps, and error monitoring. However, it may lack some advanced features found in premium tools.
    • LogRocket: Provides a rich set of features, including session replay, performance monitoring, network request logs, and integration with other tools like Redux and GraphQL.
    • UXCam: Focuses on mobile app analytics with features like session replay, screen flow analysis, and retention tracking.

    Performance Impact

    • Microsoft Clarity: Minimal impact on app performance, making it a suitable choice for most applications.
    • LogRocket: Slightly heavier than Clarity but offers more advanced features. Performance impact is manageable with proper configuration.
    • UXCam: Designed for mobile apps with performance optimization in mind. The impact is generally low but can vary based on app complexity.

    Cost

    • Microsoft Clarity: Free to use, making it an excellent option for startups and small teams.
    • LogRocket: Offers tiered pricing plans, with a free tier for basic usage and paid plans for advanced features.
    • UXCam: Provides a range of pricing options, including a free tier. Paid plans offer more advanced features and higher data limits.

    Final Verdict

    After evaluating the key aspects of session replay tools, Microsoft Clarity stands out as a strong contender, especially for teams looking for a cost-effective solution with essential features. LogRocket and UXCam offer more advanced capabilities, which may be beneficial for larger teams or more complex applications.

    Ultimately, the right tool will depend on your specific needs and budget. For basic session replay and user behavior insights, Microsoft Clarity is a fantastic choice. If you require more comprehensive analytics and integrations, LogRocket or UXCam may be worth the investment.

    Sample App

    I have also created a basic sample app to demonstrate how to set up Microsoft Clarity for React Native apps.

    Please check it out here: https://github.com/rakesho-vel/ms-rn-clarity-sample-app


    References

    1. https://clarity.microsoft.com/blog/clarity-sdk-release/
    2. https://web.swipeinsight.app/posts/microsoft-clarity-finally-launches-ios-sdk-8312

  • Unlocking Seamless Communication: BLE Integration with React Native for Device Connectivity

    In today’s interconnected world, where smart devices have become an integral part of our daily lives, the ability to communicate with Bluetooth Low Energy (BLE) enabled devices opens up a myriad of possibilities for innovative applications. In this blog, we will explore the exciting realm of communicating with BLE-enabled devices using React Native, a popular cross-platform framework for mobile app development. Whether you’re a seasoned React Native developer or just starting your journey, this blog will equip you with the knowledge and skills to establish seamless communication with BLE devices, enabling you to create powerful and engaging user experiences. So, let’s dive in and unlock the potential of BLE communication in the world of React Native!

    BLE (Bluetooth Low Energy)

    Bluetooth Low Energy (BLE) is a wireless communication technology designed for low-power consumption and short-range connectivity. It allows devices to exchange data and communicate efficiently while consuming minimal energy. BLE has gained popularity in various industries, from healthcare and fitness to home automation and IoT applications. It enables seamless connectivity between devices, allowing for the development of innovative solutions. With its low energy requirements, BLE is ideal for battery-powered devices like wearables and sensors. It offers simplified pairing, efficient data transfer, and supports various profiles for specific use cases. BLE has revolutionized the way devices interact, enabling a wide range of connected experiences in our daily lives.

    Here is a comprehensive overview of how mobile applications establish connections and facilitate communication with BLE devices.

    What will we be using?

    react-native - 0.71.6
    react - 18.0.2
    react-native-ble-manager - 10.0.2

    Note: We are assuming you already have the React Native development environment set up on your system; if not, please refer to the React Native guide for instructions on setting up the RN development environment.

    What are we building?

    Together, we will construct a sample mobile application that showcases the integration of Bluetooth Low Energy (BLE) technology. This app will search for nearby BLE devices, establish connections with them, and facilitate seamless message exchanges between the mobile application and the chosen BLE device. By embarking on this project, you will gain practical experience in building an application that leverages BLE capabilities for effective communication. Let’s commence this exciting journey of mobile app development and BLE connectivity!

    Setup

    Before setting up the react-native-ble manager, let’s start by creating a React Native application using the React Native CLI. Follow these steps:

    Step 1: Ensure that you have Node.js and npm (Node Package Manager) installed on your system.

    Step 2: Open your command prompt or terminal and navigate to the directory where you want to create your React Native project.

    Step 3: Run the following command to create a new React Native project:

    npx react-native@latest init RnBleManager

    Step 4: Wait for the project setup to complete. This might take a few minutes as it downloads the necessary dependencies.

    Step 5: Once the setup is finished, navigate into the project directory:

    cd RnBleManager

    Step 6: Congratulations! You have successfully created a new React Native application using the React Native CLI.

    Now you are ready to set up the react-native-ble manager and integrate it into your React Native project.

    Installing react-native-ble-manager

    If you use NPM -
    npm i --save react-native-ble-manager
    
    With Yarn -
    yarn add react-native-ble-manager

    In order to enable Android applications to utilize Bluetooth and location services for detecting and communicating with BLE devices, it is essential to incorporate the necessary permissions within the Android platform.

    Add these permissions in the AndroidManifest.xml file in android/app/src/main/AndroidManifest.xml
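
The exact set depends on your target SDK, but for react-native-ble-manager a typical manifest includes the entries below: legacy Bluetooth permissions for Android 11 (API 30) and below, the newer runtime Bluetooth permissions for Android 12 (API 31) and above, and location, which Android requires for BLE scanning:

```xml
<!-- Legacy Bluetooth permissions (Android 11 / API 30 and below) -->
<uses-permission android:name="android.permission.BLUETOOTH" android:maxSdkVersion="30" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" android:maxSdkVersion="30" />

<!-- Runtime Bluetooth permissions (Android 12 / API 31 and above) -->
<uses-permission android:name="android.permission.BLUETOOTH_SCAN" />
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
<uses-permission android:name="android.permission.BLUETOOTH_ADVERTISE" />

<!-- Location is required by Android for BLE scanning -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
```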

    Integration

    At this stage, having successfully created a new React Native application, installed the react-native-ble-manager, and configured it to function seamlessly on Android, it’s time to proceed with integrating the react-native-ble-manager into your React Native application. Let’s dive into the integration process to harness the power of BLE functionality within your app.

    BleConnectionManager

    To ensure that our application can access the BLE connection state and facilitate communication with the BLE device, we will implement BLE connection management in the global state. This will allow us to make the connection management accessible throughout the entire codebase. To achieve this, we will create a ContextProvider called “BleConnectionContextProvider.” By encapsulating the BLE connection logic within this provider, we can easily share and access the connection state and related functions across different components within the application. This approach will enhance the efficiency and effectiveness of managing BLE connections. Let’s proceed with implementing the BleConnectionContextProvider to empower our application with seamless BLE communication capabilities.

    This context provider will possess the capability to access and manage the current BLE state, providing a centralized hub for interacting with the BLE device. It will serve as the gateway to establish connections, send and receive data, and handle various BLE-related functionalities. By encapsulating the BLE logic within this context provider, we can ensure that all components within the application have access to the BLE device and the ability to communicate with it. This approach simplifies the integration process and facilitates efficient management of the BLE connection and communication throughout the entire application.

    Let’s proceed with creating a context provider equipped with essential state management functionalities. This context provider will effectively handle the connection and scanning states, maintain the BLE object, and manage the list of peripherals (BLE devices) discovered during the application’s scanning process. By implementing this context provider, we will establish a robust foundation for seamlessly managing BLE connectivity and communication within the application.

    NOTE: Although not essential for the example at hand, implementing global management of the BLE connection state allows us to demonstrate its universal management capabilities.

    ....
    BleManager.disconnect(peripheral.id) // disconnect takes the peripheral id, not a service id
      .then(() => {
        dispatch({ type: "disconnected", payload: { peripheral } })
      })
      .catch((error) => {
        // Failure code
        console.log(error);
      });
    ....

    Prior to integrating the BLE-related components, it is crucial to ensure that the mobile app verifies whether the:

    1. Location permissions are granted and enabled
    2. Mobile device’s Bluetooth is enabled

    To accomplish this, we will implement a small method called requestPermissions that grants all the necessary permissions to the user. We will then call this method as soon as our context provider initializes within the useEffect hook in the BleConnectionContextProvider. Doing so ensures that the required permissions are obtained by the mobile app before proceeding with the integration of BLE functionalities.

    import {PermissionsAndroid, Platform} from "react-native"
    import BleManager from "react-native-ble-manager"
    
      const requestBlePermissions = async (): Promise<boolean> => {
        if (Platform.OS !== "android" || Platform.Version < 23) {
          // iOS declares BLE usage in Info.plist; pre-M Android grants permissions at install time
          return true
        }
        try {
          const status = await PermissionsAndroid.requestMultiple([
            PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION,
            PermissionsAndroid.PERMISSIONS.BLUETOOTH_CONNECT,
            PermissionsAndroid.PERMISSIONS.BLUETOOTH_SCAN,
            PermissionsAndroid.PERMISSIONS.BLUETOOTH_ADVERTISE,
          ])
          return (
            status[PermissionsAndroid.PERMISSIONS.BLUETOOTH_CONNECT] == "granted" &&
            status[PermissionsAndroid.PERMISSIONS.BLUETOOTH_SCAN] == "granted" &&
            status[PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION] == "granted"
          )
        } catch (e) {
          console.error("Location Permissions Denied ", e)
          return false
        }
      }
    
    // effects
    useEffect(() => {
      const initBle = async () => {
        await requestBlePermissions()
        BleManager.enableBluetooth()
      }
      
      initBle()
    }, [])

    After granting all the required permissions and enabling Bluetooth, the next step is to start the BleManager. To accomplish this, please add the following line of code after the enableBluetooth call in the aforementioned useEffect:

    // initialize BLE module
    BleManager.start({ showAlert: false })

    By including this code snippet, the BleManager will be initialized, facilitating the smooth integration of BLE functionality within your application.

    Now that we have obtained the necessary permissions, enabled Bluetooth, and initiated the Bluetooth manager, we can proceed with implementing the functionality to scan and detect BLE peripherals. 

    We will now incorporate the code that enables scanning for BLE peripherals. This will allow us to discover and identify nearby BLE devices. Let’s dive into the implementation of this crucial step in our application’s BLE integration process.

    To facilitate scanning and stopping the scanning process for BLE devices, as well as handle various events related to the discovered peripherals, scan stop, and BLE disconnection, we will create a method along with the necessary event listeners.

    In addition, state management is essential to effectively handle the connection and scanning states, as well as maintain the list of scanned devices. To accomplish this, let’s incorporate the following code into the BleConnectionContextProvider. This will ensure seamless management of the aforementioned states and facilitate efficient tracking of scanned devices.

    Let’s proceed with implementing these functionalities to ensure smooth scanning and handling of BLE devices within our application.

    export const BLE_NAME = "SAMPLE_BLE"
    export const BLE_SERVICE_ID = "5476534d-1213-1212-1212-454e544f1212"
    export const BLE_READ_CHAR_ID = "00105354-0000-1000-8000-00805f9b34fb"
    export const BLE_WRITE_CHAR_ID = "00105352-0000-1000-8000-00805f9b34fb"
    
    export const BleContextProvider = ({
      children,
    }: {
      children: React.ReactNode
    }) => {
      // variables
      const BleManagerModule = NativeModules.BleManager
      const bleEmitter = new NativeEventEmitter(BleManagerModule)
      const { setConnectedDevice } = useBleStore()
    
      // State management
      const [state, dispatch] = React.useReducer(
        (prevState: BleState, action: any) => {
          switch (action.type) {
            case "scanning":
              return {
                ...prevState,
                isScanning: action.payload,
              }
            case "connected":
              return {
                ...prevState,
                connectedBle: action.payload.peripheral,
                isConnected: true,
              }
            case "disconnected":
              return {
                ...prevState,
                connectedBle: undefined,
                isConnected: false,
              }
            case "clearPeripherals":
              // hand back a fresh Map instead of mutating the previous state
              return {
                ...prevState,
                peripherals: new Map(),
              }
            case "addPerpheral": {
              // copy the Map so the state update produces a new reference
              const peripherals = new Map(prevState.peripherals)
              peripherals.set(action.payload.id, action.payload.peripheral)
              return {
                ...prevState,
                peripherals: peripherals,
              }
            }
            default:
              return prevState
          }
        },
        initialState
      )
    
      // methods
      const getPeripheralName = (item: any) => {
        if (item.advertising) {
          if (item.advertising.localName) {
            return item.advertising.localName
          }
        }
    
        return item.name
      }
    
      // start to scan peripherals
      const startScan = () => {
        // skip if scan process is currently happening
        console.log("Start scanning ", state.isScanning)
        if (state.isScanning) {
          return
        }
    
        dispatch({ type: "clearPeripherals" })
    
        // then re-scan it
        BleManager.scan([], 10, false)
          .then(() => {
            console.log("Scanning...")
            dispatch({ type: "scanning", payload: true })
          })
          .catch((err) => {
            console.error(err)
          })
      }
    
      const connectBle = (peripheral: any, callback?: (name: string) => void) => {
        if (peripheral && peripheral.name && peripheral.name == BLE_NAME) {
          BleManager.connect(peripheral.id)
            .then((resp) => {
              dispatch({ type: "connected", payload: { peripheral } })
              // callback from the caller
              callback && callback(peripheral.name)
              setConnectedDevice(peripheral)
            })
            .catch((err) => {
              console.log("failed connecting to the device", err)
            })
        }
      }
    
      // handle discovered peripheral
      const handleDiscoverPeripheral = (peripheral: any) => {
        console.log("Got ble peripheral", getPeripheralName(peripheral))
    
        if (peripheral.name && peripheral.name == BLE_NAME) {
          dispatch({
            type: "addPerpheral",
            payload: { id: peripheral.id, peripheral },
          })
        }
      }
    
      // handle stop scan event
      const handleStopScan = () => {
        console.log("Scan is stopped")
        dispatch({ type: "scanning", payload: false })
      }
    
      // handle disconnected peripheral
      const handleDisconnectedPeripheral = (data: any) => {
        console.log("Disconnected from " + data.peripheral)
    
        //
        dispatch({ type: "disconnected" })
      }
    
      const handleUpdateValueForCharacteristic = (data: any) => {
        console.log(
          "Received data from: " + data.peripheral,
          "Characteristic: " + data.characteristic,
          "Data: " + toStringFromBytes(data.value)
        )
      }
    
      // effects
      useEffect(() => {
        const initBle = async () => {
          await requestBlePermissions()
          BleManager.enableBluetooth()
        }
    
        initBle()
    
        // add ble listeners on mount
        const BleManagerDiscoverPeripheral = bleEmitter.addListener(
          "BleManagerDiscoverPeripheral",
          handleDiscoverPeripheral
        )
        const BleManagerStopScan = bleEmitter.addListener(
          "BleManagerStopScan",
          handleStopScan
        )
        const BleManagerDisconnectPeripheral = bleEmitter.addListener(
          "BleManagerDisconnectPeripheral",
          handleDisconnectedPeripheral
        )
        const BleManagerDidUpdateValueForCharacteristic = bleEmitter.addListener(
          "BleManagerDidUpdateValueForCharacteristic",
          handleUpdateValueForCharacteristic
        )

        // remove listeners on unmount to avoid leaks and duplicate handlers
        return () => {
          BleManagerDiscoverPeripheral.remove()
          BleManagerStopScan.remove()
          BleManagerDisconnectPeripheral.remove()
          BleManagerDidUpdateValueForCharacteristic.remove()
        }
      }, [])
    
    // render
      return (
        <BleContext.Provider
          value={{
            ...state,
            startScan: startScan,
            connectBle: connectBle,
          }}
        >
          {children}
        </BleContext.Provider>
      )
    }
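
The handleUpdateValueForCharacteristic handler above calls a toStringFromBytes helper that is not shown. A minimal version simply decodes the received byte array as text; this sketch assumes single-byte characters, so UTF-8 payloads would need a proper decoder such as TextDecoder instead:

```typescript
// Convert an array of byte values (as delivered by the
// BleManagerDidUpdateValueForCharacteristic event) into a string.
// Assumes single-byte characters only.
const toStringFromBytes = (bytes: number[]): string =>
  bytes.map((byte) => String.fromCharCode(byte)).join("")
```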

    NOTE: It is important to note the properties of the BLE device we intend to search for and connect to, namely BLE_NAME, BLE_SERVICE_ID, BLE_READ_CHAR_ID, and BLE_WRITE_CHAR_ID. Familiarizing yourself with these properties beforehand is crucial, as they enable you to restrict the search to specific BLE devices and facilitate connection to the desired BLE service and characteristics for reading and writing data. Being aware of these properties will greatly assist you in effectively working with BLE functionality.

    For instance, take a look at the handleDiscoverPeripheral method. In this method, we filter the discovered peripherals based on their device name, matching it with the predefined BLE_NAME we mentioned earlier. As a result, this approach allows us to obtain a list of devices that specifically match the given name, narrowing down the search to the desired devices only. 

    Additionally, you have the option to scan peripherals using the service IDs of the Bluetooth devices. This means you can specify specific service IDs to filter the discovered peripherals during the scanning process. By doing so, you can focus the scanning on Bluetooth devices that provide the desired services, enabling more targeted and efficient scanning operations.
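
For example, passing `[BLE_SERVICE_ID]` as the first argument to `BleManager.scan` should restrict discovery to devices advertising that service. The same filtering idea, expressed as a standalone helper over already-discovered peripherals (the peripheral shape here is simplified for illustration; react-native-ble-manager's advertising data is richer):

```typescript
// Simplified peripheral shape; serviceUUIDs is the field that matters here.
interface DiscoveredPeripheral {
  id: string
  advertising: { serviceUUIDs?: string[] }
}

// Keep only peripherals that advertise one of the wanted service UUIDs.
// UUIDs are compared case-insensitively, as platforms report them differently.
function filterByService(
  peripherals: DiscoveredPeripheral[],
  wantedServiceIds: string[]
): DiscoveredPeripheral[] {
  const wanted = new Set(wantedServiceIds.map((s) => s.toLowerCase()))
  return peripherals.filter((p) =>
    (p.advertising.serviceUUIDs ?? []).some((id) => wanted.has(id.toLowerCase()))
  )
}
```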

    Excellent! We now have all the necessary components in place for scanning and connecting to the desired BLE device. Let’s proceed by adding the user interface (UI) elements that will allow users to initiate the scan, display the list of scanned devices, and enable connection to the selected device. By implementing these UI components, we will create a seamless user experience for scanning, device listing, and connection within our application.

    Discovering and Establishing Connections with BLE Devices

    Let’s create a new UI component/Page that will handle scanning, listing, and connecting to the BLE device. This page will have:

    • A Scan button to call the scan function
    • A simple FlatList to list the selected BLE devices and
    • A method to connect to the selected BLE device when the user clicks on any BLE item row from the list

    Create HomeScreen.tsx in the src folder and add the following code: 

    import React, {useCallback, useEffect, useMemo} from 'react';
    import {
      ActivityIndicator,
      Alert,
      Button,
      FlatList,
      StyleSheet,
      Text,
      TouchableOpacity,
      View,
    } from 'react-native';
    import {useBleContext} from './BleContextProvider';
    
    interface HomeScreenProps {}
    
    const HomeScreen: React.FC<HomeScreenProps> = () => {
      const {
        isConnected,
        isScanning,
        peripherals,
        connectedBle,
        startScan,
        connectBle,
      } = useBleContext();
    
      // Derived list of connected + scanned devices
      const scannedbleList = useMemo(() => {
        const list = [];
        if (connectedBle) list.push(connectedBle);
        if (peripherals) list.push(...Array.from(peripherals.values()));
        return list;
      }, [peripherals, connectedBle, isScanning]);
    
      useEffect(() => {
        if (!isConnected) {
          startScan && startScan();
        }
      }, []);
    
      // Methods
      const getRssi = (rssi: number) => {
        return !!rssi
          ? Math.pow(10, (-69 - rssi) / (10 * 2)).toFixed(2) + ' m'
          : 'N/A';
      };
    
      const onBleConnected = (name: string) => {
        Alert.alert('Device connected', `Connected to ${name}.`, [
          {
            text: 'Ok',
            onPress: () => {},
            style: 'default',
          },
        ]);
      };
      const BleListItem = useCallback((item: any) => {
        // render a row showing the device name and estimated distance
        return (
          <TouchableOpacity
            style={{
              flex: 1,
              flexDirection: 'row',
              justifyContent: 'space-between',
              padding: 16,
              backgroundColor: '#2A2A2A',
            }}
            onPress={() => {
              connectBle && connectBle(item.item, onBleConnected);
            }}>
            <Text style={{textAlign: 'left', marginRight: 8, color: 'white'}}>
              {item.item.name}
            </Text>
            <Text style={{textAlign: 'right'}}>{getRssi(item.item.rssi)}</Text>
          </TouchableOpacity>
        );
      }, []);
    
      const ItemSeparator = useCallback(() => {
        return <View style={styles.divider} />;
      }, []);
    
      // render
      // Ble List and scan button
      return (
        <View style={styles.container}>
          {/* Loader when app is scanning */}
          {isScanning ? (
            <ActivityIndicator size={'small'} />
          ) : (
            <>
              {/* Ble devices List View */}
              {scannedbleList && scannedbleList.length > 0 ? (
                <>
                  <Text style={styles.listHeader}>Discovered BLE Devices</Text>
                  <FlatList
                    data={scannedbleList}
                    renderItem={({item}) => <BleListItem item={item} />}
                    ItemSeparatorComponent={ItemSeparator}
                  />
                </>
              ) : (
                <View style={styles.emptyList}>
                  <Text style={styles.emptyListText}>
                No Bluetooth devices discovered. Please tap Scan to search
                for BLE devices.
                  </Text>
                </View>
              )}
    
              {/* Scan button */}
              <View style={styles.btnContainer}>
                <Button
                  title="Scan"
                  color={'black'}
                  disabled={isConnected || isScanning}
                  onPress={() => {
                    startScan && startScan();
                  }}
                />
              </View>
            </>
          )}
        </View>
      );
    };
    
    const styles = StyleSheet.create({
      container: {
        flex: 1,
        flexDirection: 'column',
      },
      listHeader: {
        padding: 8,
        color: 'black',
      },
      emptyList: {
        flex: 1,
        justifyContent: 'center',
        alignItems: 'center',
      },
      emptyListText: {
        padding: 8,
        textAlign: 'center',
        color: 'black',
      },
      btnContainer: {
        marginTop: 10,
        marginHorizontal: 16,
        bottom: 10,
        alignItems: 'flex-end',
      },
      divider: {
        height: 1,
        width: '100%',
        marginHorizontal: 8,
        backgroundColor: '#1A1A1A',
      },
    });
    
    export default HomeScreen;
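    A quick aside on the getRssi helper in HomeScreen: it estimates distance from signal strength using the log-distance path-loss model, distance ≈ 10^((measuredPower − rssi) / (10 · n)). The constants are assumptions: −69 dBm is a typical measured power at 1 m (the real value varies per device), and n = 2 is the attenuation factor for free space. A standalone sketch:

```typescript
// Log-distance path-loss estimate, matching the formula in getRssi:
//   distance ≈ 10 ^ ((measuredPower - rssi) / (10 * n))
// measuredPower: expected RSSI at 1 m (-69 dBm here, an assumption
// that differs per device); n: environmental attenuation factor
// (~2 in free space). The result is a rough estimate, not a measurement.
function estimateDistanceMeters(
  rssi: number,
  measuredPower = -69,
  n = 2,
): number {
  return Math.pow(10, (measuredPower - rssi) / (10 * n));
}
```

    For example, estimateDistanceMeters(-69) returns 1 (the assumed 1 m reference), and estimateDistanceMeters(-89) returns 10. In practice walls, bodies, and device orientation skew RSSI heavily, so treat the output as an order-of-magnitude hint.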

    Now, open App.tsx and replace the complete file with the following code. We removed the default boilerplate that the React Native CLI generated when creating the project and added BleContextProvider and HomeScreen to the app:

    import React from 'react';
    import {SafeAreaView, StatusBar, useColorScheme, View} from 'react-native';
    
    import {Colors} from 'react-native/Libraries/NewAppScreen';
    import {BleContextProvider} from './BleContextProvider';
    import HomeScreen from './HomeScreen';
    
    function App(): JSX.Element {
      const isDarkMode = useColorScheme() === 'dark';
    
      const backgroundStyle = {
        backgroundColor: isDarkMode ? Colors.darker : Colors.lighter,
      };
    
      return (
        <SafeAreaView style={backgroundStyle}>
          <StatusBar
            barStyle={isDarkMode ? 'light-content' : 'dark-content'}
            backgroundColor={backgroundStyle.backgroundColor}
          />
          <BleContextProvider>
            <View style={{height: '100%', width: '100%'}}>
              <HomeScreen />
            </View>
          </BleContextProvider>
        </SafeAreaView>
      );
    }
    
    export default App;

    Running the application on an Android device: Upon launching the app, you will be presented with an empty list message accompanied by a scan button. Simply tap the scan button to retrieve a list of available BLE peripherals within the range of your mobile device. By selecting a specific BLE device from the list, you can establish a connection with it.

    Awesome! We can now scan for, detect, and connect to BLE devices, but there is more to it than just connecting. We can write to and read information from BLE devices, and based on that information, mobile applications or backend services can perform several other operations.

    For example, if you are wearing a connected BLE device that monitors your blood pressure every hour, and a reading goes beyond the threshold, it can trigger a call to a doctor or family member so precautionary measures can be taken as soon as possible.

    Communicating with BLE devices

    For seamless communication with a BLE device, the mobile app must possess precise knowledge of the services and characteristics associated with the device. A BLE device typically presents multiple services, each comprising various distinct characteristics. These services and characteristics can be collaboratively defined and shared by the team responsible for manufacturing the BLE device.

    In BLE communication, understanding the characteristics and their properties is crucial, as they serve distinct purposes: some characteristics allow writing data to the BLE device, while others allow reading data from it. A solid grasp of these properties is vital for interacting with the device effectively.
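    To make this concrete, the service/characteristic structure can be modeled with a couple of illustrative types. These names are my own, not the library's API, but the shape is close to what retrieveServices returns:

```typescript
// Illustrative model of a GATT characteristic and its properties.
type CharProperty = 'Read' | 'Write' | 'WriteWithoutResponse' | 'Notify';

interface Characteristic {
  service: string;        // UUID of the service that owns it
  characteristic: string; // UUID of the characteristic itself
  properties: CharProperty[];
}

// Pick the characteristics that support a given operation.
function withProperty(
  chars: Characteristic[],
  prop: CharProperty,
): Characteristic[] {
  return chars.filter(c => c.properties.includes(prop));
}
```

    For example, withProperty(chars, 'Notify') gives you the candidates for startNotification, while 'Write' or 'WriteWithoutResponse' identifies where the app may send data.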

    Reading data from BLE device when BLE sends data

    Once the mobile app successfully connects to the BLE device, it retrieves the available services and activates a listener to begin receiving notifications from the device. This takes place within the callback of the connectBle method, so the app seamlessly retrieves the necessary information and starts listening for important updates from the connected BLE device.

    const connectBle = (peripheral: any, callback?: (name: string) => void) => {
    if (peripheral && peripheral.name && peripheral.name === BLE_NAME) {
          BleManager.connect(peripheral.id)
            .then((resp) => {
              dispatch({ type: "connected", payload: { peripheral } })
              // callback from the caller
              callback && callback(peripheral.name)
              setConnectedDevice(peripheral)
    
              // retrieve services and start read notification
              BleManager.retrieveServices(peripheral.id).then((resp) => {
                BleManager.startNotification(
                  peripheral.id,
                  BLE_SERVICE_ID,
                  BLE_READ_CHAR_ID
                )
                  .then(console.log)
                  .catch(console.error)
              })
            })
            .catch((err) => {
              console.log("failed connecting to the device", err)
            })
        }
      }

    Consequently, the application will promptly receive notifications whenever the BLE device writes data to the designated characteristic within the specified service.
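    The handleUpdateValueForCharacteristic listener registered earlier receives each notification as an event carrying the peripheral id, the characteristic UUID, and the value as an array of bytes. A hedged sketch of such a handler (the event shape is simplified, and the ASCII decoding is an assumption about the payload):

```typescript
// Simplified shape of the BleManagerDidUpdateValueForCharacteristic
// event: the value arrives as an array of byte values.
interface UpdateValueEvent {
  peripheral: string;
  characteristic: string;
  value: number[];
}

// Hypothetical handler: react only to our read characteristic and
// decode the payload as single-byte text. Returns undefined for
// notifications from characteristics we don't care about.
function handleUpdateValue(
  event: UpdateValueEvent,
  readCharId: string,
): string | undefined {
  if (event.characteristic.toLowerCase() !== readCharId.toLowerCase()) {
    return undefined;
  }
  return String.fromCharCode(...event.value);
}
```

    Comparing UUIDs case-insensitively is deliberate: Android and iOS report characteristic UUIDs with different casing.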

    Reading and writing data to BLE from a mobile device

    To establish communication between the mobile app and the BLE device, we will implement new methods within BleContextProvider. These methods will facilitate the reading and writing of data to the BLE device. By exposing these methods in BleContextProvider’s reducer, we ensure that the app has a reliable means of interacting with the BLE device and can seamlessly exchange information as required.

    interface BleState {
      isConnected: boolean
      isScanning: boolean
      peripherals: Map<string, any>
      list: Array<any>
      connectedBle: Peripheral | undefined
      startScan?: () => void
      connectBle?: (peripheral: any, callback?: (name: string) => void) => void
      readFromBle?: (id: string) => void
      writeToble?: (
        id: string,
        content: string,
        count: number,
        buttonNumber: ButtonNumber,
        callback?: (count: number, buttonNumber: ButtonNumber) => void
      ) => void
    }
    
    export const BleContextProvider = ({
      children,
    }: {
      children: React.ReactNode
    }) => {
        ....
        
        const writeToBle = (
        id: string,
        content: string,
        count: number,
        buttonNumber: ButtonNumber,
        callback?: (count: number, buttonNumber: ButtonNumber) => void
      ) => {
        BleManager.retrieveServices(id).then((response) => {
          BleManager.writeWithoutResponse(
            id,
            BLE_SERVICE_ID,
            BLE_WRITE_CHAR_ID,
            toByteArray(content)
          )
            .then((res) => {
              callback && callback(count, buttonNumber)
            })
            .catch((res) => console.log("Error writing to BLE device - ", res))
        })
      }
    
      const readFromBle = (id: string) => {
        BleManager.retrieveServices(id).then((response) => {
          BleManager.read(id, BLE_SERVICE_ID, BLE_READ_CHAR_ID)
            .then((resp) => {
              console.log("Read from BLE", toStringFromBytes(resp))
            })
            .catch((err) => {
              console.error("Error Reading from BLE", err)
            })
        })
      }
      ....
    
      // render
      return (
        <BleContext.Provider
          value={{
            ...state,
            startScan: startScan,
            connectBle: connectBle,
            writeToble: writeToBle,
            readFromBle: readFromBle,
          }}
        >
          {children}
        </BleContext.Provider>
      )    
    }

    NOTE: You need to call the retrieveServices method before every write, read, or startNotification call.
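    One detail the snippets above gloss over: toByteArray and toStringFromBytes are not part of the library and are not shown in the post. A minimal sketch, assuming the payload is plain single-byte text (your firmware may expect a different encoding):

```typescript
// Encode a string as the byte array that
// BleManager.writeWithoutResponse expects
// (assumes single-byte characters only).
function toByteArray(content: string): number[] {
  return Array.from(content, ch => ch.charCodeAt(0));
}

// Decode a byte array read from the BLE device back into a string.
function toStringFromBytes(bytes: number[]): string {
  return String.fromCharCode(...bytes);
}
```

    The two functions are inverses for ASCII payloads: toStringFromBytes(toByteArray('ping')) round-trips back to 'ping'. For multi-byte encodings such as UTF-8 you would need a proper encoder/decoder instead.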

    Disconnecting BLE connection

    Once you are done with the BLE services, you can disconnect from the device using the disconnect method provided by the library.

    ....
    BleManager.disconnect(peripheral.id)
      .then(() => {
        dispatch({ type: "disconnected", payload: { peripheral } })
      })
      .catch((error) => {
        // Failure code
        console.log(error);
      });
    ....

    Additionally, the React Native BLE Manager library offers various other methods that can enhance the application’s functionality. These include the createBond method, which facilitates the pairing of the BLE device with the mobile app, the stopNotification method, which ceases receiving notifications from the device, and the readRSSI method, which retrieves the received signal strength indicator (RSSI) of the device. For a more comprehensive understanding of the library and its capabilities, I recommend exploring further details on the React Native BLE Manager library documentation here: https://www.npmjs.com/package/react-native-ble-manager

    Conclusion

    We delved into the fascinating world of communicating with BLE (Bluetooth Low Energy) using the React Native BLE Manager library. Then we explored the power of BLE technology and how it can be seamlessly integrated into React Native applications to enable efficient and low-power communication between devices.

    Using the React Native BLE Manager library, we explored essential functionalities such as scanning for nearby BLE devices, establishing connections, discovering services and characteristics, and exchanging data. We also dived into more advanced features like managing connections and handling notifications for a seamless user experience.

    It’s important to remember that BLE technology is continually evolving, and there may be additional libraries and frameworks available for BLE communication in the React Native ecosystem. As you progress on your journey, I encourage you to explore other resources, keep up with the latest advancements, and stay connected with the vibrant community of developers working with BLE and React Native.

    I hope this blog post has inspired you to explore the immense potential of BLE communication in your React Native applications. By harnessing the power of BLE, you can create innovative, connected experiences that enhance the lives of your users and open doors to new possibilities.

    Thank you for taking the time to read through this blog!

  • How to setup iOS app with Apple developer account and TestFlight from scratch

    In this article, we will discuss how to set up the Apple developer account, build an app (create IPA files), configure TestFlight, and deploy it to TestFlight for the very first time.

    There are tons of articles explaining how to configure and build an app, how to set up TestFlight, or how to set up an application for ad hoc distribution. However, most of them are either outdated or missing steps and can be misleading for someone who is doing it for the very first time.

    If you haven’t done this before, don’t worry: follow every step of this article carefully, and you will be able to set up your iOS application end-to-end, ready for TestFlight or ad hoc distribution, within an hour.

    Prerequisites

    Before we start, please make sure you have:

    • A React Native Project created and opened in the XCode
    • XCode set up on your Mac
    • An Apple developer account with access to create Identifiers and Certificates, i.e., at least Developer or Admin access – https://developer.apple.com/account/ (if you don’t have one yet, please get it created first)
    • Access to App Store Connect with your Apple developer account – https://appstoreconnect.apple.com/

    The Setup contains 4 major steps: 

    • Creating Certificates, Identifiers, and Profiles from your Apple Developer account
    • Configuring the iOS app using these Identifiers, Certificates, and Profiles in XCode
    • Setting up TestFlight and Internal Testers group on App Store Connect
    • Generating iOS builds, signing them, and uploading them to TestFlight on App Store Connect

    Certificates, Identifiers, and Profiles

    Before we do anything, we need to create:

    • Bundle Identifier, which is an app bundle ID and a unique app identifier used by the App Store
    • A Certificate – to sign the iOS app before submitting it to the App Store
    • Provisioning Profile – for linking bundle ID and certificates together

    Bundle Identifiers

    For the App Store to recognize your app uniquely, we need to create a unique Bundle Identifier.

    Go to https://developer.apple.com/account: you will see the Certificates, Identifiers & Profiles tab. Click on Identifiers. 

    Click the Plus icon next to Identifiers:

    Select the App IDs option from the list of options and click Continue:

    Select App from app types and click Continue

    On the next page, you will need to enter the app ID and select any services your application requires (this is optional; you can enable them in the future when you actually implement them).

    Keep those unselected for now as we don’t need them for this setup.

    Once you have filled in all the information, click Continue and register your Bundle Identifier.

    Generating Certificate

    Certificates can be generated in 2 ways:

    • By automatically managing certificates from Xcode
    • By manually generating them

    We will generate them manually.

    To create a certificate, we need a Certificate Signing Request (CSR), which is generated using the Keychain Access app on your Mac.

    Creating Certificate Signing Request:

    Open the Keychain Access application and click on the Keychain Access menu item at the top left of the screen.

    Select Certificate Assistant -> Request a Certificate From a Certificate Authority

    Enter the required information like email address and name, then select the Save to Disk option.

    Click Continue and save the file somewhere you can easily find it again, so you can upload it to your Apple developer account.
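    As an aside, if you prefer the command line, an equivalent CSR can be generated with OpenSSL, since Apple accepts standard PKCS#10 requests. The subject fields below are placeholders; note that this route leaves the private key in a file, which you would have to import into Keychain Access yourself along with the resulting certificate, so the GUI flow above is usually simpler.

```shell
# Generate a 2048-bit private key and a CSR in one step.
# Replace the subject fields with your own details.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout ios_distribution.key \
  -out ios_distribution.csr \
  -subj "/emailAddress=dev@example.com/CN=Example Dev/C=US"

# Sanity-check the request before uploading it.
openssl req -in ios_distribution.csr -noout -verify
```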

    Now head back to the Apple developer account and click on Certificates. Click the + icon next to the Certificates title, and you will be taken to the new certificate form.

    Select the iOS Distribution (App Store and ad hoc) option. Here, you can select the required services this certificate will need from a list of options (for example, Apple Push Notification service). 

    As we don’t need any services, ignore it for now and click continue.

    On the next screen, upload the certificate signing request form we generated in the last step and click Continue.

    At this step, your certificate will be generated and will be available to download.

    NOTE: The certificate can be downloaded only once, so please download it and keep it in a secure location to use it in the future.

    Download your certificate and install it by double-clicking the downloaded certificate file. The certificate will be installed on your Mac and can be used for signing builds in the next steps.

    You can verify this by going back to the KeyChain Access app and seeing the newly installed certificate in the certificates list.

    Generating a Provisioning Profile

    Now link your identifier and certificate together by creating a provisioning profile.

    Let’s go back to the Apple developer account, select the profiles option, and select the + icon next to the Profiles title.

    You will be redirected to the new Profiles form page.

    Select Distribution Profile and click continue:

    Select the App ID we created in the first step and click Continue:

    Now, select the certificate we created in the previous step:

    Enter a Provisioning Profile name and click Generate:

    Once the profile is generated, it will be available to download. Please download it and keep it in the same location as the certificate for future use.

    Configure App in XCode

    Now, we need to configure our iOS application using the bundle ID and the Apple developer account we used for generating the certificate and profiles.

    Open the <appname>.xcworkspace file in XCode and click on the app name in the left panel. It will open the app configuration page.

    Select the app from targets, go to signing and capabilities, and enter the bundle identifier. 

    Now, to automatically manage the provisioning profile, we need to download the provisioning profile we generated recently. 

    For this, we need to sign into XCode using your Apple ID.

    Select Preferences from the top left XCode Menu option, go to Accounts, and click on the + icon at the bottom.

    Select Apple ID from the list of account types, click Continue, and enter your Apple ID.

    It will prompt you to enter the password as well.

    Once successfully logged in, XCode will fetch all the provisioning profiles associated with this account. Verify that you see your project in the Teams section of this account page.

    Now, go back to the XCode Signing Capabilities page, select Automatically Manage Signing, and then select the required team from the Team dropdown.

    At this point, your application will be able to generate archives to upload to TestFlight, or to sign ad hoc for distribution through other channels (Diawi, etc.).

    Setup TestFlight

    TestFlight and the App Store are managed through the App Store Connect portal.

    Open the App Store Connect portal and log in to the application.

    After you log in, please make sure you have selected the correct team from the top right corner (you can check the team name just below the user name).

    Select My Apps from the list of options. 

    If this is the first time you are setting up an application on this team, you will see the + (Add app) option at the center of the page, but if your team has already set up applications, you will see the + icon right next to Apps Header.

    Click on the + icon and select New App Option:

    Enter the complete app details, like platform (iOS, macOS, or tvOS), app name, bundle ID (the one we created), SKU, and access type, then click the Create button.

    You should now be able to see your newly created application on the Apps menu. Select the app and go to TestFlight. You will see no builds there as we did not push any yet.

    Generate and upload the build to TestFlight

    At this point, we are fully ready to generate a build from XCode and push it to TestFlight. To do this, head back to XCode.

    In the top middle section, you will see your app name and a right arrow. There might be an iPhone or other simulator selected; please click on the options list and select Any iOS Device.

    Select the Product menu from the Menu list and click on the Archive option.

    Once the archive succeeds, XCode will open the Organizer window (you can also open this window from the Window menu).

    Here, we sign our application archive (build) using the certificate we created and upload it to the App Store Connect TestFlight.

    In the Organizer window, you will see the recently generated build. Please select the build and click on the Distribute button in the right panel of the Organizer page.

    On the next page, select App Store Connect from the “Select a method of distribution” window and click Continue.

    NOTE: We are selecting the App Store Connect option as we want to upload a build to TestFlight, but if you want to distribute it privately using other channels, please select the Ad Hoc option.

    Select Upload from the “Select a Destination” options and click continue. This will prepare your build to submit it to App Store Connect TestFlight.

    The first time, it will ask you how you want to sign the build: automatically or manually.

    Please Select Automatically and click the Next button.

    XCode may ask you to authenticate your certificate using your system password. Please authenticate it and wait until XCode uploads the build to TestFlight.

    Once the build is uploaded successfully, XCode will prompt you with the Success modal.

    Now, your app is uploaded to TestFlight and is being processed. This processing typically takes 5 to 15 minutes, after which TestFlight makes the build available for testing.

    Add Internal Testers and other teammates to TestFlight

    Once we are done with all the setup and uploaded the build to TestFlight, we need to add internal testers to TestFlight.

    This is a 2-step process. First, you need to add a user to App Store Connect and then add a user to TestFlight.

    Go to Users and Access.

    Add a new user; App Store Connect sends an invitation to that user.

    Once the user accepts the invitation, go to TestFlight -> Internal Testing.

    In the Internal Testing section, create a new testing group if one doesn’t exist already, and add the user to the TestFlight testing group.

    Now, you should be able to configure the app, upload it to TestFlight, and add users to the TestFlight testing group.

    Hopefully, you enjoyed this article and it helped you set up your iOS application end-to-end quickly and without confusion.

    Thanks.

  • Flutter vs React Native: A Detailed Comparison

    Flutter and React Native are two of the most popular cross-platform development frameworks on the market. Both of these technologies enable you to develop applications for iOS and Android with a single codebase. However, they’re not entirely interchangeable.

    Flutter allows developers to create Material Design-like applications with ease. React Native, on the other hand, has an active community of open source contributors, which means that it can be easily modified to meet almost any standard.

    In this blog, we have compared both of these technologies based on popularity, performance, learning curve, community support, and developer mindshare to help you decide which one you can use for your next project.

    But before digging into the comparison, let’s have a brief look at both these technologies:

    ‍About React Native

    React Native has gained the attention of many developers for its ease of use with JavaScript code. Facebook developed the framework to solve cross-platform application development using React and introduced React Native at the first React.js Conf in 2015.

    React Native enables developers to create high-end mobile apps with the help of JavaScript. This eventually comes in handy for speeding up the process of developing mobile apps. The framework also makes use of the impressive features of JavaScript while maintaining excellent performance. React Native is highly feature-rich and allows you to create dynamic animations and gestures which are usually unavailable in the native platform.

    React Native has been adopted by many companies as their preferred technology. 

    For example:

    • Facebook
    • Instagram
    • Skype
    • Shopify
    • Tesla
    • Salesforce

    About Flutter

    Flutter is an open-source mobile development kit that makes it easy for developers to build high-quality applications for Android and iOS. It has a library with widgets to create the user interface of the application independent of the platform on which it is supported.

    Flutter has extended the reach of mobile app development by enabling developers to build apps on any platform without being restrained by mobile development limitations. The framework started as an internal project at Google back in 2015, with its first stable release in 2018.

    Since its inception, Google has aimed to provide a simple, usable programming language for building sophisticated apps, carrying forward Dart’s goal of replacing JavaScript as the next-generation web programming language.

    Let’s see which apps are built using Flutter:

    • Google Ads
    • eBay
    • Alibaba
    • BMW
    • Philips Hue

    React Native vs. Flutter – An overall comparison

    Design Capabilities

    React Native is based on React.js, one of the most popular JavaScript libraries for building user interfaces. It is often used with Redux, which provides a solid basis for creating predictable web applications.

    Flutter, on the other hand, is Google’s new mobile UI framework. Flutter uses Dart language to write code, compiled to native code for iOS and Android apps.

    Both React Native and Flutter can be used to create applications with beautiful graphics and smooth animations.

    React Native

    In the React Native framework, UI elements look native to both iOS and Android platforms. These UI elements make it easier for developers to build apps because they only have to write them once. In addition, many of these components also render natively on each platform. The user experiences an interface that feels more natural and seamless while maintaining the capability to customize the app’s look and feel.

    The framework allows developers to use JavaScript or a combination of HTML/CSS/JavaScript for cross-platform development. While React Native allows you to build native apps, it does not mean that your app will look the same on both iOS and Android.

    Flutter

    Flutter is a toolkit for creating high-performance, high-fidelity mobile apps for iOS and Android. It works with existing code, is used by developers and organizations worldwide, and is free and open source. By default, Flutter offers a standard, platform-neutral visual style.

    Flutter has its own widgets library, which includes Material Design Components and Cupertino. 

    The Material package contains widgets that look like they belong on Android devices. The Cupertino package contains widgets that look like they belong on iOS devices. By default, a Flutter application uses Material widgets. If you want to use Cupertino widgets, then import the Cupertino library and change your app’s theme to CupertinoTheme.

    Community

    Flutter and React Native have a very active community of developers. Both frameworks have extensive support and documentation and an active GitHub repository, which means they are constantly being maintained and updated.

    With the Flutter community, we can even find exciting tools such as Flutter Inspector or Flutter WebView Plugin. In the case of React Native, Facebook has been investing heavily in this framework. Besides the fact that the development process is entirely open-source, Facebook has created various tools to make the developer’s life easier.

    Also, the more updates and versions come out, the more interest and appreciation the developer community shows. Let’s see how both frameworks stack up when it comes to community engagement.

    For React Native

    The Facebook community is the most significant contributor to the React Native framework, followed by the community members themselves.

    React Native has garnered over 1,162 contributors on GitHub since its launch in 2015. The number of commits (or changes) to the framework has increased over time. It increased from 1,183 commits in 2016 to 1,722 commits in 2017.

    This increase indicates that more and more developers are interested in improving React Native.

    Moreover, there are over 19.8k live projects where developers share their experiences to resolve existing issues. The official React Native website offers tutorials for beginners who want to get started quickly with developing applications for Android and iOS while also providing advanced users with the necessary documentation.

    Also, there are a few other platforms where you can ask your question to the community, meet other React Native developers, and gain new contacts:

    Reddit: https://www.reddit.com/r/reactnative/

    Stack Overflow: http://stackoverflow.com/questions/tagged/react-native

Meetup: https://www.meetup.com/topics/react-native/

    Facebook: https://www.facebook.com/groups/reactnativecommunity/

    For Flutter

The Flutter community is smaller than React Native’s. The main reason is that Flutter is relatively new and not yet widely used in production apps. But it’s not hard to see that its popularity is growing day by day. Flutter has excellent documentation with examples, articles, and tutorials that you can find online. It also has direct contact with its users through channels such as Stack Overflow and Google Groups.

The Flutter community is growing at a steady pace, with around 662 contributors. The repository has been forked about 13.7k times, and anybody can seek help there for development purposes.

    Here are some platforms to connect with other developers in the Flutter community:

    GitHub: https://github.com/flutter

    Google Groups: https://groups.google.com/g/flutter-announce

    Stack Overflow: https://stackoverflow.com/tags/flutter

Reddit: https://www.reddit.com/r/FlutterDev/

Discord: https://discord.com/invite/N7Yshp4

    Slack: https://fluttercommunity.slack.com/

    Learning curve

The learning curve of Flutter is steeper than React Native’s. However, you can learn both frameworks within a reasonable time frame. So, let’s discuss what would be required to learn React Native and Flutter.

    React Native

React Native uses JavaScript, so anyone who knows how to write JS can pick up the framework. That said, it is different from building web applications, so if you come from native mobile development, getting the hang of things may take some time.

    However, React Native is relatively easy to learn for newbies. For starters, it offers a variety of resources, both online and offline. On the React website, users can find the documentation, guides, FAQs, and learning resources.

    Flutter

Flutter has a somewhat steeper learning curve than React Native. You need to know some basic concepts of native Android or iOS development, and experience with Java or Kotlin (for Android) or Objective-C or Swift (for iOS) helps. Dart’s static types and generics can be a challenge if you’re accustomed to dynamically typed languages. However, once you’ve learned how to use it, it can speed up your development process.

    Flutter provides great documentation of its APIs that you can refer to. Since the framework is still new, some information might not be updated yet.

    Team size

A central factor in choosing between React Native and Flutter is team size. To set realistic expectations on cost, you need to consider the type of application you will develop.

    React Native

Technically, a single developer can build a React Native app, but that developer would have to build all the native modules alone, which is not an easy task. The required team size therefore depends on the complexity of the mobile app you want to build. If you plan to create a simple app, such as a mobile-only website, one developer will be enough. If your project requires complex UI and animations, you will need more skillful and experienced developers.

    Flutter

Team size is an important factor for Flutter app development as well. The number of people on your team will depend on the requirements and the type of app you need to develop.

Flutter makes it easy to reuse code you might already have, or to share code with other apps you are building. You can still call into existing Java or Kotlin code through platform channels, though the app itself is written in Dart.

    UI component

When developing a cross-platform app, keep in mind that not all platforms behave identically. You will need a framework that keeps the app’s core UI elements consistent on each platform and that exposes an API for accessing native modules.

    React Native

    There are two aspects to implementing React Native in your app development. The first one is writing the apps in JavaScript. This is the easiest part since it’s somewhat similar to writing web apps. The second aspect is the integration of third-party modules that are not part of the core framework.

The reason third-party modules are required is that React Native does not support all native functionalities out of the box. For instance, if you want to implement a native alert box, you need a module that wraps UIAlertController from Apple’s SDK.

This makes the React Native framework somewhat dependent on third-party libraries, of which there are many. You can use these libraries in your project to add native app features that are not available in React Native itself; most often they are used to add maps, camera, sharing, and other native features.

    Flutter

    Flutter offers rich GUI components called widgets. A widget can be anything from simple text fields, buttons, switches, sliders, etc., to complex layouts that include multiple pages with split views, navigation bars, tab bars, etc., that are present in modern mobile apps.

    The Flutter toolkit is cross-platform and it has its own widgets, but it still needs third-party libraries to create applications. It also depends on the Android SDK and the iOS SDK for compilation and deployment. Developers can use any third-party library they want as long as it does not have any restrictions on open source licensing. Developers are also allowed to create their own libraries for Flutter app development.

    Testing Framework and Support

    React Native and Flutter have been used to develop many high-quality mobile applications. Of course, in any technology, a well-developed testing framework is essential.

Both React Native and Flutter come with relatively mature testing frameworks.

    React Native

    React Native uses the same UI components and APIs as a web application written in React.js. This means you can use the same frameworks and libraries for both platforms. Testing a React Native application can be more complex than a traditional web-based application because it relies heavily on the device itself. If you’re using a hybrid JavaScript approach, you can use tools like WebdriverIO or Appium to run the same tests across different browsers. Still, if you’re going native, you need to make sure you choose a tool with solid native support.

    Flutter

    Flutter has developed a testing framework that helps ensure your application is high quality. It is based on the premise of these three pillars: unit tests, widget tests, and integration tests. As you build out your Flutter applications, you can combine all three types of tests to ensure that your application works perfectly.

    Programming language

    One of the most important benefits of using Flutter and React Native to develop your mobile app is using a single programming language. This reduces the time required to hire developers and allows you to complete projects faster.

    React Native

React Native bridges the gap between the native and JavaScript environments, allowing developers to build mobile apps that run across platforms using JavaScript. It makes mobile app development faster, as it requires only one language, JavaScript, to create a cross-platform mobile app. This gives web developers a significant advantage over native application developers: they already know JavaScript and can build a mobile app prototype in a couple of days, with no need to learn Java or Swift. They can even use the same JavaScript libraries they use at work, like Redux and Immutable.js.
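As an illustrative sketch (the reducer below is invented for this post, not taken from any particular app), this is the kind of plain-JavaScript, Redux-style logic that a web app and a React Native app could import and share unchanged, because it has no platform dependencies:

```javascript
// Hypothetical example: a plain Redux-style reducer with no platform
// dependencies, so the same file can be required by a web bundle and a
// React Native bundle alike.
const initialState = { count: 0 }

function counterReducer(state = initialState, action = {}) {
  switch (action.type) {
    case 'INCREMENT':
      // Return a new state object rather than mutating the old one
      return { ...state, count: state.count + 1 }
    case 'RESET':
      return initialState
    default:
      // Unknown actions leave the state untouched
      return state
  }
}

module.exports = { counterReducer, initialState }
```

Only the rendering layer (DOM components on the web, native views in React Native) would differ; the shared logic stays identical.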

    Flutter

    Flutter provides tools to create native mobile apps for both Android and iOS. Furthermore, it allows you to reuse code between the platforms because it supports code sharing using libraries written in Dart.

    You can also choose between two different ways of creating layouts for Flutter apps. The first one is similar to CSS, while the second one is more like HTML. Both are very powerful and simple to use. By default, you should use widgets built by the Flutter team, but if needed, you can also create your own custom widgets or modify existing ones.

    Tooling and DX

    While using either Flutter or React Native for mobile app development, it is likely that your development team will also be responsible for the CI/CD pipeline used to release new versions of your app.

    CI/CD support for Flutter and React Native is very similar at the moment. Both frameworks have good support for continuous integration (CI), continuous delivery (CD), and continuous deployment (CD). Both offer a first-class experience for building, testing, and deploying apps.

    React Native

    The React Native framework has existed for some time now and is pretty mature. However, it still lacks documentation around continuous integration (CI) and continuous delivery (CD) solutions. Considering the maturity of the framework, we might expect to see more investment here. 

Expo is a development environment and build tool for React Native. It lets you develop and run React Native apps on your computer much as you would any web app.

    Expo turns a React Native app into a single JavaScript bundle, which is then published to one of the app stores using Expo’s tools. It provides all the necessary tooling—like bundling, building, and hot reloading—and manages the technical details of publishing to each app store. Expo provides the tooling and environment so that you can develop and test your app in a familiar way, while it also takes care of deploying to production.

    Flutter

Flutter is fully open source, so the next step is to develop a rich ecosystem around it. The good news is that Flutter’s command-line tooling integrates well with Xcode, Android Studio, IntelliJ IDEA, and other fully featured IDEs, which means Flutter can also integrate easily with continuous integration/continuous deployment tools. CI/CD tools for Flutter include Bitrise and Codemagic; both offer free tiers as well as paid plans with more features.

Here is an example of a to-do list app built with both React Native and Flutter:

    Flutter: https://github.com/velotiotech/simple_todo_flutter

    React Native: https://github.com/velotiotech/react-native-todo-example

    Conclusion

    As you can see, both Flutter and React Native are excellent cross-platform app development tools that will be able to offer you high-quality apps for iOS and Android. The choice between React Native vs Flutter will depend on the complexity of the app that you are looking to create, your team size, and your needs for the app. Still, all in all, both of these frameworks are great options to consider to develop cross-platform native mobile applications.

  • Test Automation in React Native apps using Appium and WebdriverIO

React Native provides a mobile app development experience without sacrificing user experience or visual performance, and it has become popular because native apps can be created from the same JavaScript codebase. When it comes to mobile app UI testing, Appium is a great way to test native React Native apps out of the box. Businesses are also attracted by the fact that this combination can save them a lot of money.

In this blog, we are going to cover how to add automated tests to React Native apps using Appium and WebdriverIO on Node.js.

    What are React Native Apps

React Native is an open-source framework for building Android and iOS apps using React and native platform capabilities. With React Native, you use JavaScript to access your platform’s APIs and define the look and behavior of your UI using React components: reusable, nestable pieces of code. In Android and iOS development, a “view” is the basic building block of a UI: a small rectangular object on the screen that can display text or images, or accept user input. Even the smallest piece of an app, such as a line of text or a button, is a kind of view, and views can contain other views.
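To make the view-composition idea concrete, here is a conceptual sketch using plain JavaScript objects (not actual React Native code; the names and structure are illustrative only): a view is a node that can contain other views, so a screen is simply a tree of nested views.

```javascript
// Conceptual sketch: model a UI as a tree of views, where each view
// has a type, some props, and zero or more child views.
function view(type, props = {}, children = []) {
  return { type, props, children }
}

// A hypothetical login screen: a container view nesting three child views.
const loginScreen = view('View', { style: 'screen' }, [
  view('Text', { value: 'Welcome' }),
  view('TextInput', { placeholder: 'Username' }),
  view('Button', { title: 'Log In' }),
])

module.exports = { view, loginScreen }
```

In real React Native the same nesting is expressed with JSX components, but the underlying idea is the same tree of views.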

    What is Appium

Appium is an open-source tool for automating native, mobile web, and hybrid apps on iOS, Android, and Windows. Native apps are those written using the iOS or Android SDKs. Mobile web apps are accessed through a mobile browser (Appium supports Safari on iOS and Chrome or the built-in “Browser” on Android). Hybrid apps wrap a “webview”, a native control that allows you to interact with web content. Projects like Apache Cordova make it easy to build applications using web technology inside a native wrapper, creating a hybrid application.

Importantly, Appium is “cross-platform”: it lets you write tests against multiple platforms (iOS, Android) using the same API. This enables code reuse between iOS, Android, and Windows test suites. Appium drives iOS and Android applications using the WebDriver protocol.

    Fig:- Appium Architecture
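To illustrate the cross-platform point, here is a hedged sketch (the helper and the exact capability values are our own illustration, though UiAutomator2 and XCUITest are Appium’s standard drivers) of deriving per-platform Appium capabilities from one helper, so the test code that uses the resulting session can stay identical for Android and iOS:

```javascript
// Illustrative sketch: one helper builds the per-platform Appium
// capabilities, while the tests that run against the session do not
// need to know which platform they are driving.
function buildCapabilities(platform) {
  // Settings shared by both platforms
  const common = { newCommandTimeout: 240 }

  if (platform === 'android') {
    return {
      ...common,
      platformName: 'Android',
      automationName: 'UiAutomator2', // Appium's standard Android driver
    }
  }
  if (platform === 'ios') {
    return {
      ...common,
      platformName: 'iOS',
      automationName: 'XCUITest', // Appium's standard iOS driver
    }
  }
  throw new Error(`Unsupported platform: ${platform}`)
}

module.exports = { buildCapabilities }
```

In a real suite you would merge the result with app-specific capabilities (app path, device name) before starting the session.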

What is WebdriverIO

WebdriverIO is a next-gen browser and mobile automation test framework for Node.js. It allows you to automate any application written with modern web frameworks, such as React, Angular, Polymer, and Vue.js, as well as native mobile apps.

WebdriverIO is a widely used test automation framework in JavaScript. It supports many reporters and services, multiple test frameworks, and the WDIO CLI test runner.

    The following are examples of supported services:

    • Appium Service
    • Devtools Service
    • Firefox Profile Service
    • Selenium Standalone Service
    • Shared Store Service
    • Static Server Service
    • ChromeDriver Service
    • Report Portal Service
    • Docker Service

The following test frameworks are supported:

    • Mocha
    • Jasmine
• Cucumber

Fig:- WebdriverIO Architecture

    Key features of Appium & WebdriverIO

    Appium

    • Does not require application source code or library
    • Provides a strong and active community
    • Has multi-platform support, i.e., it can run the same test cases on multiple platforms
    • Allows the parallel execution of test scripts
    • In Appium, a small change does not require reinstallation of the application
    • Supports various languages like C#, Python, Java, Ruby, PHP, JavaScript with node.js, and many others that have a Selenium client library

    WebdriverIO 

    • Extendable
    • Compatible
    • Feature-rich 
    • Supports modern web and mobile frameworks
• Runs automation tests for both web applications and native mobile apps
    • Simple and easy syntax
    • Integrates tests to third-party tools such as Appium
    • ‘Wdio setup wizard’ makes the setup simple and easy
    • Integrated test runner

    Installation & Configuration

• Create a sample Appium project
$ mkdir Demo_Appium_Project
$ npm init
    $ package name: (demo_appium_project) demo_appium_test
    $ version: (1.0.0) 1.0.0
    $ description: demo_appium_practice
    $ entry point: (index.js) index.js
    $ test command: "./node_modules/.bin/wdio wdio.conf.js"
    $ git repository: 
    $ keywords: 
    $ author: Pushkar
    $ license: (ISC) ISC

    This will also create a package.json file for test settings and project dependencies.

    • Install node packages
    $ npm install

    • Install Appium through npm or as a standalone app.
    $ npm install -g appium or npm install --save appium

• Install WebdriverIO
$ npm install -g webdriverio or npm install --save-dev webdriverio @wdio/cli

• Install the Chai assertion library
$ npm install -g chai or npm install --save chai

Make sure you have the following versions installed: 

$ node --version - v14.17.0
    $ npm --version - 7.17.0
    $ appium --version - 1.21.0
    $ java --version - java 16.0.1
    $ allure --version - 2.14.0

    WebdriverIO Configuration 

A WebdriverIO configuration file must be created to apply these settings during test runs. Generate it with the command below in the project root:

    $ npx wdio config

Answer the following series of questions to install the required dependencies:

    $ Where is your automation backend located? - On my local machine
    $ Which framework do you want to use? - mocha	
    $ Do you want to use a compiler? No!
    $ Where are your test specs located? - ./test/specs/**/*.js
    $ Do you want WebdriverIO to autogenerate some test files? - Yes
    $ Do you want to use page objects (https://martinfowler.com/bliki/PageObject.html)? - No
    $ Which reporter do you want to use? - Allure
    $ Do you want to add a service to your test setup? - No
    $ What is the base url? - http://localhost

    This is how wdio.conf.js looks:

    exports.config = {
     port: 4724,
     path: '/wd/hub/',
     runner: 'local',
     specs: ['./test/specs/*.js'],
     maxInstances: 1,
     capabilities: [
       {
         platformName: 'Android',
         platformVersion: '11',
         appPackage: 'com.facebook.katana',
         appActivity: 'com.facebook.katana.LoginActivity',
         automationName: 'UiAutomator2'
       }
     ],
     services: [
       [
         'appium',
         {
           args: {
             relaxedSecurity: true
            },
           command: 'appium'
         }
       ]
     ],
     logLevel: 'debug',
     bail: 0,
     baseUrl: 'http://localhost',
     waitforTimeout: 10000,
     connectionRetryTimeout: 90000,
     connectionRetryCount: 3,
     framework: 'mocha',
     reporters: [
       [
         'allure',
         {
           outputDir: 'allure-results',
           disableWebdriverStepsReporting: true,
           disableWebdriverScreenshotsReporting: false
         }
       ]
     ],
     mochaOpts: {
       ui: 'bdd',
       timeout: 60000
     },
     afterTest: function(test, context, { error, result, duration, passed, retries }) {
       if (!passed) {
           browser.takeScreenshot();
       }
     }
    }

    For iOS Automation, just add the following capabilities in wdio.conf.js & the Appium Configuration: 

    {
  "platformName": "iOS",
      "platformVersion": "14.5",
      "app": "/Your_PATH/wdioNativeDemoApp.app",
      "deviceName": "iPhone 12 Pro Max"
    }

    Launch the iOS Simulator from Xcode

Install Appium Doctor for iOS using the following command:

    npm install -g appium-doctor

    Fig:- Appium Doctor Installed

    This is how package.json will look:

    {
     "name": "demo_appium_test",
     "version": "1.0.0",
     "description": "demo_appium_practice",
     "main": "index.js",
     "scripts": {
       "test": "./node_modules/.bin/wdio wdio.conf.js"
     },
     "author": "Pushkar",
     "license": "ISC",
     "dependencies": {
       "@wdio/sync": "^7.7.4",
       "appium": "^1.21.0",
       "chai": "^4.3.4",
       "webdriverio": "^7.7.4"
     },
     "devDependencies": {
       "@wdio/allure-reporter": "^7.7.3",
       "@wdio/appium-service": "^7.7.3",
       "@wdio/cli": "^7.7.4",
       "@wdio/local-runner": "^7.7.4",
       "@wdio/mocha-framework": "^7.7.4",
       "@wdio/selenium-standalone-service": "^7.7.4"
     }
    }

Steps to follow if an npm legacy peer deps problem occurs:

npm install --save --legacy-peer-deps
npm config set legacy-peer-deps true
npm i --legacy-peer-deps
npm cache clean --force

    This is how the folder structure will look in Appium with the WebDriverIO Framework:

    Fig:- Appium Framework Outline

    Step-by-Step Configuration of Android Emulator using Android Studio

    Fig:- Android Studio Launch

     

    Fig:- Android Studio AVD Manager

     

    Fig:- Create Virtual Device

     

    Fig:- Choose a device Definition

     

    Fig:- Select system image

    Fig:- License Agreement

     

    Fig:- Component Installer

     

    Fig:- System Image Download

     

    Fig:- Configuration Verification

    Fig:- Virtual Device Listing

    ‍Appium Desktop Configuration

    Fig:- Appium Desktop Launch

Setup of ANDROID_HOME, ANDROID_SDK_ROOT & JAVA_HOME

    Follow these steps for setting up ANDROID_HOME: 

vi ~/.bash_profile
# Add the following lines:
export ANDROID_HOME=/Users/pushkar/Library/Android/sdk
export PATH=$PATH:$ANDROID_HOME/platform-tools
export PATH=$PATH:$ANDROID_HOME/tools
export PATH=$PATH:$ANDROID_HOME/tools/bin
export PATH=$PATH:$ANDROID_HOME/emulator
# Save ~/.bash_profile, then reload it:
source ~/.bash_profile
echo $ANDROID_HOME
/Users/pushkar/Library/Android/sdk

    Follow these steps for setting up ANDROID_SDK_ROOT:

vi ~/.bash_profile
# Add the following lines:
export ANDROID_HOME=/Users/pushkar/Library/Android/sdk
export ANDROID_SDK_ROOT=/Users/pushkar/Library/Android/sdk
export ANDROID_AVD_HOME=/Users/pushkar/.android/avd
# Save ~/.bash_profile, then reload it:
source ~/.bash_profile
echo $ANDROID_SDK_ROOT
/Users/pushkar/Library/Android/sdk

    Follow these steps for setting up JAVA_HOME:

java --version
vi ~/.bash_profile
# Add the following line:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-16.0.1.jdk/Contents/Home
# Save ~/.bash_profile, then reload it:
source ~/.bash_profile
echo $JAVA_HOME
/Library/Java/JavaVirtualMachines/jdk-16.0.1.jdk/Contents/Home

    Fig:- Environment Variables in Appium

     

    Fig:- Appium Server Starts 

     

    Fig:- Appium Start Inspector Session

    Fig:- Inspector Session Configurations

Note – Make sure the app under test is installed on the emulator from the Google Play Store. 

    Fig:- Android Emulator Launch  

     

Fig:- Android Emulator with Facebook React Native Mobile App

     

    Fig:- Success of Appium with Emulator

     

    Fig:- Locating Elements using Appium Inspector

    How to write E2E React Native Mobile App Tests 

    Fig:- Test Suite Structure of Mocha

    ‍Here is an example of how to write E2E test in Appium:

    Positive Testing Scenario – Validate Login & Nav Bar

    1. Open Facebook React Native App 
    2. Enter valid email and password
    3. Click on Login
    4. Users should be able to login into Facebook 

    Negative Testing Scenario – Invalid Login

    1. Open Facebook React Native App
    2. Enter invalid email and password 
    3. Click on login 
4. Users should not be able to log in and should see an “Incorrect Password” popup message

    Negative Testing Scenario – Invalid Element

    1. Open Facebook React Native App 
    2. Enter invalid email and  password 
    3. Click on login 
    4. Provide invalid element to capture message

Make sure the test script is placed under the test/specs folder. 

var expect = require('chai').expect

// Relaunch the app before each test and close it afterwards
beforeEach(() => {
 driver.launchApp()
})

afterEach(() => {
 driver.closeApp()
})

describe('Verify Login Scenarios on Facebook React Native Mobile App', () => {
 it('User should be able to login using valid credentials to Facebook Mobile App', () => {
   $('~Username').waitForDisplayed({ timeout: 20000 })
   $('~Username').setValue('Valid-Email')
   $('~Password').waitForDisplayed({ timeout: 20000 })
   $('~Password').setValue('Valid-Password')
   $('~Log In').click()
   browser.pause(10000)
 })

 it('User should not be able to login with invalid credentials to Facebook Mobile App', () => {
   $('~Username').waitForDisplayed({ timeout: 20000 })
   $('~Username').setValue('Invalid-Email')
   $('~Password').waitForDisplayed({ timeout: 20000 })
   $('~Password').setValue('Invalid-Password')
   $('~Log In').click()
   $(
       '//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"]'
     )
     .waitForDisplayed({ timeout: 11000 })
   const status = $(
     '//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"]'
   ).getText()
   expect(status).to.equal(
     `You Can't Use This Feature Right Now`
   )
 })

 it('Test Case should Fail Because of Invalid Element', () => {
   $('~Username').waitForDisplayed({ timeout: 20000 })
   $('~Username').setValue('Invalid-Email')
   $('~Password').waitForDisplayed({ timeout: 20000 })
   $('~Password').setValue('Invalid-Password')
   $('~Log In').click()
   // The selectors below are deliberately malformed so this test fails
   $(
       '//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"'
     )
     .waitForDisplayed({ timeout: 11000 })
   const status = $(
     '//android.widget.TextView[@resource-id="com.facebook.katana"'
   ).getText()
   expect(status).to.equal(
     `You Can't Use This Feature Right Now`
   )
 })

})
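The specs above repeat the same username/password steps three times. As a sketch of one way to reduce that duplication (the helper below is our own, not part of the original suite; the `~Username`, `~Password`, and `~Log In` selectors are the accessibility IDs used above), the login flow can be extracted into a function that takes the selector function explicitly. In a real spec you would pass WebdriverIO’s global `$`; in a unit test you can pass a stub:

```javascript
// Sketch of a reusable login helper for the specs above. The selector
// function `$` is injected so the helper can be exercised with a stub
// in plain Node as well as with WebdriverIO's global `$` in a spec.
function login($, email, password, timeout = 20000) {
  const username = $('~Username')
  username.waitForDisplayed({ timeout })
  username.setValue(email)

  const passwordField = $('~Password')
  passwordField.waitForDisplayed({ timeout })
  passwordField.setValue(password)

  $('~Log In').click()
}

module.exports = { login }
```

Each `it` block could then start with `login($, 'Valid-Email', 'Valid-Password')` and keep only its scenario-specific assertions.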

How to Run Mobile Test Scripts

$ npm test 

This will create a results folder with an .xml report.

    Reporting

    The following are examples of the supported reporters:

    • Allure Reporter
    • Concise Reporter
    • Dot Reporter
    • JUnit Reporter
    • Spec Reporter
    • Sumologic Reporter
    • Report Portal Reporter
    • Video Reporter
    • HTML Reporter
    • JSON Reporter
    • Mochawesome Reporter
    • Timeline Reporter
    • CucumberJS JSON Reporter

    Here, we are using Allure Reporting. Allure Reporting in WebdriverIO is a plugin to create Allure Test Reports.

The easiest way is to keep @wdio/allure-reporter as a devDependency in your package.json:

    $ npm install @wdio/allure-reporter --save-dev

Reporter options can be specified in the wdio.conf.js configuration file:

    reporters: [
       [
         'allure',
         {
           outputDir: 'allure-results',
           disableWebdriverStepsReporting: true,
           disableWebdriverScreenshotsReporting: false
         }
       ]
     ]

    To convert Allure .xml report to .html report, run the following command: 

    $ allure generate && allure open
The Allure HTML report will open in the browser.

    This is what Allure Reports look like: 

    Fig:- Allure Report Overview 

     

    Fig:- Allure Categories

     

    Fig:- Allure Suites

     

Fig:- Allure Graphs

     

    Fig:- Allure Timeline

     

    Fig:- Allure Behaviors

     

    Fig:- Allure Packages

    Limitations with Appium & WebDriverIO

    Appium 

    • Android versions lower than 4.2 are not supported for testing
    • Limited support for hybrid app testing
    • Doesn’t support image comparison.

    WebdriverIO

• It has its own custom implementation of the WebDriver protocol
• It can be used for automating AngularJS apps, but it is not as tailored to them as Protractor

    Conclusion

In the QA and developer ecosystem, using Appium to test React Native applications is common. Appium makes it easy to automate test cases on both Android and iOS platforms while working with React Native, with the WebDriver protocol acting as the bridge between the test scripts and the mobile platforms. Appium is a solid framework for automated UI testing. As this article shows, it can run test cases quickly and reliably, and, most importantly, it can test both the Android and iOS apps built with React Native from a single codebase.
