Category: Software Engineering & Architecture

  • Unlocking Seamless Communication: BLE Integration with React Native for Device Connectivity

    In today’s interconnected world, where smart devices have become an integral part of our daily lives, the ability to communicate with Bluetooth Low Energy (BLE) enabled devices opens up a myriad of possibilities for innovative applications. In this blog, we will explore the exciting realm of communicating with BLE-enabled devices using React Native, a popular cross-platform framework for mobile app development. Whether you’re a seasoned React Native developer or just starting your journey, this blog will equip you with the knowledge and skills to establish seamless communication with BLE devices, enabling you to create powerful and engaging user experiences. So, let’s dive in and unlock the potential of BLE communication in the world of React Native!

    BLE (Bluetooth Low Energy)

    Bluetooth Low Energy (BLE) is a wireless communication technology designed for low-power consumption and short-range connectivity. It allows devices to exchange data and communicate efficiently while consuming minimal energy. BLE has gained popularity in various industries, from healthcare and fitness to home automation and IoT applications. It enables seamless connectivity between devices, allowing for the development of innovative solutions. With its low energy requirements, BLE is ideal for battery-powered devices like wearables and sensors. It offers simplified pairing, efficient data transfer, and supports various profiles for specific use cases. BLE has revolutionized the way devices interact, enabling a wide range of connected experiences in our daily lives.

    Here is a comprehensive overview of how mobile applications establish connections and facilitate communication with BLE devices.

    What will we be using?

    react-native - 0.71.6
react - 18.2.0
    react-native-ble-manager - 10.0.2

    Note: We are assuming you already have the React Native development environment set up on your system; if not, please refer to the React Native guide for instructions on setting up the RN development environment.

    What are we building?

    Together, we will construct a sample mobile application that showcases the integration of Bluetooth Low Energy (BLE) technology. This app will search for nearby BLE devices, establish connections with them, and facilitate seamless message exchanges between the mobile application and the chosen BLE device. By embarking on this project, you will gain practical experience in building an application that leverages BLE capabilities for effective communication. Let’s commence this exciting journey of mobile app development and BLE connectivity!

    Setup

Before setting up react-native-ble-manager, let’s start by creating a React Native application using the React Native CLI. Follow these steps:

    Step 1: Ensure that you have Node.js and npm (Node Package Manager) installed on your system.

    Step 2: Open your command prompt or terminal and navigate to the directory where you want to create your React Native project.

    Step 3: Run the following command to create a new React Native project:

    npx react-native@latest init RnBleManager

    Step 4: Wait for the project setup to complete. This might take a few minutes as it downloads the necessary dependencies.

    Step 5: Once the setup is finished, navigate into the project directory:

    cd RnBleManager

    Step 6: Congratulations! You have successfully created a new React Native application using the React Native CLI.

Now you are ready to set up react-native-ble-manager and integrate it into your React Native project.

    Installing react-native-ble-manager

    If you use NPM -
    npm i --save react-native-ble-manager
    
    With Yarn -
    yarn add react-native-ble-manager

On Android, the app must declare the Bluetooth and location permissions it needs in order to detect and communicate with BLE devices.

Add the necessary permissions to the AndroidManifest.xml file at android/app/src/main/AndroidManifest.xml.
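The exact entries depend on your minSdkVersion and target API level. A typical set for this use case, consistent with the runtime permissions requested later in this post (treat it as a starting point rather than a definitive list), looks like this:

```xml
<!-- Legacy Bluetooth permissions, only needed up to Android 11 (API 30) -->
<uses-permission android:name="android.permission.BLUETOOTH" android:maxSdkVersion="30" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" android:maxSdkVersion="30" />

<!-- Android 12+ (API 31) Bluetooth permissions, requested at runtime -->
<uses-permission android:name="android.permission.BLUETOOTH_SCAN" />
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
<uses-permission android:name="android.permission.BLUETOOTH_ADVERTISE" />

<!-- Location is required for BLE scanning on Android 11 and below -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
```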

    Integration

    At this stage, having successfully created a new React Native application, installed the react-native-ble-manager, and configured it to function seamlessly on Android, it’s time to proceed with integrating the react-native-ble-manager into your React Native application. Let’s dive into the integration process to harness the power of BLE functionality within your app.

    BleConnectionManager

To make the BLE connection state and device communication accessible across the entire codebase, we will manage the BLE connection in global state. To achieve this, we will create a context provider called “BleContextProvider.” By encapsulating the BLE connection logic within this provider, we can easily share and access the connection state and related functions across different components of the application. Let’s proceed with implementing the BleContextProvider to empower our application with seamless BLE communication capabilities.

    This context provider will possess the capability to access and manage the current BLE state, providing a centralized hub for interacting with the BLE device. It will serve as the gateway to establish connections, send and receive data, and handle various BLE-related functionalities. By encapsulating the BLE logic within this context provider, we can ensure that all components within the application have access to the BLE device and the ability to communicate with it. This approach simplifies the integration process and facilitates efficient management of the BLE connection and communication throughout the entire application.

    Let’s proceed with creating a context provider equipped with essential state management functionalities. This context provider will effectively handle the connection and scanning states, maintain the BLE object, and manage the list of peripherals (BLE devices) discovered during the application’s scanning process. By implementing this context provider, we will establish a robust foundation for seamlessly managing BLE connectivity and communication within the application.

    NOTE: Although not essential for the example at hand, implementing global management of the BLE connection state allows us to demonstrate its universal management capabilities.


    Prior to integrating the BLE-related components, it is crucial to ensure that the mobile app verifies whether the:

    1. Location permissions are granted and enabled
    2. Mobile device’s Bluetooth is enabled

To accomplish this, we will implement a small method called requestBlePermissions that requests all the necessary permissions from the user. We will then call this method as soon as our context provider initializes, within the useEffect hook in the BleContextProvider. Doing so ensures that the required permissions are obtained by the mobile app before proceeding with the integration of BLE functionalities.

    import {PermissionsAndroid, Platform} from "react-native"
    import BleManager from "react-native-ble-manager"
    
  const requestBlePermissions = async (): Promise<boolean> => {
    // iOS handles BLE permissions natively; the runtime requests below are Android-only
    if (Platform.OS !== "android") {
      return true
    }
    // Runtime permissions were introduced in Android 6.0 (API 23)
    if (Platform.Version < 23) {
      return true
    }
    try {
      const status = await PermissionsAndroid.requestMultiple([
        PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION,
        PermissionsAndroid.PERMISSIONS.BLUETOOTH_CONNECT,
        PermissionsAndroid.PERMISSIONS.BLUETOOTH_SCAN,
        PermissionsAndroid.PERMISSIONS.BLUETOOTH_ADVERTISE,
      ])
      return (
        status[PermissionsAndroid.PERMISSIONS.BLUETOOTH_CONNECT] === "granted" &&
        status[PermissionsAndroid.PERMISSIONS.BLUETOOTH_SCAN] === "granted" &&
        status[PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION] === "granted"
      )
    } catch (e) {
      console.error("Bluetooth permissions denied ", e)
      return false
    }
  }
    
    // effects
    useEffect(() => {
      const initBle = async () => {
        await requestBlePermissions()
        BleManager.enableBluetooth()
      }
      
      initBle()
    }, [])

After granting all the required permissions and enabling Bluetooth, the next step is to start the BleManager. To accomplish this, please add the following line of code after the enableBluetooth call in the aforementioned useEffect:

    // initialize BLE module
    BleManager.start({ showAlert: false })

    By including this code snippet, the BleManager will be initialized, facilitating the smooth integration of BLE functionality within your application.

    Now that we have obtained the necessary permissions, enabled Bluetooth, and initiated the Bluetooth manager, we can proceed with implementing the functionality to scan and detect BLE peripherals. 

    We will now incorporate the code that enables scanning for BLE peripherals. This will allow us to discover and identify nearby BLE devices. Let’s dive into the implementation of this crucial step in our application’s BLE integration process.

    To facilitate scanning and stopping the scanning process for BLE devices, as well as handle various events related to the discovered peripherals, scan stop, and BLE disconnection, we will create a method along with the necessary event listeners.

In addition, state management is essential to effectively handle the connection and scanning states, as well as maintain the list of scanned devices. To accomplish this, let’s incorporate the following code into the BleContextProvider. This will ensure seamless management of the aforementioned states and facilitate efficient tracking of scanned devices.

    Let’s proceed with implementing these functionalities to ensure smooth scanning and handling of BLE devices within our application.

export const BLE_NAME = "SAMPLE_BLE"
    export const BLE_SERVICE_ID = "5476534d-1213-1212-1212-454e544f1212"
    export const BLE_READ_CHAR_ID = "00105354-0000-1000-8000-00805f9b34fb"
    export const BLE_WRITE_CHAR_ID = "00105352-0000-1000-8000-00805f9b34fb"
    
    export const BleContextProvider = ({
      children,
    }: {
      children: React.ReactNode
    }) => {
      // variables
      const BleManagerModule = NativeModules.BleManager
      const bleEmitter = new NativeEventEmitter(BleManagerModule)
      const { setConnectedDevice } = useBleStore()
    
      // State management
      const [state, dispatch] = React.useReducer(
        (prevState: BleState, action: any) => {
          switch (action.type) {
            case "scanning":
              return {
                ...prevState,
                isScanning: action.payload,
              }
            case "connected":
              return {
                ...prevState,
                connectedBle: action.payload.peripheral,
                isConnected: true,
              }
            case "disconnected":
              return {
                ...prevState,
                connectedBle: undefined,
                isConnected: false,
              }
        case "clearPeripherals":
          return {
            ...prevState,
            // reset with a fresh Map so React sees a new reference
            peripherals: new Map(),
          }
        case "addPerpheral": {
          // copy the Map instead of mutating the previous state
          const peripherals = new Map(prevState.peripherals)
          peripherals.set(action.payload.id, action.payload.peripheral)
          return {
            ...prevState,
            peripherals: peripherals,
          }
        }
            default:
              return prevState
          }
        },
        initialState
      )
    
      // methods
      const getPeripheralName = (item: any) => {
        if (item.advertising) {
          if (item.advertising.localName) {
            return item.advertising.localName
          }
        }
    
        return item.name
      }
    
      // start to scan peripherals
      const startScan = () => {
    // skip if a scan is currently in progress
        console.log("Start scanning ", state.isScanning)
        if (state.isScanning) {
          return
        }
    
        dispatch({ type: "clearPeripherals" })
    
        // then re-scan it
        BleManager.scan([], 10, false)
          .then(() => {
            console.log("Scanning...")
            dispatch({ type: "scanning", payload: true })
          })
          .catch((err) => {
            console.error(err)
          })
      }
    
      const connectBle = (peripheral: any, callback?: (name: string) => void) => {
        if (peripheral && peripheral.name && peripheral.name == BLE_NAME) {
          BleManager.connect(peripheral.id)
            .then((resp) => {
              dispatch({ type: "connected", payload: { peripheral } })
              // callback from the caller
              callback && callback(peripheral.name)
              setConnectedDevice(peripheral)
            })
            .catch((err) => {
              console.log("failed connecting to the device", err)
            })
        }
      }
    
      // handle discovered peripheral
      const handleDiscoverPeripheral = (peripheral: any) => {
        console.log("Got ble peripheral", getPeripheralName(peripheral))
    
        if (peripheral.name && peripheral.name == BLE_NAME) {
          dispatch({
            type: "addPerpheral",
            payload: { id: peripheral.id, peripheral },
          })
        }
      }
    
      // handle stop scan event
      const handleStopScan = () => {
        console.log("Scan is stopped")
        dispatch({ type: "scanning", payload: false })
      }
    
      // handle disconnected peripheral
      const handleDisconnectedPeripheral = (data: any) => {
        console.log("Disconnected from " + data.peripheral)
    
    dispatch({ type: "disconnected" })
      }
    
      const handleUpdateValueForCharacteristic = (data: any) => {
        console.log(
          "Received data from: " + data.peripheral,
          "Characteristic: " + data.characteristic,
          "Data: " + toStringFromBytes(data.value)
        )
      }
    
  // effects
  useEffect(() => {
    const initBle = async () => {
      await requestBlePermissions()
      await BleManager.enableBluetooth()
      // initialize BLE module
      BleManager.start({ showAlert: false })
    }

    initBle()

    // add ble listeners on mount
    const listeners = [
      bleEmitter.addListener(
        "BleManagerDiscoverPeripheral",
        handleDiscoverPeripheral
      ),
      bleEmitter.addListener("BleManagerStopScan", handleStopScan),
      bleEmitter.addListener(
        "BleManagerDisconnectPeripheral",
        handleDisconnectedPeripheral
      ),
      bleEmitter.addListener(
        "BleManagerDidUpdateValueForCharacteristic",
        handleUpdateValueForCharacteristic
      ),
    ]

    // remove the listeners on unmount to avoid leaks
    return () => {
      listeners.forEach((listener) => listener.remove())
    }
  }, [])
    
    // render
      return (
        <BleContext.Provider
          value={{
            ...state,
            startScan: startScan,
            connectBle: connectBle,
          }}
        >
          {children}
        </BleContext.Provider>
      )
    }

    NOTE: It is important to note the properties of the BLE device we intend to search for and connect to, namely BLE_NAME, BLE_SERVICE_ID, BLE_READ_CHAR_ID, and BLE_WRITE_CHAR_ID. Familiarizing yourself with these properties beforehand is crucial, as they enable you to restrict the search to specific BLE devices and facilitate connection to the desired BLE service and characteristics for reading and writing data. Being aware of these properties will greatly assist you in effectively working with BLE functionality.

    For instance, take a look at the handleDiscoverPeripheral method. In this method, we filter the discovered peripherals based on their device name, matching it with the predefined BLE_NAME we mentioned earlier. As a result, this approach allows us to obtain a list of devices that specifically match the given name, narrowing down the search to the desired devices only. 

    Additionally, you have the option to scan peripherals using the service IDs of the Bluetooth devices. This means you can specify specific service IDs to filter the discovered peripherals during the scanning process. By doing so, you can focus the scanning on Bluetooth devices that provide the desired services, enabling more targeted and efficient scanning operations.
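The library supports this natively via the first argument of BleManager.scan, and the same idea can also be applied on the JavaScript side when post-filtering discovered peripherals. As an illustrative sketch (the advertisesService helper is hypothetical, and the exact shape of the advertising field reported per platform is an assumption):

```typescript
// Hypothetical helper: keep only peripherals that advertise a given
// service UUID. The advertising.serviceUUIDs field mirrors what
// react-native-ble-manager reports for discovered peripherals; treat its
// exact shape (and availability per platform) as an assumption.
interface DiscoveredPeripheral {
  id: string;
  name?: string;
  advertising?: { serviceUUIDs?: string[] };
}

const advertisesService = (
  peripheral: DiscoveredPeripheral,
  serviceUUID: string
): boolean =>
  (peripheral.advertising?.serviceUUIDs ?? []).some(
    (uuid) => uuid.toLowerCase() === serviceUUID.toLowerCase()
  );
```

Such a predicate could complement handleDiscoverPeripheral in cases where device names are unreliable or duplicated.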

    Excellent! We now have all the necessary components in place for scanning and connecting to the desired BLE device. Let’s proceed by adding the user interface (UI) elements that will allow users to initiate the scan, display the list of scanned devices, and enable connection to the selected device. By implementing these UI components, we will create a seamless user experience for scanning, device listing, and connection within our application.

    Discovering and Establishing Connections with BLE Devices

    Let’s create a new UI component/Page that will handle scanning, listing, and connecting to the BLE device. This page will have:

    • A Scan button to call the scan function
    • A simple FlatList to list the selected BLE devices and
    • A method to connect to the selected BLE device when the user clicks on any BLE item row from the list

    Create HomeScreen.tsx in the src folder and add the following code: 

    import React, {useCallback, useEffect, useMemo} from 'react';
    import {
      ActivityIndicator,
      Alert,
      Button,
      FlatList,
      StyleSheet,
      Text,
      TouchableOpacity,
      View,
    } from 'react-native';
    import {useBleContext} from './BleContextProvider';
    
    interface HomeScreenProps {}
    
    const HomeScreen: React.FC<HomeScreenProps> = () => {
      const {
        isConnected,
        isScanning,
        peripherals,
        connectedBle,
        startScan,
        connectBle,
      } = useBleContext();
    
  // Derived state: connected device first, then discovered peripherals
  const scannedbleList = useMemo(() => {
    const list = [];
    if (connectedBle) list.push(connectedBle);
    if (peripherals) list.push(...Array.from(peripherals.values()));
    return list;
  }, [peripherals, connectedBle, isScanning]);
    
      useEffect(() => {
        if (!isConnected) {
          startScan && startScan();
        }
      }, []);
    
  // Methods
  // Rough distance estimate from RSSI using the log-distance path-loss
  // model (measured power -69 dBm at 1 m, path-loss exponent 2)
  const getRssi = (rssi: number) => {
    return !!rssi
      ? Math.pow(10, (-69 - rssi) / (10 * 2)).toFixed(2) + ' m'
      : 'N/A';
  };
    
      const onBleConnected = (name: string) => {
        Alert.alert('Device connected', `Connected to ${name}.`, [
          {
            text: 'Ok',
            onPress: () => {},
            style: 'default',
          },
        ]);
      };
      const BleListItem = useCallback((item: any) => {
        // define name and rssi
        return (
          <TouchableOpacity
            style={{
              flex: 1,
              flexDirection: 'row',
              justifyContent: 'space-between',
              padding: 16,
              backgroundColor: '#2A2A2A',
            }}
            onPress={() => {
              connectBle && connectBle(item.item, onBleConnected);
            }}>
            <Text style={{textAlign: 'left', marginRight: 8, color: 'white'}}>
              {item.item.name}
            </Text>
            <Text style={{textAlign: 'right'}}>{getRssi(item.item.rssi)}</Text>
          </TouchableOpacity>
        );
      }, []);
    
      const ItemSeparator = useCallback(() => {
        return <View style={styles.divider} />;
      }, []);
    
      // render
      // Ble List and scan button
      return (
        <View style={styles.container}>
          {/* Loader when app is scanning */}
          {isScanning ? (
            <ActivityIndicator size={'small'} />
          ) : (
            <>
              {/* Ble devices List View */}
              {scannedbleList && scannedbleList.length > 0 ? (
                <>
                  <Text style={styles.listHeader}>Discovered BLE Devices</Text>
                  <FlatList
                    data={scannedbleList}
                    renderItem={({item}) => <BleListItem item={item} />}
                    ItemSeparatorComponent={ItemSeparator}
                  />
                </>
              ) : (
                <View style={styles.emptyList}>
              <Text style={styles.emptyListText}>
                No Bluetooth devices discovered. Please tap Scan to search for
                BLE devices.
              </Text>
                </View>
              )}
    
              {/* Scan button */}
              <View style={styles.btnContainer}>
                <Button
                  title="Scan"
                  color={'black'}
                  disabled={isConnected || isScanning}
                  onPress={() => {
                    startScan && startScan();
                  }}
                />
              </View>
            </>
          )}
        </View>
      );
    };
    
    const styles = StyleSheet.create({
      container: {
        flex: 1,
        flexDirection: 'column',
      },
      listHeader: {
        padding: 8,
        color: 'black',
      },
      emptyList: {
        flex: 1,
        justifyContent: 'center',
        alignItems: 'center',
      },
      emptyListText: {
        padding: 8,
        textAlign: 'center',
        color: 'black',
      },
      btnContainer: {
        marginTop: 10,
        marginHorizontal: 16,
        bottom: 10,
        alignItems: 'flex-end',
      },
      divider: {
        height: 1,
        width: '100%',
        marginHorizontal: 8,
        backgroundColor: '#1A1A1A',
      },
    });
    
    export default HomeScreen;
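A quick note on the getRssi helper above: it turns RSSI into a rough distance estimate using the log-distance path-loss model. The constants are assumptions — a measured power of -69 dBm at 1 m and a path-loss exponent of 2 (free space) — that you would calibrate for a real device. Extracted as a standalone sketch:

```typescript
// Log-distance path-loss model:
//   distance ≈ 10 ^ ((measuredPower - rssi) / (10 * n))
// measuredPower: expected RSSI at 1 m (assumed -69 dBm here)
// n: path-loss exponent (assumed 2, i.e. free space; higher indoors)
const estimateDistanceMeters = (
  rssi: number,
  measuredPower = -69,
  pathLossExponent = 2
): number => Math.pow(10, (measuredPower - rssi) / (10 * pathLossExponent));
```

With these defaults, an RSSI of -69 maps to about 1 m and -89 to about 10 m — useful for ordering devices by proximity, not for precise ranging.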

Now, open App.tsx and replace its entire contents with the following code. Here we swap the default boilerplate that the React Native CLI generated when creating the project for our own code, wrapping HomeScreen in the BleContextProvider.

    import React from 'react';
    import {SafeAreaView, StatusBar, useColorScheme, View} from 'react-native';
    
    import {Colors} from 'react-native/Libraries/NewAppScreen';
    import {BleContextProvider} from './BleContextProvider';
    import HomeScreen from './HomeScreen';
    
    function App(): JSX.Element {
      const isDarkMode = useColorScheme() === 'dark';
    
      const backgroundStyle = {
        backgroundColor: isDarkMode ? Colors.darker : Colors.lighter,
      };
    
      return (
        <SafeAreaView style={backgroundStyle}>
          <StatusBar
            barStyle={isDarkMode ? 'light-content' : 'dark-content'}
            backgroundColor={backgroundStyle.backgroundColor}
          />
          <BleContextProvider>
            <View style={{height: '100%', width: '100%'}}>
              <HomeScreen />
            </View>
          </BleContextProvider>
        </SafeAreaView>
      );
    }
    
    export default App;

    Running the application on an Android device: Upon launching the app, you will be presented with an empty list message accompanied by a scan button. Simply tap the scan button to retrieve a list of available BLE peripherals within the range of your mobile device. By selecting a specific BLE device from the list, you can establish a connection with it.

Awesome! Now we are able to scan, detect, and connect to BLE devices, but there is more to it than just connecting. We can also write data to and read data from BLE devices, and based on that information, mobile applications or backend services can perform several other operations.

For example, suppose you are wearing a connected BLE device that monitors your blood pressure every hour. If a reading goes beyond the threshold, the app can trigger a call to a doctor or family members so they can take precautionary measures as soon as possible.

    Communicating with BLE devices

    For seamless communication with a BLE device, the mobile app must possess precise knowledge of the services and characteristics associated with the device. A BLE device typically presents multiple services, each comprising various distinct characteristics. These services and characteristics can be collaboratively defined and shared by the team responsible for manufacturing the BLE device.

    In BLE communication, comprehending the characteristics and their properties is crucial, as they serve distinct purposes. Certain characteristics facilitate writing data to the BLE device, while others enable reading data from it. Gaining a comprehensive understanding of these characteristics and their properties is vital for effectively interacting with the BLE device and ensuring seamless communication.
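To make this concrete: retrieveServices resolves with the peripheral’s services and characteristics, and each characteristic carries a properties descriptor. Below is a sketch of grouping characteristics by capability — the field names mirror what react-native-ble-manager reports, but treat the exact shape as an assumption, since it differs slightly between Android and iOS:

```typescript
// Hypothetical helper: group a peripheral's characteristics by what they
// support. `properties` is modeled here as an object with optional
// Read/Write/Notify keys — an assumption about the library's shape.
interface BleCharacteristic {
  characteristic: string;
  service: string;
  properties: { Read?: string; Write?: string; Notify?: string };
}

const groupByCapability = (characteristics: BleCharacteristic[]) => ({
  readable: characteristics.filter((c) => c.properties.Read),
  writable: characteristics.filter((c) => c.properties.Write),
  notifying: characteristics.filter((c) => c.properties.Notify),
});
```

Grouping like this makes it easy to verify, right after connecting, that the characteristic you intend to write to actually supports writes.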

Reading data when the BLE device sends it

Once the mobile app successfully establishes a connection with the BLE device, it retrieves the available services and activates a listener to begin receiving notifications from the BLE device. This takes place within the success callback of the connectBle method, ensuring that the app retrieves the necessary information and starts listening for updates from the connected BLE device as soon as the connection is made.

    const connectBle = (peripheral: any, callback?: (name: string) => void) => {
        if (peripheral && peripheral.name && peripheral.name == BLE_NAME) {
          BleManager.connect(peripheral.id)
            .then((resp) => {
              dispatch({ type: "connected", payload: { peripheral } })
              // callback from the caller
              callback && callback(peripheral.name)
              setConnectedDevice(peripheral)
    
              // retrieve services and start read notification
              BleManager.retrieveServices(peripheral.id).then((resp) => {
                BleManager.startNotification(
                  peripheral.id,
                  BLE_SERVICE_ID,
                  BLE_READ_CHAR_ID
                )
                  .then(console.log)
                  .catch(console.error)
              })
            })
            .catch((err) => {
              console.log("failed connecting to the device", err)
            })
        }
      }

    Consequently, the application will promptly receive notifications whenever the BLE device writes data to the designated characteristic within the specified service.

    Reading and writing data to BLE from a mobile device

To establish communication between the mobile app and the BLE device, we will implement new methods within BleContextProvider. These methods will facilitate the reading and writing of data to the BLE device. By exposing these methods through BleContextProvider’s context value, we ensure that the app has a reliable means of interacting with the BLE device and can seamlessly exchange information as required.

    interface BleState {
      isConnected: boolean
      isScanning: boolean
      peripherals: Map<string, any>
      list: Array<any>
      connectedBle: Peripheral | undefined
      startScan?: () => void
      connectBle?: (peripheral: any, callback?: (name: string) => void) => void
      readFromBle?: (id: string) => void
  writeToBle?: (
    id: string,
    content: string,
    count: number,
    buttonNumber: ButtonNumber,
    callback?: (count: number, buttonNumber: ButtonNumber) => void
  ) => void
}

export const BleContextProvider = ({
  children,
}: {
  children: React.ReactNode
}) => {
    ....

    const writeToBle = (
    id: string,
    content: string,
    count: number,
    buttonNumber: ButtonNumber,
    callback?: (count: number, buttonNumber: ButtonNumber) => void
  ) => {
    BleManager.retrieveServices(id).then((response) => {
      BleManager.writeWithoutResponse(
        id,
        BLE_SERVICE_ID,
        BLE_WRITE_CHAR_ID,
        toByteArray(content)
      )
        .then((res) => {
          callback && callback(count, buttonNumber)
        })
        .catch((res) => console.log("Error writing to BLE device - ", res))
    })
  }

  const readFromBle = (id: string) => {
    BleManager.retrieveServices(id).then((response) => {
      BleManager.read(id, BLE_SERVICE_ID, BLE_READ_CHAR_ID)
        .then((resp) => {
          console.log("Read from BLE", toStringFromBytes(resp))
        })
        .catch((err) => {
          console.error("Error Reading from BLE", err)
        })
    })
  }
  ....

  // render
  return (
    <BleContext.Provider
      value={{
        ...state,
        startScan: startScan,
        connectBle: connectBle,
        writeToBle: writeToBle,
            readFromBle: readFromBle,
          }}
        >
          {children}
        </BleContext.Provider>
      )    
    }

NOTE: Before a write, read, or startNotification call, make sure retrieveServices has been called for the connected peripheral — which is why the snippets above call it before every operation.
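The snippets above also rely on two helpers, toByteArray and toStringFromBytes, that are not part of react-native-ble-manager. Here is a minimal sketch, assuming ASCII-range payloads — for arbitrary text you would swap in a proper UTF-8 encoder/decoder:

```typescript
// Convert a string to the number[] byte payload that BleManager's write
// methods expect. Assumes each character fits in one byte (ASCII range).
const toByteArray = (content: string): number[] =>
  Array.from(content, (char) => char.charCodeAt(0));

// Convert a byte array received from the BLE device back into a string.
const toStringFromBytes = (bytes: number[]): string =>
  bytes.map((byte) => String.fromCharCode(byte)).join("");
```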

    Disconnecting BLE connection

Once you are done with the BLE services, you can close the connection using the library’s disconnect method, which takes the peripheral’s id.

    ....
BleManager.disconnect(peripheral.id)
      .then(() => {
        dispatch({ type: "disconnected", payload: { peripheral } })
      })
      .catch((error) => {
        // Failure code
        console.log(error);
      });
    ....

    Additionally, the React Native BLE Manager library offers various other methods that can enhance the application’s functionality. These include the createBond method, which facilitates the pairing of the BLE device with the mobile app, the stopNotification method, which ceases receiving notifications from the device, and the readRSSI method, which retrieves the received signal strength indicator (RSSI) of the device. For a more comprehensive understanding of the library and its capabilities, I recommend exploring further details on the React Native BLE Manager library documentation here: https://www.npmjs.com/package/react-native-ble-manager

    Conclusion

We delved into the fascinating world of communicating with BLE (Bluetooth Low Energy) devices using the React Native BLE Manager library. We explored the power of BLE technology and how it can be seamlessly integrated into React Native applications to enable efficient and low-power communication between devices.

Using the React Native BLE Manager library, we covered essential functionalities such as scanning for nearby BLE devices, establishing connections, discovering services and characteristics, and exchanging data. We also dove into more advanced features like managing connections and handling notifications for a seamless user experience.

    It’s important to remember that BLE technology is continually evolving, and there may be additional libraries and frameworks available for BLE communication in the React Native ecosystem. As you progress on your journey, I encourage you to explore other resources, keep up with the latest advancements, and stay connected with the vibrant community of developers working with BLE and React Native.

    I hope this blog post has inspired you to explore the immense potential of BLE communication in your React Native applications. By harnessing the power of BLE, you can create innovative, connected experiences that enhance the lives of your users and open doors to new possibilities.

    Thank you for taking the time to read through this blog!

  • A Guide to End-to-End API Test Automation with Postman and GitHub Actions

    Objective

    • The blog intends to provide a step-by-step guide on how to automate API testing using Postman. It also demonstrates how we can create a pipeline for periodically running the test suite.
• Further, it explains how the report can be stored in a central S3 bucket, finally sending the status of the execution back to a designated Slack channel, informing stakeholders about the status and enabling them to obtain detailed information about the quality of the API.

    Introduction to Postman

    • To speed up the API testing process and improve the accuracy of our APIs, we are going to automate the API functional tests using Postman.
    • Postman is a great tool when trying to dissect RESTful APIs.
    • It offers a sleek user interface to create our functional tests to validate our API’s functionality.
    • Furthermore, the collection of tests will be integrated with GitHub Actions to set up a CI/CD platform that will be used to automate this API testing workflow.

    Getting started with Postman

    Setting up the environment

    • Click on the “New” button on the top left corner. 
    • Select “Environment” as the building block.
    • Give the desired name to the environment file.

    Create a collection

    • Click on the “New” button on the top left corner.
    • Select “Collection” as the building block.
    • Give the desired name to the Collection.

Adding requests to the collection

    • Configure the Requests under test in folders as per requirement.
    • Enter the API endpoint in the URL field.
    • Set the Auth credentials necessary to run the endpoint. 
    • Set the header values, if required.
    • Enter the request body, if applicable.
    • Send the request by clicking on the “Send” button.
    • Verify the response status and response body.

    Creating TESTS

    • Click on the “Tests” tab.
    • Write the test scripts in JavaScript using the Postman test API.
    • Run the tests by clicking on the “Send” button and validate the execution of the tests written.
    • Alternatively, the prebuilt snippets given by Postman can also be used to create the tests.
    • In case some test data needs to be created, the “Pre-request Script“ tab can be used.
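As an illustration of what such test scripts can look like, here is a hedged sketch. Inside Postman, only the pm.test calls are needed; the small pm stub below exists solely so the sketch can run outside the Postman sandbox, and the response values are made up:

```javascript
// Stub of Postman's `pm` API so this sketch runs outside Postman.
// Inside Postman, delete the stub and keep only the pm.test calls.
const results = {};
const pm = {
  response: {
    code: 200,
    responseTime: 120, // milliseconds (made-up value)
    json: () => ({ id: 42, status: "active" }), // made-up body
  },
  test: (name, fn) => {
    try { fn(); results[name] = "pass"; }
    catch (e) { results[name] = "fail"; }
  },
  expect: (actual) => ({
    to: {
      equal: (expected) => {
        if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
      },
      be: {
        below: (limit) => {
          if (!(actual < limit)) throw new Error(`${actual} >= ${limit}`);
        },
      },
    },
  }),
};

// The actual test scripts — these are the lines you paste into the "Tests" tab.
pm.test("Status code is 200", () => {
  pm.expect(pm.response.code).to.equal(200);
});
pm.test("Response time is acceptable", () => {
  pm.expect(pm.response.responseTime).to.be.below(500);
});
pm.test("Body contains the expected status", () => {
  pm.expect(pm.response.json().status).to.equal("active");
});

console.log(results);
```

Each pm.test call registers a named assertion that Postman reports individually, which is what later shows up per-request in the Newman and Allure reports.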

    Running the Collection

    • Click on the ellipses beside the collection created. 
    • Select the environment created in Step 1.
    • Click on the “Run Collection” button.
    • Alternatively, the collection and the env file can be exported and also run via the Newman command.
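For reference, running an exported collection with Newman from the command line looks roughly like this (the file names are placeholders for your own exports):

```shell
# Install Newman globally (requires Node.js)
npm install -g newman

# Run the exported collection against the exported environment file
newman run my-collection.json -e my-environment.json
```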

    Collaboration

    The original collection and the environment file can be exported and shared with others by clicking on the “Export” button. These collections and environments can be version controlled using a system such as Git.

    • While working in a team, members raise PRs for their changes against the original collection and environment via forking: create a fork.
    • Make the necessary changes to the collection and click on Create a pull request.
    • Validate the changes, then approve and merge them into the main collection.

    Integrating with CI/CD

    Creating a pipeline with GitHub Actions

    GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline.

You can create workflows that build and test every pull request to your repository, or deploy merged pull requests to production. To create a pipeline, follow the steps below:

    • Create a .yml file inside the .github/workflows folder at the root level.
    • The same can also be created via GitHub.
    • Configure the necessary actions/steps for the pipeline.

    Workflow File

    • Add a trigger to run the workflow.
    • The schedule in the below code snippet is a GitHub Actions event that triggers the workflow at a specific time interval using a CRON expression.
    • The push and pull_request events denote the Actions event that triggers the workflow for each push and pull request on the develop branch.
    • The workflow_dispatch tag denotes the ability to run the workflow manually, too, from GitHub Actions.
    • Create a job to run the Postman Collection.
    • Check out the code from the current repository. Also, create a directory to store the results.
    • Install Nodejs.
    • Install Newman and necessary dependencies
    • Running the collection
    • Upload Newman report into the directory
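The bullet points above map onto a workflow file roughly like the following sketch (the file name, branch, schedule, and file paths are assumptions, not the blog's exact configuration):

```yaml
# .github/workflows/api-tests.yml (hypothetical name)
name: API Tests

on:
  schedule:
    - cron: "0 6 * * *"   # trigger at a specific interval via a CRON expression
  push:
    branches: [develop]
  pull_request:
    branches: [develop]
  workflow_dispatch:       # allow running the workflow manually

jobs:
  postman-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: mkdir -p results          # directory to store the results
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - name: Install Newman and dependencies
        run: npm install -g newman newman-reporter-htmlextra
      - name: Run the collection
        run: >
          newman run collection.json -e environment.json
          -r cli,htmlextra
          --reporter-htmlextra-export results/report.html
      - name: Upload Newman report
        uses: actions/upload-artifact@v3
        with:
          name: newman-report
          path: results
```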

Generating an Allure report and hosting it on S3

    • Along with the default report that Newman provides, Allure reporting can also be used in order to get a dashboard of the result. 
    • To generate the Allure report, install the Allure dependencies given in the installation step above.
    • Once that is done, add the below code to your .yml file.
    • Create a bucket in S3, which you will be using for storing the reports.
    • Create an IAM role for the bucket.
    • The below code snippet uses the aws-actions/configure-aws-credentials@v1 action to configure your AWS credentials.
    • Allure generates two separate folders, eventually combining them to create a dashboard.

Use the code snippet in the deploy section to upload the contents of the folder to your S3 bucket.

    • Once done, you should be able to see the Allure dashboard hosted on the Static Website URL for your bucket.

    Send Slack notification with the Status of the job

    • When a job is executed in a CI/CD pipeline, it’s important to keep the team members informed about the status of the job. 
    • The GitHub Actions step below sends a notification to a Slack channel with the status of the job.
    • It uses the “notify-slack-action” GitHub Action, which is defined in the “ravsamhq/notify-slack-action” repository.
    • The “if: always()” condition indicates that this step should always be executed, regardless of whether the previous steps in the workflow succeeded or failed.
  • Why Signals Could Be the Future for Modern Web Frameworks?

    Introduction

When React was introduced, it had an edge over the other libraries and frameworks of that era because of a very interesting concept called one-way data binding, or, in simpler words, a unidirectional flow of data, introduced as part of the Virtual DOM.

It made for a fantastic developer experience where one didn’t have to think about how updates flow through the UI when data (“state”, to be more technical) changes.

However, as more and more hooks were introduced, syntactical rules accumulated to make sure they perform in the most optimal way: essentially, a deviation from the original purpose of React, which is a unidirectional flow with explicit mutations.

    To call out a few

    • Filling out the dependency arrays correctly
    • Memoizing the right values or callbacks for rendering optimization
    • Consciously avoiding prop drilling

And possibly a few more that, if done the wrong way, could cause serious performance issues (i.e., everything just re-renders): a slight deviation from the original purpose of just writing components to build UIs.

The use of signals is a good example of how adopting Reactive programming primitives can remove all this complexity and improve the developer experience by shifting focus to the right things, without having to explicitly follow a set of syntactical rules to gain performance.

    What Is a Signal?

A signal is one of the key primitives of Reactive programming. Syntactically, signals are very similar to states in React. However, the reactive capabilities of a signal are what give it the edge.

    const [state, setState] = useState(0);
    // state -> value
    // setState -> setter
    const [signal, setSignal] = createSignal(0);
    // signal -> getter 
    // setSignal -> setter

At this point, they look pretty much the same—except that useState returns a value and createSignal returns a getter function.

    How is a signal better than a state?

Once useState returns a value, the library generally doesn’t concern itself with how the value is used. It’s the developer who has to decide where to use that value and has to explicitly make sure that any effects, memos, or callbacks that want to subscribe to changes to that value have it mentioned in their dependency lists, and, in addition, memoize that value to avoid unnecessary re-renders. A lot of additional effort.

    function ParentComponent() {
      const [state, setState] = useState(0);
      const stateVal = useMemo(() => {
        return doSomeExpensiveStateCalculation(state);
      }, [state]); // Explicitly memoize and make sure dependencies are accurate
      
      useEffect(() => {
        sendDataToServer(state);
  }, [state]); // Explicitly call out subscription to state
      
      return (
        <div>
          <ChildComponent stateVal={stateVal} />
        </div>
      );
    }

createSignal, however, returns a getter function because signals are reactive in nature. To break it down further, signals keep track of who is interested in the state’s changes, and when changes occur, they notify these subscribers.

To gain this subscriber information, signals keep track of the context in which these state getters, which are essentially functions, are called. Invoking the getter creates a subscription.

    This is super helpful as the library is now, by itself, taking care of the subscribers who are subscribing to the state’s changes and notifying them without the developer having to explicitly call it out.

    createEffect(() => {
  updateDataElsewhere(state());
    }); // effect only runs when `state` changes - an automatic subscription
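To make the mechanism concrete, here is a minimal, framework-agnostic sketch of how a signal can track its subscribers (illustrative only, not Solid’s actual implementation):

```javascript
// Minimal signal/effect sketch — not any framework's real implementation.
let currentObserver = null;

function createSignal(initial) {
  let value = initial;
  const subscribers = new Set();
  const getter = () => {
    // Reading the signal inside an effect registers that effect as a subscriber.
    if (currentObserver) subscribers.add(currentObserver);
    return value;
  };
  const setter = (next) => {
    value = next;
    // Notify every effect that has read this signal.
    subscribers.forEach((fn) => fn());
  };
  return [getter, setter];
}

function createEffect(fn) {
  const run = () => {
    currentObserver = run;
    try { fn(); } finally { currentObserver = null; }
  };
  run(); // run once immediately so the getter can collect the subscription
}

// Usage: the effect re-runs automatically whenever the signal changes.
const [count, setCount] = createSignal(0);
const log = [];
createEffect(() => log.push(count()));
setCount(1);
setCount(2);
console.log(log); // → [0, 1, 2]
```

The key detail is the global currentObserver: calling the getter while an effect is running is what creates the subscription, with no dependency array in sight.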

The contexts (not to be confused with the React Context API) that invoke the getter are the only ones the library will notify. This means memoizing, explicitly filling out large dependency arrays, and hunting down unnecessary re-renders can all be avoided, along with many of the hooks meant for those purposes, such as useRef, useCallback, and useMemo.

This greatly enhances the developer experience and shifts the focus back to building components for the UI rather than spending extra effort abiding by strict syntactical rules for performance optimization.

    function ParentComponent() {
      const [state, setState] = createSignal(0);
  const stateVal = doSomeExpensiveStateCalculation(state()); // no need to memoize explicitly
    
      createEffect(() => {
        sendDataToServer(state());
      }); // will only be fired if state changes - the effect is automatically added as a subscriber
    
      return (
        <div>
          <ChildComponent stateVal={stateVal} />
        </div>
      );
    }

    Conclusion

    It might look like there’s a very biased stance toward using signals and reactive programming in general. However, that’s not the case.

    React is a high-performance, optimized library—even though there are some gaps or misses in using your state in an optimum way, which leads to unnecessary re-renders, it’s still really fast. After years of using React a certain way, frontend developers are used to visualizing a certain flow of data and re-rendering, and replacing that entirely with a reactive programming mindset is not natural. React is still the de facto choice for building user interfaces, and it will continue to be with every iteration and new feature added.

    Reactive programming, in addition to performance enhancements, also makes the developer experience much simpler by boiling down to three major primitives: Signal, Memo, and Effects. This helps focus more on building components for UIs rather than worrying about dealing explicitly with performance optimization.

Signals are increasingly popular and are a part of many modern web frameworks, such as Solid.js, Preact, Qwik, and Vue.js.

  • Machine Learning in Flutter using TensorFlow

Machine learning has become part of day-to-day life. Small tasks like searching for songs on YouTube and getting suggestions on Amazon use ML in the background. This is a well-developed field of technology with immense possibilities. But how can we use it?

This blog is aimed at explaining how easy it is to use machine learning models (which will act as a brain) to build powerful ML-based Flutter applications. We will briefly touch on the following points.

    1. Definitions

    Let’s jump to the part where most people are confused. A person who is not exposed to the IT industry might think AI, ML, & DL are all the same. So, let’s understand the difference.  

    Figure 01

    1.1. Artificial Intelligence (AI): 

    AI, i.e. artificial intelligence, is a concept of machines being able to carry out tasks in a smarter way. You all must have used YouTube. In the search bar, you can type the lyrics of any song, even lyrics that are not necessarily the starting part of the song or title of songs, and get almost perfect results every time. This is the work of a very powerful AI.
    Artificial intelligence is the ability of a machine to do tasks that are usually done by humans. This ability is special because the task we are talking about requires human intelligence and discernment.

    1.2. Machine Learning (ML):

Machine learning is a subset of artificial intelligence. It is based on the idea that we expose machines to new data, which can be complete or partial, and let the machine decide the future output. We can also say it is a sub-field of AI that deals with the extraction of patterns from data sets. By processing new data and comparing it against its last results, the machine slowly approaches the expected result. This means that the machine can find rules for optimal behavior to produce new output. It can also adapt itself to new, changing data, just like humans.
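As a toy illustration of “extracting a rule from data”, the following sketch fits y = w·x to a handful of samples by gradient descent (purely illustrative; real ML relies on frameworks such as TensorFlow):

```javascript
// Toy "machine learning": discover the rule y = w * x from data.
const data = [
  [1, 2], [2, 4], [3, 6], [4, 8], // samples of y = 2x
];

let w = 0; // the rule (weight) the machine must learn
const learningRate = 0.01;

for (let step = 0; step < 1000; step++) {
  // Gradient of the mean squared error with respect to w
  let grad = 0;
  for (const [x, y] of data) {
    grad += 2 * (w * x - y) * x;
  }
  w -= learningRate * (grad / data.length); // nudge w toward lower error
}

console.log(w.toFixed(2)); // → "2.00": the machine found the rule from the data
```

With each pass the weight moves a little closer to the value that best explains the samples, which is the essence of “learning from data” described above.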

1.3. Deep Learning (DL):

    Deep learning is again a smaller subset of machine learning, which is essentially a neural network with multiple layers. These neural networks attempt to simulate the behavior of the human brain, so you can say we are trying to create an artificial human brain. With one layer of a neural network, we can still make approximate predictions, and additional layers can help to optimize and refine for accuracy.

    2. Types of ML

    Before starting the implementation, we need to know the types of machine learning because it is very important to know which type is more suitable for our expected functionality.

    Figure 02

    2.1. Supervised Learning

    As the name suggests, in supervised learning, the learning happens under supervision. Supervision means the data that is provided to the machine is already classified data i.e., each piece of data has fixed labels, and inputs are already mapped to the output.
Once the machine has learned, it is ready to classify new data.
    This learning is useful for tasks like fraud detection, spam filtering, etc.

    2.2. Unsupervised Learning

    In unsupervised learning, the data given to machines to learn is purely raw, with no tags or labels. Here, the machine is the one that will create new classes by extracting patterns.
    This learning can be used for clustering, association, etc.

    2.3. Semi-Supervised Learning

Both supervised and unsupervised learning have their own limitations, because one requires labeled data and the other does not, so this learning combines the behavior of both, and with that, we can overcome the limitations.
In this learning, we feed both raw and categorized data to the machine so it can classify the raw data and, if necessary, create new clusters.

2.4. Reinforcement Learning

For this learning, we feed the feedback on the last output along with new incoming data to the machine so it can learn from its mistakes. This feedback-based process continues until the machine reaches the perfect output. The feedback is given by humans in the form of punishment or reward. For example, a search algorithm gives you a list of results, but users click only on the first one. It is like a human child who learns from every available option and grows by correcting its mistakes.

    3. TensorFlow

Machine learning is a complex process in which we need to perform multiple activities, like acquiring and processing data, training models, serving predictions, and refining future results.

    To perform such operations, Google developed a framework in November 2015 called TensorFlow. All the above-mentioned processes can become easy if we use the TensorFlow framework. 

For this project, we are not going to use the complete TensorFlow framework but a small tool called TensorFlow Lite.

    3.1. TensorFlow Lite

    TensorFlow Lite allows us to run the machine learning models on devices with limited resources, like limited RAM or memory.

    3.2. TensorFlow Lite Features

    • Optimized for on-device ML by addressing five key constraints: 
    • Latency: because there’s no round-trip to a server 
    • Privacy: because no personal data leaves the device 
    • Connectivity: because internet connectivity is not required 
    • Size: because of a reduced model and binary size
    • Power consumption: because of efficient inference and a lack of network connections
    • Support for Android and iOS devices, embedded Linux, and microcontrollers
    • Support for Java, Swift, Objective-C, C++, and Python programming languages
    • High performance, with hardware acceleration and model optimization
    • End-to-end examples for common machine learning tasks such as image classification, object detection, pose estimation, question answering, text classification, etc., on multiple platforms

    4. What is Flutter?

Flutter is an open-source, cross-platform development framework. With Flutter, using a single code base, we can create applications for Android, iOS, the web, and desktop. It was created by Google and uses Dart as its development language. The first stable version of Flutter was released in December 2018, and since then, there have been many improvements.

    5. Building an ML-Flutter Application

    We are now going to build a Flutter application through which we can find the state of mind of a person from their facial expressions. The below steps explain the update we need to do for an Android-native application. For an iOS application, please refer to the links provided in the steps.

    5.1. TensorFlow Lite – Native setup (Android)

    • In android/app/build.gradle, add the following setting in the android block:
    aaptOptions {
            noCompress 'tflite'
            noCompress 'lite'
        }

    5.2. TensorFlow Lite – Flutter setup (Dart)

    • Create an assets folder and place your label file and model file in it. (These files we will create shortly.) In pubspec.yaml add:
    assets:
       - assets/labels.txt
       - assets/<file_name>.tflite

     

    Figure 02

    • Run this command (install the TensorFlow Lite package):
    $ flutter pub add tflite

    • Add the following line to your package’s pubspec.yaml (and run an implicit flutter pub get):
    dependencies:
         tflite: ^0.9.0

    • Now in your Dart code, you can use:
    import 'package:tflite/tflite.dart';

    • Add camera dependencies to your package’s pubspec.yaml (optional):
    dependencies:
         camera: ^0.10.0+1

    • Now in your Dart code, you can use:
    import 'package:camera/camera.dart';

    • As the camera is a hardware feature, there are a few updates we need to make in the native code for both Android & iOS. To learn more, visit:
    https://pub.dev/packages/camera
    • Following is the code that will appear under dependencies in pubspec.yaml once the setup is complete.
    Figure 03
    • Flutter will automatically download the most recent version if you omit the version number of a package.
    • Do not forget to add the assets folder in the root directory.

    5.3. Generate model (using website)

    • Click on Get Started

    • Select Image project
    • There are three different categories of ML projects available. We’ll choose an image project since we’re going to develop a project that analyzes a person’s facial expression to determine their emotional condition.
    • The other two types, audio project and pose project, will be useful for creating projects that involve audio operation and human pose indication, respectively.

    • Select Standard Image model
    • Once more, there are two distinct groups of image machine learning projects. Since we are creating a project for an Android smartphone, we will select a Standard image model.
    • The other type, an Embedded Image Model project, is designed for hardware with relatively little memory and computing power.

    • Upload images for training the classes
    • We will create new classes by clicking on “Add a class.”
    • We must upload photographs to these classes as we are developing a project that analyzes a person’s emotional state from their facial expression.
    • The more photographs we upload, the more precise our result will be.
    • Click on train model and wait till training is over
    • Click on Export model
    • Select TensorFlow Lite Tab -> Quantized  button -> Download my model

    5.4. Add files/models to the Flutter project

    • Labels.txt

This file contains all the class names that you created during model creation.

     

    • *.tflite

The downloaded ZIP contains the model file along with its associated files.

    5.5. Load & Run ML-Model

    • We import the model from assets; this model will serve as the project’s brain.
    • We configure the camera using a camera controller and obtain a live feed (cameras[0] is the front camera).

    6. Conclusion

As discussed in this blog, with TensorFlow Lite and a pre-trained model, we can build a powerful ML-based Flutter application with very little effort.

  • Discover the Benefits of Android Clean Architecture

    All architectures have one common goal: to manage the complexity of our application. We may not need to worry about it on a smaller project, but it becomes a lifesaver on larger ones. The purpose of Clean Architecture is to minimize code complexity by preventing implementation complexity.

    We must first understand a few things to implement the Clean Architecture in an Android project.

    • Entities: Encapsulate enterprise-wide critical business rules. An entity can be an object with methods or data structures and functions.
    • Use cases: Orchestrate the flow of data to and from the entities.
    • Controllers, gateways, presenters: A set of adapters that convert data from the use cases and entities format to the most convenient way to pass the data to the upper level (typically the UI).
    • UI, external interfaces, DB, web, devices: The outermost layer of the architecture, generally composed of frameworks such as database and web frameworks.

Here is one rule of thumb we need to follow. First, look at the direction of the arrows in the diagram. Entities do not depend on use cases, and use cases do not depend on controllers, and so on. An inner, higher-level module should never rely on an outer, lower-level one. The dependencies between the layers must point inwards.

    Advantages of Clean Architecture:

    • Strict architecture—hard to make mistakes
    • Business logic is encapsulated and easy to use and test
    • Enforcement of dependencies through encapsulation
    • Allows for parallel development
    • Highly scalable
    • Easy to understand and maintain
    • Testing is facilitated

Let’s understand this using a small case study of an Android project, which gives practical rather than purely theoretical knowledge.

    A pragmatic approach

A typical Android project needs to separate the concerns between the UI, the business logic, and the data model, so taking “the theory” into account, we decided to split the project into three modules:

    • Domain Layer: contains the definitions of the business logic of the app, the data models, the abstract definition of repositories, and the definition of the use cases.
    Domain Module
    • Data Layer: This layer provides the concrete implementations of all the data sources. Any application can reuse this without modification. It contains the repository and data source implementations, the database definition and its DAOs, the network API definitions, and some mappers to convert network API models to database models, and vice versa.
    Data Module
    • Presentation layer: This is the layer that mainly interacts with the UI. It’s Android-specific and contains fragments, view models, adapters, activities, composable, and so on. It also includes a service locator to manage dependencies.
    Presentation Module

    Marvel’s comic characters App

    To elaborate on all the above concepts related to Clean Architecture, we are creating an app that lists Marvel’s comic characters using Marvel’s developer API. The app shows a list of Marvel characters, and clicking on each character will show details of that character. Users can also bookmark their favorite characters. It seems like nothing complicated, right?

    Before proceeding further into the sample, it’s good to have an idea of the following frameworks because the example is wholly based on them.

    • Jetpack Compose – Android’s recommended modern toolkit for building native UI.
    • Retrofit 2 – A type-safe HTTP client for Android for Network calls.
    • ViewModel – A class responsible for preparing and managing the data for an activity or a fragment.
    • Kotlin – Kotlin is a cross-platform, statically typed, general-purpose programming language with type inference.

To get the characters list, we have used Marvel’s developer API, which returns the list of Marvel characters.

    http://gateway.marvel.com/v1/public/characters

    The domain layer

    In the domain layer, we define the data model, the use cases, and the abstract definition of the character repository. The API returns a list of characters, with some info like name, description, and image links.

    data class CharacterEntity(
        val id: Long,
        val name: String,
        val description: String,
        val imageUrl: String,
        val bookmarkStatus: Boolean
    )

    interface MarvelDataRepository {
        suspend fun getCharacters(dataSource: DataSource): Flow<List<CharacterEntity>>
        suspend fun getCharacter(characterId: Long): Flow<CharacterEntity>
        suspend fun toggleCharacterBookmarkStatus(characterId: Long): Boolean
        suspend fun getComics(dataSource: DataSource, characterId: Long): Flow<List<ComicsEntity>>
    }

    class GetCharactersUseCase(
        private val marvelDataRepository: MarvelDataRepository,
        private val ioDispatcher: CoroutineDispatcher = Dispatchers.IO
    ) {
        operator fun invoke(forceRefresh: Boolean = false): Flow<List<CharacterEntity>> {
            return flow {
                emitAll(
                    marvelDataRepository.getCharacters(
                        if (forceRefresh) {
                            DataSource.Network
                        } else {
                            DataSource.Cache
                        }
                    )
                )
            }
                .flowOn(ioDispatcher)
        }
    }

    The data layer

    As we said before, the data layer must implement the abstract definition of the domain layer, so we need to put the repository’s concrete implementation in this layer. To do so, we can define two data sources, a “local” data source to provide persistence and a “remote” data source to fetch the data from the API.

    class MarvelDataRepositoryImpl(
        private val marvelRemoteService: MarvelRemoteService,
        private val charactersDao: CharactersDao,
        private val comicsDao: ComicsDao,
        private val ioDispatcher: CoroutineDispatcher = Dispatchers.IO
    ) : MarvelDataRepository {
    
        override suspend fun getCharacters(dataSource: DataSource): Flow<List<CharacterEntity>> =
            flow {
                emitAll(
                    when (dataSource) {
                        is DataSource.Cache -> getCharactersCache().map { list ->
                            if (list.isEmpty()) {
                                getCharactersNetwork()
                            } else {
                                list.toDomain()
                            }
                        }
                            .flowOn(ioDispatcher)
    
                        is DataSource.Network -> flowOf(getCharactersNetwork())
                            .flowOn(ioDispatcher)
                    }
                )
            }
    
        private suspend fun getCharactersNetwork(): List<CharacterEntity> =
            marvelRemoteService.getCharacters().body()?.data?.results?.let { remoteData ->
                if (remoteData.isNotEmpty()) {
                    charactersDao.upsert(remoteData.toCache())
                }
                remoteData.toDomain()
            } ?: emptyList()
    
        private fun getCharactersCache(): Flow<List<CharacterCache>> =
            charactersDao.getCharacters()
    
        override suspend fun getCharacter(characterId: Long): Flow<CharacterEntity> =
            charactersDao.getCharacterFlow(id = characterId).map {
                it.toDomain()
            }
    
        override suspend fun toggleCharacterBookmarkStatus(characterId: Long): Boolean {
    
            val status = charactersDao.getCharacter(characterId)?.bookmarkStatus?.not() ?: false
    
            return charactersDao.toggleCharacterBookmarkStatus(id = characterId, status = status) > 0
        }
    
        override suspend fun getComics(
            dataSource: DataSource,
            characterId: Long
        ): Flow<List<ComicsEntity>> = flow {
            emitAll(
                when (dataSource) {
                    is DataSource.Cache -> getComicsCache(characterId = characterId).map { list ->
                        if (list.isEmpty()) {
                            getComicsNetwork(characterId = characterId)
                        } else {
                            list.toDomain()
                        }
                    }
                    is DataSource.Network -> flowOf(getComicsNetwork(characterId = characterId))
                        .flowOn(ioDispatcher)
                }
            )
        }
    
        private suspend fun getComicsNetwork(characterId: Long): List<ComicsEntity> =
            marvelRemoteService.getComics(characterId = characterId)
                .body()?.data?.results?.let { remoteData ->
                    if (remoteData.isNotEmpty()) {
                        comicsDao.upsert(remoteData.toCache(characterId = characterId))
                    }
                    remoteData.toDomain()
                } ?: emptyList()
    
        private fun getComicsCache(characterId: Long): Flow<List<ComicsCache>> =
            comicsDao.getComics(characterId = characterId)
    }

    Since we defined the data source to manage persistence, in this layer, we also need to determine the database for which we are using the room database. In addition, it’s good practice to create some mappers to map the API response to the corresponding database entity.

    fun List<Characters>.toCache() = map { character -> character.toCache() }
    
    fun Characters.toCache() = CharacterCache(
        id = id ?: 0,
        name = name ?: "",
        description = description ?: "",
        imageUrl = thumbnail?.let {
            "${it.path}.${it.extension}"
        } ?: ""
    )
    
    fun List<Characters>.toDomain() = map { character -> character.toDomain() }
    
    fun Characters.toDomain() = CharacterEntity(
        id = id ?: 0,
        name = name ?: "",
        description = description ?: "",
        imageUrl = thumbnail?.let {
            "${it.path}.${it.extension}"
        } ?: "",
        bookmarkStatus = false
    )

    @Entity
    data class CharacterCache(
        @PrimaryKey
        val id: Long,
        val name: String,
        val description: String,
        val imageUrl: String,
        val bookmarkStatus: Boolean = false
    ) : BaseCache

    The presentation layer

    In this layer, we need a UI component such as a fragment, activity, or composable to display the list of characters. Here we can use the widely adopted MVVM approach: the view model takes the use cases in its constructor and invokes the corresponding use case according to user actions (get a character, characters and comics, etc.).

    Each use case will invoke the appropriate method in the repository.

    class CharactersListViewModel(
        private val getCharacters: GetCharactersUseCase,
        private val toggleCharacterBookmarkStatus: ToggleCharacterBookmarkStatus
    ) : ViewModel() {
    
        private val _characters = MutableStateFlow<UiState<List<CharacterViewState>>>(UiState.Loading())
        val characters: StateFlow<UiState<List<CharacterViewState>>> = _characters
    
        init {
            _characters.value = UiState.Loading()
            getAllCharacters()
        }
    
        private fun getAllCharacters(forceRefresh: Boolean = false) {
            getCharacters(forceRefresh)
                .catch { error ->
                    error.printStackTrace()
                    when (error) {
                        is UnknownHostException, is ConnectException, is SocketTimeoutException -> _characters.value =
                            UiState.NoInternetError(error)
                        else -> _characters.value = UiState.ApiError(error)
                    }
                }.map { list ->
                    _characters.value = UiState.Loaded(list.toViewState())
                }.launchIn(viewModelScope)
        }
    
        fun refresh(showLoader: Boolean = false) {
            if (showLoader) {
                _characters.value = UiState.Loading()
            }
            getAllCharacters(forceRefresh = true)
        }
    
        fun bookmarkCharacter(characterId: Long) {
            viewModelScope.launch {
                toggleCharacterBookmarkStatus(characterId = characterId)
            }
        }
    }

    /*
    * Scaffold (Layout) for the Characters list page
    */
    
    @SuppressLint("UnusedMaterialScaffoldPaddingParameter")
    @Composable
    fun CharactersListScaffold(
        showComics: (Long) -> Unit,
        closeAction: () -> Unit,
        modifier: Modifier = Modifier,
        charactersListViewModel: CharactersListViewModel = getViewModel()
    ) {
        Scaffold(
            modifier = modifier,
            topBar = {
                TopAppBar(
                    title = {
                        Text(text = stringResource(id = R.string.characters))
                    },
                    navigationIcon = {
                        IconButton(onClick = closeAction) {
                            Icon(
                                imageVector = Icons.Filled.Close,
                                contentDescription = stringResource(id = R.string.close_icon)
                            )
                        }
                    }
                )
            }
        ) {
            val state = charactersListViewModel.characters.collectAsState()
    
            when (state.value) {
    
                is UiState.Loading -> {
                    Loader()
                }
    
                is UiState.Loaded -> {
                    state.value.data?.let { characters ->
                        val isRefreshing = remember { mutableStateOf(false) }
                        SwipeRefresh(
                            state = rememberSwipeRefreshState(isRefreshing = isRefreshing.value),
                            onRefresh = {
                                isRefreshing.value = true
                                charactersListViewModel.refresh()
                            }
                        ) {
                            isRefreshing.value = false
    
                            if (characters.isNotEmpty()) {
    
                                LazyVerticalGrid(
                                    columns = GridCells.Fixed(2),
                                    modifier = Modifier
                                        .padding(5.dp)
                                        .fillMaxSize()
                                ) {
                                    items(characters) { state ->
                                        CharacterTile(
                                            state = state,
                                            characterSelectAction = {
                                                showComics(state.id)
                                            },
                                            bookmarkAction = {
                                                charactersListViewModel.bookmarkCharacter(state.id)
                                            },
                                            modifier = Modifier
                                                .padding(5.dp)
                                                .fillMaxHeight(fraction = 0.35f)
                                        )
                                    }
                                }
    
                            } else {
                                Info(
                                    messageResource = R.string.no_characters_available,
                                    iconResource = R.drawable.ic_no_data
                                )
                            }
                        }
                    }
                }
    
                is UiState.ApiError -> {
                    Info(
                        messageResource = R.string.api_error,
                        iconResource = R.drawable.ic_something_went_wrong
                    )
                }
    
                is UiState.NoInternetError -> {
                    Info(
                        messageResource = R.string.no_internet,
                        iconResource = R.drawable.ic_no_connection,
                        isInfoOnly = false,
                        buttonAction = {
                            charactersListViewModel.refresh(showLoader = true)
                        }
                    )
                }
            }
        }
    }
    
    @Preview
    @Composable
    private fun CharactersListScaffoldPreview() {
        MarvelComicTheme {
            CharactersListScaffold(showComics = {}, closeAction = {})
        }
    }

    Let’s see what the communication between the layers looks like.

    Source: Clean Architecture Tutorial for Android

    As you can see, each layer communicates only with the closest one, keeping inner layers independent from the outer ones. This way, we can easily test each module separately, and the separation of concerns helps developers collaborate on the different modules of the project.

    Thank you so much!

  • Node.js – Async Your Way out of Callback Hell with Promises, Async & Async/Await

    In this blog, I will compare various methods to avoid the dreaded callback hell that is common in Node.js. What exactly am I talking about? Have a look at the piece of code below. Every child function executes only when the result of its parent function is available. Callbacks are the very essence of the non-blocking (and hence performant) nature of Node.js.

    foo(arg, (err, val) => {
         if (err) {
              console.log(err);
         } else {
              val += 1;
              bar(val, (err1, val1) => {
                   if (err1) {
                        console.log(err1);
                   } else {
                        val1 += 2;
                        baz(val1, (err2, result) => {
                             if (err2) {
                                  console.log(err2);
                             } else {
                                  result += 3;
                                  console.log(result); // 6
                             }
                        });
                   }
              });
         }
    });

    Convinced yet? Even though there is some seemingly unnecessary error handling done here, I assume you get the drift! The problem with such code is more than just indentation: our program’s entire flow is based on side effects, with each function incidentally calling the next inner one.

    There are multiple ways in which we can avoid writing such deeply nested code. Let’s have a look at our options:

    Promises

    According to the official specification, a promise represents the eventual result of an asynchronous operation. Basically, it represents an operation that has not completed yet but is expected to in the future. The then method is a major component of a promise: it is used to consume the promise’s outcome, which is either a fulfilled value or a rejection reason. Only one of these two will ever be set. Let’s have a look at a simple file-read example without using promises:

    fs.readFile(filePath, (err, result) => {
         if (err) { return console.log(err); }
         console.log(result);
    });

    Now, if readFile function returned a promise, the same logic could be written like so:

    const fileReadPromise = fs.readFile(filePath);
    fileReadPromise.then(console.log, console.error);

    The fileReadPromise can then be passed around wherever the file’s contents are needed. This helps in writing robust unit tests for your code, since you now only have to write a single test for a promise. And the code is more readable!

    Chaining using promises

    The then function itself returns a promise, which can be chained to perform the next operation. Rewriting the first code snippet with promises results in this:

    foo(arg)
         .then((val) => bar(val + 1))
         .then((val1) => baz(val1 + 2))
         .then((result) => {
              console.log(result + 3); // 6
         })
         .catch((err) => console.log(err));

    As is evident, this makes the code more composed, readable, and easier to maintain. Also, instead of chaining, we could have used Promise.all. Promise.all takes an array of promises as input and returns a single promise that resolves when all the promises in the array have resolved. Other useful information on promises can be found here.
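    As a quick sketch of Promise.all with three already-resolved promises (the values here are arbitrary placeholders):

```javascript
// Promise.all resolves once every promise in the array has resolved,
// yielding the results in the original order.
const p1 = Promise.resolve(1);
const p2 = Promise.resolve(2);
const p3 = Promise.resolve(3);

Promise.all([p1, p2, p3])
  .then((values) => console.log(values)) // [ 1, 2, 3 ]
  .catch(console.error); // rejects as soon as any input promise rejects
```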

    The async utility module

    Async is a utility module that provides a set of over 70 functions which can be used to elegantly solve the problem of callback hell. All these functions follow the Node.js convention of error-first callbacks, which means the first callback argument is assumed to be an error (null in case of success). Let’s try to solve the same foo-bar-baz problem using the async module. Here is the code snippet:

    function foo(arg, callback) {
      if (arg < 0) {
        callback('error');
        return;
      }
      callback(null, arg+1);
    }
    
    function bar(arg, callback) {
      if (arg < 0) {
        callback('error');
        return;
      }
      callback(null, arg+2);
    }
    
    function baz(arg, callback) {
      if (arg < 0) {
        callback('error');
        return;
      }
      callback(null, arg+3);
    }
    
    async.waterfall([
      (cb) => {
        foo(0, cb);
      },
      (arg, cb) => {
        bar(arg, cb);
      },
      (arg, cb) => {
        baz(arg, cb);
      }
    ], (err, result) => {
      if (err) {
        console.log(err);
      } else {
        console.log(result); //6
      }
    });

    Here, I have used the async.waterfall function as an example. There are multiple functions available depending on the nature of the problem you are trying to solve, like async.each for parallel execution, async.eachSeries for serial execution, etc.

    Async/Await

    Now, this is one of the most exciting features to land in JavaScript. It internally uses promises but handles them in a more intuitive manner. Even though it seems like promises and/or 3rd-party modules like async would solve most of the problems, a further simplification is always welcome! For those of you who have worked with C# async/await, this concept is borrowed directly from there and was standardized in ES2017.

    Async/await enables us to write asynchronous promise-based code as if it were synchronous, but without blocking the main thread. An async function always returns a promise whether await is used or not. But whenever an await is observed, the function is paused until the promise either resolves or rejects. Following code snippet should make it clearer:

    async function asyncFun() {
      try {
        const result = await promise;
      } catch(error) {
        console.log(error);
      }
    }

    Here, asyncFun is an async function that captures the promised result using await. This makes the code readable and is a major convenience for developers who are more comfortable with linearly executed languages, all without blocking the main thread.

    Now, like before, let’s solve the foo-bar-baz problem using async/await. Note that foo, bar, and baz individually return promises this time. But instead of chaining, we have written the code linearly, passing each result along to the next call.

    async function fooBarBaz(arg) {
      try {
        const fooResponse = await foo(arg);
        const barResponse = await bar(fooResponse);
        const bazResponse = await baz(barResponse);
    
        return bazResponse;
      } catch (error) {
        return new Error(error);
      }
    }

    How long should you (a)wait for async to come to fore?

    Well, it’s already here in the Chrome 55 release and the latest update of the V8 engine. Native support in the language means we should see much more widespread use of this feature. The only catch is that if you want to use async/await on a codebase that isn’t promise-aware and is based completely on callbacks, it will probably require a lot of wrapping of the existing functions to make them usable.

    To wrap up, async/await definitely makes coding numerous async operations an easier job. Although promises and callbacks will do the job in most cases, async/await looks like the way to make some architectural problems go away and improve code quality.

  • The 7 Most Useful Design Patterns in ES6 (and how you can implement them)

    After spending a couple of years in JavaScript development, I’ve realized how incredibly important design patterns are in modern JavaScript (ES6). And I’d love to share my experience and knowledge on the subject, hoping you’ll make this a critical part of your development process as well.

    Note: All the examples covered in this post are implemented with ES6 features, but you can also integrate the design patterns with ES5.

    At Velotio, we always follow best practices to achieve highly maintainable and more robust code. And we are strong believers of using design patterns as one of the best ways to write clean code. 

    In the post below, I’ve listed the most useful design patterns I’ve implemented so far and how you can implement them too:

    1. Module

    The module pattern simply allows you to keep units of code cleanly separated and organized. 

    Modules promote encapsulation, which means the variables and functions are kept private inside the module body and can’t be overwritten.

    Creating a module in ES6 is quite simple.

    // Addition module
    export const sum = (num1, num2) => num1 + num2;

    // usage
    import { sum } from 'modules/sum';
    const result = sum(20, 30); // 50

    ES6 also allows us to export the module as default. The following example gives you a better understanding of this.

    // All the variables and functions which are not exported are private within the module and cannot be used outside. Only the exported members are public and can be used by importing them.
    
    // Here the businessList is private member to city module
    const businessList = new WeakMap();
     
    // Here City uses the businessList member as it’s in the same module
    class City {
     constructor() {
       businessList.set(this, ['Pizza Hut', 'Dominos', 'Street Pizza']);
     }
     
     // public method to access the private ‘businessList’
     getBusinessList() {
       return businessList.get(this);
     }
    
    // public method to add business to ‘businessList’
     addBusiness(business) {
       businessList.get(this).push(business);
     }
    }
     
    // export the City class as default module
    export default City;

    // usage
    import City from 'modules/city';
    const city = new City();
    city.getBusinessList();

    There is a great article written on the features of ES6 modules here.

    2. Factory

    Imagine creating a notification management application that currently only supports notifications through email, so most of the code lives inside the EmailNotification class. Now a new requirement arrives for push notifications. Because the application is tightly coupled to EmailNotification, implementing PushNotification takes a lot of work, and you would repeat that work for every future notification channel.

    To solve this, we delegate object creation to another object called a factory.

    class PushNotification {
     constructor(sendTo, message) {
       this.sendTo = sendTo;
       this.message = message;
     }
    }
     
    class EmailNotification {
     constructor(sendTo, cc, emailContent) {
       this.sendTo = sendTo;
       this.cc = cc;
       this.emailContent = emailContent;
     }
    }
     
    // Notification Factory
     
    class NotificationFactory {
     createNotification(type, props) {
       switch (type) {
         case 'email':
           return new EmailNotification(props.sendTo, props.cc, props.emailContent);
         case 'push':
           return new PushNotification(props.sendTo, props.message);
       }
     }
    }
     
    // usage
    const factory = new NotificationFactory();
     
    // create email notification
    const emailNotification = factory.createNotification('email', {
     sendTo: 'receiver@domain.com',
     cc: 'test@domain.com',
     emailContent: 'This is the email content to be delivered.!',
    });
     
    // create push notification
    const pushNotification = factory.createNotification('push', {
     sendTo: 'receiver-device-id',
     message: 'The push notification message',
    });

    3. Observer

    (Also known as the publish/subscribe pattern.)

    The observer pattern maintains a list of subscribers and notifies them whenever an event occurs. A subscriber can also be removed if it no longer wishes to be notified.

    On YouTube, for example, the channels we’re subscribed to notify us whenever a new video is uploaded.

    // Publisher
    class Video {
     constructor(observable, name, content) {
       this.observable = observable;
       this.name = name;
       this.content = content;
       // publish the ‘video-uploaded’ event
       this.observable.publish('video-uploaded', {
         name,
         content,
       });
     }
    }
    // Subscriber
    class User {
     constructor(observable) {
       this.observable = observable;
       this.intrestedVideos = [];
       // subscribe with the event naame and the call back function
       this.observable.subscribe('video-uploaded', this.addVideo.bind(this));
     }
     
     addVideo(video) {
       this.intrestedVideos.push(video);
     }
    }
    // Observer 
    class Observable {
     constructor() {
       this.handlers = {};
     }
     
     subscribe(event, handler) {
       this.handlers[event] = this.handlers[event] || [];
       this.handlers[event].push(handler);
     }
     
     publish(event, eventData) {
       const eventHandlers = this.handlers[event];
     
       if (eventHandlers) {
         for (var i = 0, l = eventHandlers.length; i < l; ++i) {
           eventHandlers[i].call({}, eventData);
         }
       }
     }
    }
    // usage
    const observable = new Observable();
    const user = new User(observable);
    const video = new Video(observable, 'ES6 Design Patterns', 'video-content');

    4. Mediator

    The mediator pattern provides a unified interface through which different components of an application can communicate with each other.

    If a system appears to have too many direct relationships between components, it may be time to have a central point of control that components communicate through instead. 

    The mediator promotes loose coupling. 

    A real-world analogy is a traffic light that controls which vehicles go and stop: all the coordination happens through the signal rather than between the vehicles directly.

    Let’s create a chatroom (mediator) through which the participants can register themselves. The chatroom is responsible for handling the routing when the participants chat with each other. 

    // each participant represented by Participant object
    class Participant {
     constructor(name) {
       this.name = name;
       this.chatroom = null;
     }
     
     getParticipantDetails() {
       return this.name;
     }
     
     // routing is delegated to the mediator (chatroom)
     send(message, to) {
       this.chatroom.send(message, this, to);
     }
     
     receive(message, from) {
       console.log(`${from.name} to ${this.name}: ${message}`);
     }
    }
     
    // Mediator
    class Chatroom {
     constructor() {
       this.participants = {};
     }
     
     register(participant) {
       this.participants[participant.name] = participant;
       participant.chatroom = this;
     }
     
     send(message, from, to) {
       if (to) {
         // single message
         to.receive(message, from);
       } else {
         // broadcast message to everyone
          for (const key in this.participants) {
           if (this.participants[key] !== from) {
             this.participants[key].receive(message, from);
           }
         }
       }
     }
    }
     
    // usage
    // Create two participants  
     const john = new Participant('John');
     const snow = new Participant('Snow');
    // Register the participants to Chatroom
     var chatroom = new Chatroom();
     chatroom.register(john);
     chatroom.register(snow);
    // Participants now chat with each other
     john.send('Hey, Snow!');
     john.send('Are you there?');
     snow.send('Hey man', john);
     snow.send('Yes, I heard that!');

    5. Command

    In the command pattern, an operation is wrapped as a command object and passed to the invoker object. The invoker object passes the command to the corresponding object, which executes the command.

    The command pattern decouples the objects executing commands from the objects issuing them by encapsulating actions as objects. The invoker maintains a stack of commands: whenever a command is executed, it is pushed onto the stack; to undo it, the command is popped off the stack and its reverse action is performed.

    You can consider a calculator as a command that performs addition, subtraction, division and multiplication, and each operation is encapsulated by a command object.

    // The list of operations can be performed
    const addNumbers = (num1, num2) => num1 + num2;
    const subNumbers = (num1, num2) => num1 - num2;
    const multiplyNumbers = (num1, num2) => num1 * num2;
    const divideNumbers = (num1, num2) => num1 / num2;
     
    // CalculatorCommand class is initialized with an execute function,
    // an undo function, and the value
    class CalculatorCommand {
     constructor(execute, undo, value) {
       this.execute = execute;
       this.undo = undo;
       this.value = value;
     }
    }
    // Here we are creating the command objects
    const DoAddition = value => new CalculatorCommand(addNumbers, subNumbers, value);
    const DoSubtraction = value => new CalculatorCommand(subNumbers, addNumbers, value);
    const DoMultiplication = value => new CalculatorCommand(multiplyNumbers, divideNumbers, value);
    const DoDivision = value => new CalculatorCommand(divideNumbers, multiplyNumbers, value);
     
    // AdvancedCalculator maintains the list of executed commands so that
    // the most recent command can be undone
    class AdvancedCalculator {
     constructor() {
       this.current = 0;
       this.commands = [];
     }
     
     execute(command) {
       this.current = command.execute(this.current, command.value);
       this.commands.push(command);
     }
     
     undo() {
       let command = this.commands.pop();
       this.current = command.undo(this.current, command.value);
     }
     
     getCurrentValue() {
       return this.current;
     }
    }
    
    // usage
    const advCal = new AdvancedCalculator();
     
    // invoke commands
    advCal.execute(new DoAddition(50)); //50
    advCal.execute(new DoSubtraction(25)); //25
    advCal.execute(new DoMultiplication(4)); //100
    advCal.execute(new DoDivision(2)); //50
     
    // undo commands
    advCal.undo();
    advCal.getCurrentValue(); //100

    6. Facade

    The facade pattern is used when we want to expose a higher level of abstraction and hide the complexity of a large codebase behind it.

    A great example of this pattern is used in the common DOM manipulation libraries like jQuery, which simplifies the selection and events adding mechanism of the elements.

    // JavaScript:
    /* handle click event  */
    document.getElementById('counter').addEventListener('click', () => {
     counter++;
    });
     
    // jQuery:
    /* handle click event */
    $('#counter').on('click', () => {
     counter++;
    });

    Though it seems simple on the surface, there is an entire complex logic implemented when performing the operation.

    The following Account Creation example gives you clarity about the facade pattern: 

    // Here AccountManager is responsible to create new account of type 
    // Savings or Current with the unique account number
    let currentAccountNumber = 0;
    
    class AccountManager {
     createAccount(type, details) {
       const accountNumber = AccountManager.getUniqueAccountNumber();
       let account;
       if (type === 'current') {
         account = new CurrentAccounts();
       } else {
         account = new SavingsAccount();
       }
       return account.addAccount({ accountNumber, details });
     }
     
     static getUniqueAccountNumber() {
       return ++currentAccountNumber;
     }
    }
    
    
    // class Accounts maintains the list of all accounts created
    class Accounts {
     constructor() {
       this.accounts = [];
     }
     
     addAccount(account) {
       this.accounts.push(account);
       return this.successMessage(account);
     }
     
     getAccount(accountNumber) {
       return this.accounts.find(account => account.accountNumber === accountNumber);
     }
     
     successMessage(account) {}
    }
    
    // CurrentAccounts extends the implementation of Accounts for providing more specific success messages on successful account creation
    class CurrentAccounts extends Accounts {
     constructor() {
       super();
       if (CurrentAccounts.exists) {
         return CurrentAccounts.instance;
       }
       CurrentAccounts.instance = this;
       CurrentAccounts.exists = true;
       return this;
     }
     
     successMessage({ accountNumber, details }) {
       return `Current Account created for ${details.name}. ${accountNumber} is your account number.`;
     }
    }
     
    // Same here, SavingsAccount extends the implementation of Accounts for providing more specific success messages on successful account creation
    class SavingsAccount extends Accounts {
     constructor() {
       super();
       if (SavingsAccount.exists) {
         return SavingsAccount.instance;
       }
       SavingsAccount.instance = this;
       SavingsAccount.exists = true;
       return this;
     }
     
     successMessage({ accountNumber, details }) {
       return `Savings Account created for ${details.name}. ${accountNumber} is your account number.`;
     }
    }
     
    // usage
    // Here we are hiding the complexities of creating account
    const accountManager = new AccountManager();
     
    const currentAccount = accountManager.createAccount('current', { name: 'John Snow', address: 'pune' });
     
    const savingsAccount = accountManager.createAccount('savings', { name: 'Petter Kim', address: 'mumbai' });

    7. Adapter

    The adapter pattern converts the interface of a class into another interface clients expect, making two otherwise-incompatible interfaces work together.

    For example, you might need to display data from a 3rd-party API as a bar chart, but the API’s response format differs from the format the charting library expects. Below is an adapter that converts the 3rd-party API response into the format Highcharts needs for a bar chart:

    // API response from the 3rd party library
    const response = [{
       symbol: 'SIC DIVISION',
       exchange: 'Agricultural services',
       volume: 42232,
    }];
     
    // Required format:
    // [{
    //    category: 'Agricultural services',
    //    name: 'SIC DIVISION',
    //    y: 42232,
    // }]
     
    const mapping = {
     symbol: 'category',
     exchange: 'name',
     volume: 'y',
    };
     
    const highchartsAdapter = (response, mapping) => {
     return response.map(item => {
       const normalized = {};
     
       // Normalize each response's item key, according to the mapping
       Object.keys(item).forEach(key => (normalized[mapping[key]] = item[key]));
       return normalized;
     });
    };
     
    highchartsAdapter(response, mapping);

    Conclusion

    This has been a brief introduction to the design patterns in modern JavaScript (ES6). This subject is massive, but hopefully this article has shown you the benefits of using it when writing code.

    Related Articles

    1. Cleaner, Efficient Code with Hooks and Functional Programming

    2. Building a Progressive Web Application in React [With Live Code Examples]

  • Game of Hackathon 2019: A Glimpse of Our First Hackathon

    Hackathons for technology startups are like picking up a good book. It may take a long time before you start, but once you do, you wonder why you didn’t do it sooner? Last Friday, on 31st May 2019, we conducted our first Hackathon at Velotio and it was a grand success!

    Although challenging projects from our clients are always pushing us to learn new things, we saw a whole new level of excitement and enthusiasm among our employees to bring their own ideas to life during the event. The 12-hour Hackathon saw participation from 15 teams, a lot of whom came well-prepared in advance with frameworks and ideas to start building immediately.

    Here are some pictures from the event:

    The intense coding session was then followed by a series of presentations where all the teams showcased their solutions.

    The first prize was bagged by Team Mitron who worked on a performance review app and was awarded a cash prize of 75,000 Rs.

    The second prize of 50,000 Rs. was awarded to Team WireQ. Their solution was an easy sync up platform that would serve as a single source of truth for the designers, developers, and testers to work seamlessly on projects together — a problem we have often struggled with in-house as well.

    Our QA Team put together a complete test suite framework that would perform all functional and non-functional testing activities, including maintaining consistency in testing, minimal code usage, improvement in test structuring and so on. They won the third prize worth 25,000 Rs.

    Our heartiest congratulations to all the winners!

    This Hackathon has definitely injected a lot of positive energy and innovation into our work culture and got so many of us to collaborate more effectively and learn from each other. We cannot wait to do our next Hackathon and share more with you all.

    Until then, stay tuned!

  • Test Automation in React Native apps using Appium and WebdriverIO

    React Native provides a native mobile app development experience without sacrificing user experience or visual performance. And when it comes to mobile app UI testing, Appium is a great way to test React Native apps out of the box. Being able to create native apps for both platforms from the same JavaScript codebase has made React Native popular, and businesses are also attracted by the significant cost savings this development approach offers.

    In this blog, we are going to cover how to add automated tests for React Native apps using Appium & WebdriverIO with Node.js.

    What are React Native Apps

    React Native is an open-source framework for building Android and iOS apps using React and native app capabilities. With React Native, you can use JavaScript to access your platform’s APIs and to describe the appearance and behavior of your UI using React components: reusable, nestable bits of code. In Android and iOS development, a “view” is the basic building block of a UI: a small rectangular element on the screen that can display text, images, or respond to user input. Even the smallest visual element of an app, such as a line of text or a button, is a kind of view. Some kinds of views can contain other views.

    What is Appium

    Appium is an open-source tool for automating native, mobile web, and hybrid apps on iOS, Android, and Windows desktop platforms. Native apps are those written using the iOS or Android SDKs. Mobile web apps are accessed using a mobile browser (Appium supports Safari on iOS and Chrome or the built-in ‘Browser’ app on Android). Hybrid apps have a wrapper around a “webview”: a native control that enables interaction with web content. Projects like Apache Cordova make it easy to build an app using web technologies that are then bundled into a native wrapper, creating a hybrid app.

    Importantly, Appium is “cross-platform”: it allows you to write tests against multiple platforms (iOS, Android) using the same API. This enables code reuse between iOS, Android, and Windows test suites. Appium drives iOS and Android applications using the WebDriver protocol.
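    As a sketch of what “same tests on multiple platforms” looks like in practice, spec files stay identical and only the capabilities object is swapped. The capability values below are illustrative placeholders, not taken from this article's setup:

```javascript
// Illustrative Appium capability sets; app paths are placeholders.
const androidCaps = {
  platformName: 'Android',
  automationName: 'UiAutomator2',
  app: '/path/to/app.apk',
};

const iosCaps = {
  platformName: 'iOS',
  automationName: 'XCUITest',
  app: '/path/to/app.app',
};

// The same spec files run against whichever platform is selected here.
const capsFor = platform => (platform === 'ios' ? iosCaps : androidCaps);

console.log(capsFor(process.env.PLATFORM || 'android').platformName);
```

In a real project, the chosen object would go into the capabilities array of wdio.conf.js.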

    Fig:- Appium Architecture

    What is WebdriverIO

    WebdriverIO is a next-gen browser and mobile automation test framework for Node.js. It allows you to automate any application written with modern web frameworks, such as React, Angular, Polymer, or Vue.js, as well as native mobile apps.

    WebdriverIO is a widely used test automation framework in JavaScript. It offers various features: it supports many reporters and services, multiple test frameworks, and the WDIO CLI test runner.

    The following are examples of supported services:

    • Appium Service
    • Devtools Service
    • Firefox Profile Service
    • Selenium Standalone Service
    • Shared Store Service
    • Static Server Service
    • ChromeDriver Service
    • Report Portal Service
    • Docker Service

    The following test frameworks are supported:

    • Mocha
    • Jasmine
    • Cucumber

    Fig:- WebdriverIO Architecture

    Key features of Appium & WebdriverIO

    Appium

    • Does not require access to the application's source code or libraries
    • Provides a strong and active community
    • Has multi-platform support, i.e., it can run the same test cases on multiple platforms
    • Allows the parallel execution of test scripts
    • In Appium, a small change does not require reinstallation of the application
    • Supports various languages like C#, Python, Java, Ruby, PHP, JavaScript with node.js, and many others that have a Selenium client library

    WebdriverIO 

    • Extendable
    • Compatible
    • Feature-rich 
    • Supports modern web and mobile frameworks
    • Runs automation tests for web applications as well as native mobile apps
    • Simple and easy syntax
    • Integrates tests to third-party tools such as Appium
    • ‘Wdio setup wizard’ makes the setup simple and easy
    • Integrated test runner

    Installation & Configuration

    $ mkdir Demo_Appium_Project

    • Create a sample Appium Project
    $ npm init
    $ package name: (demo_appium_project) demo_appium_test
    $ version: (1.0.0) 1.0.0
    $ description: demo_appium_practice
    $ entry point: (index.js) index.js
    $ test command: "./node_modules/.bin/wdio wdio.conf.js"
    $ git repository: 
    $ keywords: 
    $ author: Pushkar
    $ license: (ISC) ISC

    This will also create a package.json file for test settings and project dependencies.

    • Install node packages
    $ npm install

    • Install Appium through npm or as a standalone app.
    $ npm install -g appium or npm install --save appium

    • Install WebdriverIO
    $ npm install -g webdriverio or npm install --save-dev webdriverio @wdio/cli
    • Install Chai Assertion library 
    $ npm install -g chai or npm install --save chai

    Make sure you have the following versions installed:

    $ node --version - v.14.17.0
    $ npm --version - 7.17.0
    $ appium --version - 1.21.0
    $ java --version - java 16.0.1
    $ allure --version - 2.14.0

    WebdriverIO Configuration 

    The WebdriverIO configuration file must be created so that the configuration is applied during the tests. Generate it by running the command below in the project root:

    $ npx wdio config

    Answer the following series of questions and install the required dependencies:

    $ Where is your automation backend located? - On my local machine
    $ Which framework do you want to use? - mocha	
    $ Do you want to use a compiler? No!
    $ Where are your test specs located? - ./test/specs/**/*.js
    $ Do you want WebdriverIO to autogenerate some test files? - Yes
    $ Do you want to use page objects (https://martinfowler.com/bliki/PageObject.html)? - No
    $ Which reporter do you want to use? - Allure
    $ Do you want to add a service to your test setup? - No
    $ What is the base url? - http://localhost

    This is how wdio.conf.js looks:

    exports.config = {
     port: 4724,
     path: '/wd/hub/',
     runner: 'local',
     specs: ['./test/specs/*.js'],
     maxInstances: 1,
     capabilities: [
       {
         platformName: 'Android',
         platformVersion: '11',
         appPackage: 'com.facebook.katana',
         appActivity: 'com.facebook.katana.LoginActivity',
         automationName: 'UiAutomator2'
       }
     ],
     services: [
       [
         'appium',
         {
           args: {
             relaxedSecurity: true
            },
           command: 'appium'
         }
       ]
     ],
     logLevel: 'debug',
     bail: 0,
     baseUrl: 'http://localhost',
     waitforTimeout: 10000,
     connectionRetryTimeout: 90000,
     connectionRetryCount: 3,
     framework: 'mocha',
     reporters: [
       [
         'allure',
         {
           outputDir: 'allure-results',
           disableWebdriverStepsReporting: true,
           disableWebdriverScreenshotsReporting: false
         }
       ]
     ],
     mochaOpts: {
       ui: 'bdd',
       timeout: 60000
     },
     afterTest: function(test, context, { error, result, duration, passed, retries }) {
       if (!passed) {
           browser.takeScreenshot();
       }
     }
    }

    For iOS Automation, just add the following capabilities in wdio.conf.js & the Appium Configuration: 

    {
      "platformName": "IOS",
      "platformVersion": "14.5",
      "app": "/Your_PATH/wdioNativeDemoApp.app",
      "deviceName": "iPhone 12 Pro Max"
    }

    Launch the iOS Simulator from Xcode

    Install Appium Doctor for iOS by using following command:

    npm install -g appium-doctor

    Fig:- Appium Doctor Installed

    This is how package.json will look:

    {
     "name": "demo_appium_test",
     "version": "1.0.0",
     "description": "demo_appium_practice",
     "main": "index.js",
     "scripts": {
       "test": "./node_modules/.bin/wdio wdio.conf.js"
     },
     "author": "Pushkar",
     "license": "ISC",
     "dependencies": {
       "@wdio/sync": "^7.7.4",
       "appium": "^1.21.0",
       "chai": "^4.3.4",
       "webdriverio": "^7.7.4"
     },
     "devDependencies": {
       "@wdio/allure-reporter": "^7.7.3",
       "@wdio/appium-service": "^7.7.3",
       "@wdio/cli": "^7.7.4",
       "@wdio/local-runner": "^7.7.4",
       "@wdio/mocha-framework": "^7.7.4",
       "@wdio/selenium-standalone-service": "^7.7.4"
     }
    }

    Steps to follow if an npm legacy peer deps problem occurs:

    npm install --save --legacy-peer-deps
    npm config set legacy-peer-deps true
    npm i --legacy-peer-deps
    npm cache clean --force

    This is how the folder structure will look in Appium with the WebDriverIO Framework:

    Fig:- Appium Framework Outline

    Step-by-Step Configuration of Android Emulator using Android Studio

    Fig:- Android Studio Launch

     

    Fig:- Android Studio AVD Manager

     

    Fig:- Create Virtual Device

     

    Fig:- Choose a device Definition

     

    Fig:- Select system image

    Fig:- License Agreement

     

    Fig:- Component Installer

     

    Fig:- System Image Download

     

    Fig:- Configuration Verification

    Fig:- Virtual Device Listing

    Appium Desktop Configuration

    Fig:- Appium Desktop Launch

    Setup of ANDROID_HOME + ANDROID_SDK_ROOT &  JAVA_HOME

    Follow these steps for setting up ANDROID_HOME: 

    vi ~/.bash_profile
    # Add the following lines:
    export ANDROID_HOME=/Users/pushkar/android-sdk
    export PATH=$PATH:$ANDROID_HOME/platform-tools
    export PATH=$PATH:$ANDROID_HOME/tools
    export PATH=$PATH:$ANDROID_HOME/tools/bin
    export PATH=$PATH:$ANDROID_HOME/emulator
    # Save ~/.bash_profile, then reload it:
    source ~/.bash_profile
    echo $ANDROID_HOME
    /Users/pushkar/Library/Android/sdk

    Follow these steps for setting up ANDROID_SDK_ROOT:

    vi ~/.bash_profile
    # Add the following lines:
    export ANDROID_HOME=/Users/pushkar/Android/sdk
    export ANDROID_SDK_ROOT=/Users/pushkar/Android/sdk
    export ANDROID_AVD_HOME=/Users/pushkar/.android/avd
    # Save ~/.bash_profile, then reload it:
    source ~/.bash_profile
    echo $ANDROID_SDK_ROOT
    /Users/pushkar/Library/Android/sdk

    Follow these steps for setting up JAVA_HOME:

    java --version
    vi ~/.bash_profile
    # Add the following line:
    export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-16.0.1.jdk/Contents/Home
    # Save ~/.bash_profile, then reload it:
    source ~/.bash_profile
    echo $JAVA_HOME
    /Library/Java/JavaVirtualMachines/jdk-16.0.1.jdk/Contents/Home

    Fig:- Environment Variables in Appium

     

    Fig:- Appium Server Starts 

     

    Fig:- Appium Start Inspector Session

    Fig:- Inspector Session Configurations

    Note: Make sure to install the app from the Google Play Store.

    Fig:- Android Emulator Launch  

     

    Fig: – Android Emulator with Facebook React Native Mobile App

     

    Fig:- Success of Appium with Emulator

     

    Fig:- Locating Elements using Appium Inspector

    How to write E2E React Native Mobile App Tests 

    Fig:- Test Suite Structure of Mocha

    Here is an example of how to write an E2E test in Appium:

    Positive Testing Scenario – Validate Login & Nav Bar

    1. Open Facebook React Native App 
    2. Enter valid email and password
    3. Click on Login
    4. Users should be able to login into Facebook 

    Negative Testing Scenario – Invalid Login

    1. Open Facebook React Native App
    2. Enter invalid email and password 
    3. Click on login 
    4. Users should not be able to log in and should receive an “Incorrect Password” message popup

    Negative Testing Scenario – Invalid Element

    1. Open Facebook React Native App 
    2. Enter invalid email and  password 
    3. Click on login 
    4. Provide invalid element to capture message

    Make sure the test script is under the test/specs folder:

    var expect = require('chai').expect
    
    beforeEach(() => {
     driver.launchApp()
    })
    
    afterEach(() => {
     driver.closeApp()
    })
    
    describe('Verify Login Scenarios on Facebook React Native Mobile App', () => {
     it('User should be able to login using valid credentials to Facebook Mobile App', () => {   
       $(`~Username`).waitForDisplayed({ timeout: 20000 })
       $(`~Username`).setValue('Valid-Email')
       $(`~Password`).waitForDisplayed({ timeout: 20000 })
       $(`~Password`).setValue('Valid-Password')
       $('~Log In').click()
       browser.pause(10000)
     })
    
     it('User should not be able to login with invalid credentials to Facebook Mobile App', () => {
       $(`~Username`).waitForDisplayed({ timeout: 20000 })
       $(`~Username`).setValue('Invalid-Email')
       $(`~Password`).waitForDisplayed({ timeout: 20000 })
       $(`~Password`).setValue('Invalid-Password')
       $('~Log In').click()
       $(
           '//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"]'
         )
         .waitForDisplayed({ timeout: 11000 })
       const status = $(
         '//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"]'
       ).getText()
       expect(status).to.equal(
         `You Can't Use This Feature Right Now`     
       )
     })
    
     it('Test Case should Fail Because of Invalid Element', () => {
       $(`~Username`).waitForDisplayed({ timeout: 20000 })
       $(`~Username`).setValue('Invalid-Email')
       $(`~Password`).waitForDisplayed({ timeout: 20000 })
       $(`~Password`).setValue('Invalid-Password')
       $('~Log In').click()
       // The selectors below are deliberately malformed (missing closing bracket)
       // so that this test fails with an invalid-element error.
       $(
           '//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"'
         )
         .waitForDisplayed({ timeout: 11000 })
       const status = $(
         '//android.widget.TextView[@resource-id="com.facebook.katana"'
       ).getText()
       expect(status).to.equal(
         `You Can't Use This Feature Right Now`     
       )
     })
    
    })

    How to Run Mobile Test Scripts

    $ npm test
    This will run the tests and create an allure-results folder with .xml reports.

    Reporting

    The following are examples of the supported reporters:

    • Allure Reporter
    • Concise Reporter
    • Dot Reporter
    • JUnit Reporter
    • Spec Reporter
    • Sumologic Reporter
    • Report Portal Reporter
    • Video Reporter
    • HTML Reporter
    • JSON Reporter
    • Mochawesome Reporter
    • Timeline Reporter
    • CucumberJS JSON Reporter

    Here, we are using Allure reporting. The Allure reporter in WebdriverIO is a plugin to create Allure test reports.

    The easiest way is to keep @wdio/allure-reporter as a devDependency in your package.json:

    $ npm install @wdio/allure-reporter --save-dev

    Reporter options can be specified in the wdio.conf.js configuration file 

    reporters: [
       [
         'allure',
         {
           outputDir: 'allure-results',
           disableWebdriverStepsReporting: true,
           disableWebdriverScreenshotsReporting: false
         }
       ]
     ]

    To convert the Allure .xml report to an .html report, run the following command:

    $ allure generate && allure open
    The Allure HTML report will open in the browser.

    This is what Allure Reports look like: 

    Fig:- Allure Report Overview 

     

    Fig:- Allure Categories

     

    Fig:- Allure Suites

     

    Fig: – Allure Graphs

     

    Fig:- Allure Timeline

     

    Fig:- Allure Behaviors

     

    Fig:- Allure Packages

    Limitations with Appium & WebDriverIO

    Appium 

    • Android versions lower than 4.2 are not supported for testing
    • Limited support for hybrid app testing
    • Doesn’t support image comparison.

    WebdriverIO

    • It has a custom implementation
    • It can be used for automating AngularJS apps, but it is not as customized as Protractor.

    Conclusion

    In the QA and developer ecosystem, using Appium to test React Native applications is common. Appium makes it easy to write test cases for both the Android and iOS platforms while working with React Native. The WebDriver protocol, inherited from Selenium, acts as the bridge between Appium and the mobile platforms. Appium is a solid framework for automated UI testing, and as this article shows, it can run test cases quickly and reliably. Most importantly, it can test both the Android and iOS apps that React Native builds from a single codebase.


  • How to Make Asynchronous Calls in Redux Without Middlewares

    Redux has greatly helped in reducing the complexities of state management. Its one-way data flow is easier to reason about, and it also provides a powerful mechanism to include middlewares, which can be chained together to do our bidding. One of the most common use cases for middleware is making async calls in the application. redux-thunk, redux-saga, and redux-observable are a few examples. All of these come with their own learning curve and are best suited for tackling different scenarios.

    But what if our use case is simple enough and we don’t want the added complexity that implementing a middleware brings? Can we somehow implement the most common use case of making async API calls using only Redux and JavaScript?

    The answer is yes! This blog will explain how to implement async action calls in Redux without the use of any middlewares.

    So let us first start by making a simple React project using create-react-app:

    npx create-react-app async-redux-without-middlewares
    cd async-redux-without-middlewares
    npm start

    Also, we will be using react-redux in addition to redux to make our life a little easier. And to mock the APIs, we will be using https://jsonplaceholder.typicode.com/

    We will implement just two API calls so as not to overcomplicate things.

    Create a new file called api.js. This is the file in which we will keep the fetch calls to the endpoints.

    export const getPostsById = id => fetch(`https://jsonplaceholder.typicode.com/posts/${id}`);
     
    export const getPostsBulk = () => fetch("https://jsonplaceholder.typicode.com/posts");

    Each API call has three base actions associated with it, namely REQUEST, SUCCESS, and FAIL. Each of our APIs will be in one of these three states at any given time, and depending on the state, we can decide what to show in the UI. For example, in the REQUEST state we can have the UI show a loader, and in the FAIL state we can show a custom UI telling the user that something has gone wrong.

    So we create REQUEST, SUCCESS, and FAIL constants for each API call we will be making. In our case, the constants.js file will look something like this:

    export const GET_POSTS_BY_ID_REQUEST = "getpostsbyidrequest";
    export const GET_POSTS_BY_ID_SUCCESS = "getpostsbyidsuccess";
    export const GET_POSTS_BY_ID_FAIL = "getpostsbyidfail";
     
    export const GET_POSTS_BULK_REQUEST = "getpostsbulkrequest";
    export const GET_POSTS_BULK_SUCCESS = "getpostsbulksuccess";
    export const GET_POSTS_BULK_FAIL = "getpostsbulkfail";

    The store.js file and the initialState of our application are as follows:

    import { createStore } from 'redux'
    import reducer from './reducers';
     
    const initialState = {
        byId: {
            isLoading: null,
            error: null,
            data: null
        },
        byBulk: {
            isLoading: null,
            error: null,
            data: null
        }
    };
     
    const store = createStore(reducer, initialState, window.__REDUX_DEVTOOLS_EXTENSION__ && window.__REDUX_DEVTOOLS_EXTENSION__());
     
    export default store;

    As can be seen from the above code, each API's data lives in its own object within the state object. The isLoading key tells us whether the API is in the REQUEST state.
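    A tiny helper, not part of the article's code but a sketch of the idea, shows how these flags can be collapsed into a single display status:

```javascript
// Hypothetical helper: derive a display status from one API slice of state.
const uiStatus = ({ isLoading, error, data }) => {
  if (isLoading) return 'loading'; // REQUEST state → show a loader
  if (error) return 'error';       // FAIL state → show an error UI
  if (data) return 'ready';        // SUCCESS state → render the data
  return 'idle';                   // nothing fetched yet
};

console.log(uiStatus({ isLoading: true, error: null, data: null }));    // → loading
console.log(uiStatus({ isLoading: false, error: 'oops', data: null })); // → error
```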

    Now that we have our store defined, let us see how we manipulate the state with the different phases that an API call can be in. Below is our reducers.js file.

    import {
        GET_POSTS_BY_ID_REQUEST,
        GET_POSTS_BY_ID_SUCCESS,
        GET_POSTS_BY_ID_FAIL,
     
        GET_POSTS_BULK_REQUEST,
        GET_POSTS_BULK_SUCCESS,
        GET_POSTS_BULK_FAIL
     
    } from "./constants";
     
    const reducer = (state, action) => {
        switch (action.type) {
            case GET_POSTS_BY_ID_REQUEST:
                return {
                    ...state,
                    byId: {
                        isLoading: true,
                        error: null,
                        data: null
                    }
                }
            case GET_POSTS_BY_ID_SUCCESS:
                return {
                    ...state,
                    byId: {
                        isLoading: false,
                        error: false,
                        data: action.payload
                    }
                }
            case GET_POSTS_BY_ID_FAIL:
                return {
                    ...state,
                    byId: {
                        isLoading: false,
                        error: action.payload,
                        data: false
                    }
                }
     
                case GET_POSTS_BULK_REQUEST:
                return {
                    ...state,
                    byBulk: {
                        isLoading: true,
                        error: null,
                        data: null
                    }
                }
            case GET_POSTS_BULK_SUCCESS:
                return {
                    ...state,
                    byBulk: {
                        isLoading: false,
                        error: false,
                        data: action.payload
                    }
                }
            case GET_POSTS_BULK_FAIL:
                return {
                    ...state,
                    byBulk: {
                        isLoading: false,
                        error: action.payload,
                        data: false
                    }
                }
            default: return state;
        }
    }
     
    export default reducer;

    By giving each individual API call its own variable to denote the loading phase we can now easily implement something like multiple loaders in the same screen according to which API call is in which phase.
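    To see these transitions concretely, here is a condensed, self-contained version of the byId branch run through a REQUEST/SUCCESS cycle (action type strings are inlined for brevity):

```javascript
// Condensed byId branch of the reducer above, with action types inlined.
const reducer = (state, action) => {
  switch (action.type) {
    case 'getpostsbyidrequest':
      return { ...state, byId: { isLoading: true, error: null, data: null } };
    case 'getpostsbyidsuccess':
      return { ...state, byId: { isLoading: false, error: false, data: action.payload } };
    default:
      return state;
  }
};

let state = { byId: { isLoading: null, error: null, data: null } };

state = reducer(state, { type: 'getpostsbyidrequest' });
console.log(state.byId.isLoading); // true → the UI shows a loader

state = reducer(state, { type: 'getpostsbyidsuccess', payload: { id: 1 } });
console.log(state.byId.isLoading, state.byId.data); // false { id: 1 }
```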

    Now, to actually implement the async behaviour in the actions, we just need a normal JavaScript function that takes dispatch as its first argument. We pass dispatch to the function because dispatch is what sends actions to the store: normally a component has access to it, but since we want an external function to take control over dispatching, we need to hand dispatch over to that function.

    const getPostById = async (dispatch, id) => {
        dispatch({ type: GET_POSTS_BY_ID_REQUEST });
     
        try {
            const response = await getPostsById(id);
            const res = await response.json();
            dispatch({ type: GET_POSTS_BY_ID_SUCCESS, payload: res });
        } catch (e) {
            dispatch({ type: GET_POSTS_BY_ID_FAIL, payload: e });
        }
    };

    And a function to bring dispatch into the above function’s scope:

    export const getPostByIdFunc = dispatch => {
        return id => getPostById(dispatch, id);
    }
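    The closure can be exercised without React or a real store by passing in a fake dispatch that simply records actions. The fetch-backed getPostsById is replaced with a stub here, so this is a sketch rather than the article's exact code:

```javascript
// Stub standing in for the real fetch-backed API call from api.js.
const getPostsById = async id => ({ json: async () => ({ id, title: 'stub' }) });

const getPostById = async (dispatch, id) => {
  dispatch({ type: 'getpostsbyidrequest' });
  try {
    const response = await getPostsById(id);
    const res = await response.json();
    dispatch({ type: 'getpostsbyidsuccess', payload: res });
  } catch (e) {
    dispatch({ type: 'getpostsbyidfail', payload: e });
  }
};

const getPostByIdFunc = dispatch => id => getPostById(dispatch, id);

// Fake dispatch that just records every action it receives.
const actions = [];
const fakeDispatch = action => actions.push(action);

getPostByIdFunc(fakeDispatch)(1).then(() => {
  console.log(actions.map(a => a.type));
  // → ['getpostsbyidrequest', 'getpostsbyidsuccess']
});
```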

    So now our complete actions.js file looks like this:

    import {
        GET_POSTS_BY_ID_REQUEST,
        GET_POSTS_BY_ID_SUCCESS,
        GET_POSTS_BY_ID_FAIL,
     
        GET_POSTS_BULK_REQUEST,
        GET_POSTS_BULK_SUCCESS,
        GET_POSTS_BULK_FAIL
     
    } from "./constants";
     
    import {
        getPostsById,
        getPostsBulk
    } from "./api";
     
    const getPostById = async (dispatch, id) => {
        dispatch({ type: GET_POSTS_BY_ID_REQUEST });
     
        try {
            const response = await getPostsById(id);
            const res = await response.json();
            dispatch({ type: GET_POSTS_BY_ID_SUCCESS, payload: res });
        } catch (e) {
            dispatch({ type: GET_POSTS_BY_ID_FAIL, payload: e });
        }
    };
     
    const getPostBulk = async dispatch => {
        dispatch({ type: GET_POSTS_BULK_REQUEST });
     
        try {
            const response = await getPostsBulk();
            const res = await response.json();
            dispatch({ type: GET_POSTS_BULK_SUCCESS, payload: res });
        } catch (e) {
            dispatch({ type: GET_POSTS_BULK_FAIL, payload: e });
        }
    };
     
    export const getPostByIdFunc = dispatch => {
        return id => getPostById(dispatch, id);
    }
     
    export const getPostsBulkFunc = dispatch => {
        return () => getPostBulk(dispatch);
    }

    Once this is done, all that is left is to pass these functions in the mapDispatchToProps of our connected component.

    const mapDispatchToProps = dispatch => {
      return {
        getPostById: getPostByIdFunc(dispatch),
        getPostBulk: getPostsBulkFunc(dispatch)
      }
    };

    Our App.js file looks like the one below:

    import React, { Component } from 'react';
    import './App.css';
     
    import { connect } from 'react-redux';
    import { getPostByIdFunc, getPostsBulkFunc } from './actions';
     
    class App extends Component {
      render() {
        console.log(this.props);
        return (
          <div className="App">
            <button onClick={() => {
              this.props.getPostById(1);
            }}>By Id</button>
            <button onClick={() => {
              this.props.getPostBulk();
            }}>In bulk</button>
          </div>
        );
      }
    }
     
    const mapStateToProps = state => {
      return {
        state
      };
    }
     
    const mapDispatchToProps = dispatch => {
      return {
        getPostById: getPostByIdFunc(dispatch),
        getPostBulk: getPostsBulkFunc(dispatch)
      }
    };
     
    export default connect(mapStateToProps, mapDispatchToProps)(App);

    This is how we do async calls without middlewares in Redux. This is a much simpler approach than using a middleware, without the learning curve associated with one. If this approach covers all your use cases, then by all means use it.

    Conclusion

    This type of approach really shines when you have to build a simple application, like a demo of sorts, where API calls are the only side effects you need. In larger and more complicated applications, there are a few inconveniences. First, we have to pass dispatch around, which feels kind of yucky. We also have to remember which calls require dispatch and which do not.

    The full code can be found here.