Tag: mobile development

  • Unlocking Cross-Platform Development with Kotlin Multiplatform Mobile (KMM)

    In the fast-paced and ever-changing world of software development, the task of designing applications that can smoothly operate on various platforms has become a significant hurdle. Developers frequently encounter a dilemma where they must decide between constructing distinct codebases for different platforms or opting for hybrid frameworks that come with certain trade-offs.

Kotlin Multiplatform (KMP) is a feature of the Kotlin programming language that simplifies cross-platform development by bridging the gap between platforms. This technology has emerged as a powerful solution for creating cross-platform applications.

    Kotlin Multiplatform Mobile (KMM) is a subset of KMP that provides a specific framework and toolset for building cross-platform mobile applications using Kotlin. KMM is developed by JetBrains to simplify the process of building mobile apps that can run seamlessly on multiple platforms.

    In this article, we will take a deep dive into Kotlin Multiplatform Mobile, exploring its features and benefits and how it enables developers to write shared code that runs natively on multiple platforms.

    What is Kotlin Multiplatform Mobile (KMM)?

    With KMM, developers can share code between Android and iOS platforms, eliminating the need for duplicating efforts and maintaining separate codebases. This significantly reduces development time and effort while improving code consistency and maintainability.

    KMM offers support for a wide range of UI frameworks, libraries, and app architectures, providing developers with flexibility and options. It can seamlessly integrate with existing Android projects, allowing for the gradual adoption of cross-platform development. Additionally, KMM projects can be developed and tested using familiar build tools, making the transition to KMM as smooth as possible.

    KMM vs. Other Platforms

Here's a comparison of KMM (Kotlin Multiplatform Mobile) with some other popular cross-platform mobile development platforms:

[Comparison table: KMM vs. other cross-platform frameworks]

[Figure: Sharing code across multiple platforms]

    Advantages of Utilizing Kotlin Multiplatform (KMM) in Projects

    Code sharing: Encourages code reuse and reduces duplication, leading to faster development.

Faster time-to-market: A single shared codebase means less code to write and test, accelerating mobile app delivery.

    Consistency: Ensures consistency across platforms for better user experience.

    Collaboration between Android and iOS teams: Facilitates collaboration between Android and iOS development teams to improve efficiency.

    Access to Native APIs: Allows developers to access platform-specific APIs and features.

    Reduced maintenance overhead: Shared codebase makes maintenance easier and more efficient.

    Existing Kotlin and Android ecosystem: Provides access to libraries, tools, and resources for developers.

    Gradual adoption: Facilitates cross-platform development by sharing modules and components.

    Performance and efficiency: Generates optimized code for each platform, resulting in efficient and performant applications.

    Community and support: Benefits from active community, resources, tutorials, and support.

    Limitations of Using KMM in Projects

Limited platform-specific APIs: Shared common code cannot call platform-specific APIs directly; platform features must be wrapped behind expect/actual declarations.

    Platform-dependent setup and tooling: Platform-agnostic, but setup and tooling can be platform-dependent.

    Limited interoperability with existing platform code: Interoperability between Kotlin Multiplatform and existing platform code can be challenging.

Development and debugging experience: Code sharing is supported, but the development and debugging experience differs between platforms.

    Limited third-party library support: There aren’t many ready-to-use libraries available, so developers must implement from scratch or look for alternatives.

    Setting Up Environment for Cross-Platform Development in Android Studio

    Developing Kotlin Multiplatform Mobile (KMM) apps as an Android developer is relatively straightforward. You can use Android Studio, the same IDE that you use for Android app development. 

    To get started, we will need to install the KMM plugin through the IDE plugin manager, which is a simple step. The advantage of using Android Studio for KMM development is that we can create and run iOS apps from within the same IDE. This can help streamline the development process, making it easier to build and test apps across multiple platforms.

    In order to enable the building and running of iOS apps through Android Studio, it’s necessary to have Xcode installed on your system. Xcode is an Integrated Development Environment (IDE) used for iOS programming.

    To ensure that all dependencies are installed correctly for our Kotlin Multiplatform Mobile (KMM) project, we can use kdoctor. This tool can be installed via brew by running the following command in the command-line:

    $ brew install kdoctor 

    Note: If you don’t have Homebrew yet, please install it.

    Once we have all the necessary tools installed on your system, including Android Studio, Xcode, JDK, Kotlin Multiplatform Mobile Plugin, and Kotlin Plugin, we can run kdoctor in the Android Studio terminal or on our command-line tool by entering the following command:

    $ kdoctor 

    This will confirm that all required dependencies are properly installed and configured for our KMM project.

    kdoctor will perform comprehensive checks and provide a detailed report with the results.

If kdoctor detects any issues, it will flag them in the report.

A common warning concerns the shell locale. To resolve it, create a ~/.zprofile file (if one doesn't already exist) and add the locale exports to it:

$ touch ~/.zprofile

$ echo 'export LANG=en_US.UTF-8' >> ~/.zprofile

$ echo 'export LC_ALL=en_US.UTF-8' >> ~/.zprofile

    After making the above necessary changes to our environment, we can run kdoctor again to verify that everything is set up correctly. Once kdoctor confirms that all dependencies are properly installed and configured, we are done.

    Building Biometric Face & Fingerprint Authentication Application

    Let’s explore Kotlin Multiplatform Mobile (KMM) by creating an application for face and fingerprint authentication. Here our aim is to leverage KMM’s potential by developing shared code for both Android and iOS platforms. This will promote code reuse and reduce redundancy, leading to optimized code for each platform.

    Set Up an Android project

    To initiate a new project, we will launch Android Studio, select the Kotlin Multiplatform App option from the New Project template, and click on “Next.”

    We will add the fundamental application information, such as the name of the application and the project’s location, on the following screen.

Lastly, we choose the dependency manager for the iOS app and click on “Next.” We can pick either the Regular framework (the recommended option) or the CocoaPods dependency manager.

    After clicking the “Finish” button, the KMM project is created successfully and ready to be utilized.

    After finishing the Gradle sync process, we can execute both the iOS and Android apps by simply clicking the run button located in the toolbar.

Looking at the generated project, we can observe the structure of a KMM project. It is organized into three directories: shared, androidApp, and iosApp.

    androidApp: It contains Android app code and follows the typical structure of a standard Android application.

    iosApp: It contains iOS application code, which can be opened in Xcode using the .xcodeproj file.

    shared: It contains code and resources that are shared between the Android (androidApp) and iOS (iosApp) platforms. It allows developers to write platform-independent logic and components that can be reused across both platforms, reducing code duplication and improving development efficiency.

    Launch the iOS app and establish a connection with the framework.

Before proceeding with iOS app development, ensure that both Xcode and CocoaPods are installed on your system.

Open the root project folder of the KMM application (KMM_Biometric_App) developed using Android Studio and navigate to the iosApp folder. Within the iosApp folder, locate the .xcodeproj file and double-click on it to open it.

    After launching the iosApp in Xcode, the next step is to establish a connection between the framework and the iOS application. To do this, you will need to access the iOS project settings by double-clicking on the project name. Once you are in the project settings, navigate to the Build Phases tab and select the “+” button to add a new Run Script Phase.


    Add the following script:

cd "$SRCROOT/.."

    ./gradlew :shared:embedAndSignAppleFrameworkForXcode

    Move the Run Script phase before the Compile Sources phase.

Navigate to All build settings on the Build Settings tab and locate the Search Paths section. Within this section, specify the Framework Search Paths:

    $(SRCROOT)/../shared/build/xcode-frameworks/$(CONFIGURATION)/$(SDK_NAME)

    In the Linking section of the Build Settings tab, specify the Other Linker flags:

    $(inherited) -framework shared

    Compile the project in Xcode. If all the settings are configured correctly, the project should build successfully.

    Implement Biometric Authentication in the Android App

    To enable Biometric Authentication, we will utilize the BiometricPrompt component available in the Jetpack Biometric library. This component simplifies the process of implementing biometric authentication, but it is only compatible with Android 6.0 (API level 23) and later versions. If we require support for earlier Android versions, we must explore alternative approaches.

    Biometric Library:

implementation("androidx.biometric:biometric-ktx:1.2.0-alpha05")

To add the Biometric dependency for Android, we include it in the androidMain source set of the build.gradle.kts file located in the shared folder. This step is specific to Android development.

// shared/build.gradle.kts

sourceSets {
    val androidMain by getting {
        dependencies {
            implementation("androidx.biometric:biometric-ktx:1.2.0-alpha05")
        }
        // ...
    }
    // ...
}

    Next, we will generate the FaceAuthenticator class within the commonMain folder, which will allow us to share the Biometric Authentication business logic between the Android and iOS platforms.

    // shared/commonMain/FaceAuthenticator

    expect class FaceAuthenticator {
       fun isDeviceHasBiometric(): Boolean
       fun authenticateWithFace(callback: (Boolean) -> Unit)
    }

    In shared code, the “expect” keyword signifies an expected behavior or interface. It indicates a declaration that is expected to be implemented differently on each platform. By using “expect,” you establish a contract or API that the platform-specific implementations must satisfy.

    The “actual” keyword is utilized to provide the platform-specific implementation for the expected behavior or interface defined with the “expect” keyword. It represents the concrete implementation that varies across different platforms. By using “actual,” you supply the code that fulfills the contract established by the “expect” declaration.
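To make this contract concrete, here is a minimal sketch using a hypothetical `platformName()` declaration (the function name and implementations are illustrative, not from the project above). Because expect/actual declarations live in separate source sets of a multiplatform module, this is a project fragment rather than a single runnable file:

```kotlin
// commonMain/Platform.kt — the shared contract
expect fun platformName(): String

// Shared code can call the contract without knowing the platform
fun greeting(): String = "Hello from ${platformName()}"

// androidMain/Platform.kt — Android-specific implementation
actual fun platformName(): String = "Android ${android.os.Build.VERSION.SDK_INT}"

// iosMain/Platform.kt — iOS-specific implementation
actual fun platformName(): String = platform.UIKit.UIDevice.currentDevice.systemName
```

The Kotlin compiler verifies at build time that every `expect` declaration has a matching `actual` in each target, so a missing platform implementation is a compile error rather than a runtime failure.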

There are three types of authenticators, defined at the level of granularity supported by BiometricManager and BiometricPrompt.

    Multiple authenticators, such as BIOMETRIC_STRONG | DEVICE_CREDENTIAL | BIOMETRIC_WEAK, can be represented as a single integer by combining their types using bitwise OR.

    BIOMETRIC_STRONG: Any biometric (e.g., fingerprint, iris, or face) on the device that meets or exceeds the requirements for Class 3 (formerly Strong), as defined by the Android CDD.

    BIOMETRIC_WEAK: Any biometric (e.g., fingerprint, iris, or face) on the device that meets or exceeds the requirements for Class 2 (formerly Weak), as defined by the Android CDD.

    DEVICE_CREDENTIAL: Authentication using a screen lock credential—the user’s PIN, pattern, or password.
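As a small runnable sketch of how these types combine, the constants below are local stand-ins mirroring the documented values of `BiometricManager.Authenticators` (defined here so the snippet runs outside Android):

```kotlin
// Local stand-ins mirroring the documented BiometricManager.Authenticators values
const val BIOMETRIC_STRONG = 0x000F   // Class 3 biometrics
const val BIOMETRIC_WEAK = 0x00FF     // Class 2 biometrics (its mask is a superset of Class 3)
const val DEVICE_CREDENTIAL = 0x8000  // PIN, pattern, or password

fun main() {
    // Combine allowed authenticators into a single Int with bitwise OR
    val allowed = BIOMETRIC_STRONG or DEVICE_CREDENTIAL

    // Membership checks use bitwise AND against the combined value
    println((allowed and DEVICE_CREDENTIAL) != 0) // true: device credential is allowed
    println((allowed and BIOMETRIC_STRONG) != 0)  // true: strong biometrics are allowed
}
```

This is the same integer that is later passed to `canAuthenticate()` and `setAllowedAuthenticators()`.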

    Now let’s create an actual implementation of FaceAuthenticator class in the androidMain folder of the shared folder.

    // shared/androidMain/FaceAuthenticator

actual class FaceAuthenticator(context: FragmentActivity) {
    actual fun isDeviceHasBiometric(): Boolean {
        // code to check biometric availability
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // code to authenticate using biometric
    }
}

    Add the following code to the isDeviceHasBiometric() function to determine whether the device supports biometric authentication or not.

actual class FaceAuthenticator(context: FragmentActivity) {

    var activity: FragmentActivity = context

    @RequiresApi(Build.VERSION_CODES.R)
    actual fun isDeviceHasBiometric(): Boolean {
        val biometricManager = BiometricManager.from(activity)
        when (biometricManager.canAuthenticate(BIOMETRIC_STRONG or BIOMETRIC_WEAK)) {
            BiometricManager.BIOMETRIC_SUCCESS -> {
                Log.d("FaceAuthenticator", "App can authenticate using biometrics.")
                return true
            }

            BiometricManager.BIOMETRIC_ERROR_NO_HARDWARE -> {
                Log.e("FaceAuthenticator", "No biometric features available on this device.")
                return false
            }

            BiometricManager.BIOMETRIC_ERROR_HW_UNAVAILABLE -> {
                Log.e("FaceAuthenticator", "Biometric features are currently unavailable.")
                return false
            }

            BiometricManager.BIOMETRIC_ERROR_NONE_ENROLLED -> {
                // Prompt the user to enroll credentials that the app accepts
                Log.e("FaceAuthenticator", "No biometric credentials are enrolled.")
                val enrollIntent = Intent(Settings.ACTION_BIOMETRIC_ENROLL).apply {
                    putExtra(
                        Settings.EXTRA_BIOMETRIC_AUTHENTICATORS_ALLOWED,
                        BIOMETRIC_STRONG or BIOMETRIC_WEAK
                    )
                }
                startActivityForResult(activity, enrollIntent, 100, null)
            }

            BiometricManager.BIOMETRIC_ERROR_SECURITY_UPDATE_REQUIRED -> {
                Log.e(
                    "FaceAuthenticator",
                    "The sensor is unavailable until a security update addresses a discovered vulnerability."
                )
            }

            BiometricManager.BIOMETRIC_ERROR_UNSUPPORTED -> {
                Log.e(
                    "FaceAuthenticator",
                    "The specified options are incompatible with the current Android version."
                )
            }

            BiometricManager.BIOMETRIC_STATUS_UNKNOWN -> {
                Log.e(
                    "FaceAuthenticator",
                    "Unable to determine whether the user can authenticate."
                )
            }
        }
        return false
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // code to authenticate using biometric
    }
}

    In the provided code snippet, an instance of BiometricManager is created, and the canAuthenticate() method is invoked to determine whether the user can authenticate with an authenticator that satisfies the specified requirements. To accomplish this, you must pass the same bitwise combination of types, which you declared using the setAllowedAuthenticators() method, into the canAuthenticate() method.

    To perform biometric authentication, insert the following code into the authenticateWithFace() method.

actual class FaceAuthenticator(context: FragmentActivity) {

    var activity: FragmentActivity = context

    @RequiresApi(Build.VERSION_CODES.R)
    actual fun isDeviceHasBiometric(): Boolean {
        val biometricManager = BiometricManager.from(activity)
        when (biometricManager.canAuthenticate(BIOMETRIC_STRONG or BIOMETRIC_WEAK)) {
            BiometricManager.BIOMETRIC_SUCCESS -> {
                Log.d("FaceAuthenticator", "App can authenticate using biometrics.")
                return true
            }

            BiometricManager.BIOMETRIC_ERROR_NO_HARDWARE -> {
                Log.e("FaceAuthenticator", "No biometric features available on this device.")
                return false
            }

            BiometricManager.BIOMETRIC_ERROR_HW_UNAVAILABLE -> {
                Log.e("FaceAuthenticator", "Biometric features are currently unavailable.")
                return false
            }

            BiometricManager.BIOMETRIC_ERROR_NONE_ENROLLED -> {
                // Prompt the user to enroll credentials that the app accepts
                Log.e("FaceAuthenticator", "No biometric credentials are enrolled.")
                val enrollIntent = Intent(Settings.ACTION_BIOMETRIC_ENROLL).apply {
                    putExtra(
                        Settings.EXTRA_BIOMETRIC_AUTHENTICATORS_ALLOWED,
                        BIOMETRIC_STRONG or BIOMETRIC_WEAK
                    )
                }
                startActivityForResult(activity, enrollIntent, 100, null)
            }

            BiometricManager.BIOMETRIC_ERROR_SECURITY_UPDATE_REQUIRED -> {
                Log.e(
                    "FaceAuthenticator",
                    "The sensor is unavailable until a security update addresses a discovered vulnerability."
                )
            }

            BiometricManager.BIOMETRIC_ERROR_UNSUPPORTED -> {
                Log.e(
                    "FaceAuthenticator",
                    "The specified options are incompatible with the current Android version."
                )
            }

            BiometricManager.BIOMETRIC_STATUS_UNKNOWN -> {
                Log.e(
                    "FaceAuthenticator",
                    "Unable to determine whether the user can authenticate."
                )
            }
        }
        return false
    }

    @RequiresApi(Build.VERSION_CODES.P)
    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {

        // Build the prompt info shown on the system biometric dialog.
        // Note: a negative button cannot be combined with DEVICE_CREDENTIAL,
        // so only biometric authenticators are allowed here.
        val promptInfo = BiometricPrompt.PromptInfo.Builder()
            .setTitle("Authentication using biometric")
            .setSubtitle("Authenticate using face/fingerprint")
            .setAllowedAuthenticators(BIOMETRIC_STRONG or BIOMETRIC_WEAK)
            .setNegativeButtonText("Cancel")
            .build()

        // Create a BiometricPrompt to receive the authentication callbacks
        val biometricPrompt = BiometricPrompt(activity, activity.mainExecutor,
            object : BiometricPrompt.AuthenticationCallback() {
                override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
                    super.onAuthenticationError(errorCode, errString)
                    Toast.makeText(activity, "Authentication error: $errString", Toast.LENGTH_SHORT).show()
                    callback(false)
                }

                override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                    super.onAuthenticationSucceeded(result)
                    Toast.makeText(activity, "Authentication succeeded!", Toast.LENGTH_SHORT).show()
                    callback(true)
                }

                override fun onAuthenticationFailed() {
                    super.onAuthenticationFailed()
                    Toast.makeText(activity, "Authentication failed", Toast.LENGTH_SHORT).show()
                    callback(false)
                }
            })

        // Launch the biometric prompt
        biometricPrompt.authenticate(promptInfo)
    }
}

In the code above, the BiometricPrompt.PromptInfo.Builder gathers the arguments to be displayed on the biometric dialog provided by the system.

    The setAllowedAuthenticators() function enables us to indicate the authenticators that are permitted for biometric authentication.

    // Create prompt Info to set prompt details
val promptInfo = BiometricPrompt.PromptInfo.Builder()
    .setTitle("Authentication using biometric")
    .setSubtitle("Authenticate using face/fingerprint")
    .setAllowedAuthenticators(BIOMETRIC_STRONG or BIOMETRIC_WEAK)
    .setNegativeButtonText("Cancel")
    .build()

It is not possible to combine .setAllowedAuthenticators(BIOMETRIC_WEAK or DEVICE_CREDENTIAL) with .setNegativeButtonText("Cancel") in a BiometricPrompt.PromptInfo.Builder instance: when device-credential authentication is allowed, the system supplies its own fallback to the screen lock, so setting a negative button throws an IllegalArgumentException.

However, .setAllowedAuthenticators(BIOMETRIC_WEAK or BIOMETRIC_STRONG) can be combined with .setNegativeButtonText("Cancel"). In this configuration the negative button simply dismisses the prompt; to offer a fallback to the device credential instead, include DEVICE_CREDENTIAL in the allowed authenticators and omit the negative button.

    The BiometricPrompt object facilitates biometric authentication and provides an AuthenticationCallback to handle the outcomes of the authentication process, indicating whether it was successful or encountered a failure.

val biometricPrompt = BiometricPrompt(activity, activity.mainExecutor,
    object : BiometricPrompt.AuthenticationCallback() {
        override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
            super.onAuthenticationError(errorCode, errString)
            Toast.makeText(activity, "Authentication error: $errString", Toast.LENGTH_SHORT).show()
            callback(false)
        }

        override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
            super.onAuthenticationSucceeded(result)
            Toast.makeText(activity, "Authentication succeeded!", Toast.LENGTH_SHORT).show()
            callback(true)
        }

        override fun onAuthenticationFailed() {
            super.onAuthenticationFailed()
            Toast.makeText(activity, "Authentication failed", Toast.LENGTH_SHORT).show()
            callback(false)
        }
    })

// Authenticate using the biometric prompt
biometricPrompt.authenticate(promptInfo)

    Now, we have completed the coding of the shared code for Android in the androidMain folder. To utilize this code, we can create a new file named LoginActivity.kt within the androidApp folder.

    // androidApp/LoginActivity

class LoginActivity : AppCompatActivity() {

    @RequiresApi(Build.VERSION_CODES.R)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_login)

        val authenticate = findViewById<Button>(R.id.authenticate_button)
        authenticate.setOnClickListener {
            val faceAuthenticatorImpl = FaceAuthenticator(this)
            if (faceAuthenticatorImpl.isDeviceHasBiometric()) {
                faceAuthenticatorImpl.authenticateWithFace { success ->
                    if (success) {
                        Log.d("LoginActivity", "Authentication Successful")
                    } else {
                        Log.d("LoginActivity", "Authentication Failed")
                    }
                }
            }
        }
    }
}

    Implement Biometric Authentication In iOS App

For authentication, iOS provides a dedicated framework: the Local Authentication framework.

    The Local Authentication framework provides a way to integrate biometric authentication (such as Touch ID or Face ID) and device passcode authentication into your app. This framework allows you to enhance the security of your app by leveraging the biometric capabilities of the device or the device passcode.

Now, let's create the actual implementation of the FaceAuthenticator class in the iosMain folder of the shared module.

    // shared/iosMain/FaceAuthenticator

actual class FaceAuthenticator {
    actual fun isDeviceHasBiometric(): Boolean {
        // code to check biometric availability
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // code to authenticate using biometric
    }
}

    Add the following code to the isDeviceHasBiometric() function to determine whether the device supports biometric authentication or not.

    actual class FaceAuthenticator {
    
        actual fun isDeviceHasBiometric(): Boolean {
            // Check if face authentication is available
            val context = LAContext()
            val error = memScoped {
                allocPointerTo<ObjCObjectVar<NSError?>>()
            }
            return context.canEvaluatePolicy(LAPolicyDeviceOwnerAuthentication, error = error.value)
        }
    
        actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
            // code to authenticate using biometric
        }
    }

    In the above code, LAContext class is part of the Local Authentication framework in iOS. It represents a context for evaluating authentication policies and handling biometric or passcode authentication. 

    LAPolicy represents different authentication policies that can be used with the LAContext class. The LAPolicy enum defines the following policies:

    .deviceOwnerAuthenticationWithBiometrics

    This policy allows the user to authenticate using biometric authentication, such as Touch ID or Face ID. If the device supports biometric authentication and the user has enrolled their biometric data, the authentication prompt will appear for biometric verification.

    .deviceOwnerAuthentication 

    This policy allows the user to authenticate using either biometric authentication (if available) or the device passcode. If biometric authentication is supported and the user has enrolled their biometric data, the prompt will appear for biometric verification. Otherwise, the device passcode will be used for authentication.

    We have used the LAPolicyDeviceOwnerAuthentication policy constant, which authenticates either by biometry or the device passcode.

    We have used the canEvaluatePolicy(_:error:) method to check if the device supports biometric authentication and if the user has added any biometric information (e.g., Touch ID or Face ID).

    To perform biometric authentication, insert the following code into the authenticateWithFace() method.

    // shared/iosMain/FaceAuthenticator

actual class FaceAuthenticator {

    actual fun isDeviceHasBiometric(): Boolean {
        // Check if biometric/passcode authentication is available
        val context = LAContext()
        val error = memScoped {
            allocPointerTo<ObjCObjectVar<NSError?>>()
        }
        return context.canEvaluatePolicy(LAPolicyDeviceOwnerAuthentication, error = error.value)
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // Authenticate using biometrics or the device passcode
        val context = LAContext()
        val reason = "Authenticate using face"

        if (isDeviceHasBiometric()) {
            // Perform face authentication
            context.evaluatePolicy(
                LAPolicyDeviceOwnerAuthentication,
                localizedReason = reason
            ) { success: Boolean, nsError: NSError? ->
                callback(success)
                if (!success) {
                    print(nsError?.localizedDescription ?: "Failed to authenticate")
                }
            }
        } else {
            // Report failure once, instead of invoking the callback unconditionally
            callback(false)
        }
    }
}

The primary purpose of LAContext is to evaluate authentication policies, such as biometric authentication or device passcode authentication. The main method for this is evaluatePolicy(_:localizedReason:reply:).

    This method triggers an authentication request, which is returned in the completion block. The localizedReason parameter is a message that explains why the authentication is required and is shown during the authentication process.

    When using evaluatePolicy(_:localizedReason:reply:), we may have the option to fall back to device passcode authentication or cancel the authentication process. We can handle these scenarios by inspecting the LAError object passed in the error parameter of the completion block:

if let error = error as? LAError {
    switch error.code {
    case .userFallback:
        // User tapped the fallback button; present a passcode entry UI
        break
    case .userCancel:
        // User canceled the authentication
        break
    default:
        // Handle other error cases as needed
        break
    }
}

    That concludes the coding of the shared code for iOS in the iosMain folder. We can utilize this by creating LoginView.swift in the iosApp folder.

struct LoginView: View {
    // @State so the value survives view updates; a local var inside body would be reset
    @State private var isFaceAuthenticated = false
    let faceAuthenticator = FaceAuthenticator()

    var body: some View {
        Button(action: {
            if faceAuthenticator.isDeviceHasBiometric() {
                faceAuthenticator.authenticateWithFace { isSuccess in
                    isFaceAuthenticated = isSuccess.boolValue
                    print("Result is \(isFaceAuthenticated)")
                }
            }
        }) {
            Text("Authenticate")
                .padding()
                .background(Color.blue)
                .foregroundColor(.white)
                .cornerRadius(10)
        }
    }
}

    This ends our implementation of biometric authentication using the KMM application that runs smoothly on both Android and iOS platforms. If you’re interested, you can find the code for this project on our GitHub repository. We would love to hear your thoughts and feedback on our implementation.

    Conclusion

    It is important to acknowledge that while KMM offers numerous advantages, it may not be suitable for every project. Applications with extensive platform-specific requirements or intricate UI components may still require platform-specific development. Nonetheless, KMM can still prove beneficial in such scenarios by facilitating the sharing of non-UI code and minimizing redundancy.

    On the whole, Kotlin Multiplatform Mobile is an exciting framework that empowers developers to effortlessly create cross-platform applications. It provides an efficient and adaptable solution for building robust and high-performing mobile apps, streamlining development processes, and boosting productivity. With its expanding ecosystem and strong community support, KMM is poised to play a significant role in shaping the future of mobile app development.

  • Machine Learning in Flutter using TensorFlow

    Machine learning has become part of day-to-day life. Small tasks like searching for songs on YouTube and product suggestions on Amazon use ML in the background. This is a well-developed field of technology with immense possibilities. But how can we use it?

    This blog is aimed at explaining how easy it is to use machine learning models (which will act as a brain) to build powerful ML-based Flutter applications. We will briefly touch on the following points:

    1. Definitions

    Let’s jump to the part where most people are confused. A person who is not exposed to the IT industry might think AI, ML, & DL are all the same. So, let’s understand the difference.  

    Figure 01

    1.1. Artificial Intelligence (AI): 

    AI, i.e. artificial intelligence, is a concept of machines being able to carry out tasks in a smarter way. You all must have used YouTube. In the search bar, you can type the lyrics of any song, even lyrics that are not necessarily the starting part of the song or title of songs, and get almost perfect results every time. This is the work of a very powerful AI.
    Artificial intelligence is the ability of a machine to do tasks that are usually done by humans. This ability is special because the task we are talking about requires human intelligence and discernment.

    1.2. Machine Learning (ML):

    Machine learning is a subset of artificial intelligence. It is based on the idea that we expose machines to new data, complete or partial, and let the machine decide the future output. We can also say it is a sub-field of AI that deals with the extraction of patterns from data sets. By processing new data together with previous results, the machine gradually converges on the expected result. This means the machine can discover rules for optimal behavior to produce new outputs. It can also adapt itself to changing data, much like humans do.

    1.3. Deep Learning (DL):

    Deep learning is again a smaller subset of machine learning, which is essentially a neural network with multiple layers. These neural networks attempt to simulate the behavior of the human brain, so you can say we are trying to create an artificial human brain. With one layer of a neural network, we can still make approximate predictions, and additional layers can help to optimize and refine for accuracy.

    2. Types of ML

    Before starting the implementation, we need to know the types of machine learning because it is very important to know which type is more suitable for our expected functionality.

    Figure 02

    2.1. Supervised Learning

    As the name suggests, in supervised learning, the learning happens under supervision. Supervision means the data provided to the machine is already classified, i.e., each piece of data has a fixed label, and inputs are already mapped to outputs.
    Once the machine has learned, it is ready to classify new data.
    This learning is useful for tasks like fraud detection, spam filtering, etc.
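    As a toy sketch of the idea (our own illustration, not from the original post), supervised learning boils down to memorizing labelled examples and using them to label new inputs. A minimal 1-nearest-neighbour classifier in Dart, with made-up feature vectors standing in for a spam filter:

    ```dart
    // Toy 1-nearest-neighbour classifier: a minimal form of supervised
    // learning, where every training example carries a fixed label.
    class Example {
      final List<double> features;
      final String label;
      Example(this.features, this.label);
    }

    // Squared Euclidean distance between two feature vectors.
    double _distance(List<double> a, List<double> b) {
      var sum = 0.0;
      for (var i = 0; i < a.length; i++) {
        final d = a[i] - b[i];
        sum += d * d;
      }
      return sum;
    }

    /// Predict the label of [input] from the closest labelled example.
    String classify(List<Example> training, List<double> input) {
      Example? best;
      var bestDist = double.infinity;
      for (final e in training) {
        final d = _distance(e.features, input);
        if (d < bestDist) {
          bestDist = d;
          best = e;
        }
      }
      return best!.label;
    }

    void main() {
      // Labelled data: [wordCount, linkCount] -> 'spam' or 'ham'.
      final training = [
        Example([120.0, 0.0], 'ham'),
        Example([30.0, 9.0], 'spam'),
        Example([200.0, 1.0], 'ham'),
        Example([15.0, 12.0], 'spam'),
      ];
      print(classify(training, [25.0, 10.0])); // closest to the spam examples
    }
    ```

    Because every example is labelled up front, the "learning" here is trivial, but the shape is the same as in real supervised systems: labelled inputs in, a labelling rule out.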

    2.2. Unsupervised Learning

    In unsupervised learning, the data given to machines to learn is purely raw, with no tags or labels. Here, the machine is the one that will create new classes by extracting patterns.
    This learning can be used for clustering, association, etc.

    2.3. Semi-Supervised Learning

    Both supervised and unsupervised learning have their own limitations, because one requires labeled data and the other does not, so this learning combines the behavior of both, letting us overcome those limitations.
    In this learning, we feed both raw data and categorized data to the machine so it can classify the raw data and, if necessary, create new clusters.

    2.4. Reinforcement Learning

    For this learning, we feed the feedback on the previous output, along with new incoming data, to the machine so it can learn from its mistakes. This feedback-based process continues until the machine reaches the expected output. The feedback is given by humans in the form of punishment or reward. Think of a search algorithm that gives you a list of results, but users click on nothing other than the first result. It is like a human child who learns from every available option and grows by correcting its mistakes.

    3. TensorFlow

    Machine learning is a complex process where we need to perform multiple activities, like acquiring and processing data, training models, serving predictions, and refining future results.

    To perform such operations, Google developed a framework called TensorFlow, released in November 2015. All the above-mentioned processes become easier if we use the TensorFlow framework.

    For this project, we are not going to use the complete TensorFlow framework but a smaller tool called TensorFlow Lite.

    3.1. TensorFlow Lite

    TensorFlow Lite allows us to run the machine learning models on devices with limited resources, like limited RAM or memory.

    3.2. TensorFlow Lite Features

    • Optimized for on-device ML by addressing five key constraints:
      • Latency: because there’s no round-trip to a server
      • Privacy: because no personal data leaves the device
      • Connectivity: because internet connectivity is not required
      • Size: because of a reduced model and binary size
      • Power consumption: because of efficient inference and a lack of network connections
    • Support for Android and iOS devices, embedded Linux, and microcontrollers
    • Support for Java, Swift, Objective-C, C++, and Python programming languages
    • High performance, with hardware acceleration and model optimization
    • End-to-end examples for common machine learning tasks such as image classification, object detection, pose estimation, question answering, text classification, etc., on multiple platforms

    4. What is Flutter?

    Flutter is an open-source, cross-platform development framework. With Flutter, we can create applications for Android, iOS, the web, and desktop from a single codebase. It was created by Google and uses Dart as its development language. The first stable version of Flutter was released in December 2018, and since then, there have been many improvements.

    5. Building an ML-Flutter Application

    We are now going to build a Flutter application through which we can determine a person’s state of mind from their facial expressions. The steps below explain the updates we need to make for an Android-native application. For an iOS application, please refer to the links provided in the steps.

    5.1. TensorFlow Lite – Native setup (Android)

    • In android/app/build.gradle, add the following setting in the android block:
    aaptOptions {
        noCompress 'tflite'
        noCompress 'lite'
    }

    5.2. TensorFlow Lite – Flutter setup (Dart)

    • Create an assets folder and place your label file and model file in it. (We will create these files shortly.) In pubspec.yaml add:
    assets:
       - assets/labels.txt
       - assets/<file_name>.tflite

     

    Figure 02

    • Run this command to install the TensorFlow Lite package:
    $ flutter pub add tflite

    • Add the following line to your package’s pubspec.yaml (and run an implicit flutter pub get):
    dependencies:
         tflite: ^0.9.0

    • Now in your Dart code, you can use:
    import 'package:tflite/tflite.dart';

    • Add camera dependencies to your package’s pubspec.yaml (optional):
    dependencies:
         camera: ^0.10.0+1

    • Now in your Dart code, you can use:
    import 'package:camera/camera.dart';

    • As the camera is a hardware feature, in the native code, there are few updates we need to do for both Android & iOS.  To learn more, visit:
    https://pub.dev/packages/camera
    • The following is the code that will appear under dependencies in pubspec.yaml once the setup is complete.
    Figure 03
    • Flutter will automatically download the most recent version of a package if you omit its version number.
    • Do not forget to add the assets folder in the root directory.
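    Figure 03 is not reproduced above; based on the earlier steps (the version numbers come from the snippets in this section), the dependencies section of pubspec.yaml should look roughly like this:

    ```yaml
    dependencies:
      flutter:
        sdk: flutter
      tflite: ^0.9.0
      camera: ^0.10.0+1
    ```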

    5.3. Generate model (using website)

    • Click on Get Started

    • Select Image project
    • There are three different categories of ML projects available. We’ll choose an image project since we’re going to develop a project that analyzes a person’s facial expression to determine their emotional condition.
    • The other two types, audio project and pose project, will be useful for creating projects that involve audio operation and human pose indication, respectively.

    • Select Standard Image model
    • Once more, there are two distinct groups of image machine learning projects. Since we are creating a project for an Android smartphone, we will select a standard image model project.
    • The other type, an Embedded Image Model project, is designed for hardware with relatively little memory and computing power.

    • Upload images for training the classes
    • We will create new classes by clicking on “Add a class.”
    • We must upload photographs to these classes as we are developing a project that analyzes a person’s emotional state from their facial expression.
    • The more photographs we upload, the more precise our result will be.
    • Click on train model and wait till training is over
    • Click on Export model
    • Select TensorFlow Lite Tab -> Quantized  button -> Download my model

    5.4. Add files/models to the Flutter project

    • Labels.txt

    This file contains all the class names you created during model creation.

     

    • *.tflite

    This file contains the trained model together with its associated files, packaged as a ZIP archive.

    5.5. Load & Run ML-Model

    • We import the model from the assets folder; this model will serve as the project’s brain.
    • We configure the camera using a camera controller and obtain a live feed (cameras[0] is treated as the front camera here).
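    The two steps above can be sketched with the tflite and camera packages. This is a minimal sketch rather than the post’s exact code, and the model file name is an assumption; substitute whatever file you downloaded in section 5.3:

    ```dart
    import 'package:camera/camera.dart';
    import 'package:tflite/tflite.dart';

    Future<void> startClassifying() async {
      // Load the model and labels we placed in the assets folder.
      // 'model.tflite' is an assumed name; use your downloaded file.
      await Tflite.loadModel(
        model: 'assets/model.tflite',
        labels: 'assets/labels.txt',
      );

      // Configure the camera via a camera controller and obtain a live feed.
      final cameras = await availableCameras();
      final controller = CameraController(cameras[0], ResolutionPreset.medium);
      await controller.initialize();

      // Run the model on each raw camera frame from the live feed.
      controller.startImageStream((CameraImage image) async {
        final results = await Tflite.runModelOnFrame(
          bytesList: image.planes.map((plane) => plane.bytes).toList(),
          imageHeight: image.height,
          imageWidth: image.width,
          numResults: 2,
        );
        print(results); // a list of label/confidence entries
      });
    }
    ```

    Remember to call Tflite.close() and dispose of the camera controller when the screen is torn down.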

    6. Conclusion

    As discussed in this blog, with TensorFlow Lite and a ready-made model, we can quickly build a Flutter app that determines a person’s state of mind from their facial expressions, with very little ML expertise required.

  • A Primer To Flutter

    In this blog post, we will explore the basics of cross-platform mobile application development using Flutter, compare it with existing cross-platform solutions, and create a simple to-do application to demonstrate how quickly we can build apps with Flutter.

    Brief introduction

    Flutter is a free and open source UI toolkit for building natively compiled applications for mobile platforms like Android and iOS, and for the web and desktop as well. Some of the prominent features are native performance, single codebase for multiple platforms, quick development, and a wide range of beautifully designed widgets.

    Flutter apps are written in the Dart programming language, which is a very intuitive language with a C-like syntax. Dart is optimized for performance and developer friendliness. Apps written in Dart can be as fast as native applications because Dart code compiles down to machine instructions for ARM and x64 processors, and to JavaScript for the web platform. This, along with the Flutter engine, makes Flutter apps platform agnostic.

    Other interesting Dart features used in Flutter apps are the just-in-time (JIT) compiler, used during development and debugging, which powers the hot reload functionality, and the ahead-of-time (AOT) compiler, which is used when building applications for the target platforms such as Android or iOS, resulting in native performance.

    Everything composed on the screen with Flutter is a widget, including things like padding, alignment, or opacity. The Flutter engine draws and controls each pixel on the screen using its own graphics engine, called Skia.

    Flutter vs React-Native

    Flutter apps are truly native and hence offer great performance, whereas apps built with React Native require a JavaScript bridge to interact with OEM widgets. Flutter apps are also much faster to develop because of a wide range of built-in widgets, a good amount of documentation, hot reload, and several other developer-friendly choices made by Google while building Dart and Flutter.

    React Native, on the other hand, has the advantage of being older and hence has a large community of businesses and developers who have experience building React Native apps. It also has more third-party libraries and packages compared to Flutter. That said, Flutter is catching up and rapidly gaining momentum, as evident from Stack Overflow’s 2019 developer survey, where it scored 75.4% under “Most Loved Frameworks, Libraries and Tools”.

     

    All in all, Flutter is a great tool to have in our arsenal as mobile developers in 2020.

    Getting started with a sample application

    Flutter’s official docs are really well written and include getting started guides for different OS platforms, API documentation, widget catalogue along with several cookbooks and codelabs that one can follow along to learn more about Flutter.

    To get started with development, we will follow the official guide, which is available here. Flutter requires the Flutter SDK as well as native build tools to be installed on the machine to begin development. To write apps, one may use Android Studio or VS Code, or any text editor together with Flutter’s command line tools. But a good rule of thumb is to install Android Studio because it offers better support for managing the Android SDK, build tools, and virtual devices. It also includes several built-in tools such as the icons and assets editor.

    Once done with the setup, we will start by creating a project. Open VS Code and create a new Flutter project:
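    If you prefer the terminal over the IDE wizard (this alternative is ours, not part of the original screenshot), the same project can be created with the standard Flutter CLI:

    ```shell
    # Create a new Flutter project and run it on a connected device/emulator.
    flutter create todo_app
    cd todo_app
    flutter run
    ```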

    We should see the main file main.dart with some sample code (the counter application). We will edit this file to create our to-do app.

    Some of the features we will add to our to-do app:

    • Display a list of to-do items
    • Mark to-do items as completed
    • Add new item to the list

    Let’s start by creating a widget to hold our list of to-do items. This is going to be a StatefulWidget, which is a type of widget with some state. Flutter tracks changes to the state and redraws the widget when a new change in the state is detected.

    After creating the TodoList widget, our main.dart file looks like this:

    /// imports widgets from the material design 
    import 'package:flutter/material.dart';
    
    void main() => runApp(TodoApp());
    
    /// Stateless widgets must implement the build() method and return a widget. 
    /// The first parameter passed to build function is the context in which this widget is built
    class TodoApp extends StatelessWidget {
      @override
      Widget build(BuildContext context) {
        return MaterialApp(
          title: 'TODO',
          theme: ThemeData(
            primarySwatch: Colors.blue,
          ),
          home: TodoList(),
        );
      }
    }
    
    /// Stateful widgets must implement the createState method
    /// The State of a stateful widget again has a build() method that receives a context
    class TodoList extends StatefulWidget {
      @override
      State<StatefulWidget> createState() => TodoListState();
    }
    
    class TodoListState extends State<TodoList> {
      @override
      Widget build(BuildContext context) {
        return Scaffold(
          appBar: AppBar(
            title: Text('Todo'),
          ),
          body: Text('Todo List'),
        );
      }
    }

    The TodoApp class here extends StatelessWidget, i.e., a widget without any state, whereas TodoList extends StatefulWidget. All Flutter apps are a combination of these two types of widgets. Stateless widgets must implement the build() method, whereas stateful widgets must implement the createState() method.

    Some built-in widgets used here are the MaterialApp widget, the Scaffold widget, and the AppBar and Text widgets. These are all imported from Flutter’s implementation of material design, available in the material.dart package. Similarly, to use native-looking iOS widgets in applications, we can import widgets from the flutter/cupertino.dart package.

    Next, let’s create a model class that represents an individual to-do item. We will keep it simple, i.e., only store the label and completed status of the to-do item.

    class Todo {
      final String label;
      bool completed;
      Todo(this.label, this.completed);
    }

    The constructor in the code above uses a piece of Dart’s syntactic sugar to assign a constructor argument directly to the instance variable. For more such interesting tidbits, take the Dart language tour.
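    To make the shorthand concrete, here is a small sketch showing the equivalent longhand form of the same constructor (the longhand version is shown as a comment):

    ```dart
    class Todo {
      final String label;
      bool completed;

      // Dart's initializing-formal shorthand:
      Todo(this.label, this.completed);

      // ...is equivalent to a constructor that copies each argument
      // into its field via an initializer list:
      // Todo(String label, bool completed)
      //     : label = label,
      //       completed = completed;
    }

    void main() {
      final todo = Todo('Write blog post', false);
      print('${todo.label}: ${todo.completed}'); // Write blog post: false
    }
    ```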

    Now let’s modify the TodoListState class to store a list of to-do items in its state and display them in a list. We will use ListView.builder to create a dynamic list of to-do items, and the Checkbox and Text widgets to display each item.

    /// State is composed of all the variables declared in the State implementation of a Stateful widget
    class TodoListState extends State<TodoList> {
      final List<Todo> todos = List<Todo>();
      @override
      Widget build(BuildContext context) {
        return Scaffold(
          appBar: AppBar(
            title: Text('Todo'),
          ),
          body: Padding(
            padding: EdgeInsets.all(16.0),
            child: todos.length > 0
                ? ListView.builder(
                    itemCount: todos.length,
                    itemBuilder: _buildRow,
                  )
                : Text('There is nothing here yet. Start by adding some Todos'),
          ),
        );
      }
    
      /// build a single row of the list
      Widget _buildRow(context, index) => Row(
            children: <Widget>[
              Checkbox(
                  value: todos[index].completed,
                  onChanged: (value) => _changeTodo(index, value)),
              Text(todos[index].label,
                  style: TextStyle(
                      decoration: todos[index].completed
                          ? TextDecoration.lineThrough
                          : null))
            ],
          );
    
      /// toggle the completed state of a todo item
      _changeTodo(int index, bool value) =>
          setState(() => todos[index].completed = value);
    }

    A few things to note here are: private functions start with an underscore, functions with a single line of body can be written using fat arrows (=>) and most importantly, to change the state of any variable contained in a Stateful widget, one must call the setState method.

    The ListView.builder constructor allows us to work with very large lists, since list items are created only when they are scrolled into view.

    Another takeaway here is the fact that Dart is such an intuitive language that it is quite easy to understand and you can start writing Dart code immediately.

    Everything on a screen, like padding, alignment or opacity, is a widget. Notice in the code above, we have used Padding as a widget that wraps the list or a text widget depending on the number of to-do items. If there’s nothing in the list, a text widget is displayed with some default message.

    Also note how we haven’t used the new keyword when creating instances of a class, say Text. That’s because using the new keyword is optional in Dart and discouraged, according to the effective Dart guidelines.

    Running the application

    At this point, let’s run the code and see how the app looks on a device. Press F5, then select a virtual device and wait for the app to get installed. If you haven’t created a virtual device yet, refer to the getting started guide.

    Once the virtual device launches, we should see the following screen in a while. During development, the first launch always takes a while because the entire app gets built and installed on the virtual device, but subsequent changes to code are instantly reflected on the device, thanks to Flutter’s amazing hot reload feature. This reduces development time and also allows developers and designers to experiment more frequently with the interface changes.

    As we can see, there are no to-dos here yet. Now let’s add a floating action button that opens a dialog which we will use to add new to-do items.

    Adding the FAB is as easy as passing the floatingActionButton parameter to the Scaffold widget.

    floatingActionButton: FloatingActionButton(
      child: Icon(Icons.add),                                /// uses the built-in icons
      onPressed: () => _promptDialog(context),
    ),

    And declare a function inside ToDoListState that displays a popup (AlertDialog) with a text input box.

    /// display a dialog that accepts text
      _promptDialog(BuildContext context) {
        String _todoLabel = '';
        return showDialog(
            context: context,
            builder: (context) {
              return AlertDialog(
                title: Text('Enter TODO item'),
                content: TextField(
                    onChanged: (value) => _todoLabel = value,
                    decoration: InputDecoration(hintText: 'Add new TODO item')),
                actions: <Widget>[
                  FlatButton(
                    child: new Text('CANCEL'),
                    onPressed: () => Navigator.of(context).pop(),
                  ),
                  FlatButton(
                    child: new Text('ADD'),
                    onPressed: () {
                      setState(() => todos.add(Todo(_todoLabel, false)));
                      /// dismisses the alert dialog
                      Navigator.of(context).pop();
                    },
                  )
                ],
              );
            });
      }

    At this point, saving changes to the file should result in the application getting updated on the virtual device (hot reload), so we can just click on the new floating action button that appeared on the bottom right of the screen and start testing how the dialog looks.

    We used a few more built-in widgets here:

    • AlertDialog: a dialog prompt that opens up when clicking on the FAB
    • TextField: text input field for accepting user input
    • InputDecoration: a widget that adds style to the input field
    • FlatButton: a variation of button with no border or shadow
    • FloatingActionButton: a floating icon button, used to trigger primary action on the screen

    Here’s a quick preview of how the application should look and function at this point:

    And just like that, in less than 100 lines of code, we’ve built the user interface of a simple, cross platform to-do application.

    The source code for this application is available here.

    A few links to further explore Flutter:

    Conclusion:

    To conclude, Flutter is an extremely powerful toolkit for building cross-platform applications that have native performance and are beautiful to look at. Dart, the language behind Flutter, is designed with the nuances of user interface development in mind, and Flutter offers a wide range of built-in widgets. This makes development fun and development cycles shorter, something we experienced first-hand while building the to-do app. With Flutter, time to market is also greatly reduced, which enables teams to experiment more often, collect more feedback, and ship applications faster. And finally, Flutter has a very enthusiastic and thriving community of designers and developers who are always experimenting and adding to the Flutter ecosystem.