Tag: ios

  • Protecting Your Mobile App: Effective Methods to Combat Unauthorized Access

    Introduction: The Digital World’s Hidden Dangers

    Imagine you’re running a popular mobile app that offers rewards to users. Sounds exciting, right? But what if a few clever users find a way to cheat the system for more rewards? This is exactly the challenge many app developers face today.

    In this blog, we’ll describe a real-world story of how we fought back against digital tricksters and protected our app from fraud. It’s like a digital detective story, but instead of solving crimes, we’re stopping online cheaters.

    Understanding How Fraudsters Try to Trick the System

    The Sneaky World of Device Tricks

    Let’s break down how users may try to outsmart mobile apps:

    One way is through device ID manipulation. What is this? Think of a device ID like a unique fingerprint for your phone. Normally, each phone has its own special ID that helps apps recognize it. But some users have found ways to change this ID, kind of like wearing a disguise.

    Real-world example: Imagine you’re at a carnival with a ticket that lets you ride each ride once. A fraudster might try to change their appearance to get multiple rides. In the digital world, changing a device ID is similar—it lets users create multiple accounts and get more rewards than they should.

    How Do People Create Fake Accounts?

    Users have become super creative in making multiple accounts:

    • Using special apps that create virtual phone environments
    • Playing with email addresses
    • Using temporary email services

    A simple analogy: It’s like someone trying to enter a party multiple times by wearing different costumes and using slightly different names. The goal? To get more free snacks or entry benefits.

    The Detective Work: How to Catch These Digital Tricksters

    Tracking User Behavior

    Modern tracking tools are like having a super-smart security camera that doesn’t just record but actually understands what’s happening. Here are some powerful tools you can explore:

    LogRocket: Your App’s Instant Replay Detective

    LogRocket records and replays user sessions, capturing every interaction, error, and performance hiccup. It’s like having a video camera inside your app, helping developers understand exactly what users experience in real time.

    Quick snapshot:

    • Captures user interactions
    • Tracks performance issues
    • Provides detailed session replays
    • Helps identify and fix bugs instantly

    Mixpanel: The User Behavior Analyst

    Mixpanel is a smart analytics platform that breaks down user behavior, tracking how people use your app, where they drop off, and what features they love most. It’s like having a digital detective who understands your users’ journey.

    Key capabilities:

    • Tracks user actions
    • Creates behavior segments
    • Measures conversion rates
    • Provides actionable insights

    What They Do:

    • Notice unusual account creation patterns
    • Detect suspicious activities
    • Prevent potential fraud before it happens

    Email Validation: The First Line of Defense

    How it works:

    • Recognize similar email addresses
    • Prevent creating multiple accounts with slightly different emails
    • Block tricks like:
      • a.bhi629@gmail.com
      • abhi.629@gmail.com

    Real-life comparison: It’s like a smart mailroom that knows “John Smith” and “J. Smith” are the same person, preventing duplicate mail deliveries.
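As a concrete sketch of such a check (the function name and the Gmail-only rules below are our assumptions, not a universal standard; other providers treat dots differently), duplicate emails can be collapsed to one canonical key before account creation:

```swift
import Foundation

// Sketch: collapse Gmail-style address variants into one canonical key.
// Assumption: dots in the local part are ignored and "+tag" suffixes are
// stripped — true for Gmail, but not for every email provider.
func normalizedEmail(_ email: String) -> String {
    let parts = email.lowercased().split(separator: "@", maxSplits: 1)
    guard parts.count == 2 else { return email.lowercased() }
    var local = String(parts[0])
    let domain = String(parts[1])
    if domain == "gmail.com" || domain == "googlemail.com" {
        if let plus = local.firstIndex(of: "+") {
            local = String(local[..<plus])   // drop "+tag" suffix
        }
        local = local.replacingOccurrences(of: ".", with: "")
    }
    return "\(local)@\(domain)"
}
```

With this, a.bhi629@gmail.com and abhi.629@gmail.com both normalize to abhi629@gmail.com, so a duplicate-account check can treat them as the same identity.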

    Advanced Protection Strategies

    Device ID Tracking

    Key Functions:

    • Store unique device information
    • Check if a device has already claimed rewards
    • Prevent repeat bonus claims

    Simple explanation: Imagine a bouncer at a club who remembers everyone who’s already entered and stops them from sneaking in again.
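A minimal sketch of this bouncer logic (RewardGate is an illustrative name; on iOS the identifier could come from UIDevice.current.identifierForVendor, and a production system would persist the claimed set server-side rather than in memory):

```swift
import Foundation

// Sketch of the "bouncer" logic: remember which devices already claimed
// a reward and reject repeat claims. The device ID is passed in so the
// logic stays self-contained and testable.
final class RewardGate {
    private var claimedDevices = Set<String>()

    /// Returns true only the first time a given device claims the bonus.
    func claimBonus(deviceID: String) -> Bool {
        // insert(_:) reports whether the member was newly inserted.
        return claimedDevices.insert(deviceID).inserted
    }
}
```

The first claim from a device succeeds; every repeat claim from the same device ID is rejected.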

    Stopping Fake Device Environments

    Some users try to create fake device environments using apps like:

    • Parallel Space
    • Multiple account creators
    • Game cloners

    Protection method: The app identifies and blocks these applications, just like a security system that recognizes fake ID cards.

    Root Device Detection

    What is a Rooted Device? It’s like a phone that’s been modified to give users complete control, bypassing normal security restrictions.

    Detection techniques:

    • Check for special root access files
    • Verify device storage
    • Run specific detection commands

    Analogy: It’s similar to checking if a car has been illegally modified to bypass speed limits.
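On iOS, the equivalent of root detection is jailbreak detection. One common technique is checking for files that exist only on jailbroken devices; the sketch below uses an illustrative, deliberately incomplete path list, and determined attackers can hide these artifacts, so treat it as one layer rather than a complete defense:

```swift
import Foundation

// Heuristic jailbreak check: look for filesystem artifacts that only
// appear on jailbroken devices. The path list here is illustrative.
func isLikelyJailbroken() -> Bool {
    let suspiciousPaths = [
        "/Applications/Cydia.app",
        "/Library/MobileSubstrate/MobileSubstrate.dylib",
        "/private/var/lib/apt",
    ]
    return suspiciousPaths.contains { FileManager.default.fileExists(atPath: $0) }
}
```

On a stock device none of these paths exist and the function returns false.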

    Extra Security Layers

    Android Version Requirements

    Upgrading to newer Android versions provides additional security:

    • Better detection of modified devices
    • Stronger app protection
    • More restricted file access

    Simple explanation: It’s like upgrading your home’s security system to a more advanced model that can detect intruders more effectively.

    Additional Protection Methods

    • Data encryption
    • Secure internet communication
    • Location verification
    • Encrypted local storage

    Think of these as multiple locks on your digital front door, each providing an extra layer of protection.

    Real-World Implementation Challenges

    Why is This Important?

    Every time a fraudster successfully tricks the system:

    • The app loses money
    • Genuine users get frustrated
    • Trust in the platform decreases

    Business impact: Imagine running a loyalty program where some people find ways to get 10 times more rewards than others. Not fair, right?

    Practical Tips for App Developers

    • Always stay updated with the latest security trends
    • Regularly audit your app’s security
    • Use multiple protection layers
    • Be proactive, not reactive
    • Learn from each attempted fraud

    Common Misconceptions About App Security

    Myth: “My small app doesn’t need advanced security.” Reality: Every app, regardless of size, can be a target.

    Myth: “Security is a one-time setup.” Reality: Security is an ongoing process of learning and adapting.

    Learning from Real Experiences

    These examples come from actual developers at Velotio Technologies, who faced these challenges head-on. Their approach wasn’t about creating an unbreakable system but about making fraud increasingly difficult and expensive.

    The Human Side of Technology

    Behind every security feature is a human story:

    • Developers protecting user experiences
    • Companies maintaining trust
    • Users expecting fair treatment

    Looking to the Future

Technology will continue to evolve, and so will fraud techniques. The key is to:

    • Stay curious
    • Keep learning
    • Never assume you know everything

    Final Thoughts: Your App, Your Responsibility

    Protecting your mobile app isn’t just about implementing complex technical solutions; it’s about a holistic approach that encompasses understanding user behavior, creating fair experiences, and building trust. Here’s a deeper look into these critical aspects:

Understanding User Behavior:

    Understanding how users interact with your app is crucial. By analyzing user behavior, you can identify patterns that may indicate fraudulent activity. For instance, if a user suddenly starts claiming rewards at an unusually high rate, it could signal potential abuse.
    Utilize analytics tools to gather data on user interactions. This data can help you refine your app’s design and functionality, ensuring it meets genuine user needs while also being resilient against misuse.

Creating Fair Experiences:

    Clearly communicate your app’s rewards, account creation, and user behavior policies. Transparency helps users understand the rules and reduces the likelihood of attempts to game the system.
    Consider implementing a user agreement that outlines acceptable behavior and the consequences of fraudulent actions.

    Building Trust:

    Maintain open lines of communication with your users. Regular updates about security measures, app improvements, and user feedback can help build trust and loyalty.
    Use newsletters, social media, and in-app notifications to keep users informed about changes and enhancements.
    Provide responsive customer support to address user concerns promptly. If users feel heard and valued, they are less likely to engage in fraudulent behavior.

    Implement a robust support system that allows users to report suspicious activities easily and receive timely assistance.

    Remember: Every small protection measure counts.

    Call to Action

    Are you an app developer? Start reviewing your app’s security today. Don’t wait for a fraud incident to take action.

    Want to learn more?

    • Follow security blogs
    • Attend tech conferences
    • Connect with security experts
    • Never stop learning
  • React Native: Session Replay with Microsoft Clarity

Microsoft recently launched session replay support for iOS, covering both native iOS and React Native applications. We decided to see how it performs compared to competitors like LogRocket and UXCam.

This blog discusses what session replay is, how it works, and its benefits for debugging applications and understanding user behavior. We will then walk through integrating Microsoft Clarity into a React Native application and benchmark its performance against these competitors.

    Key Features of Session Replay

    Session replay provides a visual playback of user interactions on your application. This allows developers to observe how users navigate the app, identify any issues they encounter, and understand user behavior patterns. Here are some of the standout features:

    • User Interaction Tracking: Record clicks, scrolls, and navigation paths for a comprehensive view of user activities.
    • Error Monitoring: Capture and analyze errors in real time to quickly diagnose and fix issues.
    • Heatmaps: Visualize areas of high interaction to understand which parts of the app are most engaging.
    • Anonymized Data: Ensure user privacy by anonymizing sensitive information during session recording.

    Integrating Microsoft Clarity with React Native

    Integrating Microsoft Clarity into your React Native application is a straightforward process. Follow these steps to get started:

    1. Sign Up for Microsoft Clarity:

    a. Visit the Microsoft Clarity website and sign up for a free account.

b. Create a new project and obtain your Clarity project ID.

2. Install the Clarity SDK:

Use npm or yarn to install the Clarity SDK in your React Native project:

npm install @microsoft/react-native-clarity
yarn add @microsoft/react-native-clarity

3. Initialize Clarity in Your App:

Import and initialize Clarity in your main application file (e.g., App.js):

import { initialize } from '@microsoft/react-native-clarity';
initialize('YOUR_CLARITY_PROJECT_ID');

4. Verify Integration:

    a. Run your application and navigate through various screens to ensure Clarity is capturing session data correctly.

    b. Log into your Clarity dashboard to see the recorded sessions and analytics.

    Benchmarking Against Competitors

    To evaluate the performance of Microsoft Clarity, we’ll compare it against two popular session replay tools, LogRocket and UXCam, assessing them based on the following criteria:

    • Ease of Integration: How simple is integrating the tool into a React Native application?
    • Feature Set: What features does each tool offer for session replay and user behavior analysis?
    • Performance Impact: How does the tool impact the app’s performance and user experience?
    • Cost: What are the pricing models and how do they compare?

    Detailed Comparison

    Ease of Integration

    • Microsoft Clarity: The integration process is straightforward and well-documented, making it easy for developers to get started.
    • LogRocket: LogRocket also offers a simple integration process with comprehensive documentation and support.
    • UXCam: UXCam provides detailed guides and support for integration, but it may require additional configuration steps compared to Clarity and LogRocket.

    Feature Set

    • Microsoft Clarity: Offers robust session replay, heatmaps, and error monitoring. However, it may lack some advanced features found in premium tools.
    • LogRocket: Provides a rich set of features, including session replay, performance monitoring, Network request logs, and integration with other tools like Redux and GraphQL.
    • UXCam: Focuses on mobile app analytics with features like session replay, screen flow analysis, and retention tracking.

    Performance Impact

    • Microsoft Clarity: Minimal impact on app performance, making it a suitable choice for most applications.
    • LogRocket: Slightly heavier than Clarity but offers more advanced features. Performance impact is manageable with proper configuration.
    • UXCam: Designed for mobile apps with performance optimization in mind. The impact is generally low but can vary based on app complexity.

    Cost

    • Microsoft Clarity: Free to use, making it an excellent option for startups and small teams.
    • LogRocket: Offers tiered pricing plans, with a free tier for basic usage and paid plans for advanced features.
    • UXCam: Provides a range of pricing options, including a free tier. Paid plans offer more advanced features and higher data limits.

    Final Verdict

    After evaluating the key aspects of session replay tools, Microsoft Clarity stands out as a strong contender, especially for teams looking for a cost-effective solution with essential features. LogRocket and UXCam offer more advanced capabilities, which may be beneficial for larger teams or more complex applications.

    Ultimately, the right tool will depend on your specific needs and budget. For basic session replay and user behavior insights, Microsoft Clarity is a fantastic choice. If you require more comprehensive analytics and integrations, LogRocket or UXCam may be worth the investment.

    Sample App

    I have also created a basic sample app to demonstrate how to set up Microsoft Clarity for React Native apps.

    Please check it out here: https://github.com/rakesho-vel/ms-rn-clarity-sample-app

    This sample video shows how Microsoft Clarity records and lets you review user sessions on its dashboard.

    References

    1. https://clarity.microsoft.com/blog/clarity-sdk-release/
    2. https://web.swipeinsight.app/posts/microsoft-clarity-finally-launches-ios-sdk-8312

  • Optimizing iOS Memory Usage with Instruments Xcode Tool

    Introduction

    Developing iOS applications that deliver a smooth user experience requires more than just clean code and engaging features. Efficient memory management helps ensure that your app performs well and avoids common pitfalls like crashes and excessive battery drain. 

    In this blog, we’ll explore how to optimize memory usage in your iOS app using Xcode’s powerful Instruments and other memory management tools.

    Memory Management and Usage

    Before we delve into the other aspects of memory optimization, it’s important to understand why it’s so essential:

    Memory management in iOS refers to the process of allocating and deallocating memory for objects in an iOS application to ensure efficient and reliable operation. Proper memory management prevents issues like memory leaks, crashes, and excessive memory usage, which can degrade an app’s performance and user experience. 

    Memory management in iOS primarily involves the use of Automatic Reference Counting (ARC) and understanding how to manage memory effectively.

    Here are some key concepts and techniques related to memory management in iOS:

1. Automatic Reference Counting (ARC): ARC is a memory management technique introduced by Apple to automate memory management in Objective-C and Swift. With ARC, the compiler automatically inserts retain, release, and autorelease calls, ensuring that memory is allocated and deallocated as needed. Developers don’t need to manually call “retain,” “release,” or “autorelease” as they did in the pre-ARC era of manual memory management.
    2. Strong and Weak References: In ARC, objects have strong, weak, and unowned references. A strong reference keeps an object in memory as long as at least one strong reference to it exists. A weak reference, on the other hand, does not keep an object alive. It’s commonly used to avoid strong reference cycles (retain cycles) and potential memory leaks.
    3. Retain Cycles: A retain cycle occurs when two or more objects hold strong references to each other, creating a situation where they cannot be deallocated, even if they are no longer needed. To prevent retain cycles, you can use weak references, unowned references, or break the cycle manually by setting references to “nil” when appropriate.
    4. Avoiding Strong Reference Cycles: To avoid retain cycles, use weak references (and unowned references when appropriate) in situations where two objects reference each other. Also, consider using closure capture lists to prevent strong reference cycles when using closures.
    5. Resource Management: Memory management also includes managing other resources like files, network connections, and graphics contexts. Ensure you release or close these resources when they are no longer needed.
    6. Memory Profiling: The Memory Report in the Debug Navigator of Xcode is a tool used for monitoring and analyzing the memory usage of your iOS or macOS application during runtime. It provides valuable insights into how your app utilizes memory, helps identify memory-related issues, and allows you to optimize the application’s performance.

    Also, use tools like Instruments to profile your app’s memory usage and identify memory leaks and excessive memory consumption.
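To make points 2-4 concrete, here is a small sketch (the class name and the deallocHandler hook are illustrative, added only so deallocation is observable) showing how a closure capture list prevents a cycle between an object and a closure it stores:

```swift
import Foundation

// Sketch: self stores the closure, so a closure that captured self
// strongly would form a retain cycle. [weak self] breaks it.
final class ImageLoader {
    var onComplete: (() -> Void)?
    var deallocHandler: (() -> Void)?   // hook so deallocation is observable

    func load() {
        onComplete = { [weak self] in
            guard let self = self else { return }
            print("finished: \(self)")
        }
    }

    deinit { deallocHandler?() }
}
```

Setting the last reference to an ImageLoader to nil now triggers deinit; with a strong capture of self inside onComplete, the object would never be deallocated.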

    Instruments: Your Ally for Memory Optimization

In Xcode, “Instruments” refers to a set of performance analysis and debugging tools integrated into the Xcode development environment. These instruments are used by developers to monitor and analyze the performance of their iOS, macOS, watchOS, and tvOS applications during development and testing. Instruments help developers identify and address performance bottlenecks, memory issues, and other problems in their code.

     

    Some of the common instruments available in Xcode include:

    1. Allocations: The Allocations instrument helps you track memory allocations and deallocations in your app. It’s useful for detecting memory leaks and excessive memory usage.
    2. Leaks: The Leaks instrument finds memory leaks in your application. It can identify objects that are not properly deallocated.
    3. Time Profiler: Time Profiler helps you measure and analyze the CPU usage of your application over time. It can identify which functions or methods are consuming the most CPU resources.
    4. Custom Instruments: Xcode also allows you to create custom instruments tailored to your specific needs using the Instruments development framework.

    To use these instruments, you can run your application with profiling enabled, and then choose the instrument that best suits your performance analysis goals. 

    Launching Instruments

    Because Instruments is located inside Xcode’s app bundle, you won’t be able to find it in the Finder. 

    To launch Instruments on macOS, follow these steps:

    1. Open Xcode: Instruments is bundled with Xcode, Apple’s integrated development environment for macOS, iOS, watchOS, and tvOS app development. If you don’t have Xcode installed, you can download it from the Mac App Store or Apple’s developer website.
    2. Open Your Project: Launch Xcode and open the project for which you want to use Instruments. You can do this by selecting “File” > “Open” and then navigating to your project’s folder.
    3. Choose Instruments: Once your project is open, go to the “Xcode” menu at the top-left corner of the screen. From the drop-down menu, select “Open Developer Tool” and choose “Instruments.”
    4. Select a Template: Instruments will open, and you’ll see a window with a list of available performance templates on the left-hand side. These templates correspond to the different types of analysis you can perform. Choose the template that best matches the type of analysis you want to conduct. For example, you can select “Time Profiler” for CPU profiling or “Leaks” for memory analysis.
    5. Configure Settings: Depending on the template you selected, you may need to configure some settings or choose the target process (your app) you want to profile. These settings can typically be adjusted in the template configuration area.
    6. Start Recording: Click the red record button in the top-left corner of the Instruments window to start profiling your application. This will launch your app with the selected template and begin collecting performance data.
    7. Analyze Data: Interact with your application as you normally would to trigger the performance scenarios you want to analyze. Instruments will record data related to CPU usage, memory usage, network activity, and other aspects of your app’s performance.
    8. Stop Recording: When you’re done profiling your app, click the square “Stop” button in Instruments to stop recording data.
    9. Analyze Results: After stopping the recording, Instruments will display a detailed analysis of your app’s performance. You can explore various graphs, timelines, and reports to identify and address performance issues.
    10. Save or Share Results: You can save your Instruments session for future reference or share it with colleagues if needed.

    Using the Allocations Instrument

    The “Allocations” instrument helps you monitor memory allocation and deallocation. Here’s how to use it:

    1. Start the Allocations Instrument: In Instruments, select “Allocations” as your instrument.

    2. Profile Your App: Use your app as you normally would to trigger the scenarios you want to profile.

    3. Examine the Memory Allocation Graph: The graph displays memory usage over time. Look for spikes or steady increases in memory usage.

    4. Inspect Objects: The instrument provides a list of objects that have been allocated and deallocated. You can inspect these objects and their associated memory usage.

    5. Call Tree and Source Code: To pinpoint memory issues, use the Call Tree to identify the functions or methods responsible for memory allocation. You can then inspect the associated source code in the Source View.

    Detecting Memory Leaks with the Leaks Instrument

    Retain Cycle

    A retain cycle in Swift occurs when two or more objects hold strong references to each other in a way that prevents them from being deallocated, causing a memory leak. This situation is also known as a “strong reference cycle.” It’s essential to understand retain cycles because they can lead to increased memory usage and potential app crashes.  

    A common scenario for retain cycles is when two objects reference each other, both using strong references. 

    Here’s an example to illustrate a retain cycle:

class Person {
    var name: String
    var pet: Pet?

    init(name: String) {
        self.name = name
    }

    deinit {
        print("\(name) has been deallocated")
    }
}

class Pet {
    var name: String
    var owner: Person?

    init(name: String) {
        self.name = name
    }

    deinit {
        print("\(name) has been deallocated")
    }
}

var rohit: Person? = Person(name: "Rohit")
var jerry: Pet? = Pet(name: "Jerry")

rohit?.pet = jerry
jerry?.owner = rohit

rohit = nil
jerry = nil

    In this example, we have two classes, Person and Pet, representing a person and their pet. Both classes have a property to store a reference to the other class (person.pet and pet.owner).  

    The “Leaks” instrument is designed to detect memory leaks in your app. 

    Here’s how to use it:

    1. Launch Instruments in Xcode: First, open your project in Xcode.  

    2. Commence Profiling: To commence the profiling process, navigate to the “Product” menu and select “Profile.”  

    3. Select the Leaks Instrument: Within the Instruments interface, choose the “Leaks” instrument from the available options.  

    4. Trigger the Memory Leak Scenario: To trigger the scenario where memory is leaked, interact with your application. This interaction, such as creating a retain cycle, will induce the memory leak.

    5. Identify Leaked Objects: The Leaks Instrument will automatically detect and pinpoint the leaked objects, offering information about their origins, including backtraces and the responsible callers.  

    6. Analyze Backtraces and Responsible Callers: To gain insights into the context in which the memory leak occurred, you can inspect the source code in the Source View provided by Instruments.  

    7. Address the Leaks: Armed with this information, you can proceed to fix the memory leaks by making the necessary adjustments in your code to ensure memory is released correctly, preventing future occurrences of memory leaks.

After running this scenario, you should see the leaked objects reported in the Leaks instrument.

    The issue in the above code is that both Person and Pet are holding strong references to each other. When you create a Person and a Pet and set their respective references, a retain cycle is established. Even when you set rohit and jerry to nil, the objects are not deallocated, and the deinit methods are not called. This is a memory leak caused by the retain cycle. 

    To break the retain cycle and prevent this memory leak, you can use weak or unowned references. In this case, you can make the owner property in Pet a weak reference because a pet should not own its owner:

class Pet {
    var name: String
    weak var owner: Person?

    init(name: String) {
        self.name = name
    }

    deinit {
        print("\(name) has been deallocated")
    }
}

    By making owner a weak reference, the retain cycle is broken, and when you set rohit and jerry to nil, the objects will be deallocated, and the deinit methods will be called. This ensures proper memory management and avoids memory leaks.

    Best Practices for Memory Optimization

    In addition to using Instruments, consider the following best practices for memory optimization:

1. Release Memory Properly: Ensure that memory is released when objects are no longer needed.

2. Use Weak References: Use weak references when appropriate to prevent strong reference cycles.

3. Use Unowned References to Break Retain Cycles: An unowned reference does not increase an object’s reference count; unlike a weak reference, it is non-optional and assumes the referenced object outlives the referrer.

4. Minimize Singletons and Global Variables: These can lead to retained objects. Use them judiciously.

5. Implement Lazy Loading: Load resources lazily to reduce initial memory usage.
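As a sketch of the unowned-reference practice (modeled on the Customer/CreditCard example from the Swift language guide; the names are illustrative), a back-reference can be unowned when its target is guaranteed to outlive the referrer:

```swift
import Foundation

final class Customer {
    let name: String
    var card: CreditCard?
    init(name: String) { self.name = name }
}

final class CreditCard {
    let number: String
    // unowned: no reference count increment, and unlike weak it is
    // non-optional — it assumes the holder outlives the card, and
    // accessing it after the holder is deallocated would trap.
    unowned let holder: Customer
    init(number: String, holder: Customer) {
        self.number = number
        self.holder = holder
    }
}
```

Because the card does not keep its holder alive, releasing the last strong reference to the Customer deallocates both objects with no cycle.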

    Conclusion

    Optimizing memory usage is an essential part of creating high-quality iOS apps. 

    Instruments, integrated into Xcode, is a versatile tool that provides insights into memory allocation, leaks, and CPU-intensive code. By mastering these tools and best practices, you can ensure your app is memory-efficient, stable, and provides a superior user experience. Happy profiling!

  • Unlocking Cross-Platform Development with Kotlin Multiplatform Mobile (KMM)

    In the fast-paced and ever-changing world of software development, the task of designing applications that can smoothly operate on various platforms has become a significant hurdle. Developers frequently encounter a dilemma where they must decide between constructing distinct codebases for different platforms or opting for hybrid frameworks that come with certain trade-offs.

    Kotlin Multiplatform (KMP) is an extension of the Kotlin programming language that simplifies cross-platform development by bridging the gap between platforms. This game-changing technology has emerged as a powerful solution for creating cross-platform applications.

    Kotlin Multiplatform Mobile (KMM) is a subset of KMP that provides a specific framework and toolset for building cross-platform mobile applications using Kotlin. KMM is developed by JetBrains to simplify the process of building mobile apps that can run seamlessly on multiple platforms.

    In this article, we will take a deep dive into Kotlin Multiplatform Mobile, exploring its features and benefits and how it enables developers to write shared code that runs natively on multiple platforms.

    What is Kotlin Multiplatform Mobile (KMM)?

    With KMM, developers can share code between Android and iOS platforms, eliminating the need for duplicating efforts and maintaining separate codebases. This significantly reduces development time and effort while improving code consistency and maintainability.

    KMM offers support for a wide range of UI frameworks, libraries, and app architectures, providing developers with flexibility and options. It can seamlessly integrate with existing Android projects, allowing for the gradual adoption of cross-platform development. Additionally, KMM projects can be developed and tested using familiar build tools, making the transition to KMM as smooth as possible.

    KMM vs. Other Platforms

    Here’s a table comparing the KMM (Kotlin Multiplatform Mobile) framework with some other popular cross-platform mobile development platforms:

    Sharing Code Across Multiple Platforms:

    Advantages of Utilizing Kotlin Multiplatform (KMM) in Projects

    Code sharing: Encourages code reuse and reduces duplication, leading to faster development.

Faster time-to-market: Accelerates mobile app development, since shared code means less to write and maintain per platform.

    Consistency: Ensures consistency across platforms for better user experience.

    Collaboration between Android and iOS teams: Facilitates collaboration between Android and iOS development teams to improve efficiency.

    Access to Native APIs: Allows developers to access platform-specific APIs and features.

    Reduced maintenance overhead: Shared codebase makes maintenance easier and more efficient.

    Existing Kotlin and Android ecosystem: Provides access to libraries, tools, and resources for developers.

Gradual adoption: Lets teams adopt cross-platform development incrementally by sharing individual modules and components.

    Performance and efficiency: Generates optimized code for each platform, resulting in efficient and performant applications.

    Community and support: Benefits from active community, resources, tutorials, and support.

    Limitations of Using KMM in Projects

Limited platform-specific APIs: Shared code cannot call platform-specific APIs directly; platform features must be reached through expect/actual declarations or platform-specific modules.

Platform-dependent setup and tooling: Although the shared language is platform-agnostic, setup and tooling remain platform-dependent (Xcode for iOS, Android Studio for Android).

    Limited interoperability with existing platform code: Interoperability between Kotlin Multiplatform and existing platform code can be challenging.

    Development and debugging experience: Provides code sharing, but development and debugging experience differ.

    Limited third-party library support: There aren’t many ready-to-use libraries available, so developers must implement from scratch or look for alternatives.

    Setting Up Environment for Cross-Platform Development in Android Studio

    Developing Kotlin Multiplatform Mobile (KMM) apps as an Android developer is relatively straightforward. You can use Android Studio, the same IDE that you use for Android app development. 

    To get started, we will need to install the KMM plugin through the IDE plugin manager, which is a simple step. The advantage of using Android Studio for KMM development is that we can create and run iOS apps from within the same IDE. This can help streamline the development process, making it easier to build and test apps across multiple platforms.

    In order to enable the building and running of iOS apps through Android Studio, it’s necessary to have Xcode installed on your system. Xcode is an Integrated Development Environment (IDE) used for iOS programming.

To ensure that all dependencies are installed correctly for our Kotlin Multiplatform Mobile (KMM) project, we can use kdoctor. This tool can be installed via Homebrew by running the following command on the command line:

    $ brew install kdoctor 

    Note: If you don’t have Homebrew yet, please install it.

Once we have all the necessary tools installed on our system, including Android Studio, Xcode, JDK, the Kotlin Multiplatform Mobile plugin, and the Kotlin plugin, we can run kdoctor in the Android Studio terminal or in our command-line tool by entering the following command:

    $ kdoctor 

    This will confirm that all required dependencies are properly installed and configured for our KMM project.

    kdoctor will perform comprehensive checks and provide a detailed report with the results.

If kdoctor detects any issues, it will flag them in the report so we can fix them before proceeding.

To resolve the warning mentioned above (a locale-related warning is common at this step), create a ~/.zprofile file and export the locale variables so they apply to your shell:

    $ touch ~/.zprofile

    $ export LANG=en_US.UTF-8

    $ export LC_ALL=en_US.UTF-8

    After making the above necessary changes to our environment, we can run kdoctor again to verify that everything is set up correctly. Once kdoctor confirms that all dependencies are properly installed and configured, we are done.

    Building Biometric Face & Fingerprint Authentication Application

    Let’s explore Kotlin Multiplatform Mobile (KMM) by creating an application for face and fingerprint authentication. Here our aim is to leverage KMM’s potential by developing shared code for both Android and iOS platforms. This will promote code reuse and reduce redundancy, leading to optimized code for each platform.

    Set Up an Android project

    To initiate a new project, we will launch Android Studio, select the Kotlin Multiplatform App option from the New Project template, and click on “Next.”

    We will add the fundamental application information, such as the name of the application and the project’s location, on the following screen.

Lastly, we choose the dependency manager for the iOS framework (the Regular framework option is recommended) and click on “Next.”

For the iOS app, we can switch the dependency between the Regular framework and the CocoaPods dependency manager.

    After clicking the “Finish” button, the KMM project is created successfully and ready to be utilized.

    After finishing the Gradle sync process, we can execute both the iOS and Android apps by simply clicking the run button located in the toolbar.

    In this illustration, we can observe the structure of a KMM project. The KMM project is organized into three directories: shared, androidApp, and iosApp.

    androidApp: It contains Android app code and follows the typical structure of a standard Android application.

    iosApp: It contains iOS application code, which can be opened in Xcode using the .xcodeproj file.

    shared: It contains code and resources that are shared between the Android (androidApp) and iOS (iosApp) platforms. It allows developers to write platform-independent logic and components that can be reused across both platforms, reducing code duplication and improving development efficiency.
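The structure described above can be sketched as a directory tree (a typical KMM template layout; the exact folders may differ slightly between plugin versions):

```
KMM_Biometric_App/
├── androidApp/            # Android application code (standard Android structure)
├── iosApp/                # iOS application; open via the .xcodeproj file in Xcode
└── shared/
    └── src/
        ├── commonMain/    # platform-independent code (expect declarations)
        ├── androidMain/   # Android-specific actual implementations
        └── iosMain/       # iOS-specific actual implementations
```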

    Launch the iOS app and establish a connection with the framework.

Before proceeding with iOS app development, ensure that both Xcode and CocoaPods are installed on your system.

Open the root project folder of the KMM application (KMM_Biometric_App) developed using Android Studio and navigate to the iosApp folder. Within the iosApp folder, locate the .xcodeproj file and double-click on it to open it.

After launching the iosApp in Xcode, the next step is to establish a connection between the framework and the iOS application. To do this, you will need to access the iOS project settings by double-clicking on the project name. Once you are in the project settings, navigate to the Build Phases tab and select the “+” button to add a new Run Script Phase.

    Add the following script:

    cd “$SRCROOT/..”

    ./gradlew :shared:embedAndSignAppleFrameworkForXcode

    Move the Run Script phase before the Compile Sources phase.

    Navigate to the All build settings on the Build Settings tab and locate the Search Paths section. Within this section, specify the Framework Search Path:

    $(SRCROOT)/../shared/build/xcode-frameworks/$(CONFIGURATION)/$(SDK_NAME)

    In the Linking section of the Build Settings tab, specify the Other Linker flags:

    $(inherited) -framework shared

    Compile the project in Xcode. If all the settings are configured correctly, the project should build successfully.

    Implement Biometric Authentication in the Android App

    To enable Biometric Authentication, we will utilize the BiometricPrompt component available in the Jetpack Biometric library. This component simplifies the process of implementing biometric authentication, but it is only compatible with Android 6.0 (API level 23) and later versions. If we require support for earlier Android versions, we must explore alternative approaches.

    Biometric Library:

implementation("androidx.biometric:biometric-ktx:1.2.0-alpha05")

To add the Biometric dependency for Android, we must include it in the androidMain source set in the build.gradle.kts file located in the shared folder. This step is specific to Android development.

// shared/build.gradle.kts

sourceSets {
    val androidMain by getting {
        dependencies {
            implementation("androidx.biometric:biometric-ktx:1.2.0-alpha05")
        }
        // ...
    }
    // ...
}

    Next, we will generate the FaceAuthenticator class within the commonMain folder, which will allow us to share the Biometric Authentication business logic between the Android and iOS platforms.

    // shared/commonMain/FaceAuthenticator

    expect class FaceAuthenticator {
       fun isDeviceHasBiometric(): Boolean
       fun authenticateWithFace(callback: (Boolean) -> Unit)
    }

    In shared code, the “expect” keyword signifies an expected behavior or interface. It indicates a declaration that is expected to be implemented differently on each platform. By using “expect,” you establish a contract or API that the platform-specific implementations must satisfy.

    The “actual” keyword is utilized to provide the platform-specific implementation for the expected behavior or interface defined with the “expect” keyword. It represents the concrete implementation that varies across different platforms. By using “actual,” you supply the code that fulfills the contract established by the “expect” declaration.

    There are 3 different types of authenticators, defined at a level of granularity supported by BiometricManager and BiometricPrompt.


    Multiple authenticators, such as BIOMETRIC_STRONG | DEVICE_CREDENTIAL | BIOMETRIC_WEAK, can be represented as a single integer by combining their types using bitwise OR.

    BIOMETRIC_STRONG: Any biometric (e.g., fingerprint, iris, or face) on the device that meets or exceeds the requirements for Class 3 (formerly Strong), as defined by the Android CDD.

    BIOMETRIC_WEAK: Any biometric (e.g., fingerprint, iris, or face) on the device that meets or exceeds the requirements for Class 2 (formerly Weak), as defined by the Android CDD.

    DEVICE_CREDENTIAL: Authentication using a screen lock credential—the user’s PIN, pattern, or password.
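As a quick self-contained sketch of how these flags combine, the snippet below reproduces the documented integer values of the BiometricManager.Authenticators constants purely for illustration; in real code, reference the library constants directly:

```kotlin
// Integer values as documented for androidx BiometricManager.Authenticators.
const val BIOMETRIC_STRONG = 0x000F
const val BIOMETRIC_WEAK = 0x00FF
const val DEVICE_CREDENTIAL = 0x8000

fun main() {
    // Combine multiple authenticator types into a single Int with bitwise OR.
    val allowed = BIOMETRIC_STRONG or DEVICE_CREDENTIAL

    // Membership is then a bitwise AND test.
    println(allowed and DEVICE_CREDENTIAL != 0) // prints true: device credential allowed
    println(allowed and BIOMETRIC_STRONG != 0)  // prints true: strong biometrics allowed
}
```

This single Int is what later gets passed to both canAuthenticate() and setAllowedAuthenticators().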

Now let’s create the actual implementation of the FaceAuthenticator class in the androidMain folder of the shared module.

    // shared/androidMain/FaceAuthenticator

    actual class FaceAuthenticator(context: FragmentActivity) {
       actual fun isDeviceHasBiometric(): Boolean {
           // code to check biometric available
       }
    
       actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
           // code to authenticate using biometric
       }
    }

    Add the following code to the isDeviceHasBiometric() function to determine whether the device supports biometric authentication or not.

actual class FaceAuthenticator(context: FragmentActivity) {

    var activity: FragmentActivity = context

    @RequiresApi(Build.VERSION_CODES.R)
    actual fun isDeviceHasBiometric(): Boolean {
        val biometricManager = BiometricManager.from(activity)
        when (biometricManager.canAuthenticate(BIOMETRIC_STRONG or BIOMETRIC_WEAK)) {
            BiometricManager.BIOMETRIC_SUCCESS -> {
                Log.d("FaceAuthenticator", "App can authenticate using biometrics.")
                return true
            }

            BiometricManager.BIOMETRIC_ERROR_NO_HARDWARE -> {
                Log.e("FaceAuthenticator", "No biometric features available on this device.")
                return false
            }

            BiometricManager.BIOMETRIC_ERROR_HW_UNAVAILABLE -> {
                Log.e("FaceAuthenticator", "Biometric features are currently unavailable.")
                return false
            }

            BiometricManager.BIOMETRIC_ERROR_NONE_ENROLLED -> {
                // No biometrics enrolled: prompt the user to create credentials
                // that the app accepts.
                Log.e("FaceAuthenticator", "No biometric credentials enrolled.")
                val enrollIntent = Intent(Settings.ACTION_BIOMETRIC_ENROLL).apply {
                    putExtra(
                        Settings.EXTRA_BIOMETRIC_AUTHENTICATORS_ALLOWED,
                        BIOMETRIC_STRONG or BIOMETRIC_WEAK
                    )
                }

                startActivityForResult(activity, enrollIntent, 100, null)
            }

            BiometricManager.BIOMETRIC_ERROR_SECURITY_UPDATE_REQUIRED -> {
                Log.e(
                    "FaceAuthenticator",
                    "A security vulnerability has been discovered and the sensor is unavailable until a security update addresses the issue."
                )
            }

            BiometricManager.BIOMETRIC_ERROR_UNSUPPORTED -> {
                Log.e(
                    "FaceAuthenticator",
                    "The user can't authenticate because the specified options are incompatible with the current Android version."
                )
            }

            BiometricManager.BIOMETRIC_STATUS_UNKNOWN -> {
                Log.e(
                    "FaceAuthenticator",
                    "Unable to determine whether the user can authenticate."
                )
            }
        }
        return false
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // code to authenticate using biometric
    }

}

    In the provided code snippet, an instance of BiometricManager is created, and the canAuthenticate() method is invoked to determine whether the user can authenticate with an authenticator that satisfies the specified requirements. To accomplish this, you must pass the same bitwise combination of types, which you declared using the setAllowedAuthenticators() method, into the canAuthenticate() method.

    To perform biometric authentication, insert the following code into the authenticateWithFace() method.

actual class FaceAuthenticator(context: FragmentActivity) {

    var activity: FragmentActivity = context

    // isDeviceHasBiometric() is identical to the previous snippet and is
    // omitted here for brevity.
    
        @RequiresApi(Build.VERSION_CODES.P)
        actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
            
            // Create prompt Info to set prompt details
            val promptInfo = BiometricPrompt.PromptInfo.Builder()
                .setTitle("Authentication using biometric")
                .setSubtitle("Authenticate using face/fingerprint")
            .setAllowedAuthenticators(BIOMETRIC_STRONG or BIOMETRIC_WEAK)
                .setNegativeButtonText("Cancel")
                .build()
    
            // Create biometricPrompt object to get authentication callback result
            val biometricPrompt = BiometricPrompt(activity, activity.mainExecutor,
                object : BiometricPrompt.AuthenticationCallback() {
                    override fun onAuthenticationError(
                        errorCode: Int,
                        errString: CharSequence,
                    ) {
                        super.onAuthenticationError(errorCode, errString)
                        Toast.makeText(activity, "Authentication error: $errString", Toast.LENGTH_SHORT)
                            .show()
                        callback(false)
                    }
    
                    override fun onAuthenticationSucceeded(
                        result: BiometricPrompt.AuthenticationResult,
                    ) {
                        super.onAuthenticationSucceeded(result)
                        Toast.makeText(activity, "Authentication succeeded!", Toast.LENGTH_SHORT).show()
                        callback(true)
                    }
    
                    override fun onAuthenticationFailed() {
                        super.onAuthenticationFailed()
                        Toast.makeText(activity, "Authentication failed", Toast.LENGTH_SHORT).show()
                        callback(false)
                    }
                })
    
            //Authenticate using biometric prompt
            biometricPrompt.authenticate(promptInfo)
        }
    
    }

In the code above, the BiometricPrompt.PromptInfo.Builder gathers the details to be displayed on the biometric dialog provided by the system.

    The setAllowedAuthenticators() function enables us to indicate the authenticators that are permitted for biometric authentication.


    // Create prompt Info to set prompt details
val promptInfo = BiometricPrompt.PromptInfo.Builder()
    .setTitle("Authentication using biometric")
    .setSubtitle("Authenticate using face/fingerprint")
    .setAllowedAuthenticators(BIOMETRIC_STRONG or BIOMETRIC_WEAK)
    .setNegativeButtonText("Cancel")
    .build()

It is not possible to combine .setAllowedAuthenticators(BIOMETRIC_WEAK or DEVICE_CREDENTIAL) with .setNegativeButtonText(“Cancel”) in a BiometricPrompt.PromptInfo.Builder instance. When DEVICE_CREDENTIAL is among the allowed authenticators, the system replaces the negative button with its own “use screen lock” option, so setting both results in an error.

However, it is possible (and in fact required) to combine .setAllowedAuthenticators(BIOMETRIC_WEAK or BIOMETRIC_STRONG) with .setNegativeButtonText(“Cancel”): when only biometric authenticators are allowed, the negative button is the user’s way to dismiss the prompt. Note that this configuration does not fall back to the device credential; to offer that fallback, include DEVICE_CREDENTIAL in the allowed authenticators and omit the negative button text.
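As a concrete sketch of which builder combinations are valid (Android-only illustration; it assumes the usual androidx.biometric imports and will not run outside an Android app):

```kotlin
import androidx.biometric.BiometricManager.Authenticators.BIOMETRIC_STRONG
import androidx.biometric.BiometricManager.Authenticators.BIOMETRIC_WEAK
import androidx.biometric.BiometricManager.Authenticators.DEVICE_CREDENTIAL
import androidx.biometric.BiometricPrompt

// Valid: biometrics only; a negative button is required so the user
// can dismiss the prompt.
val biometricOnly = BiometricPrompt.PromptInfo.Builder()
    .setTitle("Authentication using biometric")
    .setAllowedAuthenticators(BIOMETRIC_STRONG or BIOMETRIC_WEAK)
    .setNegativeButtonText("Cancel")
    .build()

// Valid: device credential allowed as a fallback; the system supplies its own
// "use screen lock" action, so setNegativeButtonText() must NOT be called here.
val withCredentialFallback = BiometricPrompt.PromptInfo.Builder()
    .setTitle("Authentication using biometric")
    .setAllowedAuthenticators(BIOMETRIC_WEAK or DEVICE_CREDENTIAL)
    .build()
```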

    The BiometricPrompt object facilitates biometric authentication and provides an AuthenticationCallback to handle the outcomes of the authentication process, indicating whether it was successful or encountered a failure.

    val biometricPrompt = BiometricPrompt(activity, activity.mainExecutor,
                object : BiometricPrompt.AuthenticationCallback() {
                    override fun onAuthenticationError(
                        errorCode: Int,
                        errString: CharSequence,
                    ) {
                        super.onAuthenticationError(errorCode, errString)
                        Toast.makeText(activity, "Authentication error: $errString", Toast.LENGTH_SHORT)
                            .show()
                        callback(false)
                    }
    
                    override fun onAuthenticationSucceeded(
                        result: BiometricPrompt.AuthenticationResult,
                    ) {
                        super.onAuthenticationSucceeded(result)
                        Toast.makeText(activity, "Authentication succeeded!", Toast.LENGTH_SHORT).show()
                        callback(true)
                    }
    
                    override fun onAuthenticationFailed() {
                        super.onAuthenticationFailed()
                        Toast.makeText(activity, "Authentication failed", Toast.LENGTH_SHORT).show()
                        callback(false)
                    }
                })
    
            //Authenticate using biometric prompt
            biometricPrompt.authenticate(promptInfo)

    Now, we have completed the coding of the shared code for Android in the androidMain folder. To utilize this code, we can create a new file named LoginActivity.kt within the androidApp folder.

    // androidApp/LoginActivity

    class LoginActivity : AppCompatActivity() {
    
        @RequiresApi(Build.VERSION_CODES.R)
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_login)
    
            val authenticate = findViewById<Button>(R.id.authenticate_button)
            authenticate.setOnClickListener {
    
            val faceAuthenticatorImpl = FaceAuthenticator(this)
            if (faceAuthenticatorImpl.isDeviceHasBiometric()) {
                faceAuthenticatorImpl.authenticateWithFace {
                    if (it) { Log.d("LoginActivity", "Authentication Successful") }
                    else { Log.d("LoginActivity", "Authentication Failed") }
                }
                }
    
            }
        }
    }

    Implement Biometric Authentication In iOS App

For authentication, iOS provides a dedicated framework: the Local Authentication framework.

    The Local Authentication framework provides a way to integrate biometric authentication (such as Touch ID or Face ID) and device passcode authentication into your app. This framework allows you to enhance the security of your app by leveraging the biometric capabilities of the device or the device passcode.

Now, let’s create the actual implementation of the FaceAuthenticator class in the iosMain folder of the shared module.

    // shared/iosMain/FaceAuthenticator

actual class FaceAuthenticator {
       actual fun isDeviceHasBiometric(): Boolean {
           // code to check biometric available
       }
    
       actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
           // code to authenticate using biometric
       }
    }

    Add the following code to the isDeviceHasBiometric() function to determine whether the device supports biometric authentication or not.

    actual class FaceAuthenticator {
    
        actual fun isDeviceHasBiometric(): Boolean {
            // Check if face authentication is available
            val context = LAContext()
            val error = memScoped {
                allocPointerTo<ObjCObjectVar<NSError?>>()
            }
            return context.canEvaluatePolicy(LAPolicyDeviceOwnerAuthentication, error = error.value)
        }
    
        actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
            // code to authenticate using biometric
        }
    }

    In the above code, LAContext class is part of the Local Authentication framework in iOS. It represents a context for evaluating authentication policies and handling biometric or passcode authentication. 

    LAPolicy represents different authentication policies that can be used with the LAContext class. The LAPolicy enum defines the following policies:

    .deviceOwnerAuthenticationWithBiometrics

    This policy allows the user to authenticate using biometric authentication, such as Touch ID or Face ID. If the device supports biometric authentication and the user has enrolled their biometric data, the authentication prompt will appear for biometric verification.

    .deviceOwnerAuthentication 

    This policy allows the user to authenticate using either biometric authentication (if available) or the device passcode. If biometric authentication is supported and the user has enrolled their biometric data, the prompt will appear for biometric verification. Otherwise, the device passcode will be used for authentication.

    We have used the LAPolicyDeviceOwnerAuthentication policy constant, which authenticates either by biometry or the device passcode.

    We have used the canEvaluatePolicy(_:error:) method to check if the device supports biometric authentication and if the user has added any biometric information (e.g., Touch ID or Face ID).

    To perform biometric authentication, insert the following code into the authenticateWithFace() method.

    // shared/iosMain/FaceAuthenticator

    actual class FaceAuthenticator {
    
        actual fun isDeviceHasBiometric(): Boolean {
            // Check if face authentication is available
            val context = LAContext()
            val error = memScoped {
                allocPointerTo<ObjCObjectVar<NSError?>>()
            }
            return context.canEvaluatePolicy(LAPolicyDeviceOwnerAuthentication, error = error.value)
        }
    
    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // Authenticate using biometric
        val context = LAContext()
        val reason = "Authenticate using face"

        if (isDeviceHasBiometric()) {
            // Perform face authentication
            context.evaluatePolicy(
                LAPolicyDeviceOwnerAuthentication,
                localizedReason = reason
            ) { success: Boolean, nsError: NSError? ->
                callback(success)
                if (!success) {
                    print(nsError?.localizedDescription ?: "Failed to authenticate")
                }
            }
        } else {
            // Report failure only when biometrics are unavailable; invoking the
            // callback unconditionally here would fire it a second time after
            // the asynchronous evaluatePolicy completion.
            callback(false)
        }
    }
    
    }

The primary purpose of LAContext is to evaluate authentication policies, such as biometric authentication or device passcode authentication. The main method for this is evaluatePolicy(_:localizedReason:reply:).

This method triggers an authentication request, and the result is returned in the completion block. The localizedReason parameter is a message that explains why the authentication is required and is shown during the authentication process.

    When using evaluatePolicy(_:localizedReason:reply:), we may have the option to fall back to device passcode authentication or cancel the authentication process. We can handle these scenarios by inspecting the LAError object passed in the error parameter of the completion block:

if let error = error as? LAError {
    switch error.code {
    case .userFallback:
        // User tapped the fallback button; present a passcode entry UI
        break
    case .userCancel:
        // User canceled the authentication
        break
    default:
        // Handle other error cases as needed
        break
    }
}

    That concludes the coding of the shared code for iOS in the iosMain folder. We can utilize this by creating LoginView.swift in the iosApp folder.

struct LoginView: View {
    // @State is required so the assignment inside the callback updates the view;
    // a plain local variable declared inside body would not persist.
    @State private var isFaceAuthenticated: Bool = false
    let faceAuthenticator = FaceAuthenticator()

    var body: some View {
        Button(action: {
            if faceAuthenticator.isDeviceHasBiometric() {
                faceAuthenticator.authenticateWithFace { isSuccess in
                    isFaceAuthenticated = isSuccess.boolValue
                    print("Result is \(isFaceAuthenticated)")
                }
            }
        }) {
            Text("Authenticate")
                .padding()
                .background(Color.blue)
                .foregroundColor(.white)
                .cornerRadius(10)
        }
    }
}

    This ends our implementation of biometric authentication using the KMM application that runs smoothly on both Android and iOS platforms. If you’re interested, you can find the code for this project on our GitHub repository. We would love to hear your thoughts and feedback on our implementation.

    Conclusion

    It is important to acknowledge that while KMM offers numerous advantages, it may not be suitable for every project. Applications with extensive platform-specific requirements or intricate UI components may still require platform-specific development. Nonetheless, KMM can still prove beneficial in such scenarios by facilitating the sharing of non-UI code and minimizing redundancy.

    On the whole, Kotlin Multiplatform Mobile is an exciting framework that empowers developers to effortlessly create cross-platform applications. It provides an efficient and adaptable solution for building robust and high-performing mobile apps, streamlining development processes, and boosting productivity. With its expanding ecosystem and strong community support, KMM is poised to play a significant role in shaping the future of mobile app development.

  • How to setup iOS app with Apple developer account and TestFlight from scratch

    In this article, we will discuss how to set up the Apple developer account, build an app (create IPA files), configure TestFlight, and deploy it to TestFlight for the very first time.

There are tons of articles explaining how to configure and build an app, how to set up TestFlight, or how to set up an application for ad hoc distribution. However, most of them are either outdated or missing steps and can be misleading for someone who is doing this for the very first time.

If you haven’t done this before, don’t worry: work through the details of this article, follow every step carefully, and you will be able to set up your iOS application end-to-end, ready for TestFlight or ad hoc distribution, within an hour.

    Prerequisites

Before we start, please make sure you have:

• A React Native project created and opened in Xcode
• Xcode set up on your Mac
• An Apple developer account with access to create Identifiers and Certificates, i.e., at least Developer or Admin access – https://developer.apple.com/account/
• Access to App Store Connect with your Apple developer account – https://appstoreconnect.apple.com/

If you don’t have an Apple developer account yet, please get one created first.

    The Setup contains 4 major steps: 

    • Creating Certificates, Identifiers, and Profiles from your Apple Developer account
    • Configuring the iOS app using these Identifiers, Certificates, and Profiles in XCode
    • Setting up TestFlight and Internal Testers group on App Store Connect
    • Generating iOS builds, signing them, and uploading them to TestFlight on App Store Connect

    Certificates, Identifiers, and Profiles

    Before we do anything, we need to create:

    • Bundle Identifier, which is an app bundle ID and a unique app identifier used by the App Store
    • A Certificate – to sign the iOS app before submitting it to the App Store
    • Provisioning Profile – for linking bundle ID and certificates together

    Bundle Identifiers

    For the App Store to recognize your app uniquely, we need to create a unique Bundle Identifier.

    Go to https://developer.apple.com/account: you will see the Certificates, Identifiers & Profiles tab. Click on Identifiers. 

    Click the Plus icon next to Identifiers:

    Select the App IDs option from the list of options and click Continue:

    Select App from app types and click Continue

On the next page, you will need to enter the app ID and select any capabilities your application requires (this is optional; you can enable them later when you actually implement them).

Keep them unselected for now, as we don’t need them for this setup.

Once you have filled in all the information, click Continue and register your Bundle Identifier.

    Generating Certificate

Certificates can be generated in two ways:

    • By automatically managing certificates from Xcode
    • By manually generating them

    We will generate them manually.

To create a certificate, we need a Certificate Signing Request (CSR), which is generated from your Mac’s Keychain Access application.

    Creating Certificate Signing Request:

Open the Keychain Access application and click on the Keychain Access menu item at the top left of the screen.

Select Certificate Assistant -> Request a Certificate From a Certificate Authority

    Enter the required information like email address and name, then select the Save to Disk option.

    Click Continue and save the request file somewhere you can easily find it when uploading it to your Apple Developer account.
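    If you prefer the command line, a CSR can also be generated with openssl. A minimal sketch; the key file name and the subject fields (email, name, country) below are placeholders for your own details:

```shell
# Generate a 2048-bit RSA private key and a certificate signing request (CSR).
# The subject fields (email, common name, country) are placeholders.
openssl genrsa -out ios_distribution.key 2048
openssl req -new -key ios_distribution.key \
  -out ios_distribution.csr \
  -subj "/emailAddress=dev@example.com/CN=Your Name/C=US"
# Keep the .key file safe: the certificate Apple issues is only usable with it.
```

    The resulting .csr file can be uploaded to the developer portal in the same way as one produced by Keychain Access.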

    Now head back to the Apple Developer account and click on Certificates. Again, click on the + icon next to the Certificates title, and you will be taken to the new certificate form.

    Select the iOS Distribution (App Store and Ad Hoc) option. The list also includes service-specific certificate types (for example, the Apple Push Notification service certificate).

    As we don’t need any of those services, ignore them for now and click Continue.

    On the next screen, upload the certificate signing request form we generated in the last step and click Continue.

    At this step, your certificate will be generated and will be available to download.

    NOTE: The certificate can be downloaded only once, so please download it and keep it in a secure location to use it in the future.

    Download your certificate and install it by double-clicking the downloaded certificate file. The certificate will be installed on your Mac and can be used for generating builds in the next steps.

    You can verify this by going back to the Keychain Access app and confirming that the newly installed certificate appears in the certificates list.
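    The same check can be done from Terminal: `security find-identity` lists the signing identities (certificate plus private key pairs) in your keychain. A sketch, guarded so it only runs where the macOS `security` tool exists:

```shell
# List code-signing identities in the keychain (macOS only).
if command -v security >/dev/null 2>&1; then
  security find-identity -v -p codesigning
else
  echo "The security tool is only available on macOS; skipping"
fi
```

    Your new distribution certificate should appear in the output; if it is missing, the private key from the CSR step is probably not in the same keychain.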

    Generating a Provisioning Profile

    Now link your identifier and certificate together by creating a provisioning profile.

    Let’s go back to the Apple developer account, select the profiles option, and select the + icon next to the Profiles title.

    You will be redirected to the new Profiles form page.

    Select the App Store option under Distribution and click Continue:

    Select the App ID we created in the first step and click Continue:

    Now, select the certificate we created in the previous step:

    Enter a Provisioning Profile name and click Generate:

    Once the profile is generated, it will be available to download. Download it and keep it in the same location as the certificate for future use.
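    Provisioning profiles are CMS-signed plists, so on macOS you can inspect what a downloaded profile actually contains (bundle ID, certificates, expiry date) from Terminal. A sketch, assuming the profile was saved as profile.mobileprovision (a placeholder name):

```shell
# Decode a downloaded provisioning profile into readable XML (macOS only).
# profile.mobileprovision is a placeholder for your downloaded profile.
if command -v security >/dev/null 2>&1 && [ -f profile.mobileprovision ]; then
  security cms -D -i profile.mobileprovision
else
  echo "Requires macOS and a downloaded profile.mobileprovision file"
fi
```

    This is a quick way to confirm the profile really links the bundle ID and certificate you created before moving on to Xcode.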

    Configure the App in Xcode

    Now, we need to configure our iOS application using the bundle ID and the Apple developer account we used for generating the certificate and profiles.

    Open the <appname>.xcworkspace file in Xcode and click on the app name in the left pane. It will open the app configuration page.

    Select the app from Targets, go to Signing & Capabilities, and enter the bundle identifier.

    Now, to let Xcode manage signing automatically, we need it to download the provisioning profile we just generated.

    For this, we need to sign in to Xcode with your Apple ID.

    Select Preferences from the Xcode menu at the top left, go to Accounts, and click on the + icon at the bottom.

    Select Apple ID from the list of account types, click Continue, and enter your Apple ID.

    It will prompt you to enter the password as well.

    Once successfully logged in, Xcode will fetch all the provisioning profiles associated with this account. Verify that you see your team in the Teams section of this account page.

    Now, go back to the Xcode Signing & Capabilities page, select Automatically Manage Signing, and then select the required team from the Team dropdown.

    At this point, your application is ready to generate archives, which you can upload to TestFlight or sign ad hoc to distribute through other channels (Diawi, etc.).

    Setup TestFlight

    TestFlight and App Store releases are managed through the App Store Connect portal.

    Open the App Store Connect portal and log in to the application.

    After you log in, please make sure you have selected the correct team from the top right corner (you can check the team name just below the user name).

    Select My Apps from the list of options. 

    If this is the first time you are setting up an application on this team, you will see the + (Add app) option at the center of the page; if your team has already set up applications, you will see the + icon right next to the Apps header.

    Click on the + icon and select New App Option:

    Enter the complete app details: platform (iOS, macOS, or tvOS), app name, bundle ID (the one we created), SKU, and access type, then click the Create button.

    You should now be able to see your newly created application in the Apps menu. Select the app and go to TestFlight. You will see no builds there, as we have not pushed any yet.

    Generate and upload the build to TestFlight

    At this point, we are fully ready to generate a build from Xcode and push it to TestFlight. To do this, head back to Xcode.

    In the top middle section, you will see your app name and a right arrow. There might be an iPhone or another simulator selected; please click on the options list and select Any iOS Device.

    Select the Product menu from the Menu list and click on the Archive option.

    Once the archive succeeds, Xcode will open the Organizer window (you can also open this page from the Window menu).

    Here, we sign our application archive (build) using the certificate we created and upload it to the App Store Connect TestFlight.

    In the Organizer window, you will see the recently generated build. Select the build and click the Distribute App button in the right panel of the Organizer page.

    On the next page, select App Store Connect from the “Select a method of distribution” window and click Continue.

    NOTE: We are selecting the App Store Connect option as we want to upload a build to TestFlight, but if you want to distribute it privately using other channels, please select the Ad Hoc option.

    Select Upload from the “Select a Destination” options and click Continue. This will prepare your build for submission to App Store Connect TestFlight.
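    The Archive and Distribute steps can also be scripted, which is handy for CI. A sketch using xcodebuild; the workspace name, scheme, output paths, and team ID below are placeholders for your project, and the `app-store` method matches the App Store Connect choice above (use `ad-hoc` for private distribution):

```shell
# Export options consumed by xcodebuild -exportArchive.
# TEAMID and the method value are placeholders/choices for your project.
cat > ExportOptions.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>method</key>
    <string>app-store</string>
    <key>teamID</key>
    <string>TEAMID</string>
</dict>
</plist>
EOF

# Archive and export (requires a Mac with Xcode and your project present).
if command -v xcodebuild >/dev/null 2>&1 && [ -e MyApp.xcworkspace ]; then
  xcodebuild -workspace MyApp.xcworkspace -scheme MyApp \
    -configuration Release -destination 'generic/platform=iOS' \
    -archivePath build/MyApp.xcarchive archive
  xcodebuild -exportArchive -archivePath build/MyApp.xcarchive \
    -exportOptionsPlist ExportOptions.plist -exportPath build/export
else
  echo "xcodebuild and the project workspace are required; run on macOS"
fi
```

    The export step produces an .ipa in the export path, signed with the certificate and profile we set up earlier.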

    The first time, it will ask you how you want to sign the build: Automatically or Manually?

    Please select Automatically and click the Next button.

    Xcode may ask you to authorize access to your signing certificate using your system password. Please authenticate and wait until Xcode uploads the build to TestFlight.

    Once the build is uploaded successfully, Xcode will show a Success modal.

    Now, your app is uploaded to TestFlight and is being processed. Processing usually takes 5 to 15 minutes, after which TestFlight makes the build available for testing.
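    The upload itself can also be done without the Organizer window, e.g. from a CI machine, using altool (note that Apple has been steering users toward newer App Store Connect tooling, so check the current docs). A sketch; the .ipa path and credentials are placeholders, and the password must be an app-specific password, not your Apple ID password:

```shell
# Upload an exported .ipa to App Store Connect / TestFlight (macOS only).
# The .ipa path, Apple ID, and app-specific password are placeholders.
if command -v xcrun >/dev/null 2>&1 && [ -f build/export/MyApp.ipa ]; then
  xcrun altool --upload-app -f build/export/MyApp.ipa -t ios \
    -u "dev@example.com" -p "app-specific-password"
else
  echo "Requires macOS, Xcode command-line tools, and an exported .ipa"
fi
```

    After a successful upload, the build appears in TestFlight and goes through the same processing described above.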

    Add Internal Testers and other teammates to TestFlight

    Once we are done with all the setup and uploaded the build to TestFlight, we need to add internal testers to TestFlight.

    This is a two-step process: first add the user to App Store Connect, then add them to TestFlight.

    • Go to Users and Access.
    • Add a new user; App Store Connect sends an invitation to the user.
    • Once the user accepts the invitation, go to TestFlight -> Internal Testing.
    • In the Internal Testing section, create a new testing group if one does not exist already, and add the user to the TestFlight testing group.

    Now, you should be able to configure the app, upload it to TestFlight, and add users to the TestFlight testing group.

    Hopefully, you enjoyed this article and it helped you set up an iOS application end-to-end quickly, without too much confusion.

    Thanks.