Category: Software Engineering

  • Mastering TV App Development: Building Seamless Experiences with EnactJS and WebOS

    As the world of smart TVs evolves, delivering immersive and seamless viewing experiences is more crucial than ever. At Velotio Technologies, we take pride in our proven expertise in crafting high-quality TV applications that redefine user engagement. Over the years, we have built multiple TV apps across diverse platforms, and our mastery of cutting-edge JavaScript frameworks, like EnactJS, has consistently set us apart.

    Our experience extends to WebOS Open Source Edition (OSE), a versatile and innovative platform for smart device development. WebOS OSE’s seamless integration with EnactJS allows us to deliver native-quality apps optimized for smart TVs that offer advanced features like D-pad navigation, real-time communication with system APIs, and modular UI components.

    This blog delves into how we harness the power of WebOS OSE and EnactJS to build scalable, performant TV apps. Learn how Velotio’s expertise in JavaScript frameworks and WebOS technologies drives innovation, creating seamless, future-ready solutions for smart TVs and beyond.

    This blog begins by showcasing the unique features and capabilities of WebOS OSE and EnactJS. We then dive into the technical details of my development journey — building a TV app with a web-based UI that communicates with proprietary C++ modules. From designing the app’s architecture to overcoming platform-specific challenges, this guide is a practical resource for developers venturing into WebOS app development.

    What Makes WebOS OSE and EnactJS Stand Out?

    • Native-quality apps with web technologies: Develop lightweight, responsive apps using familiar HTML, CSS, and JavaScript.
    • Optimized for TV and beyond: EnactJS offers seamless D-pad navigation and localization for Smart TVs, along with modularity for diverse platforms like automotive and IoT.
    • Real-time integration with system APIs: Use Luna Bus to enable bidirectional communication between the UI and native services.
    • Scalability and customization: Component-based architecture allows easy scaling and adaptation of designs for different use cases.
    • Open source innovation: WebOS OSE provides an open, adaptable platform for developing cutting-edge applications.

    What Does This Guide Cover?

    The rest of this blog details my development experience, offering insights into the architecture, tools, and strategies for building TV apps:

    • R&D and Designing the Architecture
    • Choosing EnactJS for UI Development
    • Customizing UI Components for Flexibility
    • Navigation Strategy for TV Apps
    • Handling Emulation and Simulation Gaps
    • Setting Up the Development Machine for the Simulator
    • Setting Up the Development Machine for the Emulator
    • Real-Time Updates (Subscription) with Luna Bus Integration
    • Packaging, Deployment, and App Updates

    R&D and Designing the Architecture

    The app had to connect a web-based interface (HTML, CSS, JS) to proprietary C++ services interacting with system-level processes. This setup is uncommon for WebOS OSE apps, posing two core challenges:

    1. Limited documentation: Resources for WebOS app development were scarce.
    2. WebAssembly infeasibility: Converting the C++ module to WebAssembly would restrict access to system-level processes.

    Solution: An Intermediate C++ Service capable of interacting with both the UI and other C++ modules

    To bridge these gaps, I implemented an intermediate C++ service to:

    • Communicate between the UI and the proprietary C++ service.
    • Use Luna Bus APIs to send and receive messages.

    This approach not only solved the integration challenges but also laid a scalable foundation for future app functionality.

    Architecture

    The WebApp architecture employs MVVM (Model-View-ViewModel), Component-Based Architecture (CBA), and Atomic Design principles to achieve modularity, reusability, and maintainability.

    App Architecture Highlights:

    • WebApp frontend: Web-based UI using EnactJS.
    • External native service: Intermediate C++ service (w/ Client SDK) interacting with the UI via Luna Bus.

    Block Diagram of the App Architecture

    Choosing EnactJS for UI Development

    With the integration architecture in place, I focused on UI development. The D-pad compatibility required for smart TVs narrowed the choice of frameworks to EnactJS, a React-based framework optimized for WebOS apps.

    Why EnactJS?

    • Built-in TV compatibility: Supports remote navigation out-of-the-box.
    • React-based syntax: Familiar for front-end developers.

    Customizing UI Components for Flexibility

    EnactJS’s default components had restrictive customization options and lacked the flexibility for the desired app design.

    Solution: A Custom Design Library

    I reverse-engineered EnactJS’s building blocks (e.g., Buttons, Toggles, Popovers) and created my own atomic components aligned with the app’s design.

    This approach helped in two key ways:

    1. Scalability: The design system allowed me to build complex screens using predefined components quickly.
    2. Flexibility: Complete control over styling and functionality.

    Navigation Strategy for TV Apps

    In the absence of any recommended navigation tool for WebOS, I employed a straightforward navigation model using conditional-based routing:

    1. High-level flow selection: Determining the current process (e.g., Home, Settings).
    2. Step navigation: Tracking the user’s current step within the selected flow.

    This conditional-based routing minimized complexity and avoided adding unnecessary tools like react-router.
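    The routing model above can be sketched as a plain lookup from flow and step to a screen. The flow names, screen names, and the `resolveScreen` helper below are illustrative, not the app's actual code:

```javascript
// Illustrative flow/step tables -- the real app's flows and screens differ.
const FLOWS = {
  home:     ['HomeDashboard', 'AppGrid'],
  settings: ['SettingsMenu', 'NetworkSetup', 'DisplaySetup'],
};

// Resolve which screen component to render from the current flow and step.
function resolveScreen(flow, step) {
  const steps = FLOWS[flow];
  if (!steps) throw new Error(`Unknown flow: ${flow}`);
  // Clamp the step so D-pad over/under-navigation never leaves the flow.
  const index = Math.max(0, Math.min(step, steps.length - 1));
  return steps[index];
}
```

    In the real app, `flow` and `step` would live in component state, with remote-control events incrementing or decrementing `step`.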

    Handling Emulation and Simulation Gaps

    The WebOS OSE simulator was straightforward to use and compatible with Mac and Linux. However, testing the native C++ services needed a Linux-based emulator.

    The Problem: Slow Build Times Slow Down Development

    Building and deploying code on the emulator had long cycles, drastically slowing development.

    Solution: Mock Services

    To mitigate this, I built a JavaScript-based mock service to replicate the native C++ functionality:

    • On Mac, I used the mock service for rapid UI iterations on the Simulator.
    • On Linux, I swapped the mock service with the real native service for final testing on the Emulator.

    This separation of development and testing environments streamlined the process, saving hours during the UI and flow development.
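    A minimal sketch of this swap, assuming a service interface with a single `request(method, params)` call; the method name `getStatus` and the response shape are hypothetical, not the app's real API:

```javascript
// A mock that mimics the native service's request/response shape.
const mockService = {
  request(method, params) {
    // Canned responses for rapid UI iteration on the simulator.
    if (method === 'getStatus') {
      return Promise.resolve({ returnValue: true, status: 'idle' });
    }
    return Promise.reject(new Error(`Unmocked method: ${method}`));
  },
};

// Pick the backend once at startup; the UI code never knows the difference.
function createService(useMock, realService) {
  return useMock ? mockService : realService;
}
```

    Because the UI only ever sees the object returned by `createService`, switching from the simulator (mock) to the emulator (real Luna-backed service) is a one-line change.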

    Setting Up the Development Machine for the Simulator

    To set up your machine for WebApp development with the simulator, install the webOS Studio extension for VSCode, along with Git, Python3, NVM, and Node.js.

    Install WebOS OSE CLI (ares) and configure the TV profile using ares-config. Then, clone the repository, install the dependencies, and run the WebApp in watch mode with npm run watch.

    Using the webOS Studio extension, set up the WebOS TV 24 Simulator via the Package Manager or manually. Finally, deploy and test the app on the simulator using the extension, and inspect logs directly from the virtual remote interface.

    Note: Ensure the profile is set to TV because the simulator works only with the TV profile.

    ares-config --profile tv

    Setting Up the Development Machine for the Emulator

    To set up your development machine for WebApp and Native Service development with an emulator, ensure you have a Linux machine and WebOS OSE CLI.

    Install essential tools like Git, GCC, Make, CMake, Python3, NVM, and VirtualBox.

    Build the WebOS Native Development Kit (NDK) using the build-webos repository, which may take 8–10 hours.

    Configure the emulator in VirtualBox and add it as a target device using ares-setup-device. Clone the repositories, build the WebApp and Native Service, package them into an IPK, install it on the emulator using ares-install, and launch the app with ares-launch.

    Registering the Emulator as a Target Device for ares Commands

    This step is required before you can install the IPK to the emulator.

    Note: To find the IP address of the WebOS Emulator, go to Settings -> Network -> Wired Connection.

    ares-setup-device --add target -i "host=192.168.1.1" -i "port=22" -i "username=root" -i "default=true"

    Real-Time Updates (Subscription) with Luna Bus Integration

    One feature required real-time updates from the C++ module to the UI. While the Luna Bus API provided a means to establish a subscription, I encountered challenges with:

    • Lifecycle Management: Re-subscriptions would fail due to improper cleanup.

    Solution: Custom Subscription Management

    I designed a custom logic layer for stable subscription management, ensuring seamless, real-time updates without interruptions.
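    The actual subscription layer is proprietary, but the cleanup idea can be sketched against a generic `transport` object (in the real app this role is played by Luna Bus requests); all names below are illustrative:

```javascript
// Keeps at most one live subscription per Luna URI, cancelling the old
// handle before opening a new one so re-subscriptions cannot fail.
function createSubscriptionManager(transport) {
  const handles = new Map();
  return {
    subscribe(uri, onUpdate) {
      const old = handles.get(uri);
      if (old) old.cancel(); // proper cleanup before re-subscribing
      const handle = transport.subscribe(uri, onUpdate);
      handles.set(uri, handle);
      return handle;
    },
    unsubscribeAll() {
      for (const handle of handles.values()) handle.cancel();
      handles.clear();
    },
  };
}
```

    Centralizing the handles in one map means component unmounts and re-subscriptions both go through the same cancellation path, which is the property that keeps real-time updates stable.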

    Packaging, Deployment, and App Updates

    Packaging

    Build a production dist of the Enact app, build the native service with make, and then use the ares-package command to produce an IPK containing both the dist and the native service builds.

    npm run pack
    
    cd com.example.app.controller
    mkdir BUILD
    cd BUILD
    source /usr/local/webos-sdk-x86_64/environment-setup-core2-64-webos-linux
    cmake ..
    make
    
    ares-package -n app/dist webos/com.example.app.controller/pkg_x86_64

    Deployment

    The external native service will need to be packaged with the UI code to get an IPK, which can then be installed on the WebOS platform manually.

    ares-install com.example.app_1.0.0_all.ipk -d target
    ares-launch com.example.app -d target

    App Updates

    App updates need to be delivered as Firmware-Over-the-Air (FOTA) updates, which are based on libostree.

    WebOS OSE 2.0.0+ supports Firmware-Over-the-Air (FOTA) using libostree, a “git-like” system for managing Linux filesystem upgrades. It enables atomic version upgrades without reflashing by storing sysroots and tracking filesystem changes efficiently. The setup involves preparing a remote repository on a build machine, configuring webos-local.conf, and building a webos-image. Devices upgrade via commands to fetch and deploy rootfs revisions. Writable filesystem support (hotfix mode) allows temporary or persistent changes. Rollback requires manually reconfiguring boot deployment settings. FOTA is supported only on physical devices such as the Raspberry Pi 4, not on emulators; it simplifies platform updates while conserving disk space.

    Key Learnings and Recommendations

    1. Mock Early, Test Real: Use mock services for UI development and switch to real services only during final integration.
    2. Build for Reusability: Custom components and a modular architecture saved time during iteration.
    3. Plan for Roadblocks: Niche platforms like WebOS require self-reliance and patience due to limited community support.

    Conclusion: Mastering WebOS Development — A Journey of Innovation

    Building a WebOS TV app was a rewarding challenge. With WebOS OSE and EnactJS, developers can create native-quality apps using familiar web technologies. WebOS OSE stands out for its high performance, seamless integration, and robust localization support, making it ideal for TV app development and beyond (automotive, IoT, and robotics). Pairing it with EnactJS, a React-based framework, simplifies the process with D-pad compatibility and optimized navigation for TV experiences.

    This project showed just how powerful WebOS and EnactJS can be in building apps that bridge web-based UIs and C++ backend services. Leveraging tools like Luna Bus for real-time updates, creating a custom design system, and extending EnactJS’s flexibility allowed for a smooth and scalable development process.

    The biggest takeaway is that developing for niche platforms like WebOS requires persistence, creativity, and the right approach. When you face roadblocks and there’s limited help available, try to come up with your own creative solutions, and persist! Keep iterating, learning, and embracing the journey, and you’ll be able to unlock exciting possibilities.

  • Protecting Your Mobile App: Effective Methods to Combat Unauthorized Access

    Introduction: The Digital World’s Hidden Dangers

    Imagine you’re running a popular mobile app that offers rewards to users. Sounds exciting, right? But what if a few clever users find a way to cheat the system for more rewards? This is exactly the challenge many app developers face today.

    In this blog, we’ll describe a real-world story of how we fought back against digital tricksters and protected our app from fraud. It’s like a digital detective story, but instead of solving crimes, we’re stopping online cheaters.

    Understanding How Fraudsters Try to Trick the System

    The Sneaky World of Device Tricks

    Let’s break down how users may try to outsmart mobile apps:

    One way is through device ID manipulation. What is this? Think of a device ID like a unique fingerprint for your phone. Normally, each phone has its own special ID that helps apps recognize it. But some users have found ways to change this ID, kind of like wearing a disguise.

    Real-world example: Imagine you’re at a carnival with a ticket that lets you ride each ride once. A fraudster might try to change their appearance to get multiple rides. In the digital world, changing a device ID is similar—it lets users create multiple accounts and get more rewards than they should.

    How Do People Create Fake Accounts?

    Users have become super creative in making multiple accounts:

    • Using special apps that create virtual phone environments
    • Playing with email addresses
    • Using temporary email services

    A simple analogy: It’s like someone trying to enter a party multiple times by wearing different costumes and using slightly different names. The goal? To get more free snacks or entry benefits.

    The Detective Work: How to Catch These Digital Tricksters

    Tracking User Behavior

    Modern tracking tools are like having a super-smart security camera that doesn’t just record but actually understands what’s happening. Here are some powerful tools you can explore:

    LogRocket: Your App’s Instant Replay Detective

    LogRocket records and replays user sessions, capturing every interaction, error, and performance hiccup. It’s like having a video camera inside your app, helping developers understand exactly what users experience in real time.

    Quick snapshot:

    • Captures user interactions
    • Tracks performance issues
    • Provides detailed session replays
    • Helps identify and fix bugs instantly

    Mixpanel: The User Behavior Analyst

    Mixpanel is a smart analytics platform that breaks down user behavior, tracking how people use your app, where they drop off, and what features they love most. It’s like having a digital detective who understands your users’ journey.

    Key capabilities:

    • Tracks user actions
    • Creates behavior segments
    • Measures conversion rates
    • Provides actionable insights

    What They Do:

    • Notice unusual account creation patterns
    • Detect suspicious activities
    • Prevent potential fraud before it happens

    Email Validation: The First Line of Defense

    How it works:

    • Recognize similar email addresses
    • Prevent creating multiple accounts with slightly different emails
    • Block tricks like:
      • a.bhi629@gmail.com
      • abhi.629@gmail.com

    Real-life comparison: It’s like a smart mailroom that knows “John Smith” and “J. Smith” are the same person, preventing duplicate mail deliveries.
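    A hedged sketch of this normalization for Gmail-style addresses, where dots in the local part are ignored and `+suffix` aliases are stripped; `normalizeEmail` is a hypothetical helper, not a library function:

```javascript
// Providers whose local part ignores dots (Gmail-style aliasing).
const DOT_INSENSITIVE = new Set(['gmail.com', 'googlemail.com']);

// Reduce an address to a canonical form so aliases collide on signup.
function normalizeEmail(email) {
  const [rawLocal, rawDomain] = email.trim().toLowerCase().split('@');
  let local = rawLocal.split('+')[0];   // drop +suffix aliases
  if (DOT_INSENSITIVE.has(rawDomain)) {
    local = local.replace(/\./g, '');   // dots are ignored by Gmail
  }
  return `${local}@${rawDomain}`;
}
```

    Storing the normalized form (and checking new signups against it) is what makes `a.bhi629@gmail.com` and `abhi.629@gmail.com` collide as the same account. Note that dot-insensitivity is provider-specific, so the rule should only be applied to domains known to behave this way.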

    Advanced Protection Strategies

    Device ID Tracking

    Key Functions:

    • Store unique device information
    • Check if a device has already claimed rewards
    • Prevent repeat bonus claims

    Simple explanation: Imagine a bouncer at a club who remembers everyone who’s already entered and stops them from sneaking in again.
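    The check-then-claim logic might look like the following sketch, with an in-memory Set standing in for the real server-side store of device IDs (the function name and reward flow are illustrative):

```javascript
// In production this would be a database table keyed by device ID;
// a Set is enough to illustrate the check-then-claim logic.
const claimedDevices = new Set();

// Returns true only the first time a given device claims the bonus.
function claimSignupBonus(deviceId) {
  if (claimedDevices.has(deviceId)) {
    return false; // device already claimed -- block the repeat attempt
  }
  claimedDevices.add(deviceId);
  return true;
}
```

    The important design point is that the check happens server-side, keyed on the device, not on the (easily multiplied) user account.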

    Stopping Fake Device Environments

    Some users try to create fake device environments using apps like:

    • Parallel Space
    • Multiple account creators
    • Game cloners

    Protection method: The app identifies and blocks these applications, just like a security system that recognizes fake ID cards.

    Root Device Detection

    What is a Rooted Device? It’s like a phone that’s been modified to give users complete control, bypassing normal security restrictions.

    Detection techniques:

    • Check for special root access files
    • Verify device storage
    • Run specific detection commands

    Analogy: It’s similar to checking if a car has been illegally modified to bypass speed limits.
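    One common detection technique, checking for `su` binaries at well-known paths, can be sketched as below. The path list and the injected `fileExists` callback are illustrative (in React Native you would back it with a filesystem module or a native check), and a production detector would combine several signals rather than rely on one:

```javascript
// Paths where the `su` binary commonly lives on rooted Android devices.
const SU_PATHS = [
  '/system/bin/su', '/system/xbin/su', '/sbin/su',
  '/system/app/Superuser.apk', '/data/local/xbin/su',
];

// `fileExists` is injected so the check can run against any filesystem
// API (a native module, a JS fs wrapper, or a test double).
async function isLikelyRooted(fileExists) {
  for (const path of SU_PATHS) {
    if (await fileExists(path)) return true;
  }
  return false;
}
```

    Injecting the filesystem check also makes the detector easy to unit test without a rooted device.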

    Extra Security Layers

    Android Version Requirements

    Upgrading to newer Android versions provides additional security:

    • Better detection of modified devices
    • Stronger app protection
    • More restricted file access

    Simple explanation: It’s like upgrading your home’s security system to a more advanced model that can detect intruders more effectively.

    Additional Protection Methods

    • Data encryption
    • Secure internet communication
    • Location verification
    • Encrypted local storage

    Think of these as multiple locks on your digital front door, each providing an extra layer of protection.

    Real-World Implementation Challenges

    Why is This Important?

    Every time a fraudster successfully tricks the system:

    • The app loses money
    • Genuine users get frustrated
    • Trust in the platform decreases

    Business impact: Imagine running a loyalty program where some people find ways to get 10 times more rewards than others. Not fair, right?

    Practical Tips for App Developers

    • Always stay updated with the latest security trends
    • Regularly audit your app’s security
    • Use multiple protection layers
    • Be proactive, not reactive
    • Learn from each attempted fraud

    Common Misconceptions About App Security

    Myth: “My small app doesn’t need advanced security.” Reality: Every app, regardless of size, can be a target.

    Myth: “Security is a one-time setup.” Reality: Security is an ongoing process of learning and adapting.

    Learning from Real Experiences

    These examples come from actual developers at Velotio Technologies, who faced these challenges head-on. Their approach wasn’t about creating an unbreakable system but about making fraud increasingly difficult and expensive.

    The Human Side of Technology

    Behind every security feature is a human story:

    • Developers protecting user experiences
    • Companies maintaining trust
    • Users expecting fair treatment

    Looking to the Future

    Technology will continue evolving, and so, too, will fraud techniques. The key is to:

    • Stay curious
    • Keep learning
    • Never assume you know everything

    Final Thoughts: Your App, Your Responsibility

    Protecting your mobile app isn’t just about implementing complex technical solutions; it’s about a holistic approach that encompasses understanding user behavior, creating fair experiences, and building trust. Here’s a deeper look into these critical aspects:

    Understanding User Behavior:

    Understanding how users interact with your app is crucial. By analyzing user behavior, you can identify patterns that may indicate fraudulent activity. For instance, if a user suddenly starts claiming rewards at an unusually high rate, it could signal potential abuse.
    Utilize analytics tools to gather data on user interactions. This data can help you refine your app’s design and functionality, ensuring it meets genuine user needs while also being resilient against misuse.

    Creating Fair Experiences:

    Clearly communicate your app’s rewards, account creation, and user behavior policies. Transparency helps users understand the rules and reduces the likelihood of attempts to game the system.
    Consider implementing a user agreement that outlines acceptable behavior and the consequences of fraudulent actions.

    Building Trust:

    Maintain open lines of communication with your users. Regular updates about security measures, app improvements, and user feedback can help build trust and loyalty.
    Use newsletters, social media, and in-app notifications to keep users informed about changes and enhancements.
    Provide responsive customer support to address user concerns promptly. If users feel heard and valued, they are less likely to engage in fraudulent behavior.

    Implement a robust support system that allows users to report suspicious activities easily and receive timely assistance.

    Remember: Every small protection measure counts.

    Call to Action

    Are you an app developer? Start reviewing your app’s security today. Don’t wait for a fraud incident to take action.

    Want to learn more?

    • Follow security blogs
    • Attend tech conferences
    • Connect with security experts
    • Never stop learning

  • React Native: Session Replay with Microsoft Clarity

    Microsoft recently launched session replay support for iOS, covering both native iOS and React Native applications. We decided to see how it performs compared to competitors like LogRocket and UXCam.

    This blog discusses what session replay is, how it works, and its benefits for debugging applications and understanding user behavior. We will also walk through integrating Microsoft Clarity into a React Native application and benchmark its performance against other popular tools.

    Key Features of Session Replay

    Session replay provides a visual playback of user interactions on your application. This allows developers to observe how users navigate the app, identify any issues they encounter, and understand user behavior patterns. Here are some of the standout features:

    • User Interaction Tracking: Record clicks, scrolls, and navigation paths for a comprehensive view of user activities.
    • Error Monitoring: Capture and analyze errors in real time to quickly diagnose and fix issues.
    • Heatmaps: Visualize areas of high interaction to understand which parts of the app are most engaging.
    • Anonymized Data: Ensure user privacy by anonymizing sensitive information during session recording.

    Integrating Microsoft Clarity with React Native

    Integrating Microsoft Clarity into your React Native application is a straightforward process. Follow these steps to get started:

    1. Sign Up for Microsoft Clarity:

    a. Visit the Microsoft Clarity website and sign up for a free account.

    b. Create a new project and obtain your Clarity tracking code.

    2. Install the Clarity SDK:

    Use npm or yarn to install the Clarity SDK in your React Native project:

    npm install @microsoft/react-native-clarity
    yarn add @microsoft/react-native-clarity

    3. Initialize Clarity in Your App:

    Import and initialize Clarity in your main application file (e.g., App.js):

    import * as Clarity from '@microsoft/react-native-clarity';
    Clarity.initialize('YOUR_CLARITY_TRACKING_CODE');

    4. Verify Integration:

    a. Run your application and navigate through various screens to ensure Clarity is capturing session data correctly.

    b. Log into your Clarity dashboard to see the recorded sessions and analytics.

    Benchmarking Against Competitors

    To evaluate the performance of Microsoft Clarity, we’ll compare it against two popular session replay tools, LogRocket and UXCam, assessing them based on the following criteria:

    • Ease of Integration: How simple is integrating the tool into a React Native application?
    • Feature Set: What features does each tool offer for session replay and user behavior analysis?
    • Performance Impact: How does the tool impact the app’s performance and user experience?
    • Cost: What are the pricing models and how do they compare?

    Detailed Comparison

    Ease of Integration

    • Microsoft Clarity: The integration process is straightforward and well-documented, making it easy for developers to get started.
    • LogRocket: LogRocket also offers a simple integration process with comprehensive documentation and support.
    • UXCam: UXCam provides detailed guides and support for integration, but it may require additional configuration steps compared to Clarity and LogRocket.

    Feature Set

    • Microsoft Clarity: Offers robust session replay, heatmaps, and error monitoring. However, it may lack some advanced features found in premium tools.
    • LogRocket: Provides a rich set of features, including session replay, performance monitoring, network request logs, and integration with other tools like Redux and GraphQL.
    • UXCam: Focuses on mobile app analytics with features like session replay, screen flow analysis, and retention tracking.

    Performance Impact

    • Microsoft Clarity: Minimal impact on app performance, making it a suitable choice for most applications.
    • LogRocket: Slightly heavier than Clarity but offers more advanced features. Performance impact is manageable with proper configuration.
    • UXCam: Designed for mobile apps with performance optimization in mind. The impact is generally low but can vary based on app complexity.

    Cost

    • Microsoft Clarity: Free to use, making it an excellent option for startups and small teams.
    • LogRocket: Offers tiered pricing plans, with a free tier for basic usage and paid plans for advanced features.
    • UXCam: Provides a range of pricing options, including a free tier. Paid plans offer more advanced features and higher data limits.

    Final Verdict

    After evaluating the key aspects of session replay tools, Microsoft Clarity stands out as a strong contender, especially for teams looking for a cost-effective solution with essential features. LogRocket and UXCam offer more advanced capabilities, which may be beneficial for larger teams or more complex applications.

    Ultimately, the right tool will depend on your specific needs and budget. For basic session replay and user behavior insights, Microsoft Clarity is a fantastic choice. If you require more comprehensive analytics and integrations, LogRocket or UXCam may be worth the investment.

    Sample App

    I have also created a basic sample app to demonstrate how to set up Microsoft Clarity for React Native apps.

    Please check it out here: https://github.com/rakesho-vel/ms-rn-clarity-sample-app

    This sample video shows how Microsoft Clarity records and lets you review user sessions on its dashboard.

    References

    1. https://clarity.microsoft.com/blog/clarity-sdk-release/
    2. https://web.swipeinsight.app/posts/microsoft-clarity-finally-launches-ios-sdk-8312

  • JNIgen: Simplify Native Integration in Flutter

    Prepare to embark on a groundbreaking journey through the realms of Flutter as we uncover a remarkable new feature: JNIgen. In this blog, we pull back the curtain to reveal JNIgen’s transformative power, from simplifying intricate tasks to amplifying scalability, serving as a guiding light along the path to a seamlessly integrated Flutter ecosystem.

    As Flutter continues to mesmerize developers with its constant evolution, each release unveiling a treasure trove of thrilling new features, the highly anticipated Google I/O 2023 was an extraordinary milestone. Amidst the excitement, a groundbreaking technique was unveiled: JNIgen, offering effortless access to native code like never before.

    Let this blog guide you towards a future where your Flutter projects transcend limitations and manifest into awe-inspiring creations.

    1. What is JNIgen?

    JNIgen, which stands for Java Native Interface generator, is an innovative tool that automates the process of generating Dart bindings for Android APIs accessible through Java or Kotlin code. By utilizing these generated bindings, developers can invoke Android APIs with a syntax that closely resembles native code.

    With JNIgen, developers can seamlessly bridge the gap between Dart and the rich ecosystem of Android APIs. This empowers them to leverage the full spectrum of Android’s functionality, ranging from system-level operations to platform-specific features. By effortlessly integrating with Android APIs through JNIgen-generated bindings, developers can harness the power of native code and build robust applications with ease.

    1.1. Default approach: 

    In the current Flutter framework, we rely on Platform channels to establish a seamless communication channel between Dart code and native code. These channels serve as a bridge for exchanging messages and data.

    Typically, we have a Flutter app acting as the client, while the native code contains the desired methods to be executed. The Flutter app sends a message containing the method name to the native code, which then executes the requested method and sends the response back to the Flutter app.

    However, this approach requires the manual implementation of handlers on both the Dart and native code sides. It entails writing code to handle method calls and manage the exchange of responses. Additionally, developers need to carefully manage method names and channel names on both sides to ensure proper communication.

    1.2. Working principle of JNIgen: 

    Figure 1

    In JNIgen, our native code path is passed to the JNIgen generator, which initiates the generation of an intermediate layer of C code. This C code is followed by the necessary boilerplate in Dart, facilitating access to the C methods. All data binding and C files are automatically generated in the directory specified in the .yaml file, which we will explore shortly.

    Consequently, as a Flutter application, our interaction is solely focused on interfacing with the newly generated Dart code, eliminating the need for direct utilization of native code.

    1.3. Similar tools: 

    During the Google I/O 2023 event, JNIgen was introduced as a tool for native code integration. However, it is important to note that not all external libraries available on www.pub.dev are developed exclusively using channels. Another tool, FFIgen, was introduced earlier at Google I/O 2021 and serves a similar purpose. Both FFIgen and JNIgen function similarly, converting native code into intermediate C code with corresponding Dart dependencies to establish the necessary connections.

    While JNIgen primarily facilitates communication between Android native code and Dart code, FFIgen has become the preferred choice for establishing communication between iOS native code and Dart code. Both tools are specifically designed to convert native code into intermediate code, enabling seamless interoperability within their respective platforms.

    2. Configuration

    Prior to proceeding with the code implementation, it is essential to set up and install the necessary tools.

    2.1. System setup: 

    2.1.1 Install Maven

    Windows

    • Download the Maven archive for Windows from the link here [download Binary zip archive]
    • After extracting the zip file, you will get a folder named “apache-maven-x.x.x”
    • Create a new folder named “ApacheMaven” in “C:\Program Files” and paste the above folder into it. [Your current path will be “C:\Program Files\ApacheMaven\apache-maven-x.x.x”]
    • Add the following entries under “Environment Variables” → “User Variables”
      M2 ⇒ “C:\Program Files\ApacheMaven\apache-maven-x.x.x\bin”
      M2_HOME ⇒ “C:\Program Files\ApacheMaven\apache-maven-x.x.x”
    • Add a new entry “%M2_HOME%\bin” in the “path” variable

    Mac

    • Download Maven archive for mac from the link here [download Binary tar.gz archive]
    • Run the following command where you have downloaded the *.tar.gz file
    tar -xvf apache-maven-3.8.7.bin.tar.gz

    • Add the following entry in .zshrc or .bash_profile to set the Maven path: export PATH="$PATH:/Users/username/Downloads/apache-maven-x.x.x/bin"

    Or

    • You can use brew to install Maven:
    brew install maven

    2.1.2 Install Clang-Format

    Windows

    • Download the latest version of LLVM for Windows from the link here

    Mac

    • Run the following brew command:
    brew install clang-format

    • Alternatively, install the full LLVM toolchain, which bundles the clang tooling:
    brew install llvm

    • Brew will give you instructions like the following for further setup:
    ==> llvm
    To use the bundled libc++ please add the following LDFLAGS:
    LDFLAGS="-L/opt/homebrew/opt/llvm/lib/c++ -Wl,-rpath,/opt/homebrew/opt/llvm/lib/c++"
    
    llvm is keg-only, which means it was not symlinked into /opt/homebrew,
    because macOS already provides this software and installing another version in
    parallel can cause all kinds of trouble.
    
    If you need to have llvm first in your PATH, run:
    echo 'export PATH="/opt/homebrew/opt/llvm/bin:$PATH"' >> ~/.zshrc
    
    For compilers to find llvm you may need to set:
    export LDFLAGS="-L/opt/homebrew/opt/llvm/lib"
    export CPPFLAGS="-I/opt/homebrew/opt/llvm/include"

    2.2. Flutter setup: 

    2.2.1 Get Dependencies

    Run the following commands with Flutter:

    flutter pub add jni

    flutter pub add jnigen

    2.2.2 Setup configuration file

    Figure 01 provides a visual representation of the .yaml file, which holds crucial configurations utilized by JNIgen. These configurations serve the purpose of identifying paths for native classes, as well as specifying the locations where JNIgen generates the resulting C and Dart files. Furthermore, the .yaml file allows for specifying Maven configurations, enabling the selection of specific third-party libraries that need to be downloaded to facilitate code generation.

    By leveraging the power of the .yaml file, developers gain control over the path identification process and ensure that the generated code is placed in the desired locations. Additionally, the ability to define Maven configurations grants flexibility in managing dependencies, allowing the seamless integration of required third-party libraries into the generated code. This comprehensive approach enables precise control and customization over the code generation process, enhancing the overall efficiency and effectiveness of the development workflow.

    Let’s explore the properties we have used within the .yaml file (please refer to the example in section 3.2.2, Code Implementation, for a better understanding):

    • android_sdk_config: 

    When its add_gradle_deps property is set to “true,” a Gradle stub is executed during the invocation of JNIgen, and the Android compile classpath is included in JNIgen’s classpath. However, to ensure that all dependencies are cached appropriately, it is necessary to have previously performed a release build.

    • output 

    As the name implies, the “output” section defines the configuration related to the generation of intermediate code. This section plays a crucial role in determining how the intermediate code will be generated and organized.

    •  c >> library_name &&  c >> path:

    Here we set the details for the C-based binding code.

    •  dart >> path &&  dart >> structure:

    Here we define the configuration for the Dart-based binding code.

    •  source_path:

    These are specific directories that are scanned during the process of locating the relevant source files.

    •  classes:

    By providing a comprehensive list of classes or packages, developers can effectively control the scope of the code generation process. This ensures that binding code is generated only for the desired components, minimizing unnecessary code generation.

    By utilizing these properties within the .yaml file, developers can effectively control various aspects of the code generation process, including path identification, code organization, and dependency management. To get more in-depth information, please check out the official documentation here.

    2.3. Generate bindings files:

    Once this setup is complete, the final step for JNIgen is to obtain the jar file that will be scanned to generate the required bindings. To initiate the process of generating the Android APK, you can execute the following command:

    flutter build apk

    Run the following command in your terminal to generate code:

    dart run jnigen --config jnigen.yaml

    2.4. Android setup: 

    Add the path of the CMakeLists.txt file in the buildTypes section of your android >> app >> build.gradle file:

    buildTypes {
            externalNativeBuild {
                cmake {
                    path <address of CMakeLists.txt>
                }
            }
        }

    With this configuration, we specify the path of the CMake file generated by JNIgen. This path declaration is crucial for identifying the location of the generated CMake file within the project structure.

    With the completion of the aforementioned steps, you are now ready to run your application and leverage all the native functions that have been integrated.

    3. Sample Project

    To gain hands-on experience and better understand JNIgen, let’s create a small project together. Follow the steps below to get started. 

    Let’s start with:

    3.1. Packages & directories:

    3.1.1 Create a project using the following command:

    flutter create jnigen_integration_project

    3.1.2 Add these under dependencies of pubspec.yaml (and run command flutter pub get):

    jni: ^0.5.0
    jnigen: ^0.5.0

    3.1.3. Go to the android >> app >> src >> main directory.

    3.1.4. Create directories inside main as shown below:

    Figure 02 

    3.2. Code Implementation:

    3.2.1 We will start with Android code. Create 2 files HardwareUtils.java & HardwareUtilsKotlin.kt inside the utils directory.

     HardwareUtilsKotlin.kt

    package com.hardware.utils
    
    import android.os.Build
    
    class HardwareUtilsKotlin {
    
       fun getHardwareDetails(): Map<String, String>? {
           val hardwareDetails: MutableMap<String, String> = HashMap()
           hardwareDetails["Language"] = "Kotlin"
           hardwareDetails["Manufacture"] = Build.MANUFACTURER
           hardwareDetails["Model No."] = Build.MODEL
           hardwareDetails["Type"] = Build.TYPE
           hardwareDetails["User"] = Build.USER
           hardwareDetails["SDK"] = Build.VERSION.SDK
           hardwareDetails["Board"] = Build.BOARD
           hardwareDetails["Version Code"] = Build.VERSION.RELEASE
           return hardwareDetails
       }
    }

     HardwareUtils.java 

    package com.hardware.utils;
    
    
    import android.os.Build;
    
    
    import java.util.HashMap;
    import java.util.Map;
    
    
    public class HardwareUtils {
    
    
       public Map<String, String> getHardwareDetails() {
           Map<String, String> hardwareDetails = new HashMap<String, String>();
           hardwareDetails.put("Language", "JAVA");
           hardwareDetails.put("Manufacture", Build.MANUFACTURER);
           hardwareDetails.put("Model No.", Build.MODEL);
           hardwareDetails.put("Type", Build.TYPE);
           hardwareDetails.put("User", Build.USER);
           hardwareDetails.put("SDK", Build.VERSION.SDK);
           hardwareDetails.put("Board", Build.BOARD);
           hardwareDetails.put("Version Code", Build.VERSION.RELEASE);
           return hardwareDetails;
       }
    
    
       public Map<String, String> getHardwareDetailsKotlin() {
           return new HardwareUtilsKotlin().getHardwareDetails();
       }
    
    
    }

    3.2.2 To provide the necessary configurations to JNIgen for code generation, we will create a .yaml file named jnigen.yaml in the root of the project.

       jnigen.yaml 

    android_sdk_config:
     add_gradle_deps: true
    
    
    output:
     c:
       library_name: hardware_utils
       path: src/
     dart:
       path: lib/hardware_utils.dart
       structure: single_file
    
    
    source_path:
     - 'android/app/src/main/java'
    
    
    classes:
     - 'com.hardware.utils'

    3.2.3 Let’s generate C & Dart code.

    Execute the following command to create APK:

    flutter build apk

    After the successful execution of the above command, execute the following command:

    dart run jnigen --config jnigen.yaml

    3.2.4 Add the path of the CMakeLists.txt file in your android >> app >> build.gradle file’s buildTypes section as shown below:

    buildTypes {
            externalNativeBuild {
                cmake {
                    path "../../src/CMakeLists.txt"
                }
            }
      }

    3.2.5. The final step is to call the methods from the Dart code generated by JNIgen.

    To do this, replace the MyHomePage class in main.dart with the code below.

    class MyHomePage extends StatefulWidget {
     const MyHomePage({super.key, required this.title});
    
     final String title;
    
     @override
     State<MyHomePage> createState() => _MyHomePageState();
    }
    
    class _MyHomePageState extends State<MyHomePage> {
     String _hardwareDetails = '';
     String _hardwareDetailsKotlin = '';
     JObject activity = JObject.fromRef(Jni.getCurrentActivity());
    
     @override
     void initState() {
       JMap<JString, JString> deviceHardwareDetails =
           HardwareUtils().getHardwareDetails();
       _hardwareDetails = 'This device details from Java class:\n';
       deviceHardwareDetails.forEach((key, value) {
         _hardwareDetails =
             '$_hardwareDetails\n${key.toDartString()} is ${value.toDartString()}';
       });
    
       JMap<JString, JString> deviceHardwareDetailsKotlin =
           HardwareUtils().getHardwareDetailsKotlin();
       _hardwareDetailsKotlin = 'This device details from Kotlin class:\n';
       deviceHardwareDetailsKotlin.forEach((key, value) {
         _hardwareDetailsKotlin =
             '$_hardwareDetailsKotlin\n${key.toDartString()} is ${value.toDartString()}';
       });
    
       setState(() {
         _hardwareDetails;
         _hardwareDetailsKotlin;
       });
       super.initState();
     }
    
     @override
     Widget build(BuildContext context) {
       return Scaffold(
         appBar: AppBar(
           title: Text(widget.title),
         ),
         body: Center(
           child: Column(
             mainAxisAlignment: MainAxisAlignment.center,
             children: <Widget>[
               Text(
                 _hardwareDetails,
                 textAlign: TextAlign.center,
               ),
               SizedBox(height: 20,),
               Text(
                 _hardwareDetailsKotlin,
                 textAlign: TextAlign.center,
               ),
             ],
           ),
         ),
       );
     }
    }

    After all of this, when we launch our app, we will see information about our Android device.

    4. Result

    For your convenience, the complete code for the project can be found here. Feel free to refer to this code repository for a comprehensive overview of the implementation details and to access the entirety of the source code.

    5. Conclusion

    In conclusion, we explored the limitations of the traditional approach to native API access in Flutter for mid to large-scale projects. Through our insightful exploration of JNIgen’s working principles, we uncovered its remarkable potential for simplifying the native integration process.

    By gaining a deep understanding of JNIgen’s inner workings, we successfully developed a sample project and provided detailed guidance on the essential setup requirements. Armed with this knowledge, developers can embrace JNIgen’s capabilities to streamline their native integration process effectively.

    We can say that JNIgen is a valuable tool for Flutter developers seeking to combine the power of Flutter’s cross-platform capabilities with the flexibility and performance benefits offered by native code. It empowers developers to build high-quality apps that seamlessly integrate platform-specific features and existing native code libraries, ultimately enhancing the overall user experience. 

    Hopefully, this blog post has inspired you to explore the immense potential of JNIgen in your Flutter applications. By harnessing JNIgen, we can open doors to new possibilities.

    Thank you for taking the time to read through this blog!

    6. References

    1. https://docs.flutter.dev/
    2. https://pub.dev/packages/jnigen
    3. https://pub.dev/packages/jni
    4. https://github.com/dart-lang/jnigen
    5. https://github.com/dart-lang/jnigen#readme
    6. https://github.com/dart-lang/jnigen/wiki/Architecture-&-Design-Notes
    7. https://medium.com/simform-engineering/jnigen-an-easy-way-to-access-platform-apis-cb1fd3101e33
    8. https://medium.com/@marcoedomingos/the-ultimate-showdown-methodchannel-vs-d83135f2392d
  • Serverpod: The Ultimate Backend for Flutter

    Join us on this exhilarating journey, where we bridge the gap between frontend and backend development with the seamless integration of Serverpod and Flutter.

    Gone are the days of relying on different programming languages for frontend and backend development. With Flutter’s versatile framework, you can effortlessly create stunning user interfaces for a myriad of platforms. However, the missing piece has always been the ability to build the backend in Dart as well—until now.

    Introducing Serverpod, the missing link that completes the Flutter ecosystem. Now, with Serverpod, you can develop your entire application, from frontend to backend, all within the familiar and elegant Dart language. This synergy enables a seamless exchange of data and functions between the client and the server, reducing development complexities and boosting productivity.

    1. What is Serverpod?

    As a developer or tech enthusiast, we recognize the critical role backend services play in the success of any application. Whether you’re building a web, mobile, or desktop project, a robust backend infrastructure is the backbone that ensures seamless functionality and scalability.

    That’s where “Serverpod” comes into the picture—an innovative backend solution developed entirely in Dart, just like your Flutter projects. With Serverpod at your disposal, you can harness the full power of Dart on both the frontend and backend, creating a harmonious development environment that streamlines your workflow.

    The biggest advantage of using Serverpod is that it automates protocol and client-side code generation by analyzing your server, making remote endpoint calls as simple as local method calls.

    1.1. Current market status

    The top 10 programming languages for backend development in 2023 are as follows: 

    [Note: The results presented here are not absolute and are based on a combination of surveys conducted in 2023, including ‘Stack Overflow Developer Survey – 2023,’ ‘State of the Developer Ecosystem Survey,’ ‘New Stack Developer Survey,’ and more.]

    • Node.js – ~32%
    • Python (Django, Flask) – ~28%
    • Java (Spring Boot, Java EE) – ~18%
    • Ruby (Ruby on Rails) – ~7%
    • PHP (Laravel, Symfony) – ~6%
    • Go (Golang) – ~3%
    • .NET (C#) – ~2%
    • Rust – ~1%
    • Kotlin (Spring Boot with Kotlin) – ~1%
    • Express.js (for Node.js) – ~1%
    Figure 01

    Figure 01 provides a comprehensive overview of the current usage of backend development technologies, showcasing a plethora of options with diverse features and capabilities. However, the landscape takes a different turn when it comes to frontend development. While the backend technologies offer a wealth of choices, most of these languages lack native multiplatform support for frontend applications.

    As a result, developers find themselves in a situation where they must choose between two sets of languages or technologies for backend and frontend business logic development.

    1.2. New solution

    As the demand for multiplatform applications continues to grow, developers are actively exploring new frameworks and languages that bridge the gap between backend and frontend development. Recently, a groundbreaking solution has emerged in the form of Serverpod. With Serverpod, developers can now accomplish server development in Dart, filling the crucial gap that was previously missing in the Flutter ecosystem.

    Flutter has already demonstrated its remarkable support for a wide range of platforms. The absence of server development capabilities was a notable limitation that has now been triumphantly addressed with the introduction of Serverpod. This remarkable achievement enables developers to harness the power of Dart to build both frontend and backend components, creating unified applications with a shared codebase.

    2. Configurations 

    Prior to proceeding with the code implementation, it is essential to set up and install the necessary tools.

    [Note: Given Serverpod’s initial stage, encountering errors without readily available online solutions is plausible. In such instances, seeking assistance from the Flutter community forum is highly recommended. Drawing from my experience, I suggest running the application on Flutter web first, particularly for Serverpod version 1.1.1, to ensure a smoother development process and gain insights into potential challenges.]

    2.1. Initial setup

    2.1.1 Install Docker

    Docker serves a crucial role in Serverpod, facilitating:

    • Containerization: Applications are packaged and shipped as containers, enabling seamless deployment and execution across diverse infrastructures.
    • Isolation: Applications are isolated from one another, enhancing both security and performance aspects, safeguarding against potential vulnerabilities, and optimizing system efficiency.

    Download & Install Docker from here.

    2.1.2 Install Serverpod CLI 

    • Run the following command:
    dart pub global activate serverpod_cli

    • Now test the installation by running:
    serverpod

    With proper configuration, the Serverpod command displays help information.

    2.2. Project creation

    Before running any Serverpod commands, the Docker application must be launched. An active Docker instance running in the background is required for Serverpod commands to execute successfully.

    • Create a new project with the command:
    serverpod create <your_project_name>

    Upon execution, a new directory will be generated with the specified project name, comprising three Dart packages:

    <your_project_name>_server: This package is designated for server-side code, encompassing essential components such as business logic, API endpoints, DB connections, and more.
    <your_project_name>_client: Within this package, the code responsible for server communication is auto-generated. Manual editing of files in this package is typically avoided.
    <your_project_name>_flutter: Representing the Flutter app, it comes pre-configured to seamlessly connect with your local server, ensuring seamless communication between frontend and backend elements.

    2.3. Project execution

    Step 1: Navigate to the server package with the following command:

    cd <your_project_name>/<your_project_name>_server

    Step 2: (Optional) Open the project in the VS Code IDE using the command:

    (Note: You can use any IDE you prefer, but for our purposes, we’ll use VS Code, which also simplifies DB connection later.)

    code .

    Step 3: Once the project is open in the IDE, run the table setup script with this command:

    .\setup-tables.cmd

    Step 4: Before starting the server, initiate new Docker containers with the following command:

    docker-compose up --build --detach

    Step 5: The command above will start PostgreSQL and Redis containers, and you should receive the output:

    ~> docker-compose up --build --detach
    	[+] Running 2/2
     	✔ Container <your_project_name>_server-redis-1     Started                                                                                                
     	✔ Container <your_project_name>_server-postgres-1  Started

    (Note: If the output doesn’t match, refer to this Stack Overflow link for missing commands in the official documentation.)

    Step 6: Proceed to start the server with this command:

    dart bin/main.dart

    Step 7: Upon successful execution, you will receive the following output, where the “Server Default listening on port” value is crucial. Please take note of this value.

    ~> dart bin/main.dart
     	SERVERPOD version: 1.1.1, dart: 3.0.5 (stable) (Mon Jun 12 18:31:49 2023 +0000) on "windows_x64", time: 2023-07-19 15:24:27.704037Z
     	mode: development, role: monolith, logging: normal, serverId: default
     	Insights listening on port 8081
     	Server default listening on port 8080
     	Webserver listening on port 8082
     	CPU and memory usage metrics are not supported on this platform.

    Step 8: Append the “Server default listening on port” value to “localhost” (i.e., “127.0.0.1”) and load this URL in your browser. Accessing “localhost:8080” will display the default output, indicating that your server is running and ready to process requests.

    Figure 02

    Step 9: Now, as the containers reach the “Started” state, you can establish a connection with the database. We have opted for PostgreSQL as our DB choice, and the rationale behind this selection lies in the “docker-compose.yaml” file at the server project’s root. In the “service” section, PostgreSQL is already added, making it an ideal choice as the required setup is readily available. 

    Figure 03

    For the database setup, we need key information, such as Host, Port, Username, and Password. You can find all this vital information in the “config” directory’s “development.yaml” and “passwords.yaml” files. If you encounter difficulties locating these details, please refer to Figure 04.

    Figure 04
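    For orientation, the database-related sections of those two files typically look like the following sketch. All values here are illustrative; use the ones generated in your own project:

```yaml
# config/development.yaml (illustrative values)
database:
  host: localhost
  port: 8090
  name: <your_project_name>
  user: postgres

# config/passwords.yaml (illustrative values)
development:
  database: '<generated database password>'
```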

    Step 10: To establish the connection, you can install an application similar to Postico or, alternatively, I recommend using the MySQL extension, which can be installed in your VS Code with just one click.

    Figure 05

    Step 11: Follow these next steps:

    1. Select the “Database” option.
    2. Click on “Create Connection.”
    3. Choose the “PostgreSQL” option.
    4. Add a name for your Connection.
    5. Fill in the information collected in the last step.
    6. Finally, select the “Connect” option.
    Figure 06
    7. Upon success, you will receive a “Connect Success!” message, and the new connection will be added to the Explorer tab.
    Figure 07

    Step 12: Now, we shift our focus to the Flutter project (Frontend):

    Thus far, we have been working on the server project. Let us open a new VS Code instance for a separate Flutter project while keeping the current VS Code instance active, serving as the server.

    Step 13: Execute the following command to run the Flutter project on Chrome:

    flutter run -d chrome

    With this, the default project will generate the following output:

    Step 14: When you are finished, you can shut down Serverpod with “Ctrl-C.”

    Step 15: Then stop Postgres and Redis.

    docker compose stop

    Figure 08

    3. Sample Project

    So far, we have successfully created and executed the project, identifying three distinct components. The server project caters to server/backend development, while the Flutter project handles application/frontend development. The client project, automatically generated, serves as the vital intermediary, bridging the gap between the frontend and backend.

    However, merely acknowledging the projects’ existence is insufficient. To maximize our proficiency, it is crucial to grasp the code and file structure comprehensively. To achieve this, we will embark on a practical journey, constructing a small project to gain hands-on experience and unlock deeper insights into all three components. This approach empowers us with a well-rounded understanding, further enhancing our capabilities in building remarkable applications.

    3.1. What are we building?

    In this blog, we will construct a sample project with basic Login and SignUp functionality. The SignUp process will collect user information such as Email, Password, Username, and age. Users can subsequently log in using their email and password, leading to the display of user details on the dashboard screen. With the initial system setup complete and the newly created project up and running, it’s time to commence coding. 

    3.1.1 Create custom models for API endpoints

    Step 1: Create a new file in the “lib >> src >> protocol” directory named “users.yaml”:

    class: Users
    table: users
    fields:
      username: String
      email: String
      password: String
      age: int

    Step 2: Save the file and run the following command to generate essential data classes and table creation queries:

    serverpod generate

    (Note: Add “--watch” after the command for continuous code generation.) 

    Successful execution of the above command will generate a new file named “users.dart” in the “lib >> src >> generated” folder. Additionally, the “tables.pgsql” file now contains SQL queries for creating the “users” table. The same command updates the auto-generated code in the client project. 

    3.1.2 Create Tables in DB for the generated model 

    Step 1: Copy the queries written in the “generated >> tables.pgsql” file.

    In the MySQL Extension’s Database section, select the created database >> [project_name] >> public >> Tables >> + (Create New Table).

    Figure 09

    Step 2: Paste the queries into the newly created .sql file and click “Execute” above both queries.

    Figure 10

    Step 3: After execution, you will obtain an empty table with the “id” as the Primary key.

    Figure 11

    If you find multiple tables already present in your database, as shown in the next figure, you can ignore them. These tables are created by the system with the queries present in the “generated >> tables-serverpod.pgsql” file.

    Figure 12

    3.1.3 Create an API endpoint

    Step 1: Generate a new file in the “lib >> src >> endpoints” directory named “session_endpoints.dart”:

    // Imports from the server package; the generated protocol provides the Users class.
    import 'package:serverpod/serverpod.dart';
    
    import '../generated/protocol.dart';
    
    class SessionEndpoint extends Endpoint {
      Future<Users?> login(Session session, String email, String password) async {
        List<Users> userList = await Users.find(session,
            where: (p0) =>
                (p0.email.equals(email)) & (p0.password.equals(password)));
        return userList.isEmpty ? null : userList[0];
      }
    
    
      Future<bool> signUp(Session session, Users newUser) async {
        try {
          await Users.insert(session, newUser);
          return true;
        } catch (e) {
          print(e.toString());
          return false;
        }
      }
    }

    If “serverpod generate --watch” is already running, you can skip Step 2.

    Step 2: Run the command:

    serverpod generate

    Step 3: Start the server.
    [For help, check out Step 1 to Step 6 in the Project Execution section.]

    3.1.4 Create three screens

    Login Screen:

    Figure 13

    SignUp Screen:

    Figure 14

    Dashboard Screen:

    Figure 15

    3.1.5 Setup Flutter code

    Step 1: Add the following code to the SignUp button in the SignUp screen to handle user signups.

    try {
            final result = await client.session.signUp(
              Users(
                email: _emailEditingController.text.trim(),
                username: _usernameEditingController.text.trim(),
                password: _passwordEditingController.text.trim(),
                age: int.parse(_ageEditingController.text.trim()),
              ),
            );
            if (result) {
              Navigator.pop(context);
            } else {
              _errorText = 'Something went wrong, Try again.';
            }
          } catch (e) {
            debugPrint(e.toString());
            _errorText = e.toString();
          }

    Step 2: Add the following code to the Login button in the Login screen to handle user logins.

    try {
            final result = await client.session.login(
              _emailEditingController.text.trim(),
              _passwordEditingController.text.trim(),
            );
            if (result != null) {
              _emailEditingController.text = '';
              _passwordEditingController.text = '';
              Navigator.push(
                context,
                MaterialPageRoute(
                  builder: (context) => DashboardPage(user: result),
                ),
              );
            } else {
              _errorText = 'Something went wrong, Try again.';
            }
          } catch (e) {
            debugPrint(e.toString());
            _errorText = e.toString();
          }

    Step 3: Implement logic to display user data on the dashboard screen.

    With these steps completed, our Flutter app becomes a fully functional project, showcasing the power of this new technology. Armed with Dart knowledge, every Flutter developer can transform into a proficient full-stack developer.

    4. Result

    Figure 16

    To facilitate your exploration, the entire project code is conveniently available in this code repository. Feel free to refer to this repository for an in-depth understanding of the implementation details and access to the complete source code, enabling you to delve deeper into the project’s intricacies and leverage its functionalities effectively.

    5. Conclusion

    In conclusion, we have provided a comprehensive walkthrough of the step-by-step setup process for running Serverpod seamlessly. We explored creating data models, integrating the database with our server project, defining tables, executing data operations, and establishing accessible API endpoints for Flutter applications.

    Hopefully, this blog post has kindled your curiosity to delve deeper into Serverpod’s immense potential for elevating your Flutter applications. Embracing Serverpod unlocks a world of boundless possibilities, empowering you to achieve remarkable feats in your development endeavors.

    Thank you for investing your time in reading this informative blog!

    6. References

    1. https://docs.flutter.dev/
    2. https://pub.dev/packages/serverpod/
    3. https://serverpod.dev/
    4. https://docs.docker.com/get-docker/
    5. https://medium.com/serverpod/introducing-serverpod-a-complete-backend-for-flutter-written-in-dart-f348de228e19
    6. https://medium.com/serverpod/serverpod-our-vision-for-a-seamless-scalable-backend-for-the-flutter-community-24ba311b306b
    7. https://stackoverflow.com/questions/76180598/serverpod-sql-error-when-starting-a-clean-project
    8. https://www.youtube.com/watch?v=3Q2vKGacfh0
    9. https://www.youtube.com/watch?v=8sCxWBWhm2Y

  • Integrating Augmented Reality in a Flutter App to Enhance User Experience

    In recent years, augmented reality (AR) has emerged as a cutting-edge technology that has revolutionized various industries, including gaming, retail, education, and healthcare. Its ability to blend digital information with the real world has opened up a new realm of possibilities. One exciting application of AR is integrating it into mobile apps to enhance the user experience.

    In this blog post, we will explore how to leverage Flutter, a powerful cross-platform framework, to integrate augmented reality features into mobile apps and elevate the user experience to new heights.

    Understanding Augmented Reality:

    Before we dive into the integration process, let’s briefly understand what augmented reality is. Augmented reality is a technology that overlays computer-generated content onto the real world, enhancing the user’s perception and interaction with their environment. Unlike virtual reality (VR), which creates a fully simulated environment, AR enhances the real world by adding digital elements such as images, videos, and 3D models.

    The applications of augmented reality are vast and span across different industries. In gaming, AR has transformed mobile experiences by overlaying virtual characters and objects onto the real world. It has also found applications in areas such as marketing and advertising, where brands can create interactive campaigns by projecting virtual content onto physical objects or locations. AR has also revolutionized education by offering immersive learning experiences, allowing students to visualize complex concepts and interact with virtual models.

    In the upcoming sections, we will explore the steps to integrate augmented reality features into mobile apps using Flutter.

    What is Flutter?

    Flutter is an open-source UI (user interface) toolkit developed by Google for building natively compiled applications for mobile, web, and desktop platforms from a single codebase. It allows developers to create visually appealing and high-performance applications with a reactive and customizable user interface.

    The core language used in Flutter is Dart, which is also developed by Google. Dart is a statically typed, object-oriented programming language that comes with modern features and syntax. It is designed to be easy to learn and offers features like just-in-time (JIT) compilation during development and ahead-of-time (AOT) compilation for optimized performance in production.

    Flutter provides a rich set of customizable UI widgets that enable developers to build beautiful and responsive user interfaces. These widgets can be composed and combined to create complex layouts and interactions, giving developers full control over the app’s appearance and behavior.

    Why Choose Flutter for AR Integration?

    Flutter, backed by Google, is a versatile framework that enables developers to build beautiful and performant cross-platform applications. Its rich set of UI components and fast development cycle make it an excellent choice for integrating augmented reality features. By using Flutter, developers can write a single codebase that runs seamlessly on both Android and iOS platforms, saving time and effort.

    Flutter’s cross-platform capabilities enable developers to write code once and deploy it on multiple platforms, including iOS, Android, web, and even desktop (Windows, macOS, and Linux).

    The Flutter ecosystem is supported by a vibrant community, offering a wide range of packages and plugins that extend its capabilities. These packages cover various functionalities such as networking, database integration, state management, and more, making it easy to add complex features to your Flutter applications.

    Steps to Integrate AR in a Flutter App:

    Step 1: Set Up Flutter Project:

    Assuming you already have Flutter installed on your system, create a new Flutter project or open an existing one to start integrating AR features. If not, follow https://docs.flutter.dev/get-started/install to set up Flutter.

    Step 2: Add ar_flutter_plugin dependency:

    Update the pubspec.yaml file of your Flutter project and add the following line under the dependencies section:

    dependencies:
      ar_flutter_plugin: ^0.7.3

    This step ensures that your Flutter project has the necessary dependencies to integrate augmented reality using the ar_flutter_plugin package.
    Run `flutter pub get` to fetch the package.
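
    Beyond the Dart dependency, the plugin also needs some native configuration. The exact requirements can vary by plugin version, so treat the values below as assumptions to verify against the package page: on Android, ARCore typically requires minSdkVersion 24 in android/app/build.gradle; on iOS, ARKit requires a camera-usage string in ios/Runner/Info.plist, for example:

    ```xml
    <!-- ios/Runner/Info.plist (illustrative entry; the wording of the string is up to you) -->
    <key>NSCameraUsageDescription</key>
    <string>This app uses the camera to display augmented reality content.</string>
    ```

    Without the camera-usage string, iOS will terminate the app the first time the AR view tries to access the camera.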

    Step 3: Initializing the AR View:

    Create a new Dart file for the AR screen. Import the required packages at the top of the file:

    Define a new class called ARScreen that extends StatefulWidget, along with its corresponding State class. This class represents the AR screen and handles the initialization and rendering of the AR view:

    // Imports per the ar_flutter_plugin 0.7.x package layout.
    import 'package:ar_flutter_plugin/ar_flutter_plugin.dart';
    import 'package:ar_flutter_plugin/datatypes/config_planedetection.dart';
    import 'package:ar_flutter_plugin/datatypes/hittest_result_types.dart';
    import 'package:ar_flutter_plugin/datatypes/node_types.dart';
    import 'package:ar_flutter_plugin/managers/ar_anchor_manager.dart';
    import 'package:ar_flutter_plugin/managers/ar_location_manager.dart';
    import 'package:ar_flutter_plugin/managers/ar_object_manager.dart';
    import 'package:ar_flutter_plugin/managers/ar_session_manager.dart';
    import 'package:ar_flutter_plugin/models/ar_anchor.dart';
    import 'package:ar_flutter_plugin/models/ar_hittest_result.dart';
    import 'package:ar_flutter_plugin/models/ar_node.dart';
    import 'package:flutter/material.dart';
    import 'package:vector_math/vector_math_64.dart';

    class ARScreen extends StatefulWidget {
      const ARScreen({Key? key}) : super(key: key);

      @override
      _ARScreenState createState() => _ARScreenState();
    }

    class _ARScreenState extends State<ARScreen> {
      ARSessionManager? arSessionManager;
      ARObjectManager? arObjectManager;
      ARAnchorManager? arAnchorManager;

      List<ARNode> nodes = [];
      List<ARAnchor> anchors = [];

      @override
      void dispose() {
        arSessionManager!.dispose();
        super.dispose();
      }

      @override
      Widget build(BuildContext context) {
        return Scaffold(
          appBar: AppBar(
            title: const Text('Anchors & Objects on Planes'),
          ),
          body: Stack(children: [
            ARView(
              onARViewCreated: onARViewCreated,
              planeDetectionConfig: PlaneDetectionConfig.horizontalAndVertical,
            ),
            Align(
              alignment: FractionalOffset.bottomCenter,
              child: Row(
                mainAxisAlignment: MainAxisAlignment.spaceEvenly,
                children: [
                  ElevatedButton(
                      onPressed: onRemoveEverything,
                      child: const Text("Remove Everything")),
                ],
              ),
            ),
          ]),
        );
      }

      // The methods from the next steps (onARViewCreated, onPlaneOrPointTapped,
      // onRemoveEverything) also belong inside this class.
    }

    Step 4: Add AR functionality:

    Create a method onARViewCreated for the ARView's onARViewCreated callback. You can add the required AR functionality in this method, such as loading 3D models or handling interactions. In our demo, we will add 3D models in AR on tap:

    void onARViewCreated(
        ARSessionManager arSessionManager,
        ARObjectManager arObjectManager,
        ARAnchorManager arAnchorManager,
        ARLocationManager arLocationManager) {
      this.arSessionManager = arSessionManager;
      this.arObjectManager = arObjectManager;
      this.arAnchorManager = arAnchorManager;

      this.arSessionManager!.onInitialize(
            showFeaturePoints: false,
            showPlanes: true,
            customPlaneTexturePath: "Images/triangle.png",
            showWorldOrigin: true,
          );
      this.arObjectManager!.onInitialize();

      this.arSessionManager!.onPlaneOrPointTap = onPlaneOrPointTapped;
    }

    After this, create a method onPlaneOrPointTapped for handling interactions.

    Future<void> onPlaneOrPointTapped(
        List<ARHitTestResult> hitTestResults) async {
      var singleHitTestResult = hitTestResults.firstWhere(
          (hitTestResult) => hitTestResult.type == ARHitTestResultType.plane);
      var newAnchor =
          ARPlaneAnchor(transformation: singleHitTestResult.worldTransform);
      bool? didAddAnchor = await arAnchorManager!.addAnchor(newAnchor);
      if (didAddAnchor!) {
        anchors.add(newAnchor);
        // Add a node (the 3D model) to the anchor
        var newNode = ARNode(
            type: NodeType.webGLB,
            uri:
                "https://github.com/KhronosGroup/glTF-Sample-Models/raw/master/2.0/Duck/glTF-Binary/Duck.glb",
            scale: Vector3(0.2, 0.2, 0.2),
            position: Vector3(0.0, 0.0, 0.0),
            rotation: Vector4(1.0, 0.0, 0.0, 0.0));
        bool? didAddNodeToAnchor =
            await arObjectManager!.addNode(newNode, planeAnchor: newAnchor);
        if (didAddNodeToAnchor!) {
          nodes.add(newNode);
        } else {
          arSessionManager!.onError("Adding Node to Anchor failed");
        }
      } else {
        arSessionManager!.onError("Adding Anchor failed");
      }
    }

    Finally, create a method for onRemoveEverything to remove all the elements on the screen.

    Future<void> onRemoveEverything() async {
      for (var anchor in anchors) {
        arAnchorManager!.removeAnchor(anchor);
      }
      anchors = [];
      // Removing an anchor also removes the nodes attached to it,
      // so clear our local node list as well.
      nodes = [];
    }

    Step 5: Run the AR screen:

    In your app’s main entry point, set the ARScreen as the home screen:

    void main() {
      runApp(MyApp());
    }
    
    class MyApp extends StatelessWidget {
      @override
      Widget build(BuildContext context) {
        return MaterialApp(
          home: ARScreen(),
        );
      }
    }

    With this in place, the AR functionality is complete. A duck 3D model is loaded wherever the user taps on the screen. The plane is auto-detected, and once that is done, we can add a model to it. We also have a button to remove everything that is on the plane at the given moment.

    Benefits of AR Integration:

    • Immersive User Experience: Augmented reality adds an extra dimension to user interactions, creating immersive and captivating experiences. Users can explore virtual objects within their real environment, leading to increased engagement and satisfaction.
    • Interactive Product Visualization: AR allows users to visualize products in real-world settings before making a purchase. They can view how furniture fits in their living space, try on virtual clothes, or preview architectural designs. This interactive visualization enhances decision-making and improves customer satisfaction.
    • Gamification and Entertainment: Augmented reality opens up opportunities for gamification and entertainment within apps. You can develop AR games, quizzes, or interactive storytelling experiences, providing users with unique and enjoyable content.
    • Marketing and Branding: By incorporating AR into your Flutter app, you can create innovative marketing campaigns and branding experiences. AR-powered product demonstrations, virtual try-ons, or virtual showrooms help generate excitement around your brand and products.

    Conclusion:

    Integrating augmented reality into a Flutter app brings a new level of interactivity and immersion to the user experience. Flutter's compatibility with AR frameworks like ARCore and ARKit empowers developers to create captivating and innovative mobile applications. By following the steps outlined in this blog post, you can unlock the potential of augmented reality and deliver exceptional user experiences that delight and engage your audience. Embrace the possibilities of AR in Flutter and embark on a journey of exciting and immersive app development.

  • What’s New with Material 3 in Flutter: Discussing the Key Updates with an Example

    At Google I/O 2021, Google unveiled Material You, the next evolution of Material Design, along with Android 12. This update introduced Material Design 3 (M3), bringing a host of significant changes and improvements to the Material Design system. For Flutter developers, adopting Material 3 offers a seamless and consistent design experience across multiple platforms. In this article, we will delve into the key changes of Material 3 in Flutter and explore how it enhances the app development process.

    1. Dynamic Color:

    One of the notable features of Material 3 is dynamic color, which enables developers to apply consistent colors throughout their apps. By leveraging the Material Theme Builder web app or the Figma plugin, developers can visualize and create custom color schemes based on a given seed color. The dynamic color system ensures that colors from different tonal palettes are applied consistently across the UI, resulting in a harmonious visual experience.

    2. Typography:

    Material 3 simplifies typography by categorizing it into five key groups: Display, Headline, Title, Body, and Label. This categorization makes using different sizes within each group easier, catering to devices with varying screen sizes. The scaling of typography has also become consistent across the groups, offering a more streamlined and cohesive approach to implementing typography in Flutter apps.

    3. Shapes:

    Material 3 introduces a wider range of shapes, including squared, rounded, and rounded rectangular shapes. Previously circular elements, such as the FloatingActionButton (FAB), have now transitioned to a rounded rectangular shape. Additionally, widgets like Card, Dialog, and BottomSheet feature a more rounded appearance in Material 3. These shape enhancements give developers more flexibility in designing visually appealing and modern-looking user interfaces.

    4. Elevation:

    In Material Design 2, elevated components had shadows that varied based on their elevation values. Material 3 takes this a step further by introducing the surfaceTintColor color property. This property applies a color to the surface of elevated components, with the intensity varying based on the elevation value. By incorporating surfaceTintColor, elevated components remain visually distinguishable even without shadows, resulting in a more polished and consistent UI.

    Let’s go through each of them in detail.

    Dynamic Color

    Dynamic color in Flutter enables you to apply consistent colors throughout your app. It includes key and neutral colors from different tonal palettes, ensuring a harmonious UI experience. You can use tools like Material Theme Builder or Figma plugin to create a custom color scheme to visualize and generate dynamic colors. By providing a seed color in your app’s theme, you can easily create an M3 ColorScheme. For example, adding “colorSchemeSeed: Colors.green” to your app will result in a lighter green color for elements like the FloatingActionButton (FAB), providing a customized look for your app.

    theme: ThemeData(
      // primarySwatch: Colors.blue,
      useMaterial3: true,
      colorSchemeSeed: Colors.green,
    ),

    Note:
    When using colorSchemeSeed in Flutter, note that if you have already defined a primarySwatch in your app's theme, you will encounter an assertion error, because colorSchemeSeed and primarySwatch cannot be used together. To avoid this issue, remove (or comment out) primarySwatch when using colorSchemeSeed.

    Using Material 3

    Typography

    In Material 3, the naming of typography has been made simpler by dividing it into five main groups: 

    1. Display 
    2. Headline 
    3. Title 
    4. Body 
    5. Label

    Each group has a more descriptive role, making it easier to use different font sizes within a specific typography group. For example, instead of using names like bodyText1, bodyText2, and caption, Material 3 introduces names like bodyLarge, bodyMedium, and bodySmall. This improved naming system is particularly helpful when designing typography for devices with varying screen sizes.
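
    As a small sketch of the new naming (assuming Flutter 3.x, where the new TextTheme getters such as bodyLarge are available; TypographyDemo is a hypothetical widget name):

    ```dart
    import 'package:flutter/material.dart';

    /// Shows one style from each of the five Material 3 typography groups.
    /// (Old names map roughly as: bodyText1 → bodyLarge, bodyText2 → bodyMedium,
    /// caption → bodySmall.)
    class TypographyDemo extends StatelessWidget {
      const TypographyDemo({Key? key}) : super(key: key);

      @override
      Widget build(BuildContext context) {
        final textTheme = Theme.of(context).textTheme;
        return Column(
          children: [
            Text('Display', style: textTheme.displayLarge),
            Text('Headline', style: textTheme.headlineMedium),
            Text('Title', style: textTheme.titleMedium),
            Text('Body', style: textTheme.bodyMedium),
            Text('Label', style: textTheme.labelSmall),
          ],
        );
      }
    }
    ```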

    Shapes

    Material 3 introduces an expanded selection of shapes, including square, rounded, and rounded rectangular shapes. The Floating Action Button (FAB), which used to be circular, now has a rounded rectangular shape. Material buttons have transitioned from rounded rectangular to pill-shaped. Additionally, widgets such as Card, Dialog, and BottomSheet have adopted a more rounded appearance in Material 3.

    Elevation

    In Material 2, elevated components were accompanied by shadows, with the size of the shadow increasing as the elevation increased. Material 3 brings a new feature called surfaceTintColor. When applied to elevated components, the surface of these components takes on the specified color, with the intensity varying based on the elevation value. This property is now available for all elevated widgets in Flutter, alongside elevation and shadow properties.
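
    A minimal sketch of the idea (ElevationDemo is a hypothetical widget name; assumes useMaterial3: true in the app theme): two cards at different elevations receive different amounts of surface tint, so they remain distinguishable even without shadows.

    ```dart
    import 'package:flutter/material.dart';

    /// Two cards with the same tint color; the higher elevation blends
    /// the surfaceTintColor more strongly into the card's surface.
    class ElevationDemo extends StatelessWidget {
      const ElevationDemo({Key? key}) : super(key: key);

      @override
      Widget build(BuildContext context) {
        return Column(
          children: [
            Card(
              elevation: 1,
              surfaceTintColor: Colors.green,
              child: const SizedBox(
                  height: 80, child: Center(child: Text('Elevation 1'))),
            ),
            Card(
              elevation: 8,
              surfaceTintColor: Colors.green,
              child: const SizedBox(
                  height: 80, child: Center(child: Text('Elevation 8'))),
            ),
          ],
        );
      }
    }
    ```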

    Here’s an example Flutter app that demonstrates the key changes in Material 3 regarding dynamic color, typography, shapes, and elevation. This example app includes a simple screen with a colored container and text, showcasing the usage of these new features:

    //main.dart
    import 'package:flutter/material.dart';
    void main() {
      runApp(MyApp());
    }
    class MyApp extends StatelessWidget {
      @override
      Widget build(BuildContext context) {
        return MaterialApp(
          debugShowCheckedModeBanner: false,
          theme: ThemeData(
            useMaterial3: true,
            colorSchemeSeed: Colors.green,
          ),
          home: const MyHomePage(),
        );
      }
    }
    class MyHomePage extends StatelessWidget {
      const MyHomePage({Key? key}) : super(key: key);
      @override
      Widget build(BuildContext context) {
        return Scaffold(
          appBar: AppBar(
            title: Text(
              'Material 3 Key Changes',
              style: Theme.of(context).textTheme.headlineSmall,
            ),
            elevation: 8,
            shadowColor: Theme.of(context).shadowColor,
          ),
          body: Container(
            width: double.infinity,
            height: 200,
            color: Theme.of(context).colorScheme.secondary,
            padding: const EdgeInsets.all(16.0),
            child: Center(
              child: Text(
                'Hello, Material 3!',
                style: Theme.of(context).textTheme.bodyLarge?.copyWith(
                      color: Colors.white,
                    ),
              ),
            ),
          ),
          floatingActionButton: FloatingActionButton(
            onPressed: () {},
            child: const Icon(Icons.add),
          ),
        );
      }
    }

    Conclusion:

    Material 3 represents a significant update to the Material Design system in Flutter, offering developers a more streamlined and consistent approach to app design. The dynamic color feature allows for consistent colors throughout the UI, while the simplified typography and expanded shape options provide greater flexibility in creating visually engaging interfaces. Moreover, the enhancements in elevation ensure a cohesive and polished look for elevated components.

    As Flutter continues to evolve and adapt to Material 3, developers can embrace these key changes to create beautiful, personalized, and accessible designs across different platforms. The Flutter team has been diligently working to provide full support for Material 3, enabling developers to migrate their existing Material 2 apps seamlessly. By staying up to date with the progress of Material 3 implementation in Flutter, developers can leverage its features to enhance their app development process and deliver exceptional user experiences.

    Remember, Material 3 is an exciting opportunity for Flutter developers to create consistent and unified UI experiences, and exploring its key changes opens up new possibilities for app design.

  • How to build High-Performance Flutter Apps using Streams

    Performance is a significant factor for any mobile app, and multiple factors like architecture, logic, and memory management can cause low performance. When we develop an app in Flutter, the initial performance results are very good, but as development progresses, the negative effects of a bad codebase start showing up. This blog is aimed at using an architecture that improves Flutter app performance. We will briefly cover the following points:

    1. What is High-Performance Architecture?

    1.1. Framework

    1.2. Motivation

    1.3. Implementation

    2. Sample Project

    3. Additional benefits

    4. Conclusion

    1. What is High-Performance Architecture?

    This architecture uses streams instead of a variable-based state management approach. Streams are the preferred approach for scenarios in which an app needs data in real time. Even with these benefits, why are streams not the first choice for developers? One reason is that streams are considered difficult and complicated, but that reputation is somewhat overstated.
    Dart was designed with a reactive style of programming in mind, built around observable streams, as noted by Flutter's Director of Engineering, Eric Seidel, in this podcast. [Note: The podcast's audio has been removed, but the part relevant to this architecture can be heard in Zaiste's YouTube video.]
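
    As a minimal, plugin-free sketch of the idea (plain dart:async only), one stream can drive several listeners at once, which is exactly how one manager's data can update multiple screens:

    ```dart
    import 'dart:async';

    void main() {
      // A broadcast stream lets multiple parts of the app observe the same data.
      final controller = StreamController<int>.broadcast();

      // Two listeners stand in for two screens depending on the same manager.
      controller.stream.listen((value) => print('Screen A sees: $value'));
      controller.stream.listen((value) => print('Screen B sees: $value'));

      // Pushing new state notifies every listener in real time.
      controller.add(1);
      controller.add(2);

      controller.close();
    }
    ```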

    1.1. Framework: 

      Figure 01

    As shown in figure 01, we have 3 main components:

    • Supervisor: The Supervisor wraps the complete application and is responsible for creating the singletons of all managers, as well as providing those singletons to the screens that need them.
    • Managers: Each Manager owns its initialized streams, which any screen can access through the manager's singleton. These streams hold data that we can use anywhere in the application, and because they are streams, any update to this data is reflected everywhere at the same time.
    • Screens: Screens sit on the receiving end. Each screen uses local streams for its own operations and, when a global action is required, accesses the manager streams through their singletons.

    1.2. Motivation:

    Zaiste proposed this idea in 2019 and created a plugin for such an architecture. He named it "Sprinkle Architecture", and his plugin, called sprinkle, simplified development to a certain extent. As of today, however, the plugin does not support the null safety features introduced in Dart 2.12.0. You can read more about his implementation here and try his sample with the following command:

    flutter run --no-sound-null-safety

    1.3. Implementation:

    We will use the get and rxdart plugins in combination to create our high-performance architecture.

    rxdart handles stream creation and manipulation, whereas get helps us with dependency injection, route management, and state management.

    2. Sample Project:

    We will create a sample project to understand how to implement this architecture.

    2.1. Create a project using following command:

    flutter create sprinkle_architecture

    2.2. Add these under dependencies of pubspec.yaml (and run command flutter pub get):

    get: ^4.6.5
    rxdart: ^0.27.4

    2.3. Create 3 directories, constants, managers, and views, inside the lib directory:

    2.4. First, we will start with a manager that holds the stream and increments the counter. Create a Dart file named counter_manager.dart under the managers directory:

    import 'package:get/get.dart';
    
    class CounterManager extends GetLifeCycle {
        final RxInt count = RxInt(0);
        int get getCounter => count.value;
        void increment() => count.value = count.value + 1;
    } 

    2.5. With this, we have a working manager. Next, we'll create a Supervisor that creates a singleton of every available manager. In our case, we'll create a singleton of only one manager. Create a supervisor.dart file in the lib directory:

    import 'package:get/get.dart';
    import 'package:sprinkle_architecture/managers/counter_manager.dart';
    
    abstract class Supervisor {
     static Future<void> init() async {
       _initManagers();
     }
    
     static void _initManagers() {
       Get.lazyPut<CounterManager>(() => CounterManager());
     }
    }

    2.6. This application only has one screen, but it is good practice to keep routing constants together, so let's add the route details. Create a Dart file route_paths.dart under the constants directory:

    abstract class RoutePaths {
      static const String counterPage = '/';
    }

    2.7. And route_pages.dart under constants directory:

    import 'package:get/get.dart';
    import 'package:sprinkle_architecture/constants/route_paths.dart';
    import 'package:sprinkle_architecture/managers/counter_manager.dart';
    import 'package:sprinkle_architecture/views/counter_page.dart';
    
    abstract class RoutePages {
     static final List<GetPage<dynamic>> pages = <GetPage<dynamic>>[
       GetPage<void>(
         name: RoutePaths.counterPage,
         page: () => const CounterPage(title: 'Flutter Demo Home Page'),
         binding: CounterPageBindings(),
       ),
     ];
    }
    
    class CounterPageBindings extends Bindings {
     @override
     void dependencies() => Get.lazyPut<CounterManager>(() => CounterManager());
    }

    2.8. We now have routing constants we can use, but no CounterPage class yet. Before creating it, let's update our main file:

    import 'package:flutter/material.dart';
    import 'package:get/get.dart';
    import 'package:sprinkle_architecture/constants/route_pages.dart';
    import 'package:sprinkle_architecture/constants/route_paths.dart';
    import 'package:sprinkle_architecture/supervisor.dart';
    
    void main() {
     WidgetsFlutterBinding.ensureInitialized();
     Supervisor.init();
     runApp(
       GetMaterialApp(
         initialRoute: RoutePaths.counterPage,
         getPages: RoutePages.pages,
       ),
     );
    }

    2.9. Finally, add the file counter_page_controller.dart:

    import 'package:get/get.dart';
    import 'package:sprinkle_architecture/managers/counter_manager.dart';
    
    class CounterPageController extends GetxController {
     final CounterManager manager = Get.find();
    }

    2.10. As well as our landing page, counter_page.dart:

    import 'package:flutter/material.dart';
    import 'package:get/get.dart';
    import 'package:sprinkle_architecture/views/counter_page_controller.dart';
    
    class CounterPage extends GetWidget<CounterPageController> {
     const CounterPage({Key? key, required this.title}) : super(key: key);
     final String title;
    
     CounterPageController get c => Get.put(CounterPageController());
    
     @override
     Widget build(BuildContext context) {
       return Obx(() {
         return Scaffold(
           appBar: AppBar(title: Text(title)),
           body: Center(
             child: Column(
               mainAxisAlignment: MainAxisAlignment.center,
               children: <Widget>[
                 const Text('You have pushed the button this many times:'),
                 Text('${c.manager.getCounter}',
                     style: Theme.of(context).textTheme.headline4),
               ],
             ),
           ),
           floatingActionButton: FloatingActionButton(
             onPressed: c.manager.increment,
             tooltip: 'Increment',
             child: const Icon(Icons.add),
           ),
         );
       });
     }
    }

    2.11. The get plugin allows us to add one controller per screen by using the GetxController class. In this controller, we can perform operations whose scope is limited to our screen. Here, CounterPageController provides CounterPage with the singleton of CounterManager.

    If everything is done as per the above commands, we will end up with the following tree structure:

    2.12. Now we can test our project by running the following command:

    flutter run

    3. Additional Benefits:

    3.1. Self Aware UI:

    As all managers in our application use streams to share data, whenever one screen changes a manager's data, any other screens that depend on that data also update themselves in real time. This happens because each dependent widget subscribes to the stream via its listen() method.
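
    To sketch this (assuming get ^4.x as added in the sample project, where Rx types expose a listen() method):

    ```dart
    import 'package:get/get.dart';

    void main() {
      // A manager's observable value, as in CounterManager above.
      final RxInt count = RxInt(0);

      // Two subscriptions stand in for two screens observing the same manager.
      count.listen((value) => print('Screen A sees: $value'));
      count.listen((value) => print('Screen B sees: $value'));

      count.value = 1; // both subscribers are notified
    }
    ```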

    3.2. Modularization:

    We have separate managers for handling REST APIs, preferences, app state info, etc., so modularization happens automatically. Plus, UI logic stays separate from business logic, as we are using the GetxController provided by the get plugin.

    3.3. Small rebuild footprint:

    By default, Flutter may rebuild large parts of the widget tree to update the UI, but with the get and rxdart plugins, only the dependent widgets rebuild.

    4. Conclusion

    We can achieve good performance of a Flutter app with an appropriate architecture as discussed in this blog. 

  • How to setup iOS app with Apple developer account and TestFlight from scratch

    In this article, we will discuss how to set up the Apple developer account, build an app (create IPA files), configure TestFlight, and deploy it to TestFlight for the very first time.

    There are tons of articles explaining how to configure and build an app, how to set up TestFlight, or how to set up an application for ad hoc distribution. However, most of them are either outdated or missing steps and can be misleading for someone who is doing it for the very first time.

    If you haven’t done this before, don’t worry, just traverse through the minute details of this article, follow every step correctly, and you will be able to set up your iOS application end-to-end, ready for TestFlight or ad hoc distribution within an hour.

    Prerequisites

    Before we start, please make sure you have:

    • A React Native project created and opened in XCode
    • XCode set up on your Mac
    • An Apple developer account with access to create Identifiers and Certificates, i.e., at least Developer or Admin access – https://developer.apple.com/account/
    • Access to App Store Connect with your Apple developer account – https://appstoreconnect.apple.com/
    • Make sure you have an Apple developer account; if not, please get it created first.

    The Setup contains 4 major steps: 

    • Creating Certificates, Identifiers, and Profiles from your Apple Developer account
    • Configuring the iOS app using these Identifiers, Certificates, and Profiles in XCode
    • Setting up TestFlight and Internal Testers group on App Store Connect
    • Generating iOS builds, signing them, and uploading them to TestFlight on App Store Connect

    Certificates, Identifiers, and Profiles

    Before we do anything, we need to create:

    • Bundle Identifier, which is an app bundle ID and a unique app identifier used by the App Store
    • A Certificate – to sign the iOS app before submitting it to the App Store
    • Provisioning Profile – for linking bundle ID and certificates together

    Bundle Identifiers

    For the App Store to recognize your app uniquely, we need to create a unique Bundle Identifier.

    Go to https://developer.apple.com/account: you will see the Certificates, Identifiers & Profiles tab. Click on Identifiers. 

    Click the Plus icon next to Identifiers:

    Select the App IDs option from the list of options and click Continue:

    Select App from app types and click Continue

    On the next page, you will need to enter the app ID and select the required services your application can have if required (this is optional—you can enable them in the future when you actually implement them). 

    Keep those unselected for now as we don’t need them for this setup.

    Once filled with all the information, please click on continue and register your Bundle Identifier.

    Generating Certificate

    Certificates can be generated 2 ways:

    • By automatically managing certificates from Xcode
    • By manually generating them

    We will generate them manually.

    To create a certificate, we need a Certificate Signing Request form, which needs to be generated from your Mac’s KeyChain Access authority.

    Creating Certificate Signing Request:

    Open the KeyChain Access application and click on the KeyChain Access menu item at the top left of the screen.

    Select Certificate Assistant -> Request a Certificate From a Certificate Authority

    Enter the required information like email address and name, then select the Save to Disk option.

    Click Continue and save this file somewhere you can easily find it, so you can upload it to your Apple developer account.

    Now head back to the Apple developer account and click on Certificates. Click the + icon next to the Certificates title, and you will be taken to the new certificate form.

    Select the iOS Distribution (App Store and ad hoc) option. Here, you can select the required services this certificate will need from a list of options (for example, Apple Push Notification service). 

    As we don’t need any services, ignore it for now and click continue.

    On the next screen, upload the certificate signing request form we generated in the last step and click Continue.

    At this step, your certificate will be generated and will be available to download.

    NOTE: The certificate can be downloaded only once, so please download it and keep it in a secure location to use it in the future.

    Download your certificate and install it by double-clicking the downloaded certificate file. The certificate will be installed on your Mac and can be used for generating builds in the next steps.

    You can verify this by going back to the Keychain Access app and finding the newly installed certificate in the certificates list.
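    If you prefer the terminal, you can also confirm the certificate is usable for signing. A quick check, assuming the certificate was installed into your default keychain:

    ```shell
    # List identities in the keychain that are valid for code signing;
    # the newly installed distribution certificate should appear here.
    security find-identity -v -p codesigning
    ```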

    Generating a Provisioning Profile

    Now link your identifier and certificate together by creating a provisioning profile.

    Let’s go back to the Apple developer account, select the profiles option, and select the + icon next to the Profiles title.

    You will be redirected to the new Profiles form page.

    Select Distribution Profile and click continue:

    Select the App ID we created in the first step and click Continue:

    Now, select the certificate we created in the previous step:

    Enter a Provisioning Profile name and click Generate:

    Once the profile is generated, it will be available to download. Download it and keep it in the same location as the certificate for future use.

    Configure the App in Xcode

    Now, we need to configure our iOS application using the bundle ID and the Apple developer account we used for generating the certificate and profiles.

    Open the <appname>.xcworkspace file in Xcode and click the app name in the left pane. This opens the app configuration page.

    Select the app from Targets, go to Signing & Capabilities, and enter the bundle identifier.

    Now, to let Xcode manage signing automatically, we need it to download the provisioning profile we generated recently.

    To do this, sign in to Xcode with your Apple ID.

    Select Preferences from the Xcode menu at the top left, go to Accounts, and click the + icon at the bottom.

    Select Apple ID from the list of account types, click Continue, and enter your Apple ID.

    It will prompt you to enter the password as well.

    Once you are successfully logged in, Xcode will fetch all the provisioning profiles associated with this account. Verify that you see your team in the Teams section of this account page.

    Now, go back to the Xcode Signing & Capabilities page, select Automatically Manage Signing, and then select the required team from the Team dropdown.

    At this point, your application is able to generate archives that you can upload to TestFlight or sign ad hoc to distribute through other channels (Diawi, etc.).
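    The same archive-and-export flow can also be scripted. A minimal sketch using xcodebuild, where the workspace name, scheme, and output paths are assumptions to replace with your own:

    ```shell
    # Archive the app (Release configuration) from the command line.
    xcodebuild archive \
      -workspace MyApp.xcworkspace \
      -scheme MyApp \
      -configuration Release \
      -archivePath build/MyApp.xcarchive

    # Export a signed .ipa from the archive using an ExportOptions.plist
    # (method: app-store for TestFlight, ad-hoc for private distribution).
    xcodebuild -exportArchive \
      -archivePath build/MyApp.xcarchive \
      -exportOptionsPlist ExportOptions.plist \
      -exportPath build/export
    ```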

    Setup TestFlight

    TestFlight and App Store management are handled through the App Store Connect portal.

    Open the App Store Connect portal and log in.

    After you log in, please make sure you have selected the correct team from the top right corner (you can check the team name just below the user name).

    Select My Apps from the list of options. 

    If this is the first time you are setting up an application on this team, you will see the + (Add app) option at the center of the page; if your team has already set up applications, you will see the + icon right next to the Apps header.

    Click on the + icon and select New App Option:

    Enter the complete app details: platform (iOS, macOS, or tvOS), app name, bundle ID (the one we created), SKU, and access type. Then click the Create button.

    You should now see your newly created application in the Apps menu. Select the app and go to TestFlight. You will see no builds there, as we have not pushed any yet.

    Generate and upload the build to TestFlight

    At this point, we are fully ready to generate a build from Xcode and push it to TestFlight. To do this, head back to Xcode.

    In the top middle section, you will see your app name and a right arrow. An iPhone or other simulator might be selected there; click the options list and select Any iOS Device.

    Select the Product menu from the Menu list and click on the Archive option.

    Once the archive succeeds, Xcode will open the Organizer window (you can also open this window from the Window menu).

    Here, we sign our application archive (build) using the certificate we created and upload it to the App Store Connect TestFlight.

    In the Organizer window, you will see the recently generated build. Select the build and click the Distribute button in the right panel of the Organizer page.

    On the next page, select App Store Connect from the “Select a method of distribution” window and click Continue.

    NOTE: We are selecting the App Store Connect option as we want to upload a build to TestFlight, but if you want to distribute it privately using other channels, please select the Ad Hoc option.

    Select Upload from the “Select a Destination” options and click Continue. This will prepare your build for submission to App Store Connect TestFlight.

    The first time, it will ask whether you want to sign the build automatically or manually. Select Automatically and click the Next button.

    Xcode may ask you to authenticate your certificate using your system password. Authenticate and wait until Xcode uploads the build to TestFlight.

    Once the build is uploaded successfully, Xcode will show a success modal.

    Now your app is uploaded to TestFlight and is being processed. Processing usually takes 5 to 15 minutes, after which TestFlight makes the build available for testing.

    Add Internal Testers and other teammates to TestFlight

    Once we are done with all the setup and uploaded the build to TestFlight, we need to add internal testers to TestFlight.

    This is a two-step process: first add the user to App Store Connect, then add them to TestFlight.

    Go to Users and Access.

    Add a new user; App Store Connect sends an invitation to the user.

    Once the user accepts the invitation, go to TestFlight -> Internal Testing.

    In the Internal Testing section, create a new testing group if one does not exist already, and add the user to that TestFlight testing group.

    Now, you should be able to configure the app, upload it to TestFlight, and add users to the TestFlight testing group.

    Hopefully you enjoyed this article and it helped you set up your iOS application end-to-end quickly and without too much confusion.

    Thanks.

  • Implementing GraphQL with Flutter: Everything you need to know

    Thinking about using GraphQL but unsure where to start? 

    This is a concise tutorial based on our experience using GraphQL. You will learn how to use GraphQL in a Flutter app, including how to create a query, a mutation, and a subscription using the graphql_flutter plugin. Once you’ve mastered the fundamentals, you can move on to designing your own workflow.

    Key topics and takeaways:

    * GraphQL

    * What is graphql_flutter?

    * Setting up graphql_flutter and GraphQLProvider

    * Queries

    * Mutations

    * Subscriptions

    GraphQL

    Do you find yourself calling multiple endpoints to populate data for a single screen? Do you wish you had more control over the data an endpoint returns, so a single call fetches exactly the fields you need and nothing more?

    Follow along to learn how to do this with GraphQL. GraphQL was designed to change the way data is supplied from the backend: it lets you specify the exact data structure you want.

    Let’s imagine that we have the table model in our database that looks like this:

    Movie {
      title
      genre
      rating
      year
    }

    These fields represent the properties of the Movie Model:

    • title is the name of the movie
    • genre describes what kind of movie it is
    • rating represents how viewers rated it
    • year states when it was released

    We can get movies like this using REST:

    GET localhost:8080/movies

    [
     {
       "title": "The Godfather",
       "genre":  "Drama",
       "rating": 9.2,
       "year": 1972
     }
    ]

    As you can see, whether or not we need them, REST returns all of the properties of each movie. In our frontend, we may just need the title and genre properties, yet all of them were returned.

    We can avoid redundancy by using GraphQL. We can specify the properties we wish to be returned using GraphQL, for example:

    query movies {
      Movie {
        title
        genre
      }
    }

    We’re informing the server that we only require the movie table’s title and genre properties. It provides us with exactly what we require:

    {
     "data": [
       {
         "title": "The Godfather",
         "genre": "Drama"
       }
     ]
    }

    GraphQL is a backend technology, whereas Flutter is a frontend SDK for developing mobile apps. When we use mobile apps, the data displayed in the app comes from a backend.

    It’s simple to create a Flutter app that retrieves data from a GraphQL backend. Simply make an HTTP request from the Flutter app, then use the returned data to set up and display the UI.

    The new graphql_flutter plugin includes APIs and widgets for retrieving and using data from GraphQL backends.

    What is graphql_flutter?

    graphql_flutter, as the name suggests, is a GraphQL client for Flutter. It exports widgets and providers for retrieving data from GraphQL backends, such as:

    • HttpLink — This is used to specify the backend’s endpoint or URL.
    • GraphQLClient — This class is used to retrieve a query or mutation from a GraphQL endpoint as well as to connect to a GraphQL server.
    • GraphQLCache — We use this class to cache our queries and mutations. It accepts a store that determines how cached results are persisted.
    • GraphQLProvider — This widget wraps the graphql_flutter widgets, allowing them to perform queries and mutations. The GraphQL client is passed to this widget, and all widgets in the provider’s tree have access to that client.
    • Query — This widget is used to perform a backend GraphQL query.
    • Mutation — This widget is used to modify a GraphQL backend.
    • Subscription — This widget allows you to create a subscription.

    Setting up graphql_flutter and GraphQLProvider

    Create a Flutter project:

    flutter create flutter_graphql
    cd flutter_graphql

    Next, install the graphql_flutter package:

    flutter pub add graphql_flutter

    The command above installs the graphql_flutter package and adds it to the dependencies section of your pubspec.yaml file:

    dependencies:
      graphql_flutter: ^5.0.0

    To use the widgets, we must import the package as follows:

    import 'package:graphql_flutter/graphql_flutter.dart';

    Before we can start making GraphQL queries and mutations, we must first wrap our root widget in GraphQLProvider. A GraphQLClient instance must be provided to the GraphQLProvider’s client property.

    GraphQLProvider(
      client: GraphQLClient(...),
    )

    The GraphQLClient includes the GraphQL server URL as well as a caching mechanism.

    final httpLink = HttpLink("http://10.0.2.2:8000/");

    ValueNotifier<GraphQLClient> client = ValueNotifier(
      GraphQLClient(
        cache: GraphQLCache(store: InMemoryStore()),
        link: httpLink,
      ),
    );

    HttpLink holds the URL of the GraphQL server. The HttpLink instance is passed to the GraphQLClient as its link property, which points at the GraphQL endpoint.

    The cache passed to GraphQLClient specifies the caching mechanism to use. Here, GraphQLCache persists cached queries and mutations with an InMemoryStore, which keeps them in memory.

    A GraphQLClient instance is passed to a ValueNotifier. This ValueNotifier holds a single value and notifies its listeners when that value changes. graphql_flutter uses it to notify its widgets when the data from a GraphQL endpoint changes, which helps graphql_flutter remain responsive.

    We’ll now encase our MaterialApp widget in GraphQLProvider:

    void main() {
      runApp(MyApp());
    }

    class MyApp extends StatelessWidget {
      // This widget is the root of your application.
      @override
      Widget build(BuildContext context) {
        return GraphQLProvider(
          client: client,
          child: MaterialApp(
            title: 'GraphQL Demo',
            theme: ThemeData(primarySwatch: Colors.blue),
            home: MyHomePage(title: 'GraphQL Demo'),
          ),
        );
      }
    }

    Queries

    We’ll use the Query widget to create a query with the graphql_flutter package.

    class MyHomePage extends StatelessWidget {
      @override
      Widget build(BuildContext context) {
        return Query(
          options: QueryOptions(
            document: gql(readCounters),
            variables: {
              'counterId': 23,
            },
            pollInterval: Duration(seconds: 10),
          ),
          builder: (QueryResult result,
              {VoidCallback? refetch, FetchMore? fetchMore}) {
            if (result.hasException) {
              return Text(result.exception.toString());
            }

            if (result.isLoading) {
              return Text('Loading');
            }

            // The result can be either a Map or a List.
            List counters = result.data?['counter'];

            return ListView.builder(
              itemCount: counters.length,
              itemBuilder: (context, index) {
                return Text(counters[index]['name']);
              },
            );
          },
        );
      }
    }

    The Query widget encloses the ListView widget, which will display the list of counters to be retrieved from our GraphQL server. As a result, the Query widget must wrap the widget where the data fetched by the Query widget is to be displayed.

    The Query widget cannot be the tree’s topmost widget. It can be placed wherever you want as long as the widget that will use its data is underneath or wrapped by it.

    In addition, two properties have been passed to the Query widget: options and builder.

    options

    options: QueryOptions(
      document: gql(readCounters),
      variables: {
        'counterId': 23,
      },
      pollInterval: Duration(seconds: 10),
    ),

    The options property is where the query configuration is passed to the Query widget. It is a QueryOptions instance, and the QueryOptions class exposes the properties we use to configure the Query widget.

    The query string, that is, the query the Query widget will execute, is passed in via the document property. We passed in the readCounters string here:

    // A raw string (r"""...""") keeps Dart from interpolating $counterId.
    final String readCounters = r"""
    query ReadCounters($counterId: Int!) {
      counter(id: $counterId) {
        name
        id
      }
    }
    """;

    The variables property is used to send query variables to the Query widget. Here we pass 'counterId': 23, which is substituted for $counterId in the readCounters query string.

    The pollInterval specifies how often the Query widget polls or refreshes the query data. The timer is set to 10 seconds, so the Query widget will perform HTTP requests to refresh the query data every 10 seconds.

    builder

    The builder property is a function. It is called when the Query widget sends an HTTP request to the GraphQL server endpoint. The Query widget calls the builder function with the data from the query, a function to re-fetch the data, and a function for pagination, which is used to fetch more results.

    The builder function returns widgets that are listed below the Query widget. The result argument is a QueryResult instance. The QueryResult class has properties that can be used to determine the query’s current state and the data returned by the Query widget.

    • If the query encounters an error, QueryResult.hasException is set.
    • If the query is still in progress, QueryResult.isLoading is set. We can use this property to show our users a UI progress bar to let them know that something is on its way.
    • The data returned by the GraphQL endpoint is stored in QueryResult.data.

    Mutations

    Let’s look at how to make mutation queries with the Mutation widget in graphql_flutter.

    The Mutation widget is used as follows:

    Mutation(
      options: MutationOptions(
        document: gql(addCounter),
        update: (GraphQLDataProxy cache, QueryResult result) {
          return cache;
        },
        onCompleted: (dynamic resultData) {
          print(resultData);
        },
      ),
      builder: (
        RunMutation runMutation,
        QueryResult result,
      ) {
        return TextButton(
          onPressed: () => runMutation({
            'counterId': 21,
          }),
          child: Text('Add Counter'),
        );
      },
    );

    The Mutation widget, like the Query widget, accepts some properties.

    • options is a MutationOptions class instance. This is where the mutation string and other configuration live.
    • document sets the mutation string. An addCounter mutation has been passed to the document in this case; the Mutation widget will execute it.
    • update is called when we want to update the cache. The update function receives the previous cache (cache) and the result of the mutation. Anything returned by update becomes the cache’s new value, so we refresh the cache based on the result.
    • onCompleted is called once the mutation on the GraphQL endpoint has completed, and receives the mutation result.
    • builder returns the widget tree under the Mutation widget. It is invoked with a RunMutation instance, runMutation, and a QueryResult instance, result.
    • runMutation executes the Mutation widget’s mutation: whenever it is called, the Mutation widget runs the mutation. The mutation variables are passed as parameters to the runMutation function; here it is invoked with the counterId variable, 21.

    When the mutation is finished, the builder is called and the Mutation widget rebuilds its tree. runMutation and the mutation result are passed to the builder function.
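    The addCounter document referenced above is not defined in the snippet. A minimal sketch of what it might look like, assuming a server-side addCounter mutation that accepts a counterId argument (both names are assumptions; match them to your schema):

    ```dart
    // Raw string so Dart does not interpolate $counterId.
    final String addCounter = r"""
    mutation AddCounter($counterId: Int!) {
      addCounter(counterId: $counterId) {
        id
        name
      }
    }
    """;
    ```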

    Subscriptions

    Subscriptions in GraphQL are similar to an event system that listens on a WebSocket and calls a function whenever an event is emitted into the stream.

    The client connects to the GraphQL server via a WebSocket. The event is passed to the WebSocket whenever the server emits an event from its end. So this is happening in real-time.

    The graphql_flutter plugin in Flutter uses WebSockets and Dart streams to open and receive real-time updates from the server.
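    Before the Subscription widget can receive anything, the client needs a WebSocket link alongside the HTTP link. A minimal sketch following the graphql_flutter setup, assuming the server also serves subscriptions at ws://10.0.2.2:8000/ (the URL is an assumption; adjust it to your server):

    ```dart
    import 'package:flutter/foundation.dart';
    import 'package:graphql_flutter/graphql_flutter.dart';

    final httpLink = HttpLink("http://10.0.2.2:8000/");

    // WebSocket link for subscription traffic (URL is an assumption).
    final wsLink = WebSocketLink("ws://10.0.2.2:8000/");

    // Route subscription requests over the WebSocket link and
    // everything else (queries, mutations) over HTTP.
    final link = Link.split(
      (request) => request.isSubscription,
      wsLink,
      httpLink,
    );

    final client = ValueNotifier(
      GraphQLClient(
        cache: GraphQLCache(store: InMemoryStore()),
        link: link,
      ),
    );
    ```

    With this client passed to GraphQLProvider, subscription results arrive over the WebSocket in real time while queries and mutations continue to use plain HTTP.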

    Let’s look at how we can use our Flutter app’s Subscription widget to create a real-time connection. We’ll start by creating our subscription string:

    final counterSubscription = '''
    subscription counterAdded {
      counterAdded {
        name
        id
      }
    }
    ''';

    When we add a new counter to our GraphQL server, this subscription will notify us in real-time.

    Subscription(
      options: SubscriptionOptions(
        document: gql(counterSubscription),
      ),
      builder: (result) {
        if (result.hasException) {
          return Text("Error occurred: " + result.exception.toString());
        }

        if (result.isLoading) {
          return Center(
            child: const CircularProgressIndicator(),
          );
        }

        return ResultAccumulator.appendUniqueEntries(
          latest: result.data,
          builder: (context, {results}) => ...
        );
      },
    ),

    The Subscription widget has several properties, as we can see:

    • options holds the Subscription widget’s configuration.
    • document holds the subscription string.
    • builder returns the Subscription widget’s widget tree.

    The subscription result is used to call the builder function. The result has the following properties:

    • If the Subscription widget encounters an error while listening to the GraphQL server for updates, result.hasException is set.
    • While the subscription is still waiting for data from the server, result.isLoading is set.

    According to graphql_flutter’s pub.dev page, the provided ResultAccumulator helper widget is used to collect subscription results.

    Conclusion

    This blog intends to help you understand what makes GraphQL so powerful, how to use it in Flutter, and how to take advantage of the reactive nature of graphql_flutter. You can now take the first steps in building your applications with GraphQL!