Take a look at these two JavaScript code snippets. They look nearly identical — but do they behave the same?
Snippet 1 (without semicolon):
const promise1 = new Promise((resolve, reject) => {
  resolve('printing content of promise1');
})

(async () => {
  const res = await promise1;
  console.log('logging result ->', res);
})();
Snippet 2 (with semicolon):
const promise1 = new Promise((resolve, reject) => {
  resolve('printing content of promise1');
});

(async () => {
  const res = await promise1;
  console.log('logging result ->', res);
})();
What Happens When You Run Them?
❌ Snippet 1 Output:
TypeError: (intermediate value) is not a function
✅ Snippet 2 Output:
logging result -> printing content of promise1
Why Does a Single Semicolon Make Such a Big Difference?
We’ve always heard that semicolons are optional in JavaScript. So why does omitting just one lead to a runtime error here?
Let’s investigate.
What’s Really Going On?
The issue boils down to JavaScript’s Automatic Semicolon Insertion (ASI).
When you omit a semicolon, JavaScript tries to infer where it should end your statements. Usually, it does a decent job. But it’s not perfect.
In the first snippet, JavaScript parses this like so:
const promise1 = new Promise(…)(async () => { … })();
Here, it thinks you are calling the result of new Promise(…) as a function, which is not valid — hence the TypeError.
But Wait, Aren’t Semicolons Optional in JavaScript?
They are — until they’re not.
Here’s the trap:
If a new line starts with:
(
[
+ or -
/ (as in regex)
JavaScript might interpret it as part of the previous expression.
That’s what’s happening here. The async IIFE starts with (, so JavaScript assumes it continues the previous line unless you forcefully break it with a semicolon.
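This suggests a defensive habit: when a statement begins with `(`, prefix it with its own semicolon so it can never be absorbed into the previous line, even if that line forgot its terminator. A minimal sketch of this "leading semicolon" style:

```javascript
// The previous statement deliberately omits its semicolon...
const promise1 = new Promise((resolve) => {
  resolve('printing content of promise1');
})

// ...but the leading ';' here breaks the statement apart anyway,
// so ASI can never glue the IIFE onto the line above.
;(async () => {
  const res = await promise1;
  console.log('logging result ->', res); // logging result -> printing content of promise1
})();
```

Some no-semicolon style guides mandate exactly this leading-semicolon convention for lines that start with `(` or `[`.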
Key Takeaways:
ASI is not foolproof and can lead to surprising bugs.
A semicolon before an IIFE ensures it is not misinterpreted as part of the preceding line.
This is especially important when using modern JavaScript features like async/await, arrow functions, and top-level code.
Why You Should Use Semicolons Consistently
Even though many style guides (like those from Prettier or StandardJS) allow you to skip semicolons, using them consistently provides:
✅ Clarity
You eliminate ambiguity and make your code more readable and predictable.
✅ Fewer Bugs
You avoid hidden edge cases like this one, which are hard to debug — especially in production code.
✅ Compatibility
Not all environments handle ASI equally. Tools like Babel, TypeScript, or older browsers might behave differently.
Conclusion
The difference between working and broken code here is one semicolon. JavaScript’s ASI mechanism is helpful, but it can fail — especially when lines begin with characters like ( or [.
If you’re writing clean, modular, modern JavaScript, consider adding that semicolon. It’s a tiny keystroke that saves a lot of headaches.
Happy coding — and remember, when in doubt, punctuate!
In the fast-paced world of mobile technology, iOS widgets stand out as dynamic tools that enhance user engagement and convenience. With iOS 14’s introduction of widgets, Apple has empowered developers to create versatile, interactive components that provide valuable information and functionality right from the Home screen.
In this blog, we’ll delve into the world of iOS widgets and explore how to use them to create exceptional user experiences.
Understanding WidgetKit:
WidgetKit is a framework provided by Apple that simplifies creating and managing widgets for iOS, iPadOS, and macOS. It offers a set of APIs and tools that enable developers to easily design, develop, and deploy widgets. WidgetKit handles various aspects of widget development, including data management, layout rendering, and update scheduling, allowing developers to focus on creating compelling widget experiences.
Key Components of WidgetKit:
Widget Extension: A widget extension is a separate target within an iOS app project responsible for defining and managing the widget’s behavior, appearance, and data.
Widget Configuration: The widget configuration determines the appearance and behavior of the widget displayed on the Home screen. It includes attributes such as the widget’s name, description, supported sizes, and placeholder content.
Timeline Provider: The timeline provider supplies the widget with dynamic content based on predefined schedules or user interactions.
Widget Views: Widget views are SwiftUI views used to define the layout and presentation of the widget’s content.
Understanding iOS Widgets:
Widgets offer a convenient way to present timely and relevant information from your app or provide quick access to app features directly on the device’s Home screen. Introduced in iOS 14, widgets come in various sizes and can showcase a wide range of content, including weather forecasts, calendar events, news headlines, and app-specific data.
Benefits of iOS Widgets:
Enhanced Accessibility: Widgets enable users to access important information and perform tasks without navigating away from the Home screen, saving time and effort.
Increased Engagement: By displaying dynamic content and interactive elements, widgets encourage users to interact with apps more frequently, leading to higher engagement rates.
Personalization: Users can customize their Home screen by adding, resizing, and rearranging widgets to suit their preferences and priorities.
Improved Productivity: Widgets provide at-a-glance updates on calendar events, reminders, and to-do lists, helping users stay organized and productive throughout the day.
Widget Sizes
Widget sizes refer to the dimensions and layouts available for widgets on different platforms and devices. In the context of iOS, iPadOS, and macOS, widgets come in various sizes, each offering a distinct layout and content display.
These sizes are designed to accommodate different amounts of information and fit various screen sizes, ensuring a consistent user experience across devices.
Here are the common widget sizes available:
Small: This size is compact, displaying essential information in a concise format. Small widgets are ideal for providing quick updates or notifications without taking up much space on the screen.
Medium: Medium-sized widgets offer slightly more space for content display compared to small widgets. They can accommodate additional information or more detailed visualizations while remaining relatively compact.
Large: Large widgets provide ample space for displaying extensive content or detailed visuals. They offer a comprehensive view of information and may include interactive elements for enhanced functionality.
Extra Large: This size is available primarily on iPadOS and macOS, offering the most significant amount of space for content display. Extra-large widgets are suitable for showcasing extensive data or intricate visualizations, maximizing visibility and usability on larger screens.
These widget sizes cater to different user preferences and use cases, allowing developers to choose the most appropriate size based on the content and functionality of their widgets. By offering a range of sizes, developers can ensure their widgets deliver a tailored experience that meets the diverse needs of users across various devices and platforms.
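In code, these sizes correspond to WidgetFamily cases passed to the supportedFamilies(_:) modifier on the widget configuration. A sketch (MyWidgetEntryView is a placeholder name):

```swift
// Opt a widget into the sizes it supports; the WidgetFamily cases map
// to the small/medium/large/extra-large sizes described above.
StaticConfiguration(kind: kind, provider: Provider()) { entry in
    MyWidgetEntryView(entry: entry)
}
.supportedFamilies([.systemSmall, .systemMedium, .systemLarge, .systemExtraLarge])
```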
Best Practices for Widget Design and Development:
Building on the existing best practices, let’s introduce additional tips:
Accessibility Considerations: Ensure that widgets are accessible to all users, including those with disabilities, by implementing features such as VoiceOver support and high contrast modes.
Localization Support: Localize widget content and interface elements to cater to users from diverse linguistic and cultural backgrounds, enhancing the app’s global reach and appeal.
Data Privacy and Security: Safeguard users’ personal information and sensitive data by implementing robust security measures and adhering to privacy best practices outlined in Apple’s guidelines.
Integration with App Clips: Explore opportunities to integrate widgets with App Clips, which are lightweight app experiences that allow users to access specific features or content without installing the full app.
Creating a Month-Wise Holiday Widget
In this example, we will create a widget that displays the holidays of a month, allowing users to quickly see the month’s holidays at a glance right on their home screen.
Initial Setup
Open Xcode: Launch Xcode on your Mac.
Create a New Project: Select “Create a new Xcode project” from the welcome screen or go to File > New > Project from the menu bar.
Choose a Template: In the template chooser window, select the “App” template under the iOS tab. Make sure to select SwiftUI as the User Interface and click “Next.”
Configure Your Project: Enter your project name, choose an organization identifier (usually your reverse domain name), select SwiftUI as the interface and Swift as the language, then click “Next.”
Xcode will generate a default SwiftUI view for your app.
Add a Widget Extension: In Xcode, navigate to the File menu and select New > Target. In the template chooser window, select the “Widget Extension” template under the iOS tab and click “Next.”
Configure the Widget Extension: Enter a name for your widget extension as “Monthly Holiday” and choose the parent app for the extension (your main project). Click “Finish.”
Select “Activate” when the Activate scheme pops up.
Set Up the Widget Extension: Xcode will generate the necessary files for your widget extension, including a view file (e.g., WidgetView.swift) and a provider file (e.g., WidgetProvider.swift).
Developing the Month-Wise Holidays Widget
Implementing Provider Struct and TimelineProvider Protocol:
The TimelineProvider protocol provides the data that a widget displays over time. By conforming to this protocol, you define how and when the data for your widget should be updated.
struct Provider: TimelineProvider {
    // Provides a placeholder entry while the widget is loading.
    func placeholder(in context: Context) -> DayEntry {
        DayEntry(date: Date(), configuration: ConfigurationIntent())
    }

    // Provides a snapshot of the widget's current state.
    func getSnapshot(in context: Context, completion: @escaping (DayEntry) -> ()) {
        let entry = DayEntry(date: Date(), configuration: ConfigurationIntent())
        completion(entry)
    }

    // Provides a timeline of entries for the widget.
    func getTimeline(in context: Context, completion: @escaping (Timeline<DayEntry>) -> ()) {
        var entries: [DayEntry] = []

        // Generate a timeline of seven entries, a day apart, starting from the current date.
        let currentDate = Date()
        for dayOffset in 0 ..< 7 {
            let entryDate = Calendar.current.date(byAdding: .day, value: dayOffset, to: currentDate)!
            let startOfDay = Calendar.current.startOfDay(for: entryDate)
            let entry = DayEntry(date: startOfDay, configuration: ConfigurationIntent())
            entries.append(entry)
        }

        let timeline = Timeline(entries: entries, policy: .atEnd)
        completion(timeline)
    }
}
Define a struct named DayEntry that conforms to the TimelineEntry protocol.
TimelineEntry is used in conjunction with TimelineProvider to manage and provide the data that the widget displays over time. By creating multiple timeline entries, you can control what your widget displays at different times throughout the day.
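The post does not show DayEntry itself, so here is a minimal sketch of what it must look like for the Provider above to compile. TimelineEntry only requires a `date` property; the `configuration` field mirrors how the provider constructs its entries:

```swift
import WidgetKit

// Minimal sketch of the DayEntry type the Provider relies on.
struct DayEntry: TimelineEntry {
    let date: Date                          // when WidgetKit should show this entry
    let configuration: ConfigurationIntent  // matches the provider's initializer calls
}
```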
Define a SwiftUI view named MonthlyHolidayWidgetEntryView to display each entry in the widget.
struct MonthlyHolidayWidgetEntryView: View {
    var entry: DayEntry
    var config: MonthConfig

    // Custom initializer to configure the view based on the entry's date
    init(entry: DayEntry) {
        self.entry = entry
        self.config = MonthConfig.determineConfig(from: entry.date)
    }

    var body: some View {
        ZStack {
            // Background shape with a gradient color based on the month configuration
            ContainerRelativeShape()
                .fill(config.backgroundColor.gradient)

            VStack {
                Spacer()
                // Display the holiday dates associated with the month
                HStack(spacing: 4) {
                    Text(config.dateText)
                        .foregroundColor(config.dayTextColor)
                        .font(.system(size: 25, weight: .heavy))
                }
                Spacer()
                // Display the name of the month
                Text(config.month)
                    .font(.system(size: 38, weight: .heavy))
                    .foregroundColor(config.dayTextColor)
                Spacer()
            }
            .padding()
        }
    }
}
Define a widget named MonthlyHolidayWidget using SwiftUI and WidgetKit.
struct MonthlyHolidayWidget: Widget {
    let kind: String = "MonthlyHolidaysWidget"

    var body: some WidgetConfiguration {
        StaticConfiguration(kind: kind, provider: Provider()) { entry in
            MonthlyHolidayWidgetEntryView(entry: entry)
        }
        .configurationDisplayName("Monthly style widget") // Display name in the widget gallery
        .description("The date of the widget changes based on the holidays of the month.") // Description of the widget's functionality
        .supportedFamilies([.systemLarge]) // Widget size supported (large in this case)
    }
}
Define a PreviewProvider struct named MonthlyHolidayWidget_Previews.
struct MonthlyHolidayWidget_Previews: PreviewProvider {
    static var previews: some View {
        // Provide a preview of MonthlyHolidayWidgetEntryView for the widget gallery
        MonthlyHolidayWidgetEntryView(entry: DayEntry(date: dateToDisplay(month: 12, day: 22), configuration: ConfigurationIntent()))
            .previewContext(WidgetPreviewContext(family: .systemLarge))
    }

    // Helper function to create a date for the given month and day in the year 2024
    static func dateToDisplay(month: Int, day: Int) -> Date {
        let components = DateComponents(calendar: Calendar.current, year: 2024, month: month, day: day)
        return Calendar.current.date(from: components)!
    }
}
Define an extension on the Date struct, adding computed properties to format dates in a specific way.
extension Date {
    // Computed property to get the weekday in a wide format (e.g., "Monday")
    var weekDayDisplayFormat: String {
        self.formatted(.dateTime.weekday(.wide))
    }

    // Computed property to get the day of the month (e.g., "22")
    var dayDisplayFormat: String {
        formatted(.dateTime.day())
    }
}
Define `MonthConfig` struct that encapsulates configuration data.
For displaying month-specific attributes such as background color, date text, weekday text color, day text color, and month name based on a given date.
struct MonthConfig {
    let backgroundColor: Color   // Background color for the month display
    let dateText: String         // Text describing specific dates or holidays in the month
    let weekdayTextColor: Color  // Text color for weekdays
    let dayTextColor: Color      // Text color for days of the month
    let month: String            // Name of the month

    /// Determines and returns the configuration (MonthConfig) based on the given date.
    ///
    /// - Parameter date: The date used to determine the month configuration.
    /// - Returns: A MonthConfig object corresponding to the month of the given date.
    static func determineConfig(from date: Date) -> MonthConfig {
        let monthInt = Calendar.current.component(.month, from: date)
        switch monthInt {
        case 1: // January
            return MonthConfig(backgroundColor: .gray, dateText: "1 and 26", weekdayTextColor: .black.opacity(0.6), dayTextColor: .white.opacity(0.8), month: "Jan")
        case 2: // February
            return MonthConfig(backgroundColor: .palePink, dateText: "No Holiday", weekdayTextColor: .pink.opacity(0.5), dayTextColor: .white.opacity(0.8), month: "Feb")
        case 3: // March
            return MonthConfig(backgroundColor: .paleGreen, dateText: "25", weekdayTextColor: .black.opacity(0.7), dayTextColor: .white.opacity(0.8), month: "March")
        case 4: // April
            return MonthConfig(backgroundColor: .paleBlue, dateText: "No Holiday", weekdayTextColor: .black.opacity(0.5), dayTextColor: .white.opacity(0.8), month: "April")
        case 5: // May
            return MonthConfig(backgroundColor: .paleYellow, dateText: "1", weekdayTextColor: .black.opacity(0.5), dayTextColor: .white.opacity(0.7), month: "May")
        case 6: // June
            return MonthConfig(backgroundColor: .skyBlue, dateText: "No Holiday", weekdayTextColor: .black.opacity(0.5), dayTextColor: .white.opacity(0.7), month: "June")
        case 7: // July
            return MonthConfig(backgroundColor: .blue, dateText: "No Holiday", weekdayTextColor: .black.opacity(0.5), dayTextColor: .white.opacity(0.8), month: "July")
        case 8: // August
            return MonthConfig(backgroundColor: .paleOrange, dateText: "15", weekdayTextColor: .black.opacity(0.5), dayTextColor: .white.opacity(0.8), month: "August")
        case 9: // September
            return MonthConfig(backgroundColor: .paleRed, dateText: "No Holiday", weekdayTextColor: .black.opacity(0.5), dayTextColor: .paleYellow.opacity(0.9), month: "Sep")
        case 10: // October
            return MonthConfig(backgroundColor: .black, dateText: "2", weekdayTextColor: .white.opacity(0.6), dayTextColor: .orange.opacity(0.8), month: "Oct")
        case 11: // November
            return MonthConfig(backgroundColor: .paleBrown, dateText: "31", weekdayTextColor: .black.opacity(0.6), dayTextColor: .white.opacity(0.6), month: "Nov")
        case 12: // December
            return MonthConfig(backgroundColor: .paleRed, dateText: "25", weekdayTextColor: .white.opacity(0.6), dayTextColor: .darkGreen.opacity(0.8), month: "Dec")
        default:
            // Default case for unexpected month values (shouldn't typically happen)
            return MonthConfig(backgroundColor: .gray, dateText: " ", weekdayTextColor: .black.opacity(0.6), dayTextColor: .white.opacity(0.8), month: "None")
        }
    }
}
Call MonthlyHolidayWidget and MonthlyWidgetLiveActivity inside “MonthlyWidgetBundle.”
import WidgetKit
import SwiftUI

@main
struct MonthlyWidgetBundle: WidgetBundle {
    var body: some Widget {
        MonthlyHolidayWidget()
        MonthlyWidgetLiveActivity()
    }
}
Now, finally, add the widget to a device:
Touch and hold a blank area of the Home screen for about two seconds.
Tap the plus (+) button in the top-left corner.
Enter the widget name in the search bar.
Finally, select the widget — “Monthly Holiday” in our case — to add it to the screen.
The finished widget will appear as follows:
Conclusion:
iOS widgets represent a powerful tool for developers to enhance user experiences, drive engagement, and promote app adoption. By understanding the various types of widgets, implementing best practices for design and development, and exploring innovative use cases, developers can leverage their full potential to create compelling and impactful experiences for iOS users worldwide. As Apple continues to evolve the platform and introduce new features, widgets will remain a vital component of the iOS ecosystem, offering endless possibilities for innovation and creativity.
Go interfaces are powerful tools for designing flexible and adaptable code. However, their inner workings can often seem hidden behind the simple syntax.
This blog post aims to peel back the layers and explore the internals of Go interfaces, providing you with a deeper understanding of their power and capabilities.
1. Interfaces: Not Just Method Signatures
While interfaces appear as collections of method signatures, they are deeper than that. An interface defines a contract: any type that implements the interface guarantees the ability to perform specific actions through those methods. This contract-based approach promotes loose coupling and enhances code reusability.
// Interface defining a "printable" behavior
type Printable interface {
    String() string
}

// Struct types implementing the Printable interface
type Book struct {
    Title string
}

type Article struct {
    Title   string
    Content string
}

// Implement String() to fulfill the contract
func (b Book) String() string {
    return b.Title
}

// Implement String() to fulfill the contract
func (a Article) String() string {
    return fmt.Sprintf("%s", a.Title)
}
Here, both Book and Article types implement the Printable interface by providing a String() method. This allows us to treat them interchangeably in functions expecting Printable values.
2. Interface Values and Dynamic Typing
An interface variable does not hold a value of the interface type itself; it refers to a value of some concrete type that implements the interface. Go determines the actual type dynamically at runtime. This allows for flexible operations like:
func printAll(printables []Printable) {
    for _, p := range printables {
        fmt.Println(p.String()) // Calls the appropriate String() based on the concrete type
    }
}

book := Book{Title: "Go for Beginners"}
article := Article{Title: "The power of interfaces"}
printables := []Printable{book, article}
printAll(printables)
The printAll function takes a slice of Printable and iterates over it. Go dynamically invokes the correct String() method based on the concrete type of each element (Book or Article) within the slice.
3. Embedded Interfaces and Interface Inheritance
Go interfaces support embedding existing interfaces to create more complex contracts. This allows for code reuse and hierarchical relationships, further enhancing the flexibility of your code:
type Writer interface {
    Write(data []byte) (int, error)
}

type ReadWriter interface {
    Writer
    Read([]byte) (int, error)
}

type MyFile struct {
    // ... file data and methods
}

// MyFile satisfies both Writer and ReadWriter by providing both methods
func (f *MyFile) Write(data []byte) (int, error) {
    // ... write data to file
}

func (f *MyFile) Read(data []byte) (int, error) {
    // ... read data from file
}
Here, ReadWriter inherits all methods from the embedded Writer interface, effectively creating a more specific “read-write” contract.
4. The Empty Interface and Its Power
The special interface{} represents the empty interface, meaning it requires no specific methods. This seemingly simple concept unlocks powerful capabilities:
// Function accepting any type using the empty interface
func PrintAnything(value interface{}) {
    fmt.Println(reflect.TypeOf(value), value)
}

PrintAnything(42)       // Output: int 42
PrintAnything("Hello")  // Output: string Hello
PrintAnything(MyFile{}) // Output: main.MyFile {}
This function can accept any type because interface{} has no requirements. Internally, Go uses reflection to extract the actual type and value at runtime, enabling generic operations.
5. Understanding Interface Equality and Comparisons
Equality checks on interface values involve both the dynamic type and underlying value:
book1 := Book{Title: "Go for Beginners"}
book2 := Book{Title: "Go for Beginners"}

// Same type and value, so equal
fmt.Println(book1 == book2) // true

differentBook := Book{Title: "Go for Dummies"}

// Same type, different value, so not equal
fmt.Println(book1 == differentBook) // false

article := Article{Title: "Go for Beginners"}

// This will cause a compilation error
fmt.Println(book1 == article) // Error: invalid operation: book1 == article (mismatched types Book and Article)
However, it’s essential to remember that two interface values compare equal with == only when they hold the same dynamic type and the same underlying value — and if the dynamic type is not comparable, the comparison panics at runtime.
To compare interface values effectively, you can utilize two main approaches:
1. Type Assertions: These allow you to safely access the underlying value and perform comparisons if you’re certain about the actual type:
func getBookTitleFromPrintable(p Printable) (string, bool) {
    book, ok := p.(Book) // Check if p is a Book
    if ok {
        return book.Title, true
    }
    return "", false // Return empty string and false if not a Book
}

bookTitle, ok := getBookTitleFromPrintable(article)
if ok {
    fmt.Println("Extracted book title:", bookTitle)
} else {
    fmt.Println("Article is not a Book")
}
2. Custom Comparison Functions: You can also create dedicated functions to compare interface values based on specific criteria:
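The post's own example for this is not shown, so here is a minimal sketch. The `samePrintable` helper is hypothetical: it compares two Printable values by the criterion we care about (their rendered string) rather than by dynamic type:

```go
package main

import "fmt"

type Printable interface{ String() string }

type Book struct{ Title string }

func (b Book) String() string { return b.Title }

type Article struct{ Title, Content string }

func (a Article) String() string { return a.Title }

// samePrintable compares interface values by a chosen criterion —
// here, their String() output — sidestepping the mismatched-types
// problem that direct == comparison of different structs has.
func samePrintable(a, b Printable) bool {
	return a.String() == b.String()
}

func main() {
	fmt.Println(samePrintable(Book{Title: "Go"}, Article{Title: "Go"})) // true
}
```

Unlike `==`, a comparison function lets a Book and an Article be considered "equal" when that is the semantics your program actually needs.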
The Increment method receives a pointer to MyCounter, allowing it to directly modify the count field.
7. Error Handling and Interfaces
Go interfaces play a crucial role in error handling. The built-in error interface defines a single method, Error() string, used to represent errors:
type error interface {
    Error() string
}

// Custom error type implementing the error interface
type MyError struct {
    message string
}

func (e MyError) Error() string {
    return e.message
}

func myFunction() error {
    // ... some operation
    return MyError{"Something went wrong"}
}

if err := myFunction(); err != nil {
    fmt.Println("Error:", err.Error()) // Prints "Something went wrong"
}
By adhering to the error interface, custom errors can be seamlessly integrated into Go’s error-handling mechanisms.
8. Interface Values and Nil
Interface values can be nil, indicating they don’t hold any concrete value. However, attempting to call methods on a nil interface value results in a panic.
var printable Printable         // nil interface value
fmt.Println(printable.String()) // Panics!
Always check for nil before calling methods on interface values.
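A guard function is the usual pattern. This sketch (the `render` helper is hypothetical) returns a fallback instead of panicking when the interface value is nil:

```go
package main

import "fmt"

type Printable interface{ String() string }

// render guards against a nil interface value before calling its method.
func render(p Printable) string {
	if p == nil {
		return "nothing to print"
	}
	return p.String()
}

func main() {
	var p Printable          // nil interface value
	fmt.Println(render(p))   // nothing to print — no panic
}
```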
However, it’s important to understand that an interface{} value doesn’t simply hold a reference to the underlying data. Internally, Go creates a special structure to store both the type information and the actual value. This hidden structure is often referred to as “boxing” the value.
Imagine a small container holding both a label indicating the type (e.g., int, string) and the actual data — something like this:
type iface struct {
    tab  *itab
    data unsafe.Pointer
}
Technically, this structure involves two components:
tab: This type descriptor carries details like the interface’s method set, the underlying type, and the methods of the underlying type that implement the interface.
data pointer: This pointer directly points to the memory location where the actual value resides.
When you retrieve a value from an interface{}, Go performs “unboxing.” It reads the type information and data pointer and then creates a new variable of the appropriate type based on this information.
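The visible face of this unboxing is a type assertion or type switch: Go inspects the stored type descriptor and hands back a variable of the concrete type. A minimal sketch:

```go
package main

import "fmt"

// describe unboxes an interface{} value with a type switch; inside each
// case, x already has the concrete type Go recovered from the box.
func describe(v interface{}) string {
	switch x := v.(type) {
	case int:
		return fmt.Sprintf("int: %d", x)
	case string:
		return fmt.Sprintf("string: %q", x)
	default:
		return fmt.Sprintf("other: %v", x)
	}
}

func main() {
	fmt.Println(describe(42))   // int: 42
	fmt.Println(describe("hi")) // string: "hi"
}
```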
This internal mechanism might seem complex, but the Go runtime handles it seamlessly. However, understanding this concept can give you deeper insights into how Go interfaces work under the hood.
9. Conclusion
This journey through the magic of Go interfaces has hopefully given you a deeper understanding of their capabilities and how they work. We’ve explored how they go beyond simple method signatures to define contracts, enable dynamic behavior, and make code far more flexible.
Remember, interfaces are not just tools for code reuse, but also powerful mechanisms for designing adaptable and maintainable applications.
Here are some key takeaways to keep in mind:
Interfaces define contracts, not just method signatures.
Interfaces enable dynamic typing and flexible operations.
Embedded interfaces allow for hierarchical relationships and code reuse.
The empty interface unlocks powerful generic capabilities.
Understand the nuances of interface equality and comparisons.
Interfaces play a crucial role in Go’s error-handling mechanisms.
Be mindful of nil interface values and potential panics.
The goal of this post: to help you understand the use and impact of test cases in software development.
What’s in it for you?
In the world of coding, we’re often in a rush to complete work before a deadline hits. And let’s be honest, writing test cases isn’t usually at the top of our priority list. We get it—they seem tedious, so we’d rather skip this extra step. But here’s the thing: those seemingly boring lines of code have superhero potential. Don’t believe me? You will.
In this blog, we’re going to break down the mystery around test cases. No jargon, just simple talk. We’ll chat about what they are, explore a handy tool called Jest, and uncover why these little lines are actually the unsung heroes of coding. So, let’s ditch the complications and discover why giving some attention to test cases can level up our coding game. Ready? Let’s dive in!
What are test cases?
A test case is a detailed document specifying conditions under which a developer assesses whether a software application aligns with customer requirements. It includes preconditions, the case name, input conditions, and expected results. Derived from test scenarios, test cases cover both positive and negative inputs, providing a roadmap for test execution. This one-time effort aids future regression testing.
Test cases offer insights into testing strategy, process, preconditions, and expected outputs. Executed during testing, they ensure the software performs its intended tasks. Linking defects to test case IDs facilitates efficient defect reporting. The comprehensive documentation acts as a safeguard, catching any oversights during test case execution and reinforcing the development team’s efforts.
Different types of test cases exist, including integration, functional, non-functional, and unit. For this blog, we will talk about unit test cases.
What are unit test cases?
Unit testing is the process of testing the smallest functional unit of code. A functional unit could be a class member or simply a function that does something to your input and provides an output. Test cases around those functional units are called unit test cases.
Purpose of unit test cases
To validate that each unit of the software works as intended and meets the requirements: For example, if your requirement is that the function returns an object with specific properties, a unit test will detect whether the code is written accordingly.
To check the robustness of code: Unit tests are automated and run each time the code is changed to ensure that new code does not break existing functionality.
To check the errors and bugs beforehand: If a case fails or doesn’t fulfill the requirement, it helps the developer isolate the area and recheck it for bugs before testing on demo/UAT/staging.
Different frameworks for writing unit test cases
There are various frameworks for writing unit test cases, such as Jest, Mocha, and Jasmine. For this blog, we will focus on Jest.

Jest is used and recommended by Facebook and officially supported by the React dev team.
It has a great community and active support, so if you run into a problem and can’t find a solution in the comprehensive documentation, there are thousands of developers out there who could help you figure it out within hours.
1. Performance: Ideal for larger projects with continuous deployment needs, Jest delivers enhanced performance.
2. Compatibility: While Jest is widely used for testing React applications, it seamlessly integrates with other frameworks like Angular, Node, Vue, and Babel-based projects.
3. Auto Mocking: Jest automatically mocks imported libraries in test files, reducing boilerplate and facilitating smoother testing workflows.
4. Extended API: Jest comes with a comprehensive API, eliminating the necessity for additional libraries in most cases.
5. Timer Mocks: Featuring a Time mocking system, Jest accelerates timeout processes, saving valuable testing time.
6. Active Development & Community: Jest undergoes continuous improvement, boasting the most active community support for rapid issue resolution and updates.
Components of a test case in Jest
Describe
As the name indicates, a describe block describes the module we are going to test.
It should describe only the module, not the individual tests; Jest does not run assertions inside the describe block itself.
It
Here, the actual code is tested and its output verified against real or fake (spy, mock) values. We can nest multiple it blocks under a describe block.
It’s good practice to state what the test does or doesn’t do in the it block’s description.
Matchers
Matchers match the output with a real/fake output.
A test case without a matcher will always pass, making it a trivial test.
// For each unit test you write, answer these questions:
describe('What component aspect are you testing?', () => {
  it('What should the feature do?', () => {
    const actual = 'What is the actual output?'
    const expected = 'What is the expected output?'
    expect(actual).toEqual(expected) // matcher
  })
})
Mocks and spies in Jest
Mocks: They are objects or functions that simulate the behavior of real components. They are used to create controlled environments for testing by replacing actual components with simulated ones. Mocks are employed to isolate the code being tested, ensuring that the test focuses solely on the unit or component under examination without interference from external dependencies.
It is mainly used for mocking a library or function that is most frequently used in the whole file or unit test case.
Let Code.ts be the file you want to test.
import { v4 as uuidv4 } from 'uuid'

export const functionToTest = () => {
  const id = uuidv4()
  // rest of the code
  return id
}
As this is a unit test, we won’t be testing the uuidv4 function itself, so we will mock the whole uuid module using jest.mock.
jest.mock('uuid', () => ({ uuidv4: () => 'random id value' })) // mock the uuid module, exposing uuidv4 as a function

describe('testing code.ts', () => {
  it('i have mocked uuid module', () => {
    const res = functionToTest()
    expect(res).toEqual('random id value')
  })
})
And that’s it. You have mocked the entire uuid module, so when uuid is imported during a test, it exposes the mocked uuidv4 function, which returns 'random id value'.
Spies: They are functions or objects that “spy” on other functions by tracking calls made to them. They allow you to observe and verify the behavior of functions during testing. Spies are useful for checking if certain functions are called, how many times they are called, and with what arguments. They help ensure that functions are interacting as expected.
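Under the hood, a spy is just a wrapper that records every call made through it before delegating to the original function. A minimal sketch in plain JavaScript may make this concrete (the makeSpy helper below is hypothetical, not part of Jest's API):

```javascript
// Minimal sketch of what a test spy does: wrap a function and record its calls.
// `makeSpy` is a hypothetical helper, not part of Jest.
function makeSpy(fn) {
  const spy = (...args) => {
    spy.calls.push(args);   // record the arguments of every call
    return fn(...args);     // delegate to the original function
  };
  spy.calls = [];
  return spy;
}

const add = (a, b) => a + b;
const spiedAdd = makeSpy(add);

spiedAdd(2, 3);
spiedAdd(10, 4);

console.log(spiedAdd.calls.length); // 2
console.log(spiedAdd.calls[0]);     // [ 2, 3 ]
```

Jest's jest.spyOn builds on the same idea, additionally letting you replace the implementation (mockImplementation) and restore the original afterwards.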
This is by far the most used method; because it works on object values, it can be used to spy on class methods efficiently.
describe('DataService Class', () => {
  it('should spy on the fetchData method with mockImplementation', () => {
    const dataServiceInstance = new DataService()
    const fetchDataSpy = jest.spyOn(DataService.prototype, 'fetchData') // prototype exposes the class method as an object property
    fetchDataSpy.mockImplementation(() => 'Mocked Data') // return mocked data whenever the method is called
    const result = dataServiceInstance.fetchData() // 'Mocked Data'
    expect(fetchDataSpy).toHaveBeenCalledTimes(1)
    expect(result).toBe('Mocked Data')
  })
})
Mocking database call
One of the best uses of Jest is to mock a database call, i.e., mocking create, put, post, and delete calls for a database table.
We can complete the same action with the help of only Jest spies.
Suppose we have a database called db with many tables in it, including a Student table, and we want to mock the create call for Student.
async function AddStudent(student: Student) {
  await db.Student.create(student) // the call we want to mock
}
Since the Jest spy method only works on objects, we first replace db.Student with an object whose create method is jest.fn() (a helper that produces a mock function without calling the real one).
describe('mocking database call', () => {
  it('mocking create function', async () => {
    db.Student = { create: jest.fn() }
    const tempStudent = { name: 'john', age: '12', Rollno: 12 }
    const mock = jest.spyOn(db.Student, 'create').mockResolvedValue('Student has been created successfully')
    await AddStudent(tempStudent)
    expect(mock).toHaveBeenCalledWith(tempStudent)
  })
})
Testing private methods
Sometimes, in development, we write private methods for classes that can only be used within the class itself. When writing test cases, we call functions through a class instance, and private functions aren't accessible that way, so we can't test them directly.
But core JavaScript has no concept of private and public methods; that distinction is introduced by TypeScript. So we can actually test a private function like a normal public one by placing a //@ts-ignore comment just above the call.
class Test {
  private private_fun() {
    console.log("i am in private function")
    return "i am in private function"
  }
}
describe('Testing test class', () => {
  it('testing private function', () => {
    const test = new Test()
    // calling the private method with a ts-ignore comment
    // @ts-ignore
    const res = test.private_fun() // output -> "i am in private function"
    expect(res).toEqual("i am in private function")
  })
})
P.S. One thing to note is that this trick only applies to TypeScript, since plain JavaScript has no private modifier to bypass.
The importance of test cases in software development
Makes code agile:
In software development, you may have to change the structure or design of your code to add new features. Changing already-tested code can be risky and costly. With unit tests in place, you only need to test the newly added code instead of the entire program.
Improves code quality:
A lot of bugs in software development occur due to unforeseen edge cases. If you forget to predict a single input, you may encounter a major bug in your application. When you write unit tests, think carefully about the edge cases of every function in your application.
Provides Documentation:
Unit tests give a basic idea of what the code does and which use cases the program covers. This makes documentation easier, increasing the readability and understandability of the code. Other developers can go through the unit tests at any time, understand the program better, and work on it quickly and easily.
Easy Debugging:
Unit testing has made debugging a lot easier and quicker. If the test fails at any stage, you only need to debug the latest changes made in the code instead of the entire program. We have also mentioned how unit testing makes debugging easier at the next stage of integration testing as well.
Conclusion
So, if you made it to the end, you must have some understanding of the importance of test cases in your code.
We’ve covered how to choose a framework and how to write your first test case in Jest. You should now feel more confident about shipping bug-free, robust, clean, documented, and tested code in your next MR/PR.
In the ever-evolving world of Android app development, seamless integration of compelling animations is key to a polished user experience. MotionLayout, a robust tool in the Android toolkit, has an effortless and elegant ability to embed animations directly into the UI. Join us as we navigate through its features and master the skill of effortlessly designing stunning visuals.
1. Introduction to MotionLayout
MotionLayout transcends conventional layouts, standing as a specialized tool to seamlessly synchronize a myriad of animations with screen updates in your Android application.
1.1 Advantages of MotionLayout
Animation Separation:
MotionLayout distinguishes itself with the ability to compartmentalize animation logic into a separate XML file. This not only optimizes Java or Kotlin code but also enhances its overall manageability.
No Dependence on Manager or Controller:
An exceptional feature of MotionLayout is its user-friendly approach, enabling developers to attach intricate animations to screen changes without requiring a dedicated animation manager or controller.
Backward Compatibility:
Of paramount importance, MotionLayout maintains backward compatibility, ensuring its applicability across Android systems starting from API level 14.
Android Studio Integration:
Empowering developers further is the seamless integration with Android Studio. The graphical tooling provided by the visual editor facilitates the design and fine-tuning of MotionLayout animations, offering an intuitive workflow.
Derivation from ConstraintLayout:
MotionLayout, being a subclass of ConstraintLayout, serves as an extension specifically designed to facilitate the implementation of complex motion and animation design within a ConstraintLayout.
1.2 Important Tags
As elucidated earlier, Animation XML is separated into the following important tags and attributes:
<MotionScene>: The topmost tag in XML, wrapping all subsequent tags.
<ConstraintSet>: Describes one screen state, with two sets required for animations between states. For example, if we desire an animation where the screen transitions from state A to state B, we necessitate the definition of two ConstraintSets.
<Transition>: Attaches to two ConstraintSets, triggering animation between them.
<ViewTransition>: Utilized for changes within a single ConstraintSet.
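Putting these tags together, a minimal scene file might look like the sketch below (the @id/textView ID, the constraint positions, and the 1000 ms duration are illustrative assumptions):

```xml
<MotionScene xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:motion="http://schemas.android.com/apk/res-auto">

    <!-- Screen state A -->
    <ConstraintSet android:id="@+id/start">
        <Constraint
            android:id="@id/textView"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            motion:layout_constraintStart_toStartOf="parent"
            motion:layout_constraintTop_toTopOf="parent" />
    </ConstraintSet>

    <!-- Screen state B -->
    <ConstraintSet android:id="@+id/end">
        <Constraint
            android:id="@id/textView"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            motion:layout_constraintEnd_toEndOf="parent"
            motion:layout_constraintBottom_toBottomOf="parent" />
    </ConstraintSet>

    <!-- Animate between the two states -->
    <Transition
        motion:constraintSetStart="@id/start"
        motion:constraintSetEnd="@id/end"
        motion:duration="1000" />
</MotionScene>
```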
1.3 Why It’s Better Than Its Alternatives
It’s important to note that MotionLayout is not the sole solution for every animation scenario. Just as a sword cannot replace a needle, MotionLayout is the better choice when planning complex animations; for example, it can replace animations built with threads and runnables. Apart from MotionLayout, several common alternatives for creating animations include:
Animated Vector Drawable
Property animation frameworks
LayoutTransition animation
Layout Transitions with TransitionManager
CoordinatorLayout
Each alternative has unique advantages and disadvantages compared to MotionLayout. For smaller animations like icon changes, Animated Vector Drawable might be preferred. The choice between alternatives depends on the specific requirements of the animation task at hand.
MotionLayout is a comprehensive solution, bridging the gap between layout transitions and complex motion handling. It seamlessly integrates features from the property animation framework, TransitionManager, and CoordinatorLayout. Developers can describe transitions between layouts, animate any property, handle touch interactions, and achieve a fully declarative implementation, all through the expressive power of XML.
2. Configuration
2.1 System setup
For optimal development and utilization of the Motion Editor, Android Studio is a prerequisite. Kindly follow this link for the Android Studio installation guide.
2.2 Project Implementation
Initiate a new Android project and opt for the “Empty View Activity” template.
Since MotionLayout is an extension of ConstraintLayout, it’s essential to include ConstraintLayout in the build.gradle file.
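A sketch of the dependency declaration (assuming the standard AndroidX artifact; the x.x.x placeholder is kept deliberately):

```groovy
dependencies {
    // MotionLayout ships as part of ConstraintLayout 2.0 and later
    implementation "androidx.constraintlayout:constraintlayout:x.x.x"
}
```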
Substitute “x.x.x” with the most recent version of ConstraintLayout.
Replace “ConstraintLayout” with “MotionLayout.” Opting for the right-click method is recommended, as it facilitates automatically creating the necessary animation XML.
Figure 1
When converting our existing layout to MotionLayout by right-clicking, a new XML file named “activity_main_scene.xml” is generated in the XML directory. This file is dedicated to storing animation details for MotionLayout.
Execute the following steps:
Click on the “start” ConstraintSet.
Move the Text View by dragging it to a desired position on your screen.
Click on the “end” ConstraintSet.
Move the Text View to another position on your screen.
Click on the arrow above “start” and “end” ConstraintSet.
Click on the “+” symbol in the “Attributes” tab.
Add the attribute “autoTransition” with the value “jumpToEnd.”
Click the play button on the “Transition” tab.
Preview the animation in real time by running the application. The animation will initiate when called from the associated Java class.
Note: You can also manually edit the activity_main_scene.xml file to make these changes.
3. Sample Project and Result
Until now, we’ve navigated through the complexities of MotionLayout and laid the groundwork for an Android project. Now, let’s transition from theory to practical application by crafting a sample project. In this endeavor, we’ll keep the animation simple and accessible for a clearer understanding.
3.1 Adding Dependencies
Include the following lines of code in your build.gradle file (Module: app), and then click on “Sync Now” to ensure synchronization with the project:
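Assuming the sample project only needs ConstraintLayout (which bundles MotionLayout), the dependency block would look roughly like this; substitute x.x.x with the latest release:

```groovy
dependencies {
    // Replace x.x.x with the latest ConstraintLayout release
    implementation "androidx.constraintlayout:constraintlayout:x.x.x"
}
```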
For a thorough understanding of the implementation specifics and complete access to the source code, please refer to this repository.
4. Assignment
Expanding the animation’s complexity becomes seamless by incorporating additional elements with meticulous handling. Here’s an assignment for you: endeavor to create the specified output below.
4.1 Assignment 1
4.2 Assignment 2
5. Conclusion
In conclusion, this guide has explored the essentials of using MotionLayout in Android development, highlighting its superiority over other animation methods. While we’ve touched on its basic capabilities here, future installments will explore more advanced features and uses. We hope this piece has ignited your interest in MotionLayout’s potential to enhance your Android apps.
Thank you for dedicating your time to this informative read!
In today’s digital age, the user experience is paramount. Mobile applications need to be intuitive and user-friendly so that users not only enjoy the app’s main functionalities but also easily navigate through its features. There are instances where a little extra guidance can go a long way, whether it’s introducing users to a fresh feature, showing them shortcuts to complete tasks more efficiently, or simply offering tips on getting the most out of an app. Many developers have traditionally crafted custom overlays or tooltips to bridge this gap, often requiring a considerable amount of effort. But the wait for a streamlined solution is over. After much anticipation, Apple has introduced the TipKit framework, a dedicated tool to simplify this endeavor, enhancing user experience with finesse.
TipKit
Introduced at WWDC 2023, TipKit emerges as a beacon for app developers aiming to enhance user engagement and experience. This framework is ingeniously crafted to present mini-tutorials, shining a spotlight on new, intriguing, or yet-to-be-discovered features within an application. Its utility isn’t just confined to a single platform—TipKit boasts integration with iCloud to ensure data synchronization across various devices.
At the heart of TipKit lies its two cornerstone components: the Tip Protocol and the TipView. These components serve as the foundation, enabling developers to craft intuitive and informative tips that resonate with their user base.
Tip Protocol
The essence of TipKit lies in its Tip Protocol, which acts as the blueprint for crafting and configuring content-driven tips. To create your tips tailored to your application’s needs, it’s imperative to conform to the Tip Protocol.
While every Tip demands a title for identification, the protocol offers flexibility by introducing a suite of properties that can be optionally integrated, allowing developers to craft a comprehensive and informative tip.
title(Text): The title of the Tip.
message(Text): A concise description further elaborates the essence of the Tip, providing users with a deeper understanding.
image(Image): An image to display on the left side of the Tip view.
id(String): A unique identifier to your tip. Default will be the name of the type that conforms to the Tip protocol.
rules(Array of type Tips.Rule): This can be used to add rules to the Tip that can determine when the Tip needs to be displayed.
options(Array of type Tips.Option): Allows adding options that define the behavior of the Tip.
actions(Array of type Tips.Action): This will provide primary and secondary buttons in the TipView that could help the user learn more about the Tip or execute a custom action when the user interacts with it.
Creating a Custom Tip
Let’s create our first Tip. Here, we are going to show a Tip to help the user understand the functionality of the cart button.
struct CartItemsTip: Tip {
    var title: Text {
        Text("Click the cart button to see what's in your cart")
    }
    var message: Text? {
        Text("You can edit/remove the items from your cart")
    }
    var image: Image? {
        Image(systemName: "cart")
    }
}
TipView
As the name suggests, TipView is a user interface element that represents an inline tip. The TipView initializer requires an instance of a type conforming to the Tip protocol discussed above, plus an optional Edge parameter that decides which edge of the tip view displays the arrow.
Displaying a Tip
Following are the two ways the Tip can be displayed.
Inline
You can display the tip inline along with other views. A TipView object requires a type conforming to the Tip protocol and is used to display the inline tip. Handling multiple views on the screen can be complex and time-consuming; the TipKit framework makes this easy by automatically adjusting the layout and position of the TipView to ensure other views remain accessible to the user.
struct ProductList: View {
    private let cartTip = CartItemsTip()

    var body: some View {
        // Other views
        TipView(cartTip)
        // Other views
    }
}
Popover
The TipKit framework also lets you show a popover tip for any UI element, e.g., a Button or Image. The popover tip appears over the entire screen, blocking the other views from user interaction until the tip is dismissed. The popoverTip modifier displays a popover tip for any UI element. Consider the example below, where a popover tip is used with a cart image.
private let cartTip = CartItemsTip()

// Dismiss the tip once the user performs the action it teaches
cartTip.invalidate(reason: .actionPerformed)
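The popover attachment itself can be sketched as follows, reusing the CartItemsTip defined earlier (the CartButton wrapper view is an illustrative assumption):

```swift
import SwiftUI
import TipKit

// Sketch: attaching a popover tip to a cart image.
struct CartButton: View {
    private let cartTip = CartItemsTip()

    var body: some View {
        Image(systemName: "cart")
            .popoverTip(cartTip) // the tip appears as a popover anchored to this view
            .onTapGesture {
                // Invalidate the tip once the user performs the action it teaches
                cartTip.invalidate(reason: .actionPerformed)
            }
    }
}
```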
Tips Center
We have discussed how to define and display a tip using the Tip protocol and TipView, respectively. Still, there is one last and most important step: configuring and loading the tips using the configure method, as described in the example below. This is mandatory to display tips within your application; otherwise, you will not see any tips.
import SwiftUI
import TipKit

@main
struct TipkitDemoApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                .task {
                    try? Tips.configure([
                        .displayFrequency(.immediate),
                        .datastoreLocation(.applicationDefault)
                    ])
                }
        }
    }
}
If you look at the definition of the configure method, you'll notice that it accepts a list of configuration options. Two options are available: DisplayFrequency and DataStoreLocation.
You can set these values as per your requirement.
DisplayFrequency
DisplayFrequency allows you to control how often your tips appear and has multiple options.
Use the immediate option when you do not want to set any restrictions.
Use the hourly, daily, weekly, and monthly values to display no more than one tip per hour, day, week, or month, respectively.
When none of the built-in options serves the purpose, you can set a custom display frequency as a TimeInterval. In the example below, we set a custom display frequency that restricts tips to being displayed once every two days.
let customDisplayFrequency: TimeInterval = 2 * 24 * 60 * 60
try? Tips.configure([
    .displayFrequency(customDisplayFrequency),
    .datastoreLocation(.applicationDefault)
])
DatastoreLocation
This will be used for persisting tips and associated data.
You can use the following initializers to decide how to persist tips and data.
public init(url: URL, shouldReset: Bool = false)
url: A specific URL location where you want to persist the data.
shouldReset: If set to true, it erases all data from the datastore, resetting all tips present in the application.
public init(_ location: DatastoreLocation, shouldReset: Bool = false)
location: A predefined datastore location. Setting a default value ‘applicationDefault’ would persist the datastore in the app’s support directory.
groupIdentifier: The name of the app group whose shared directory is used by your team's applications.
directoryName: An optional directory name specifying a directory within that group.
Max Display Count
As discussed earlier, we can set options to define tip behavior. One such option is MaxDisplayCount. Consider that you want to show CartItemsTip whenever the user is on the Home screen. Showing the tip every time a user comes to the Home screen can be annoying or frustrating. To prevent this, one of the solutions, perhaps the easiest, is using MaxDisplayCount. The other solution could be defining a Rule that determines when the tip needs to be displayed. Below is an example showcasing the use of the MaxDisplayCount option for defining CartItemsTip.
struct CartItemsTip: Tip {
    var title: Text {
        Text("Click here to see what's in your cart")
    }
    var message: Text? {
        Text("You can edit/remove the items from your cart")
    }
    var image: Image? {
        Image(systemName: "cart")
    }
    var options: [TipOption] {
        [MaxDisplayCount(2)]
    }
}
Rule Based Tips
Let’s understand how Rules can help you gain more control over displaying your tips. There are two types of Rules: parameter-based rules and event-based rules.
Parameter Rules
These are persistent and more useful for State and Boolean comparisons. There are Macros (#Rule, @Parameter) available to define a rule.
In the example below, we define a rule that checks whether the value stored in the static itemsInCart property is greater than or equal to 3.
Defining rules ensures tips are displayed only when all the conditions are satisfied.
struct CartTip: Tip {
    var title: Text {
        Text("Proceed with buying cart items.")
    }
    var message: Text? {
        Text("There are 3 or more items in your cart.")
    }
    var image: Image? {
        Image(systemName: "cart")
    }

    @Parameter
    static var itemsInCart: Int = 0

    var rules: [Rule] {
        #Rule(Self.$itemsInCart) { $0 >= 3 }
    }
}
Event Rules
Event-based rules are useful when we want to track occurrences of certain actions in the app. Each event has a unique identifier id of type String, which lets us differentiate between events. Whenever the action occurs, we use the donate() method to increment the counter.
Consider an example where we want to show a tip once the user has viewed the iPhone 14 Pro (256 GB) - Purple product more than 2 times. A didViewProductDetail event is donated each time ProductDetailsView appears, and the example below creates a display rule for ProductDetailsTip based on that event.
struct ProductDetailsTip: Tip {
    var title: Text {
        Text("Add iPhone 14 Pro (256 GB) - Purple to your cart")
    }
    var message: Text? {
        Text("You can edit/remove the items from your cart")
    }
    var image: Image? {
        Image(systemName: "cart")
    }

    var rules: [Rule] {
        // The tip will only display when the didViewProductDetail event for product name
        // 'iPhone 14 Pro (256 GB) - Purple' has been donated 3 or more times in a day.
        #Rule(ProductDetailsView.didViewProductDetail) {
            $0.donations.donatedWithin(.day).filter {
                $0.productName == "iPhone 14 Pro (256 GB) - Purple"
            }.count >= 3
        }
    }

    var actions: [Action] {
        [
            Tip.Action(id: "add-product-to-cart", title: "Add to cart", perform: {
                print("Product added into the cart")
            })
        ]
    }
}
Customization for Tip
Customization is a key feature, since every app carries its own theme throughout the application, and customizing tips to align with that theme surely enhances the user experience. As of now, the TipKit framework does not offer much customization, but we expect it to improve in the future. Below are the available methods for customizing tips.
public func tipAssetSize(_ size: CGSize) -> some View
public func tipCornerRadius(_ cornerRadius: Double, antialiased: Bool = true) -> some View
public func tipBackground(_ style: some ShapeStyle) -> some View
Testing
Testing tips is very important, as a small issue in the implementation of this framework can ruin your app’s user experience. We can construct UI test cases for various scenarios, and the following methods can be helpful for testing tips.
showAllTips
hideAllTips
showTips([<instance-of-your-tip>])
hideTips([<instance-of-your-tip>])
Pros
Compatibility: TipKit is compatible across all Apple platforms, including iOS, macOS, watchOS, and visionOS.
Supports both SwiftUI and UIKit
Easy implementation and testing
Avoids dependency on third-party libraries
Cons
Availability: Only available from iOS 17.0, iPadOS 17.0, macOS 14.0, Mac Catalyst 17.0, tvOS 17.0, watchOS 10.0 and visionOS 1.0 Beta. So no backwards compatibility as of now.
It might frustrate the user if the application implements this framework incorrectly
Conclusion
The TipKit framework is a great way to introduce new features in our application to the user. It is easy to implement, and it enhances the user experience. Having said that, we should avoid extensive use of it as it may frustrate the user. We should always avoid displaying promotional and error messages in the form of tips.
Developing iOS applications that deliver a smooth user experience requires more than just clean code and engaging features. Efficient memory management helps ensure that your app performs well and avoids common pitfalls like crashes and excessive battery drain.
In this blog, we’ll explore how to optimize memory usage in your iOS app using Xcode’s powerful Instruments and other memory management tools.
Memory Management and Usage
Before we delve into the other aspects of memory optimization, it’s important to understand why it’s so essential:
Memory management in iOS refers to the process of allocating and deallocating memory for objects in an iOS application to ensure efficient and reliable operation. Proper memory management prevents issues like memory leaks, crashes, and excessive memory usage, which can degrade an app’s performance and user experience.
Memory management in iOS primarily involves the use of Automatic Reference Counting (ARC) and understanding how to manage memory effectively.
Here are some key concepts and techniques related to memory management in iOS:
Automatic Reference Counting (ARC): ARC is a memory management technique introduced by Apple to automate memory management in Objective-C and Swift. With ARC, the compiler automatically inserts retain, release, and autorelease calls, ensuring that memory is allocated and deallocated as needed. Developers don’t need to manually manage memory by calling “retain,” “release,” or “autorelease” methods as they did in the pre-ARC era of manual memory management.
Strong and Weak References: In ARC, objects have strong, weak, and unowned references. A strong reference keeps an object in memory as long as at least one strong reference to it exists. A weak reference, on the other hand, does not keep an object alive. It’s commonly used to avoid strong reference cycles (retain cycles) and potential memory leaks.
Retain Cycles: A retain cycle occurs when two or more objects hold strong references to each other, creating a situation where they cannot be deallocated, even if they are no longer needed. To prevent retain cycles, you can use weak references, unowned references, or break the cycle manually by setting references to “nil” when appropriate.
Avoiding Strong Reference Cycles: To avoid retain cycles, use weak references (and unowned references when appropriate) in situations where two objects reference each other. Also, consider using closure capture lists to prevent strong reference cycles when using closures.
Resource Management: Memory management also includes managing other resources like files, network connections, and graphics contexts. Ensure you release or close these resources when they are no longer needed.
Memory Profiling: The Memory Report in the Debug Navigator of Xcode is a tool used for monitoring and analyzing the memory usage of your iOS or macOS application during runtime. It provides valuable insights into how your app utilizes memory, helps identify memory-related issues, and allows you to optimize the application’s performance.
Also, use tools like Instruments to profile your app’s memory usage and identify memory leaks and excessive memory consumption.
Instruments: Your Ally for Memory Optimization
In Xcode, “Instruments” refer to a set of performance analysis and debugging tools integrated into the Xcode development environment. These instruments are used by developers to monitor and analyze the performance of their iOS, macOS, watchOS, and tvOS applications during development and testing. Instruments help developers identify and address performance bottlenecks, memory issues, and other problems in their code.
Some of the common instruments available in Xcode include:
Allocations: The Allocations instrument helps you track memory allocations and deallocations in your app. It’s useful for detecting memory leaks and excessive memory usage.
Leaks: The Leaks instrument finds memory leaks in your application. It can identify objects that are not properly deallocated.
Time Profiler: Time Profiler helps you measure and analyze the CPU usage of your application over time. It can identify which functions or methods are consuming the most CPU resources.
Custom Instruments: Xcode also allows you to create custom instruments tailored to your specific needs using the Instruments development framework.
To use these instruments, you can run your application with profiling enabled, and then choose the instrument that best suits your performance analysis goals.
Launching Instruments
Because Instruments is located inside Xcode’s app bundle, you won’t be able to find it in the Finder.
To launch Instruments on macOS, follow these steps:
Open Xcode: Instruments is bundled with Xcode, Apple’s integrated development environment for macOS, iOS, watchOS, and tvOS app development. If you don’t have Xcode installed, you can download it from the Mac App Store or Apple’s developer website.
Open Your Project: Launch Xcode and open the project for which you want to use Instruments. You can do this by selecting “File” > “Open” and then navigating to your project’s folder.
Choose Instruments: Once your project is open, go to the “Xcode” menu at the top-left corner of the screen. From the drop-down menu, select “Open Developer Tool” and choose “Instruments.”
Select a Template: Instruments will open, and you’ll see a window with a list of available performance templates on the left-hand side. These templates correspond to the different types of analysis you can perform. Choose the template that best matches the type of analysis you want to conduct. For example, you can select “Time Profiler” for CPU profiling or “Leaks” for memory analysis.
Configure Settings: Depending on the template you selected, you may need to configure some settings or choose the target process (your app) you want to profile. These settings can typically be adjusted in the template configuration area.
Start Recording: Click the red record button in the top-left corner of the Instruments window to start profiling your application. This will launch your app with the selected template and begin collecting performance data.
Analyze Data: Interact with your application as you normally would to trigger the performance scenarios you want to analyze. Instruments will record data related to CPU usage, memory usage, network activity, and other aspects of your app’s performance.
Stop Recording: When you’re done profiling your app, click the square “Stop” button in Instruments to stop recording data.
Analyze Results: After stopping the recording, Instruments will display a detailed analysis of your app’s performance. You can explore various graphs, timelines, and reports to identify and address performance issues.
Save or Share Results: You can save your Instruments session for future reference or share it with colleagues if needed.
Using the Allocations Instrument
The “Allocations” instrument helps you monitor memory allocation and deallocation. Here’s how to use it:
1. Start the Allocations Instrument: In Instruments, select “Allocations” as your instrument.
2. Profile Your App: Use your app as you normally would to trigger the scenarios you want to profile.
3. Examine the Memory Allocation Graph: The graph displays memory usage over time. Look for spikes or steady increases in memory usage.
4. Inspect Objects: The instrument provides a list of objects that have been allocated and deallocated. You can inspect these objects and their associated memory usage.
5. Call Tree and Source Code: To pinpoint memory issues, use the Call Tree to identify the functions or methods responsible for memory allocation. You can then inspect the associated source code in the Source View.
Detecting Memory Leaks with the Leaks Instrument
Retain Cycle
A retain cycle in Swift occurs when two or more objects hold strong references to each other in a way that prevents them from being deallocated, causing a memory leak. This situation is also known as a “strong reference cycle.” It’s essential to understand retain cycles because they can lead to increased memory usage and potential app crashes.
A common scenario for retain cycles is when two objects reference each other, both using strong references.
Here’s an example to illustrate a retain cycle:
class Person {
    var name: String
    var pet: Pet?

    init(name: String) {
        self.name = name
    }

    deinit {
        print("\(name) has been deallocated")
    }
}

class Pet {
    var name: String
    var owner: Person?

    init(name: String) {
        self.name = name
    }

    deinit {
        print("\(name) has been deallocated")
    }
}

var rohit: Person? = Person(name: "Rohit")
var jerry: Pet? = Pet(name: "Jerry")
rohit?.pet = jerry
jerry?.owner = rohit
rohit = nil
jerry = nil
In this example, we have two classes, Person and Pet, representing a person and their pet. Both classes have a property to store a reference to the other class (person.pet and pet.owner).
The “Leaks” instrument is designed to detect memory leaks in your app.
Here’s how to use it:
1. Launch Instruments in Xcode: First, open your project in Xcode.
2. Commence Profiling: To commence the profiling process, navigate to the “Product” menu and select “Profile.”
3. Select the Leaks Instrument: Within the Instruments interface, choose the “Leaks” instrument from the available options.
4. Trigger the Memory Leak Scenario: To trigger the scenario where memory is leaked, interact with your application. This interaction, such as creating a retain cycle, will induce the memory leak.
5. Identify Leaked Objects: The Leaks Instrument will automatically detect and pinpoint the leaked objects, offering information about their origins, including backtraces and the responsible callers.
6. Analyze Backtraces and Responsible Callers: To gain insights into the context in which the memory leak occurred, you can inspect the source code in the Source View provided by Instruments.
7. Address the Leaks: Armed with this information, you can proceed to fix the memory leaks by making the necessary adjustments in your code to ensure memory is released correctly, preventing future occurrences of memory leaks.
You should see the memory leaks reported in Instruments, similar to the screenshot below.
The issue in the above code is that both Person and Pet are holding strong references to each other. When you create a Person and a Pet and set their respective references, a retain cycle is established. Even when you set rohit and jerry to nil, the objects are not deallocated, and the deinit methods are not called. This is a memory leak caused by the retain cycle.
To break the retain cycle and prevent this memory leak, you can use weak or unowned references. In this case, you can make the owner property in Pet a weak reference because a pet should not own its owner:
class Pet {
    var name: String
    weak var owner: Person?

    init(name: String) {
        self.name = name
    }

    deinit {
        print("\(name) has been deallocated")
    }
}
By making owner a weak reference, the retain cycle is broken, and when you set rohit and jerry to nil, the objects will be deallocated, and the deinit methods will be called. This ensures proper memory management and avoids memory leaks.
Best Practices for Memory Optimization
In addition to using Instruments, consider the following best practices for memory optimization:
1. Release Memory Properly: Ensure that memory is released when objects are no longer needed.
2. Use Weak References: Use weak references when appropriate to prevent strong reference cycles.
3. Use Unowned References to Break Retain Cycles: An unowned reference does not increase an object’s reference count. Unlike a weak reference, it is non-optional and should only be used when the referenced object is guaranteed to outlive the reference.
4. Minimize Singletons and Global Variables: These can keep objects alive for the lifetime of the app. Use them judiciously.
Optimizing memory usage is an essential part of creating high-quality iOS apps.
Instruments, integrated into Xcode, is a versatile tool that provides insights into memory allocation, leaks, and CPU-intensive code. By mastering these tools and best practices, you can ensure your app is memory-efficient, stable, and provides a superior user experience. Happy profiling!
With Flutter, developers can leverage a single codebase to build applications for diverse platforms, including Android, iOS, Linux, macOS, Windows, Google Fuchsia, and the web. The Flutter team remains dedicated to empowering developers of all backgrounds, ensuring effortless creation and publication of applications using this powerful multi-platform UI toolkit. Flutter makes it easy to build standard applications. However, if your aim is to craft an extraordinary game with stunning graphics, captivating gameplay, fast loading times, and highly responsive interactions, Flame emerges as the perfect solution.
This blog will provide you with an in-depth understanding of Flame. Through the features provided by Flame, you will embark on a journey to master the art of building a Flutter game from the ground up. You will gain invaluable insights into seamlessly integrating animations, configuring immersive soundscapes, and efficiently managing diverse game assets.
1. Flame engine
Flame is a cutting-edge 2D modular game engine designed to provide a comprehensive suite of specialized solutions for game development. Leveraging the powerful architecture of Flutter, Flame significantly simplifies the coding process, empowering you to create remarkable projects with efficiency and precision.
1.1. Setup:
Run this command with Flutter:
$ flutter pub add flame
This will add a line like this to your package’s pubspec.yaml (and run an implicit flutter pub get):
dependencies:
  flame: ^1.8.1
Import it, and now, in your Dart code, you can use:
import 'package:flame/flame.dart';
1.2. Assets Structure:
Flame introduces a well-structured assets directory framework, enabling seamless utilization of these resources within your projects. To illustrate the concepts further, let’s delve into a practical example that showcases the application of the discussed principles:
When utilizing image and audio assets in Flame, you can simply specify the asset name without the need for the full path, given that you place the assets within the suggested directories as outlined below.
For better organization, you have the option to divide your audio folder into two distinct subfolders: music and sfx.
The music directory is intended for audio files used as background music, while the sfx directory is specifically designated for sound effects, encompassing shots, hits, splashes, menu sounds, and more.
To configure your project properly, it is crucial to list the above-mentioned directories under the assets section of your pubspec.yaml file.
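For example, assuming the assets/images directory plus the music and sfx audio subfolders described above, the pubspec.yaml entries would look something like this:

```yaml
flutter:
  assets:
    - assets/images/
    - assets/audio/music/
    - assets/audio/sfx/
```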
1.3. Support to other platforms:
As Flame is built upon the robust foundation of Flutter, its platform support is inherently reliant on Flutter’s compatibility with various platforms. Therefore, the range of platforms supported by Flame is contingent upon Flutter’s own platform support.
Presently, Flame offers extensive support for desktop platforms such as Windows, macOS, and Linux, in addition to mobile platforms, including Android and iOS. Furthermore, Flame also facilitates game development for the web. It is important to note that Flame primarily focuses on stable channel support, ensuring a reliable and robust experience. While Flame may not provide direct assistance for the dev, beta, and master channels, it is expected that Flame should function effectively in these environments as well.
1.3.1. Flutter web:
To optimize the performance of your web-based game developed with Flame, it is recommended to ensure that your game is utilizing the CanvasKit/Skia renderer. By leveraging the canvas element instead of separate DOM elements, this choice enhances web performance significantly. Therefore, incorporating the CanvasKit/Skia renderer within your Flame-powered game is instrumental in achieving optimal performance on the web platform.
To run your game using Skia, use the following command:
flutter run -d chrome --web-renderer canvaskit
To build the game for production, using Skia, use the following:
flutter build web --release --web-renderer canvaskit
2. Implementation
2.1 GameWidget:
To integrate a Game instance into the Flutter widget tree, the recommended approach is to utilize the GameWidget. This widget serves as the root of your game application, enabling seamless integration of your game. You can incorporate a Game instance into the widget tree by following the example provided below:
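As a minimal sketch of this wiring (the bare FlameGame instance here is just a placeholder for your own game class):

```dart
import 'package:flame/game.dart';
import 'package:flutter/widgets.dart';

void main() {
  // GameWidget is the root widget that hosts and runs the Game instance.
  final game = FlameGame();
  runApp(GameWidget(game: game));
}
```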
By adopting this approach, you can effectively add your Game instance to the Flutter widget tree, ensuring proper execution and integration of your game within the Flutter application structure.
2.2 FlameGame:
When developing games in Flutter, it is crucial to use a widget that can efficiently handle high refresh rates and rapid memory allocation and deallocation, and that provides enhanced functionality compared to the Stateless and Stateful widgets. Flame offers the FlameGame class, which excels at providing these capabilities.
By utilizing the FlameGame class, you can create games by adding components to it. This class automatically calls the update and render methods of all the components added to it. Components can be directly added to the FlameGame through the constructor using the named children argument, or they can be added from anywhere else using the add or addAll methods.
To incorporate the FlameGame into the widget tree, you need to pass its object to the GameWidget. Refer to the example below for clarification:
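A minimal sketch of this (the game class name and the component it adds are illustrative):

```dart
import 'package:flame/components.dart';
import 'package:flame/game.dart';
import 'package:flutter/widgets.dart';

// A FlameGame subclass; components added to it have their update and
// render methods called automatically on every frame.
class MyGame extends FlameGame {
  @override
  Future<void> onLoad() async {
    add(TextComponent(text: 'Hello, Flame!'));
  }
}

void main() {
  // Pass the FlameGame instance to GameWidget to mount it in the widget tree.
  runApp(GameWidget(game: MyGame()));
}
```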
2.3 Component:
This is the last piece of the puzzle: the smallest individual units that make up the game. A component is like a widget, but within the game. All components can have other components as children, and all components inherit from the abstract class Component. These components serve as the fundamental entities responsible for rendering and interactivity within the game, and their hierarchical organization allows for flexible and modular construction of complex game systems in Flame. Each component has its own lifecycle.
Component Lifecycle:
Figure 01
2.3.1. onLoad:
The onLoad method serves as a crucial component within the game’s lifecycle, allowing for the execution of asynchronous operations such as image loading. Positioned between the onGameResize and onMount callbacks, this method is strategically placed to ensure the necessary assets are loaded and prepared. In Figure 01 of the component lifecycle, onLoad is set as the initial method due to its one-time execution. It is within this method that all essential assets, including images, audio files, and tmx files, should be loaded. This ensures that these assets are readily available for utilization throughout the game’s progression.
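As a sketch of this pattern (the component and asset name here are illustrative, and the image is assumed to live under the assets/images directory):

```dart
import 'package:flame/components.dart';

// Hypothetical component: 'player.png' must exist under assets/images/.
class PlayerComponent extends SpriteComponent {
  @override
  Future<void> onLoad() async {
    // onLoad runs once, between onGameResize and onMount, making it
    // the right place for asynchronous asset loading.
    sprite = await Sprite.load('player.png');
    size = Vector2(64, 64);
  }
}
```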
2.3.2. onGameResize:
Invoked when new components are added to the component tree or when the screen undergoes resizing, the onGameResize method plays a vital role in handling these events. It is executed before the onMount callback, allowing for necessary adjustments to be made in response to changes in component structure or screen dimensions.
2.3.3. onParentResize:
This method is triggered when the parent component undergoes a change in size or whenever the current component is mounted within the component tree. By leveraging the onParentResize callback, developers can implement logic that responds to parent-level resizing events and ensures the proper rendering and positioning of the component.
2.3.4. onMount:
As the name suggests, the onMount method is executed each time a component is mounted into the game tree. This critical method offers an opportunity to initialize the component and perform any necessary setup tasks before it becomes an active part of the game.
2.3.5. onRemove:
The onRemove method facilitates the execution of code just before a component is removed from the game tree. Regardless of whether the component is removed using the parent’s remove method or the component’s own remove method, this method ensures that the necessary cleanup actions take place in a single execution.
2.3.6. onChildrenChanged:
The onChildrenChanged method is triggered whenever a change occurs in a child component. Whether a child is added or removed, this method provides an opportunity to handle the updates and react accordingly, ensuring the parent component remains synchronized with any changes in its children.
2.3.7. Render & Update Loop:
The Render method is responsible for generating the user interface, utilizing the available data to create the game screen. It provides developers with canvas objects, allowing them to draw the game’s visual elements. On the other hand, the Update method is responsible for modifying and updating this rendered UI. Changes such as resizing, repositioning, or altering the appearance of components are managed through the Update method. In essence, any changes observed in the size or position of a component can be attributed to the Update method, which ensures the dynamic nature of the game’s user interface.
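A minimal sketch of this division of responsibilities (the component name, speed, and color are illustrative):

```dart
import 'dart:ui';

import 'package:flame/components.dart';

// Hypothetical component showing the split between the two methods.
class MovingBox extends PositionComponent {
  @override
  void update(double dt) {
    // update() changes state only; dt is the time elapsed since the
    // last frame, keeping the motion frame-rate independent.
    position.x += 50 * dt;
  }

  @override
  void render(Canvas canvas) {
    // render() only draws the current state to the provided canvas.
    canvas.drawRect(
      Rect.fromLTWH(0, 0, size.x, size.y),
      Paint()..color = const Color(0xFFE91E63),
    );
  }
}
```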
3. Sample Project
To showcase the practical implementation of key classes like GameWidget, FlameGame, and essential Components within the Flame game engine, we will embark on the creation of a captivating action game. By engaging in this hands-on exercise, you will gain valuable insights and hands-on experience in utilizing Flame’s core functionalities and developing compelling games. Through this guided journey, you will unlock the knowledge and skills necessary to create engaging and immersive gaming experiences, while harnessing the power of Flame’s robust framework.
Let’s start with:
3.1. Packages & assets:
3.1.1. Create a project using the following command:
flutter create flutter_game_poc
3.1.2. Add these under dependencies of pubspec.yaml (and run command flutter pub get):
flame: ^1.8.0
3.1.3. As mentioned earlier in the Assets Structure section, create a directory called assets in your project and include an images subdirectory within it. Download the assets from here and add both of them to this images directory.
Figure 02
Figure 03
In our game, we’ll use “Figure 02” as the background image and “Figure 03” as the avatar character who will be walking. If you have separate images for the avatar’s different walking frames, you can utilize a sprite generator tool to create a sprite sheet from those individual images.
A sprite generator helps combine multiple separate images into a single sprite sheet, which enables efficient rendering and animation of the character in the game. You can find various sprite generator tools available online that can assist in generating a sprite sheet from your separate avatar images.
By using a sprite sheet, you can easily manage and animate the character’s walking motion within the game, providing a smooth and visually appealing experience for the players.
After uploading, your asset structure will look like this:
Figure 04
3.1.4. To use these assets, we have to register them in pubspec.yaml under the assets section:
assets:
  - assets/images/
3.2. Supporting code:
3.2.1. Create 3 directories constants, overlays, and components inside the lib directory.
3.2.2. First, we will start with a constants directory where we have to create 4 files as follows:
3.2.3. In addition to the assets directory, we will create an overlays directory to include elements that need to be constantly visible to the user during the game. These elements typically include information such as the score, health, or action buttons.
For our game, we will incorporate five control buttons that allow us to direct the gaming avatar’s movements. These buttons will remain visible on the screen at all times, facilitating player interaction and guiding the avatar’s actions within the game environment.
Organizing these overlay elements in a separate directory makes it easier to manage and update the user interface components that provide vital information and interaction options to the player while the game is in progress.
3.2.4. To effectively manage and control the position of all overlay widgets within our game, let’s create a dedicated controller. This controller will serve as a centralized entity responsible for orchestrating the placement and behavior of these overlay elements. Create a file named overlay_controller.dart.
All the files in the overlays directory are common widgets that extend StatelessWidget.
3.2.5. In our game, all control buttons share a common design, featuring distinct icons and functionalities. To streamline the development process and maintain a consistent user interface, we will create a versatile widget called DirectionButton. This custom widget will handle the uniform UI design for all control buttons.
Inside the overlays directory, create a directory called widgets and add a file called direction_button.dart in that directory. This file defines the shape and color of all control buttons.
Moving forward, we will leverage the code we have previously implemented, building upon the foundations we have laid thus far:
3.3.1. The first step is to create a component. As discussed earlier, all the individual elements in the game are considered components, so let’s create one component that will be our gaming avatar. For the UI of this avatar, we are going to use the asset shown in Figure 03.
For the avatar, we will use SpriteAnimationComponent, as we want this component to animate automatically.
In the components directory, create a file called avatar_component.dart. This file will hold the logic of when and how our game avatar will move.
In the onLoad() method, we are loading the asset and using it to create animations, and in the update() method, we are using an enum to decide the walking animation.
class AvatarComponent extends SpriteAnimationComponent with HasGameRef {
  final WalkingGame walkingGame;

  AvatarComponent({required this.walkingGame}) {
    add(RectangleHitbox());
  }

  late SpriteAnimation _downAnimation;
  late SpriteAnimation _leftAnimation;
  late SpriteAnimation _rightAnimation;
  late SpriteAnimation _upAnimation;
  late SpriteAnimation _idleAnimation;
  final double _animationSpeed = .1;

  @override
  Future<void> onLoad() async {
    await super.onLoad();
    final spriteSheet = SpriteSheet(
      image: await gameRef.images.load(AssetConstants.avatarImage),
      srcSize: Vector2(2284 / 12, 1270 / 4),
    );
    _downAnimation = spriteSheet.createAnimation(row: 0, stepTime: _animationSpeed, to: 11);
    _leftAnimation = spriteSheet.createAnimation(row: 1, stepTime: _animationSpeed, to: 11);
    _upAnimation = spriteSheet.createAnimation(row: 3, stepTime: _animationSpeed, to: 11);
    _rightAnimation = spriteSheet.createAnimation(row: 2, stepTime: _animationSpeed, to: 11);
    _idleAnimation = spriteSheet.createAnimation(row: 0, stepTime: _animationSpeed, to: 1);
    animation = _idleAnimation;
  }

  @override
  void update(double dt) {
    switch (walkingGame.direction) {
      case WalkingDirection.idle:
        animation = _idleAnimation;
        break;
      case WalkingDirection.down:
        animation = _downAnimation;
        if (y < walkingGame.mapHeight - height) {
          y += dt * walkingGame.characterSpeed;
        }
        break;
      case WalkingDirection.left:
        animation = _leftAnimation;
        if (x > 0) {
          x -= dt * walkingGame.characterSpeed;
        }
        break;
      case WalkingDirection.up:
        animation = _upAnimation;
        if (y > 0) {
          y -= dt * walkingGame.characterSpeed;
        }
        break;
      case WalkingDirection.right:
        animation = _rightAnimation;
        if (x < walkingGame.mapWidth - width) {
          x += dt * walkingGame.characterSpeed;
        }
        break;
    }
    super.update(dt);
  }
}
3.3.2. Our avatar is ready to walk now, but there is no map or world for it to walk in. So, let’s create a game and add a background to it.
Create a file named walking_game.dart in the lib directory and add the following code.
class WalkingGame extends FlameGame with HasCollisionDetection {
  late double mapWidth = 2520;
  late double mapHeight = 1300;
  WalkingDirection direction = WalkingDirection.idle;
  final double characterSpeed = 80;
  final _world = World();

  // Avatar sprite
  late AvatarComponent _avatar;

  // Background image
  late SpriteComponent _background;
  final Vector2 _backgroundSize = Vector2(2520, 1300);

  // Camera component
  late final CameraComponent _cameraComponent;

  @override
  Future<void> onLoad() async {
    await super.onLoad();
    overlays.add(KeyConstants.overlayKey);

    _background = SpriteComponent(
      sprite: Sprite(
        await images.load(AssetConstants.backgroundImage),
        srcPosition: Vector2(0, 0),
        srcSize: _backgroundSize,
      ),
      position: Vector2(0, 0),
      size: Vector2(2520, 1300),
    );
    _world.add(_background);

    _avatar = AvatarComponent(walkingGame: this)
      ..position = Vector2(529, 128)
      ..debugMode = true
      ..size = Vector2(1145 / 24, 635 / 8);
    _world.add(_avatar);

    _cameraComponent = CameraComponent(world: _world)
      ..setBounds(Rectangle.fromLTRB(390, 200, mapWidth - 390, mapHeight - 200))
      ..viewfinder.anchor = Anchor.center
      ..follow(_avatar);

    addAll([_cameraComponent, _world]);
  }
}
First thing in onLoad(), you can see that we are adding an overlay using a key. You can learn more about this key in the main class.
Next is to create background components using SpriteComponent and add it to the world component. For creating the background component, we are using SpriteComponent instead of SpriteAnimationComponent because we do not need any background animation in our game.
Then we add AvatarComponent in the same world component where we added the background component. To keep the camera fixed on the AvatarComponent, we are using 1 extra component, which is called CameraComponent.
Lastly, we are adding both world & CameraComponents in our game by using addAll() method.
3.3.3. Finally, we have to create the main.dart file. In this example, we wrap the GameWidget in a MaterialApp because we want to use some features of Material themes, like icons, in this project. If you do not want that, you can pass the GameWidget to the runApp() method directly. Here we not only add the WalkingGame to the GameWidget but also add an overlay, which will show the control buttons. The key used for the overlay is the same key we added in the onLoad() method of the walking_game.dart file.
After all this, our game will look like this, and with these five control buttons, we can tell our avatar to move and/or stop.
4. Result
For your convenience, the complete code for the project can be found here. Feel free to refer to this code repository for a comprehensive overview of the implementation details and to access the entirety of the game’s source code.
5. Conclusion
Flame game engine alleviates the burden of crucial tasks such as asset loading, managing refresh rates, and efficient memory management. By taking care of these essential functionalities, Flame allows developers to concentrate on implementing the core functionality and creating an exceptional game application.
By leveraging Flame’s capabilities, you can maximize your productivity and create an amazing game application that resonates with players across various platforms, all while enjoying the benefits of a unified codebase.
In the realm of iOS app development, continuous integration and continuous deployment (CI/CD) have become indispensable to ensure efficient and seamless software development. Developers are constantly seeking the most effective CI/CD solutions to streamline their workflows and optimize the delivery of high-quality iOS applications. Two prominent contenders in this arena are GitHub CI/CD and Xcode Cloud. In this article, we will delve into the intricacies of these platforms, comparing their features, benefits, and limitations to help you make an informed decision for your iOS development projects.
GitHub CI/CD
GitHub CI/CD is an extension of the popular source code management platform, GitHub. It offers a versatile and flexible CI/CD workflow for iOS applications, enabling developers to automate the building, testing, and deployment processes. Here are some key aspects of GitHub CI/CD:
Workflow Configuration: GitHub CI/CD employs a YAML-based configuration file, allowing developers to define complex workflows. This provides granular control over the CI/CD pipeline, enabling the automation of multiple tasks such as building, testing, code analysis, and deployment.
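For illustration only, a minimal GitHub Actions workflow for an iOS project might look like this (the file path, scheme, and simulator name are placeholders you would adapt to your project):

```yaml
# .github/workflows/ios.yml
name: iOS CI
on: [push, pull_request]

jobs:
  build-and-test:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and test
        run: |
          xcodebuild test \
            -scheme MyApp \
            -destination 'platform=iOS Simulator,name=iPhone 15'
```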
Wide Range of Integrations: GitHub CI/CD seamlessly integrates with various third-party tools and services, such as Slack, Jira, and SonarCloud, enhancing collaboration and ensuring efficient communication among team members. This extensibility enables developers to incorporate their preferred tools seamlessly into the CI/CD pipeline.
Scalability and Customizability: GitHub CI/CD supports parallelism, allowing the execution of multiple jobs concurrently. This feature significantly reduces the overall build and test time, especially for large-scale projects. Additionally, developers can leverage custom scripts and actions to tailor the CI/CD pipeline according to their specific requirements.
Community Support: GitHub boasts a vast community of developers who actively contribute to the CI/CD ecosystem. This means that developers can access a wealth of resources, tutorials, and shared workflows, expediting the adoption of CI/CD best practices.
Xcode Cloud
Xcode Cloud is a cloud-based CI/CD solution designed specifically for iOS and macOS app development. Integrated into Apple’s Xcode IDE, Xcode Cloud provides an end-to-end development experience with seamless integration into the Apple ecosystem. Let’s explore the distinguishing features of Xcode Cloud:
Native Integration with Xcode: Xcode Cloud is tightly integrated with the Xcode IDE, offering a seamless development experience for iOS and macOS apps. This integration simplifies the setup and configuration process, enabling developers to trigger CI/CD workflows directly from Xcode easily.
Automated Testing and UI Testing: Xcode Cloud includes powerful testing capabilities, allowing developers to run automated tests, unit tests, and UI tests effortlessly. The platform provides a comprehensive test report with detailed insights, enabling developers to identify and resolve issues quickly.
Device Testing and Distribution: Xcode Cloud enables developers to leverage Apple’s extensive device testing infrastructure for concurrent testing across multiple simulators and physical devices. Moreover, it facilitates the distribution of beta builds for internal and external testing, making it easier to gather user feedback before the final release.
Seamless Code Signing and App Store Connect Integration: Xcode Cloud simplifies code signing, a critical aspect of iOS app development, by managing certificates and provisioning profiles automatically. It seamlessly integrates with App Store Connect, automating the app submission and release process.
Comparison
Now, let’s compare GitHub CI/CD and Xcode Cloud across several key dimensions:
Ecosystem and Integration
GitHub CI/CD: Offers extensive integrations with third-party tools and services, allowing developers to integrate with various services beyond the Apple ecosystem.
Xcode Cloud: Excels in its native integration with Xcode and the Apple ecosystem, providing a seamless experience for iOS and macOS developers. It leverages Apple’s testing infrastructure and simplifies code signing and distribution within the Apple ecosystem.
Flexibility and Customizability
GitHub CI/CD: Provides more flexibility and customizability through its YAML-based configuration files, enabling developers to define complex workflows and integrate various tools according to their specific requirements.
Xcode Cloud: Focuses on streamlining the development experience within Xcode, limiting customization options compared to GitHub CI/CD.
Scalability and Parallelism
GitHub CI/CD: Offers robust scalability with support for parallel job execution, making it suitable for large-scale projects that require efficient job execution in parallel.
Xcode Cloud: Scalability is limited to Apple’s device testing infrastructure, which may not provide the same level of scalability for non-Apple platforms or projects with extensive parallel job execution requirements.
Community and Resources
GitHub CI/CD: Benefits from a large and vibrant community, offering extensive resources, shared workflows, and active community support. Developers can leverage the knowledge and experience shared by the community.
Xcode Cloud: As a newer offering, Xcode Cloud is still building its community ecosystem. It may have a smaller community compared to GitHub CI/CD, resulting in fewer shared workflows and resources. However, developers can still rely on Apple’s developer forums and support channels for assistance.
Pricing
GitHub CI/CD: GitHub offers both free and paid plans. The pricing depends on the number of parallel jobs and additional features required. The paid plans provide more scalability and advanced features.
Xcode Cloud: Apple offers Xcode Cloud as part of its broader Apple Developer Program, which has an annual subscription fee. The specific pricing details for Xcode Cloud are available on Apple’s official website.
Performance
GitHub CI/CD: The performance of GitHub CI/CD depends on the underlying infrastructure and resources allocated to the CI/CD pipeline. It provides scalability and parallelism options for faster job execution.
Xcode Cloud: Xcode Cloud leverages Apple’s testing infrastructure, which is designed for iOS and macOS app development. It offers optimized performance and reliability for testing and distribution processes within the Apple ecosystem.
Conclusion
Choosing between GitHub CI/CD and Xcode Cloud for your iOS development projects depends on your specific needs and priorities. If you value native integration with Xcode and the Apple ecosystem, seamless code signing, and distribution, Xcode Cloud provides a comprehensive solution. On the other hand, if flexibility, customizability, and an extensive ecosystem of integrations are crucial, GitHub CI/CD offers a powerful CI/CD platform for iOS apps. Consider your project’s unique requirements and evaluate the features and limitations of each platform to make an informed decision that aligns with your development workflow and goals.
In the fast-paced and ever-changing world of software development, the task of designing applications that can smoothly operate on various platforms has become a significant hurdle. Developers frequently encounter a dilemma where they must decide between constructing distinct codebases for different platforms or opting for hybrid frameworks that come with certain trade-offs.
Kotlin Multiplatform (KMP) is an extension of the Kotlin programming language that simplifies cross-platform development by bridging the gap between platforms. This game-changing technology has emerged as a powerful solution for creating cross-platform applications.
Kotlin Multiplatform Mobile (KMM) is a subset of KMP that provides a specific framework and toolset for building cross-platform mobile applications using Kotlin. KMM is developed by JetBrains to simplify the process of building mobile apps that can run seamlessly on multiple platforms.
In this article, we will take a deep dive into Kotlin Multiplatform Mobile, exploring its features and benefits and how it enables developers to write shared code that runs natively on multiple platforms.
What is Kotlin Multiplatform Mobile (KMM)?
With KMM, developers can share code between Android and iOS platforms, eliminating the need for duplicating efforts and maintaining separate codebases. This significantly reduces development time and effort while improving code consistency and maintainability.
KMM offers support for a wide range of UI frameworks, libraries, and app architectures, providing developers with flexibility and options. It can seamlessly integrate with existing Android projects, allowing for the gradual adoption of cross-platform development. Additionally, KMM projects can be developed and tested using familiar build tools, making the transition to KMM as smooth as possible.
KMM vs. Other Platforms
Here’s a table comparing the KMM (Kotlin Multiplatform Mobile) framework with some other popular cross-platform mobile development platforms:
Sharing Code Across Multiple Platforms:
Advantages of Utilizing Kotlin Multiplatform (KMM) in Projects
Code sharing: Encourages code reuse and reduces duplication, leading to faster development.
Faster time-to-market: Accelerates mobile app development by reducing the amount of code that must be written for each platform.
Consistency: Ensures consistency across platforms for better user experience.
Collaboration between Android and iOS teams: Facilitates collaboration between Android and iOS development teams to improve efficiency.
Access to Native APIs: Allows developers to access platform-specific APIs and features.
Reduced maintenance overhead: Shared codebase makes maintenance easier and more efficient.
Existing Kotlin and Android ecosystem: Provides access to libraries, tools, and resources for developers.
Gradual adoption: Facilitates cross-platform development by sharing modules and components.
Performance and efficiency: Generates optimized code for each platform, resulting in efficient and performant applications.
Community and support: Benefits from active community, resources, tutorials, and support.
Limitations of Using KMM in Projects
Limited platform-specific APIs: Shared code cannot call platform-specific APIs directly; access requires expect/actual declarations or platform modules.
Platform-dependent setup and tooling: Platform-agnostic, but setup and tooling can be platform-dependent.
Limited interoperability with existing platform code: Interoperability between Kotlin Multiplatform and existing platform code can be challenging.
Development and debugging experience: Provides code sharing, but development and debugging experience differ.
Limited third-party library support: There aren’t many ready-to-use libraries available, so developers must implement from scratch or look for alternatives.
Setting Up Environment for Cross-Platform Development in Android Studio
Developing Kotlin Multiplatform Mobile (KMM) apps as an Android developer is relatively straightforward. You can use Android Studio, the same IDE that you use for Android app development.
To get started, we will need to install the KMM plugin through the IDE plugin manager, which is a simple step. The advantage of using Android Studio for KMM development is that we can create and run iOS apps from within the same IDE. This can help streamline the development process, making it easier to build and test apps across multiple platforms.
In order to enable the building and running of iOS apps through Android Studio, it’s necessary to have Xcode installed on your system. Xcode is an Integrated Development Environment (IDE) used for iOS programming.
To ensure that all dependencies are installed correctly for our Kotlin Multiplatform Mobile (KMM) project, we can use kdoctor. This tool can be installed via brew by running the following command in the command-line:
$ brew install kdoctor
Note: If you don’t have Homebrew yet, please install it.
This will confirm that all required dependencies are properly installed and configured for our KMM project.
kdoctor performs comprehensive checks and produces a detailed report. If every required tool is installed and configured correctly, the report confirms it; otherwise, it flags the specific issues to fix.
To resolve the warning mentioned above, create a ~/.zprofile file and export the locale variables:
$ touch ~/.zprofile
$ export LANG=en_US.UTF-8
$ export LC_ALL=en_US.UTF-8
After making the above necessary changes to our environment, we can run kdoctor again to verify that everything is set up correctly. Once kdoctor confirms that all dependencies are properly installed and configured, we are done.
Building Biometric Face & Fingerprint Authentication Application
Let’s explore Kotlin Multiplatform Mobile (KMM) by creating an application for face and fingerprint authentication. Here our aim is to leverage KMM’s potential by developing shared code for both Android and iOS platforms. This will promote code reuse and reduce redundancy, leading to optimized code for each platform.
Set Up an Android project
To initiate a new project, we will launch Android Studio, select the Kotlin Multiplatform App option from the New Project template, and click on “Next.”
We will add the fundamental application information, such as the name of the application and the project’s location, on the following screen.
Lastly, we choose the recommended dependency manager for the iOS app — the Regular framework — and click on “Next.”
For the iOS app, we can switch the dependency manager between the Regular framework and CocoaPods.
After clicking the “Finish” button, the KMM project is created successfully and ready to be utilized.
After finishing the Gradle sync process, we can execute both the iOS and Android apps by simply clicking the run button located in the toolbar.
In this illustration, we can observe the structure of a KMM project. The KMM project is organized into three directories: shared, androidApp, and iosApp.
androidApp: It contains Android app code and follows the typical structure of a standard Android application.
iosApp: It contains iOS application code, which can be opened in Xcode using the .xcodeproj file.
shared: It contains code and resources that are shared between the Android (androidApp) and iOS (iosApp) platforms. It allows developers to write platform-independent logic and components that can be reused across both platforms, reducing code duplication and improving development efficiency.
Launch the iOS app and establish a connection with the framework.
Before proceeding with iOS app development, ensure that both Xcode and CocoaPods are installed on your system.
Open the root project folder of the KMM application (KMM_Biometric_App) developed using Android Studio and navigate to the iosApp folder. Within the iosApp folder, locate the .xcodeproj file and double-click on it to open it.
After launching the iosApp in Xcode, the next step is to establish a connection between the framework and the iOS application. To do this, you will need to access the iOS project settings by double-clicking on the project name. Once you are in the project settings, navigate to the Build Phases tab and select the “+” button to add a new Run Script Phase.
Move the Run Script phase before the Compile Sources phase.
Navigate to the All build settings on the Build Settings tab and locate the Search Paths section. Within this section, specify the Framework Search Path:
In the Linking section of the Build Settings tab, specify the Other Linker flags:
$(inherited) -framework shared
Compile the project in Xcode. If all the settings are configured correctly, the project should build successfully.
Implement Biometric Authentication in the Android App
To enable Biometric Authentication, we will utilize the BiometricPrompt component available in the Jetpack Biometric library. This component simplifies the process of implementing biometric authentication, but it is only compatible with Android 6.0 (API level 23) and later versions. If we require support for earlier Android versions, we must explore alternative approaches.
To add the Biometric dependency for Android, we must include it in the androidMain source set of the build.gradle.kts file located in the shared folder. This step is specific to Android development.
// shared/build.gradle.kts
sourceSets {
    val androidMain by getting {
        dependencies {
            implementation("androidx.biometric:biometric-ktx:1.2.0-alpha05")
        }
    }
    // ...
}
Next, we will generate the FaceAuthenticator class within the commonMain folder, which will allow us to share the Biometric Authentication business logic between the Android and iOS platforms.
// shared/commonMain/FaceAuthenticator
expect class FaceAuthenticator {
    fun isDeviceHasBiometric(): Boolean
    fun authenticateWithFace(callback: (Boolean) -> Unit)
}
In shared code, the “expect” keyword signifies an expected behavior or interface. It indicates a declaration that is expected to be implemented differently on each platform. By using “expect,” you establish a contract or API that the platform-specific implementations must satisfy.
The “actual” keyword is utilized to provide the platform-specific implementation for the expected behavior or interface defined with the “expect” keyword. It represents the concrete implementation that varies across different platforms. By using “actual,” you supply the code that fulfills the contract established by the “expect” declaration.
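As a rough single-file analogy (the real mechanism spans source sets, so a plain interface stands in for `expect` here, and the class names below are hypothetical), the contract works like this:

```kotlin
// Hypothetical single-file analogy for the expect/actual contract.
// In a real KMM project, the "expect" declaration lives in commonMain,
// and one "actual" implementation lives in each platform source set.
interface BiometricContract {                    // plays the role of `expect`
    fun isDeviceHasBiometric(): Boolean
}

class AndroidBiometric : BiometricContract {     // plays the role of `actual` (androidMain)
    override fun isDeviceHasBiometric() = true   // pretend the device has a sensor
}

class IosBiometric : BiometricContract {         // plays the role of `actual` (iosMain)
    override fun isDeviceHasBiometric() = false  // pretend it does not
}

// Common code only sees the contract, never the platform classes.
fun describe(impl: BiometricContract): String =
    if (impl.isDeviceHasBiometric()) "biometrics available" else "biometrics unavailable"

fun main() {
    println(describe(AndroidBiometric()))  // biometrics available
    println(describe(IosBiometric()))      // biometrics unavailable
}
```

The key point is that common code compiles against the `expect` declaration alone; the compiler then binds each platform's `actual` implementation at build time.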
There are three different types of authenticators, defined at a level of granularity supported by BiometricManager and BiometricPrompt.
Multiple authenticators, such as BIOMETRIC_STRONG | DEVICE_CREDENTIAL | BIOMETRIC_WEAK, can be represented as a single integer by combining their types using bitwise OR.
BIOMETRIC_STRONG: Any biometric (e.g., fingerprint, iris, or face) on the device that meets or exceeds the requirements for Class 3 (formerly Strong), as defined by the Android CDD.
BIOMETRIC_WEAK: Any biometric (e.g., fingerprint, iris, or face) on the device that meets or exceeds the requirements for Class 2 (formerly Weak), as defined by the Android CDD.
DEVICE_CREDENTIAL: Authentication using a screen lock credential—the user’s PIN, pattern, or password.
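Because each authenticator type is a distinct bit pattern, combining them with `or` yields a single integer encoding all allowed types, and membership can be tested with a bitwise `and`. A small sketch (the constant values below are restated as plain integers mirroring the `BiometricManager.Authenticators` constants):

```kotlin
// Values mirror the androidx BiometricManager.Authenticators constants.
const val BIOMETRIC_STRONG = 0x000F
const val BIOMETRIC_WEAK = 0x00FF
const val DEVICE_CREDENTIAL = 0x8000

// True if `allowed` includes every bit of `type`.
fun allows(allowed: Int, type: Int): Boolean = allowed and type == type

fun main() {
    val allowed = BIOMETRIC_STRONG or DEVICE_CREDENTIAL
    println(allows(allowed, DEVICE_CREDENTIAL))  // true
    println(allows(allowed, BIOMETRIC_STRONG))   // true
    // BIOMETRIC_WEAK's full bit pattern is not covered by the combination:
    println(allows(allowed, BIOMETRIC_WEAK))     // false
}
```

Note that the Class 3 (Strong) bits are a subset of the Class 2 (Weak) bits, which is how a strong biometric also satisfies a weak requirement.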
Now let’s create the actual implementation of the FaceAuthenticator class in the androidMain folder of the shared module.
// shared/androidMain/FaceAuthenticator
actual class FaceAuthenticator(context: FragmentActivity) {
    actual fun isDeviceHasBiometric(): Boolean {
        TODO("code to check whether biometrics are available")
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        TODO("code to authenticate using biometrics")
    }
}
Add the following code to the isDeviceHasBiometric() function to determine whether the device supports biometric authentication or not.
actual class FaceAuthenticator(context: FragmentActivity) {
    private val activity: FragmentActivity = context

    @RequiresApi(Build.VERSION_CODES.R)
    actual fun isDeviceHasBiometric(): Boolean {
        val biometricManager = BiometricManager.from(activity)
        when (biometricManager.canAuthenticate(BIOMETRIC_STRONG or BIOMETRIC_WEAK)) {
            BiometricManager.BIOMETRIC_SUCCESS -> {
                Log.d("FaceAuthenticator", "App can authenticate using biometrics.")
                return true
            }
            BiometricManager.BIOMETRIC_ERROR_NO_HARDWARE -> {
                Log.e("FaceAuthenticator", "No biometric features available on this device.")
                return false
            }
            BiometricManager.BIOMETRIC_ERROR_HW_UNAVAILABLE -> {
                Log.e("FaceAuthenticator", "Biometric features are currently unavailable.")
                return false
            }
            BiometricManager.BIOMETRIC_ERROR_NONE_ENROLLED -> {
                Log.e("FaceAuthenticator", "Prompt the user to create credentials that the app accepts.")
                val enrollIntent = Intent(Settings.ACTION_BIOMETRIC_ENROLL).apply {
                    putExtra(
                        Settings.EXTRA_BIOMETRIC_AUTHENTICATORS_ALLOWED,
                        BIOMETRIC_STRONG or BIOMETRIC_WEAK
                    )
                }
                startActivityForResult(activity, enrollIntent, 100, null)
            }
            BiometricManager.BIOMETRIC_ERROR_SECURITY_UPDATE_REQUIRED -> {
                Log.e("FaceAuthenticator", "The sensor is unavailable until a security update has addressed a discovered vulnerability.")
            }
            BiometricManager.BIOMETRIC_ERROR_UNSUPPORTED -> {
                Log.e("FaceAuthenticator", "The user can't authenticate because the specified options are incompatible with the current Android version.")
            }
            BiometricManager.BIOMETRIC_STATUS_UNKNOWN -> {
                Log.e("FaceAuthenticator", "Unable to determine whether the user can authenticate.")
            }
        }
        return false
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // code to authenticate using biometrics
    }
}
In the provided code snippet, an instance of BiometricManager is created, and the canAuthenticate() method is invoked to determine whether the user can authenticate with an authenticator that satisfies the specified requirements. To accomplish this, you must pass the same bitwise combination of authenticator types into canAuthenticate() that you later supply to the setAllowedAuthenticators() method.
To perform biometric authentication, insert the following code into the authenticateWithFace() method.
actual class FaceAuthenticator(context: FragmentActivity) {
    private val activity: FragmentActivity = context

    @RequiresApi(Build.VERSION_CODES.R)
    actual fun isDeviceHasBiometric(): Boolean {
        // Unchanged from the previous snippet
        // ...
        return false
    }

    @RequiresApi(Build.VERSION_CODES.P)
    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // Create PromptInfo to describe the system dialog
        val promptInfo = BiometricPrompt.PromptInfo.Builder()
            .setTitle("Authentication using biometric")
            .setSubtitle("Authenticate using face/fingerprint")
            .setAllowedAuthenticators(BIOMETRIC_STRONG or BIOMETRIC_WEAK)
            .setNegativeButtonText("Cancel")
            .build()

        // Create a BiometricPrompt to receive the authentication callbacks
        val biometricPrompt = BiometricPrompt(activity, activity.mainExecutor,
            object : BiometricPrompt.AuthenticationCallback() {
                override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
                    super.onAuthenticationError(errorCode, errString)
                    Toast.makeText(activity, "Authentication error: $errString", Toast.LENGTH_SHORT).show()
                    callback(false)
                }

                override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                    super.onAuthenticationSucceeded(result)
                    Toast.makeText(activity, "Authentication succeeded!", Toast.LENGTH_SHORT).show()
                    callback(true)
                }

                override fun onAuthenticationFailed() {
                    super.onAuthenticationFailed()
                    Toast.makeText(activity, "Authentication failed", Toast.LENGTH_SHORT).show()
                    callback(false)
                }
            })

        // Authenticate using the biometric prompt
        biometricPrompt.authenticate(promptInfo)
    }
}
In the code above, the BiometricPrompt.PromptInfo.Builder gathers the arguments to be displayed on the biometric dialog provided by the system.
The setAllowedAuthenticators() function enables us to indicate the authenticators that are permitted for biometric authentication.
// Create PromptInfo to set the dialog details
val promptInfo = BiometricPrompt.PromptInfo.Builder()
    .setTitle("Authentication using biometric")
    .setSubtitle("Authenticate using face/fingerprint")
    .setAllowedAuthenticators(BIOMETRIC_STRONG or BIOMETRIC_WEAK)
    .setNegativeButtonText("Cancel")
    .build()
It is not possible to include both .setAllowedAuthenticators(BIOMETRIC_WEAK or DEVICE_CREDENTIAL) and .setNegativeButtonText("Cancel") in the same BiometricPrompt.PromptInfo.Builder instance: when DEVICE_CREDENTIAL is an allowed authenticator, the system itself offers the device credential as the fallback, so setting a negative button is not permitted.
However, it is possible — and in fact required — to combine .setAllowedAuthenticators(BIOMETRIC_WEAK or BIOMETRIC_STRONG) with .setNegativeButtonText("Cancel"): when only biometric authenticators are allowed, the negative button is the user's way to dismiss the prompt.
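The constraint can be restated as a tiny check. The helper below is hypothetical, not part of the Biometric library; it merely encodes the rule, with the DEVICE_CREDENTIAL value mirroring the library constant:

```kotlin
// Mirrors BiometricManager.Authenticators.DEVICE_CREDENTIAL.
const val DEVICE_CREDENTIAL = 0x8000

// Hypothetical helper: returns an error message if the prompt
// configuration is invalid, or null if it is acceptable.
fun validatePromptConfig(allowedAuthenticators: Int, negativeButtonText: String?): String? {
    val credentialAllowed = allowedAuthenticators and DEVICE_CREDENTIAL != 0
    return when {
        credentialAllowed && negativeButtonText != null ->
            "Negative button must not be set when DEVICE_CREDENTIAL is allowed"
        !credentialAllowed && negativeButtonText == null ->
            "Negative button is required when only biometric authenticators are allowed"
        else -> null
    }
}

fun main() {
    // DEVICE_CREDENTIAL plus a negative button: rejected.
    println(validatePromptConfig(0x00FF or DEVICE_CREDENTIAL, "Cancel"))
    // Biometric-only with a negative button: accepted (prints null).
    println(validatePromptConfig(0x000F or 0x00FF, "Cancel"))
}
```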
The BiometricPrompt object facilitates biometric authentication and provides an AuthenticationCallback to handle the outcomes of the authentication process, indicating whether it was successful or encountered a failure.
val biometricPrompt = BiometricPrompt(activity, activity.mainExecutor,
    object : BiometricPrompt.AuthenticationCallback() {
        override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
            super.onAuthenticationError(errorCode, errString)
            Toast.makeText(activity, "Authentication error: $errString", Toast.LENGTH_SHORT).show()
            callback(false)
        }

        override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
            super.onAuthenticationSucceeded(result)
            Toast.makeText(activity, "Authentication succeeded!", Toast.LENGTH_SHORT).show()
            callback(true)
        }

        override fun onAuthenticationFailed() {
            super.onAuthenticationFailed()
            Toast.makeText(activity, "Authentication failed", Toast.LENGTH_SHORT).show()
            callback(false)
        }
    })

// Authenticate using the biometric prompt
biometricPrompt.authenticate(promptInfo)
Now, we have completed the coding of the shared code for Android in the androidMain folder. To utilize this code, we can create a new file named LoginActivity.kt within the androidApp folder.
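The activity only needs to call the shared API and react in the callback. Here is a platform-free sketch of that flow, where FakeAuthenticator is a hypothetical stand-in for the shared FaceAuthenticator (the real class requires a FragmentActivity):

```kotlin
// Hypothetical stand-in for the shared FaceAuthenticator:
// same callback-based shape, no Android dependencies.
class FakeAuthenticator(private val succeeds: Boolean) {
    fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // A real implementation would show the system prompt here
        // and invoke the callback with the authentication result.
        callback(succeeds)
    }
}

// What a LoginActivity would do: call the shared API, react in the callback.
fun login(auth: FakeAuthenticator): String {
    var message = "pending"
    auth.authenticateWithFace { ok ->
        message = if (ok) "Welcome!" else "Authentication failed"
    }
    return message
}

fun main() {
    println(login(FakeAuthenticator(succeeds = true)))   // Welcome!
    println(login(FakeAuthenticator(succeeds = false)))  // Authentication failed
}
```

The same callback shape works unchanged from Swift on iOS, which is the point of putting it in commonMain.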
For authentication on iOS, Apple provides a dedicated framework: the Local Authentication framework.
The Local Authentication framework provides a way to integrate biometric authentication (such as Touch ID or Face ID) and device passcode authentication into your app. This framework allows you to enhance the security of your app by leveraging the biometric capabilities of the device or the device passcode.
Now, let’s create the actual implementation of the FaceAuthenticator class in the iosMain folder of the shared module.
// shared/iosMain/FaceAuthenticator
actual class FaceAuthenticator {
    actual fun isDeviceHasBiometric(): Boolean {
        TODO("code to check whether biometrics are available")
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        TODO("code to authenticate using biometrics")
    }
}
Add the following code to the isDeviceHasBiometric() function to determine whether the device supports biometric authentication or not.
actual class FaceAuthenticator {
    actual fun isDeviceHasBiometric(): Boolean {
        // Check whether biometric or passcode authentication is available
        val context = LAContext()
        val error = memScoped { allocPointerTo<ObjCObjectVar<NSError?>>() }
        return context.canEvaluatePolicy(LAPolicyDeviceOwnerAuthentication, error = error.value)
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        // code to authenticate using biometrics
    }
}
In the above code, LAContext class is part of the Local Authentication framework in iOS. It represents a context for evaluating authentication policies and handling biometric or passcode authentication.
LAPolicy represents different authentication policies that can be used with the LAContext class. The LAPolicy enum defines the following policies:
.deviceOwnerAuthenticationWithBiometrics
This policy allows the user to authenticate using biometric authentication, such as Touch ID or Face ID. If the device supports biometric authentication and the user has enrolled their biometric data, the authentication prompt will appear for biometric verification.
.deviceOwnerAuthentication
This policy allows the user to authenticate using either biometric authentication (if available) or the device passcode. If biometric authentication is supported and the user has enrolled their biometric data, the prompt will appear for biometric verification. Otherwise, the device passcode will be used for authentication.
We have used the LAPolicyDeviceOwnerAuthentication policy constant, which authenticates either by biometry or the device passcode.
We have used the canEvaluatePolicy(_:error:) method to check if the device supports biometric authentication and if the user has added any biometric information (e.g., Touch ID or Face ID).
To perform biometric authentication, insert the following code into the authenticateWithFace() method.
// shared/iosMain/FaceAuthenticator
actual class FaceAuthenticator {
    actual fun isDeviceHasBiometric(): Boolean {
        // Check whether biometric or passcode authentication is available
        val context = LAContext()
        val error = memScoped { allocPointerTo<ObjCObjectVar<NSError?>>() }
        return context.canEvaluatePolicy(LAPolicyDeviceOwnerAuthentication, error = error.value)
    }

    actual fun authenticateWithFace(callback: (Boolean) -> Unit) {
        val context = LAContext()
        val reason = "Authenticate using face"
        if (isDeviceHasBiometric()) {
            // Perform biometric (or passcode) authentication
            context.evaluatePolicy(
                LAPolicyDeviceOwnerAuthentication,
                localizedReason = reason
            ) { success: Boolean, nsError: NSError? ->
                callback(success)
                if (!success) {
                    print(nsError?.localizedDescription ?: "Failed to authenticate")
                }
            }
        } else {
            // No biometrics or passcode available on this device
            callback(false)
        }
    }
}
The primary purpose of LAContext is to evaluate authentication policies, such as biometric authentication or device passcode authentication. The main method for this is
evaluatePolicy(_:localizedReason:reply:):
This method triggers an authentication request, which is returned in the completion block. The localizedReason parameter is a message that explains why the authentication is required and is shown during the authentication process.
When using evaluatePolicy(_:localizedReason:reply:), we may have the option to fall back to device passcode authentication or cancel the authentication process. We can handle these scenarios by inspecting the LAError object passed in the error parameter of the completion block:
if let error = error as? LAError {
    switch error.code {
    case .userFallback:
        // User tapped the fallback button; present a passcode entry UI
        break
    case .userCancel:
        // User canceled the authentication
        break
    default:
        // Handle other error cases as needed
        break
    }
}
That concludes the coding of the shared code for iOS in the iosMain folder. We can utilize this by creating LoginView.swift in the iosApp folder.
This ends our implementation of biometric authentication using the KMM application that runs smoothly on both Android and iOS platforms. If you’re interested, you can find the code for this project on our GitHub repository. We would love to hear your thoughts and feedback on our implementation.
Conclusion
It is important to acknowledge that while KMM offers numerous advantages, it may not be suitable for every project. Applications with extensive platform-specific requirements or intricate UI components may still require platform-specific development. Nonetheless, KMM can still prove beneficial in such scenarios by facilitating the sharing of non-UI code and minimizing redundancy.
On the whole, Kotlin Multiplatform Mobile is an exciting framework that empowers developers to effortlessly create cross-platform applications. It provides an efficient and adaptable solution for building robust and high-performing mobile apps, streamlining development processes, and boosting productivity. With its expanding ecosystem and strong community support, KMM is poised to play a significant role in shaping the future of mobile app development.