Microsoft recently launched session replay support for iOS on both Native iOS and React Native applications. We decided to see how it performs compared to competitors like LogRocket and UXCam.
In this blog, we'll cover what session replay is, how it works, and how it helps with debugging applications and understanding user behavior. We'll then integrate Microsoft Clarity into a React Native application and benchmark its performance against those competitors.
Key Features of Session Replay
Session replay provides a visual playback of user interactions on your application. This allows developers to observe how users navigate the app, identify any issues they encounter, and understand user behavior patterns. Here are some of the standout features:
User Interaction Tracking: Record clicks, scrolls, and navigation paths for a comprehensive view of user activities.
Error Monitoring: Capture and analyze errors in real time to quickly diagnose and fix issues.
Heatmaps: Visualize areas of high interaction to understand which parts of the app are most engaging.
Anonymized Data: Ensure user privacy by anonymizing sensitive information during session recording.
Integrating Microsoft Clarity with React Native
Integrating Microsoft Clarity into your React Native application is a straightforward process. Follow these steps to get started:
Sign Up for Microsoft Clarity:
a. Visit the Microsoft Clarity website and sign up for a free account.
b. Create a new project and obtain your Clarity tracking code.
Install the Clarity SDK:
Use npm or yarn to install the Clarity SDK in your React Native project, then initialize it with your project ID. A hedged sketch follows:
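A minimal sketch, assuming the official Clarity React Native package; the exact package name and initialize() signature have changed across SDK versions, so check the package README:

npm install @microsoft/react-native-clarity

// App.tsx — hedged sketch: initialize Clarity once at app startup.
import * as Clarity from '@microsoft/react-native-clarity';

// '<your-project-id>' is a placeholder for the ID from your Clarity project settings.
Clarity.initialize('<your-project-id>');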
Verify the Integration:
a. Run your application and navigate through various screens to ensure Clarity is capturing session data correctly.
b. Log into your Clarity dashboard to see the recorded sessions and analytics.
Benchmarking Against Competitors
To evaluate the performance of Microsoft Clarity, we’ll compare it against two popular session replay tools, LogRocket and UXCam, assessing them based on the following criteria:
Ease of Integration: How simple is integrating the tool into a React Native application?
Feature Set: What features does each tool offer for session replay and user behavior analysis?
Performance Impact: How does the tool impact the app’s performance and user experience?
Cost: What are the pricing models and how do they compare?
Detailed Comparison
Ease of Integration
Microsoft Clarity: The integration process is straightforward and well-documented, making it easy for developers to get started.
LogRocket: LogRocket also offers a simple integration process with comprehensive documentation and support.
UXCam: UXCam provides detailed guides and support for integration, but it may require additional configuration steps compared to Clarity and LogRocket.
Feature Set
Microsoft Clarity: Offers robust session replay, heatmaps, and error monitoring. However, it may lack some advanced features found in premium tools.
LogRocket: Provides a rich set of features, including session replay, performance monitoring, network request logging, and integration with other tools like Redux and GraphQL.
UXCam: Focuses on mobile app analytics with features like session replay, screen flow analysis, and retention tracking.
Performance Impact
Microsoft Clarity: Minimal impact on app performance, making it a suitable choice for most applications.
LogRocket: Slightly heavier than Clarity but offers more advanced features. Performance impact is manageable with proper configuration.
UXCam: Designed for mobile apps with performance optimization in mind. The impact is generally low but can vary based on app complexity.
Cost
Microsoft Clarity: Free to use, making it an excellent option for startups and small teams.
LogRocket: Offers tiered pricing plans, with a free tier for basic usage and paid plans for advanced features.
UXCam: Provides a range of pricing options, including a free tier. Paid plans offer more advanced features and higher data limits.
Final Verdict
After evaluating the key aspects of session replay tools, Microsoft Clarity stands out as a strong contender, especially for teams looking for a cost-effective solution with essential features. LogRocket and UXCam offer more advanced capabilities, which may be beneficial for larger teams or more complex applications.
Ultimately, the right tool will depend on your specific needs and budget. For basic session replay and user behavior insights, Microsoft Clarity is a fantastic choice. If you require more comprehensive analytics and integrations, LogRocket or UXCam may be worth the investment.
Sample App
I have also created a basic sample app to demonstrate how to set up Microsoft Clarity for React Native apps.
In the fast-paced world of mobile technology, iOS widgets stand out as dynamic tools that enhance user engagement and convenience. With iOS 14’s introduction of widgets, Apple has empowered developers to create versatile, interactive components that provide valuable information and functionality right from the Home screen.
In this blog, we'll delve into the world of iOS widgets, exploring how to create exceptional user experiences with them.
Understanding WidgetKit:
WidgetKit is a framework provided by Apple that simplifies creating and managing widgets for iOS, iPadOS, and macOS. It offers a set of APIs and tools that enable developers to easily design, develop, and deploy widgets. WidgetKit handles various aspects of widget development, including data management, layout rendering, and update scheduling, allowing developers to focus on creating compelling widget experiences.
Key Components of WidgetKit:
Widget Extension: A widget extension is a separate target within an iOS app project responsible for defining and managing the widget’s behavior, appearance, and data.
Widget Configuration: The widget configuration determines the appearance and behavior of the widget displayed on the Home screen. It includes attributes such as the widget’s name, description, supported sizes, and placeholder content.
Timeline Provider: The timeline provider supplies the widget with dynamic content based on predefined schedules or user interactions.
Widget Views: Widget views are SwiftUI views used to define the layout and presentation of the widget’s content.
Understanding iOS Widgets:
Widgets offer a convenient way to present timely and relevant information from your app or provide quick access to app features directly on the device’s Home screen. Introduced in iOS 14, widgets come in various sizes and can showcase a wide range of content, including weather forecasts, calendar events, news headlines, and app-specific data.
Benefits of iOS Widgets:
Enhanced Accessibility: Widgets enable users to access important information and perform tasks without navigating away from the Home screen, saving time and effort.
Increased Engagement: By displaying dynamic content and interactive elements, widgets encourage users to interact with apps more frequently, leading to higher engagement rates.
Personalization: Users can customize their Home screen by adding, resizing, and rearranging widgets to suit their preferences and priorities.
Improved Productivity: Widgets provide at-a-glance updates on calendar events, reminders, and to-do lists, helping users stay organized and productive throughout the day.
Widget Sizes
Widget sizes refer to the dimensions and layouts available for widgets on different platforms and devices. In the context of iOS, iPadOS, and macOS, widgets come in various sizes, each offering a distinct layout and content display.
These sizes are designed to accommodate different amounts of information and fit various screen sizes, ensuring a consistent user experience across devices.
Here are the common widget sizes available:
Small: This size is compact, displaying essential information in a concise format. Small widgets are ideal for providing quick updates or notifications without taking up much space on the screen.
Medium: Medium-sized widgets offer slightly more space for content display compared to small widgets. They can accommodate additional information or more detailed visualizations while remaining relatively compact.
Large: Large widgets provide ample space for displaying extensive content or detailed visuals. They offer a comprehensive view of information and may include interactive elements for enhanced functionality.
Extra Large: This size is available primarily on iPadOS and macOS, offering the most significant amount of space for content display. Extra-large widgets are suitable for showcasing extensive data or intricate visualizations, maximizing visibility and usability on larger screens.
These widget sizes cater to different user preferences and use cases, allowing developers to choose the most appropriate size based on the content and functionality of their widgets. By offering a range of sizes, developers can ensure their widgets deliver a tailored experience that meets the diverse needs of users across various devices and platforms.
Best Practices for Widget Design and Development:
Building on the existing best practices, let’s introduce additional tips:
Accessibility Considerations: Ensure that widgets are accessible to all users, including those with disabilities, by implementing features such as VoiceOver support and high contrast modes.
Localization Support: Localize widget content and interface elements to cater to users from diverse linguistic and cultural backgrounds, enhancing the app’s global reach and appeal.
Data Privacy and Security: Safeguard users’ personal information and sensitive data by implementing robust security measures and adhering to privacy best practices outlined in Apple’s guidelines.
Integration with App Clips: Explore opportunities to integrate widgets with App Clips, which are lightweight app experiences that allow users to access specific features or content without installing the full app.
Creating a Month-Wise Holiday Widget
In this example, we will create a widget that displays the holidays of a month, allowing users to quickly see the month’s holidays at a glance right on their home screen.
Initial Setup
Open Xcode: Launch Xcode on your Mac.
Create a New Project: Select “Create a new Xcode project” from the welcome screen or go to File > New > Project from the menu bar.
Choose a Template: In the template chooser window, select the “App” template under the iOS tab. Make sure to select SwiftUI as the User Interface and click “Next.”
Configure Your Project: Enter the name of your project, choose your organization identifier (usually your reverse domain name), confirm SwiftUI as the interface and Swift as the language, and click "Next."
Xcode will generate a default SwiftUI view for your app.
Add a Widget Extension: In Xcode, navigate to the File menu and select New > Target. In the template chooser window, select the “Widget Extension” template under the iOS tab and click “Next.”
Configure the Widget Extension: Enter a name for your widget extension as “Monthly Holiday” and choose the parent app for the extension (your main project). Click “Finish.”
Select “Activate” when the Activate scheme pops up.
Set Up the Widget Extension: Xcode will generate the necessary files for your widget extension, including a view file (e.g., WidgetView.swift) and a provider file (e.g., WidgetProvider.swift).
Developing the Month-Wise Holidays Widget
Implementing Provider Struct and TimelineProvider Protocol:
The TimelineProvider protocol provides the data that a widget displays over time. By conforming to this protocol, you define how and when the data for your widget should be updated.
struct Provider: TimelineProvider {
    // Provides a placeholder entry while the widget is loading.
    func placeholder(in context: Context) -> DayEntry {
        DayEntry(date: Date(), configuration: ConfigurationIntent())
    }

    // Provides a snapshot of the widget's current state.
    func getSnapshot(in context: Context, completion: @escaping (DayEntry) -> ()) {
        let entry = DayEntry(date: Date(), configuration: ConfigurationIntent())
        completion(entry)
    }

    // Provides a timeline of entries for the widget.
    func getTimeline(in context: Context, completion: @escaping (Timeline<DayEntry>) -> ()) {
        var entries: [DayEntry] = []

        // Generate a timeline consisting of seven entries a day apart, starting from the current date.
        let currentDate = Date()
        for dayOffset in 0 ..< 7 {
            let entryDate = Calendar.current.date(byAdding: .day, value: dayOffset, to: currentDate)!
            let startOfDate = Calendar.current.startOfDay(for: entryDate)
            let entry = DayEntry(date: startOfDate, configuration: ConfigurationIntent())
            entries.append(entry)
        }

        // Build the timeline once all entries are collected and hand it to WidgetKit.
        let timeline = Timeline(entries: entries, policy: .atEnd)
        completion(timeline)
    }
}
Define a struct named DayEntry that conforms to the TimelineEntry protocol.
TimelineEntry is used in conjunction with TimelineProvider to manage and provide the data that the widget displays over time. By creating multiple timeline entries, you can control what your widget displays at different times throughout the day.
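Based on how Provider constructs its entries above, a minimal sketch of this struct looks like the following (the configuration property mirrors the ConfigurationIntent the provider passes in):

struct DayEntry: TimelineEntry {
    let date: Date                          // When WidgetKit should display this entry
    let configuration: ConfigurationIntent  // The intent configuration associated with the entry
}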
Define a SwiftUI view named MonthlyHolidayWidgetEntryView to display each entry in the widget.
struct MonthlyHolidayWidgetEntryView: View {
    var entry: DayEntry
    var config: MonthConfig

    // Custom initializer to configure the view based on the entry's date
    init(entry: DayEntry) {
        self.entry = entry
        self.config = MonthConfig.determineConfig(from: entry.date)
    }

    var body: some View {
        ZStack {
            // Background shape with gradient color based on the month configuration
            ContainerRelativeShape()
                .fill(config.backgroundColor.gradient)
            VStack {
                Spacer()
                // Display the date associated with the month
                HStack(spacing: 4) {
                    Text(config.dateText)
                        .foregroundColor(config.dayTextColor)
                        .font(.system(size: 25, weight: .heavy))
                }
                Spacer()
                // Display the name of the month
                Text(config.month)
                    .font(.system(size: 38, weight: .heavy))
                    .foregroundColor(config.dayTextColor)
                Spacer()
            }
            .padding()
        }
    }
}
Define a widget named MonthlyHolidayWidget using SwiftUI and WidgetKit.
struct MonthlyHolidayWidget: Widget {
    let kind: String = "MonthlyHolidaysWidget"

    var body: some WidgetConfiguration {
        StaticConfiguration(kind: kind, provider: Provider()) { entry in
            MonthlyHolidayWidgetEntryView(entry: entry)
        }
        .configurationDisplayName("Monthly style widget") // Display name for the widget in the widget gallery
        .description("The date of the widget changes based on holidays of month.") // Description of the widget's functionality
        .supportedFamilies([.systemLarge]) // Specify the widget size supported (large in this case)
    }
}
Define a PreviewProvider struct named MonthlyHolidayWidget_Previews.
struct MonthlyHolidayWidget_Previews: PreviewProvider {
    static var previews: some View {
        // Provide a preview of the MonthlyHolidayWidgetEntryView for the widget gallery
        MonthlyHolidayWidgetEntryView(entry: DayEntry(date: dateToDisplay(month: 12, day: 22), configuration: ConfigurationIntent()))
            .previewContext(WidgetPreviewContext(family: .systemLarge))
    }

    // Helper function to create a date for the given month and day in the year 2024
    static func dateToDisplay(month: Int, day: Int) -> Date {
        let components = DateComponents(calendar: Calendar.current, year: 2024, month: month, day: day)
        return Calendar.current.date(from: components)!
    }
}
Define an extension on the Date struct, adding computed properties to format dates in a specific way.
extension Date {
    // Computed property to get the weekday in a wide format (e.g., "Monday")
    var weekDayDisplayFormat: String {
        self.formatted(.dateTime.weekday(.wide))
    }

    // Computed property to get the day of the month (e.g., "22")
    var dayDisplayFormat: String {
        formatted(.dateTime.day())
    }
}
Define a `MonthConfig` struct that encapsulates configuration data for displaying month-specific attributes such as background color, date text, weekday text color, day text color, and month name based on a given date.
struct MonthConfig {
    let backgroundColor: Color    // Background color for the month display
    let dateText: String          // Text describing specific dates or holidays in the month
    let weekdayTextColor: Color   // Text color for weekdays
    let dayTextColor: Color       // Text color for days of the month
    let month: String             // Name of the month

    /// Determines and returns the configuration (MonthConfig) based on the given date.
    ///
    /// - Parameter date: The date used to determine the month configuration.
    /// - Returns: A MonthConfig object corresponding to the month of the given date.
    static func determineConfig(from date: Date) -> MonthConfig {
        let monthInt = Calendar.current.component(.month, from: date)
        switch monthInt {
        case 1:  // January
            return MonthConfig(backgroundColor: .gray, dateText: "1 and 26", weekdayTextColor: .black.opacity(0.6), dayTextColor: .white.opacity(0.8), month: "Jan")
        case 2:  // February
            return MonthConfig(backgroundColor: .palePink, dateText: "No Holiday", weekdayTextColor: .pink.opacity(0.5), dayTextColor: .white.opacity(0.8), month: "Feb")
        case 3:  // March
            return MonthConfig(backgroundColor: .paleGreen, dateText: "25", weekdayTextColor: .black.opacity(0.7), dayTextColor: .white.opacity(0.8), month: "March")
        case 4:  // April
            return MonthConfig(backgroundColor: .paleBlue, dateText: "No Holiday", weekdayTextColor: .black.opacity(0.5), dayTextColor: .white.opacity(0.8), month: "April")
        case 5:  // May
            return MonthConfig(backgroundColor: .paleYellow, dateText: "1", weekdayTextColor: .black.opacity(0.5), dayTextColor: .white.opacity(0.7), month: "May")
        case 6:  // June
            return MonthConfig(backgroundColor: .skyBlue, dateText: "No Holiday", weekdayTextColor: .black.opacity(0.5), dayTextColor: .white.opacity(0.7), month: "June")
        case 7:  // July
            return MonthConfig(backgroundColor: .blue, dateText: "No Holiday", weekdayTextColor: .black.opacity(0.5), dayTextColor: .white.opacity(0.8), month: "July")
        case 8:  // August
            return MonthConfig(backgroundColor: .paleOrange, dateText: "15", weekdayTextColor: .black.opacity(0.5), dayTextColor: .white.opacity(0.8), month: "August")
        case 9:  // September
            return MonthConfig(backgroundColor: .paleRed, dateText: "No Holiday", weekdayTextColor: .black.opacity(0.5), dayTextColor: .paleYellow.opacity(0.9), month: "Sep")
        case 10: // October
            return MonthConfig(backgroundColor: .black, dateText: "2", weekdayTextColor: .white.opacity(0.6), dayTextColor: .orange.opacity(0.8), month: "Oct")
        case 11: // November
            return MonthConfig(backgroundColor: .paleBrown, dateText: "31", weekdayTextColor: .black.opacity(0.6), dayTextColor: .white.opacity(0.6), month: "Nov")
        case 12: // December
            return MonthConfig(backgroundColor: .paleRed, dateText: "25", weekdayTextColor: .white.opacity(0.6), dayTextColor: .darkGreen.opacity(0.8), month: "Dec")
        default:
            // Default case for unexpected month values (shouldn't typically happen)
            return MonthConfig(backgroundColor: .gray, dateText: " ", weekdayTextColor: .black.opacity(0.6), dayTextColor: .white.opacity(0.8), month: "None")
        }
    }
}
Call MonthlyHolidayWidget and MonthlyWidgetLiveActivity inside “MonthlyWidgetBundle.”
import WidgetKit
import SwiftUI

@main
struct MonthlyWidgetBundle: WidgetBundle {
    var body: some Widget {
        MonthlyHolidayWidget()
        MonthlyWidgetLiveActivity()
    }
}
Now, finally, let's add the widget we created to a device:
Tap and hold a blank area of the Home screen for about 2 seconds.
Tap the plus (+) button in the top-left corner.
Enter the widget's name in the search bar.
Finally, select the widget name, "Monthly Holiday" in our case, to add it to the screen.
The finished widget looks as follows:
Conclusion:
iOS widgets represent a powerful tool for developers to enhance user experiences, drive engagement, and promote app adoption. By understanding the various types of widgets, implementing best practices for design and development, and exploring innovative use cases, developers can leverage their full potential to create compelling and impactful experiences for iOS users worldwide. As Apple continues to evolve the platform and introduce new features, widgets will remain a vital component of the iOS ecosystem, offering endless possibilities for innovation and creativity.
In the previous blog, we discussed Apache Iceberg's basic concepts, the setup process, and how to load data. In this post, we will delve into some of Iceberg's advanced features, including upsert functionality, schema evolution, time travel, and partitioning.
Upsert Functionality
One of Iceberg’s key features is its support for upserts. Upsert, which stands for update and insert, allows you to efficiently manage changes to your data. With Iceberg, you can perform these operations seamlessly, ensuring that your data remains accurate and up-to-date without the need for complex and time-consuming processes.
Schema Evolution
Schema evolution is another of its powerful features. Over time, the schema of your data may need to change due to new requirements or updates. Iceberg handles schema changes gracefully, allowing you to add, remove, or modify columns without having to rewrite your entire dataset. This flexibility ensures that your data architecture can evolve in tandem with your business needs.
Time Travel
Iceberg also provides time travel capabilities, enabling you to query historical data as it existed at any given point in time. This feature is particularly useful for debugging, auditing, and compliance purposes. By leveraging snapshots, you can easily access previous states of your data and perform analyses on how it has changed over time.
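For example, a hedged sketch using Iceberg's Spark SQL time-travel syntax (the timestamp and snapshot ID are placeholders; the table is the sample table we create below):

-- Query the table as it existed at a point in time
SELECT * FROM demo.db.data_sample TIMESTAMP AS OF '2024-05-10 10:00:00';

-- Or query a specific snapshot by its ID
SELECT * FROM demo.db.data_sample VERSION AS OF 1234567890123456789;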
Setup Iceberg on the local machine using the local catalog option or Hive
You can configure Iceberg directly in your Spark session; a few configurations must be passed while setting it up, as shown in the sketch below.
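A hedged sketch of such a session (the runtime package version and warehouse path are assumptions; match the iceberg-spark-runtime artifact to your Spark and Scala versions):

from pyspark.sql import SparkSession

# Local Iceberg catalog named "demo" backed by a Hadoop (filesystem) warehouse.
spark = (
    SparkSession.builder.appName("iceberg-local")
    # Assumed versions: Spark 3.5 / Scala 2.12 / Iceberg 1.5.0 — adjust to your setup.
    .config("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "file:///tmp/iceberg-warehouse")
    .getOrCreate()
)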
Create Tables in Iceberg and Insert Data
CREATE TABLE demo.db.data_sample (
    index string,
    organization_id string,
    name string,
    website string,
    country string,
    description string,
    founded string,
    industry string,
    num_of_employees string
) USING iceberg
We can either create the sample table using Spark SQL or directly write the data by mentioning the DB name and table name, which will create the Iceberg table for us.
You can see the data we have inserted. Apart from appending, you can use the overwrite method, just as with Delta Lake tables; an example of writing and reading an Iceberg table is shown below.
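A minimal sketch of both write modes and a read, assuming a DataFrame df whose schema matches the table:

# Append rows to the Iceberg table
df.writeTo("demo.db.data_sample").append()

# Or replace the table contents entirely, much like overwrite mode on Delta Lake tables
df.writeTo("demo.db.data_sample").createOrReplace()

# Read the data back
spark.sql("SELECT * FROM demo.db.data_sample").show()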
Handling Upserts
This Iceberg feature is similar to Delta Lake's: you can update records in existing Iceberg tables without rewriting the complete dataset. It is also used to handle CDC operations. We can take input from any incoming CSV and merge the data into the existing table without duplication, so the table always holds a single record for each primary key. This is how Iceberg maintains ACID properties.
Incoming Data
input_data = spark.read.option("header", "true").csv("../data/input-data/organizations-11111.csv")

# Creating the temp view of that dataframe to merge
input_data.createOrReplaceTempView("input_data")

spark.sql("select * from input_data").show()
We will merge this data into our existing Iceberg Table using Spark SQL.
MERGE INTO demo.db.data_sample t
USING (SELECT * FROM input_data) s
ON t.organization_id = s.organization_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

SELECT * FROM demo.db.data_sample;
Here, we can see the data once the merge operation has taken place.
Schema Evolution
Iceberg supports the following schema evolution changes:
Add – Add a new column to the Iceberg table
Drop – Remove an existing column from the table
Rename – Change the name of a column in the existing table
Update – Change the data type of a column or the partition columns of the Iceberg table
Reorder – Change the order of columns in the Iceberg table
After updating the schema, there is no need to overwrite or rewrite the data. For example, if your table has four columns, all holding data, and you add two more, you won't need to rewrite the existing data now that the table has six columns; you can still access it easily. This capability was lacking in Delta Lake but is present in Iceberg. Some characteristics of Iceberg schema evolution:
If we add any columns, they won’t impact the existing columns.
If we delete or drop any columns, they won’t impact other columns.
Updating a column or field does not change values in any other column.
Iceberg uses unique IDs to track each column added to a table.
Let's run some queries to update the schema, such as adding or deleting columns.
%%sql
ALTER TABLE demo.db.data_sample
ADD COLUMN fare_per_distance_unit float AFTER num_of_employees;
After adding another column, if we try to access the data again from the table, we can do so without seeing any kind of error. This is also how Iceberg solves schema-related problems.
Partition Evolution and Sort Order Evolution
Iceberg offers this option, which was missing in Delta Lake. When you evolve a partition spec, the old data written with the earlier spec remains unchanged, and new data is written using the new spec in a new layout. Metadata for each partition version is kept separately. Because of this, query planning becomes split planning: each partition layout plans its files separately, using the filter it derives for that specific layout.
Similar to partition spec, Iceberg sort order can also be updated in an existing table. When you evolve a sort order, the old data written with an earlier order remains unchanged.
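A hedged sketch of both evolutions using Iceberg's Spark SQL extensions (these statements require IcebergSparkSessionExtensions to be enabled; the column names come from our sample table):

-- Evolve the partition spec: new writes are partitioned by country,
-- while old data files keep their original layout.
ALTER TABLE demo.db.data_sample ADD PARTITION FIELD country;

-- Evolve the sort order: new data files are written ordered by organization_id.
ALTER TABLE demo.db.data_sample WRITE ORDERED BY organization_id;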
Iceberg supports both copy-on-write (COW) and merge-on-read (MOR) when loading data into an Iceberg table. We can configure this either by altering the table or while creating it.
Copy-On-Write (COW) – Best for tables with frequent reads, infrequent writes/updates, or large batch updates:
When your workload reads frequently but writes and updates rarely, you can configure this property on the Iceberg table. In COW, when we update or delete any rows, a new data file with a new version is created, and the latest version holds the updated data. Because the data is rewritten on every update or deletion, writes are slower, and large updates can become a bottleneck. As the name implies, a new copy of the data is created on write.
Reads, on the other hand, are ideal: since nothing needs to be merged at read time, the data can be read faster.
Merge-On-Read (MOR) – Best for tables with frequent writes/updates:
This is just the opposite of COW: the data files are not rewritten when rows are updated or deleted. Instead, Iceberg writes a change log of updated records, which is merged with the original data file at read time to produce the new state with the updated records.
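A hedged sketch of configuring these modes through table properties (these are Iceberg's format-v2 row-level operation settings; the same keys can be set in TBLPROPERTIES at creation time):

ALTER TABLE demo.db.data_sample SET TBLPROPERTIES (
    'write.delete.mode' = 'merge-on-read',
    'write.update.mode' = 'merge-on-read',
    'write.merge.mode'  = 'merge-on-read'
);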
Supported query engines and integrations:
Conclusion
Through this research, we learned about Iceberg's features and its compatibility with various metastores for integration. We covered the basics of configuring Iceberg both locally and on different cloud platforms, and we explored upserts, schema evolution, and partition evolution.
Have you ever encountered vague or misleading data analytics reports? Are you struggling to provide accurate data values to your end users? Have you ever experienced being misdirected by a geographical map application, leading you to the wrong destination? Imagine Amazon customers expressing dissatisfaction due to receiving the wrong product at their doorstep.
These issues stem from the use of incorrect or vague data by application/service providers. The need of the hour is to address these challenges by enhancing data quality processes and implementing robust data quality solutions. Through effective data management and validation, organizations can unlock valuable insights and make informed decisions.
“Harnessing the potential of clean data is like painting a masterpiece with accurate brushstrokes.”
Introduction
Data quality assurance (QA) is the systematic approach organizations use to ensure they have reliable, correct, consistent, and relevant data. It involves various methods, approaches, and tools to maintain good data quality from commencement to termination.
What is Data Quality?
Data quality refers to the overall utility of a dataset and its ability to be easily processed and analyzed for other uses. It is an integral part of data governance that ensures your organization’s data is fit for purpose.
How can I measure Data Quality?
What is the critical importance of Data Quality?
Remember, good data is super important! So, invest in good data—it’s the secret sauce for business success!
What are the Data Quality Challenges?
1. Data quality issues on production:
Production-specific data quality issues are primarily caused by unexpected changes in the data and infrastructure failures.
A. Source and third-party data changes:
External data sources, like websites or companies, may introduce errors or inconsistencies, making it challenging to use the data accurately. These issues can lead to system errors or missing values, which might go unnoticed without proper monitoring.
Example:
File formats change without warning:
Imagine we’re using an API to get data in CSV format, and we’ve made a pipeline that handles it well.
import csv

def process_csv_data(csv_file):
    with open(csv_file, 'r') as file:
        csv_reader = csv.DictReader(file)
        for row in csv_reader:
            print(row)

csv_file = 'data.csv'
process_csv_data(csv_file)
The data source switched to using the JSON format, breaking our pipeline. This inconsistency can cause errors or missing data if our system can’t adapt. Monitoring and adjustments will ensure the accuracy of data analysis or applications.
Malformed data values and schema changes:
Suppose we’re handling inventory data for an e-commerce site. The starting schema for your inventory dataset might have fields like:
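The original schema isn't shown, but assume something like the following (field names other than quantity and last_updated_at are hypothetical):

product_id: string
product_name: string
quantity: integer
last_updated_at: timestamp (ISO-8601, e.g., "2024-05-10T12:30:00Z")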
Now, imagine that the inventory file’s schema changed suddenly. A “quantity” column has been renamed to “qty,” and the last_updated_at timestamp format switches to epoch timestamp.
This change might not be communicated in advance, leaving our data pipeline unprepared to handle the new field and time format.
B. Infrastructure failures:
Reliable software is crucial for processing large data volumes, but even the best tools can encounter issues. Infrastructure failures, like glitches or overloads, can disrupt data processing regardless of the software used.
Solution:
Data observability tools such as Monte Carlo, BigEye, and Great Expectations help detect these issues by monitoring for changes in data quality and infrastructure performance. These tools are essential for identifying and alerting on the root causes of data problems, ensuring data reliability in production environments.
2. Data quality issues during development:
Development-specific data quality issues are primarily caused by untested code changes.
A. Incorrect parsing of data:
Data transformation bugs can occur due to mistakes in code or parsing, leading to data type mismatches or schema inaccuracies.
Example:
Imagine we’re converting a date string (“YYYY-MM-DD”) to a Unix epoch timestamp using Python. But misunderstanding the strptime() function’s format specifier leads to unexpected outcomes.
from datetime import datetime

timestamp_str = "2024-05-10"  # Incoming data actually uses the "%Y-%d-%m" format (day 05, month 10)

# Incorrect: "%Y-%m-%d" treats "05" as the month (it is the day) and "10" as the day (it is the month)
format_date = "%Y-%m-%d"
timestamp_dt = datetime.strptime(timestamp_str, format_date)
epoch_seconds = int(timestamp_dt.timestamp())
This error makes strptime() interpret “2024” as the year, “05” as the month (instead of the day), and “10” as the day (instead of the month), leading to inaccurate data in the timestamp_dt variable.
B. Misapplied or misunderstood requirements:
Even with the right code, data quality problems can still occur if requirements are misunderstood, resulting in logic errors and data quality issues.
Example: Imagine we’re assigned to validate product prices in a dataset, ensuring they fall between $10 and $100.
The requirement intends prices to be strictly greater than $10 and less than $100, but a misinterpretation leads the code to check whether prices are >= $10 and <= $100. This makes a price of exactly $10 pass validation, causing a data quality problem.
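A minimal sketch of the boundary bug (the function name is hypothetical):

# Requirement: price must be strictly greater than $10 and less than $100
def is_valid_price(price: float) -> bool:
    return 10 <= price <= 100  # Bug: inclusive bounds let exactly $10 (and $100) pass
# The intended check is: 10 < price < 100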
C. Unaccounted downstream dependencies:
Despite careful planning and logic, data quality incidents can occur due to overlooked dependencies. Understanding data lineage and communicating effectively across all users is crucial to preventing such incidents.
Example:
Suppose we’re working on a database schema migration project for an e-commerce system. In the process, we rename the order_date column to purchase_date in the orders table. Despite careful planning and testing, a data quality issue arises due to an overlooked downstream dependency. The marketing team’s reporting dashboard relies on a SQL query referencing the order_date column, now renamed purchase_date, resulting in inaccurate reporting and potentially misinformed business decisions.
Here’s an example SQL query that represents the overlooked downstream dependency:
-- SQL query used by the marketing team's reporting dashboard
SELECT
    DATE_TRUNC('month', order_date) AS month,
    SUM(total_amount) AS total_sales
FROM orders
GROUP BY DATE_TRUNC('month', order_date)
This SQL query relies on the order_date column to calculate monthly sales metrics. After the schema migration, this column no longer exists, causing query failure and inaccurate reporting.
Solutions:
Data Quality tools like Great Expectations and Deequ proactively catch data quality issues by testing changes introduced from data-processing code, preventing issues from reaching production.
a. Testing assertions: Assertions validate data against expectations, ensuring data integrity. While useful, they require careful maintenance and should be selectively applied.
Example: Suppose we have an "orders" table in our dbt project and need to ensure the "total_amount" column contains only numeric values; we can write a dbt test to validate this data quality rule.
We specify the dbt version (version: 2), model named “orders,” and “total_amount” column.
Within the “total_amount” column definition, we add a test named “data_type” with the value “numeric,” ensuring the column contains only numeric data.
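A hedged sketch of that schema file. Note that "data_type" is not one of dbt's built-in generic tests (those are unique, not_null, accepted_values, and relationships), so this assumes a custom or package-provided test of that name; dbt_expectations' expect_column_values_to_be_of_type is a common real-world equivalent:

version: 2

models:
  - name: orders
    columns:
      - name: total_amount
        tests:
          - data_type: numeric   # assumed custom/package test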
Running the dbt test command will execute this test, checking if the “total_amount” column adheres to the numeric data type. Any failure indicates a data quality issue.
b. Comparing staging and production data: Data Diff is a CLI tool that compares datasets within or across databases, highlighting changes in data much like git diff highlights changes in source code, which aids in detecting data quality issues early in the development process.
Here’s a data-diff example between staging and production databases for the payment_table.
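A hedged sketch of that invocation (connection URIs and the key column are placeholders; check data-diff --help for the exact flags in your version):

# Compare payment_table between staging and production, keyed on the id column
data-diff \
  "postgresql://user:pass@staging-host/analytics" payment_table \
  "postgresql://user:pass@prod-host/analytics" payment_table \
  -k id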
What are some best practices for maintaining high-quality data?
Establish Data Standards: Define clear data standards and guidelines for data collection, storage, and usage to ensure consistency and accuracy across the organization.
Data Validation: Implement validation checks to ensure data conforms to predefined rules and standards, identifying and correcting errors early in the data lifecycle.
Regular Data Cleansing: Schedule regular data cleansing activities to identify and correct inaccuracies, inconsistencies, and duplicates in the data, ensuring its reliability and integrity over time.
Data Governance: Establish data governance policies and procedures to manage data assets effectively, including roles and responsibilities, data ownership, access controls, and compliance with regulations.
Metadata Management: Maintain comprehensive metadata to document data lineage, definitions, and usage, providing transparency and context for data consumers and stakeholders.
Data Security: Implement robust data security measures to protect sensitive information from unauthorized access, ensuring data confidentiality, integrity, and availability.
Data Quality Monitoring: Continuously monitor data quality metrics and KPIs to track performance, detect anomalies, and identify areas for improvement, enabling proactive data quality management.
Data Training and Awareness: Provide data training and awareness programs for employees to enhance their understanding of data quality principles, practices, and tools, fostering a data-driven culture within the organization.
Collaboration and Communication: Encourage collaboration and communication among stakeholders, data stewards, and IT teams to address data quality issues effectively and promote accountability and ownership of data quality initiatives.
Continuous Improvement: Establish a culture of continuous improvement by regularly reviewing and refining data quality processes, tools, and strategies based on feedback, lessons learned, and evolving business needs.
Can you recommend any tools for improving data quality?
AWS Deequ: AWS Deequ is an open-source data quality library built on top of Apache Spark. It provides tools for defining data quality rules and validating large-scale datasets in Spark-based data processing pipelines.
Great Expectations: GX Cloud is a fully managed SaaS solution that simplifies deployment, scaling, and collaboration and lets you focus on data validation.
Soda: Soda allows data engineers to test data quality early and often in pipelines to catch data quality issues before they have a downstream impact.
Datafold: Datafold is a cloud-based data quality platform that automates and simplifies the process of monitoring and validating data pipelines. It offers features such as automated data comparison, anomaly detection, and integration with popular data processing tools like dbt.
Considerations for Selecting a Data QA Tool:
Selecting a data QA (Quality Assurance) tool hinges on your specific needs and requirements. Consider factors such as:
1. Scalability and Performance: Ensure the tool can handle current and future data volumes efficiently, with real-time processing capabilities.
Example: Great Expectations helps validate data in a big data environment by providing a scalable and customizable way to define and monitor data quality across different sources.
2. Data Profiling and Cleansing Capabilities: Look for comprehensive data profiling and cleansing features to detect anomalies and improve data quality.
Example: AWS Glue DataBrew offers data profiling, cleaning and normalizing, data lineage mapping, and automation of data cleaning and normalization tasks.
3. Data Monitoring Features: Choose tools with continuous monitoring capabilities, allowing you to track metrics and establish data lineage.
Example: Datafold's monitoring feature allows data engineers to write SQL commands to find anomalies and create automated alerts.
4. Seamless Integration with Existing Systems: Select a tool compatible with your existing systems to minimize disruption and facilitate seamless integration.
Example: dbt offers seamless integration with existing data infrastructure, including data warehouses and BI tools. It allows users to define data transformation pipelines using SQL, making it compatible with a wide range of data systems.
5. User-Friendly Interface: Prioritize tools with intuitive interfaces for quick adoption and minimal training requirements.
Example: Soda SQL is an open-source tool with a simple command-line interface (CLI) and Python library to test your data through metric collection.
6. Flexibility and Customization Options: Seek tools that offer the flexibility to adapt to changing data requirements and allow customization of rules and workflows.
Example: dbt offers flexibility and customization options for defining data transformation workflows.
7. Vendor Support and Community: Evaluate vendors based on their support reputation and active user communities for shared knowledge and resources.
Example: AWS Deequ is supported by Amazon Web Services (AWS) and has an active community of users. It provides comprehensive documentation, tutorials, and forums for users to seek assistance and share knowledge about data quality best practices.
8. Pricing and Licensing Options: Consider pricing models that align with your budget and expected data usage, such as subscription-based or volume-based pricing.
Example: Great Expectations offers flexible pricing and licensing options, including both an open-source (freely available) edition and an enterprise edition (subscription-based).
Ultimately, the right tool should effectively address your data quality challenges and seamlessly fit into your data infrastructure and workflows.
Conclusion: The Vital Role of Data Quality
In conclusion, data quality is paramount in today’s digital age. It underpins informed decisions, strategic formulation, and business success. Without it, organizations risk flawed judgments, inefficiencies, and competitiveness loss. Recognizing its vital role empowers businesses to drive innovation, enhance customer experiences, and achieve sustainable growth. Investing in robust data management, embracing technology, and fostering data integrity are essential. Prioritizing data quality is key to seizing new opportunities and staying ahead in the data-driven landscape.
As discussed in our previous Delta Lake blog, several table formats are already in use, each highly capable and with its own benefits. Iceberg is one of them, and it is the subject of this blog.
What is Apache Iceberg?
Apache Iceberg is an open-source table format used to handle large amounts of data stored locally or on various cloud storage platforms. Netflix developed Iceberg to solve its big data problems and later donated it to the Apache Software Foundation, where it became open source in 2018. Iceberg now has a large number of contributors all over the world on GitHub and is one of the most widely used table formats.
Iceberg mainly solves all the key problems we once faced when using the Hive table format to deal with data stored on various cloud storage like S3.
Iceberg tables offer features and capabilities similar to SQL tables. Because it is open source, multiple engines such as Spark can operate on it to perform transformations. It also provides full ACID properties. This blog is a quick introduction to Iceberg, covering its features and initial setup.
Why go with Iceberg?
The main reason to use Iceberg is that it performs better when loading data from S3, or when metadata lives on a cloud storage medium. Unlike Hive, which tracks data at the folder level (which can hurt performance), Iceberg tracks data at the file level; that is why we choose Iceberg. Here is the file hierarchy Iceberg uses while saving data into its tables. Each Iceberg table is a combination of four file types: the snapshot metadata file, the manifest list, manifest files, and data files.
Snapshot Metadata File: This file holds the metadata information about the table, such as the schema, partitions, and manifest list.
Manifest List: This list records each manifest file along with the path and metadata information. At this point, Iceberg decides which manifest files to ignore and which to read.
Manifest File: This file contains the paths to real data files, which hold the real data along with the metadata.
Data File: The actual Parquet, ORC, or Avro files holding the real data.
Features of Iceberg:
Some Iceberg features include:
Schema Evolution: Iceberg allows you to evolve your schema without having to rewrite your data. This means you can easily add, drop, or rename columns, providing flexibility to adapt to changing data requirements without impacting existing queries.
Partition Evolution: Iceberg supports partition evolution, enabling you to modify the partitioning scheme as your data and query patterns evolve. This feature helps maintain query performance and optimize data layout over time.
Time Travel: Iceberg’s time travel feature allows you to query historical versions of your data. This is particularly useful for debugging, auditing, and recreating analyses based on past data states.
Multiple Query Engine Support: Iceberg supports multiple query engines, including Trino, Presto, Hive, and Amazon Athena. This interoperability ensures that you can read and write data across different tools seamlessly, facilitating a more versatile and integrated data ecosystem.
AWS Support: Iceberg is well-integrated with AWS services, making it easy to use with Amazon S3 for storage and other AWS analytics services. This integration helps leverage the scalability and reliability of AWS infrastructure for your data lake.
ACID Compliance: Iceberg ensures ACID (Atomicity, Consistency, Isolation, Durability) transactions, providing reliable data consistency and integrity. This makes it suitable for complex data operations and concurrent workloads, ensuring data reliability and accuracy.
Hidden Partitioning: Iceberg’s hidden partitioning abstracts the complexity of managing partitions from the user, automatically handling partition management to improve query performance without manual intervention.
Snapshot Isolation: Iceberg supports snapshot isolation, enabling concurrent read and write operations without conflicts. This isolation ensures that users can work with consistent views of the data, even as it is being updated.
Support for Large Tables: Designed for high scalability, Iceberg can efficiently handle petabyte-scale tables, making it ideal for large datasets typical in big data environments.
Compatibility with Modern Data Lakes: Iceberg’s design is tailored for modern data lake architectures, supporting efficient data organization, metadata management, and performance optimization, aligning well with contemporary data management practices.
These features make Iceberg a powerful and flexible table format for managing data lakes, ensuring efficient data processing, robust performance, and seamless integration with various tools and platforms. By leveraging Iceberg, organizations can achieve greater data agility, reliability, and efficiency, enhancing their data analytics capabilities and driving better business outcomes.
Prerequisite:
PySpark: Ensure that you have PySpark installed and properly configured. PySpark provides the Python API for Spark, enabling you to harness the power of distributed computing with Spark using Python.
Python: Make sure you have Python installed on your system. Python is essential for writing and running your PySpark scripts. It’s recommended to use a virtual environment to manage your dependencies effectively.
Iceberg-Spark JAR: Download the appropriate Iceberg-Spark JAR file that corresponds to your Spark version. This JAR file is necessary to integrate Iceberg with Spark, allowing you to utilize Iceberg’s advanced table format capabilities within your Spark jobs.
Jars to Configure Cloud Storage: Obtain and configure the necessary JAR files for your specific cloud storage provider. For example, if you are using Amazon S3, you will need the hadoop-aws JAR and its dependencies. For Google Cloud Storage, you need the gcs-connector JAR. These JARs enable Spark to read from and write to cloud storage systems.
Spark and Hadoop Configuration: Ensure your Spark and Hadoop configurations are correctly set up to integrate with your cloud storage. This might include setting the appropriate access keys, secret keys, and endpoint configurations in your spark-defaults.conf and core-site.xml.
Iceberg Configuration: Configure Iceberg settings specific to your environment. This might include catalog configurations (e.g., Hive, Hadoop, AWS Glue) and other Iceberg properties that optimize performance and compatibility.
Development Environment: Set up a development environment with an IDE or text editor that supports Python and Spark development, such as IntelliJ IDEA with the PyCharm plugin, Visual Studio Code, or Jupyter Notebooks.
Data Source Access: Ensure you have access to the data sources you will be working with, whether they are in cloud storage, relational databases, or other data repositories. Proper permissions and network configurations are necessary for seamless data integration.
Basic Understanding of Data Lakes: A foundational understanding of data lake concepts and architectures will help effectively utilize Iceberg. Knowledge of how data lakes differ from traditional data warehouses and their benefits will also be helpful.
Version Control System: Use a version control system like Git to manage your codebase. This helps in tracking changes, collaborating with team members, and maintaining code quality.
Documentation and Resources: Familiarize yourself with Iceberg documentation and other relevant resources. This will help you troubleshoot issues, understand best practices, and leverage advanced features effectively.
You can download the runtime JAR according to the Spark version installed on your machine or cluster; the process is the same as the Delta Lake setup. You can either download these JAR files to your machine or cluster and supply them to the spark-submit command, or fetch them while initializing the Spark session by passing them in the Spark config as JAR packages with the appropriate version.
To use cloud storage, we are using these JARs with the S3 bucket for reading and writing Iceberg tables. Here is the basic example of a spark session:
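A hedged sketch of such a session: the package versions and the credentials provider are assumptions, and the warehouse path reuses the bucket from the examples below.

from pyspark.sql import SparkSession

# Iceberg catalog backed by S3 via the hadoop-aws connector.
spark = (
    SparkSession.builder.appName("iceberg-s3")
    # Assumed versions: align them with your Spark, Scala, and Hadoop builds.
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0,"
            "org.apache.hadoop:hadoop-aws:3.3.4")
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "s3a://abhishek-test-01012023/iceberg_v2")
    .config("spark.hadoop.fs.s3a.aws.credentials.provider",
            "com.amazonaws.auth.DefaultAWSCredentialsProviderChain")
    .getOrCreate()
)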
To run Spark locally in Docker, save your compose file as docker-compose.yaml and run: docker compose up. You can then log into your container by using this command:
docker exec -it <container-id> bash
You can mount the sample data directory in the container, or copy it from your local machine into the container using the docker cp command (see the example below).
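For example (the container ID and destination path are placeholders):

docker cp ../data/input-data <container-id>:/opt/spark/data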
We read the data in Spark and create an Iceberg table from it, storing the Iceberg tables in the S3 bucket.
Some Iceberg functionality won't work if the appropriate Iceberg JAR file isn't installed. The Iceberg version must be compatible with the Spark version you are using; otherwise, some features, such as partitioning, will throw a NoSuchMethodError. This must be handled carefully when setting things up on either EC2 or EMR.
Step 1
Create an Iceberg table on S3 and write data into it. The sample data we use was generated by a Spark job for Delta tables; we reuse the same data, whose schema is as follows.
Step 2
We create Iceberg tables at a location in the S3 bucket and write the data, with partition columns, to the same bucket.
spark.sql("""
CREATE TABLE IF NOT EXISTS demo.db.iceberg_data_2 (
    id INT, first_name String, last_name String, address String, pincocde INT,
    net_income INT, source_of_income String, state String, email_id String,
    description String, population INT, population_1 String, population_2 String,
    population_3 String, population_4 String, population_5 String, population_6 String,
    population_7 String, date INT)
USING iceberg
TBLPROPERTIES ('format'='parquet', 'format-version'='2')
PARTITIONED BY (`date`)
LOCATION 's3a://abhishek-test-01012023/iceberg_v2/db/iceberg_data_2'
""")

# Read the data that needs to be written:
# reading the data from Delta tables into a Spark DataFrame
df = spark.read.parquet("s3a://abhishek-test-01012023/delta-lake-sample-data/")

logging.info("Starting writing the data")
df.sortWithinPartitions("date").writeTo("demo.db.iceberg_data").partitionedBy("date").createOrReplace()
logging.info("Writing has been finished")

logging.info("Query the data from iceberg using spark SQL")
spark.sql("describe table demo.db.iceberg_data").show()
spark.sql("Select * from demo.db.iceberg_data limit 10").show()
This is how we can use Iceberg over S3. There is another option: we can also create Iceberg tables in the AWS Glue catalog. Most tables created in the Glue catalog using Athena are external tables that we use after generating the manifest files, as with Delta Lake.
Step 3
We print the Iceberg table’s data along with the table descriptions.
Using Iceberg, we can directly create the table in the Glue catalog using Athena, and it supports all read and write operations on the available data. The following configurations need to be used in Spark when working with the Glue catalog; a sketch is shown below.
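A hedged sketch of those Spark configs (the catalog name and warehouse path are placeholders; catalog-impl and io-impl are standard Iceberg options, and the Iceberg AWS bundle JARs must be on the classpath):

spark = (
    SparkSession.builder.appName("iceberg-glue")
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3a://abhishek-test-01012023/iceberg_glue/")
    .config("spark.sql.catalog.glue.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .getOrCreate()
)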
Now, we can easily create the Iceberg table using either Spark or Athena, and it will be accessible from both. We can perform upserts, too.
Conclusion
We’ve learned the basics of the Iceberg table format, its features, and the reasons for choosing Iceberg. We discussed how Iceberg provides significant advantages such as schema evolution, partition evolution, hidden partitioning, and ACID compliance, making it a robust choice for managing large-scale data. We also delved into the fundamental setup required to implement this table format, including configuration and integration with data processing engines like Apache Spark and query engines like Presto and Trino. By leveraging Iceberg, organizations can ensure efficient data management and analytics, facilitating better performance and scalability. With this knowledge, you are well-equipped to start using Iceberg for your data lake needs, ensuring a more organized, scalable, and efficient data infrastructure.
Business Intelligence (BI) tools have become a cornerstone of modern data analysis, transcending the limitations of traditional methods like Excel and plain databases.
With plenty of options, selecting the right BI tool is crucial for unlocking the full potential of your organization’s data. In this blog, we will explore some popular BI tools, their features, and key considerations to help you make an informed decision.
Here are some of the leading tools at the forefront of our discussion.
1. Business Objectives & User Expertise
Your selected BI tool must align with your business objectives and user expertise:
Identify the specific goals and outcomes you want to achieve from the BI tool. It could be improving sales, optimizing operations, or enhancing competitive insights.
Be sure to also assess the technical proficiency of your users and choose a BI tool that matches the skill level of your team to achieve optimal utilization and efficiency.
After solidifying the objectives, dive into the additional considerations explained below to craft your ultimate decision.
2. Factors Related to Installation
When choosing the BI tool from an installation and deployment perspective, various factors come into play. A selection of these considerations is outlined in the table below.
Based on these points, we can summarize that:
Smaller businesses might prefer user-friendly options like Power BI or Qlik Sense.
Larger enterprises with extensive IT support might opt for Tableau or SAP BI for their comprehensive features.
Open-source enthusiasts might find Apache Superset appealing, but it requires a solid understanding of software deployment.
3. Ease of Use & Learning Curve
To ensure widespread adoption within your organization, choose a BI tool that prioritizes ease of use and has a manageable learning curve.
Power BI and Tableau offer user-friendly interfaces, making them accessible to a wide range of users, with moderate learning curves.
SAP BI is ideal for organizations already familiar with SAP products, leveraging existing expertise for seamless integration.
Superset and Qlik Sense provide a balanced approach, accommodating users with different levels of technical proficiency while ensuring accessibility and usability.
4. Integration with Existing Infrastructure
You must also consider how well the BI tool aligns with existing IT infrastructure, applications, and databases:
Power BI:
Integrates well with Microsoft products, providing seamless connectivity and robust integration. It is well-suited for businesses leveraging Microsoft technologies.
Tableau:
Tableau is a leading BI and data visualization tool with robust integration capabilities. Like many other BI platforms, it supports a wide range of data sources, cloud platforms, and big data technologies like Spark and Hadoop. This makes it suitable for organizations with a diverse tech stack.
SAP BI:
It integrates well with SAP products. For third-party applications, the SAP Business Connector is used for integration, which can be challenging and requires additional configuration. It is best suited for organizations heavily invested in SAP products.
Apache Superset:
Apache Superset provides integration options for a wide range of systems thanks to its open-source nature and active community support. However, additional setup and configuration are required for specific technologies. It works well for small-scale deployments, but rolling it out across a large organization can become a complex and tedious task.
Qlik Sense:
Qlik Sense is known for its strong integration capabilities and real-time data analysis. Much like Tableau, it seamlessly connects with various data sources, big data technologies like Hadoop and Spark, and major cloud platforms like GCP, AWS, and Azure.
5. Cost Estimation
BI platforms can vary significantly in their pricing models and associated costs. So, you need to evaluate costs against your current and future usage and team size. Here, I’ve mentioned some key points to consider when comparing BI tools with a focus on budget constraints:
If an organization possesses the expertise to manage its cloud infrastructure and has a dedicated team to oversee resource scaling and monitoring, Apache Superset stands out as an excellent choice. This minimizes your licensing costs.
However, if building a cloud infrastructure isn’t your preference and you need a Software as a Service (SaaS) solution, Power BI Premium could be suitable for small teams focused on analysis.
SAP BI presents a viable option for large organizations needing customized pricing plans tailored to specific requirements.
Alternatively, if you require both cloud and on-premise options, Qlik Sense and Tableau offer versatile solutions, catering well to the needs of small and medium-sized businesses.
Summary
So, in a nutshell, when choosing a BI tool, carefully assess your organization’s individual needs, technical infrastructure, budget limitations, and technical proficiency. Each tool has its strengths, so tailor your choice to match your specific requirements, enabling you to maximize your data’s potential.
Hands up if you’ve ever built a React project with Create-React-App (CRA)—and that’s all of us, isn’t it? Now, how about we pull back the curtain and see what’s actually going on behind the scenes? Buckle up, it’s time to understand what CRA really is and explore the wild, untamed world of creating a React project without it. Sounds exciting, huh?
What is CRA?
CRA—Create React App (https://create-react-app.dev/)—is a command-line utility provided by Facebook for creating React apps with a preconfigured setup. CRA provides an abstraction layer over the nitty-gritty details of configuring tools like Babel and Webpack, so developers don’t need to worry about anything but code.
That’s all well and good, but why do we need to learn about manual configuration? At some point in your career, you’ll likely have to adjust webpack configurations. And if that’s not a convincing reason, how about satisfying your curiosity? 🙂
What is Webpack?
“At its core, webpack is a static module bundler for modern JavaScript applications.”
But what does that actually mean? Let’s break it down:
static: refers to the static assets (HTML, CSS, JS, images) in our application.
module: refers to a piece of code in one of our files. In a large application, it’s usually not possible to write everything in a single file, so we have multiple modules working together.
bundler: the tool (webpack, in our case) that bundles up everything we have used in our project and converts it into native, browser-understandable JS, CSS, and HTML (static assets).
So, in essence, webpack takes our application’s static assets (like JavaScript modules, CSS files, and more) and bundles them together, resolving dependencies and optimizing the final output.
Webpack is preconfigured in our Create-React-App (CRA), and for most use cases, we don’t need to adjust it. You’ll find that many tutorials begin a React project with CRA. However, to truly understand webpack and its functionalities, we need to configure it ourselves. In this guide, we’ll attempt to do just that.
Let’s break this whole process into multiple steps:
Step 1: Let us name our new project
Create a new project folder and navigate into it:
mkdir react-webpack-way
cd react-webpack-way
Step 2: Initialize npm
Run the following command to initialize a new npm project. Answer the prompts or press Enter to accept the default values.
npm init # if you are patient enough to answer the prompts :)
Or:
npm init -y
This will generate a package.json for us.
Step 3: Install React and ReactDOM
Install React and ReactDOM as dependencies:
npm install react react-dom
Step 4: Create project structure
You can create any folder structure that you are used to. But for the sake of simplicity, let’s stick to the following structure:
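react-webpack-way/
├── package.json
├── public/
│   └── index.html
└── src/
    └── index.js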
<!-- public/index.html -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>React with Webpack</title>
  </head>
  <body>
    <div id="root"></div> <!-- Do not miss this one -->
  </body>
</html>
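We’ll also need the entry file that webpack will start from. Here’s a minimal sketch of src/index.js (assuming React 18’s createRoot API; the App component here is just a placeholder for your own code):

// src/index.js
import React from 'react';
import { createRoot } from 'react-dom/client';

// A placeholder component; replace it with your own app
const App = () => <h1>Hello from React with Webpack!</h1>;

createRoot(document.getElementById('root')).render(<App />);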
Step 7: Install Webpack and Babel
Install Webpack, Babel, and html-webpack-plugin as development dependencies:
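# exact versions may vary; this package set matches the tools discussed below
npm install --save-dev webpack webpack-cli webpack-dev-server babel-loader @babel/core @babel/preset-env @babel/preset-react html-webpack-plugin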
In a nutshell, some of the reasons we use Babel are:
JavaScript ECMAScript Compatibility:
Babel allows developers to use the latest ECMAScript (ES) features in their code, even if the browser or Node.js environment doesn’t yet support them. This is achieved through the process of transpiling, where Babel converts modern JavaScript code (ES6 and beyond) into a version that is compatible with a wider range of browsers and environments.
JSX Transformation:
JSX (JavaScript XML) is a syntax extension for JavaScript used with React. Babel is required to transform JSX syntax into plain JavaScript, as browsers do not understand JSX directly. This transformation is necessary for React components to be properly rendered in the browser.
Module System Transformation:
Babel helps in transforming the module system used in JavaScript. It can convert code written using the ES6 module syntax (import and export) into the CommonJS or AMD syntax that browsers and older environments understand.
Polyfilling:
Babel can include polyfills for features not present in the target environment. This ensures your application can use newer language features or APIs even if they are not supported natively.
Browser Compatibility:
Different browsers have varying levels of support for JavaScript features. Babel helps address these compatibility issues by allowing developers to write code using the latest features and then automatically transforming it to a version that works across different browsers.
The html-webpack-plugin is a popular webpack plugin that simplifies the process of creating an HTML file to serve your bundled JavaScript files. It automatically injects the bundled script(s) into the HTML file, saving you from having to manually update the script tags every time your bundle changes. To put it in perspective, if you don’t have this plugin, you won’t see your React index file injected into the HTML file.
Step 8: Configure Babel
Create a .babelrc file in the project root and add the following configuration (a minimal setup using the presets installed in Step 7):
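{
  "presets": ["@babel/preset-env", "@babel/preset-react"]
}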
Step 9: Configure Webpack
Create a webpack.config.js file in the project root.
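Here is the complete file, as a minimal sketch consistent with the walkthrough that follows (the devServer syntax assumes webpack-dev-server 4 or newer):

const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx)$/,
        exclude: /node_modules/,
        use: 'babel-loader',
      },
    ],
  },
  plugins: [
    new HtmlWebpackPlugin({
      template: 'public/index.html',
    }),
  ],
  devServer: {
    // older webpack-dev-server versions used `contentBase` instead of `static`
    static: { directory: path.join(__dirname, 'public') },
    port: 3000,
  },
};

Now, let’s go through each section of the file and explain what each keyword means: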
const path = require('path');
This line imports the Node.js path module, which provides utilities for working with file and directory paths. Our webpack configuration ensures that file paths are specified correctly and consistently across different operating systems.
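const HtmlWebpackPlugin = require('html-webpack-plugin');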
This line imports the HtmlWebpackPlugin module. This webpack plugin simplifies the process of creating an HTML file to include the bundled JavaScript files. It’s a convenient way of automatically generating an HTML file that includes the correct script tags for our React application.
module.exports = { ... };
This line exports a JavaScript object, which contains the configuration for webpack. It specifies how webpack should bundle and process your code.
entry: './src/index.js',
This configuration tells webpack the entry point of your application, which is the main JavaScript file where the bundling process begins. In this case, it’s ./src/index.js.
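output: { path: path.resolve(__dirname, 'dist'), filename: 'bundle.js', },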
This configuration specifies where the bundled JavaScript file should be output: path is the directory, and filename is the name of the output file. In this case, it will be placed in the dist directory with the name bundle.js.
module: { rules: [ ... ], },
This section defines rules for how webpack should process different types of files. In this case, it specifies a rule for JavaScript and JSX files (those ending with .js or .jsx). The babel-loader is used to transpile these files using Babel, excluding files in the node_modules directory.
plugins: [ new HtmlWebpackPlugin({ template: 'public/index.html', }), ],
This section includes an array of webpack plugins. In particular, it adds the HtmlWebpackPlugin, configured to use the public/index.html file as a template. This plugin will automatically generate an HTML file with the correct script tags for the bundled JavaScript.
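devServer: { static: { directory: path.join(__dirname, 'public') }, port: 3000, }, // `static` assumes webpack-dev-server 4+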
This configuration is for the webpack development server. It specifies the base directory for serving static files (public in this case) and the port number (3000) on which the development server will run. The development server provides features like hot-reloading during development.
And there you have it! We’ve just scratched the surface of the wild world of webpack. But don’t worry, this is just the opening act. Grab your gear, because in the upcoming articles, we’re going to plunge into the deep end, exploring the advanced terrains of webpack. Stay tuned!
Fast-growing tech companies rely heavily on Amazon EKS clusters to host a variety of microservices and applications. The pairing of Amazon EKS for managing the Kubernetes Control Plane and Amazon EC2 for flexible Kubernetes nodes creates an optimal environment for running containerized workloads.
With the increasing scale of operations, optimizing costs across multiple EKS clusters has become a critical priority. This blog will demonstrate how we can leverage various tools and strategies to analyze, optimize, and manage EKS costs effectively while maintaining performance and reliability.
Cost Analysis:
Cost analysis is the necessary first step for any cost optimization work. Data plays an important role here, so trust your data. The total cost of operating an EKS cluster encompasses several components. The EKS Control Plane (or Master Node) incurs a fixed cost of $0.20 per hour, offering straightforward pricing.
Meanwhile, EC2 instances, serving as the cluster’s nodes, introduce various cost factors, such as block storage and data transfer, which can vary significantly based on workload characteristics. For this discussion, we’ll focus primarily on two aspects of EC2 cost: instance hours and instance pricing. Let’s look at how to do the cost analysis on your EKS cluster.
Tool Selection: We can begin our cost analysis journey by selecting Kubecost, a powerful tool specifically designed for Kubernetes cost analysis. Kubecost provides granular insights into resource utilization and costs across our EKS clusters.
Deployment and Usage: Deploying Kubecost is straightforward; we can integrate it with our Kubernetes clusters by following the provided documentation. Kubecost’s intuitive dashboard allows us to visualize resource usage, cost breakdowns, and cost allocation by namespace, pod, or label. Once deployed, you can open the Kubecost overview page in your browser by port-forwarding the Kubecost Kubernetes service. It might take 5-10 minutes for Kubecost to gather metrics. You can then see your Amazon EKS spend, including cumulative cluster costs, associated Kubernetes asset costs, and monthly aggregated spend.
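For example (names assume the default Kubecost Helm install):

kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090

The dashboard is then available at http://localhost:9090.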
Cluster Level Cost Analysis: For multi-cluster cost analysis and cluster-level scoping, consider using an AWS tagging strategy and tag your EKS clusters; the AWS documentation covers tagging strategies in detail. You can then view your cost analysis in AWS Cost Explorer, which provides additional insights into your AWS usage and spending trends. By analyzing cost and usage data at a granular level, we can identify areas for further optimization and cost reduction.
Multi-Cluster Cost Analysis using Kubecost and Prometheus: The Kubecost deployment ships with a Prometheus instance that receives cost analysis metrics. For multiple EKS clusters, we can enable a remote Prometheus server, either AWS-managed or self-managed. To collect cost analysis metrics from multiple clusters, we need to run Kubecost with an additional SigV4 proxy container that sends individual and combined cluster metrics to a common Prometheus server. You can follow the AWS documentation for Multi-Cluster Cost Analysis using Kubecost and Prometheus.
Cost Optimization Strategies:
Based on the cost analysis, the next step is to plan your cost optimization strategies. As explained in the previous section, the Control Plane has a fixed cost and straightforward pricing model. So, we will focus mainly on optimizing the data nodes and optimizing the application configuration. Let’s look at the following strategies when optimizing the cost of the EKS cluster and supporting AWS services:
Right Sizing: On the cost optimization pillar of the AWS Well-Architected Framework, we find a section on Cost-Effective Resources, which describes Right Sizing as:
“… using the lowest cost resource that still meets the technical specifications of a specific workload.”
Application Right Sizing: Right-sizing is the strategy of optimizing pod resources by allocating the appropriate CPU and memory to pods. Care must be taken to set requests that align as closely as possible to the actual utilization of these resources. If the value is too low, the containers may experience resource throttling, impacting performance. If the value is too high, there is waste, since those unused resources remain reserved for that single container. When actual utilization is lower than the requested value, the difference is called slack cost. A tool like kube-resource-report is valuable for visualizing slack cost and right-sizing the requests for the containers in a pod. Its installation instructions demonstrate how to install it via the included helm chart.
You can also consider tools like VPA recommender with Goldilocks to get an insight into your pod resource consumption and utilization.
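Whichever tool you use, right-sizing ultimately comes down to tuning the resources block in your pod specs. A sketch with illustrative numbers (derive the real values from observed utilization):

resources:
  requests:
    cpu: 250m      # illustrative; set from observed usage
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi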
Compute Right Sizing: Application right-sizing and Kubecost analysis are prerequisites for right-sizing EKS compute. Here are several strategies for compute right-sizing:
Mixed Instance Auto Scaling group: Employ a mixed instance policy to create a diversified pool of instances within your auto scaling group. This mix can include both spot and on-demand instances. However, it’s advisable not to mix instances of different sizes within the same Node group.
Node Groups, Taints, and Tolerations: Utilize separate Node Groups with varying instance sizes for different application requirements. For example, use distinct node groups for GPU-intensive and CPU-intensive applications. Use taints and tolerations to ensure applications are deployed on the appropriate node group.
Graviton Instances: Explore the adoption of Graviton Instances, which offer up to 40% better price performance compared to traditional instances. Consider migrating to Graviton Instances to optimize costs and enhance application performance.
EC2 Purchase Options: As AWS puts it:
“Spot Instances allow you to use spare compute capacity at a significantly lower cost than On-Demand EC2 instances (up to 90%).”
Understanding purchase options for Amazon EC2 is crucial for cost optimization. The Amazon EKS data plane consists of worker nodes or serverless compute resources responsible for running Kubernetes application workloads. These nodes can utilize different capacity types and purchase options, including On-Demand, Spot Instances, Savings Plans, and Reserved Instances.
On-Demand and Spot capacity offer flexibility without spending commitments. On-Demand instances are billed based on runtime and guarantee availability at On-Demand rates, while Spot instances offer discounted rates but are preemptible. Both options are suitable for temporary or bursty workloads, with Spot instances being particularly cost-effective for applications tolerant of compute availability fluctuations.
Reserved Instances involve upfront spending commitments over one or three years for discounted rates. Once a steady-state resource consumption profile is established, Reserved Instances or Savings Plans become effective. Savings Plans, introduced as a more flexible alternative to Reserved Instances, allow for commitments based on a “US Dollar spend amount,” irrespective of provisioned resources. There are two types: Compute Savings Plans, offering flexibility across instance types, Fargate, and Lambda charges, and EC2 Instance Savings Plans, providing deeper discounts but restricting compute choice to an instance family.
Tailoring your approach to your workload can significantly impact cost optimization within your EKS cluster. For non-production environments, leveraging Spot Instances exclusively can yield substantial savings. Meanwhile, implementing Mixed-Instances Auto Scaling Groups for production workloads allows for dynamic scaling and cost optimization. Additionally, for steady workloads, investing in a Savings Plan for EC2 instances can provide long-term cost benefits. By strategically planning and optimizing your EC2 instances, you can achieve a notable reduction in your overall EKS compute costs, potentially reaching savings of approximately 60-70%.
“… this (matching supply and demand) is accomplished using Auto Scaling, which helps you to scale your EC2 instances and Spot Fleet capacity up or down automatically according to conditions you define.”
Cluster Autoscaling: Therefore, a prerequisite to cost optimization on a Kubernetes cluster is to ensure you have Cluster Autoscaler running. This tool performs two critical functions in the cluster. First, it will monitor the cluster for pods that are unable to run due to insufficient resources. Whenever this occurs, the Cluster Autoscaler will update the Amazon EC2 Auto Scaling group to increase the desired count, resulting in additional nodes in the cluster. Additionally, the Cluster Autoscaler will detect nodes that have been underutilized and reschedule pods onto other nodes. Cluster Autoscaler will then decrease the desired count for the Auto Scaling group to scale in the number of nodes.
The Amazon EKS User Guide has a great section on the configuration of the Cluster Autoscaler. There are a couple of things to pay attention to when configuring the Cluster Autoscaler:
IAM Roles for Service Account – Cluster Autoscaler will require access to update the desired capacity in the Auto Scaling group. The recommended approach is to create a new IAM role with the required policies and a trust policy that restricts access to the service account used by Cluster Autoscaler. The role name must then be provided as an annotation on the service account:
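# Placeholders (<ACCOUNT_ID> and the role name) must be replaced with your own values
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<CLUSTER_AUTOSCALER_ROLE>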
Auto-Discovery Setup – Set up Cluster Autoscaler in auto-discovery mode by enabling the --node-group-auto-discovery flag as an argument. Also, make sure to tag your EKS nodes’ Auto Scaling groups with the following tags:
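k8s.io/cluster-autoscaler/enabled = true
k8s.io/cluster-autoscaler/<your-cluster-name> = owned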
Auto Scaling Group per AZ – When Cluster Autoscaler scales out, it simply increases the desired count for the Auto Scaling group, leaving the responsibility for launching new EC2 instances to the AWS Auto Scaling service. If an Auto Scaling group is configured for multiple availability zones, then the new instance may be provisioned in any of those availability zones.
For deployments that use persistent volumes, you will need to provision a separate Auto Scaling group for each availability zone. This way, when Cluster Autoscaler detects the need to scale out in response to a given pod, it can target the correct availability zone for the scale-out based on persistent volume claims that already exist in a given availability zone.
When using multiple Auto Scaling groups, be sure to include the following argument in the pod specification for Cluster Autoscaler:
--balance-similar-node-groups=true
Pod Autoscaling: Now that Cluster Autoscaler is running in the cluster, you can be confident that instance hours will align closely with the demand from pods within the cluster. Next up is the Horizontal Pod Autoscaler (HPA), which scales the number of pods in a deployment out or in based on specific pod metrics, optimizing pod hours and, in turn, instance hours.
The HPA controller is included with Kubernetes, so all that is required to configure HPA is to ensure the Kubernetes metrics server is deployed in your cluster and then define HPA resources for your deployments. For example, the following HPA resource is configured to monitor the CPU utilization of a deployment named nginx-ingress-controller. HPA will then scale the number of pods out or in between 1 and 5 to target an average CPU utilization of 80% across all the pods:
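A sketch of that resource, using the autoscaling/v2 API:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress-controller
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80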
The combination of Cluster Autoscaler and Horizontal Pod Autoscaler is an effective way to keep EC2 instance hours tied as close as possible to the actual utilization of the workloads running in the cluster.
Down Scaling: In addition to demand-based automatic scaling, the Matching Supply and Demand section of the AWS Well-Architected Framework cost optimization pillar includes the following recommendation:
“Systems can be scheduled to scale out or in at defined times, such as the start of business hours, thus ensuring that resources are available when users arrive.”
There are many deployments that only need to be available during business hours. A tool named kube-downscaler can be deployed to the cluster to scale in and out the deployments based on time of day.
Some example use cases of kube-downscaler are:
Deploy the downscaler to a test (non-prod) cluster with a default uptime or downtime time range to scale down all deployments during the night and weekend.
Deploy the downscaler to a production cluster without any default uptime/downtime setting and scale down specific deployments by setting the downscaler/uptime (or downscaler/downtime) annotation. This might be useful for internal tooling front ends, which are only needed during work time.
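For the second case, the annotation looks like this (the schedule value is illustrative):

metadata:
  annotations:
    downscaler/uptime: Mon-Fri 08:00-18:00 Europe/Berlin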
AWS Fargate with EKS: You can run Kubernetes without managing clusters of K8s servers with AWS Fargate, a serverless compute service.
AWS Fargate pricing is based on usage (pay-per-use), with no upfront charges. There is, however, a one-minute minimum charge, and all charges are rounded up to the nearest second. You will also be charged for any additional services you use, such as CloudWatch utilization charges and data transfer fees. Fargate can also reduce your management costs by reducing the number of DevOps professionals and tools you need to run Kubernetes on Amazon EKS.
Conclusion:
Effectively managing costs across multiple Amazon EKS clusters is essential for optimizing operations. By utilizing tools like Kubecost and AWS Cost Explorer, coupled with strategies such as right-sizing, mixed instance policies, and Spot Instances, organizations can streamline cost analysis and optimize resource allocation. Additionally, implementing auto-scaling mechanisms like Cluster Autoscaler ensures dynamic resource scaling based on demand, further optimizing costs. Leveraging AWS Fargate with EKS can eliminate the need to manage Kubernetes clusters, reducing management costs. Overall, by combining these strategies, organizations can achieve significant cost savings while maintaining performance and reliability in their containerized environments.
Go interfaces are powerful tools for designing flexible and adaptable code. However, their inner workings can often seem hidden behind the simple syntax.
This blog post aims to peel back the layers and explore the internals of Go interfaces, providing you with a deeper understanding of their power and capabilities.
1. Interfaces: Not Just Method Signatures
While interfaces appear as collections of method signatures, they are deeper than that. An interface defines a contract: any type that implements the interface guarantees the ability to perform specific actions through those methods. This contract-based approach promotes loose coupling and enhances code reusability.
// Interface defining a "printable" behavior
type Printable interface {
    String() string
}

// Struct types implementing the Printable interface
type Book struct {
    Title string
}

type Article struct {
    Title   string
    Content string
}

// Implement String() method to fulfill the contract
func (b Book) String() string {
    return b.Title
}

// Implement String() method to fulfill the contract
func (a Article) String() string {
    return fmt.Sprintf("%s", a.Title)
}
Here, both Book and Article types implement the Printable interface by providing a String() method. This allows us to treat them interchangeably in functions expecting Printable values.
2. Interface Values and Dynamic Typing
An interface variable doesn’t hold a value of the interface type itself. Instead, it refers to an underlying concrete type that implements the interface. Go determines the actual type dynamically at runtime. This allows for flexible operations like:
func printAll(printables []Printable) {
    for _, p := range printables {
        fmt.Println(p.String()) // Calls the appropriate String() based on concrete type
    }
}

book := Book{Title: "Go for Beginners"}
article := Article{Title: "The power of interfaces"}
printables := []Printable{book, article}
printAll(printables)
The printAll function takes a slice of Printable and iterates over it. Go dynamically invokes the correct String() method based on the concrete type of each element (Book or Article) within the slice.
3. Embedded Interfaces and Interface Inheritance
Go interfaces support embedding existing interfaces to create more complex contracts. This allows for code reuse and hierarchical relationships, further enhancing the flexibility of your code:
type Writer interface {
    Write(data []byte) (int, error)
}

type ReadWriter interface {
    Writer
    Read([]byte) (int, error)
}

type MyFile struct {
    // ... file data and methods
}

// MyFile implements both Writer and ReadWriter by providing the required methods
func (f *MyFile) Write(data []byte) (int, error) {
    // ... write data to file
}

func (f *MyFile) Read(data []byte) (int, error) {
    // ... read data from file
}
Here, ReadWriter inherits all methods from the embedded Writer interface, effectively creating a more specific “read-write” contract.
4. The Empty Interface and Its Power
The special interface{} represents the empty interface, meaning it requires no specific methods. This seemingly simple concept unlocks powerful capabilities:
// Function accepting any type using the empty interface
func PrintAnything(value interface{}) {
    fmt.Println(reflect.TypeOf(value), value)
}

PrintAnything(42)       // Output: int 42
PrintAnything("Hello")  // Output: string Hello
PrintAnything(MyFile{}) // Output: main.MyFile {}
This function can accept any type because interface{} has no requirements. Internally, Go uses reflection to extract the actual type and value at runtime, enabling generic operations.
5. Understanding Interface Equality and Comparisons
Equality checks on interface values involve both the dynamic type and underlying value:
book1 := Book{Title: "Go for Beginners"}
book2 := Book{Title: "Go for Beginners"}

// Same type and value, so equal
fmt.Println(book1 == book2) // true

differentBook := Book{Title: "Go for Dummies"}

// Same type, different value, so not equal
fmt.Println(book1 == differentBook) // false

article := Article{Title: "Go for Beginners"}

// This will cause a compilation error
fmt.Println(book1 == article) // Error: invalid operation: book1 == article (mismatched types Book and Article)
However, it’s essential to remember how equality works for interface values themselves: two interface values compare equal with == only when both their dynamic types and their dynamic values are equal, and the comparison panics at runtime if the dynamic type is not comparable (a slice, map, or function, for example).
To compare interface values effectively, you can utilize two main approaches:
1. Type Assertions: These allow you to safely access the underlying value and perform comparisons if you’re certain about the actual type:
func getBookTitleFromPrintable(p Printable) (string, bool) {
    book, ok := p.(Book) // Check if p is a Book
    if ok {
        return book.Title, true
    }
    return "", false // Return empty string and false if not a Book
}

bookTitle, ok := getBookTitleFromPrintable(article)
if ok {
    fmt.Println("Extracted book title:", bookTitle)
} else {
    fmt.Println("Article is not a Book")
}
2. Custom Comparison Functions: You can also create dedicated functions to compare interface values based on specific criteria:
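For instance, here’s a sketch of one possible criterion (equalByString is a hypothetical helper, not part of the standard library): it compares two Printable values by their String() output instead of their concrete types.

// Compare Printable values by their string output, ignoring concrete types
func equalByString(a, b Printable) bool {
    return a.String() == b.String()
}

fmt.Println(equalByString(book1, article)) // true: both print "Go for Beginners"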
6. Pointer Receivers and Interfaces
Methods that need to modify their receiver are declared with a pointer receiver. Consider a minimal counter type:
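type MyCounter struct {
    count int
}

// Increment takes a pointer receiver so it can modify count in place
func (c *MyCounter) Increment() {
    c.count++
}

The Increment method receives a pointer to MyCounter, allowing it to directly modify the count field. Note that only *MyCounter (not MyCounter) satisfies an interface requiring Increment, since the method belongs to the pointer type’s method set.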
7. Error Handling and Interfaces
Go interfaces play a crucial role in error handling. The built-in error interface defines a single method, Error() string, used to represent errors:
type error interface {
    Error() string
}

// Custom error type implementing the error interface
type MyError struct {
    message string
}

func (e MyError) Error() string {
    return e.message
}

func myFunction() error {
    // ... some operation
    return MyError{"Something went wrong"}
}

if err := myFunction(); err != nil {
    fmt.Println("Error:", err.Error()) // Prints "Something went wrong"
}
By adhering to the error interface, custom errors can be seamlessly integrated into Go’s error-handling mechanisms.
8. Interface Values and Nil
Interface values can be nil, indicating they don’t hold any concrete value. However, attempting to call methods on a nil interface value results in a panic.
var printable Printable // nil interface value
fmt.Println(printable.String()) // Panics!
Always check for nil before calling methods on interface values.
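A simple guard is enough:

if printable != nil {
    fmt.Println(printable.String())
}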
Beyond that, it’s important to understand that an interface{} value doesn’t simply hold a reference to the underlying data. Internally, Go creates a special structure to store both the type information and the actual value. This hidden structure is often referred to as “boxing” the value.
Imagine a small container holding both a label indicating the type (e.g., int, string) and the actual data inside. In the Go runtime, that container looks something like this:
type iface struct {
    tab  *itab
    data unsafe.Pointer
}
Technically, this structure involves two components:
tab: This type descriptor carries details like the interface’s method set, the underlying type, and the methods of the underlying type that implement the interface.
data pointer: This pointer directly points to the memory location where the actual value resides.
When you retrieve a value from an interface{}, Go performs “unboxing.” It reads the type information and data pointer and then creates a new variable of the appropriate type based on this information.
This internal mechanism might seem complex, but the Go runtime handles it seamlessly. However, understanding this concept can give you deeper insights into how Go interfaces work under the hood.
9. Conclusion
This journey through the magic of Go interfaces has hopefully given you a deeper understanding of their capabilities and how they work. We’ve explored how they go beyond simple method signatures to define contracts, enable dynamic behavior, and make code far more flexible.
Remember, interfaces are not just tools for code reuse, but also powerful mechanisms for designing adaptable and maintainable applications.
Here are some key takeaways to keep in mind:
Interfaces define contracts, not just method signatures.
Interfaces enable dynamic typing and flexible operations.
Embedded interfaces allow for hierarchical relationships and code reuse.
The empty interface unlocks powerful generic capabilities.
Understand the nuances of interface equality and comparisons.
Interfaces play a crucial role in Go’s error-handling mechanisms.
Be mindful of nil interface values and potential panics.