Tag: docker

  • Serverpod: The Ultimate Backend for Flutter

    Join us on this exhilarating journey, where we bridge the gap between frontend and backend development with the seamless integration of Serverpod and Flutter.

    Gone are the days of relying on different programming languages for frontend and backend development. With Flutter’s versatile framework, you can effortlessly create stunning user interfaces for a myriad of platforms. However, the missing piece has always been the ability to build the backend in Dart as well—until now.

    Introducing Serverpod, the missing link that completes the Flutter ecosystem. Now, with Serverpod, you can develop your entire application, from frontend to backend, all within the familiar and elegant Dart language. This synergy enables a seamless exchange of data and functions between the client and the server, reducing development complexities and boosting productivity.

    1. What is Serverpod?

    As developers and tech enthusiasts, we recognize the critical role backend services play in the success of any application. Whether you’re building a web, mobile, or desktop project, a robust backend infrastructure is the backbone that ensures seamless functionality and scalability.

    That’s where “Serverpod” comes into the picture—an innovative backend solution developed entirely in Dart, just like your Flutter projects. With Serverpod at your disposal, you can harness the full power of Dart on both the frontend and backend, creating a harmonious development environment that streamlines your workflow.

    The biggest advantage of using Serverpod is that it automates protocol and client-side code generation by analyzing your server, making remote endpoint calls as simple as local method calls.
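To illustrate what this looks like in practice, here is a hypothetical call against a generated client (the `client.session.login` name comes from the sample project built later in this post; the surrounding setup is assumed, not shown):

```dart
// Hypothetical usage of a Serverpod-generated client.
// `client.session.login` maps to a `login` method on a server-side
// `SessionEndpoint`; Serverpod generates all the networking plumbing,
// so the remote call reads like a local async method call.
final user = await client.session.login('jane@example.com', 'secret');
if (user != null) {
  print('Logged in as ${user.username}');
}
```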

    1.1. Current market status

    The top 10 programming languages for backend development in 2023 are as follows: 

    [Note: The results presented here are not absolute and are based on a combination of surveys conducted in 2023, including ‘Stack Overflow Developer Survey – 2023,’ ‘State of the Developer Ecosystem Survey,’ ‘New Stack Developer Survey,’ and more.]

    • Node.js – ~32%
    • Python (Django, Flask) – ~28%
    • Java (Spring Boot, Java EE) – ~18%
    • Ruby (Ruby on Rails) – ~7%
    • PHP (Laravel, Symfony) – ~6%
    • Go (Golang) – ~3%
    • .NET (C#) – ~2%
    • Rust – ~1%
    • Kotlin (Spring Boot with Kotlin) – ~1%
    • Express.js (for Node.js) – ~1%
    Figure 01

    Figure 01 provides a comprehensive overview of the current usage of backend development technologies, showcasing a plethora of options with diverse features and capabilities. However, the landscape takes a different turn when it comes to frontend development. While the backend technologies offer a wealth of choices, most of these languages lack native multiplatform support for frontend applications.

    As a result, developers find themselves in a situation where they must choose between two sets of languages or technologies for backend and frontend business logic development.

    1.2. New solution

    As the demand for multiplatform applications continues to grow, developers are actively exploring new frameworks and languages that bridge the gap between backend and frontend development. Recently, a groundbreaking solution has emerged in the form of Serverpod. With Serverpod, developers can now accomplish server development in Dart, filling the crucial gap that was previously missing in the Flutter ecosystem.

    Flutter has already demonstrated its remarkable support for a wide range of platforms. The absence of server development capabilities was a notable limitation that has now been triumphantly addressed with the introduction of Serverpod. This remarkable achievement enables developers to harness the power of Dart to build both frontend and backend components, creating unified applications with a shared codebase.

    2. Configurations 

    Prior to proceeding with the code implementation, it is essential to set up and install the necessary tools.

    [Note: Given Serverpod’s initial stage, encountering errors without readily available online solutions is plausible. In such instances, seeking assistance from the Flutter community forum is highly recommended. Drawing from my experience, I suggest running the application on Flutter web first, particularly for Serverpod version 1.1.1, to ensure a smoother development process and gain insights into potential challenges.]

    2.1. Initial setup

    2.1.1 Install Docker

    Docker serves a crucial role in Serverpod, facilitating:

    • Containerization: Applications are packaged and shipped as containers, enabling seamless deployment and execution across diverse infrastructures.
    • Isolation: Applications are isolated from one another, enhancing both security and performance aspects, safeguarding against potential vulnerabilities, and optimizing system efficiency.

    Download & Install Docker from here.

    2.1.2 Install Serverpod CLI 

    • Run the following command:
    dart pub global activate serverpod_cli

    • Now test the installation by running:
    serverpod

    With proper configuration, the Serverpod command displays help information.

    2.2. Project creation

    Serverpod commands require a running Docker instance, so launch the Docker application before executing them.

    • Create a new project with the command:
    serverpod create <your_project_name>

    Upon execution, a new directory will be generated with the specified project name, comprising three Dart packages:

    <your_project_name>_server: This package is designated for server-side code, encompassing essential components such as business logic, API endpoints, DB connections, and more.
    <your_project_name>_client: Within this package, the code responsible for server communication is auto-generated. Manual editing of files in this package is typically avoided.
    <your_project_name>_flutter: Representing the Flutter app, it comes pre-configured to connect to your local server, enabling communication between frontend and backend out of the box.
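Schematically, the generated directory looks like this:

```
<your_project_name>/
├── <your_project_name>_server/    # server-side code, endpoints, DB access
├── <your_project_name>_client/    # auto-generated client library
└── <your_project_name>_flutter/   # the Flutter app
```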

    2.3. Project execution

    Step 1: Navigate to the server package with the following command:

    cd <your_project_name>/<your_project_name>_server

    Step 2: (Optional) Open the project in the VS Code IDE using the command:

    (Note: You can use any IDE you prefer, but for our purposes, we’ll use VS Code, which also simplifies DB connection later.)

    code .

    Step 3: Once the project is open in the IDE, reset any existing Docker containers and tables with this script:

    .\setup-tables.cmd

    Step 4: Before starting the server, initiate new Docker containers with the following command:

    docker-compose up --build --detach

    Step 5: The command above will start PostgreSQL and Redis containers, and you should receive the output:

    ~> docker-compose up --build --detach
    	[+] Running 2/2
     	✔ Container <your_project_name>_server-redis-1     Started                                                                                                
     	✔ Container <your_project_name>_server-postgres-1  Started

    (Note: If the output doesn’t match, refer to this Stack Overflow link for missing commands in the official documentation.)

    Step 6: Proceed to start the server with this command:

    dart bin/main.dart

    Step 7: Upon successful execution, you will receive the following output, where the “Server Default listening on port” value is crucial. Please take note of this value.

    ~> dart bin/main.dart
     	SERVERPOD version: 1.1.1, dart: 3.0.5 (stable) (Mon Jun 12 18:31:49 2023 +0000) on "windows_x64", time: 2023-07-19 15:24:27.704037Z
     	mode: development, role: monolith, logging: normal, serverId: default
     	Insights listening on port 8081
     	Server default listening on port 8080
     	Webserver listening on port 8082
     	CPU and memory usage metrics are not supported on this platform.

    Step 8: Append the “Server default listening on port” value to “localhost” (i.e., “127.0.0.1”) and load that URL in your browser. Accessing “localhost:8080” will display the expected output, indicating that your server is running and ready to process requests.

    Figure 02

    Step 9: Now that the containers have reached the “Started” state, you can establish a connection with the database. We have opted for PostgreSQL as our DB, and the rationale behind this selection lies in the “docker-compose.yaml” file at the server project’s root: PostgreSQL is already added in the “services” section, making it an ideal choice since the required setup is readily available.

    Figure 03

    For the database setup, we need key information, such as Host, Port, Username, and Password. You can find all this vital information in the “config” directory’s “development.yaml” and “passwords.yaml” files. If you encounter difficulties locating these details, please refer to Figure 04.

    Figure 04

    Step 10: To establish the connection, you can install an application such as Postico or, alternatively, I recommend the MySQL extension, which installs into VS Code with one click and, despite its name, also supports PostgreSQL connections.

    Figure 05

    Step 11: Follow these steps:

    1. Select the “Database” option.
    2. Click on “Create Connection.”
    3. Choose the “PostgreSQL” option.
    4. Add a name for your connection.
    5. Fill in the information collected in the previous step.
    6. Finally, select the “Connect” option.
    Figure 06
    7. Upon success, you will receive a “Connect Success!” message, and the new connection will appear in the Explorer tab.
    Figure 07

    Step 12: Now, we shift our focus to the Flutter project (Frontend):

    Thus far, we have been working on the server project. Let us open a new VS Code instance for a separate Flutter project while keeping the current VS Code instance active, serving as the server.

    Step 13: Execute the following command to run the Flutter project on Chrome:

    flutter run -d chrome

    With this, the default project will generate the following output:

    Step 14: When you are finished, you can shut down Serverpod with “Ctrl-C.”

    Step 15: Then stop Postgres and Redis.

    docker compose stop

    Figure 08

    3. Sample Project

    So far, we have successfully created and executed the project, identifying three distinct components. The server project caters to server/backend development, while the Flutter project handles application/frontend development. The client project, automatically generated, serves as the vital intermediary, bridging the gap between the frontend and backend.

    However, merely acknowledging the projects’ existence is insufficient. To maximize our proficiency, it is crucial to grasp the code and file structure comprehensively. To achieve this, we will embark on a practical journey, constructing a small project to gain hands-on experience and unlock deeper insights into all three components. This approach empowers us with a well-rounded understanding, further enhancing our capabilities in building remarkable applications.

    3.1. What are we building?

    In this blog, we will construct a sample project with basic Login and SignUp functionality. The SignUp process will collect user information such as Email, Password, Username, and age. Users can subsequently log in using their email and password, leading to the display of user details on the dashboard screen. With the initial system setup complete and the newly created project up and running, it’s time to commence coding. 

    3.1.1 Create custom models for API endpoints

    Step 1: Create a new file named “users.yaml” in the “lib >> src >> protocol” directory:

    class: Users
    table: users
    fields:
      username: String
      email: String
      password: String
      age: int

    Step 2: Save the file and run the following command to generate essential data classes and table creation queries:

    serverpod generate

    (Note: Append “--watch” to the command for continuous code generation.)

    Successful execution of the above command will generate a new file named “users.dart” in the “lib >> src >> generated” folder. Additionally, the “tables.pgsql” file now contains SQL queries for creating the “users” table. The same command updates the auto-generated code in the client project. 
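For reference, the generated definition in “tables.pgsql” will look roughly like the following sketch (the exact output can differ between Serverpod versions):

```sql
CREATE TABLE "users" (
  "id" serial,
  "username" text NOT NULL,
  "email" text NOT NULL,
  "password" text NOT NULL,
  "age" integer NOT NULL
);

ALTER TABLE ONLY "users"
  ADD CONSTRAINT users_pkey PRIMARY KEY (id);
```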

    3.1.2 Create Tables in DB for the generated model 

    Step 1: Copy the queries written in the “generated >> tables.pgsql” file.

    In the MySQL Extension’s Database section, select the created database >> [project_name] >> public >> Tables >> + (Create New Table).

    Figure 09

    Step 2: Paste the queries into the newly created .sql file and click “Execute” above both queries.

    Figure 10

    Step 3: After execution, you will obtain an empty table with the “id” as the Primary key.

    Figure 11

    If you find multiple tables already present in your database, as shown in the next figure, you can ignore them. These tables are created by the system from the queries in the “generated >> tables-serverpod.pgsql” file.

    Figure 12

    3.1.3 Create an API endpoint

    Step 1: Generate a new file in the “lib >> src >> endpoints” directory named “session_endpoints.dart”:

    import 'package:serverpod/serverpod.dart';

    import '../generated/protocol.dart';

    class SessionEndpoint extends Endpoint {
      /// Returns the matching user, or null if the credentials are invalid.
      Future<Users?> login(Session session, String email, String password) async {
        final userList = await Users.find(session,
            where: (p0) =>
                (p0.email.equals(email)) & (p0.password.equals(password)));
        return userList.isEmpty ? null : userList.first;
      }

      /// Inserts the new user row; returns true on success.
      /// (Note: a production app should store a password hash, never plaintext.)
      Future<bool> signUp(Session session, Users newUser) async {
        try {
          await Users.insert(session, newUser);
          return true;
        } catch (e) {
          print(e.toString());
          return false;
        }
      }
    }

    If “serverpod generate --watch” is already running, you can skip Step 2.

    Step 2: Run the command:

    serverpod generate

    Step 3: Start the server.
    [For help, see Steps 1 to 6 in the Project execution section.]

    3.1.4 Create three screens

    Login Screen:

    Figure 13

    SignUp Screen:

    Figure 14

    Dashboard Screen:

    Figure 15

    3.1.5 Setup Flutter code

    Step 1: Add the following code to the SignUp button’s handler in the SignUp screen to handle user signups.

    try {
      // Call the generated signUp endpoint with the form values.
      final result = await client.session.signUp(
        Users(
          email: _emailEditingController.text.trim(),
          username: _usernameEditingController.text.trim(),
          password: _passwordEditingController.text.trim(),
          age: int.parse(_ageEditingController.text.trim()),
        ),
      );
      if (result) {
        Navigator.pop(context);
      } else {
        // Wrap assignments like this in setState() so the UI refreshes.
        _errorText = 'Something went wrong. Try again.';
      }
    } catch (e) {
      debugPrint(e.toString());
      _errorText = e.toString();
    }

    Step 2: Add the following code to the Login button’s handler in the Login screen to handle user logins.

    try {
      // Call the generated login endpoint; a null result means bad credentials.
      final result = await client.session.login(
        _emailEditingController.text.trim(),
        _passwordEditingController.text.trim(),
      );
      if (result != null) {
        _emailEditingController.text = '';
        _passwordEditingController.text = '';
        Navigator.push(
          context,
          MaterialPageRoute(
            builder: (context) => DashboardPage(user: result),
          ),
        );
      } else {
        _errorText = 'Something went wrong. Try again.';
      }
    } catch (e) {
      debugPrint(e.toString());
      _errorText = e.toString();
    }

    Step 3: Implement logic to display user data on the dashboard screen.
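As a minimal sketch (the widget structure here is illustrative; only the Users fields come from the model defined earlier), the dashboard can simply render the user object passed in from the login screen:

```dart
// Minimal sketch of a dashboard screen, assuming a `Users` object
// is passed in from the login flow.
class DashboardPage extends StatelessWidget {
  final Users user;
  const DashboardPage({super.key, required this.user});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: const Text('Dashboard')),
      body: Column(
        children: [
          Text('Username: ${user.username}'),
          Text('Email: ${user.email}'),
          Text('Age: ${user.age}'),
        ],
      ),
    );
  }
}
```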

    With these steps completed, our Flutter app becomes a fully functional project, showcasing the power of this new technology. Armed with Dart knowledge, every Flutter developer can transform into a proficient full-stack developer.

    4. Result

    Figure 16

    To facilitate your exploration, the entire project code is available in this code repository. Refer to it for the complete source code and a deeper look at the implementation details.

    5. Conclusion

    In conclusion, we have provided a comprehensive walkthrough of the step-by-step setup process for running Serverpod seamlessly. We explored creating data models, integrating the database with our server project, defining tables, executing data operations, and establishing accessible API endpoints for Flutter applications.

    Hopefully, this blog post has kindled your curiosity to delve deeper into Serverpod’s immense potential for elevating your Flutter applications. Embracing Serverpod unlocks a world of boundless possibilities, empowering you to achieve remarkable feats in your development endeavors.

    Thank you for investing your time in reading this informative blog!

    6. References

    1. https://docs.flutter.dev/
    2. https://pub.dev/packages/serverpod/
    3. https://serverpod.dev/
    4. https://docs.docker.com/get-docker/
    5. https://medium.com/serverpod/introducing-serverpod-a-complete-backend-for-flutter-written-in-dart-f348de228e19
    6. https://medium.com/serverpod/serverpod-our-vision-for-a-seamless-scalable-backend-for-the-flutter-community-24ba311b306b
    7. https://stackoverflow.com/questions/76180598/serverpod-sql-error-when-starting-a-clean-project
    8. https://www.youtube.com/watch?v=3Q2vKGacfh0
    9. https://www.youtube.com/watch?v=8sCxWBWhm2Y

  • Monitoring a Docker Container with Elasticsearch, Kibana, and Metricbeat

    Since you are on this page, you have probably already started using Docker to deploy your applications and are enjoying it compared to virtual machines: containers are lightweight, easy to deploy, and come with strong security management features.

    And, once the applications are deployed, monitoring your containers and tracking their activities in real time is essential. Imagine a scenario where you are managing one or a few virtual machines. Your pre-configured session can handle everything, including monitoring: if you face a problem during production, a handful of commands such as top, htop, and iotop, with flags and sort keys like -o, %CPU, and %MEM, is enough to troubleshoot the issue.

    On the other hand, consider a scenario where the same workload is spread across 100 to 200 containers. Commands alone no longer scale: you need to see all activity in one place and query for information about what happened. This is where monitoring comes into the picture. We will discuss more benefits as we move further.

    This blog will cover Docker monitoring with Elasticsearch, Kibana, and Metricbeat. Basically, Elasticsearch is a platform that allows us to have distributed search and analysis of data in real-time along with visualization. We’ll be discussing how all these work interdependently as we move ahead. Like Elasticsearch, Kibana is also open-source software. Kibana is an interface mainly used to visualize the data sent from Elasticsearch. Metricbeat is a lightweight shipper of collected metrics from your system to the desired target (Elasticsearch in this case). 

    What is Docker Monitoring?

    In simple terms, container monitoring is how we keep track of key container metrics (CPU, memory, network, and disk usage) and analyze them to ensure the performance of applications built on microservices, and to track issues so they can be solved more easily. Monitoring is vital for performance improvement and optimization, and for finding the root cause (RCA) of various issues.

    There is a lot of software available for monitoring Docker containers, both open-source and proprietary: Prometheus, AppOptics, Metricbeat, Datadog, Sumo Logic, etc.

    You can choose any of these based on convenience. 

    Why is Docker Monitoring needed?

    1. Monitoring helps detect and fix issues early, avoiding breakdowns during production.
    2. New features and updates can be implemented safely, as the entire application is monitored.
    3. Docker monitoring is beneficial for developers, IT pros, and enterprises alike.
    • For developers, Docker monitoring tracks bugs and helps resolve them quickly, while enhancing security.
    • For IT pros, it integrates flexibly with existing processes and enterprise systems and satisfies their requirements.
    • For enterprises, it helps build applications within certified containers in a secured ecosystem that runs smoothly.

    Elasticsearch is a platform that allows us to have distributed search and analysis of data in real-time, along with visualization. Elasticsearch is free and open-source software. It goes well with a huge number of technologies, like Metricbeat, Kibana, etc. Let’s move onto the installation of Elasticsearch.

    Installation of Elasticsearch:

    Prerequisite: Elasticsearch is built on Java, so make sure your system has at least Java 8 to run Elasticsearch.

    For installing Elasticsearch for your OS, please follow the steps at Installing Elasticsearch | Elasticsearch Reference [7.11].

    After installing, check the status of Elasticsearch by sending an HTTP request to port 9200 on localhost:

    http://localhost:9200/

    This will give you a response as below:
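A typical response looks roughly like the following (node name, cluster name, and version will vary with your installation):

```json
{
  "name" : "my-node",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "7.11.1"
  },
  "tagline" : "You Know, for Search"
}
```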

    You can configure Elasticsearch by editing $ES_HOME/config/elasticsearch.yml.

    Learn more about configuring Elasticsearch here.

    Now, we are done with the Elasticsearch setup and are ready to move onto Kibana.

    Kibana:

    Like Elasticsearch, Kibana is also open-source software. Kibana is an interface mainly used to visualize data from Elasticsearch. It lets you query your data and generate numerous visuals to match your requirements, presenting enormous amounts of data as line graphs, gauges, and many other chart types.

    Let’s cover the installation steps of Kibana.

    Installing Kibana

    Prerequisites: 

    • Must have Java 1.8+ installed
    • Elasticsearch v1.4.4+
    • Web browser such as Chrome, Firefox

    For installing Kibana with respect to your OS, please follow the steps at Install Kibana | Kibana Guide [7.11]

    Kibana runs on default port number 5601. Just send an HTTP request to port 5601 on localhost with http://localhost:5601/ 

    You should land on the Kibana dashboard, and it is now ready to use:

    You can configure Kibana by editing $KIBANA_HOME/config/kibana.yml. For more about configuring Kibana, visit here.

    Let’s move onto the final part—setting up with Metricbeat.

    Metricbeat

    Metricbeat sends metrics at regular intervals; it is a lightweight shipper of metrics collected from your system.

    You can simply install Metricbeat on your systems or servers to periodically collect metrics from the OS and from the services running on them. The collected metrics are shipped to the output you specify, e.g., Elasticsearch or Logstash.

    Installing Metricbeat

    For installing Metricbeat according to your OS, follow the steps at Metricbeat quick start | Metricbeat Reference [7.11]

    As soon as we enable the Docker module (with “metricbeat modules enable docker”) and start the Metricbeat service, it sends Docker metrics to an Elasticsearch index, which can be confirmed by querying the Elasticsearch indices with the command:

    curl -XGET 'localhost:9200/_cat/indices?v&pretty'
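If Metricbeat is shipping data, the output should include at least one metricbeat index, roughly like this (columns abbreviated; names and counts will vary):

```
health status index                         uuid  pri rep docs.count ...
yellow open   metricbeat-7.11.1-2021.03.01  ...   1   1   52734      ...
```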

    How Are They Internally Connected?

    We have now installed all three, and they are up and running. At the interval configured in the Docker module’s docker.yml, Metricbeat polls the Docker API and sends the Docker metrics to Elasticsearch, where they become available in Elasticsearch indices. As mentioned earlier, Kibana queries the data from Elasticsearch and visualizes it in the form of graphs. This is how all three are connected.
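For reference, once the Docker module is enabled (metricbeat modules enable docker), its configuration in modules.d/docker.yml looks roughly like this by default:

```yaml
- module: docker
  metricsets:
    - container
    - cpu
    - diskio
    - memory
    - network
  # Metricbeat reads metrics straight from the Docker daemon socket.
  hosts: ["unix:///var/run/docker.sock"]
  # Polling interval: how often the Docker API is queried.
  period: 10s
  enabled: true
```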

    Please refer to the flow chart for more clarification:

    How to Create Dashboards?

    Now that we are aware of how these three tools work interdependently, let’s create dashboards to monitor our containers and understand those. 

    First of all, open the Dashboards section on Kibana (localhost:5601/) and click the Create dashboard button:

     

    You will be directed to the next page:

    Choose the type of visualization you want from all options:

    For example, let’s go with Lens

    (Learn more about Kibana Lens)

    Here, we will be looking for the number of containers vs. timestamps by selecting the timestamp on X-axis and the unique count of docker.container.created on Y-axis.

    As soon as we have selected both parameters, a graph is generated as shown in the snapshot, and we get the count of created containers (here Count=1). If you create more containers on your system, the graph and the counter are updated once that metric data reaches Elasticsearch. In this way, you can monitor how many containers are created over time. In similar fashion, depending on your monitoring needs, you can choose a parameter from the left panel of available fields, such as:

    • activemq.broker.connections.count
    • docker.container.status
    • docker.container.tags

    Now, let’s look at one more example: creating a bar graph.

    To create a bar graph, choose “Vertical bar” from the visualization options in the snapshot above. Here, I want a bar graph of document count vs. metricset names, such as network, filesystem, and cpu. As shown in the left and right sides of the snapshot, choose Count as the Y-axis parameter and metricset.name as the X-axis parameter.

    After hitting enter, a graph will be generated: 

    Similarly, you can try it out with multiple parameters with different types of graphs to monitor. Now, we will move onto the most important and widely used monitoring tool to track warnings, errors, etc., which is DISCOVER.

    Discover for Monitoring:

    Basically, Discover provides deep insight into your data and lets you apply searches and filters. With it, you can show only the processes that are taking the most time, filter for log entries whose message field contains ERROR, check the health of a container, or check for logged-in users. Queries like these can be issued much as you would use SQL queries, leading to effective monitoring of containers.
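For example, in Discover’s search bar the Kibana Query Language (KQL) uses a simple `field : value` syntax; filters like the following can be used (the exact field names depend on the modules and data you have enabled, so treat these as illustrative):

```
message : "ERROR"
docker.container.status : "running"
metricset.name : "memory" and message : "No memory stats data available"
```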

    [More about Discover here.]

    To apply filters, just click on the “filter by type” from the left panel, and you will see all available filtering options. From there, you can select one as per your requirements, and view those on the central panel. 

    Similar to filter, you can choose fields to be shown on the dashboard from the left panel with “Selected fields” right below the filters. (Here, we have only selected info for Source.)

    Now, if you take a look at the top part of the snapshot, you will find the search bar. This is the most useful part of Discover for monitoring.

    In that bar, you just need to put a query, and according to that query, logs will be filtered. For example, I will be putting a query for error messages equal to No memory stats data available.

    When we hit the Update button on the right side, only logs containing that error message remain, highlighted for differentiation as shown in the snapshot; all other logs are hidden. In this way, you can track a particular error and confirm it no longer appears after fixing it.

    In addition to queries, Discover also provides keyword search. If you input a word like warning, error, memory, or user, it will return the logs containing that word, as with “memory” in the snapshot:

     

    Besides Kibana, logs are also available in the terminal. For example, the following highlighted portion concerns the state of your cluster. In the terminal, a simple grep command pulls out the logs you need.

    With this, you can monitor Docker containers using many kinds of queries in Discover, including nested queries. There are many different graphs you can try, depending on your requirements, to keep your application running smoothly.

    Conclusion

    Monitoring requires a lot of time and effort, and what we have seen here is a drop in the ocean. For some next steps, try:

    1. Monitoring the network
    2. Aggregating logs from your different applications
    3. Aggregating logs from multiple containers
    4. Setting up alerts and monitoring them
    5. Nested queries for logs