Asynchronous programming is a feature of modern programming languages that allows an application to start long-running operations and keep doing other work instead of blocking until each one finishes. Asynchronicity is one of the big reasons for the popularity of Node.js.
We have discussed Python’s asynchronous features as part of our previous post: an introduction to asynchronous programming in Python. This blog is a natural progression on the same topic. We are going to discuss async features in Python in detail and look at some hands-on examples.
Consider a traditional web scraping application that needs to open thousands of network connections. We could open one network connection, fetch the result, and then move to the next one, iteratively. This approach increases the latency of the program, because it spends most of its time waiting for each connection to return before starting the next.
Async, on the other hand, gives you a way to open thousands of connections at once and swap among them as each finishes and returns its result. Essentially, it sends a request on one connection and moves on to the next instead of waiting for the previous one's response, and it continues like this until all the connections have returned their outputs.
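This idea can be sketched with a toy example, assuming the network waits are simulated with asyncio.sleep (the fake_request helper and its 1-second delay are hypothetical stand-ins, not real network calls):

```python
import asyncio
import time

async def fake_request(i):
    # stands in for one network round-trip that waits ~1 second
    await asyncio.sleep(1)
    return i

async def fetch_all(n):
    # all n "requests" wait concurrently, so the total wall-clock time
    # stays close to 1 second instead of n seconds
    return await asyncio.gather(*(fake_request(i) for i in range(n)))

start = time.time()
results = asyncio.run(fetch_all(10))
print(len(results), 'requests in about', round(time.time() - start), 'second(s)')
```

Running the same ten waits sequentially would take about ten seconds; concurrently they complete in roughly one.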
From the above chart, we can see that using synchronous programming on four tasks took 45 seconds to complete, while in asynchronous programming, those four tasks took only 20 seconds.
Where Does Asynchronous Programming Fit in the Real World?
Asynchronous programming is best suited for scenarios where:
1. The program takes too long to execute.
2. The delay comes from waiting on input or output operations, not computation.
3. Many input or output operations need to run at once.
And application-wise, these are the example use cases:
Web Scraping
Network Services
Difference Between Parallelism, Concurrency, Threading, and Async IO
Because we discussed this comparison in detail in our previous post, we will just quickly go through the concept as it will help us with our hands-on example later.
Parallelism involves performing multiple operations at the same time. Multiprocessing is an example of it, and it is well suited for CPU-bound tasks.
Concurrency is slightly broader than parallelism. It involves multiple tasks running in an overlapping manner.
Threading – a thread is a separate flow of execution. One process can contain multiple threads, and each thread runs independently. Threading is ideal for IO-bound tasks.
Async IO is a single-threaded, single-process design that uses cooperative multitasking. In simple words, async IO gives a feeling of concurrency despite using a single thread in a single process.
Fig: A comparison of concurrency and parallelism
Components of Async IO Programming
Let’s explore the various components of Async IO in depth. We will also look at an example code to help us understand the implementation.
1. Coroutines
Coroutines are generalized forms of subroutines. They are generally used for cooperative multitasking and behave like Python generators.
A coroutine is defined with the async def syntax; inside it, the await keyword suspends execution and releases the flow of control back to the event loop.
To run a coroutine, we need to schedule it on the event loop. After scheduling, coroutines are wrapped in Tasks, which are a kind of Future object.
Example:
In the below snippet, we call async_func from the main function. We have to add the await keyword while calling an async function. As you can see, async_func will do nothing unless the call is accompanied by the await keyword.
import asyncio

async def async_func():
    print('Velotio ...')
    await asyncio.sleep(1)
    print('... Technologies!')

async def main():
    async_func()  # this will do nothing because the coroutine object is created but not awaited
    await async_func()

asyncio.run(main())
Output
RuntimeWarning: coroutine 'async_func' was never awaited
  async_func()  # this will do nothing because the coroutine object is created but not awaited
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Velotio ...
... Technologies!
2. Tasks
Tasks are used to schedule coroutines concurrently.
When submitting a coroutine to an event loop for processing, you can get a Task object, which provides a way to control the coroutine’s behavior from outside the event loop.
Example:
In the snippet below, we are creating a task using create_task (an inbuilt function of asyncio library), and then we are running it.
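The snippet itself is not reproduced here; a minimal sketch of what it could look like, reusing the async_func coroutine from the earlier example:

```python
import asyncio

async def async_func():
    await asyncio.sleep(0.1)
    return 'Velotio Technologies'

async def main():
    # create_task wraps the coroutine in a Task and schedules it
    # on the running event loop immediately
    task = asyncio.create_task(async_func())
    # awaiting the task suspends main() until async_func completes
    return await task

print(asyncio.run(main()))
```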
3. Event Loop
The event loop is the mechanism that runs coroutines until they complete. You can imagine it as a while(True) loop that monitors the coroutines, takes feedback on what is idle, and looks around for things that can be executed in the meantime.
It can wake up an idle coroutine when whatever that coroutine is waiting on becomes available.
Only one event loop can run at a time in Python.
Example:
In the snippet below, we are creating three tasks and then appending them in a list and executing all tasks asynchronously using get_event_loop, create_task and the await function of the asyncio library.
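The snippet is not reproduced here; a sketch of the same idea, using asyncio.run in place of managing get_event_loop by hand (the work coroutine and its return strings are made-up placeholders):

```python
import asyncio

async def work(n):
    # each task "waits on I/O" concurrently with the others
    await asyncio.sleep(0.1)
    return f'task {n} done'

async def main():
    # create three tasks, append them to a list, then await them all
    tasks = [asyncio.create_task(work(i)) for i in range(3)]
    return await asyncio.gather(*tasks)

print(asyncio.run(main()))
```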
4. Futures
A future is a special low-level awaitable object that represents the eventual result of an asynchronous operation.
When a Future object is awaited, the coroutine will wait until the Future is resolved in some other place.
We will look into the sample code for Future objects in the next section.
A Comparison Between Multithreading and Async IO
Before we get to Async IO, let’s use multithreading as a benchmark and then compare them to see which is more efficient.
For this benchmark, we will be fetching data from a sample URL (the Velotio Career webpage) with different frequencies, like once, ten times, 50 times, 100 times, 500 times, respectively.
We will then compare the time taken by both of these approaches to fetch the required data.
Implementation
Multithreading code:

import requests
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_url_data(pg_url):
    try:
        resp = requests.get(pg_url)
    except Exception as e:
        print(f"Error occurred while fetching data from url {pg_url}")
    else:
        return resp.content

def get_all_url_data(url_list):
    # ThreadPoolExecutor (rather than ProcessPoolExecutor) matches the
    # multithreading benchmark described in this section
    with ThreadPoolExecutor() as executor:
        resp = executor.map(fetch_url_data, url_list)
    return resp

if __name__ == '__main__':
    url = "https://www.velotio.com/careers"
    for ntimes in [1, 10, 50, 100, 500]:
        start_time = time.time()
        responses = get_all_url_data([url] * ntimes)
        print(f'Fetch total {ntimes} urls and process takes {time.time() - start_time} seconds')
Output
Fetch total 1 urls and process takes 1.8822264671325684 seconds
Fetch total 10 urls and process takes 2.3358211517333984 seconds
Fetch total 50 urls and process takes 8.05638575553894 seconds
Fetch total 100 urls and process takes 14.43302869796753 seconds
Fetch total 500 urls and process takes 65.25404500961304 seconds
ThreadPoolExecutor is a class from the concurrent.futures module that implements the Executor interface. The fetch_url_data function fetches the data from the given URL using the requests package, and the get_all_url_data function maps fetch_url_data over the list of URLs.
Async IO Programming Example:
import asyncio
import time
from aiohttp import ClientSession, ClientResponseError

async def fetch_url_data(session, url):
    try:
        async with session.get(url, timeout=60) as response:
            resp = await response.read()
    except Exception as e:
        print(e)
    else:
        return resp

async def fetch_async(loop, r):
    url = "https://www.velotio.com/careers"
    tasks = []
    async with ClientSession() as session:
        for i in range(r):
            task = asyncio.ensure_future(fetch_url_data(session, url))
            tasks.append(task)
        responses = await asyncio.gather(*tasks)
    return responses

if __name__ == '__main__':
    for ntimes in [1, 10, 50, 100, 500]:
        start_time = time.time()
        loop = asyncio.get_event_loop()
        future = asyncio.ensure_future(fetch_async(loop, ntimes))
        loop.run_until_complete(future)  # will run until it finishes or raises an error
        responses = future.result()
        print(f'Fetch total {ntimes} urls and process takes {time.time() - start_time} seconds')
Output
Fetch total 1 urls and process takes 1.3974951362609863 seconds
Fetch total 10 urls and process takes 1.4191942596435547 seconds
Fetch total 50 urls and process takes 2.6497368812561035 seconds
Fetch total 100 urls and process takes 4.391665458679199 seconds
Fetch total 500 urls and process takes 4.960426330566406 seconds
We need to use the get_event_loop function to create the event loop and add the tasks to it. To fetch more than one URL, we have to use the ensure_future and gather functions.
The fetch_async function is used to add the tasks to the event loop object, and the fetch_url_data function reads the data from each URL using the aiohttp session. The future.result() method returns the responses of all the tasks.
Results:
As you can see from the plot, async programming is much more efficient than multi-threading for the program above.
The multithreading program's graph grows roughly linearly with the number of requests, while the asyncio program's graph is closer to logarithmic.
Conclusion
As we saw in our experiment above, Async IO showed better performance than multithreading through its efficient use of concurrency.
Async IO can be beneficial in applications that can exploit concurrency. That said, whether Async IO is the pragmatic choice over other implementations depends on the kind of application you are dealing with.
We hope this article helped further your understanding of the async feature in Python and gave you some quick hands-on experience using the code examples shared above.
Zappa is a very powerful open-source Python project that lets you easily build, deploy, and update WSGI applications hosted on AWS Lambda and API Gateway. This blog is a detailed, step-by-step guide focusing on the challenges faced while deploying a Django application on AWS Lambda using Zappa as the deployment tool.
Building Your Application
If you do not have a Django application already, you can build one by cloning this GitHub repository.
Once you have cloned the repository, you will need a virtual environment, which provides an isolated Python environment for your application. I prefer virtualenvwrapper for creating one.
Now if you run the server directly it will log a warning as the database is not set up yet.
$ python manage.py runserver
Performing system checks...

System check identified no issues (0 silenced).

You have 13 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run 'python manage.py migrate' to apply them.

May 20, 2018 - 14:47:32
Django version 1.11.11, using settings 'django_zappa_sample.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
Also trying to access admin page (http://localhost:8000/admin/) will throw an “OperationalError” exception with below log at server end.
Internal Server Error: /admin/
Traceback (most recent call last):
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/django/core/handlers/exception.py", line 41, in inner
    response = get_response(request)
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 187, in _get_response
    response = self.process_exception_by_middleware(e, request)
  ...
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/django/db/models/query.py", line 53, in __iter__
    results = compiler.execute_sql(chunked_fetch=self.chunked_fetch)
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 899, in execute_sql
    raise original_exception
OperationalError: no such table: django_session
[20/May/2018 14:59:23] "GET /admin/ HTTP/1.1" 500 153553
Not Found: /favicon.ico
In order to fix this, you need to run the migrations on your database so that essential tables like auth_user, sessions, etc. are created before any request is made to the server:

$ python manage.py migrate
NOTE: Use DATABASES in the project settings file to configure the database that you want your Django application to use once hosted on AWS Lambda. By default, it's configured to create a local SQLite database file as the backend.
You can run the server again and it should now load the admin panel of your website.
Do verify that the zappa Python package is installed in your virtual environment before moving forward.
Configuring Zappa Settings
Deploying with Zappa is simple: it only needs a configuration file to run, and the rest is managed by Zappa. To create this configuration file, run the following from your project root directory:
$ zappa init
Welcome to Zappa!

Zappa is a system for running server-less Python web applications on AWS Lambda and AWS API Gateway.
This `init` command will help you create and configure your new Zappa deployment.
Let's get started!

Your Zappa configuration can support multiple production stages, like 'dev', 'staging', and 'production'.
What do you want to call this environment (default 'dev'):

AWS Lambda and API Gateway are only available in certain regions. Let's check to make sure you have a profile set up in one that will work.
We found the following profiles: default, and hdx. Which would you like us to use? (default 'default'):

Your Zappa deployments will need to be uploaded to a private S3 bucket.
If you don't have a bucket yet, we'll create one for you too.
What do you want to call your bucket? (default 'zappa-108wqhyn4'): django-zappa-sample-bucket

It looks like this is a Django application!
What is the module path to your project's Django settings?
We discovered: django_zappa_sample.settings
Where are your project's settings? (default 'django_zappa_sample.settings'):

You can optionally deploy to all available regions in order to provide fast global service.
If you are using Zappa for the first time, you probably don't want to do this!
Would you like to deploy this application globally? (default 'n') [y/n/(p)rimary]: n

Okay, here's your zappa_settings.json:

{
    "dev": {
        "aws_region": "us-east-1",
        "django_settings": "django_zappa_sample.settings",
        "profile_name": "default",
        "project_name": "django-zappa-sa",
        "runtime": "python2.7",
        "s3_bucket": "django-zappa-sample-bucket"
    }
}

Does this look okay? (default 'y') [y/n]: y

Done! Now you can deploy your Zappa application by executing:

$ zappa deploy dev

After that, you can update your application code with:

$ zappa update dev

To learn more, check out our project page on GitHub here: https://github.com/Miserlou/Zappa
and stop by our Slack channel here: https://slack.zappa.io

Enjoy!,
~ Team Zappa!
You can verify zappa_settings.json generated at your project root directory.
TIP: The virtual environment name should not be the same as the Zappa project name, as this may cause errors.
Additionally, you could specify other settings in zappa_settings.json file as per requirement using Advanced Settings.
Now, you’re ready to deploy!
IAM Permissions
In order to deploy the Django Application to Lambda/Gateway, setup an IAM role (eg. ZappaLambdaExecutionRole) with the following permissions:
Before deploying the application, ensure that the IAM role is set in the config JSON as follows:
{
    "dev": {
        ...
        "manage_roles": false, // Disable Zappa client managing roles.
        "role_name": "MyLambdaRole", // Name of your Zappa execution role. Optional.
        "role_arn": "arn:aws:iam::12345:role/app-ZappaLambdaExecutionRole", // ARN of your Zappa execution role. Optional.
        ...
    },
    ...
}
Once your settings are configured, you can package and deploy your application to a stage called “dev” with a single command:
$ zappa deploy dev
Calling deploy for stage dev..
Downloading and installing dependencies..
Packaging project as zip.
Uploading django-zappa-sa-dev-1526831069.zip (10.9MiB)..
100%|██████████| 11.4M/11.4M [01:02<00:00, 75.3KB/s]
Scheduling..
Scheduled django-zappa-sa-dev-zappa-keep-warm-handler.keep_warm_callback with expression rate(4 minutes)!
Uploading django-zappa-sa-dev-template-1526831157.json (1.6KiB)..
100%|██████████| 1.60K/1.60K [00:02<00:00, 792B/s]
Waiting for stack django-zappa-sa-dev to create (this can take a bit)..
100%|██████████| 4/4 [00:11<00:00, 2.92s/res]
Deploying API Gateway..
Deployment complete!: https://akg59b222b.execute-api.us-east-1.amazonaws.com/dev
You should see that your Zappa deployment completed successfully, with the URL of the API Gateway endpoint created for your application.
Troubleshooting
1. If you see the following error during deployment, it's probably because you do not have sufficient privileges to run the deployment on AWS Lambda. Ensure your IAM role has all the permissions described above, or set "manage_roles" to true so that Zappa can create and manage the IAM role for you.
Calling deploy for stage dev..
Creating django-zappa-sa-dev-ZappaLambdaExecutionRole IAM Role..

Error: Failed to manage IAM roles!
You may lack the necessary AWS permissions to automatically manage a Zappa execution role.
To fix this, see here: https://github.com/Miserlou/Zappa#using-custom-aws-iam-roles-and-policies
2. The below error is caused by not listing "events.amazonaws.com" as a Trusted Entity for your IAM role. You can either add it or set the "keep_warm" parameter to false in your Zappa settings file. In this case, your Zappa deployment is only partially completed, as it terminated abnormally.
Downloading and installing dependencies..
100%|██████████| 44/44 [00:05<00:00, 7.92pkg/s]
Packaging project as zip..
Uploading django-zappa-sample-dev-1482817370.zip (8.8MiB)..
100%|██████████| 9.22M/9.22M [00:17<00:00, 527KB/s]
Scheduling..

Oh no! An error occurred! :(

==============
Traceback (most recent call last):
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 2610, in handle
    sys.exit(cli.handle())
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 505, in handle
    self.dispatch_command(self.command, stage)
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 539, in dispatch_command
    self.deploy(self.vargs['zip'])
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 800, in deploy
    self.zappa.add_binary_support(api_id=api_id, cors=self.cors)
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/core.py", line 1490, in add_binary_support
    restApiId=api_id
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
ClientError: An error occurred (ValidationError) when calling the PutRole operation: Provided role 'arn:aws:iam:484375727565:role/lambda_basic_execution' cannot be assumed by principal 'events.amazonaws.com'.
==============

Need help? Found a bug? Let us know! :D
File bug reports on GitHub here: https://github.com/Miserlou/Zappa
And join our Slack channel here: https://slack.zappa.io
Love!,
~ Team Zappa!
3. Adding the parameter and running zappa update again will cause the above error. As you can see, it says "Stack django-zappa-sa-dev does not exist" because the previous deployment was unsuccessful. To fix this, delete the Lambda function from the console and rerun the deployment.
4. If you run into any distribution error, please try downgrading your pip version to 9.0.1.
$ pip install pip==9.0.1
Calling deploy for stage dev..
Downloading and installing dependencies..

Oh no! An error occurred! :(

==============
Traceback (most recent call last):
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 2610, in handle
    sys.exit(cli.handle())
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 505, in handle
    self.dispatch_command(self.command, stage)
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 539, in dispatch_command
    self.deploy(self.vargs['zip'])
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 709, in deploy
    self.create_package()
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 2171, in create_package
    disable_progress=self.disable_progress
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/core.py", line 595, in create_lambda_zip
    installed_packages = self.get_installed_packages(site_packages, site_packages_64)
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/core.py", line 751, in get_installed_packages
    pip.get_installed_distributions()
AttributeError: 'module' object has no attribute 'get_installed_distributions'
==============

Need help? Found a bug? Let us know! :D
File bug reports on GitHub here: https://github.com/Miserlou/Zappa
And join our Slack channel here: https://slack.zappa.io
Love!,
~ Team Zappa!
or,
If you run into a NotFoundException ("Invalid REST API identifier") issue, please try undeploying the Zappa stage and then retry.
Calling deploy for stage dev..
Downloading and installing dependencies..
Packaging project as zip.
Uploading django-zappa-sa-dev-1526830532.zip (10.9MiB)..
100%|██████████| 11.4M/11.4M [00:42<00:00, 331KB/s]
Scheduling..
Scheduled django-zappa-sa-dev-zappa-keep-warm-handler.keep_warm_callback with expression rate(4 minutes)!
Uploading django-zappa-sa-dev-template-1526830690.json (1.6KiB)..
100%|██████████| 1.60K/1.60K [00:01<00:00, 801B/s]

Oh no! An error occurred! :(

==============
Traceback (most recent call last):
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 2610, in handle
    sys.exit(cli.handle())
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 505, in handle
    self.dispatch_command(self.command, stage)
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 539, in dispatch_command
    self.deploy(self.vargs['zip'])
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/cli.py", line 800, in deploy
    self.zappa.add_binary_support(api_id=api_id, cors=self.cors)
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/zappa/core.py", line 1490, in add_binary_support
    restApiId=api_id
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/velotio/Envs/django_zappa_sample/local/lib/python2.7/site-packages/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
NotFoundException: An error occurred (NotFoundException) when calling the GetRestApi operation: Invalid REST API identifier specified 484375727565:akg59b222b
==============

Need help? Found a bug? Let us know! :D
File bug reports on GitHub here: https://github.com/Miserlou/Zappa
And join our Slack channel here: https://slack.zappa.io
Love!,
~ Team Zappa!
TIP: To understand how your application works on serverless environment please visit this link.
Post Deployment Setup
Migrate database
At this point, you should have an empty database for your Django application to fill up with a schema.
$ zappa manage dev migrate
Once you run the above command, the database migrations will be applied to the database specified in your Django settings.
Creating Superuser of Django Application
You also might need to create a new superuser in the database. You can use the standard Django administration command from your project directory:

$ python manage.py createsuperuser

Note that your local environment must be connected to the same database, as this is run as a standard Django administration command (not a Zappa command).
Managing static files
Your Django application will have a dependency on static files; the Django admin panel, for instance, uses a combination of JS, CSS, and image files.
NOTE: Zappa is for running your application code, not for serving static web assets. If you plan on serving custom static assets in your web application (CSS/JavaScript/images/etc.), you’ll likely want to use a combination of AWS S3 and AWS CloudFront.
You will need to add the django-storages and boto packages to your virtual environment; they are required for managing files to and from S3.
$ pip install django-storages boto

Add django-storages to your INSTALLED_APPS in settings.py:

INSTALLED_APPS = (
    ...,
    'storages',
)

Configure django-storages in settings.py as:

AWS_STORAGE_BUCKET_NAME = 'django-zappa-sample-bucket'
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
STATIC_URL = "https://%s/" % AWS_S3_CUSTOM_DOMAIN
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
Once you have set up the Django application to serve your static files from AWS S3, run the following command to upload the static files from your project to S3.
$ python manage.py collectstatic --noinput
or
$ zappa update dev
$ zappa manage dev "collectstatic --noinput"
Check that at least 61 static files have been moved to the S3 bucket; the admin panel alone is built from 61 static files.
NOTE: STATICFILES_DIR must be configured properly to collect your files from the appropriate location.
Tip: You need to render static files in your templates by loading the static tag library and using it, for example: {% load static %} followed by {% static 'path/to/file' %} (the path here is a placeholder).
Setting Up API Gateway
To connect to your Django application you also need to ensure you have API gateway setup for your AWS Lambda Function. You need to have GET methods set up for all the URL resources used in your Django application. Alternatively, you can setup a proxy method to allow all subresources to be processed through one API method.
Go to AWS Lambda function console and add API Gateway from ‘Add triggers’.
1. Configure API, Deployment Stage, and Security for API Gateway. Click Save once it is done.
2. Go to API Gateway console and,
a. Recreate ANY method for / resource.
i. Check `Use Lambda Proxy integration`
ii. Set `Lambda Region` and `Lambda Function` and `Save` it.
b. Recreate ANY method for /{proxy+} resource.
i. Select `Lambda Function Proxy`
ii. Set `Lambda Region` and `Lambda Function` and `Save` it.
3. Click on Action and select Deploy API. Set Deployment Stage and click Deploy
4. Ensure that GET and POST method for / and Proxy are set as Override for this method
Setting Up Custom SSL Endpoint
Optionally, you could also set up your own custom SSL endpoint with Zappa and install your certificate for your domain by running `zappa certify`.
Now you are ready to launch your Django Application hosted on AWS Lambda.
Additional Notes:
Once deployed, you must run "zappa update <stage-name>" to update your already hosted AWS Lambda function.
You can check the server logs for investigation by running the "zappa tail" command.
To un-deploy your application, simply run: `zappa undeploy <stage-name>`
You’ve seen how to deploy a Django application on AWS Lambda using Zappa. If you are creating your Django application for the first time, you might also want to read Edgar Roman’s Django Zappa Guide.
Start building your Django application and let us know in the comments if you need any help during your application deployment over AWS Lambda.
This blog post is written assuming you have an understanding of the basics of React and routing inside React SPAs with react-router. We will also be using Chrome DevTools for measuring the actual performance benefits achieved in the example. We will be using Webpack, a well-known bundler for JavaScript projects.
What is Code Splitting?
Code splitting is simply dividing huge code bundles into smaller code chunks that can be loaded ad hoc. Usually, when the SPAs grow in terms of components and plugins, the need to split the code into smaller-sized chunks arises. Bundlers like Webpack and Rollup provide support for code splitting.
Several different code splitting strategies can be implemented depending on the application structure. We will be taking a look at an example in which we implement code splitting inside an admin dashboard for better performance.
Let’s Get Started
We will be starting with a project configured with Webpack as a bundler having a considerable bundle size. This simple Github repository dashboard will have four routes for showing various details regarding the same repository. The dashboard uses some packages to show details in the app such as react-table, TinyMCE, and recharts.
Before optimizing the bundle
Just to get an idea of performance changes, let us note the metrics from the prior bundle of the app. Let’s check loading time in the network tab with the following setup:
Browser incognito tab
Cache disabled
Throttling enabled to Fast 3G
Development Build
As you can see, the development bundle without any optimization has around a 1.3 MB network transfer size and takes around 7.85 seconds to load for the first time on a fast 3G connection.
However, we know that we will probably never want to serve this unoptimized development bundle in production. So, let’s figure out metrics for the production bundle with the same setup.
Production Build
The project is already configured to generate a webpack production build. The production bundle is much smaller than the development bundle, with a 534 kB network transfer size, and takes around 3.54 seconds to load on a fast 3G connection. This is still a problem, as best practice suggests keeping page load times below 3 seconds. Let’s check what happens with a slow 3G connection.
The production bundle took 12.70 seconds to load for the first time on a slow 3G connection. Now, this can annoy users.
If we look at the lighthouse report, we see a warning indicating that we’re loading more code than needed:
As per the warning, we’re loading some unused code during the initial render, which we could instead load later. The Lighthouse report indicates that we can save up to 404 KiB on first page load.
There’s one more suggestion: splitting the bundle using React.lazy(). Lighthouse also gives us various metrics that can be measured to improve the application. However, we will be focusing on bundle size in this case.
The extra unused code inside the bundle is not only bad in terms of download size; it also impacts the user experience. Let’s use the Performance tab to figure out how. Navigate to the Performance tab and profile the page. It shows that it takes around 10 seconds for the user to see actual content on page reload:
Webpack Bundle Analyzer Report
We can visualize the bundles with the webpack bundle analyzer tool, which gives us a way to track and measure the bundle size changes over time. Please follow the installation instructions given here.
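As a sketch, wiring the analyzer into an existing webpack config looks roughly like the following (this assumes `webpack-bundle-analyzer` has been installed as a dev dependency; your config will differ):

```javascript
// webpack.config.js (fragment)
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...your existing entry, output, loaders, etc...
  plugins: [
    new BundleAnalyzerPlugin({
      // 'static' writes a report.html file instead of starting a server
      analyzerMode: 'static',
    }),
  ],
};
```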
So, this is what our production build bundle report looks like:
As we can see, our current production build has a giant chunk, main.201d82c8.js, which can be divided into smaller chunks.
The bundle analyzer report gives us not only the chunk sizes but also the modules each chunk contains and their sizes. This gives us an opportunity to find such modules, split them out, and achieve better performance. Here, for example, is a module that adds considerable size to our main bundle:
Using React.lazy() for Code Splitting
React.lazy allows us to use dynamically imported components. This means that we can load these components only when they’re needed, reducing the bundle size. As our dashboard app has four top-level routes wrapped inside react-router’s Switch, we know they will never all be needed at once.
So we can split these top-level components into four different bundle chunks and load them ad hoc. To do that, we need to convert our imports from static to dynamic:
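A sketch of the conversion (the route path and component name here are illustrative, not taken from the original repository):

```javascript
// Before: a static import bundles the route component eagerly.
// import Commits from './routes/Commits';

// After: React.lazy() with a dynamic import() tells webpack to emit a
// separate chunk, fetched only when the component first renders.
import React from 'react';

const Commits = React.lazy(() => import('./routes/Commits'));
```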
This also requires us to wrap our routes in a Suspense component, which shows fallback visuals until the dynamically loaded component is ready.
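Putting the two together, the route setup might look like this (a sketch assuming react-router v5's Switch as described in the post; component and path names are illustrative):

```javascript
import React, { Suspense, lazy } from 'react';
import { Switch, Route } from 'react-router-dom';

// Each lazy() call becomes its own webpack chunk.
const Commits = lazy(() => import('./routes/Commits'));
const Charts = lazy(() => import('./routes/Charts'));

const App = () => (
  // Suspense renders the fallback until the requested chunk arrives.
  <Suspense fallback={<div>Loading…</div>}>
    <Switch>
      <Route path="/commits" component={Commits} />
      <Route path="/charts" component={Charts} />
    </Switch>
  </Suspense>
);

export default App;
```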
After this change, Webpack recognizes the dynamic imports and splits the main chunk into smaller chunks. In the production build, we can see the following bundles being downloaded. We have reduced the load time for the main bundle chunk from 12 seconds to 3.10 seconds, which is quite good. This is an improvement because we’re no longer loading unnecessary JS on first load.
As we can see in the waterfall view of the Network tab, the other required chunks are loaded in parallel as soon as the main chunk is loaded.
If we look at the Lighthouse report, the warning about unused JS is gone and we can see the check passing.
This is good for the landing page. What about the other routes when we visit them? The following shows that we now load additional small chunks when we render a lazily loaded component on a menu item click.
With the current setup, we should be able to see improved performance inside our applications. We can always go ahead and tweak Webpack chunks when needed.
To measure how this change affects user experience, we can again generate the performance report with Chrome DevTools. We can quickly notice that the idle frame time has dropped to around 1 second—far better than the previous setup.
Reading through the timeline, the user sees a blank frame for up to 1 second and the sidebar in the next second. Once the main bundle is loaded, the lazy-loaded commits chunk starts downloading; until it arrives, we see our fallback loading component.
Also, when we navigate to the other routes, we can see the chunks loaded lazily when they’re needed.
Let’s have a look at the bundle analyzer report generated after the changes. We can easily see that the chunks are divided into smaller chunks. Also, we can notice that the chunks contain only the code they need. For example, the 51.573370a6.js chunk is actually the commits route containing the react-table code. It’s similar for the charts module in the other chunk.
Conclusion
Depending on the project structure, we can easily set up code-splitting inside the React applications, which is useful for better-performing applications and leads to a positive impact for the users.
It has been almost a decade since Marc Andreessen made his prescient statement that “software is eating the world.” Software is not only eating the world but doing so at an accelerating pace. There is no industry that hasn’t been challenged by technology startups with disruptive approaches.
Automakers are no longer just manufacturing companies: Tesla is disrupting the industry with their software approach to vehicle development and continuous over-the-air software delivery. Waymo’s autonomous cars have driven millions of miles and self-driving cars are a near-term reality. Uber is transforming transportation into a service, potentially affecting the economics and incentives of almost 3–4% of the world’s GDP!
Social networks and media platforms had a significant and decisive impact on the US election results.
Banks and large financial institutions are being attacked by FinTech startups like WealthFront, Venmo, Affirm, Stripe, SoFi, etc. Bitcoin, Ethereum and the broader blockchain revolution can upend the core structure of banks and even sovereign currencies.
Traditional retail businesses are under tremendous pressure due to Amazon and other e-commerce vendors. Retail is now a customer ownership, recommendations, and optimization business rather than a brick and mortar one.
Enterprises need to adopt a new approach to software development and digital innovation. At Velotio, we are helping customers to modernize and transform their business with all of the approaches and best practices listed below.
Agility
In this fast-changing world, your business needs to be agile and fast-moving. You need to ship software faster, at a regular cadence, with high quality and be able to scale it globally.
Agile practices allow companies to rally diverse teams behind a defined process that helps to achieve inclusivity and drives productivity. Agile is about getting cross-functional teams to work in concert in planned short iterations with continuous learning and improvement.
Generally, teams that work in an Agile methodology will:
Conduct regular stand-ups and Scrum/Kanban planning meetings with the optimal use of tools like Jira, PivotalTracker, Rally, etc.
Use pair programming and code review practices to ensure better code quality.
Use continuous integration and delivery tools like Jenkins or CircleCI.
Design processes for all aspects of product management, development, QA, DevOps and SRE.
Use Slack, Hipchat or Teams for communication between team members and geographically diverse teams. Integrate all tools with Slack to ensure that it becomes the central hub for notifications and engagement.
Cloud-Native
Businesses need software that is purpose-built for the cloud model. What does that mean? Software teams now number in the hundreds or even thousands. The number of applications and software stacks is growing rapidly in most companies. All companies use various cloud providers, SaaS vendors, and best-of-breed hosted or on-premise software. Essentially, software complexity has increased exponentially, which requires a “cloud-native” approach to manage effectively. The Cloud Native Computing Foundation defines cloud native as a software stack that is:
Containerized: Each part (applications, processes, etc) is packaged in its own container. This facilitates reproducibility, transparency, and resource isolation.
Dynamically orchestrated: Containers are actively scheduled and managed to optimize resource utilization.
Microservices oriented: Applications are segmented into micro services. This significantly increases the overall agility and maintainability of applications.
You can deep-dive into cloud native with this blog by our CTO, Chirag Jog.
Cloud native is disrupting the traditional enterprise software vendors. Software is getting decomposed into specialized best of breed components — much like the micro-services architecture. See the Cloud Native landscape below from CNCF.
DevOps
Process and toolsets need to change to enable faster development and deployment of software. Enterprises cannot compete without mature DevOps strategies. DevOps is essentially a set of practices, processes, culture, tooling, and automation that focuses on delivering software continuously with high quality.
DevOps tool chains & process
As you begin or expand your DevOps journey, a few things to keep in mind:
Customize to your needs: There is no single DevOps process or toolchain that suits all needs. Take into account your organization structure, team capabilities, current software process, opportunities for automation and goals while making decisions. For example, your infrastructure team may have automated deployments but the main source of your quality issues could be the lack of code reviews in your development team. So identify the critical pain points and sources of delay to address those first.
Automation: Automate everything that can be automated. The less you depend on human intervention, the higher the chances of success.
Culture: Align the incentives and goals across your development, ITOps, SecOps, and SRE teams. Ensure that they collaborate effectively and that ownership in the DevOps pipeline is well established.
Small wins: Pick one application or team and implement your DevOps strategy within it. That way you can focus your energies and refine your experiments before applying them broadly. Show success as measured by quantifiable parameters and use that to transform the rest of your teams.
Organizational dynamics & integrations: Adoption of new processes and tools will cause some disruptions and you may need to re-skill part of your team or hire externally. Ensure that compliance, SecOps & audit teams are aware of your DevOps journey and get their buy-in.
DevOps is a continuous journey: DevOps will never be done. Train your team to learn continuously and refine your DevOps practice to keep achieving your goal: delivering software reliably and quickly.
Micro-services
As the amount of software in an enterprise explodes, so does the complexity. The only way to manage this complexity is by splitting your software and teams into smaller manageable units. Micro-services adoption is primarily to manage this complexity.
Development teams across the board are choosing micro services to develop new applications and break down legacy monoliths. Every micro-service can be deployed, upgraded, scaled, monitored and restarted independent of other services. Micro-services should ideally be managed by an automated system so that teams can easily update live applications without affecting end-users.
There are companies with hundreds of microservices in production, which is only possible with mature DevOps, cloud-native, and agile practices.
Interestingly, serverless platforms like Google Functions and AWS Lambda are taking the concept of micro-services to the extreme by allowing each function to act like an independent piece of the application. You can read about my thoughts on serverless computing in this blog: Serverless Computing Predictions for 2017.
Digital Transformation
Digital transformation involves making strategic changes to business processes, competencies, and models to leverage digital technologies. It is a very broad term, and every consulting vendor twists it in various ways. Let me give a couple of examples to drive home the point that digital transformation is about using technology to improve your business model, gain efficiencies, or build a moat around your business:
GE has done an excellent job transforming themselves from a manufacturing company into an IoT/software company with Predix. GE builds airplane engines, medical equipment, oil & gas equipment and much more. Predix is an IoT platform that is being embedded into all of GE’s products. This enabled them to charge airlines on a per-mile basis by taking the ownership of maintenance and quality instead of charging on a one-time basis. This also gives them huge amounts of data that they can leverage to improve the business as a whole. So digital innovation has enabled a business model improvement leading to higher profits.
Car companies are exploring models where they can provide autonomous car fleets to cities where they will charge on a per-mile basis. This will convert them into a “service” & “data” company from a pure manufacturing one.
Insurance companies need to build digital capabilities to acquire and retain customers. They need to build data capabilities and provide ongoing value with services rather than interact with the customer just once a year.
You will be better placed to compete in the market if you have automation and digital processes in place, so that you can build new products and pivot in an agile manner.
Big Data / Data Science
Businesses need to deal with increasing amounts of data due to IoT, social media, mobile, and the adoption of software for various processes. And they need to use this data intelligently. Cloud platforms provide the services and solutions to accelerate your data science and machine learning strategies. AWS, Google Cloud, and open-source libraries like TensorFlow, SciPy, Keras, etc. offer a broad set of machine learning and big data services that can be leveraged. Companies need to build mature data processing pipelines to aggregate data from various sources and store it for quick and efficient access by various teams. Companies are leveraging these services and libraries to build solutions like:
Predictive analytics
Cognitive computing
Robotic Process Automation
Fraud detection
Customer churn and segmentation analysis
Recommendation engines
Forecasting
Anomaly detection
Companies are creating data science teams to build long term capabilities and moats around their business by using their data smartly.
Re-platforming & App Modernization
Enterprises want to modernize their legacy, often monolithic apps as they migrate to the cloud. The move can be triggered by hardware refresh cycles, license renewals, IT cost optimization, or the adoption of software-focused business models.
Benefits of modernization to customers and businesses
Intelligent Applications
Software is getting more intelligent, and to enable this, businesses need to integrate disparate datasets, distributed teams, and processes. This is best done on a scalable global cloud platform with agile processes. Big data and data science enable the creation of intelligent applications.
How can smart applications help your business?
New intelligent systems of engagement: intelligent apps surface insights to users, enabling them to be more effective and efficient. For example, CRMs and marketing software are getting intelligent and multi-platform, enabling sales and marketing reps to become more productive.
Personalization: E-commerce, social networks, and now B2B software are getting personalized. To improve user experience and reduce churn, your applications should be personalized based on user preferences and traits.
Drive efficiencies: IoT is an excellent example where the efficiency of machines can be improved with data and cloud software. Real-time insights can help to optimize processes or can be used for preventive maintenance.
Creation of new business models: Traditional and modern industries can use AI to build new business models. For example, what if insurance companies allow you to pay insurance premiums only for the miles driven?
Security
Security threats to governments, enterprises, and data have never been greater. As businesses adopt cloud-native, DevOps, and microservices practices, their security practices need to evolve.
In our experience, these are a few features of a mature cloud-native security practice:
Automated: Systems are updated automatically with the latest fixes. Another approach is immutable infrastructure with the adoption of containers and serverless.
Proactive: Automated security processes tend to be proactive. For example, if malware or a vulnerability is found in one environment, automation can fix it in all environments. Mature DevOps and CI/CD processes ensure that fixes can be deployed in hours or days instead of weeks or months.
Cloud Platforms: Businesses have realized that the mega-clouds are way more secure than their own data centers can be. Many of these cloud platforms have audit, security and compliance services which should be leveraged.
Protecting credentials: Use AWS KMS, Hashicorp Vault or other solutions for protecting keys, passwords and authorizations.
Bug bounties: Either set up bug bounties internally or through sites like HackerOne. You want the good guys working for you, and this is an easy way to do that.
Conclusion
As you can see, all of these approaches and best practices are intertwined and need to be implemented in concert to gain the desired results. It is best to start with one project, one group, or one application and build on early wins. Remember that it is a process, and you are looking for gradual improvements to achieve your final objectives.
Please let us know your thoughts and experiences by adding comments to this blog or reaching out to @kalpakshah or RSI. We would love to help your business adopt these best practices and help to build great software together. Drop me a note at kalpak (at) velotio (dot) com.
In this blog, I will compare various methods to avoid the dreaded callback hells that are common in Node.js. What exactly am I talking about? Have a look at the piece of code below: every child function executes only when the result of its parent function is available. Callbacks are the very essence of the non-blocking (and hence performant) nature of Node.js.
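A sketch of the kind of deeply nested code in question, where foo, bar, and baz are stand-in asynchronous operations using Node's error-first callback convention:

```javascript
// Three async operations following the error-first callback convention.
function foo(cb) { setImmediate(() => cb(null, 1)); }
function bar(x, cb) { setImmediate(() => cb(null, x + 1)); }
function baz(x, cb) { setImmediate(() => cb(null, x * 2)); }

// Each step can only run inside the previous step's callback, so the
// code drifts rightward and error handling is repeated at every level.
foo((err, a) => {
  if (err) return console.error(err);
  bar(a, (err, b) => {
    if (err) return console.error(err);
    baz(b, (err, c) => {
      if (err) return console.error(err);
      console.log(c); // 4
    });
  });
});
```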
Convinced yet? Even though there is some seemingly unnecessary error handling here, I assume you get the drift! The problem with such code is more than just indentation: our program’s entire flow is based on side effects, with one function only incidentally calling the inner function.
There are multiple ways in which we can avoid writing such deeply nested code. Let’s have a look at our options:
Promises
According to the official specification, a promise represents the eventual result of an asynchronous operation. Basically, it represents an operation that has not completed yet but is expected to in the future. The then method is a major component of a promise: it is used to register handlers for the fulfillment value or the rejection reason, and only one of those two will ever be set. Let’s have a look at a simple file read example without using promises:
Now, if readFile function returned a promise, the same logic could be written like so:
var fileReadPromise = fs.readFile(filePath);
fileReadPromise.then(console.log, console.error);
The fileReadPromise can then be passed around multiple times in code where you need to read a file. This helps in writing robust unit tests, since you now only have to write a single test for a promise, and it makes for more readable code!
Chaining using promises
The then function itself returns a promise, which can again be used to do the next operation. Changing the first code snippet to use promises results in this:
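A sketch of the foo-bar-baz flow, now with each function returning a promise (the functions are stand-ins, as before):

```javascript
// Stand-in operations that return promises instead of taking callbacks.
const foo = () => Promise.resolve(1);
const bar = (x) => Promise.resolve(x + 1);
const baz = (x) => Promise.resolve(x * 2);

// Each then() returns a new promise, so the operations chain flatly
// instead of nesting, with a single catch() handling all errors.
foo()
  .then((a) => bar(a))
  .then((b) => baz(b))
  .then((c) => console.log(c)) // 4
  .catch((err) => console.error(err));
```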
As is evident, it makes the code more composed, readable, and easier to maintain. Also, instead of chaining, we could have used Promise.all, which takes an array of promises as input and returns a single promise that resolves when all the supplied promises are resolved. Other useful information on promises can be found here.
The async utility module
Async is a utility module which provides a set of over 70 functions that can be used to elegantly solve the problem of callback hells. All these functions follow the Node.js convention of error-first callbacks, which means that the first callback argument is assumed to be an error (null in case of success). Let’s try to solve the same foo-bar-baz problem using the async module. Here is the code snippet:
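A sketch of the waterfall approach (this assumes the async module is installed via `npm install async`; foo, bar, and baz are the same stand-in operations):

```javascript
const async = require('async');

// Error-first callback style operations, as before.
const foo = (cb) => setImmediate(() => cb(null, 1));
const bar = (x, cb) => setImmediate(() => cb(null, x + 1));
const baz = (x, cb) => setImmediate(() => cb(null, x * 2));

// waterfall runs the tasks in series, passing each result to the next
// task; the final callback receives either the first error or the result.
async.waterfall([foo, bar, baz], (err, result) => {
  if (err) return console.error(err);
  console.log(result); // 4
});
```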
Here, I have used the async.waterfall function as an example. There are multiple functions available depending on the nature of the problem you are trying to solve, like async.each for parallel execution, async.eachSeries for serial execution, etc.
Async/Await
Now, this is one of the most exciting features to come to JavaScript. It internally uses promises but handles them in a more intuitive manner. Even though it seems like promises and/or third-party modules like async would solve most of the problems, a further simplification is always welcome! For those of you who have worked with C# async/await, this concept is borrowed directly from there and was brought into JavaScript with ES2017.
Async/await enables us to write asynchronous promise-based code as if it were synchronous, but without blocking the main thread. An async function always returns a promise whether await is used or not. But whenever an await is observed, the function is paused until the promise either resolves or rejects. Following code snippet should make it clearer:
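A minimal sketch (fetchValue stands in for any promise-returning operation; the names are illustrative):

```javascript
// Any promise-returning operation will do here.
const fetchValue = () => Promise.resolve(42);

async function asyncFun() {
  // Execution of asyncFun pauses here until the promise settles,
  // without blocking the main thread.
  const value = await fetchValue();
  console.log(value); // 42
  return value;
}

asyncFun();
```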
Here, asyncFun is an async function which captures the promised result using await. This makes the code readable, and is a major convenience for developers who are more comfortable with linearly executed languages, all without blocking the main thread.
Now, like before, let’s solve the foo-bar-baz problem using async/await. Note that foo, bar, and baz individually return promises just like before. But instead of chaining, we have written the code linearly.
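A sketch of that linear version, with ordinary try/catch for error handling (foo, bar, and baz are the same stand-ins):

```javascript
const foo = () => Promise.resolve(1);
const bar = (x) => Promise.resolve(x + 1);
const baz = (x) => Promise.resolve(x * 2);

// The same foo-bar-baz flow, now reading top to bottom like
// synchronous code.
async function run() {
  try {
    const a = await foo();
    const b = await bar(a);
    const c = await baz(b);
    console.log(c); // 4
    return c;
  } catch (err) {
    console.error(err);
  }
}

run();
```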
How long should you (a)wait for async to come to fore?
Well, it’s already here in the Chrome 55 release and the latest update of the V8 engine. Native support in the language means that we should see much more widespread use of this feature. The only catch is that if you want to use async/await on a codebase which isn’t promise-aware and is based completely on callbacks, it will probably require a lot of wrapping of existing functions to make them usable.
To wrap up, async/await definitely makes coding numerous async operations an easier job. Although promises and callbacks would do the job for most cases, async/await looks like the way to make some architectural problems go away and improve code quality.
After spending a couple of years in JavaScript development, I’ve realized how incredibly important design patterns are in modern JavaScript (ES6). And I’d love to share my experience and knowledge on the subject, hoping you’ll make this a critical part of your development process as well.
Note: All the examples covered in this post are implemented with ES6 features, but you can also integrate the design patterns with ES5.
At Velotio, we always follow best practices to achieve highly maintainable and more robust code. And we are strong believers of using design patterns as one of the best ways to write clean code.
In the post below, I’ve listed the most useful design patterns I’ve implemented so far and how you can implement them too:
1. Module
The module pattern simply allows you to keep units of code cleanly separated and organized.
Modules promote encapsulation, which means the variables and functions are kept private inside the module body and can’t be overwritten.
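The usage snippet below imports a sum function from a module; a minimal sketch of what that module might look like (the private helper is illustrative):

```javascript
// modules/sum.js
// Only `sum` is exported; the helper stays private to the module.
const add = (a, b) => a + b; // private: not visible outside this module

export const sum = (...nums) => nums.reduce(add, 0);
```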
// usage
import { sum } from 'modules/sum';

const result = sum(20, 30); // 50
ES6 also allows us to export the module as default. The following example gives you a better understanding of this.
// All the variables and functions which are not exported are private
// within the module and cannot be used outside. Only the exported
// members are public and can be used by importing them.

// Here businessList is a private member of the city module
const businessList = new WeakMap();

// City uses the businessList member as it’s in the same module
class City {
  constructor() {
    businessList.set(this, ['Pizza Hut', 'Dominos', 'Street Pizza']);
  }

  // public method to access the private ‘businessList’
  getBusinessList() {
    return businessList.get(this);
  }

  // public method to add a business to ‘businessList’
  addBusiness(business) {
    businessList.get(this).push(business);
  }
}

// export the City class as the module's default export
export default City;
// usage
import City from 'modules/city';

const city = new City();
city.getBusinessList();
There is a great article written on the features of ES6 modules here.
2. Factory
Imagine creating a notification management application that currently only supports notifications through email, so most of the code lives inside the EmailNotification class. Now there is a new requirement for PushNotifications. To implement them, you have to do a lot of work, because your application is tightly coupled to EmailNotification, and you will repeat the same work for every future notification type.
To solve this complexity, we will delegate the object creation to another object called factory.
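A minimal sketch of that factory (class and method names here are illustrative, not from the original post):

```javascript
class EmailNotification {
  constructor(message) { this.message = message; }
  send() { return `Email sent: ${this.message}`; }
}

class PushNotification {
  constructor(message) { this.message = message; }
  send() { return `Push sent: ${this.message}`; }
}

// The factory owns the creation logic, so callers never reference the
// concrete classes; adding an SMSNotification later touches only this map.
class NotificationFactory {
  create(type, message) {
    const types = { email: EmailNotification, push: PushNotification };
    const NotificationClass = types[type];
    if (!NotificationClass) throw new Error(`Unknown type: ${type}`);
    return new NotificationClass(message);
  }
}

// usage
const factory = new NotificationFactory();
const notification = factory.create('push', 'New comment');
console.log(notification.send()); // 'Push sent: New comment'
```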
3. Observer
The observer pattern maintains a list of subscribers so that whenever an event occurs, it notifies them. An observer can also remove a subscriber if the subscriber no longer wishes to be notified.
On YouTube, for example, the channels we’re subscribed to notify us whenever a new video is uploaded.
// Publisher
class Video {
  constructor(observable, name, content) {
    this.observable = observable;
    this.name = name;
    this.content = content;
    // publish the ‘video-uploaded’ event
    this.observable.publish('video-uploaded', { name, content });
  }
}

// Subscriber
class User {
  constructor(observable) {
    this.observable = observable;
    this.interestedVideos = [];
    // subscribe with the event name and the callback function
    this.observable.subscribe('video-uploaded', this.addVideo.bind(this));
  }

  addVideo(video) {
    this.interestedVideos.push(video);
  }
}

// Observer
class Observable {
  constructor() {
    this.handlers = {}; // event name -> list of handlers
  }

  subscribe(event, handler) {
    this.handlers[event] = this.handlers[event] || [];
    this.handlers[event].push(handler);
  }

  publish(event, eventData) {
    const eventHandlers = this.handlers[event];
    if (eventHandlers) {
      for (let i = 0, l = eventHandlers.length; i < l; ++i) {
        eventHandlers[i].call({}, eventData);
      }
    }
  }
}

// usage
const observable = new Observable();
const user = new User(observable);
const video = new Video(observable, 'ES6 Design Patterns', videoFile); // videoFile: the uploaded content
4. Mediator
The mediator pattern provides a unified interface through which different components of an application can communicate with each other.
If a system appears to have too many direct relationships between components, it may be time to have a central point of control that components communicate through instead.
The mediator promotes loose coupling.
A real-world analogy could be a traffic light that controls which vehicles go and stop, with all communication flowing through the traffic light.
Let’s create a chatroom (mediator) through which the participants can register themselves. The chatroom is responsible for handling the routing when the participants chat with each other.
// Each participant is represented by a Participant object
class Participant {
  constructor(name) {
    this.name = name;
    this.chatroom = null;
  }

  getParticipantDetails() {
    return this.name;
  }

  // sending always goes through the mediator
  send(message, to) {
    this.chatroom.send(message, this, to);
  }

  receive(message, from) {
    console.log(`${from.name} to ${this.name}: ${message}`);
  }
}

// Mediator
class Chatroom {
  constructor() {
    this.participants = {};
  }

  register(participant) {
    this.participants[participant.name] = participant;
    participant.chatroom = this;
  }

  send(message, from, to) {
    if (to) {
      // single message
      to.receive(message, from);
    } else {
      // broadcast message to everyone else
      for (const key in this.participants) {
        if (this.participants[key] !== from) {
          this.participants[key].receive(message, from);
        }
      }
    }
  }
}

// usage
// Create two participants
const john = new Participant('John');
const snow = new Participant('Snow');

// Register the participants with the chatroom
const chatroom = new Chatroom();
chatroom.register(john);
chatroom.register(snow);

// Participants now chat with each other
john.send('Hey, Snow!');
john.send('Are you there?');
snow.send('Hey man', john);
snow.send('Yes, I heard that!');
5. Command
In the command pattern, an operation is wrapped as a command object and passed to the invoker object. The invoker object passes the command to the corresponding object, which executes the command.
The command pattern decouples the objects executing commands from the objects issuing them, encapsulating actions as objects. It maintains a stack of commands: whenever a command is executed, it is pushed onto the stack. To undo a command, it pops the command off the stack and performs the reverse action.
You can consider a calculator as a command that performs addition, subtraction, division and multiplication, and each operation is encapsulated by a command object.
// The list of operations that can be performed
const addNumbers = (num1, num2) => num1 + num2;
const subNumbers = (num1, num2) => num1 - num2;
const multiplyNumbers = (num1, num2) => num1 * num2;
const divideNumbers = (num1, num2) => num1 / num2;

// CalculatorCommand is initialized with an execute function, an undo
// function, and the value
class CalculatorCommand {
  constructor(execute, undo, value) {
    this.execute = execute;
    this.undo = undo;
    this.value = value;
  }
}

// Here we are creating the command objects
const DoAddition = (value) => new CalculatorCommand(addNumbers, subNumbers, value);
const DoSubtraction = (value) => new CalculatorCommand(subNumbers, addNumbers, value);
const DoMultiplication = (value) => new CalculatorCommand(multiplyNumbers, divideNumbers, value);
const DoDivision = (value) => new CalculatorCommand(divideNumbers, multiplyNumbers, value);

// AdvancedCalculator maintains the list of executed commands so it can
// undo them
class AdvancedCalculator {
  constructor() {
    this.current = 0;
    this.commands = [];
  }

  execute(command) {
    this.current = command.execute(this.current, command.value);
    this.commands.push(command);
  }

  undo() {
    const command = this.commands.pop();
    this.current = command.undo(this.current, command.value);
  }

  getCurrentValue() {
    return this.current;
  }
}

// usage
const advCal = new AdvancedCalculator();

// invoke commands (the Do* helpers are plain functions, so no `new`)
advCal.execute(DoAddition(50)); // 50
advCal.execute(DoSubtraction(25)); // 25
advCal.execute(DoMultiplication(4)); // 100
advCal.execute(DoDivision(2)); // 50

// undo the last command (the division)
advCal.undo();
advCal.getCurrentValue(); // 100
6. Facade
The facade pattern is used when we want to expose a higher level of abstraction and hide the complexity of a large codebase behind it.
A great example of this pattern is found in common DOM manipulation libraries like jQuery, which simplify element selection and event binding.
Though these operations seem simple on the surface, complex logic runs behind the scenes when they are performed.
The following Account Creation example gives you clarity about the facade pattern:
// Here AccountManager is responsible for creating a new account of type
// Savings or Current with a unique account number
let currentAccountNumber = 0;

class AccountManager {
  createAccount(type, details) {
    const accountNumber = AccountManager.getUniqueAccountNumber();
    let account;
    if (type === 'current') {
      account = new CurrentAccounts();
    } else {
      account = new SavingsAccount();
    }
    return account.addAccount({ accountNumber, details });
  }
  static getUniqueAccountNumber() {
    return ++currentAccountNumber;
  }
}

// Accounts maintains the list of all accounts created
class Accounts {
  constructor() {
    this.accounts = [];
  }
  addAccount(account) {
    this.accounts.push(account);
    return this.successMessage(account);
  }
  getAccount(accountNumber) {
    return this.accounts.find(account => account.accountNumber === accountNumber);
  }
  successMessage(account) {}
}

// CurrentAccounts extends Accounts to provide a more specific success message
// on successful account creation
class CurrentAccounts extends Accounts {
  constructor() {
    super();
    if (CurrentAccounts.exists) {
      return CurrentAccounts.instance;
    }
    CurrentAccounts.instance = this;
    CurrentAccounts.exists = true;
    return this;
  }
  successMessage({ accountNumber, details }) {
    return `Current Account created with ${details}. ${accountNumber} is your account number.`;
  }
}

// Likewise, SavingsAccount extends Accounts to provide its own success message
// on successful account creation
class SavingsAccount extends Accounts {
  constructor() {
    super();
    if (SavingsAccount.exists) {
      return SavingsAccount.instance;
    }
    SavingsAccount.instance = this;
    SavingsAccount.exists = true;
    return this;
  }
  successMessage({ accountNumber, details }) {
    return `Savings Account created with ${details}. ${accountNumber} is your account number.`;
  }
}

// usage
// Here we are hiding the complexities of creating an account
const accountManager = new AccountManager();
const currentAccount = accountManager.createAccount('current', { name: 'John Snow', address: 'pune' });
const savingsAccount = accountManager.createAccount('savings', { name: 'Petter Kim', address: 'mumbai' });
7. Adapter
The adapter pattern converts the interface of a class to another expected interface, making two incompatible interfaces work together.
As an example of the adapter pattern, suppose you need to display data from a 3rd party library as a bar chart, but the data format of the 3rd party library's API differs from the format the bar chart expects. An adapter can convert the 3rd party API response into Highcharts' bar representation.
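The original code listing for this example was not preserved, so here is a minimal sketch. The 3rd party response shape (an `items` array with `label`/`count` fields) and the `ChartDataAdapter` name are assumptions for illustration; the output follows Highcharts' `xAxis.categories`/`series` options shape:

```javascript
// Hypothetical shape of the 3rd party API response
const apiResponse = {
  items: [
    { label: 'Q1', count: 120 },
    { label: 'Q2', count: 180 },
  ],
};

// ChartDataAdapter converts the 3rd party response into the
// options object a Highcharts bar chart expects
class ChartDataAdapter {
  adapt(response) {
    return {
      chart: { type: 'bar' },
      xAxis: { categories: response.items.map(item => item.label) },
      series: [{ data: response.items.map(item => item.count) }],
    };
  }
}

// usage: the chart code only ever sees the adapted format
const adapter = new ChartDataAdapter();
const chartConfig = adapter.adapt(apiResponse);
```

The caller stays decoupled from the 3rd party format; if the API changes, only the adapter needs updating.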
This has been a brief introduction to design patterns in modern JavaScript (ES6). The subject is massive, but hopefully this article has shown you the benefits of using these patterns when writing code.
Hackathons for technology startups are like picking up a good book. It may take a long time before you start, but once you do, you wonder why you didn't do it sooner. Last Friday, on 31st May 2019, we conducted our first Hackathon at Velotio, and it was a grand success!
Although challenging projects from our clients always push us to learn new things, we saw a whole new level of excitement and enthusiasm among our employees to bring their own ideas to life during the event. The 12-hour Hackathon saw participation from 15 teams, many of whom came well-prepared with frameworks and ideas so they could start building immediately.
Here are some pictures from the event:
The intense coding session was then followed by a series of presentations where all the teams showcased their solutions.
The first prize was bagged by Team Mitron who worked on a performance review app and was awarded a cash prize of 75,000 Rs.
The second prize of 50,000 Rs. was awarded to Team WireQ. Their solution was an easy sync up platform that would serve as a single source of truth for the designers, developers, and testers to work seamlessly on projects together — a problem we have often struggled with in-house as well.
Our QA Team put together a complete test suite framework that would perform all functional and non-functional testing activities, including maintaining consistency in testing, minimal code usage, improvement in test structuring and so on. They won the third prize worth 25,000 Rs.
Our heartiest congratulations to all the winners!
This Hackathon has definitely injected a lot of positive energy and innovation into our work culture and got so many of us to collaborate more effectively and learn from each other. We cannot wait to do our next Hackathon and share more with you all.
React Native provides a native mobile app development experience without sacrificing user experience or visual performance. And when it comes to mobile app UI testing, Appium is a great way to test React Native apps out of the box. Creating native apps from the same codebase, using JavaScript, has made React Native popular. Beyond this, businesses are attracted by the fact that they can save a lot of money by using this app development framework.
In this blog, we are going to cover how to add automated tests for React native apps using Appium & WebdriverIO with a Node.js framework.
What are React Native Apps
React Native is an open-source framework for building Android and iOS apps using React and native platform capabilities. With React Native, you use JavaScript to access your platform's APIs and define the appearance and behavior of your UI using React components: bundles of reusable, nestable code. In Android and iOS development, a "view" is the basic building block of a UI: a small rectangular element on the screen that can display text, images, or respond to user input. Even the smallest visual elements of an app, such as a line of text or a button, are kinds of views. Some views can contain other views.
What is Appium
Appium is an open-source tool for automating native, mobile web, and hybrid apps on iOS, Android, and Windows desktop platforms. Native apps are those written using the iOS or Android SDKs. Mobile web apps are accessed using a mobile browser (Appium supports Safari on iOS and Chrome or the built-in 'Browser' on Android). Hybrid apps have a wrapper around a "web view": a native control that allows interaction with web content. Projects like Apache Cordova make it easy to build applications using web technologies that are then bundled into a native wrapper, creating a hybrid app.
Importantly, Appium is "cross-platform": it lets you write tests against multiple platforms (iOS, Android) using the same API. This enables code reuse between iOS, Android, and Windows test suites. It drives iOS and Android applications using the WebDriver protocol.
Fig:- Appium Architecture
What is WebDriverIO
WebdriverIO is a next-gen browser and mobile automation test framework for Node.js. It allows you to automate any application written with modern web frameworks, such as React, Angular, Polymer, or Vue.js, as well as native mobile apps, on browsers or mobile devices.
WebdriverIO is a widely used test automation framework in JavaScript. It has many features: it supports numerous reporters and services, multiple test frameworks, and the WDIO CLI test runner.
The following are examples of supported services:
Appium Service
Devtools Service
Firefox Profile Service
Selenium Standalone Service
Shared Store Service
Static Server Service
ChromeDriver Service
Report Portal Service
Docker Service
The following test frameworks are supported:
Mocha
Jasmine
Cucumber
Fig:- WebdriverIO Architecture
Key features of Appium & WebdriverIO
Appium
Does not require application source code or library
Provides a strong and active community
Has multi-platform support, i.e., it can run the same test cases on multiple platforms
Allows the parallel execution of test scripts
In Appium, a small change does not require reinstallation of the application
Supports various languages like C#, Python, Java, Ruby, PHP, JavaScript with node.js, and many others that have a Selenium client library
WebdriverIO
Extendable
Compatible
Feature-rich
Supports modern web and mobile frameworks
Runs automated tests for both web applications and native mobile apps
Simple and easy syntax
Integrates tests to third-party tools such as Appium
‘Wdio setup wizard’ makes the setup simple and easy
A WebdriverIO configuration file must be created so that the configuration is applied during test runs. Generate it with the following command in the project root:
$ npx wdio config
Answer the following series of questions to install the required dependencies:
$ Where is your automation backend located? - On my local machine
$ Which framework do you want to use? - mocha
$ Do you want to use a compiler? - No
$ Where are your test specs located? - ./test/specs/**/*.js
$ Do you want WebdriverIO to autogenerate some test files? - Yes
$ Do you want to use page objects (https://martinfowler.com/bliki/PageObject.html)? - No
$ Which reporter do you want to use? - Allure
$ Do you want to add a service to your test setup? - No
$ What is the base url? - http://localhost
Steps to follow if the npm legacy-peer-deps problem occurs:
npm install --save --legacy-peer-deps
npm config set legacy-peer-deps true
npm i --legacy-peer-deps
npm cache clean --force
This is how the folder structure will look in Appium with the WebDriverIO Framework:
Fig:- Appium Framework Outline
Step-by-Step Configuration of Android Emulator using Android Studio
Fig:- Android Studio Launch
Fig:- Android Studio AVD Manager
Fig:- Create Virtual Device
Fig:- Choose a device Definition
Fig:- Select system image
Fig:- License Agreement
Fig:- Component Installer
Fig:- System Image Download
Fig:- Configuration Verification
Fig:- Virtual Device Listing
Appium Desktop Configuration
Fig:- Appium Desktop Launch
Setup of ANDROID_HOME + ANDROID_SDK_ROOT & JAVA_HOME
Follow these steps for setting up ANDROID_HOME:
vi ~/.bash_profile

Add the following:

export ANDROID_HOME=/Users/pushkar/android-sdk
export PATH=$PATH:$ANDROID_HOME/platform-tools
export PATH=$PATH:$ANDROID_HOME/tools
export PATH=$PATH:$ANDROID_HOME/tools/bin
export PATH=$PATH:$ANDROID_HOME/emulator

Save ~/.bash_profile, then reload and verify it:

source ~/.bash_profile
echo $ANDROID_HOME
/Users/pushkar/Library/Android/sdk
Follow these steps for setting up ANDROID_SDK_ROOT:
vi ~/.bash_profile

Add the following:

export ANDROID_HOME=/Users/pushkar/Android/sdk
export ANDROID_SDK_ROOT=/Users/pushkar/Android/sdk
export ANDROID_AVD_HOME=/Users/pushkar/.android/avd

Save ~/.bash_profile, then reload and verify it:

source ~/.bash_profile
echo $ANDROID_SDK_ROOT
/Users/pushkar/Library/Android/sdk
Follow these steps for setting up JAVA_HOME:
java --version
vi ~/.bash_profile

Add the following:

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-16.0.1.jdk/Contents/Home

Verify it:

echo $JAVA_HOME
/Library/Java/JavaVirtualMachines/jdk-16.0.1.jdk/Contents/Home
Fig:- Environment Variables in Appium
Fig:- Appium Server Starts
Fig:- Appium Start Inspector Session
Fig:- Inspector Session Configurations
Note: Make sure the app is installed from the Google Play Store.
Fig:- Android Emulator Launch
Fig: – Android Emulator with Facebook React Native Mobile App
Fig:- Success of Appium with Emulator
Fig:- Locating Elements using Appium Inspector
How to write E2E React Native Mobile App Tests
Fig:- Test Suite Structure of Mocha
Here is an example of how to write E2E test in Appium:
Positive Testing Scenario – Validate Login & Nav Bar
Open Facebook React Native App
Enter valid email and password
Click on Login
Users should be able to login into Facebook
Negative Testing Scenario – Invalid Login
Open Facebook React Native App
Enter invalid email and password
Click on login
Users should not be able to login after receiving an “Incorrect Password” message popup
Negative Testing Scenario – Invalid Element
Open Facebook React Native App
Enter invalid email and password
Click on login
Provide invalid element to capture message
Make sure the test script is placed under the test/specs folder:
var expect = require('chai').expect

beforeEach(() => {
  driver.launchApp()
})

afterEach(() => {
  driver.closeApp()
})

describe('Verify Login Scenarios on Facebook React Native Mobile App', () => {
  it('User should be able to login using valid credentials to Facebook Mobile App', () => {
    $(`~Username`).waitForDisplayed(20000)
    $(`~Username`).setValue('Valid-Email')
    $(`~Password`).waitForDisplayed(20000)
    $(`~Password`).setValue('Valid-Password')
    $('~Log In').click()
    browser.pause(10000)
  })

  it('User should not be able to login with invalid credentials to Facebook Mobile App', () => {
    $(`~Username`).waitForDisplayed(20000)
    $(`~Username`).setValue('Invalid-Email')
    $(`~Password`).waitForDisplayed(20000)
    $(`~Password`).setValue('Invalid-Password')
    $('~Log In').click()
    $('//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"]').waitForDisplayed(11000)
    const status = $('//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"]').getText()
    expect(status).to.equal(`You Can't Use This Feature Right Now`)
  })

  it('Test Case should Fail Because of Invalid Element', () => {
    $(`~Username`).waitForDisplayed(20000)
    $(`~Username`).setValue('Invalid-Email')
    $(`~Password`).waitForDisplayed(20000)
    $(`~Password`).setValue('Invalid-Password')
    $('~Log In').click()
    // The selectors below are deliberately malformed, so this test fails
    $('//android.widget.TextView[@resource-id="com.facebook.katana:id/(name removed)"').waitForDisplayed(11000)
    const status = $('//android.widget.TextView[@resource-id="com.facebook.katana"').getText()
    expect(status).to.equal(`You Can't Use This Feature Right Now`)
  })
})
How to Run Mobile Tests Scripts
$ npm test

This will create a Results folder with the .xml report.
Reporting
The following are examples of the supported reporters:
Allure Reporter
Concise Reporter
Dot Reporter
JUnit Reporter
Spec Reporter
Sumologic Reporter
Report Portal Reporter
Video Reporter
HTML Reporter
JSON Reporter
Mochawesome Reporter
Timeline Reporter
CucumberJS JSON Reporter
Here, we are using Allure Reporting. Allure Reporting in WebdriverIO is a plugin to create Allure Test Reports.
The easiest way is to keep @wdio/allure-reporter as a devDependency in your package.json:
$ npm install @wdio/allure-reporter --save-dev
Reporter options can be specified in the wdio.conf.js configuration file.
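As a sketch, the relevant section of wdio.conf.js could look like the following; `outputDir` and the two `disable…` flags are options documented for @wdio/allure-reporter, but tune them to your project (in the real file this object is assigned to `exports.config`):

```javascript
// wdio.conf.js (excerpt): register the Allure reporter with its options.
// outputDir is where the .xml results land, which the allure CLI reads later.
const config = {
  reporters: [
    ['allure', {
      outputDir: 'allure-results',
      disableWebdriverStepsReporting: true,       // hide raw WebDriver commands
      disableWebdriverScreenshotsReporting: false, // keep screenshots in the report
    }],
  ],
};
```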
To convert Allure .xml report to .html report, run the following command:
$ allure generate && allure open

The Allure HTML report should open in the browser.
This is what Allure Reports look like:
Fig:- Allure Report Overview
Fig:- Allure Categories
Fig:- Allure Suites
Fig: – Allure Graphs
Fig:- Allure Timeline
Fig:- Allure Behaviors
Fig:- Allure Packages
Limitations with Appium & WebDriverIO
Appium
Android versions lower than 4.2 are not supported for testing
Limited support for hybrid app testing
Doesn’t support image comparison.
WebdriverIO
It has a custom implementation
It can be used for automating AngularJS apps, but it is not as customized as Protractor.
Conclusion
In the QA and developer ecosystem, using Appium to test React Native applications is common. Appium makes it easy to run test cases on both Android and iOS platforms when working with React Native. The WebDriver protocol, which underlies Selenium, acts as the bridge between Appium and the mobile platforms. Appium is a solid framework for automated UI testing, and this article shows that it is capable of running test cases quickly and reliably. Most importantly, it can test both Android and iOS apps built with React Native from a single codebase.
Redux has greatly helped in reducing the complexities of state management. Its one-way data flow is easier to reason about, and it also provides a powerful mechanism for including middlewares, which can be chained together to do our bidding. One of the most common use cases for middleware is making async calls in the application. Middlewares like redux-thunk, redux-saga, and redux-observable are a few examples. All of these come with their own learning curve and are best suited for tackling different scenarios.
But what if our use case is simple enough and we don't want the added complexity that implementing a middleware brings? Can we somehow implement the most common use case of making async API calls using only Redux and plain JavaScript?
The answer is yes! This blog will explain how to implement async action calls in Redux without using any middleware.
So let us first start by creating a simple React project using create-react-app.
We will also use react-redux in addition to redux to make our lives a little easier, and to mock the APIs we will use https://jsonplaceholder.typicode.com/
We will implement just two API calls so as not to overcomplicate things.
Create a new file called api.js. This is the file in which we will keep the fetch calls to the endpoints.
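The api.js listing did not survive, so here is a sketch of what it could contain. The /posts and /users endpoints are assumptions picked from jsonplaceholder; in the real file these functions would be exported:

```javascript
// api.js - thin wrappers around fetch for the two mock endpoints
const BASE_URL = 'https://jsonplaceholder.typicode.com';

// Reject on non-2xx responses so callers can dispatch a FAIL action
const handleResponse = (response) => {
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json();
};

const fetchPosts = () => fetch(`${BASE_URL}/posts`).then(handleResponse);
const fetchUsers = () => fetch(`${BASE_URL}/users`).then(handleResponse);
```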
Each API call has three base actions associated with it: REQUEST, SUCCESS, and FAIL. Each of our APIs will be in one of these three states at any given time, and depending on these states we can decide how to render our UI. For example, when it is in the REQUEST state we can have the UI show a loader, and when it is in the FAIL state we can show a custom UI telling the user that something has gone wrong.
So, for each API call we will make, we create three constants: REQUEST, SUCCESS, and FAIL. In our case, the constants.js file will look something like this:
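The original constants.js listing was lost; assuming the two calls fetch posts and users, a sketch could be (exported from the real file):

```javascript
// constants.js - a REQUEST/SUCCESS/FAIL triplet for each API call
const FETCH_POSTS_REQUEST = 'FETCH_POSTS_REQUEST';
const FETCH_POSTS_SUCCESS = 'FETCH_POSTS_SUCCESS';
const FETCH_POSTS_FAIL = 'FETCH_POSTS_FAIL';

const FETCH_USERS_REQUEST = 'FETCH_USERS_REQUEST';
const FETCH_USERS_SUCCESS = 'FETCH_USERS_SUCCESS';
const FETCH_USERS_FAIL = 'FETCH_USERS_FAIL';
```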
As can be seen from the above code, each API's data lives in its own object inside the state object. The isLoading key tells us whether the API is in the REQUEST state.
Now that we have our store defined, let us see how we manipulate the state through the different phases an API call can be in. Below is our reducers.js file.
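The reducers.js listing was also lost; a minimal sketch under the same assumptions (two API slices, action types matching the constants above) could be:

```javascript
// reducers.js - each API call owns one slice of state with its own isLoading flag
const initialState = {
  posts: { isLoading: false, data: [], error: null },
  users: { isLoading: false, data: [], error: null },
};

function rootReducer(state = initialState, action) {
  switch (action.type) {
    case 'FETCH_POSTS_REQUEST':
      return { ...state, posts: { ...state.posts, isLoading: true } };
    case 'FETCH_POSTS_SUCCESS':
      return { ...state, posts: { isLoading: false, data: action.payload, error: null } };
    case 'FETCH_POSTS_FAIL':
      return { ...state, posts: { ...state.posts, isLoading: false, error: action.payload } };
    case 'FETCH_USERS_REQUEST':
      return { ...state, users: { ...state.users, isLoading: true } };
    case 'FETCH_USERS_SUCCESS':
      return { ...state, users: { isLoading: false, data: action.payload, error: null } };
    case 'FETCH_USERS_FAIL':
      return { ...state, users: { ...state.users, isLoading: false, error: action.payload } };
    default:
      return state;
  }
}
```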
By giving each individual API call its own variable to denote the loading phase, we can now easily implement something like multiple loaders on the same screen, according to which API call is in which phase.
Now, to actually implement the async behavior in the actions, we just need a normal JavaScript function that takes dispatch as its first argument. We pass dispatch to the function because it needs to dispatch actions to the store. Normally a component has access to dispatch, but since we want an external function to take control over dispatching, we need to hand dispatch to it explicitly.
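A sketch of such an action function, with a stand-in for the api.js call so the example is self-contained (the `loadPosts` name and the mock data are assumptions):

```javascript
// A stand-in for the api.js call; the real app would use the fetch wrapper
const fetchPosts = () => Promise.resolve([{ id: 1, title: 'hello' }]);

// actions.js - a plain function taking dispatch as its first argument,
// so it can announce each phase of the API call to the store
const loadPosts = (dispatch) => {
  dispatch({ type: 'FETCH_POSTS_REQUEST' });
  return fetchPosts()
    .then((data) => dispatch({ type: 'FETCH_POSTS_SUCCESS', payload: data }))
    .catch((error) => dispatch({ type: 'FETCH_POSTS_FAIL', payload: error.message }));
};

// usage from a connected component: loadPosts(dispatch)
```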
This is how we make async calls in Redux without middlewares. It is a much simpler approach than using a middleware, with none of the learning curve associated with one. If this approach covers all your use cases, then by all means use it.
Conclusion
This type of approach really shines when you have to build a simple application, like a demo of sorts, where API calls are all the side effects you need. In larger, more complicated applications, there are a few inconveniences with this approach. First, we have to pass dispatch around, which seems kind of yucky. We also have to remember which calls require dispatch and which do not.
In the world of data centers with wings and wheels, there is an opportunity to lay some work off from the centralized cloud computing by taking less compute intensive tasks to other components of the architecture. In this blog, we will explore the upcoming frontier of the web – Edge Computing.
What is the “Edge”?
The ‘Edge’ refers to having computing infrastructure closer to the source of data. It is a distributed framework where data is processed as close to the originating data source as possible. This infrastructure requires effective use of resources that may not be continuously connected to a network, such as laptops, smartphones, tablets, and sensors. Edge Computing covers a wide range of technologies including wireless sensor networks, cooperative distributed peer-to-peer ad-hoc networking and processing, also classifiable as local cloud/fog computing, mobile edge computing, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented reality, and more.
Cloud computing is expected to go through a phase of decentralization, and Edge Computing comes with the ideology of bringing compute, storage, and networking closer to the consumer.
But Why?
Legit question! Why do we even need Edge Computing? What are the advantages of having this new infrastructure?
Imagine a self-driving car that continuously sends a live stream to central servers and now has to make a crucial decision. The consequences can be disastrous if the car waits for the central servers to process the data and respond. Although algorithms like YOLOv2 have sped up object detection, the latency lies in the part of the system where the car has to send terabytes to the central server, receive the response, and then act. Hence, we need basic processing, like deciding when to stop or decelerate, to be done in the car itself.
The goal of Edge Computing is to minimize the latency by bringing the public cloud capabilities to the edge. This can be achieved in two forms – custom software stack emulating the cloud services running on existing hardware, and the public cloud seamlessly extended to multiple point-of-presence (PoP) locations.
Following are some promising reasons to use Edge Computing:
Privacy: Avoid sending all raw data to be stored and processed on cloud servers.
Real-time responsiveness: Sometimes the reaction time can be a critical factor.
Reliability: The system is capable of working even when disconnected from cloud servers, removing a single point of failure.
To understand the points mentioned above, let's take the example of a device that responds to a hot keyword, such as Jarvis from Iron Man. Imagine if your personal Jarvis sent all of your private conversations to a remote server for analysis. Instead, it is intelligent enough to respond only when called. At the same time, it is real-time and reliable.
Intel CEO Brian Krzanich said at an event that autonomous cars will generate 40 terabytes of data for every eight hours of driving. With that flood of data, transmission time goes up substantially. For self-driving cars, real-time or near-instant decisions are essential, and this is where edge computing infrastructure comes to the rescue: these cars need to decide in a split second whether to stop or not, or the consequences can be disastrous.
Another example is drones or quadcopters. Let's say we are using them to identify people or deliver relief packages; the machines should then be intelligent enough to make basic decisions locally, like changing their path to avoid obstacles.
This model of Edge Computing is basically an extension of the public cloud. Content Delivery Networks are classic examples of this topology, in which static content is cached and delivered through geographically spread edge locations.
Vapor IO is an emerging player in this category, attempting to build infrastructure for the cloud edge. Vapor IO has various products, like the Vapor Chamber. These are self-monitored: they have embedded sensors through which they are continuously monitored and evaluated by the Vapor Edge Controller (VEC) software. Vapor IO has also built OpenDCRE, which we will see later in this blog.
The fundamental difference between device edge and cloud edge lies in the deployment and pricing models. The deployment of these models – device edge and cloud edge – are specific to different use cases. Sometimes, it may be an advantage to deploy both the models.
Edges around you
Edge Computing examples can be increasingly found around us:
Smart street lights
Automated Industrial Machines
Mobile devices
Smart Homes
Automated Vehicles (cars, drones etc)
Data transmission is expensive. By bringing compute closer to the origin of the data, latency is reduced and end users get a better experience. Some evolving use cases of Edge Computing are Augmented Reality (AR), Virtual Reality (VR), and the Internet of Things. For example, the rush people got while playing an Augmented Reality-based Pokemon game wouldn't have been possible if "real-timeliness" were not present in the game; it was made possible because the smartphone itself was doing the AR, not the central servers. Even Machine Learning (ML) can benefit greatly from Edge Computing: all the heavy-duty training of ML algorithms can be done on the cloud, while the trained model is deployed on the edge for near real-time, or even real-time, predictions. We can see that in today's data-driven world, edge computing is becoming a necessary component.
There is a lot of confusion between Edge Computing and IoT. Stated simply, Edge Computing is, in a way, the intelligent Internet of Things (IoT), and it actually complements traditional IoT. In the traditional model of IoT, all the devices (sensors, mobiles, laptops, etc.) are connected to a central server. Now imagine you command your lamp to switch off: for such a simple task, data needs to be transmitted to the cloud and analyzed there, and only then does the lamp receive the command to switch off. Edge Computing brings computing closer to your home, so that either the fog layer between the lamp and the cloud servers is smart enough to process the data, or the lamp itself is.
If we look at the image below, it shows a standard IoT implementation where everything is centralized, while the Edge Computing philosophy talks about decentralizing the architecture.
The Fog
Sandwiched between the edge layer and the cloud layer is the Fog Layer, which bridges the connection between the other two layers.
The difference between fog and edge computing is described in this article –
Fog Computing – Fog computing pushes intelligence down to the local area network level of network architecture, processing data in a fog node or IoT gateway.
Edge computing pushes the intelligence, processing power and communication capabilities of an edge gateway or appliance directly into devices like programmable automation controllers (PACs).
How do we manage Edge Computing?
Device Relationship Management (DRM) refers to managing and monitoring interconnected components over the internet. Examples include AWS IoT Core and AWS Greengrass; Nebbiolo Technologies has developed Fog Node and Fog OS; and Vapor IO has OpenDCRE, with which one can control and monitor data centers.
Following image (source – AWS) shows how to manage ML on Edge Computing using AWS infrastructure.
AWS Greengrass makes it possible for users to use Lambda functions to build IoT devices and application logic. Specifically, AWS Greengrass provides cloud-based management of applications that can be deployed for local execution. Locally deployed Lambda functions are triggered by local events, messages from the cloud, or other sources.
This GitHub repo demonstrates a traffic light example using two Greengrass devices, a light controller, and a traffic light.
Conclusion
We believe that next-gen computing will be influenced a lot by Edge Computing and will continue to explore new use-cases that will be made possible by the Edge.