Tag: protobuf

  • Introduction to the Modern Server-side Stack – Golang, Protobuf, and gRPC

    There are some new players in town for server programming and this time it’s all about Google. Golang has rapidly been gaining popularity ever since Google started using it for their own production systems. And since the inception of Microservice Architecture, people have been focusing on modern data communication solutions like gRPC along with Protobuf. In this post, I will walk you through each of these briefly.

    Golang

    Golang or Go is an open source, general purpose programming language by Google. It has been gaining popularity recently, and for all the good reasons. It may come as a surprise to most people that the language is almost 10 years old and has been production ready for almost 7 years, according to Google.

    Golang is designed to be simple, modern, easy to understand, and quick to grasp. The creators of the language designed it in such a way that an average programmer can have a working knowledge of the language over a weekend, and I can attest that they definitely succeeded. Speaking of the creators, these are experts who were involved in the original draft of the C language, so we can be assured that these guys know what they are doing.

    That’s all good but why do we need another language?

    For most of the use cases, we actually don’t. In fact, Go doesn’t solve any new problems that haven’t been solved by some other language/tool before. But it does try to solve a specific set of relevant problems that people generally face in an efficient, elegant, and intuitive manner. Go’s primary focus is the following:

    • First class support for concurrency
    • An elegant, modern language that is very simple to its core
    • Very good performance
    • First hand support for the tools required for modern software development

    I’m going to briefly explain how Go provides all of the above. You can read more about the language and its features in detail from Go’s official website.

    Concurrency

    Concurrency is one of the primary concerns in most server applications, and it should be a primary concern of the language, considering modern microprocessors. Go introduces a concept called the ‘goroutine’. A ‘goroutine’ is analogous to a ‘lightweight user-space thread’. The reality is more complicated, as several goroutines multiplex onto a single OS thread, but that description should give you a general idea. Goroutines are light enough that you can actually spin up a million of them simultaneously, as they start with a very tiny stack; in fact, that’s recommended. Any function/method in Go can be used to spawn a goroutine: just write ‘go myAsyncTask()’ to spawn a goroutine from the ‘myAsyncTask’ function. The following is an example:

    // This function performs the given tasks concurrently by spawning a goroutine
    // for each of them.
    
    func performAsyncTasks(tasks []Task) {
      for _, task := range tasks {
        // This will spawn a separate goroutine to carry out this task.
        // The call is non-blocking.
        go task.Execute()
      }
    }

    Yes, it’s that easy, and it is meant to be that way: Go is a simple language, and you are expected to spawn a goroutine for every independent async task without worrying much about it. Go’s runtime automatically takes care of running the goroutines in parallel if multiple cores are available. But how do these goroutines communicate? The answer is channels.

    A ‘channel’ is also a language primitive, meant for communication among goroutines. You can pass almost anything through a channel to another goroutine: a primitive Go type, a Go struct, or even another channel. A channel is essentially a blocking FIFO queue, buffered or unbuffered. If you want one or more goroutines to wait for a certain condition to be met before continuing, you can implement cooperative blocking of goroutines with the help of channels.
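
    As an illustration (a minimal sketch, not from the original post; the `sum` helper is made up), two goroutines can report their results back to the caller over a channel, and receiving from the channel doubles as synchronization:

    ```go
    package main

    import "fmt"

    // sum adds up nums and sends the result on out. The send blocks until
    // another goroutine receives, which is what synchronizes the caller.
    func sum(nums []int, out chan<- int) {
    	total := 0
    	for _, n := range nums {
    		total += n
    	}
    	out <- total
    }

    func main() {
    	out := make(chan int)

    	// Each call runs concurrently with main.
    	go sum([]int{1, 2, 3}, out)
    	go sum([]int{4, 5, 6}, out)

    	// Receiving blocks until each goroutine has produced its result.
    	a, b := <-out, <-out
    	fmt.Println(a + b) // 21, regardless of which goroutine finished first
    }
    ```

    The channel receives also serve as the "join" points, so no explicit locks or condition variables are needed.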

    These two primitives give a lot of flexibility and simplicity in writing asynchronous or parallel code. Other helper libraries like a goroutine pool can be easily created from the above primitives. One basic example is:

    package executor
    
    import (
    	"log"
    	"sync/atomic"
    )
    
    // The Executor struct is the main executor for tasks.
    // 'maxWorkers' represents the maximum number of simultaneous goroutines.
    // 'ActiveWorkers' tells the number of active goroutines spawned by the Executor at given time.
    // 'Tasks' is the channel on which the Executor receives the tasks.
    // 'Reports' is the channel on which the Executor publishes each task's report.
    // 'signals' is a channel that can be used to control the Executor. Right now, only the
    // termination signal is supported, which the client requests by sending '1' on this channel.
    type Executor struct {
    	maxWorkers    int64
    	ActiveWorkers int64
    
    	Tasks   chan Task
    	Reports chan Report
    	signals chan int
    }
    
    // NewExecutor creates a new Executor.
    // 'maxWorkers' tells the maximum number of simultaneous goroutines.
    // 'signals' channel can be used to control the Executor.
    func NewExecutor(maxWorkers int, signals chan int) *Executor {
    	chanSize := 1000
    
    	if maxWorkers > chanSize {
    		chanSize = maxWorkers
    	}
    
    	executor := Executor{
    		maxWorkers: int64(maxWorkers),
    		Tasks:      make(chan Task, chanSize),
    		Reports:    make(chan Report, chanSize),
    		signals:    signals,
    	}
    
    	go executor.launch()
    
    	return &executor
    }
    
    // launch starts the main loop, polling on all the relevant channels and handling the
    // different messages.
    func (executor *Executor) launch() int {
    	reports := make(chan Report, executor.maxWorkers)
    
    	for {
    		select {
    		case signal := <-executor.signals:
    			if executor.handleSignals(signal) == 0 {
    				return 0
    			}
    
    		case r := <-reports:
    			executor.addReport(r)
    
    		default:
    			if executor.ActiveWorkers < executor.maxWorkers && len(executor.Tasks) > 0 {
    				task := <-executor.Tasks
    				atomic.AddInt64(&executor.ActiveWorkers, 1)
    				go executor.launchWorker(task, reports)
    			}
    		}
    	}
    }
    
    // handleSignals is called whenever anything is received on the 'signals' channel.
    // It performs the relevant task according to the received signal(request) and then responds either
    // with 0 or 1 indicating whether the request was respected(0) or rejected(1).
    func (executor *Executor) handleSignals(signal int) int {
    	if signal == 1 {
    		log.Println("Received termination request...")
    
    		if executor.Inactive() {
    			log.Println("No active workers, exiting...")
    			executor.signals <- 0
    			return 0
    		}
    
    		executor.signals <- 1
    		log.Println("Some tasks are still active...")
    	}
    
    	return 1
    }
    
    // launchWorker is called whenever a new Task is received and the Executor is allowed to
    // spawn a new worker.
    // Each worker is launched on a new goroutine. It performs the given task and publishes the report on
    // the Executor's internal reports channel.
    func (executor *Executor) launchWorker(task Task, reports chan<- Report) {
    	report := task.Execute()
    
    	if len(reports) < cap(reports) {
    		reports <- report
    	} else {
    		log.Println("Executor's report channel is full...")
    	}
    
    	atomic.AddInt64(&executor.ActiveWorkers, -1)
    }
    
    // AddTask is used to submit a new task to the Executor in a non-blocking way. The client can
    // submit a new task using the Executor's tasks channel directly, but that will block if the
    // tasks channel is full.
    // Note that this method doesn't add the given task if the tasks channel is full, and it is up
    // to the client to try again later.
    func (executor *Executor) AddTask(task Task) bool {
    	if len(executor.Tasks) == cap(executor.Tasks) {
    		return false
    	}
    
    	executor.Tasks <- task
    	return true
    }
    
    // addReport is used by the Executor to publish reports in a non-blocking way. If the client is
    // not reading the reports channel, or is slower than the Executor publishing reports, the
    // Executor's reports channel will fill up. In that case this method will not block, and the
    // report will not be added.
    func (executor *Executor) addReport(report Report) bool {
    	if len(executor.Reports) == cap(executor.Reports) {
    		return false
    	}
    
    	executor.Reports <- report
    	return true
    }
    
    // Inactive checks if the Executor is idle. This happens when there are no pending tasks, active
    // workers and reports to publish.
    func (executor *Executor) Inactive() bool {
    	return executor.ActiveWorkers == 0 && len(executor.Tasks) == 0 && len(executor.Reports) == 0
    }
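
    To see the core of the pattern without the bookkeeping, here is a self-contained miniature of the same idea (illustrative names, not from the post): a buffered "semaphore" channel caps the number of concurrent workers, and a results channel collects their output:

    ```go
    package main

    import (
    	"fmt"
    	"sync"
    )

    // runPool runs one goroutine per task, but at most maxWorkers at a time,
    // and collects the results on a channel. The "task" here is just squaring
    // a number, standing in for Task.Execute() above.
    func runPool(tasks []int, maxWorkers int) []int {
    	sem := make(chan struct{}, maxWorkers)   // limits concurrent workers
    	results := make(chan int, len(tasks))    // buffered, so sends never block
    	var wg sync.WaitGroup

    	for _, t := range tasks {
    		wg.Add(1)
    		sem <- struct{}{} // blocks while maxWorkers goroutines are active
    		go func(n int) {
    			defer wg.Done()
    			results <- n * n
    			<-sem // release the worker slot
    		}(t)
    	}

    	wg.Wait()
    	close(results)

    	var out []int
    	for r := range results {
    		out = append(out, r)
    	}
    	return out
    }

    func main() {
    	fmt.Println(runPool([]int{1, 2, 3, 4}, 2))
    }
    ```

    The results arrive in whatever order the workers finish, which is why the full Executor above carries reports rather than relying on ordering.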

    Simple Language

    Unlike a lot of other modern languages, Golang doesn’t have a lot of features. In fact, a compelling case can be made for the language being too restrictive in its feature set and that’s intended. It is not designed around a programming paradigm like Java or designed to support multiple programming paradigms like Python. It’s just bare bones structural programming. Just the essential features thrown into the language and not a single thing more.

    After looking at the language, you may feel that it doesn’t follow any particular philosophy or direction; every feature seems to be included to solve a specific problem and nothing more. For example, it has methods and interfaces but no classes; the compiler produces a statically linked binary, yet the language has a garbage collector; it has strict static typing but doesn’t support generics. The language does have a thin runtime, but doesn’t support exceptions.

    The main idea here is that the developer should spend the least amount of time expressing his/her idea or algorithm as code, without agonizing over “What’s the best way to do this in language X?”, and the result should be easy for others to understand. It’s still not perfect; it does feel limiting from time to time, and some essential features like generics and exceptions are being considered for ‘Go 2’.

    Performance

    Single-threaded execution performance is NOT a good metric for judging a language, especially one focused on concurrency and parallelism. Still, Golang sports impressive benchmark numbers, beaten only by hardcore systems programming languages like C, C++, and Rust, and it is still improving. The performance is actually very impressive considering it’s a garbage-collected language, and it is good enough for almost every use case.

    (Image Source: Medium)

    Developer Tooling

    The adoption of a new tool/language depends directly on its developer experience, and the adoption of Go speaks for its tooling. The same philosophy applies here: the tooling is minimal but sufficient, all achieved through the ‘go’ command and its subcommands, entirely on the command line.

    There is no separate package manager for the language like pip or npm, but you can fetch any community package by just doing:

    go get github.com/velotiotech/WebCrawler/executor

    Yes, it works. You can pull packages directly from GitHub or anywhere else; they are just source files.

    But what about package.json? I don’t see any equivalent for `go get`. That’s because there isn’t one. You don’t need to declare all your dependencies in a single file. You can directly write:

    import "github.com/xlab/pocketsphinx-go/sphinx"

    in your source file itself, and when you run `go build`, it will automatically `go get` it for you. You can see the full source file here:

    package main
    
    import (
    	"encoding/binary"
    	"bytes"
    	"log"
    	"os/exec"
    
    	"github.com/xlab/pocketsphinx-go/sphinx"
    	pulse "github.com/mesilliac/pulse-simple" // pulse-simple
    )
    
    var buffSize int
    
    func readInt16(buf []byte) (val int16) {
    	binary.Read(bytes.NewBuffer(buf), binary.LittleEndian, &val)
    	return
    }
    
    func createStream() *pulse.Stream {
    	ss := pulse.SampleSpec{pulse.SAMPLE_S16LE, 16000, 1}
    	buffSize = int(ss.UsecToBytes(1 * 1000000))
    	stream, err := pulse.Capture("pulse-simple test", "capture test", &ss)
    	if err != nil {
    		log.Panicln(err)
    	}
    	return stream
    }
    
    func listen(decoder *sphinx.Decoder) {
    	stream := createStream()
    	defer stream.Free()
    	defer decoder.Destroy()
    	buf := make([]byte, buffSize)
    	var bits []int16
    
    	log.Println("Listening...")
    
    	for {
    		_, err := stream.Read(buf)
    		if err != nil {
    			log.Panicln(err)
    		}
    
    		for i := 0; i < buffSize; i += 2 {
    			bits = append(bits, readInt16(buf[i:i+2]))
    		}
    
    		process(decoder, bits)
    		bits = nil
    	}
    }
    
    func process(dec *sphinx.Decoder, bits []int16) {
    	if !dec.StartUtt() {
    		panic("Decoder failed to start Utt")
    	}
    	
    	dec.ProcessRaw(bits, false, false)
    	dec.EndUtt()
    	hyp, score := dec.Hypothesis()
    	
    	if score > -2500 {
    		log.Println("Predicted:", hyp, score)
    		handleAction(hyp)
    	}
    }
    
    func executeCommand(commands ...string) {
    	cmd := exec.Command(commands[0], commands[1:]...)
    	cmd.Run()
    }
    
    func handleAction(hyp string) {
    	switch hyp {
    	case "SLEEP":
    		executeCommand("loginctl", "lock-session")
    
    	case "WAKE UP":
    		executeCommand("loginctl", "unlock-session")
    
    	case "POWEROFF":
    		executeCommand("poweroff")
    	}
    }
    
    func main() {
    	cfg := sphinx.NewConfig(
    		sphinx.HMMDirOption("/usr/local/share/pocketsphinx/model/en-us/en-us"),
    		sphinx.DictFileOption("6129.dic"),
    		sphinx.LMFileOption("6129.lm"),
    		sphinx.LogFileOption("commander.log"),
    	)
    	
    	dec, err := sphinx.NewDecoder(cfg)
    	if err != nil {
    		panic(err)
    	}
    
    	listen(dec)
    }

    This binds the dependency declaration to the source itself.

    As you can see by now, it’s simple, minimal, and yet sufficient and elegant. There is first-hand support for both unit tests and benchmarks, with flame charts too. Just like the feature set, the tooling has its downsides: for example, `go get` doesn’t support versions, and you are locked to the import URL used in your source files. It is evolving, though, and other tools have come up for dependency management.
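
    For instance (a hypothetical `mathx` package, not from the post), a single file named `mathx_test.go` gives you both a unit test via `go test` and a benchmark via `go test -bench=.`, with no third-party framework:

    ```go
    package mathx

    import "testing"

    // Add is the trivial function under test.
    func Add(a, b int) int { return a + b }

    // Any function named TestXxx taking *testing.T is run by `go test`.
    func TestAdd(t *testing.T) {
    	if got := Add(2, 3); got != 5 {
    		t.Fatalf("Add(2, 3) = %d, want 5", got)
    	}
    }

    // Any function named BenchmarkXxx taking *testing.B is run by
    // `go test -bench=.`; the framework picks b.N automatically.
    func BenchmarkAdd(b *testing.B) {
    	for i := 0; i < b.N; i++ {
    		Add(2, 3)
    	}
    }
    ```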

    Golang was originally designed to solve the problems Google had with their massive code bases and the imperative need to write efficient concurrent applications. It makes coding applications/libraries that utilize the multicore nature of modern microchips very easy. And it never gets in a developer’s way. It’s a simple, modern language, and it never tries to become anything more than that.

    Protobuf (Protocol Buffers)

    Protobuf, or Protocol Buffers, is a binary communication format by Google, used to serialize structured data. A communication format? Kind of like JSON? Yes. It’s more than 10 years old, and Google has been using it for a while now.

    But don’t we have JSON and it’s so ubiquitous…

    Just like Golang, Protobuf doesn’t really solve anything new. It just solves existing problems more efficiently and in a modern way. Unlike Golang, it is not necessarily more elegant than the existing solutions. Here are the focus points of Protobuf:

    • It’s a binary format, unlike JSON and XML, which are text-based, and hence it’s vastly more space-efficient.
    • First hand and sophisticated support for schemas.
    • First hand support for generating parsing and consumer code in various languages.

    Binary format and speed

    So is Protobuf really that fast? The short answer is yes. According to Google Developers, protobufs are 3 to 10 times smaller and 20 to 100 times faster than XML. That’s no surprise: it is a binary format, so the serialized data is not human-readable.

    (Image Source: Beating JSON performance with Protobuf)

    Protobufs take a more planned approach. You define `.proto` files, which are schema files of a sort but much more powerful. You essentially define how you want your messages to be structured: which fields are optional or required, their data types, and so on. After that, the Protobuf compiler will generate the data access classes for you. You can use these classes in your business logic to facilitate communication.

    Looking at a `.proto` file related to a service will also give you a very clear idea of the specifics of the communication and the features that are exposed. A typical .proto file looks like this:

    message Person {
      required string name = 1;
      required int32 id = 2;
      optional string email = 3;
    
      enum PhoneType {
        MOBILE = 0;
        HOME = 1;
        WORK = 2;
      }
    
      message PhoneNumber {
        required string number = 1;
        optional PhoneType type = 2 [default = HOME];
      }
    
      repeated PhoneNumber phone = 4;
    }
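
    The data access classes mentioned above come from running the Protobuf compiler on this file. Assuming it is saved as `person.proto` (a hypothetical filename) and `protoc` is installed, an invocation might look like:

    ```shell
    # Emit data access classes for the language of your choice, e.g.:
    protoc --python_out=. person.proto   # generates person_pb2.py
    protoc --go_out=. person.proto       # Go structs; needs the protoc-gen-go plugin
    ```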

    Fun fact: Jon Skeet, the king of Stack Overflow, is one of the main contributors to the project.

    gRPC

    gRPC, as you guessed it, is a modern RPC (Remote Procedure Call) framework. It is a batteries included framework with built in support for load balancing, tracing, health checking, and authentication. It was open sourced by Google in 2015 and it’s been gaining popularity ever since.

    An RPC framework…? What about REST…?

    SOAP with WSDL was long used for communication between different systems in a Service-Oriented Architecture. Back then, contracts were strictly defined, and systems were big and monolithic, exposing large numbers of such interfaces.

    Then came the concept of ‘browsing’, where the server and client don’t need to be tightly coupled. A client should be able to browse a service’s offerings even if the two were developed independently. If the client asks for information about a book, the service may, along with what’s requested, also offer a list of related books so that the client can keep browsing. The REST paradigm was essential to this, as it allows the server and client to communicate freely, without strict restrictions, using a few primitive verbs.

    In this scenario, the service behaves like a monolithic system: along with what is required, it also does any number of other things to provide the client with the intended ‘browsing’ experience. But this is not always the use case, is it?

    Enter the Microservices

    There are many reasons to adopt a Microservice Architecture, the prominent one being that it is very hard to scale a monolithic system. When designing a big system with a Microservices Architecture, each business or technical requirement is intended to be carried out as a cooperative composition of several primitive ‘micro’ services.

    These services don’t need to be comprehensive in their responses. They should perform specific duties with expected responses. Ideally, they should behave like pure functions for seamless composability.

    Using REST as a communication paradigm for such services doesn’t provide us with much benefit. Exposing a REST API does give a service a lot of expressive power, but if that expressive power is neither required nor intended, we can use a paradigm that focuses on other factors instead.

    gRPC intends to improve upon the following technical aspects over traditional HTTP requests:

    • HTTP/2 by default with all its goodies.
    • Protobuf as the wire format, since machines are talking to machines.
    • Dedicated support for streaming calls thanks to HTTP/2.
    • Pluggable auth, tracing, load balancing and health checking because you always need these.

    As it’s an RPC framework, we again have concepts like service definitions and an Interface Description Language, which may feel alien to people who weren’t around before REST, but this time it feels a lot less clumsy, as gRPC uses Protobuf for both of these.

    Protobuf is designed in such a way that it can be used as a communication format as well as a protocol specification tool without introducing anything new. A typical gRPC service definition looks like this:

    service HelloService {
      rpc SayHello (HelloRequest) returns (HelloResponse);
    }
    
    message HelloRequest {
      string greeting = 1;
    }
    
    message HelloResponse {
      string reply = 1;
    }

    You just write a `.proto` file for your service, describing the interface: its name, what it expects, and what it returns, as Protobuf messages. The Protobuf compiler will then generate both the client-side and server-side code. Clients can call the generated code directly, and the server side can implement these APIs to fill in the business logic.

    Conclusion

    Golang, along with gRPC using Protobuf is an emerging stack for modern server programming. Golang simplifies making concurrent/parallel applications and gRPC with Protobuf enables efficient communication with a pleasing developer experience.

  • Implementing gRPC In Python: A Step-by-step Guide

    In the last few years, we have seen a great shift in technology, with projects moving towards a “microservice architecture” and away from the old “monolithic architecture”. This approach has done wonders for us.

    As the saying goes, “smaller things are much easier to handle”, so here we have microservices that can be handled conveniently. These microservices need to interact with each other. I handled that using HTTP API calls, which seemed great and worked for me.

    But is this the perfect way to do things?

    The answer is a resounding “no”, because we compromised both speed and efficiency there.

    Then the gRPC framework came into the picture, and it has been a game-changer.

    What is gRPC?

    Quoting the official documentation:

    “gRPC or Google Remote Procedure Call is a modern open-source high-performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication.”

     

    Credit: gRPC

    RPC, or remote procedure call, is a request that a client sends to a remote system to get a task (or subroutine) executed there, as if it were a local call.

    Google’s RPC framework is designed to facilitate smooth and efficient communication between services. It can be utilized in different ways, such as:

    • Efficiently connecting polyglot services in microservices style architecture
    • Connecting mobile devices, browser clients to backend services
    • Generating efficient client libraries

    Why gRPC? 

    HTTP/2-based transport – It uses the HTTP/2 protocol instead of HTTP/1.1, which provides multiple benefits. One major benefit is that multiple bidirectional streams can be created and sent over a single TCP connection in parallel, making it swift.

    Auth, tracing, load balancing and health checking – gRPC provides all these features, making it a secure and reliable option to choose.

    Language-independent communication – Two services may be written in different languages, say Python and Golang. gRPC ensures smooth communication between them.

    Use of Protocol Buffers – gRPC uses protocol buffers to define the type of data (also called the Interface Definition Language, or IDL) to be sent between the gRPC client and the gRPC server. It also uses them as the message interchange format.

    Let’s dig a little more into what are Protocol Buffers.

    Protocol Buffers

    Protocol Buffers are, like XML, a mechanism for serializing structured data, but more efficient and automated. They provide a way to define the structure of the data to be transmitted. Google says that protocol buffers are better than XML, as they are:

    • simpler
    • three to ten times smaller
    • 20 to 100 times faster
    • less ambiguous
    • able to generate data access classes that make the data easier to use programmatically

    Protobufs are defined in .proto files, and they are easy to define.

    Types of gRPC implementation

    1. Unary RPCs:- This is a simple gRPC which works like a normal function call. It sends a single request declared in the .proto file to the server and gets back a single response from the server.

    rpc HelloServer(RequestMessage) returns (ResponseMessage);

    2. Server streaming RPCs:- The client sends a message declared in the .proto file to the server and gets back a stream of messages to read. The client reads from that stream until there are no more messages.

    rpc HelloServer(RequestMessage) returns (stream ResponseMessage);

    3. Client streaming RPCs:- The client writes a sequence of messages to a stream and sends it to the server. After all the messages are sent, the client waits for the server to read them all and return its response.

    rpc HelloServer(stream RequestMessage) returns (ResponseMessage);

    4. Bidirectional streaming RPCs:- Both gRPC client and the gRPC server use a read-write stream to send a message sequence. Both operate independently, so gRPC clients and gRPC servers can write and read in any order they like, i.e. the server can read a message then write a message alternatively, wait to receive all messages then write its responses, or perform reads and writes in any other combination.

    rpc HelloServer(stream RequestMessage) returns (stream ResponseMessage);

    Note: gRPC guarantees the ordering of messages within an individual RPC call. In the case of bidirectional streaming, the order of messages is preserved within each stream.

    Implementing gRPC in Python

    Currently, gRPC provides support for many languages, like Golang, C++, and Java. I will be focusing on its implementation using Python.

    mkdir grpc_example
    cd grpc_example
    virtualenv -p python3 env
    source env/bin/activate
    pip install grpcio grpcio-tools

    This will install all the required dependencies to implement gRPC.

    Unary gRPC 

    For implementing gRPC services, we need to define three files:-

    • Proto file – The proto file comprises the declaration of the service and is used to generate the stubs (<package_name>_pb2.py and <package_name>_pb2_grpc.py). These are used by the gRPC client and the gRPC server.
    • gRPC client – The client makes a gRPC call to the server to get the response as per the proto file.
    • gRPC Server – The server is responsible for serving requests to the client.
    syntax = "proto3";
    
    package unary;
    
    service Unary{
      // A simple RPC.
      //
      // Obtains the MessageResponse at a given position.
     rpc GetServerResponse(Message) returns (MessageResponse) {}
    
    }
    
    message Message{
     string message = 1;
    }
    
    message MessageResponse{
     string message = 1;
     bool received = 2;
    }

    In the above code, we have declared a service named Unary, which consists of a collection of RPCs. For now, I have implemented a single RPC, GetServerResponse(). It takes an input of type Message and returns a MessageResponse. Below the service declaration, I have declared Message and MessageResponse.

    Once we are done with the creation of the .proto file, we need to generate the stubs. For that, we will execute the below command:-

    python -m grpc_tools.protoc --proto_path=. ./unary.proto --python_out=. --grpc_python_out=.

    Two files are generated named unary_pb2.py and unary_pb2_grpc.py. Using these two stub files, we will implement the gRPC server and the client.

    Implementing the Server

    import grpc
    from concurrent import futures
    import time
    import unary.unary_pb2_grpc as pb2_grpc
    import unary.unary_pb2 as pb2
    
    
    class UnaryService(pb2_grpc.UnaryServicer):
    
        def __init__(self, *args, **kwargs):
            pass
    
        def GetServerResponse(self, request, context):
    
            # get the string from the incoming request
            message = request.message
            result = f'Hello I am up and running received "{message}" message from you'
            result = {'message': result, 'received': True}
    
            return pb2.MessageResponse(**result)
    
    
    def serve():
        server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
        pb2_grpc.add_UnaryServicer_to_server(UnaryService(), server)
        server.add_insecure_port('[::]:50051')
        server.start()
        server.wait_for_termination()
    
    
    if __name__ == '__main__':
        serve()

    In the gRPC server file, there is a GetServerResponse() method which takes `Message` from the client and returns a `MessageResponse` as defined in the proto file.

    The serve() function is called from the main function and keeps the server listening at all times. We will run unary_server.py to start the server:

    python3 unary_server.py

    Implementing the Client

    import grpc
    import unary.unary_pb2_grpc as pb2_grpc
    import unary.unary_pb2 as pb2
    
    
    class UnaryClient(object):
        """
        Client for gRPC functionality
        """
    
        def __init__(self):
            self.host = 'localhost'
            self.server_port = 50051
    
            # instantiate a channel
            self.channel = grpc.insecure_channel(
                '{}:{}'.format(self.host, self.server_port))
    
            # bind the client and the server
            self.stub = pb2_grpc.UnaryStub(self.channel)
    
        def get_url(self, message):
            """
            Client function to call the rpc for GetServerResponse
            """
            message = pb2.Message(message=message)
            print(f'{message}')
            return self.stub.GetServerResponse(message)
    
    
    if __name__ == '__main__':
        client = UnaryClient()
        result = client.get_url(message="Hello Server you there?")
        print(f'{result}')

    In the __init__ function, we have initialized the stub using `self.stub = pb2_grpc.UnaryStub(self.channel)`. We also have a get_url function, which calls the server using the stub initialized above.

    This completes the implementation of Unary gRPC service.

    Let’s check the output:-

    Run -> python3 unary_client.py 

    Output:-

    message: "Hello Server you there?"

    message: "Hello I am up and running received \"Hello Server you there?\" message from you"

    received: true

    Bidirectional Implementation

    syntax = "proto3";
    
    package bidirectional;
    
    service Bidirectional {
      // A Bidirectional streaming RPC.
      //
      // Accepts a stream of Message and replies with a stream of Message.
       rpc GetServerResponse(stream Message) returns (stream Message) {}
    }
    
    message Message {
      string message = 1;
    }

    In the above code, we have declared a service named Bidirectional. For now, I have implemented a single RPC, GetServerResponse(), which takes a stream of Message as input and returns a stream of Message. Below the service declaration, I have declared Message.

    Once we are done with the creation of the .proto file, we need to generate the stubs. To generate the stubs, we need to execute the below command:-

    python -m grpc_tools.protoc --proto_path=. ./bidirectional.proto --python_out=. --grpc_python_out=.

    Two files are generated named bidirectional_pb2.py and bidirectional_pb2_grpc.py. Using these two stub files, we will implement the gRPC server and client.

    Implementing the Server

    from concurrent import futures
    
    import grpc
    import bidirectional.bidirectional_pb2_grpc as bidirectional_pb2_grpc
    
    
    class BidirectionalService(bidirectional_pb2_grpc.BidirectionalServicer):
    
        def GetServerResponse(self, request_iterator, context):
            for message in request_iterator:
                yield message
    
    
    def serve():
        server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
        bidirectional_pb2_grpc.add_BidirectionalServicer_to_server(BidirectionalService(), server)
        server.add_insecure_port('[::]:50051')
        server.start()
        server.wait_for_termination()
    
    
    if __name__ == '__main__':
        serve()

    In the gRPC server file, there is a GetServerResponse() method which takes a stream of `Message` from the client and returns a stream of `Message`, each independent of the others. The serve() function is called from the main function and keeps the server listening at all times.

    We will run the bidirectional_server to start the server:

    python3 bidirectional_server.py

    Implementing the Client

    from __future__ import print_function
    
    import grpc
    import bidirectional.bidirectional_pb2_grpc as bidirectional_pb2_grpc
    import bidirectional.bidirectional_pb2 as bidirectional_pb2
    
    
    def make_message(message):
        return bidirectional_pb2.Message(
            message=message
        )
    
    
    def generate_messages():
        messages = [
            make_message("First message"),
            make_message("Second message"),
            make_message("Third message"),
            make_message("Fourth message"),
            make_message("Fifth message"),
        ]
        for msg in messages:
            print("Hello Server Sending you the %s" % msg.message)
            yield msg
    
    
    def send_message(stub):
        responses = stub.GetServerResponse(generate_messages())
        for response in responses:
            print("Hello from the server received your %s" % response.message)
    
    
    def run():
        with grpc.insecure_channel('localhost:50051') as channel:
            stub = bidirectional_pb2_grpc.BidirectionalStub(channel)
            send_message(stub)
    
    
    if __name__ == '__main__':
        run()

    In the run() function, we have initialized the stub using `stub = bidirectional_pb2_grpc.BidirectionalStub(channel)`.

    We also have a send_message function, to which the stub is passed; it streams multiple messages to the server and receives the results from the server simultaneously.

    This completes the implementation of Bidirectional gRPC service.

    Let’s check the output:-

    Run -> python3 bidirectional_client.py 

    Output:-

    Hello Server Sending you the First message

    Hello Server Sending you the Second message

    Hello Server Sending you the Third message

    Hello Server Sending you the Fourth message

    Hello Server Sending you the Fifth message

    Hello from the server received your First message

    Hello from the server received your Second message

    Hello from the server received your Third message

    Hello from the server received your Fourth message

    Hello from the server received your Fifth message

    For code reference, please visit here.

    Conclusion

    gRPC is an emerging RPC framework that makes communication between microservices smooth and efficient. I believe gRPC is currently confined to inter-microservice communication, but it has many other uses that we will see in the coming years. To know more about modern data communication solutions, check out this blog.