Tag: server programming

  • How to Implement Server Sent Events Using Python Flask and React

    In a typical request-response cycle, the client sends a request to the server and the server responds to it. But there are use cases where the server needs to send data without being asked, or where the client expects data that can arrive at an arbitrary time. A few mechanisms are available to solve this problem.

    Server Sent Events

    Broadly, we can classify these as client-pull and server-push mechanisms. WebSockets are a bidirectional mechanism in which data is transmitted over a full-duplex TCP connection. Client pull can be done using various mechanisms like –

    1. Manual refresh – the client is refreshed manually.
    2. Long polling – the client sends a request to the server and waits until a response is available; as soon as it receives the response, it sends a new request.
    3. Short polling – the client repeatedly sends requests to the server at short, fixed intervals.
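    To make the pull mechanisms concrete, here is a minimal sketch of short polling in plain Python. The `poll_server` stub and the fake event source are illustrative stand-ins for a real HTTP endpoint, not part of the app built below:

    ```python
    import time

    def short_poll(poll_server, interval=0.01, max_polls=3):
        """Ask the server for new data at a fixed short interval."""
        received = []
        for _ in range(max_polls):
            data = poll_server()  # in a real client this would be an HTTP GET
            if data is not None:
                received.append(data)
            time.sleep(interval)  # fixed short interval between requests
        return received

    # Fake server: returns an event on some polls and nothing on others.
    events = iter(["event-1", None, "event-2"])
    print(short_poll(lambda: next(events, None)))  # ['event-1', 'event-2']
    ```

    Long polling differs only in that the request itself blocks on the server until data is available, so the client issues a new request as soon as the previous one returns.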

    Server-sent events (SSE) are a type of server-push mechanism: the client subscribes to a stream of updates generated by the server, and whenever a new event occurs, a notification is sent to the client.
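    Under the hood, an SSE stream is just a long-lived HTTP response with the `text/event-stream` content type; each event is a block of `event:`/`data:` lines terminated by a blank line, roughly like this (payloads illustrative):

    ```
    HTTP/1.1 200 OK
    Content-Type: text/event-stream

    event: publish
    data: {"message": "first update"}

    event: publish
    data: {"message": "second update"}
    ```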

    Why server-sent events are better than polling:

    • With polling, scaling and orchestrating the backend in real time has to be managed as the user base grows.
    • When mobile devices rapidly switch between WiFi and cellular networks or lose connections and the IP address changes, long polling needs to re-establish the connection.
    • With long polling, we need to manage the message queue and catch up on missed messages.
    • Long polling needs load balancing or fail-over support across multiple servers.

    SSE vs Websockets

    Unlike WebSockets, SSE cannot provide bidirectional client-server communication. Use cases that require such communication include real-time multiplayer games and messaging/chat apps. When there is no need to send data from the client, SSE can be a better option than WebSockets; examples include status updates, news feeds, and other automated data-push mechanisms. The backend implementation also tends to be simpler with SSE than with WebSockets. One caveat: browsers limit the number of concurrently open SSE connections per domain.


    Implementation

    The server-side code can be implemented in any high-level language. Here is sample code for Python Flask SSE. Flask-SSE requires a broker such as Redis to store the messages. We also use Flask-APScheduler to schedule background jobs with Flask.

    First, we need to install and import `flask_sse` and `apscheduler`:

    import datetime

    from flask import Flask, render_template
    from flask_sse import sse
    from apscheduler.schedulers.background import BackgroundScheduler

    Now we need to initialize the Flask app, configure the Redis URL, and register a route (URL prefix) where the client will listen for events.

    app = Flask(__name__)
    app.config["REDIS_URL"] = "redis://localhost"
    app.register_blueprint(sse, url_prefix='/stream')

    To publish data to a stream, we call the `publish` method on `sse` and provide an event type:

    sse.publish({"message": datetime.datetime.now()}, type='publish')
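    Wiring the publish call to the background scheduler might look like the following sketch. It is not runnable on its own: it assumes the `app` configured above and a running Redis server, and the function name and the 10-second interval are illustrative choices, not taken from the post:

    ```python
    def server_side_event():
        # sse.publish needs the Flask application context to reach the app's Redis config.
        with app.app_context():
            sse.publish({"message": str(datetime.datetime.now())}, type='publish')

    scheduler = BackgroundScheduler()
    scheduler.add_job(server_side_event, 'interval', seconds=10)  # interval is illustrative
    scheduler.start()
    ```

    With this in place, every scheduler tick pushes a fresh event to all subscribed clients.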

    On the client, we add an event listener that subscribes to our stream and reads messages:

    var source = new EventSource("{{ url_for('sse.stream') }}");
        source.addEventListener('publish', function(event) {
            var data = JSON.parse(event.data);
            console.log("The server says " + data.message);
        }, false);
        source.addEventListener('error', function(event) {
            console.log("Error: " + event);
            alert("Failed to connect to event stream. Is Redis running?");
        }, false);

    Check out a sample Flask-React-Redis application demo for server-sent events.

    Here are some screenshots of the client –

    Fig: First Event

     Fig: Second Event

    Server logs:

    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 0, 24564))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 14, 30164))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 28, 37840))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 42, 58162))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 31, 56, 46456))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 32, 10, 56276))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 32, 24, 58445))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 32, 38, 57183))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 32, 52, 65886))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 33, 6, 49818))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 33, 20, 22731))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 33, 34, 59084))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 33, 48, 70346))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 34, 2, 58889))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 34, 16, 26020))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 34, 30, 44040))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 34, 44, 61620))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 34, 58, 38699))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 35, 12, 26067))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 35, 26, 71504))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 35, 40, 31429))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 35, 54, 74451))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 36, 8, 63001))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 36, 22, 47671))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 36, 36, 55458))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 36, 50, 68975))
    api_1    | ('Event Scheduled at ', datetime.datetime(2019, 5, 1, 7, 37, 4, 62491))
    api_1    | INFO:apscheduler.executors.default:Job "server_side_event (trigger: interval[0:00:14], next run at: 2019-05-01 07:37:31 UTC)" executed successfully
    api_1    | INFO:apscheduler.executors.default:Running job "server_side_event (trigger: interval[0:00:16], next run at: 2019-05-01 07:37:38 UTC)" (scheduled at 2019-05-01 07:37:22.362874+00:00)
    api_1    | INFO:apscheduler.executors.default:Job "server_side_event (trigger: interval[0:00:16], next run at: 2019-05-01 07:37:38 UTC)" executed successfully
    api_1    | INFO:apscheduler.executors.default:Running job "server_side_event (trigger: interval[0:00:14], next run at: 2019-05-01 07:37:31 UTC)" (scheduled at 2019-05-01 07:37:31.993944+00:00)
    api_1    | INFO:apscheduler.executors.default:Job "server_side_event (trigger: interval[0:00:14], next run at: 2019-05-01 07:37:45 UTC)" executed successfully
    api_1    | INFO:apscheduler.executors.default:Running job "server_side_event (trigger: interval[0:00:16], next run at: 2019-05-01 07:37:54 UTC)" (scheduled at 2019-05-01 07:37:38.362874+00:00)
    api_1    | INFO:apscheduler.executors.default:Job "server_side_event (trigger: interval[0:00:16], next run at: 2019-05-01 07:37:54 UTC)" executed successfully

    Use Cases of Server Sent Events

    Let's look at the use case with an example. Suppose our web app displays a real-time graph. One option is polling, where the client continuously polls the server for new data. The other option is server-sent events, which are asynchronous: the server sends data whenever an update happens.

    Other applications include:

    • Real time stock price analysis system
    • Real time social media feeds
    • Resource monitoring for health, uptime

    Conclusion

    In this blog, we covered how to implement server-sent events using Python Flask and React, and how to use background schedulers alongside them. Together, these can be used to deliver data from the server to the client via server push.

  • Introduction to the Modern Server-side Stack – Golang, Protobuf, and gRPC

    There are some new players in town for server programming, and this time it's all about Google. Golang has been rapidly gaining popularity ever since Google started using it for their own production systems. And since the rise of microservice architecture, people have been focusing on modern data communication solutions like gRPC along with Protobuf. In this post, I will walk you through each of these briefly.

    Golang

    Golang, or Go, is an open-source, general-purpose programming language by Google. It has been gaining popularity recently, for all the good reasons. It may come as a surprise to most people that the language is almost 10 years old and, according to Google, has been production-ready for almost 7 years.

    Golang is designed to be simple, modern, easy to understand, and quick to grasp. The creators designed it so that an average programmer can have a working knowledge of the language after a weekend, and I can attest that they definitely succeeded. Speaking of the creators, they include veterans of Bell Labs who were involved in the early development of Unix and C, so we can be assured that these guys know what they are doing.

    That’s all good but why do we need another language?

    For most use cases, we actually don't. In fact, Go doesn't solve any new problems that haven't been solved by some other language or tool before. But it does try to solve, in an efficient, elegant, and intuitive manner, a specific set of relevant problems that people commonly face. Go's primary focus is the following:

    • First class support for concurrency
    • An elegant, modern language that is very simple to its core
    • Very good performance
    • First hand support for the tools required for modern software development

    I’m going to briefly explain how Go provides all of the above. You can read more about the language and its features in detail from Go’s official website.

    Concurrency

    Concurrency is one of the primary concerns of most server applications, and given modern multicore processors, it should be a primary concern of the language. Go introduces a concept called a ‘goroutine’. A ‘goroutine’ is analogous to a ‘lightweight user-space thread’. In reality it is much more complicated than that, as several goroutines multiplex onto a single OS thread, but that description should give you a general idea. Goroutines are light enough that you can actually spin up a million of them simultaneously, as each starts with a very tiny stack; in fact, that's recommended. Any function/method in Go can be used to spawn a goroutine: you can just write ‘go myAsyncTask()’ to spawn a goroutine from the ‘myAsyncTask’ function. The following is an example:

    // This function performs the given tasks concurrently by spawning a goroutine
    // for each of those tasks.
    
    func performAsyncTasks(tasks []Task) {
      for _, task := range tasks {
        // This will spawn a separate goroutine to carry out this task.
        // This call is non-blocking.
        go task.Execute()
      }
    }

    Yes, it’s that easy and it is meant to be that way as Go is a simple language and you are expected to spawn a goroutine for every independent async task without caring much. Go’s runtime automatically takes care of running the goroutines in parallel if multiple cores are available. But how do these goroutines communicate? The answer is channels.

    A ‘channel’ is another language primitive, meant for communication among goroutines. You can pass almost anything over a channel to another goroutine: a primitive Go type, a struct, or even another channel. A channel is essentially a blocking FIFO queue (buffered or unbuffered). If you want one or more goroutines to wait for a certain condition to be met before continuing, you can implement cooperative blocking of goroutines with the help of channels.
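    As a minimal sketch (the function names here are mine, not from any library): two goroutines each compute a partial sum and hand their results back over an unbuffered channel, with the receives blocking until both are done:

    ```go
    package main

    import "fmt"

    // sum sends the total of nums on the out channel.
    func sum(nums []int, out chan<- int) {
    	total := 0
    	for _, n := range nums {
    		total += n
    	}
    	out <- total // blocks until a receiver is ready (unbuffered channel)
    }

    // sumAll splits the work across two goroutines and combines their results.
    func sumAll(a, b []int) int {
    	results := make(chan int)
    	go sum(a, results)
    	go sum(b, results)
    	// Each receive blocks until one of the goroutines sends,
    	// giving us cooperative synchronization with no locks.
    	return <-results + <-results
    }

    func main() {
    	fmt.Println(sumAll([]int{1, 2, 3}, []int{4, 5, 6})) // 21
    }
    ```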

    These two primitives bring a lot of flexibility and simplicity to writing asynchronous or parallel code. Helper libraries like a goroutine pool can easily be built from these primitives. One basic example is:

    package executor
    
    import (
    	"log"
    	"sync/atomic"
    )
    
    // The Executor struct is the main executor for tasks.
    // 'maxWorkers' represents the maximum number of simultaneous goroutines.
    // 'ActiveWorkers' is the number of goroutines the Executor has active at a given time.
    // 'Tasks' is the channel on which the Executor receives tasks.
    // 'Reports' is the channel on which the Executor publishes each task's report.
    // 'signals' is a channel that can be used to control the Executor. Right now, only the
    // termination signal is supported, which the client sends as '1' on this channel.
    type Executor struct {
    	maxWorkers    int64
    	ActiveWorkers int64
    
    	Tasks   chan Task
    	Reports chan Report
    	signals chan int
    }
    
    // NewExecutor creates a new Executor.
    // 'maxWorkers' tells the maximum number of simultaneous goroutines.
    // 'signals' channel can be used to control the Executor.
    func NewExecutor(maxWorkers int, signals chan int) *Executor {
    	chanSize := 1000
    
    	if maxWorkers > chanSize {
    		chanSize = maxWorkers
    	}
    
    	executor := Executor{
    		maxWorkers: int64(maxWorkers),
    		Tasks:      make(chan Task, chanSize),
    		Reports:    make(chan Report, chanSize),
    		signals:    signals,
    	}
    
    	go executor.launch()
    
    	return &executor
    }
    
    // launch starts the main loop, polling on all the relevant channels and handling the
    // different messages.
    func (executor *Executor) launch() int {
    	reports := make(chan Report, executor.maxWorkers)
    
    	for {
    		select {
    		case signal := <-executor.signals:
    			if executor.handleSignals(signal) == 0 {
    				return 0
    			}
    
    		case r := <-reports:
    			executor.addReport(r)
    
    		default:
    			if executor.ActiveWorkers < executor.maxWorkers && len(executor.Tasks) > 0 {
    				task := <-executor.Tasks
    				atomic.AddInt64(&executor.ActiveWorkers, 1)
    				go executor.launchWorker(task, reports)
    			}
    		}
    	}
    }
    
    // handleSignals is called whenever anything is received on the 'signals' channel.
    // It performs the relevant task according to the received signal(request) and then responds either
    // with 0 or 1 indicating whether the request was respected(0) or rejected(1).
    func (executor *Executor) handleSignals(signal int) int {
    	if signal == 1 {
    		log.Println("Received termination request...")
    
    		if executor.Inactive() {
    			log.Println("No active workers, exiting...")
    			executor.signals <- 0
    			return 0
    		}
    
    		executor.signals <- 1
    		log.Println("Some tasks are still active...")
    	}
    
    	return 1
    }
    
    // launchWorker is called whenever a new Task is received and the Executor is allowed to
    // spawn a new worker.
    // Each worker is launched on a new goroutine. It performs the given task and publishes the report on
    // the Executor's internal reports channel.
    func (executor *Executor) launchWorker(task Task, reports chan<- Report) {
    	report := task.Execute()
    
    	if len(reports) < cap(reports) {
    		reports <- report
    	} else {
    		log.Println("Executor's report channel is full...")
    	}
    
    	atomic.AddInt64(&executor.ActiveWorkers, -1)
    }
    
    // AddTask is used to submit a new task to the Executor in a non-blocking way. The client can
    // submit a new task on the Executor's Tasks channel directly, but that will block if the
    // channel is full.
    // Note that this method does not add the given task if the Tasks channel is full; it is up
    // to the client to try again later.
    func (executor *Executor) AddTask(task Task) bool {
    	if len(executor.Tasks) == cap(executor.Tasks) {
    		return false
    	}
    
    	executor.Tasks <- task
    	return true
    }
    
    // addReport is used by the Executor to publish reports in a non-blocking way. If the client
    // is not reading the Reports channel, or is slower than the Executor publishing reports, the
    // Executor's Reports channel will fill up. In that case, this method does not block and the
    // report is not added.
    func (executor *Executor) addReport(report Report) bool {
    	if len(executor.Reports) == cap(executor.Reports) {
    		return false
    	}
    
    	executor.Reports <- report
    	return true
    }
    
    // Inactive checks whether the Executor is idle: no pending tasks, no active workers, and no
    // reports left to publish.
    func (executor *Executor) Inactive() bool {
    	return executor.ActiveWorkers == 0 && len(executor.Tasks) == 0 && len(executor.Reports) == 0
    }

    Simple Language

    Unlike a lot of other modern languages, Golang doesn't have many features. In fact, a compelling case can be made that the language is too restrictive in its feature set, and that's intended. It is not designed around a programming paradigm like Java, or to support multiple paradigms like Python. It's just bare-bones structured programming: only the essential features are included in the language, and not a single thing more.

    After looking at the language, you may feel that it doesn't follow any particular philosophy or direction; every feature seems to be included to solve a specific problem and nothing more. For example, it has methods and interfaces but no classes; the compiler produces a statically linked binary, yet the language has a garbage collector; it has strict static typing but doesn't support generics. The language has a thin runtime but doesn't support exceptions.

    The main idea is that a developer should spend the least possible time expressing an idea or algorithm as code, without agonizing over “what's the best way to do this in language X?”, and the result should be easy for others to understand. It's still not perfect; it does feel limiting from time to time, and some essential features like generics and exceptions are being considered for ‘Go 2’.

    Performance

    Single-threaded execution performance is NOT a good metric to judge a language by, especially when the language is focused on concurrency and parallelism. Still, Golang sports impressive benchmark numbers, beaten only by hardcore systems programming languages like C, C++, and Rust, and it is still improving. The performance is actually very impressive considering it's a garbage-collected language, and it is good enough for almost every use case.

    (Image Source: Medium)

    Developer Tooling

    The adoption of a new tool or language depends directly on its developer experience, and the adoption of Go speaks for its tooling. Here we see the same ideas: the tooling is very minimal but sufficient. It is all achieved with the `go` command and its subcommands, entirely from the command line.

    There is no separate package manager for the language like pip or npm, but you can get any community package by just doing

    go get github.com/velotiotech/WebCrawler/executor


    Yes, it works. You can pull packages directly from GitHub or anywhere else; they are just source files.

    But what about package.json..? I don't see any equivalent for `go get`. Because there isn't: you don't need to declare all your dependencies in a single file. You can directly use:

    import "github.com/xlab/pocketsphinx-go/sphinx"

    in your source file itself, and when you run `go build`, it will automatically `go get` it for you. You can see the full source file here:

    package main
    
    import (
    	"encoding/binary"
    	"bytes"
    	"log"
    	"os/exec"
    
    	"github.com/xlab/pocketsphinx-go/sphinx"
    	pulse "github.com/mesilliac/pulse-simple" // pulse-simple
    )
    
    var buffSize int
    
    func readInt16(buf []byte) (val int16) {
    	binary.Read(bytes.NewBuffer(buf), binary.LittleEndian, &val)
    	return
    }
    
    func createStream() *pulse.Stream {
    	ss := pulse.SampleSpec{pulse.SAMPLE_S16LE, 16000, 1}
    	buffSize = int(ss.UsecToBytes(1 * 1000000))
    	stream, err := pulse.Capture("pulse-simple test", "capture test", &ss)
    	if err != nil {
    		log.Panicln(err)
    	}
    	return stream
    }
    
    func listen(decoder *sphinx.Decoder) {
    	stream := createStream()
    	defer stream.Free()
    	defer decoder.Destroy()
    	buf := make([]byte, buffSize)
    	var bits []int16
    
    	log.Println("Listening...")
    
    	for {
    		_, err := stream.Read(buf)
    		if err != nil {
    			log.Panicln(err)
    		}
    
    		for i := 0; i < buffSize; i += 2 {
    			bits = append(bits, readInt16(buf[i:i+2]))
    		}
    
    		process(decoder, bits)
    		bits = nil
    	}
    }
    
    func process(dec *sphinx.Decoder, bits []int16) {
    	if !dec.StartUtt() {
    		panic("Decoder failed to start Utt")
    	}
    	
    	dec.ProcessRaw(bits, false, false)
    	dec.EndUtt()
    	hyp, score := dec.Hypothesis()
    	
    	if score > -2500 {
    		log.Println("Predicted:", hyp, score)
    		handleAction(hyp)
    	}
    }
    
    func executeCommand(commands ...string) {
    	cmd := exec.Command(commands[0], commands[1:]...)
    	cmd.Run()
    }
    
    func handleAction(hyp string) {
    	switch hyp {
    	case "SLEEP":
    		executeCommand("loginctl", "lock-session")
    	case "WAKE UP":
    		executeCommand("loginctl", "unlock-session")
    	case "POWEROFF":
    		executeCommand("poweroff")
    	}
    }
    
    func main() {
    	cfg := sphinx.NewConfig(
    		sphinx.HMMDirOption("/usr/local/share/pocketsphinx/model/en-us/en-us"),
    		sphinx.DictFileOption("6129.dic"),
    		sphinx.LMFileOption("6129.lm"),
    		sphinx.LogFileOption("commander.log"),
    	)
    	
    	dec, err := sphinx.NewDecoder(cfg)
    	if err != nil {
    		panic(err)
    	}
    
    	listen(dec)
    }

    This binds the dependency declaration to the source itself.

    As you can see by now, the tooling is simple, minimal, and yet sufficient and elegant. There is first-hand support for unit tests and benchmarks, with profiling flame graphs too. Just like the feature set, the tooling has its downsides: for example, `go get` doesn't support versions, and you are locked to the import URL used in your source file. The ecosystem is evolving, and other tools have emerged for dependency management.

    Golang was originally designed to solve the problems Google had with their massive codebases and their imperative need for efficient concurrent applications. It makes it very easy to write applications and libraries that utilize the multicore nature of modern processors, and it never gets in the developer's way. It's a simple, modern language that never tries to become anything more than that.

    Protobuf (Protocol Buffers)

    Protobuf, or Protocol Buffers, is a binary serialization format by Google, used to serialize structured data. A communication format, kind of like JSON? Yes. It's more than 10 years old, and Google has been using it for a while now.

    But don’t we have JSON and it’s so ubiquitous…

    Just like Golang, Protobuf doesn't really solve anything new; it just solves existing problems more efficiently and in a modern way. Unlike Golang, it is not necessarily more elegant than the existing solutions. Here are the focus points of Protobuf:

    • It's a binary format, unlike text-based JSON and XML, and hence vastly more space-efficient.
    • First-hand, sophisticated support for schemas.
    • First-hand support for generating parsing and consumer code in various languages.

    Binary format and speed

    So is Protobuf really that fast? The short answer is yes. According to the Google Developers documentation, serialized messages are 3 to 10 times smaller and 20 to 100 times faster than XML. This is no surprise: since it is a binary format, the serialized data is not human-readable.

    (Image Source: Beating JSON performance with Protobuf)

    Protobuf takes a more planned approach. You define `.proto` files, which are schema files of a sort but much more powerful: you define how you want your messages to be structured, which fields are optional or required, their data types, and so on. The Protobuf compiler then generates the data access classes for you, which you can use in your business logic to facilitate communication.

    Looking at a `.proto` file related to a service will also give you a very clear idea of the specifics of the communication and the features that are exposed. A typical .proto file looks like this:

    message Person {
      required string name = 1;
      required int32 id = 2;
      optional string email = 3;
    
      enum PhoneType {
        MOBILE = 0;
        HOME = 1;
        WORK = 2;
      }
    
      message PhoneNumber {
        required string number = 1;
        optional PhoneType type = 2 [default = HOME];
      }
    
      repeated PhoneNumber phone = 4;
    }
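    With the Go bindings generated from a schema like this, usage looks roughly like the sketch below. The import path for the generated package (`pb`) is illustrative, the field helpers (`proto.String`, `proto.Int32`) reflect proto2's pointer-typed optional/required fields, and `proto.Marshal`/`proto.Unmarshal` come from the Go Protobuf runtime:

    ```go
    import (
    	"log"

    	"github.com/golang/protobuf/proto"
    	pb "example.com/addressbook" // generated from the .proto above; path is illustrative
    )

    func roundTrip() {
    	p := &pb.Person{
    		Name:  proto.String("Alice"), // proto2 fields are pointers, hence the helpers
    		Id:    proto.Int32(1234),
    		Email: proto.String("alice@example.com"),
    	}

    	// Serialize to the compact binary wire format...
    	data, err := proto.Marshal(p)
    	if err != nil {
    		log.Fatalln("marshaling error:", err)
    	}

    	// ...and parse it back on the other side.
    	decoded := &pb.Person{}
    	if err := proto.Unmarshal(data, decoded); err != nil {
    		log.Fatalln("unmarshaling error:", err)
    	}
    	log.Println(decoded.GetName())
    }
    ```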

    Fun fact: Jon Skeet, the king of Stack Overflow, is one of the main contributors to the project.

    gRPC

    gRPC, as you guessed, is a modern RPC (Remote Procedure Call) framework. It is a batteries-included framework with built-in support for load balancing, tracing, health checking, and authentication. It was open-sourced by Google in 2015 and has been gaining popularity ever since.

    An RPC framework…? What about REST…?

    SOAP with WSDL was long used for communication between different systems in a service-oriented architecture. Back then, contracts were strictly defined, and systems were big monoliths exposing a large number of such interfaces.

    Then came the concept of ‘browsing’, where the server and client don't need to be tightly coupled: a client should be able to browse a service's offerings even if the two were coded independently. If the client asks for information about a book, the service may also offer a list of related books so the client can keep browsing. The REST paradigm was essential to this, as it lets the server and client communicate freely, without strict restrictions, using a few primitive verbs.

    As you can see, the service behaves like a monolithic system: along with what was actually requested, it does any number of other things to provide the client with the intended ‘browsing’ experience. But this is not always the use case, is it?

    Enter the Microservices

    There are many reasons to adopt a microservice architecture, the most prominent being that monolithic systems are very hard to scale. When designing a big system with a microservices architecture, each business or technical requirement is intended to be carried out as a cooperative composition of several primitive ‘micro’ services.

    These services don’t need to be comprehensive in their responses. They should perform specific duties with expected responses. Ideally, they should behave like pure functions for seamless composability.

    Using REST as the communication paradigm for such services doesn't buy us much. Exposing a REST API does give a service a lot of expressive capability, but if that expressive power is neither required nor intended, we can use a paradigm that focuses on other factors.

    gRPC intends to improve upon the following technical aspects over traditional HTTP requests:

    • HTTP/2 by default, with all its goodies.
    • Protobuf as the wire format, since machines are doing the talking.
    • Dedicated support for streaming calls, thanks to HTTP/2.
    • Pluggable auth, tracing, load balancing, and health checking, because you always need these.

    Since it's an RPC framework, we again have concepts like service definitions and an interface description language, which may feel alien to people who weren't around before REST. This time, though, it feels a lot less clumsy, because gRPC uses Protobuf for both.

    Protobuf is designed in such a way that it can be used as a communication format as well as a protocol specification tool without introducing anything new. A typical gRPC service definition looks like this:

    service HelloService {
      rpc SayHello (HelloRequest) returns (HelloResponse);
    }
    
    message HelloRequest {
      string greeting = 1;
    }
    
    message HelloResponse {
      string reply = 1;
    }

    You just write a `.proto` file for your service describing the interface: its name, what it expects, and what it returns, as Protobuf messages. The Protobuf compiler then generates both the client- and server-side code: clients can call the methods directly, and the server side implements these APIs to fill in the business logic.
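    For the service above, a Go server implementation might look roughly like the following sketch. It assumes generated code imported as `pb` (the import path is illustrative) and follows the usual grpc-go conventions, where the generated register function and message field names derive from the `.proto` definitions:

    ```go
    package main

    import (
    	"context"
    	"log"
    	"net"

    	"google.golang.org/grpc"
    	pb "example.com/hello/proto" // generated code; path is illustrative
    )

    type helloServer struct{}

    // SayHello implements the HelloService API generated from the .proto file.
    func (s *helloServer) SayHello(ctx context.Context, req *pb.HelloRequest) (*pb.HelloResponse, error) {
    	return &pb.HelloResponse{Reply: "Hello, " + req.Greeting}, nil
    }

    func main() {
    	lis, err := net.Listen("tcp", ":50051")
    	if err != nil {
    		log.Fatalf("failed to listen: %v", err)
    	}
    	server := grpc.NewServer()
    	pb.RegisterHelloServiceServer(server, &helloServer{})
    	server.Serve(lis)
    }
    ```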

    Conclusion

    Golang, along with gRPC and Protobuf, is an emerging stack for modern server programming. Golang makes writing concurrent/parallel applications simple, while gRPC with Protobuf enables efficient communication with a pleasant developer experience.