Remote Procedure Call & gRPC

Author: Ahmed Sedik

Remote Procedure Call (RPC)

RPC is a software communication protocol commonly used in distributed computing: it allows a program to invoke subroutines that run on other devices, thus enabling communication between processes.

RPC abstracts remote service calls, and HTTP/2 is one of the transport protocols that optimizes how these calls occur. By using HTTP/2 in frameworks like gRPC, RPC calls can be made faster and more efficient.

With RPC, developers can write code as if the subroutines were local. Whether a call is local or remote, the calling code looks fundamentally the same.
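
As a rough illustration, here is a minimal Go client sketch based on the stock grpc-go "helloworld" example (the Greeter service, SayHello method, and server address come from that example, not from this article). The remote call reads like a local method call on the generated stub:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

func main() {
	// Open a channel (connection) to the remote server.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("did not connect: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// The generated client stub makes the remote procedure look like a local call.
	client := pb.NewGreeterClient(conn)
	reply, err := client.SayHello(ctx, &pb.HelloRequest{Name: "world"})
	if err != nil {
		// Unlike a true local call, network problems surface here as errors.
		log.Fatalf("SayHello failed: %v", err)
	}
	log.Printf("Greeting: %s", reply.GetMessage())
}
```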

💡 Since RPC implements a remote request-response protocol, network issues can cause remote procedure calls to fail!

HTTP/2

Although RPC and HTTP/2 belong to different layers, RPC frameworks benefit from using HTTP/2 to enable faster and more efficient remote communication.

HTTP/2, the second version of the HTTP protocol, was designed to overcome the following limitations of HTTP/1:

Limitations of HTTP/1

  • An HTTP/1 connection can only handle one request and response at a time, requiring multiple TCP connections to request multiple resources. This increases overhead and reduces performance.
  • When the first request on a connection is delayed, subsequent requests are also delayed (head-of-line blocking), negatively affecting web page load times.
  • In HTTP/1, headers are retransmitted with every request, often containing duplicate or unnecessary information, which wastes bandwidth.

Key improvements in HTTP/2 that address these limitations include:

  • The use of a binary protocol to improve transmission efficiency and reduce the likelihood of errors.
  • HTTP/2 can handle multiple requests and responses simultaneously over a single TCP connection (multiplexing), reducing network latency and improving page load times.
  • Server push: the server can send resources to the client before they are requested, improving performance by delivering them sooner (sketched after this list).
  • HTTP/2 uses HPACK compression to reduce header overhead significantly.
  • It allows for setting dependencies and priorities between resources, ensuring that critical resources are loaded first.
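
As an illustration of server push, Go's standard net/http server exposes it through the http.Pusher interface once a connection is negotiated as HTTP/2 over TLS. A minimal sketch (certificate paths and resource names are placeholders; major browsers have since deprecated server push, so treat this purely as a demonstration of the HTTP/2 feature):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/static/app.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		fmt.Fprint(w, "body { font-family: sans-serif; }")
	})

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// On an HTTP/2 connection, the ResponseWriter also implements http.Pusher.
		if pusher, ok := w.(http.Pusher); ok {
			// Push the stylesheet before the browser even asks for it.
			if err := pusher.Push("/static/app.css", nil); err != nil {
				log.Printf("push failed: %v", err)
			}
		}
		fmt.Fprint(w, `<html><head><link rel="stylesheet" href="/static/app.css"></head><body>hello</body></html>`)
	})

	// net/http negotiates HTTP/2 automatically when serving over TLS.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```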

gRPC Structure

The structure of gRPC can be divided into three layers: the application layer, the framework layer, and the transport layer. Each layer has specific roles and responsibilities, and this layered architecture, combined with HTTP/2-based communication, maximizes gRPC’s efficiency and performance.

Application Layer

  • Service Definition: Developers use Protocol Buffers (protobuf) to define service interfaces and message formats in a .proto file. This file serves as the contract between client and server and provides the foundation for implementing the application logic (see the .proto sketch after this list).

  • Client and Server Implementation: Using the automatically generated client and server stubs, developers implement the service logic. Clients can call the server’s methods as if they were local, while the server handles the requests.
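
A minimal .proto sketch of such a contract (the package, service, and message names below are illustrative, mirroring the common "greeter" example):

```protobuf
syntax = "proto3";

package greeter.v1;

// Message formats exchanged between client and server.
message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

// Service interface: both the client stub and the server skeleton
// are generated from this definition.
service Greeter {
  rpc SayHello(HelloRequest) returns (HelloReply);
}
```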

Framework Layer

  • Serialization and Deserialization: gRPC uses protobuf to serialize and deserialize messages, ensuring efficient data transmission; the framework handles this automatically (a manual sketch follows this list).

  • Code Generation: The gRPC framework automatically generates client and server stubs based on the service definitions in the .proto file. It supports multiple programming languages, reducing language barriers.

  • Authentication and Security: gRPC supports secure connections via TLS/SSL and provides authentication mechanisms like OAuth2, managed at the framework level.
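
Although gRPC performs this step for you, the same serialization can be done by hand with the Go protobuf runtime. A small sketch, reusing the HelloRequest type generated for the grpc-go helloworld example:

```go
package main

import (
	"log"

	pb "google.golang.org/grpc/examples/helloworld/helloworld"
	"google.golang.org/protobuf/proto"
)

func main() {
	// Serialize (marshal) a message into a compact binary byte slice.
	original := &pb.HelloRequest{Name: "world"}
	data, err := proto.Marshal(original)
	if err != nil {
		log.Fatalf("marshal failed: %v", err)
	}
	log.Printf("serialized into %d bytes", len(data))

	// Deserialize (unmarshal) the bytes back into a typed message.
	decoded := &pb.HelloRequest{}
	if err := proto.Unmarshal(data, decoded); err != nil {
		log.Fatalf("unmarshal failed: %v", err)
	}
	log.Printf("round-trip name: %q", decoded.GetName())
}
```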

Transport Layer

  • HTTP/2-Based Communication: gRPC uses HTTP/2 at the transport layer, leveraging features such as multiplexing, server push, stream prioritization, and header compression to improve communication efficiency and performance.

  • Bidirectional Streaming: With HTTP/2’s streaming capabilities, gRPC supports bidirectional streaming between the client and server, which is useful in scenarios like real-time data transmission.
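
A hedged, client-side sketch of bidirectional streaming: the pb package, ChatServiceClient, Chat, and ChatMessage are hypothetical names for a stub generated from a definition like `rpc Chat(stream ChatMessage) returns (stream ChatMessage);`, but the Send/Recv/CloseSend pattern shown is the standard one for grpc-go streams.

```go
package main

import (
	"context"
	"io"
	"log"

	// Hypothetical generated package providing ChatServiceClient and ChatMessage.
	pb "example.com/chat/v1"
)

func chat(ctx context.Context, client pb.ChatServiceClient) error {
	// One HTTP/2 stream carries messages in both directions.
	stream, err := client.Chat(ctx)
	if err != nil {
		return err
	}

	// Receive server messages concurrently with sending.
	done := make(chan error, 1)
	go func() {
		for {
			msg, err := stream.Recv()
			if err == io.EOF {
				done <- nil
				return
			}
			if err != nil {
				done <- err
				return
			}
			log.Printf("server says: %s", msg.GetText())
		}
	}()

	// Send a few messages, then close our half of the stream.
	for _, text := range []string{"hello", "how are you?"} {
		if err := stream.Send(&pb.ChatMessage{Text: text}); err != nil {
			return err
		}
	}
	if err := stream.CloseSend(); err != nil {
		return err
	}
	return <-done
}
```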

Stub

The stub is a core concept of RPC: it is the layer that marshals (packs) parameter objects into messages and unmarshals (unpacks) messages back into objects. It is divided into the client stub and the server stub.

ProtoBuf

Protobuf (Protocol Buffers) is an open-source, cross-platform data format developed by Google. It serializes structured data for communication between programs over a network or for data storage. In the Interchain stack, developers use Protobuf as the data serialization method for defining message formats.

✏️ From Cosmos SDK v0.40 onwards, the encoding format for chain state and transaction data was switched from Amino to Protobuf, which offers better encoding/decoding performance. Protobuf's automatic generation of gRPC service definitions also helped promote the adoption of gRPC.

gRPC Channel

gRPC opens multiple subchannels under a channel for communication. Reusing channels reduces communication costs, because each new connection has to repeat TCP/TLS setup and HTTP/2 negotiation.
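
A minimal sketch of channel reuse in Go (again borrowing the grpc-go helloworld stubs): create the ClientConn once and share it across calls and stubs, instead of dialing per request.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

func main() {
	// Create the channel once; it manages the underlying HTTP/2 connection(s).
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()

	// Every stub and every call below reuses the same channel,
	// avoiding repeated connection setup.
	greeter := pb.NewGreeterClient(conn)
	for i := 0; i < 3; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), time.Second)
		reply, err := greeter.SayHello(ctx, &pb.HelloRequest{Name: fmt.Sprintf("call-%d", i)})
		cancel()
		if err != nil {
			log.Printf("call %d failed: %v", i, err)
			continue
		}
		log.Printf("call %d: %s", i, reply.GetMessage())
	}
}
```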

How RPC Works

Typically, when a remote procedure call is invoked, the procedure parameters are sent over the network to the environment where the procedure executes. Once it completes, the result is sent back to the calling environment and execution resumes there, just like a regular local procedure call.

The steps of an RPC request are as follows:

1. The client calls the client stub, which is a piece of code that transforms the parameters passed between the client and the server during RPC.

A stub is a small program routine that allows a remote system's program to be used as if it were local. It unifies the interface and abstracts the method call between the client and the server, allowing the client to call remote methods without worrying about network protocols, serialization, or deserialization details.

2. The client stub packs the procedure parameters into a message.

The process of converting or packaging data into a form that can be processed by the remote server is called Marshaling. Marshaled data is serialized into a byte stream for transmission over the network.

3. The client stub makes a system call to send the message.

4. The client’s local operating system (OS) sends the message from the client to the server through the transport layer.

5. The server OS passes the received packet to the server stub.

6. The server stub unpacks the procedure parameters included in the message, a process called unmarshaling.

7. The server stub calls the server procedure, and the procedure is executed.

8. Once the procedure is completed, the output is returned to the server stub, which packs the return value into a message.

9. The message is transmitted through the transport layer to the client stub.

10. The client stub unmarshals the return parameters and returns them to the original calling client.
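
The ten steps above can be seen end to end with Go's standard-library net/rpc package (a plain RPC framework rather than gRPC); a minimal, self-contained sketch:

```go
package main

import (
	"log"
	"net"
	"net/rpc"
)

// Args is the parameter object that the client stub marshals into a message.
type Args struct {
	A, B int
}

// Arith is the server-side procedure exposed over RPC.
type Arith struct{}

func (*Arith) Multiply(args *Args, reply *int) error {
	*reply = args.A * args.B
	return nil
}

func main() {
	// Server side: register the procedure and serve connections (steps 5-9).
	srv := rpc.NewServer()
	if err := srv.Register(new(Arith)); err != nil {
		log.Fatal(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go srv.Accept(ln)

	// Client side: the stub marshals Args, sends the message,
	// and unmarshals the reply (steps 1-4 and 10).
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	var product int
	if err := client.Call("Arith.Multiply", &Args{A: 7, B: 8}, &product); err != nil {
		log.Fatal(err)
	}
	log.Printf("7 * 8 = %d", product)
}
```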

Advantages

1. MSA: gRPC is suitable for most architectures, but it is particularly well suited to Microservice Architecture (MSA). It improves performance by reducing API call overhead between services and makes cross-cutting concerns such as security, API gateways, and logging easier to handle.

2. High Productivity: Once the Protocol Buffers IDL is defined, the source code for services and messages is generated automatically, making data exchange easy.

3. Speed and Efficiency: gRPC supports bidirectional streaming between client and server over HTTP/2-based communication, and its serialized Protocol Buffers byte streams make it lighter and faster than JSON-based communication.

4. Language Neutrality: gRPC supports a wide range of programming languages, making it easier for systems written in different languages to communicate.

Disadvantages

1. Limited Browser Support: Although gRPC uses HTTP/2, it cannot be used directly from web browsers; gRPC-Web provides a partial solution.

2. Overhead: Although Protocol Buffers is an efficient serialization format, it may carry more overhead than a simple JSON API for small payloads.

