gRPC is an open-source Remote Procedure Call system focusing on high performance.

There exist several gRPC benchmarks, including an official one, yet we still wanted to create our own. Why would we torture ourselves doing such a thing?

The implementation details of most gRPC benchmarks are not very clear. It's hard to objectively judge the performance of two technologies if the implementations are not written in the most optimal and idiomatic way possible and reviewed by experts. We also wanted to compare performance across not only popular languages and their official gRPC libraries, but also less popular languages that are still used by many developers across the world.

Creating a benchmark is not that straightforward. For example, the official benchmarks are expected to be run on a dedicated GKE cluster. We wanted a simple yet viable solution that could be run on most personal computers.

So with those points in mind, we created a completely open-source benchmark where everyone is welcome to contribute and which can be run with a single command, having only Docker as a prerequisite. Each implementation can be improved by the community, so the results can be objective. We managed to gather several domain experts from different technologies – including Java. Their contributions made the entire benchmark more realistic and objective. Occasionally, they even found a bug in the framework implementation! The repository got quite popular on Reddit and Hacker News, and it was also mentioned in a Microsoft blog post. Using the feedback from all those sources, we strived to make the benchmark even better, e.g., by including more statistics, making several aspects of the test suite configurable, and adding a warm-up phase.

The entire benchmark is based on Docker, and it is the only prerequisite. Optimally, the benchmark should be run on a Linux machine to avoid introducing virtual machine uncertainty, which is present on Docker Desktop for Mac and Windows.

For simulating the client side we use containerized ghz, a gRPC benchmarking and load-testing tool written in Go. Thanks to ghz, the client side can be parametrized with the number of connections to use, the number of requests to send concurrently, a CPU usage limit, and the payload size. The request rate can also be optionally limited, and it is possible to define the payload to be sent.

On the server side, we run the containerized service in each programming language/gRPC library. The common ground for those implementations is a simple protobuf contract. The server-side resources, CPU and RAM, can be limited.

Tests are run sequentially, with an optional warm-up phase to compensate for non-optimal startup, e.g., before the JIT kicks in. The stats from docker and ghz are then collected, parsed, and combined by a Ruby script to be presented in a table with the language/library name, requests per second, average latency, and the 90/95/99 percentiles. The process is summarized in the diagram below.

During the benchmark development we bumped into several challenges that molded the final shape of the project. In most programming languages you can do some tricks to improve your program's performance at the cost of readability and/or portability. While those are interesting and would boost the performance of some implementations, we decided to prohibit them. The benchmarked service should be written in an idiomatic way (for the language used) so it can be easily maintained and extended. Just like we write our code at Nexthink!

The tested RPC

The work that is done with each request is a simple unary call – it's basically an echo service. What a real-world service would do is most likely more interesting – like fetching an asset from the database, computing something non-trivial, or acting as an in-memory cache. In a real-world scenario, the service logic will most likely be more resource- and time-consuming. It should be noted that the metrics below represent the gRPC/Proto layer only.
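The simple protobuf contract shared by all implementations could look roughly like the following sketch. The package, service, and message names here are illustrative assumptions, not necessarily the exact ones used in the repository:

```proto
// Hypothetical contract for a unary echo service.
// Names are illustrative; the benchmark repository may use different ones.
syntax = "proto3";

package echo;

// A single unary RPC: the server returns the payload it received.
service EchoService {
  rpc Echo (EchoRequest) returns (EchoReply);
}

message EchoRequest {
  bytes payload = 1;
}

message EchoReply {
  bytes payload = 1;
}
```

Each language implementation generates its server stubs from this one file, which keeps the comparison fair: every server answers the same call with the same wire format.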
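As a rough illustration of the client-side knobs described above, a ghz invocation could look like the sketch below. The proto path, fully-qualified call name, payload, and target address are placeholders (in the benchmark, ghz itself runs inside a container):

```shell
# Hypothetical ghz invocation; paths, names, and address are placeholders.
ghz --insecure \
  --proto ./echo.proto \
  --call echo.EchoService/Echo \
  --connections 5 \
  --concurrency 50 \
  --duration 30s \
  --qps 1000 \
  --cpus 2 \
  --data '{"payload":"AAAA"}' \
  127.0.0.1:50051
```

Here `--concurrency` controls in-flight requests, `--qps` is the optional rate limit, `--cpus` caps the client's CPU usage, and `--data` defines the payload to be sent.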