A dive into the impact of network regression on network services

In recent work, my team worked on a cross-region data replication solution. A simplified abstraction of the workload is as follows:

gRPC service ——(bidirectional gRPC stream)——> data transfer service ——(translate to SQL and write to downstream MySQL)——> downstream MySQL

The gRPC service and the downstream MySQL are located in different AWS regions (for example, one in EU and the other in US East), and the average network latency between the two regions is 70ms. We had the choice of deploying the data transfer service on either the gRPC service side or the downstream MySQL side. After a series of tests we observed that deploying the data transfer service on the downstream (MySQL) side yields much higher throughput than deploying it on the upstream (gRPC service) side. This does not happen by accident; there is a deeper reason worth digging into. This article analyzes the root cause, explores potential solutions, and gives some advice for such scenarios.

Comparing different workloads

Intuitively, the impact of network regression (including latency, packet loss, etc.) differs across services in the above scenario: the gRPC stream shows no regression with 75ms of latency, while the MySQL execution throughput drops a lot. In this section I compare several workloads under network regression. The experiment was conducted on several servers in one IDC, and the network regression was simulated with chaosd. I simulated network latency and packet loss, which are the most common network regressions in the real world. The simulated workloads are as follows:

  1. The first one is a simple sysbench insert workload, writing data to 32 tables in MySQL directly with 32 sysbench threads.

  2. The second one simulates a gRPC bidirectional stream between a client and a server: the client starts N workers, each of which sets up a gRPC stream and receives data from the gRPC server. The client collects the received data and reports throughput every 10 seconds. The benchmark code can be found in grpc-test; a simplified sketch of such a receive worker is shown after this list.

  3. The third one is another MySQL usage scenario, but with elastic workers in contrast to the sysbench workload. Elastic worker means that when writes to MySQL become slow, the program spawns more workers to write to MySQL. Each worker executes simple INSERT statements, and there is no wait condition among workers, so the workers can run concurrently and their transactions in MySQL have no lock contention. The benchmark code can be found in elastic_mysql; note that the maximum number of pending workers in this experiment is worker size * max pending per worker = 20 * 120 = 2400. A simplified sketch of this elastic-worker pattern is also shown after this list.
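
To give a feel for what such a receive worker looks like, here is a minimal Go sketch. It is only an illustration: the package "pb", the DataService/Transfer names, the Payload field, the server address and the worker count of 8 are hypothetical stand-ins; the real benchmark code is in the grpc-test repository linked above.

```go
// Sketch only: "pb", the DataService/Transfer names and the Payload field
// are hypothetical stand-ins for the generated stubs used in grpc-test.
package main

import (
	"context"
	"io"
	"log"
	"sync/atomic"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/grpc-test/pb" // hypothetical generated stubs
)

var received atomic.Int64 // bytes received across all workers

func runWorker(ctx context.Context, conn *grpc.ClientConn) {
	// Each worker owns one bidirectional stream and only reads from it.
	stream, err := pb.NewDataServiceClient(conn).Transfer(ctx) // hypothetical RPC
	if err != nil {
		log.Println("open stream:", err)
		return
	}
	for {
		msg, err := stream.Recv()
		if err == io.EOF {
			return
		}
		if err != nil {
			log.Println("recv:", err)
			return
		}
		received.Add(int64(len(msg.GetPayload())))
	}
}

func main() {
	conn, err := grpc.NewClient("grpc-server:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	for i := 0; i < 8; i++ { // N workers; 8 is an arbitrary number for the sketch
		go runWorker(context.Background(), conn)
	}
	for range time.Tick(10 * time.Second) { // report aggregate throughput every 10s
		log.Printf("throughput: %.2f MB/s", float64(received.Swap(0))/10/1e6)
	}
}
```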

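And here is a minimal Go sketch of the elastic-worker idea: statements are pushed to an unbuffered channel, and whenever no worker is free to take the next statement, a new worker goroutine is spawned, up to the experiment's cap of 20 * 120 = 2400. The DSN, the table name and the exact growth policy are placeholders rather than the actual elastic_mysql implementation.

```go
// Sketch only: the DSN, table name and the growth policy are placeholders;
// the real benchmark is the elastic_mysql repository linked above.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

const maxWorkers = 20 * 120 // 2400, the upper limit used in the experiment

func worker(db *sql.DB, jobs <-chan string) {
	for stmt := range jobs {
		if _, err := db.Exec(stmt); err != nil {
			log.Println("exec failed:", err)
		}
	}
}

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(downstream-mysql:3306)/test") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	db.SetMaxOpenConns(maxWorkers)

	jobs := make(chan string) // unbuffered: a blocked send means no worker is free
	workers := 0
	for i := 0; ; i++ {
		stmt := fmt.Sprintf("INSERT INTO t (v) VALUES (%d)", i) // simple insert, no contention between rows
		select {
		case jobs <- stmt: // some worker was ready to take it
		default:
			// No worker is free right now (they are all waiting on MySQL
			// responses), so grow the pool until the limit is reached.
			if workers < maxWorkers {
				workers++
				go worker(db, jobs)
			}
			jobs <- stmt // hand the statement to the (possibly new) worker
		}
	}
}
```
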
The benchmark results are as follows. Note that the absolute throughput values are not essential; what matters is how the throughput changes when network regression happens.

| workload | normal network | latency +50ms | latency +75ms | latency +100ms | packet loss 5% | packet loss 10% |
| --- | --- | --- | --- | --- | --- | --- |
| sysbench insert | 1898 | 208 | 141 | 102 | 406 | 240 |
| gRPC bi-stream | 17.28M | 17.28M | 15.67M | 13.18M | 16.02M | 12.42M |
| elastic MySQL worker (qps / pending workers) | 3998/128 | 3997/623 | 3816/2256 | 3091/2322 | 3997/219 | 3999/377 |

It is easy to draw some conclusions from the above benchmark results:

  • The impact of network regression (either latency or packet loss) on throughput in the sysbench workload is significant. The reason is that in the sysbench workload the throughput is determined by the thread count (a fixed number configured by --threads when running sysbench) and the throughput of a single thread. Each thread executes transactions serially in a loop until some exit condition is met (ref: internal/sysbench.lua:thread_run). So when responses from MySQL are slowed down by network regression, the worker threads waste a lot of time waiting for SQL execution responses, which leads to a severe throughput regression (a rough back-of-the-envelope model is sketched after this list).

  • The impact of network regression on throughput in the gRPC bi-stream workload is not significant, thanks to throughput optimizations for high-latency connections. gRPC-go implements BDP estimation with a dynamic flow-control window. This feature calculates the current BDP (the Bandwidth-Delay Product, i.e. the bandwidth of a network connection times its round-trip latency, which effectively tells us how many bytes can be “on the wire” at a given moment if the link is fully utilized) and bandwidth samples, and decides whether the flow-control window should grow (algorithm). There is a long issue (grpc-go issue#1043) discussing this topic, which is worth a look.

  • The network regression has no impact on throughput in the elastic worker workload as long as the worker count does not reach the upper limit. Since more workers can be spawned while some pending executions have not yet returned, the SQL execution bandwidth (from the server's point of view) is fully used, with no time wasted waiting for responses to arrive at the client side. Compared with the gRPC scenario, the elastic worker is another kind of dynamic window.

    • Note that we cannot improve performance by increasing the worker size or buffer size infinitely. For example, in the elastic MySQL worker scenario, when the MySQL worker count increases, more contention happens on either the client side (more workers put a heavier load on the runtime) or the server side (more transactions lead to more lock conflicts). This can be treated as a kind of bufferbloat.
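
To make the contrast above a bit more concrete, here is a rough back-of-the-envelope sketch (an illustration, not a measurement; the bandwidth value is an assumed number). With a fixed pool of serial workers, every transaction costs at least one network round trip, so total throughput is bounded by threads / RTT. A dynamic window instead tries to keep roughly bandwidth × RTT bytes (the BDP) in flight, so throughput can stay close to the link bandwidth regardless of latency.

```go
// A rough model, not a measurement: serial workers are latency-bound,
// a BDP-sized window is bandwidth-bound.
package main

import "fmt"

func main() {
	const threads = 32      // fixed sysbench --threads
	const bandwidth = 100e6 // bytes/s, an assumed link bandwidth for illustration

	for _, rttMs := range []float64{50, 75, 100} {
		rtt := rttMs / 1000.0
		// Each serial worker completes at most 1/RTT transactions per second,
		// so the fixed pool is capped at threads/RTT; server execution time
		// and extra round trips only lower this further.
		fmt.Printf("RTT %3.0fms: <= %4.0f tx/s with %d serial threads; ",
			rttMs, threads/rtt, threads)
		// A dynamic window keeps roughly bandwidth*RTT (the BDP) bytes in
		// flight, so the link stays full regardless of latency.
		fmt.Printf("BDP ~= %.2f MB must be in flight to keep the pipe full\n",
			bandwidth*rtt/1e6)
	}
}
```

The measured sysbench numbers sit well below these upper bounds, which is consistent with each transaction needing more than a single round trip plus server-side execution time.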

Potential solutions to increase throughput

Coming back to the scenario mentioned at the beginning of the article: our solution is to put the data transfer service in the same region as the downstream MySQL, but do other solutions exist? They may or may not, depending on the cost of modifying the system. This section discusses two potential solutions: one is to relay the data via a suitable protocol, and the other is to tune the network via a networking tunnel.

The relay service solution

This solution is straightforward: the data transfer service is still deployed in the upstream region, but instead of executing SQLs directly against the downstream MySQL it works as a relay service, i.e. a gRPC server that serves a gRPC stream of translated SQLs. In the downstream region we deploy another consumer service that receives the translated SQLs from the relay service and writes them to the downstream MySQL (a simplified sketch of such a consumer is shown after the list below). This solution has several advantages as well as several disadvantages:

  • Advantages
    • Reduce throughput loss, thanks to gRPC's good performance on high-latency networks.
    • Reduce the network bandwidth used between the upstream and downstream regions, because only the minimal SQL data is transferred.
  • Disadvantages
    • A longer data path introduces extra latency.
    • More components introduce more cost, both in development and in system maintenance.
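
To make the relay idea concrete, below is a minimal Go sketch of the downstream consumer, shown with a server-streaming RPC for simplicity. The generated package relaypb, the Relay/Subscribe names and the Sql field are hypothetical placeholders rather than an existing API, and the relay server in the upstream region is omitted.

```go
// Sketch only: "relaypb", the Relay/Subscribe names and the Sql field are
// hypothetical stand-ins for a real proto definition; the DSN is a placeholder.
package main

import (
	"context"
	"database/sql"
	"io"
	"log"

	_ "github.com/go-sql-driver/mysql"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	relaypb "example.com/relay/pb" // hypothetical generated stubs
)

func main() {
	// The only cross-region hop: a gRPC stream carrying translated SQL text.
	conn, err := grpc.NewClient("relay.upstream-region:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	// The MySQL connection stays inside the downstream region, so every
	// db.Exec round trip below is a low-latency, in-region round trip.
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/test") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}

	stream, err := relaypb.NewRelayClient(conn).Subscribe(context.Background(), &relaypb.SubscribeRequest{}) // hypothetical RPC
	if err != nil {
		log.Fatal(err)
	}
	for {
		msg, err := stream.Recv()
		if err == io.EOF {
			return
		}
		if err != nil {
			log.Fatal(err)
		}
		if _, err := db.Exec(msg.GetSql()); err != nil {
			log.Println("apply failed:", err)
		}
	}
}
```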

Tuning the network with kcptun

kcptun is a tunnel based on KCP, with N:M multiplexing and FEC (forward error correction) for transfer error correction. KCP claims to reduce average latency by 30% to 40% and maximum delay by a factor of three, at the cost of wasting 10% to 20% more bandwidth than TCP. kcptun can be used in some cross-region scenarios to accelerate TCP streams. To verify whether kcptun can help in the above scenario, I tested the sysbench workload again over kcptun. The basic topology is as follows:

sysbench client -> kcptun client -> (network with latency injection) -> kcptun server -> MySQL

| workload | latency 10ms | latency 25ms | latency 75ms | packet loss 5% | packet loss 10% |
| --- | --- | --- | --- | --- | --- |
| MySQL direct write | 869 | 405 | 141 | 259 | 144 |
| MySQL write via kcptun | 855 | 405 | 141 | 270 | 148 |

From the test result we can see that when the network only has added latency, kcptun does not help throughput at all, while when packet loss exists, kcptun increases throughput by 3%-4% in the sysbench insert scenario. Compared with the overall throughput regression, kcptun is not a silver bullet for such MySQL write scenarios.

Advice about system performance

  • The performance of a system should be estimated. It is important to abstract the architecture and estimate the performance indicators based on known knowledge.
  • If the estimation is hard to make precisely, try to write a minimal demo to simulate the workload.
  • The deployment topology is worth thinking about deeply before running the system in a production environment.
  • There are always tradeoffs in software systems; we can add enough metrics to the system and tune performance and resource usage dynamically.

Remaining problems

There still exist many topics that have not been discussed in this article, such as

  • In the real world, when the performance of a network service drops or jitters because of network regression, is there any way to diagnose the network condition quickly, or even to find the problem before it affects the service and adopt some strategy to reduce the impact? For example:
    • In the gRPC scenario, is there any gRPC metric that exposes the flow-control window size or message buffer size and can help monitor the gRPC running status?
    • In the MySQL scenario, how do we measure whether the bandwidth on the client side and the server side is busy or idle?
    • In the real world, network regression is not the decline of a single indicator; multiple factors may act together, and how to evaluate network regression is a big topic in itself. A related article: network quality monitoring in the Linux kernel.
  • This article only talks about the impact of network regression on the MySQL protocol and gRPC streaming (which runs on HTTP/2); many other commonly used protocols, such as HTTP/1.1 and WebSocket, have not been discussed.

Reference