How to measure network throughput during runtime

Submitted by 谁说胖子不能爱 on 2019-12-12 10:15:54

Question


I'm wondering how best to measure network throughput during runtime. I'm writing a client/server application (both in Java). The server regularly sends messages (of compressed media data) over a socket to the client. I would like to adjust the compression level used by the server to match the network quality.

So I would like to measure the time a big chunk of data (say 500 KB) takes to completely reach the client, including all delays in between. Tools like iperf don't seem to be an option because they do their measurements by creating their own traffic.

The best idea I could come up with is: somehow determine the clock difference between client and server, include a server send timestamp with each message, and then have the client report back to the server the difference between this timestamp and the time the client received the message. The server can then determine how long a message took to reach the client.
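For reference, a minimal sketch of that idea might look like the following. It assumes a simple java.io DataOutputStream/DataInputStream framing on the existing socket; compressMedia and reportBackToServer are hypothetical placeholders for the application's own code, and the reported value still contains the unknown clock offset between the two machines.

```java
// Server side, per message: prefix the payload with the server's send time.
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
byte[] payload = compressMedia(frame);        // placeholder for the app's encoder
out.writeLong(System.currentTimeMillis());    // server send timestamp
out.writeInt(payload.length);
out.write(payload);
out.flush();

// Client side, per message: read the timestamp and report the observed delay.
DataInputStream in = new DataInputStream(socket.getInputStream());
long serverSendMillis = in.readLong();
int length = in.readInt();
byte[] data = new byte[length];
in.readFully(data);
// Includes the (unknown) clock offset between client and server.
long observedDelayMillis = System.currentTimeMillis() - serverSendMillis;
reportBackToServer(observedDelayMillis);      // placeholder for the reply path
```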

Is there an easier way to do this? Are there any libraries for this?


Answer 1:


A simple solution:

Save a timestamp on the server before you send a defined number of packets.

Then send the packets to the client and have the client report back to the server when it has received the last packet.

Save a new timestamp on the server once the client has answered.

All you need to do now is determine the RTT and subtract RTT/2 from the difference between the two timestamps.

This should get you a fairly accurate measurement.
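A rough sketch of those steps in Java might look like this. The stream parameters and the helpers sendProbe, awaitAck and sendBytes are assumed to be part of the application's own protocol; they are placeholders, not a real API.

```java
// Rough throughput probe following the steps above.
// out/in are the existing socket streams (java.io.DataOutputStream / DataInputStream).
double measureThroughputBytesPerSec(DataOutputStream out, DataInputStream in)
        throws IOException {
    // 1. Estimate the round-trip time with a small ping/pong exchange.
    long pingStart = System.nanoTime();
    sendProbe(out);                      // placeholder: send a few bytes
    awaitAck(in);                        // placeholder: block until the client echoes
    long rttNanos = System.nanoTime() - pingStart;

    // 2. Save a timestamp, send a defined amount of data, and wait for the
    //    client's report that the last packet has arrived.
    final int CHUNK_BYTES = 500 * 1024;  // e.g. 500 KB, as in the question
    long sendStart = System.nanoTime();
    sendBytes(out, CHUNK_BYTES);         // placeholder: write CHUNK_BYTES of data
    awaitAck(in);                        // client answers after the last packet
    long elapsedNanos = System.nanoTime() - sendStart;

    // 3. Subtract RTT/2 (the return trip of the client's answer) from the
    //    elapsed time, then convert to bytes per second.
    long transferNanos = elapsedNanos - rttNanos / 2;
    return CHUNK_BYTES / (transferNanos / 1e9);
}
```

The resulting figure can then feed directly into the server's decision about which compression level to use for subsequent messages.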



Source: https://stackoverflow.com/questions/7861586/how-to-measure-network-throughput-during-runtime
