TCP Server high CPU usage

Posted by 徘徊边缘 on 2019-12-24 03:26:31

Question


C# Visual Studio 2013

I'm working on a rough TCP server/client. It works like this: the client sends a message to the server, and the server sends a "response" back to the client. This runs in a loop, since I'm going to use this data transfer for multiplayer in a game. However, I ran a performance test because my TCP server was using a lot of CPU once more than three clients were connected. The performance profiler said the following method was responsible for 96% of the utilization. Can you help me fix this?

private static void ReceiveCallback(IAsyncResult AR)
{
    Socket current = (Socket)AR.AsyncState;
    int received;

    try
    {
        received = current.EndReceive(AR);
    }
    catch (SocketException)
    {
        Console.WriteLine("Client forcefully disconnected");
        current.Close(); // Don't call Shutdown: the socket may already be disposed, and it is disconnected anyway
        _clientSockets.Remove(current);
        return;
    }

    byte[] recBuf = new byte[received];
    Array.Copy(_buffer, recBuf, received);
    string text = Encoding.ASCII.GetString(recBuf);
    Console.WriteLine("Received Text: " + text);

    string msg = "Response!";
    byte[] data = Encoding.ASCII.GetBytes(msg);
    current.Send(data);

    current.BeginReceive(_buffer, 0, _BUFFER_SIZE, SocketFlags.None, ReceiveCallback, current);
}

Just in case, here's the AcceptCallback method, which registers ReceiveCallback via BeginReceive.

private static void AcceptCallback(IAsyncResult AR)
{
    Socket socket;

    try
    {
        socket = _serverSocket.EndAccept(AR);
    }
    catch (ObjectDisposedException) // I cannot seem to avoid this (on exit, when properly closing sockets)
    {
        return;
    }

    _clientSockets.Add(socket);
    socket.BeginReceive(_buffer, 0, _BUFFER_SIZE, SocketFlags.None, ReceiveCallback, socket);
    Console.WriteLine("Client connected...");
    _serverSocket.BeginAccept(AcceptCallback, null);
}

Answer 1:


In the comments you say that your code sends data as fast as the CPU and network allow, but you want to throttle it. You should probably think about what the optimal frequency to send at is, and then send at that frequency. For example:

var delay = TimeSpan.FromMilliseconds(50);
while (true)
{
    await Task.Delay(delay);
    await SendMessageAsync(mySocket, someData);
    await ReceiveReplyAsync(mySocket);
}

Note that I have used await to untangle the callback mess. Once you add timers or delays into the mix, callbacks get unwieldy. You can do it any way you like, though. Alternatively, you can simply use synchronous socket IO on a background thread/task. That is even simpler, and it is the preferred approach as long as there aren't too many threads. Note that MSDN usually demonstrates the APM pattern (Begin/End) with sockets for no good reason.

Note that Thread.Sleep/Task.Delay are perfectly fine to use if you want to wait based on time.
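The SendMessageAsync/ReceiveReplyAsync calls above are placeholders, not part of the asker's code. A minimal sketch of what they might look like (assumed names; the APM Begin/End calls from the question are wrapped into awaitable Tasks with Task.Factory.FromAsync, which also works on the .NET 4.5 that ships with Visual Studio 2013):

```csharp
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

static class SocketMessaging
{
    // Hypothetical helper: send one ASCII message, awaiting the APM send.
    public static Task<int> SendMessageAsync(Socket socket, string message)
    {
        byte[] data = Encoding.ASCII.GetBytes(message);
        return Task.Factory.FromAsync<int>(
            socket.BeginSend(data, 0, data.Length, SocketFlags.None, null, null),
            socket.EndSend);
    }

    // Hypothetical helper: await one receive and decode it, without an
    // intermediate byte[] copy. Note this assumes one message per receive;
    // real TCP code needs message framing, since TCP is a byte stream.
    public static async Task<string> ReceiveReplyAsync(Socket socket)
    {
        byte[] buffer = new byte[1024];
        int received = await Task.Factory.FromAsync<int>(
            socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, null, null),
            socket.EndReceive);
        return Encoding.ASCII.GetString(buffer, 0, received);
    }
}
```

With helpers like these, the throttled loop above compiles as-is inside any async method.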




Answer 2:


Why are you doing this:

byte[] recBuf = new byte[received];
Array.Copy(_buffer, recBuf, received);
string text = Encoding.ASCII.GetString(recBuf);

You have one copy operation that copies the received bytes into recBuf, and then a second one that creates the string. You can avoid the first copy, and your performance will improve. That said, some CPU usage is normal here, because even converting the bytes to a string uses the CPU.
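Concretely, the intermediate recBuf allocation and Array.Copy can be dropped by decoding straight out of the receive buffer, using the _buffer and received variables already present in the question's ReceiveCallback (a sketch, not the asker's code):

```csharp
// Decode directly from the receive buffer: one string allocation,
// no intermediate byte[] copy.
string text = Encoding.ASCII.GetString(_buffer, 0, received);
Console.WriteLine("Received Text: " + text);
```

The (int, int) overload of Encoding.GetString reads only the first `received` bytes, which is exactly what the copy-then-decode version achieved.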



Source: https://stackoverflow.com/questions/30409759/tcp-server-high-cpu-usage
