I'm writing an interface under Linux which gets data from a TCP socket. The user provides a buffer in which the received data is stored. If the provided buffer is too small, how should the excess data be handled?
Your design has a flaw.
If a client provides a buffer that's too small, how do you know how much data to discard? Is there something in the data that tells you when you've reached the end of the message to be discarded? If that's the case, then you need to buffer the input stream in your code so you can detect these boundaries. If your code sees the stream as undifferentiated bytes, then your question doesn't make sense, as your code cannot in principle know when to stop discarding data. With TCP streams, unless there's an embedded protocol that delimits "messages", then it's all-or-nothing up until the connection is closed.
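To make the "you need a boundary to know when to stop discarding" point concrete, here is a minimal sketch for the delimiter case. The function name `recv_delimited` and the byte-at-a-time reading are illustrative only (a real implementation would buffer internally); it fills the caller's buffer with as much of one message as fits, then keeps reading and discarding until the delimiter, so the next call starts cleanly at a message boundary:

```c
#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

/* Read one delimiter-terminated message from fd into buf (up to buflen
 * bytes).  If the message is longer than buflen, the excess is read and
 * discarded so the next call starts at a message boundary.  Returns the
 * total message length (compare against buflen to detect truncation),
 * or -1 on a read error.  Hypothetical helper for illustration;
 * byte-at-a-time reads are slow in practice. */
ssize_t recv_delimited(int fd, char *buf, size_t buflen, char delim)
{
    size_t total = 0;
    char c;
    for (;;) {
        ssize_t n = read(fd, &c, 1);
        if (n < 0)
            return -1;               /* read error */
        if (n == 0 || c == delim)
            return (ssize_t)total;   /* EOF or end of message */
        if (total < buflen)
            buf[total] = c;          /* keep what fits, discard the rest */
        total++;
    }
}
```

A return value larger than `buflen` tells the caller the message was truncated, without ever leaving half a message in the stream.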
There is no knowledge at the TCP level about what constitutes an application-protocol message. There are, however, two common ways to delimit messages in a TCP stream:

- a fixed-size length prefix that announces how many payload bytes follow;
- a delimiter byte or sequence (e.g. `'\n'` in line-based protocols) that marks the end of each message.
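As a sketch of the length-prefix approach, here is a hypothetical `frame_parse` helper that assumes a 4-byte big-endian length header (the header size and byte order are illustrative choices, not anything mandated by TCP). It inspects a receive buffer and reports whether a complete message has arrived yet:

```c
#include <stddef.h>
#include <stdint.h>

/* Returns the total number of bytes the next frame occupies (header +
 * payload) if it is fully present in buf[0..len), or 0 if more data is
 * still needed.  On success, *payload and *payload_len describe the
 * message body inside buf.  Illustrative framing: 4-byte big-endian
 * length prefix. */
size_t frame_parse(const uint8_t *buf, size_t len,
                   const uint8_t **payload, uint32_t *payload_len)
{
    if (len < 4)
        return 0;                        /* header not complete yet */
    uint32_t n = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
                 ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
    if (len < 4 + (size_t)n)
        return 0;                        /* payload not complete yet */
    *payload = buf + 4;
    *payload_len = n;
    return 4 + (size_t)n;                /* bytes to consume from buf */
}
```

The caller appends received bytes to a buffer, calls `frame_parse`, and on a non-zero return consumes that many bytes and repeats; a zero return simply means "wait for more data".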
In this light, a generic TCP reader should provide two reading functions to be universally useful:

- read exactly N bytes (for length-prefixed protocols);
- read up to a given delimiter (for delimiter-based protocols).
A design similar to Tornado's IOStream reading functions (`read_bytes`, `read_until`) would do.
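A rough sketch of such a two-function reader, under stated assumptions: the struct name `reader_t`, the function names, and the fixed 4 KiB internal buffer are all illustrative, not Tornado's actual API. The key idea it demonstrates is the internal buffer that holds received-but-unconsumed bytes, which is what lets both read styles coexist on one stream:

```c
#include <stddef.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical buffered reader in the spirit of Tornado's IOStream. */
typedef struct {
    int    fd;
    char   data[4096];   /* bytes received but not yet consumed */
    size_t len;
} reader_t;

/* Pull more bytes from the fd into the internal buffer. */
static ssize_t reader_fill(reader_t *r)
{
    ssize_t n = read(r->fd, r->data + r->len, sizeof r->data - r->len);
    if (n > 0)
        r->len += (size_t)n;
    return n;
}

/* Block until exactly `want` bytes are available, then copy them out.
 * Returns 0 on success, -1 on error or premature EOF. */
int read_bytes(reader_t *r, char *out, size_t want)
{
    while (r->len < want)
        if (reader_fill(r) <= 0)
            return -1;
    memcpy(out, r->data, want);
    memmove(r->data, r->data + want, r->len - want);
    r->len -= want;
    return 0;
}

/* Block until `delim` appears; copy the message (without the delimiter)
 * into out.  Returns the message length, or -1 on error, EOF, or a
 * message that does not fit in `cap` bytes. */
ssize_t read_until(reader_t *r, char *out, size_t cap, char delim)
{
    for (;;) {
        char *p = memchr(r->data, delim, r->len);
        if (p) {
            size_t msg = (size_t)(p - r->data);
            if (msg > cap)
                return -1;
            memcpy(out, r->data, msg);
            memmove(r->data, p + 1, r->len - msg - 1);
            r->len -= msg + 1;
            return (ssize_t)msg;
        }
        if (r->len == sizeof r->data || reader_fill(r) <= 0)
            return -1;   /* buffer full with no delimiter, or EOF */
    }
}
```

With this shape, a length-prefixed protocol calls `read_bytes` for the header and then `read_bytes` again for the payload, while a line-based protocol calls `read_until` with `'\n'`; the too-small-buffer question from the original post becomes an explicit, per-call error rather than silent data loss.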