I read some posts and checked the Linux kernel code, e.g. inet_listen() -> inet_csk_listen_start(), and it seems that the backlog argument of listen() limits both the SYN (request) queue and the accept queue.
For the 4.3 kernel you specified, it's something like:
tcp_v4_do_rcv()->tcp_rcv_state_process()->tcp_v4_conn_request()->tcp_conn_request()->inet_csk_reqsk_queue_is_full()
Here we can see the most important details about the two queues:
/* TW buckets are converted to open requests without
 * limitations, they conserve resources and peer is
 * evidently real one.
 */
if ((sysctl_tcp_syncookies == 2 ||
     inet_csk_reqsk_queue_is_full(sk)) && !isn) {
    want_cookie = tcp_syn_flood_action(sk, skb, rsk_ops->slab_name);
    if (!want_cookie)
        goto drop;
}

/* Accept backlog is full. If we have already queued enough
 * of warm entries in syn queue, drop request. It is better than
 * clogging syn queue with openreqs with exponentially increasing
 * timeout.
 */
if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1) {
    NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
    goto drop;
}
Pay attention to inet_csk_reqsk_queue_is_full():
static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk)
{
    return inet_csk_reqsk_queue_len(sk) >= sk->sk_max_ack_backlog;
}
Finally, it compares the current length of the request (SYN) queue, kept in icsk_accept_queue, against sk_max_ack_backlog, which was previously set by inet_csk_listen_start(). So yes, backlog affects the incoming (SYN) queue in this case.
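It is also worth noting where sk_max_ack_backlog comes from: the listen() syscall first clamps the user-supplied backlog to net.core.somaxconn before the protocol handler stores it. A rough sketch, paraphrased from memory from net/socket.c (sys_listen) and net/ipv4/af_inet.c (inet_listen) in 4.3, so treat it as approximate rather than verbatim source:

/* net/socket.c, sys_listen(): clamp backlog to net.core.somaxconn */
somaxconn = sock_net(sock->sk)->core.sysctl_somaxconn;
if ((unsigned int)backlog > somaxconn)
    backlog = somaxconn;

/* net/ipv4/af_inet.c, inet_listen(): store the clamped value */
sk->sk_max_ack_backlog = backlog;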
You can see that both sk_acceptq_is_full() and inet_csk_reqsk_queue_is_full() compare against the same socket field, sk_max_ack_backlog, which is set from the backlog value passed to listen():
static inline bool sk_acceptq_is_full(const struct sock *sk)
{
    return sk->sk_ack_backlog > sk->sk_max_ack_backlog;
}
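To observe the accept-queue limit from userspace, here is a minimal sketch (my own example, not kernel code; port 7070 is an arbitrary choice): a listener that never calls accept(), so established connections pile up in the accept queue until it is full.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(7070);

    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("socket/bind");
        return 1;
    }

    /* backlog = 2: after clamping to net.core.somaxconn this becomes
     * sk_max_ack_backlog, the limit used by both checks above. */
    if (listen(fd, 2) < 0) {
        perror("listen");
        return 1;
    }

    /* Never accept(). Because sk_acceptq_is_full() uses '>', the accept
     * queue can hold roughly backlog + 1 established connections; further
     * SYNs are then subject to the drop logic in tcp_conn_request(). */
    pause();
    return 0;
}

Run it, connect repeatedly (e.g. with nc 127.0.0.1 7070), and watch ss -ltn 'sport = :7070': the Recv-Q column shows the current accept queue length against the Send-Q limit.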