epoll

about epoll_ctl()

Submitted by 倾然丶 夕夏残阳落幕 at 2019-11-30 19:00:17
When using epoll_ctl(), I found that the third parameter "fd" is another file descriptor besides the epoll file descriptor "epfd". And I saw an example like this: event.data.fd = sfd; /* sfd is a listening fd */ event.events = EPOLLIN | EPOLLET; s = epoll_ctl (efd, EPOLL_CTL_ADD, sfd, &event); As far as I can see, the file descriptor in event.data.fd is the same as the third parameter of epoll_ctl, so why does the descriptor need to be passed twice? Is there any difference? Answer: Actually you don't have to set event.data.fd. It's a union; you can set other members instead. When epoll_wait returns, you get back the event.data associated with the descriptor that became ready…

epoll in detail

Submitted by 流过昼夜 at 2019-11-30 18:54:07
epoll --- http://blog.chinaunix.net/uid/28541347/sid-193117-list-1.html Application buffer design: https://blog.csdn.net/daaikuaichuan/article/details/88814044 Source: https://www.cnblogs.com/bwbfight/p/11635830.html

non blocking tcp connect with epoll

Submitted by 北城以北 at 2019-11-30 15:21:41
My Linux application performs a non-blocking TCP connect syscall and then uses epoll_wait to detect completion of the three-way handshake. Sometimes epoll_wait returns with both POLLOUT and POLLERR events set for the same socket descriptor. I would like to understand what's going on at the TCP level. I'm not able to reproduce it on demand. My guess is that between two calls to epoll_wait inside my event loop we had a SYN+ACK/ACK/FIN sequence, but again I'm not able to reproduce it. Answer: It is likely for this to happen if the connect has failed - for example with "connection timed out" (for sockets doing a non…

Summary of the differences between select, poll, and epoll

Submitted by 家住魔仙堡 at 2019-11-30 11:47:56
select, poll, and epoll are all I/O multiplexing mechanisms. I/O multiplexing means monitoring multiple descriptors through a single mechanism: as soon as any descriptor becomes ready (usually read-ready or write-ready), the program is notified so it can perform the corresponding read or write. However, select, poll, and epoll are all essentially synchronous I/O, because once a read/write event is ready the program must still do the read or write itself, and that read/write call blocks. Asynchronous I/O, by contrast, does not require the program to do the read/write itself: the asynchronous I/O implementation takes care of copying the data from kernel space to user space. The usage of these three multiplexing mechanisms was covered in detail in the previous three posts, each tested with an echo server. Links: select: http://www.cnblogs.com/Anker/archive/2013/08/14/3258674.html poll: http://www.cnblogs.com/Anker/archive/2013/08/15/3261006.html epoll: http://www.cnblogs.com/Anker/archive/2013/08/17/3263780.html Today the three are compared, drawing on material from the web and from books, summarized as follows: 1. select implementation. A call to select proceeds as follows: (1) copy the fd_set from user space into the kernel with copy_from_user; (2) register the callback __pollwait; (3) iterate over all fds…

How do you use AIO and epoll together in a single event loop?

Submitted by 断了今生、忘了曾经 at 2019-11-30 07:11:57
How can you combine AIO and epoll together in a single event loop? Google finds lots of talk from 2002 and 2003 about unifying them, but it's unclear whether anything ever happened, or whether it's possible. Has anyone rolled their own epoll loop using eventfd for the AIO signal? Answer: try libevent: http://www.monkey.org/~provos/libevent/ - there are patches to support both. Answer: you can see http://www.xmailserver.org/eventfd-aio-test.c for a sample of aio and eventfd. Answer: Tried eventfd with epoll? "A key point about an eventfd file descriptor is that it can be monitored just like any other file descriptor using select…

Poorly-balanced socket accepts with Linux 3.2 kernel vs 2.6 kernel

Submitted by 六眼飞鱼酱① at 2019-11-30 06:24:39
I am running a fairly large-scale Node.js 0.8.8 app using Cluster with 16 worker processes on a 16-processor box with hyperthreading (so 32 logical cores). We are finding that since moving to the Linux 3.2.0 kernel (from 2.6.32), the balancing of incoming requests between worker child processes seems to be heavily weighted toward 5 or so processes, with the other 11 not doing much work at all. This may be more efficient for throughput, but seems to increase request latency and is not optimal for us, because many of these are long-lived websocket connections that can start doing work at the same time.

Load-balancing traffic forwarding dispatcher G5 changelog v1.2.0

Submitted by 最后都变了- at 2019-11-30 05:33:44
Load-balancing traffic forwarding dispatcher G5 changelog v1.2.0. Some users wrote in hoping for WINDOWS and UNIX-like support. This week the version was updated to v1.2.0, with the following main changes: * the header file now includes limits.h, fixing an include problem in some UNIX-like environments * added support for WINDOWS and UNIX-like platforms (using the select event model). Currently supported platforms and event models: * Linux - epoll * WINDOWS - select * UNIX-like - select. Project homepage: http://git.oschina.net/calvinwilliams/G5 Author email: calvinwilliams.c@gmail.com Source: oschina Link: https://my.oschina.net/u/988092/blog/224403

[Open-source software] Load-balancing communication dispatcher (LB dispatch)

Submitted by 耗尽温柔 at 2019-11-30 05:33:31
Load-balancing communication dispatcher (LB dispatch) - G5. 1. Background. Chatting today with the head of systems operations, the F5 we have been using came up; internally it is regarded as overpriced and complicated to understand, the overall impression is not good, and its future use is uncertain. I once developed a communication middleware for the bank that included a software load-balancing implementation, and it has worked well over several years of use, so I suddenly wanted to build a standalone, pure software load-balancing communication dispatcher and open-source it for everyone. No sooner said than done: back home, I searched for similar software online and organized the technical requirements. Software definition: a rule-based communication dispatcher that matches the source network address and inbound port and, following a load-balancing algorithm, forwards to one member of a set of target network addresses. Goals: * support long- and short-lived TCP connections, with UDP to follow * application-protocol agnostic, i.e. supports HTTP, FTP, TELNET, SSH and every other application-layer protocol * stable and efficient: epoll (ET mode) as the first choice on Linux, fully asynchronous design, which also means only Linux is supported for now * dispatch rules in a configuration file; rules can also be managed remotely online, and status queried * supports several mainstream load-balancing algorithms * small source and executable footprint, simple concepts, quick to use. Use cases: * traffic forwarding and dispatching * paired with communication software that lacks load-balancing capability, to get load-balanced dispatch of local connections to peers without the effort and risk of modifying that software * a low-cost load-balancing communication gateway in front of a website. Before development, a good name: relative to the hardware F5, the software implementation was named G5 ^_^ After five evenings of furious coding, v1.0.0 emerged…

Traffic forwarding, (load-balancing) communication dispatcher (G5)

Submitted by 旧时模样 at 2019-11-30 05:33:17
Traffic forwarding, (load-balancing) communication dispatcher (G5) - changelog v1.2.1. G5 is a lightweight TCP/IP traffic-forwarding and (load-balancing) dispatching tool featuring high performance, high concurrency, easy configuration and use, and remote management. It is built on an epoll (ET) event-driven, non-blocking, fully asynchronous, lock-free framework (falling back to a select implementation on non-Linux operating systems) and runs on Linux, UNIX, WINDOWS and other mainstream operating systems. G5 supports all TCP application-layer protocols, which means it can serve not only HTTP websites but also SMTP, POP, FTP and so on, even uncommon TCP application protocols. G5 supports almost all mainstream load-balancing algorithms, such as round-robin, least connections, and minimum response time. Use cases: * simple TCP traffic forwarding * paired with communication software that lacks load-balancing capability, to get load-balanced dispatch without the effort and risk of modifying that software * a reverse-proxy communication gateway for websites. The version was updated to v1.2.1, with the following main changes: * G5 can now run as a WINDOWS service, with new command-line options for installing and uninstalling the WINDOWS service * bug fix: when full-duplex data flowed on one socket and forwarding was slower than receiving in both directions, received data could be starved * bug fix: exporting rules via remote management did not export their attributes. Project homepage: http://git.oschina.net/calvinwilliams/G5 Author email: calvinwilliams.c@gmail.com Source: oschina Link: