epoll

The differences between Apache and Nginx

岁酱吖の submitted on 2019-12-03 17:03:47
Nginx:
- Lightweight and written in C; for the same web service it uses less memory and fewer resources.
- Handles concurrency well: nginx uses epoll and kqueue as its event models and processes requests asynchronously and non-blockingly, so its load capacity is far higher than Apache's, which is blocking. Under high concurrency nginx keeps resource consumption low and performance high, while Apache, when PHP processing is slow or front-end pressure is heavy, easily spawns a flood of processes and ends up refusing service.
- Serves static files well; its static-file performance is more than three times Apache's.
- Highly modular design; writing modules is relatively simple.
- Concise configuration: regex-based config makes many things simple, and after changing the config you can test it with -t. Apache's configuration is complex, and discovering a config error only at restart time is painful.
- Works as a load-balancing server, supporting layer-7 load balancing.
- Is itself a reverse proxy server, and also makes an excellent mail proxy server.
- Starts very easily and can run 7x24 essentially without interruption, going months without a restart; software upgrades can even be performed without interrupting service.
- Active community; high-performance modules appear quickly.

Apache:
- Apache's rewrite is more powerful than nginx's; if rewriting is frequent, use Apache.
- Apache by now has an enormous number of modules; basically anything you can think of can be found.
- Apache is more mature, with fewer bugs.

Implementing a UDP server && client with epoll

人走茶凉 submitted on 2019-12-03 16:52:23
udp server:

#!/usr/bin/env python
#-*- coding:utf-8 -*-
import socket
import select
import Queue

# create the socket object
serversocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# allow address reuse
#serversocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# IP address and port
server_address = ("127.0.0.1", 50000)
# bind the address
serversocket.bind(server_address)
print "Server started, listening on:", server_address
# set the server socket non-blocking
#serversocket.setblocking(False)
# timeout in seconds
timeout = 10
# create the epoll object; events to monitor are added to it
epoll = select.epoll()
# register the listening fd in the read-event set
print "serversocket.fileno():%s" % serversocket.fileno()
epoll.register(serversocket
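The excerpt above cuts off at the register call. As a rough sketch of where it is headed, here is a minimal Python 3 epoll-based UDP echo server; the helper names (`make_server`, `serve_once`) are my own, and `select.epoll` is Linux-only:

```python
import select
import socket

def make_server(host="127.0.0.1", port=0):
    """Create a non-blocking UDP socket bound to host:port (0 = ephemeral port)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setblocking(False)
    sock.bind((host, port))
    return sock

def serve_once(sock, timeout=5):
    """Wait for one readable event and echo the datagram back to its sender."""
    epoll = select.epoll()
    epoll.register(sock.fileno(), select.EPOLLIN)
    try:
        for fd, event in epoll.poll(timeout):
            if event & select.EPOLLIN:
                data, addr = sock.recvfrom(4096)
                sock.sendto(data, addr)      # echo the payload back
                return data
    finally:
        epoll.unregister(sock.fileno())
        epoll.close()
    return None
```

A real server would wrap `serve_once` in a loop; returning after one datagram just keeps the sketch easy to exercise.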

What are the underlying differences among select, epoll, kqueue, and evport?

冷暖自知 submitted on 2019-12-03 16:27:28
I have been reading the Redis source recently. Redis implements a simple event-driven library based on I/O multiplexing. Redis says it chooses the best multiplexing mechanism supported by the system, with the following code:

/* Include the best multiplexing layer supported by this system.
 * The following should be ordered by performances, descending. */
#ifdef HAVE_EVPORT
#include "ae_evport.c"
#else
    #ifdef HAVE_EPOLL
    #include "ae_epoll.c"
    #else
        #ifdef HAVE_KQUEUE
        #include "ae_kqueue.c"
        #else
        #include "ae_select.c"
        #endif
    #endif
#endif

I want to know whether they have fundamental performance differences. If so,

Boost Message Queue not based on POSIX message queue? Impossible to select(2)?

可紊 submitted on 2019-12-03 12:46:51
I thought I'd use Boost.Interprocess's Message Queue in place of sockets for communication within one host. But after digging into it, it seems that this library for some reason eschews the POSIX message queue facility (which my Linux system supports), and instead is implemented on top of POSIX shared memory. The interface is similar enough that you might not guess this right away, but it seems to be the case. The downside for me is that shared memory obtained via shm_open(3) does not appear to be usable with select(2), as opposed to POSIX message queues obtained via mq_open(3). It seems

I/O multiplexing

生来就可爱ヽ(ⅴ<●) submitted on 2019-12-03 12:13:16
Why dig deeper into I/O multiplexing? HTTP/2, Redis, and similar topics repeatedly credit multiplexing for their efficiency gains, and only by grasping the basic concepts can you master them, so one step at a time. Before studying multiplexing, first get an initial picture of the five I/O models. (Omitted; to be supplemented later.)

The most important point about epoll's multiplexing is that internally it uses a red-black tree to record the added sockets, and a doubly linked list to receive the events triggered by the kernel. (The "doubly linked list" description is imprecise: each node of that ready list is the rdllink member embedded in an epitem structure.)

Red-black tree: it is precisely this extra store that lets epoll hand back the ready sockets directly, instead of checking them one by one the way select does. Since epoll must add and remove entries in the ready structure, that structure needs fast insertion and deletion, which a doubly linked list fits well.

Doubly linked list time complexity:

| Case | Insert | Delete |
| best: head node | O(1) | O(1) |
| best: tail node | O(1) | O(1) |
| average | O(n) | O(n) |

Details: when a process calls the epoll_create method, the Linux kernel creates an eventpoll structure, two of whose members are closely tied to how epoll is used. The eventpoll structure looks like this:

struct eventpoll {
    ....
    /* root of the red-black tree, which stores all monitored events added to this epoll instance */
    struct rb_root rbr;
    /* the doubly linked list holds the events that will, via epoll
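The effect of the two structures is visible from user space: register many idle sockets, make exactly one readable, and epoll reports only that one, no matter how many others are being watched. A small sketch (the function name `demo` is mine; `select.epoll` requires Linux):

```python
import select
import socket

def demo(n_idle=50):
    """Register n_idle idle fds plus one active fd; return what epoll reports."""
    epoll = select.epoll()
    pairs = [socket.socketpair() for _ in range(n_idle + 1)]
    for r, _w in pairs:
        # each register is an epoll_ctl(ADD): the fd goes into the kernel's tree
        epoll.register(r.fileno(), select.EPOLLIN)
    pairs[-1][1].send(b"x")              # only the last pair becomes readable
    ready = [fd for fd, _ev in epoll.poll(1)]   # drains the kernel ready list
    target = pairs[-1][0].fileno()
    epoll.close()
    for r, w in pairs:
        r.close()
        w.close()
    return ready, target
```

However large `n_idle` grows, `epoll.poll` hands back just the single ready descriptor, which is exactly the benefit of keeping a separate ready list alongside the tree of monitored fds.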

Why exactly does ePoll scale better than Poll?

瘦欲@ submitted on 2019-12-03 11:42:42
Short question, but for me it's difficult to understand: why exactly does epoll scale better than poll? The poll system call needs to copy your list of file descriptors to the kernel each time it is called. With epoll this happens only once, in epoll_ctl, not on every call to epoll_wait. Also, epoll_wait is O(1) with respect to the number of descriptors watched, which means it does not matter whether you wait on one descriptor or on 5,000 or 50,000 descriptors. poll, while more efficient than select, still has to walk over the list every time (i.e. it is O(N) in the number of descriptors). And
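The difference shows up directly in the two APIs: `poll(2)` inherently takes the whole `pollfd` array on every syscall, while `epoll` builds its interest list in the kernel once via `epoll_ctl`. A sketch contrasting Python's wrappers for both (the function name `compare` is mine):

```python
import select
import socket

def compare(n=3):
    """Register n fds with both poll and epoll; return what each reports ready."""
    pairs = [socket.socketpair() for _ in range(n)]
    pairs[0][1].send(b"x")                      # make exactly one fd readable

    # poll(2): the full interest list crosses into the kernel on every poll() call
    p = select.poll()
    for r, _w in pairs:
        p.register(r.fileno(), select.POLLIN)   # user-space bookkeeping only
    poll_ready = [fd for fd, _ in p.poll(0)]

    # epoll(7): the interest list lives in the kernel, one epoll_ctl per register
    e = select.epoll()
    for r, _w in pairs:
        e.register(r.fileno(), select.EPOLLIN)
    epoll_ready = [fd for fd, _ in e.poll(0)]

    target = pairs[0][0].fileno()
    e.close()
    for r, w in pairs:
        r.close()
        w.close()
    return poll_ready, epoll_ready, target
```

Both report the same single ready fd here; the scaling difference is that with `n` in the tens of thousands, each `p.poll()` still copies and scans all `n` entries, while `e.poll()`'s cost tracks only the number of *ready* descriptors.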

Implementing a simple epoll socket in Python

江枫思渺然 submitted on 2019-12-03 11:37:40
A Python epoll server:

#!/usr/bin/env python
#-*- coding:utf-8 -*-
import socket
import select
import Queue

# create the socket object
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# allow address reuse
serversocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# IP address and port
server_address = ("127.0.0.1", 8888)
# bind the address
serversocket.bind(server_address)
# listen, with a maximum backlog of 10 connections
serversocket.listen(10)
print "Server started, listening on:", server_address
# set the server socket non-blocking
serversocket.setblocking(False)
# timeout in seconds
timeout = 5
# create the epoll object; events to monitor are added to it
epoll = select.epoll()
# register the listening fd in the read-event set
epoll.register(serversocket.fileno(),

epoll with edge triggered event

谁说我不能喝 submitted on 2019-12-03 10:16:22
Question: The man page of epoll has sample code for edge-triggered operation like the following:

for (;;) {
    nfds = epoll_wait(epollfd, events, MAX_EVENTS, -1);
    if (nfds == -1) {
        perror("epoll_pwait");
        exit(EXIT_FAILURE);
    }
    for (n = 0; n < nfds; ++n) {
        if (events[n].data.fd == listen_sock) {
            conn_sock = accept(listen_sock, (struct sockaddr *) &local, &addrlen);
            if (conn_sock == -1) {
                perror("accept");
                exit(EXIT_FAILURE);
            }
            setnonblocking(conn_sock);
            ev.events = EPOLLIN | EPOLLET;
            ev.data.fd = conn_sock;
            if

EPOLLRDHUP not reliable

可紊 submitted on 2019-12-03 08:33:14
I'm using non-blocking reads/writes over a client-server TCP connection with epoll_wait. The problem is, I can't reliably detect the 'peer closed connection' event using the EPOLLRDHUP flag. It often happens that the flag is not set. The client uses close(), and the server, most of the time, receives an EPOLLIN | EPOLLRDHUP event from epoll_wait. Reading yields zero bytes, as expected. Sometimes, though, only EPOLLIN comes, also yielding zero bytes. Investigation with tcpdump shows that a normal shutdown occurs as far as I can tell. I see a Flags [F.], Flags [F.], Flags [.] sequence of events, which
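A defensive pattern is to treat either signal as end-of-stream: act on EPOLLRDHUP when it arrives, but also treat a zero-byte recv() after EPOLLIN as peer-closed, since a zero-byte read is the definitive EOF signal. A sketch in Python (function name mine; a Unix socketpair stands in for the TCP connection):

```python
import select
import socket

def wait_for_close(sock, timeout=1):
    """Return (saw_rdhup, saw_eof) after one epoll wakeup on sock."""
    epoll = select.epoll()
    epoll.register(sock.fileno(), select.EPOLLIN | select.EPOLLRDHUP)
    events = epoll.poll(timeout)
    epoll.close()
    for _fd, ev in events:
        saw_rdhup = bool(ev & select.EPOLLRDHUP)
        saw_eof = sock.recv(4096) == b""     # zero bytes => peer closed
        return saw_rdhup, saw_eof
    return False, False
```

Keying the cleanup on `saw_rdhup or saw_eof` covers both the usual case (flag set) and the occasional EPOLLIN-only wakeup the question describes.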

C: epoll and multithreading

二次信任 submitted on 2019-12-03 08:17:35
Question: I need to create a specialized HTTP server. For this I plan to use the epoll syscall, but I want to utilize multiple processors/cores and I can't come up with an architecture. At the moment my idea is the following: create multiple threads, each with its own epoll descriptor; the main thread accepts connections and distributes them among the threads' epoll instances. But are there any better solutions? Which books/articles/guides can I read on high-load architectures? I've only seen the C10K article, but most of its links to examples are dead :(
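The layout described in the question can be sketched in a few lines. This is an illustration under CPython (class and function names are mine), relying on the fact that epoll_ctl may be called from one thread while another is blocked in epoll_wait, and on the GIL to keep the shared dict operations safe:

```python
import select
import socket
import threading

class Worker(threading.Thread):
    """One event loop with its own private epoll instance."""
    def __init__(self):
        super().__init__(daemon=True)
        self.epoll = select.epoll()
        self.conns = {}

    def add(self, conn):
        """Called from the acceptor thread; epoll_ctl ADD wakes a blocked wait."""
        self.conns[conn.fileno()] = conn
        self.epoll.register(conn.fileno(), select.EPOLLIN)

    def run(self):
        while True:
            for fd, _ev in self.epoll.poll(1):
                conn = self.conns[fd]
                data = conn.recv(4096)
                if data:
                    conn.send(data)              # echo as a stand-in handler
                else:                            # zero bytes: peer closed
                    self.epoll.unregister(fd)
                    del self.conns[fd]
                    conn.close()

def acceptor(srv, workers):
    """Main-thread loop: accept and hand connections out round-robin."""
    i = 0
    while True:
        conn, _addr = srv.accept()
        conn.setblocking(False)
        workers[i % len(workers)].add(conn)
        i += 1
```

Alternatives worth comparing against this design are SO_REUSEPORT (each thread accepts on its own listening socket) and EPOLLEXCLUSIVE (threads share one listener without thundering herd); both move the distribution work into the kernel.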