epoll

Comparing the pros and cons of Apache and Nginx

淺唱寂寞╮ submitted on 2019-12-02 09:45:33
1. Advantages of nginx over apache:
- Lightweight: providing the same web service, it uses less memory and fewer resources than apache.
- Strong under concurrency: nginx handles requests asynchronously and without blocking, while apache is blocking; under high concurrency nginx keeps resource consumption low and performance high.
- Highly modular design, so writing modules is relatively simple.
- An active community that turns out high-performance modules quickly.

Advantages of apache over nginx:
- rewrite: more powerful than nginx's rewrite.
- A huge number of modules: basically anything you can think of is available.
- Fewer bugs: nginx has comparatively more.
- Rock solid.

Existence is its own justification: in general, use nginx for web services that need performance; if you don't need performance and only want stability, go with apache. The latter implements its various feature modules better than the former; the ssl module, for example, is better and has more configuration options. One thing to note: the epoll network I/O model (kqueue on freebsd) is the fundamental reason for nginx's high processing performance, but epoll does not win in every scenario: if the static content being served is just a handful of files, apache's select model may well outperform epoll. Of course, that is only a hypothesis based on how the network I/O models work; a real deployment still needs to be benchmarked.

2. As a web server: compared with Apache, Nginx uses fewer resources, supports more concurrent connections, and delivers higher efficiency, which makes…
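To make the epoll model the excerpt credits for nginx's performance concrete, here is a minimal sketch of an epoll-based accept/read loop in C. It is an illustration of the epoll API only, not nginx's actual code; listen_fd is assumed to be an already-bound, listening, non-blocking socket, and error handling is omitted.

```c
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Minimal epoll accept/read loop.
 * Assumes listen_fd is a non-blocking, listening socket. */
void event_loop(int listen_fd)
{
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);  /* block until something is ready */
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {                 /* new connection */
                int conn = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
            } else {                               /* data (or EOF) on a client */
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof buf);
                if (r <= 0) { close(fd); continue; } /* a closed fd leaves the set automatically */
                /* ... handle r bytes ... */
            }
        }
    }
}
```

Under this model one thread can watch thousands of sockets; with select the same loop would rebuild and scan an fd_set on every iteration, which is why select only stays competitive when the descriptor count is tiny, as the excerpt notes.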

Understanding Redis's single-threaded model

故事扮演 submitted on 2019-12-02 06:08:36
1. The Redis single-thread question
Single-threaded means the network request module uses a single thread (so concurrency safety need not be considered), i.e. one thread handles all network requests; other modules still use multiple threads.

2. Why Redis executes quickly
(1) The vast majority of requests are pure in-memory operations (very fast).
(2) Using a single thread avoids unnecessary context switches and race conditions.
(3) Non-blocking I/O: I/O multiplexing.

3. Redis's internal implementation
Internally Redis uses epoll: epoll plus a simple home-grown event framework. Reads, writes, closes, and connections are all turned into events, and epoll's multiplexing ensures that no time at all is wasted on I/O.
These three conditions are not independent of one another; the first one in particular matters: if every request were time-consuming, the throughput and performance of a single-threaded design are easy to imagine. Redis, in short, picked a technical approach suited to its particular scenario.

4. Redis and thread safety
Redis in effect adopts thread confinement: tasks are confined to a single thread, which naturally avoids thread-safety problems. However, compound operations that depend on multiple Redis commands still require a lock, and possibly a distributed one.

Another take on the Redis single thread: Understanding the Redis single thread. My own understanding: Redis has a client side and a server side, and a complete Redis request passes through several stages (network connection from client to server --> a Redis read/write event fires --> data processing on the Redis server (single-threaded) --> data is returned). The commonly cited Redis single-threaded model…
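To illustrate the "epoll plus a simple self-built event framework" idea, here is a minimal single-threaded dispatch sketch in C, in the spirit of (but not copied from) Redis's ae event loop; add_read_event, handlers, and dispatch are names invented for this sketch.

```c
#include <sys/epoll.h>

#define MAX_FDS 1024

typedef void (*event_handler)(int fd);  /* callback invoked when fd is readable */
static event_handler handlers[MAX_FDS]; /* fd -> handler table */

/* Register fd for read events with a callback (sketch; no error handling). */
void add_read_event(int epfd, int fd, event_handler h)
{
    handlers[fd] = h;
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

/* Single-threaded dispatch loop: every handler runs on this one thread,
 * so handlers never race with each other -- the "thread confinement"
 * described above. */
void dispatch(int epfd)
{
    struct epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; i++)
            handlers[events[i].data.fd](events[i].data.fd);
    }
}
```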

epoll loops on disconnection of a client

烈酒焚心 submitted on 2019-12-02 00:36:06
Question: I am trying to implement a socket server using epoll. I have 2 threads doing 2 tasks: listening for incoming connections, and writing to the screen the data the clients send. For my test I have the client and the server on the same machine, with 3 or 4 clients running. The server works fine until I kill one of the clients with CTRL-C: as soon as I do that, the server starts looping and printing data from the other clients at a very fast rate. The strange thing is that the client sends data…
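The excerpt cuts off before the code, but this symptom is the classic one: when a client is killed, its socket becomes readable forever at EOF, and a level-triggered epoll reports it on every epoll_wait call unless the fd is removed and closed. A hedged sketch of the usual fix follows (handle_readable is an invented name; the question's own code is not shown):

```c
#include <errno.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Called when epoll reports fd readable. Returns 0, or -1 if fd was closed. */
int handle_readable(int epfd, int fd)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0) {
        /* ... print/process the n bytes ... */
        return 0;
    }
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;                       /* spurious wakeup, nothing to do */
    /* n == 0 (peer closed) or a real error: remove and close the fd,
     * otherwise level-triggered epoll reports it readable forever,
     * which is exactly the fast looping described in the question. */
    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
    close(fd);
    return -1;
}
```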

An overview of I/O multiplexing mechanisms

江枫思渺然 submitted on 2019-12-01 21:53:32
Next we will introduce several common I/O models and the differences between them:
- blocking I/O
- nonblocking I/O
- I/O multiplexing (select and poll)
- signal driven I/O (SIGIO)
- asynchronous I/O (the POSIX aio_ functions)

blocking I/O: This one hardly needs explanation: a blocking socket. The figure below illustrates its call flow. The figure is worth walking through, since the examples below all refer back to it: first the application calls recvfrom() and control passes into the kernel. Note that the kernel goes through two phases: waiting for the data, and copying the data from kernel space to user space. recvfrom() does not return until the copy completes; the whole process blocks throughout.

nonblocking I/O: The opposite of blocking I/O: a non-blocking socket, with the call flow shown below. As you can see, driving it directly amounts to polling... until the kernel buffer has data.

I/O multiplexing (select and poll): The most common I/O multiplexing model is select. select blocks first and returns only when there are active sockets. Compared with blocking I/O, select costs two system calls, but it can handle multiple sockets.

signal driven I/O (SIGIO): Only UNIX systems support…
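As a concrete illustration of the nonblocking model just described (my sketch, not from the original text): the socket is put into non-blocking mode with fcntl, after which a recv() that would block instead returns -1 with EAGAIN, and the caller must poll.

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Put a socket into non-blocking mode. */
int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Busy-poll read: exactly the wasteful loop the text describes --
 * recv() keeps returning EAGAIN until the kernel buffer has data. */
ssize_t polling_read(int fd, char *buf, size_t len)
{
    for (;;) {
        ssize_t n = recv(fd, buf, len, 0);
        if (n >= 0)
            return n;                    /* got data (or EOF when n == 0) */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;                   /* real error */
        usleep(1000);                    /* back off 1 ms between polls */
    }
}
```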

epoll loops on disconnection of a client

孤者浪人 submitted on 2019-12-01 21:14:42
I am trying to implement a socket server using epoll. I have 2 threads doing 2 tasks: listening for incoming connections, and writing to the screen the data the clients send. For my test I have the client and the server on the same machine, with 3 or 4 clients running. The server works fine until I kill one of the clients with CTRL-C: as soon as I do that, the server starts looping and printing data from the other clients at a very fast rate. The strange thing is that each client sends data every 2 seconds, but the server's rate is higher. epoll_wait is also supposed to print something when…
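Complementing the read()-returns-0 fix sketched after the previous copy of this question, another hedged approach is to register for EPOLLRDHUP so the peer's shutdown is reported as an explicit event; watch_client and on_event are invented names for this sketch.

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Register a client fd with EPOLLRDHUP so a peer close is reported explicitly. */
void watch_client(int epfd, int fd)
{
    struct epoll_event ev = { .events = EPOLLIN | EPOLLRDHUP, .data.fd = fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

/* In the event loop: check for hangup before trying to read. */
void on_event(int epfd, struct epoll_event *e)
{
    if (e->events & (EPOLLRDHUP | EPOLLHUP | EPOLLERR)) {
        /* Peer closed or the socket errored: stop watching and close it,
         * so epoll_wait no longer spins on the dead descriptor. */
        epoll_ctl(epfd, EPOLL_CTL_DEL, e->data.fd, NULL);
        close(e->data.fd);
        return;
    }
    /* ... otherwise EPOLLIN: read and print the data ... */
}
```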

Epoll with edge triggered and oneshot only reports once

蓝咒 submitted on 2019-12-01 18:38:32
I'm currently adding sockfds created from accept to an epoll instance with the following events: const int EVENTS = (EPOLLET | EPOLLIN | EPOLLRDHUP | EPOLLONESHOT | EPOLLERR | EPOLLHUP); Once an event is triggered, I hand it off to a handler thread, read, and then re-enable the sockfd through epoll_ctl with the same flags. However, I only receive the EPOLLIN event one time. Also, if I kill the client any time after the first event is received, I do not get hang-up events either. From reading the man pages, I thought I understood the correct approach with edge-triggered and one-shot. Below is some…
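The excerpt stops before the code, but a classic pitfall matching these symptoms is re-arming a one-shot fd with EPOLL_CTL_ADD (which fails with EEXIST) instead of EPOLL_CTL_MOD. A sketch of the re-arm step under that assumption:

```c
#include <sys/epoll.h>

static const int EVENTS =
    EPOLLET | EPOLLIN | EPOLLRDHUP | EPOLLONESHOT | EPOLLERR | EPOLLHUP;

/* After EPOLLONESHOT fires, the fd stays in the epoll set but is disabled.
 * It must be re-armed with EPOLL_CTL_MOD -- re-adding it with
 * EPOLL_CTL_ADD fails with EEXIST, so no further events ever arrive. */
int rearm(int epfd, int sockfd)
{
    struct epoll_event ev = { .events = EVENTS, .data.fd = sockfd };
    return epoll_ctl(epfd, EPOLL_CTL_MOD, sockfd, &ev);
}
```

With EPOLLET the handler should also drain the socket (read until EAGAIN) before re-arming, or the next edge may never fire.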

What is the order in which File Descriptors in epoll are returned?

▼魔方 西西 submitted on 2019-12-01 18:11:37
Let's say I have a set of file descriptors, say 8, 9, 10, 11, 12, registered in the order specified, and I do an epoll_wait() for data to be read on them. epoll_wait() returns with data to be read on sockets 8, 10, and 11. Will the order of the file descriptors returned in the epoll array be 8, 10, 11, or could they be jumbled? The man page does not say anything specific about the order, so it probably would not be a good idea to depend on the order when you call it. Even if they were returned in order in one implementation, they might not be in another. It would be best to assume that they could be…
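A small sketch (mine, not the answerer's) of the order-independent way to consume the results: treat events[0..n) as an unordered set and key all dispatching off data.fd, never off the array position.

```c
#include <sys/epoll.h>

#define MAX_EVENTS 16

/* Consume epoll results without assuming any ordering: each entry
 * carries its own fd (or user pointer) in events[i].data. */
void drain(int epfd)
{
    struct epoll_event events[MAX_EVENTS];
    int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
    for (int i = 0; i < n; i++) {
        int fd = events[i].data.fd;   /* identify the fd from the event itself */
        /* ... dispatch on fd, never on the position i ... */
    }
}
```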

Does epoll preserve the order in which fds were registered?

江枫思渺然 submitted on 2019-12-01 17:18:05
I'm playing around with Linux system calls and I found an aspect of epoll that is not clear to me. Say I create an epoll instance: epollfd = epoll_create(50); Next, I register 50 file descriptors in a for-loop: for(i=0; i<50; i++){ // open file "file-i".txt // construct epoll_event // register new file descriptor with epoll_ctl(epollfd, EPOLL_CTL_ADD ... Now we have 50 files that are ready for action (read or write -- doesn't matter). We set MAX_EVENTS to 3: #define MAX_EVENTS 3 ... struct epoll_event events[MAX_EVENTS] ... epoll_wait(epollfd, events, MAX_EVENTS, -1) All of those 50 files…
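The excerpt stops mid-question, but the mechanics it sets up can be sketched (illustrative code, not the asker's): with maxevents set to 3, each epoll_wait() call reports at most 3 ready descriptors, and -- per the epoll_wait man page -- successive calls round-robin through the ready set, so all 50 files get reported 3 at a time without starvation.

```c
#include <sys/epoll.h>

#define MAX_EVENTS 3

/* With 50 ready fds and maxevents == 3, each call returns at most 3;
 * in level-triggered mode the still-ready fds reappear on later calls,
 * and the kernel rotates through them so none is starved. */
void drain_in_batches(int epollfd)
{
    struct epoll_event events[MAX_EVENTS];
    int handled = 0;
    while (handled < 50) {
        int n = epoll_wait(epollfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            /* ... consume events[i].data.fd so it stops being "ready" ... */
            handled++;
        }
    }
}
```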

What is the order in which File Descriptors in epoll are returned?

旧时模样 submitted on 2019-12-01 17:05:28
Question: Let's say I have a set of file descriptors, say 8, 9, 10, 11, 12, registered in the order specified, and I do an epoll_wait() for data to be read on them. epoll_wait() returns with data to be read on sockets 8, 10, and 11. Will the order of the file descriptors returned in the epoll array be 8, 10, 11, or could they be jumbled?

Answer 1: The man page does not say anything specific about the order, so it probably would not be a good idea to depend on the order when you call it. Even if they were returned in…