epoll

Communication with child process using pipes and epoll

Submitted by 独自空忆成欢 on 2019-12-11 11:50:35
Question: I'm writing an application which will start some processes (fork and exec) depending on user input, and it should inform the user about every error in those processes (print some internal ID plus the message the process wrote to stderr). I'd also like to detect exiting processes. I'm facing the problem that I cannot receive data after the execl() call. Test data is received in the epoll_wait loop, but the process started by exec seems not to write anything to stderr, nor even to exit (I don't know if I'm…
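A frequent cause of the symptom described (test data arrives via epoll, but nothing after execl) is that the parent keeps the pipe's write end open, so EOF is never reported, or that the child's stderr was never redirected into the pipe before exec. A minimal sketch of the usual pattern, assuming an illustrative /bin/sh child rather than the asker's actual program, with error handling trimmed:

    /* Sketch: capture a child's stderr through a pipe and watch it with epoll. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/epoll.h>

    int main(void) {
        int fds[2];
        pipe(fds);                        /* fds[0] = read end, fds[1] = write end */

        pid_t pid = fork();
        if (pid == 0) {                   /* child */
            close(fds[0]);
            dup2(fds[1], STDERR_FILENO);  /* child's stderr now feeds the pipe */
            close(fds[1]);
            execl("/bin/sh", "sh", "-c", "echo oops >&2", (char *)NULL);
            _exit(127);                   /* reached only if execl fails */
        }

        close(fds[1]);                    /* crucial: parent closes the write end,
                                             otherwise EOF never arrives */
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[0] };
        epoll_ctl(epfd, EPOLL_CTL_ADD, fds[0], &ev);

        struct epoll_event out;
        while (epoll_wait(epfd, &out, 1, -1) == 1) {
            char buf[256];
            ssize_t n = read(out.data.fd, buf, sizeof buf);
            if (n <= 0) break;            /* EOF: child closed stderr / exited */
            printf("child stderr: %.*s", (int)n, buf);
        }
        return 0;
    }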

Is there any benefit to using epoll with a very small number of file descriptors?

Submitted by 落花浮王杯 on 2019-12-11 02:49:37
Question: Would the following single-threaded UDP client application see a performance benefit from using epoll over simply calling recvfrom/sendto on non-blocking sockets? Let me explain the client. I am writing a single-threaded UDP-based client (custom protocol) that both sends and receives data using non-blocking I/O, and my colleague suggested I use epoll for this. The client sends and receives multiple packets of information that are all associated with a unique session id, and multiple sessions can…
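For a sense of scale: with a single socket, epoll mostly turns one syscall per datagram into two, since each wakeup becomes an epoll_wait followed by a recvfrom. A sketch of both loop shapes (illustrative only, hypothetical descriptor sock, the first variant assuming the socket is left blocking, error handling omitted):

    #include <sys/epoll.h>
    #include <sys/socket.h>

    void loop_blocking(int sock) {
        char buf[2048];
        for (;;) {
            /* One syscall per datagram; the kernel sleeps us until data arrives. */
            ssize_t n = recvfrom(sock, buf, sizeof buf, 0, NULL, NULL);
            if (n < 0) break;
            /* handle_packet(buf, n); */
        }
    }

    void loop_epoll(int sock) {
        char buf[2048];
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = sock };
        epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev);
        for (;;) {
            struct epoll_event out;
            /* Two syscalls per datagram: epoll_wait, then recvfrom. With one
               fd this buys nothing; it pays off when watching many fds. */
            if (epoll_wait(epfd, &out, 1, -1) < 1) break;
            ssize_t n = recvfrom(sock, buf, sizeof buf, 0, NULL, NULL);
            if (n < 0) break;
            /* handle_packet(buf, n); */
        }
    }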

TCP options: SO_RCVLOWAT and SO_SNDLOWAT

Submitted by 為{幸葍}努か on 2019-12-10 19:35:25
Every socket has a receive low-water mark and a send low-water mark.

Receive low-water mark: for a TCP socket, the receive buffer must hold at least this amount of data before the kernel reports the socket as readable, e.g. before select or epoll returns "socket readable". It defaults to 1 byte.

Send low-water mark: for a TCP socket, the same idea applied to the send buffer. If the concepts of the TCP receive buffer and send buffer are fuzzy, see the companion post "TCP options: SO_RCVBUF and SO_SNDBUF".

Understanding the receive low-water mark: if the application never calls recv() to drain the socket's receive buffer, arriving data simply accumulates there, and as the peer keeps sending from its own send buffer, the receive buffer would eventually fill up. The receive low-water mark is the threshold at which buffered data becomes worth reporting: once epoll observes that a socket's receive buffer holds more data than the receive low-water mark, it marks the socket read-ready, the epoll loop returns, and the read event can be handled.

Understanding the send low-water mark: if the application has not called send() to copy data from its own buffer into the socket send buffer, then as the kernel transmits the send buffer's contents over TCP, the buffer drains and its free space grows; once the free space exceeds the send low-water mark, epoll sees the socket as writable,…
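For reference, both thresholds are read and set with getsockopt/setsockopt at the SOL_SOCKET level; a minimal sketch (note that on Linux, per socket(7), SO_SNDLOWAT is fixed and attempting to set it fails with ENOPROTOOPT):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Sketch: raise the receive low-water mark so epoll/select only
       report "readable" once at least 128 bytes are buffered. */
    void set_lowat(int sock) {
        int lowat = 128;
        if (setsockopt(sock, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof lowat) < 0)
            perror("SO_RCVLOWAT");
        /* On Linux the send low-water mark cannot be changed; this call is
           expected to fail with ENOPROTOOPT. */
        if (setsockopt(sock, SOL_SOCKET, SO_SNDLOWAT, &lowat, sizeof lowat) < 0)
            perror("SO_SNDLOWAT (expected to fail on Linux)");
    }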

Do I get a notification from epoll when a fd is closed?

Submitted by 只愿长相守 on 2019-12-10 16:26:26
Question: I am currently building something that uses epoll. It works pretty well, but it would be good to have a notification when a file descriptor gets removed from epoll because the underlying fd was closed. Is there a way to get a notification from epoll as soon as an fd is closed?

Answer 1: No. Here's a Zig program to demonstrate:

    const std = @import("std");

    pub fn main() !void {
        const epollfd = blk: {
            const rc = std.os.linux.epoll_create1(std.os.linux.EPOLL_CLOEXEC);
            const err = std.os.linux.getErrno(rc…
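The behavior the answer demonstrates can be shown in C as well; a minimal sketch of my own (not from the answer): register a pipe's read end, close it, and epoll_wait simply times out, because closing the last reference to an fd removes it from the interest list silently, with no event of any kind.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/epoll.h>

    int main(void) {
        int fds[2];
        pipe(fds);

        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[0] };
        epoll_ctl(epfd, EPOLL_CTL_ADD, fds[0], &ev);

        /* Closing the last reference removes the fd from the interest
           list silently: no EPOLLHUP, no EPOLLERR, no event at all. */
        close(fds[0]);

        struct epoll_event out;
        int n = epoll_wait(epfd, &out, 1, 1000 /* ms */);
        printf("epoll_wait returned %d (0 = timeout, no notification)\n", n);
        return 0;
    }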

Boost Asio On Linux Not Using Epoll

Submitted by 好久不见. on 2019-12-10 14:22:48
Question: I was under the impression that boost::asio would use an epoll setup by default instead of a select implementation, but after running some tests it looks like my setup is using select. OS: RHEL 4. Kernel: 2.6. GCC: 3.4.6. I wrote a little test program to verify which reactor header was being used, and it looks like it is using the select reactor rather than the epoll reactor.

    #include <boost/asio.hpp>
    #include <string>
    #include <iostream>

    std::string output;
    #if defined(BOOST_ASIO_EPOLL_REACTOR_HPP)…

What is the state of C10K-like event-based server development in TCL?

Submitted by 笑着哭i on 2019-12-10 13:54:20
Question: TCL is a nice, simple programming language, but it does not seem to get the credit and/or respect it deserves. I learned it back in 1995 in college, promptly forgot about it, and only stumbled upon it again recently. I am mostly interested in TCL for developing TCP-based network services as well as for web development. It has been mentioned that TCL makes network programming simple. However, it seems that TCL uses select() under the covers, which does not scale well with "web scale" in mind (see the…

SIGCHLD not caught in epoll_wait?

Submitted by 空扰寡人 on 2019-12-10 12:19:17
Question: I wanted to understand the behavior of signals on fork. I wrote a small program to catch SIGCHLD with epoll_wait, but when I do a "kill -9" on the forked child, I am not getting any signal, and the child is left in the defunct state (I have a handler that does a wait()). Here is the code:

    //....
    sigemptyset(&mask);
    sigaddset(&mask, SIGCHLD);
    pthread_sigmask(SIG_BLOCK, &mask, NULL);
    signal_fd = signalfd(-1, &mask, 0);
    memset(&tev, 0, sizeof(tev));
    tev.events = EPOLLIN | EPOLLONESHOT;
    tev.data.fd = signal_fd;…
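Two pitfalls commonly break this pattern: SIGCHLD must be blocked in every thread (a thread that leaves it unblocked will receive the signal itself, and a classic handler installed there will consume it before signalfd ever sees it), and the signalfd must actually be drained and the child reaped. A minimal working sketch of the intended pattern, reconstructed from the excerpt rather than taken from the asker's full program:

    #include <stdio.h>
    #include <unistd.h>
    #include <signal.h>
    #include <sys/signalfd.h>
    #include <sys/epoll.h>
    #include <sys/wait.h>

    int main(void) {
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGCHLD);
        sigprocmask(SIG_BLOCK, &mask, NULL);   /* block before fork, process-wide */

        pid_t pid = fork();
        if (pid == 0) { pause(); _exit(0); }   /* child: wait to be killed */

        int sfd = signalfd(-1, &mask, 0);
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = sfd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, sfd, &ev);

        kill(pid, SIGKILL);                    /* the "kill -9" in question */

        struct epoll_event out;
        if (epoll_wait(epfd, &out, 1, -1) == 1) {
            struct signalfd_siginfo si;
            read(sfd, &si, sizeof si);         /* drain the signal */
            waitpid(si.ssi_pid, NULL, 0);      /* reap: no defunct child */
            printf("reaped child %d after SIGCHLD\n", (int)si.ssi_pid);
        }
        return 0;
    }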

What is the essence of epoll?

Submitted by 混江龙づ霸主 on 2019-12-10 09:25:56
If you do server-side development, you cannot avoid network programming, and epoll, the enabling technology behind high-performance network servers on Linux, is essential: nginx, Redis, Skynet, and most game servers all rely on this multiplexing technique. epoll clearly matters, but how does it differ from select, and what makes it efficient?

There is no shortage of articles explaining epoll online, but they tend to be either too shallow or buried in source-code walkthroughs, and few are genuinely accessible. The author therefore decided to write this piece so that readers without a specialist background can still understand how epoll works. Its core aim: make it clear why epoll performs so well. The article starts from how the network card receives data, tying in CPU interrupts and operating-system process scheduling along the way; it then works step by step through blocking receives and the evolution from select to epoll; finally, it digs into epoll's implementation details.

1. Starting from how the NIC receives data

Below is a typical computer block diagram: a computer consists of a CPU, memory, a network interface, and other components. The first step toward understanding the essence of epoll is to look at how a computer receives network data from the hardware's point of view.

[Figure: computer block diagram (image source: "Linux内核完全注释", section on microcomputer architecture)]

The next figure shows how the NIC receives data: in stage ①, the NIC receives the data arriving on the wire; in stage ②, the data travels through the hardware circuitry; finally, in stage ③, it is written to some address in memory. This process involves hardware topics such as DMA transfers and I/O path selection, but all we need to know is:…
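Since the excerpt ends before the API itself appears, here is the canonical shape of the epoll loop the article builds up to (a generic sketch for orientation, not code from the article):

    #include <sys/epoll.h>

    /* Generic epoll event-loop skeleton. */
    void event_loop(int listen_fd) {
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        struct epoll_event ready[64];
        for (;;) {
            /* Unlike select, the kernel hands back only the ready fds;
               there is no O(n) scan over the whole fd set on each wakeup. */
            int n = epoll_wait(epfd, ready, 64, -1);
            for (int i = 0; i < n; i++) {
                /* accept / read / write on ready[i].data.fd */
            }
        }
    }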

Techniques for optimizing JavaWeb site performance

Submitted by 拈花ヽ惹草 on 2019-12-10 02:09:43
1. Improving the server's concurrent processing capacity

We always want a single server to handle as many requests per unit time as possible; this has become the key measure of a web server's capability. A server can process multiple requests at once because the operating system, through its multi-execution-flow design, lets multiple tasks take turns using system resources, including the CPU, memory, and I/O. This requires choosing a suitable concurrency strategy that uses those resources sensibly, thereby raising the server's concurrent processing capacity. Such concurrency strategies are applied mostly inside low-level web server software such as Apache, nginx, and lighttpd.

2. Separating web components

The "web components" meant here are all the URL-addressable resources a web server provides: dynamic content, static pages, images, stylesheets, scripts, video, and so on. These resources differ greatly in file size, file count, update frequency, expected number of concurrent users, and whether they need a script interpreter; applying to each kind of resource the optimization strategy that best exploits its characteristics can improve a site's performance enormously. For example: deploy images on a dedicated server under its own new domain, and serve static pages with an epoll-based model so throughput stays stable under high concurrency.

3. Database performance optimization and scaling

On the database side, web server software mainly optimizes by reducing the number of database accesses, which in practice means caching in its various forms. Query performance can also be improved within the database itself, a database-tuning topic this article does not cover. Beyond that, the database tier can be scaled out, and its serving capacity raised, through master-slave replication, read-write splitting, reverse proxies, and separating write operations.

4.…

Are there any good examples or tutorials about epoll with UDP?

Submitted by ↘锁芯ラ on 2019-12-09 13:54:15
Question: I have been working on a Linux server using epoll and have almost finished it, and then I realized that clients will send packets using UDP :( Could you please point me to any good tutorials or examples of using epoll with UDP? Thanks in advance.

Answer 1: The man pages were helpful for me, and there's a good code example in there. http://kernel.org/doc/man-pages/online/pages/man4/epoll.4.html http://kernel.org/doc/man-pages/online/pages/man2/epoll_create1.2.html If you're really insisting on a tutorial, I'd recommend…
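Since the answer stops short of code, here is a minimal sketch of the usual epoll-with-UDP shape (my own illustration, not from the answer, using an example port of 9000 and no error handling): one bound datagram socket, level-triggered, one recvfrom per readiness event.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>

    int main(void) {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(9000),
                                    .sin_addr.s_addr = htonl(INADDR_ANY) };
        bind(sock, (struct sockaddr *)&addr, sizeof addr);

        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = sock };
        epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev);

        for (;;) {
            struct epoll_event out;
            if (epoll_wait(epfd, &out, 1, -1) < 1) continue;
            char buf[2048];
            struct sockaddr_in peer;
            socklen_t len = sizeof peer;
            /* UDP: each read returns one whole datagram; peer identifies the sender. */
            ssize_t n = recvfrom(sock, buf, sizeof buf, 0,
                                 (struct sockaddr *)&peer, &len);
            if (n >= 0)
                sendto(sock, buf, n, 0, (struct sockaddr *)&peer, len); /* echo back */
        }
    }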