network-programming

iOS re-routing requests w/ GCDWebServer (not redirecting)

坚强是说给别人听的谎言 submitted on 2019-12-24 18:51:14
Question: I want to create a server on iOS with GCDWebServer which will accept requests to localhost and then pull the data from another URL (a video file) and stream that data into the response. I intend to use plain NSURLConnection, and in the didReceiveData callback of the NSURLConnection delegate I want to pass this data to the GCDWebServerResponse. I am having a hard time figuring out how I can keep the connection from a request open, so that I can initiate another request with NSURLConnection, and

IO Completion ports: separate thread pool to process the dequeued packets?

爷，独闯天下 submitted on 2019-12-24 18:41:41
Question: NOTE: I have added the C++ tag to this because a) the code is C++ and b) people using C++ may well have used I/O completion ports. So please don't shout. I am playing with I/O completion ports and have eventually fully understood (and tested, to prove it), both with help from RbMm, the meaning of the NumberOfConcurrentThreads parameter of CreateIoCompletionPort(). I have the following small program which creates 10 threads, all waiting on the completion port. I tell my completion port to
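The pattern in the question's title can be sketched in Python (an analog, not the original C++): one queue plays the role of the completion port, a small number of "dequeue" threads pull packets off it, and a separate pool does the actual processing so the dequeue threads stay responsive.

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

completion_queue = queue.Queue()    # stands in for the IO completion port
results = []
results_lock = threading.Lock()
workers = ThreadPoolExecutor(max_workers=4)  # separate processing pool

def process(packet):
    with results_lock:
        results.append(packet * 2)  # placeholder "processing"

def dequeue_loop():
    while True:
        packet = completion_queue.get()
        if packet is None:          # sentinel: shut down this dequeuer
            break
        workers.submit(process, packet)  # hand off, go back to dequeuing

dequeuers = [threading.Thread(target=dequeue_loop) for _ in range(2)]
for t in dequeuers:
    t.start()
for p in range(10):
    completion_queue.put(p)         # simulate 10 completion packets
for _ in dequeuers:
    completion_queue.put(None)
for t in dequeuers:
    t.join()
workers.shutdown(wait=True)         # wait until all processing is done
```

The design choice this illustrates: keeping the dequeue side thin means slow per-packet work never starves the queue, which is the usual argument for a separate processing pool.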

C# - ping exceeds timeout but reports success

不问归期 submitted on 2019-12-24 16:39:03
Question: Problem: in my code I set a ping timeout of 100 ms with new Ping().Send(item._deviceIP, 100, new byte[1]), which pings and replies correctly, but the IPStatus.TimeExceeded check is "faulty" and reports a success even when the RTT is > 100 ms. What should happen: when receiving a PingReply, if the IPStatus is: TimeExceeded (> 100 ms), _devicePing should have its color set to red; Success (<= 100 ms), _devicePing should have its color set to green; for any other status, an appropriate color is set. What happens: Any
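A sketch of the usual workaround (an assumption, not the asker's code): even when a reply comes back with a success status, compare the measured round-trip time against your own threshold and classify the result yourself, rather than relying on the timeout parameter to produce a TimeExceeded status.

```python
def classify(status, rtt_ms, timeout_ms=100):
    """Map a ping result to a display color, enforcing the RTT limit manually."""
    if status == "success" and rtt_ms <= timeout_ms:
        return "green"
    if status in ("success", "time_exceeded"):
        return "red"    # the host replied, but slower than the allowed RTT
    return "gray"       # no reply / unreachable / other error
```

Usage: `classify("success", 250)` returns `"red"` even though the underlying ping reported success, which matches the behavior the asker wanted.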

How to write a minimal-overhead proxy to localhost:3389 in Haskell?

Deadly submitted on 2019-12-24 15:27:49
Question: Update: the question now contains the final edited answer! I now use the following (final answer): module Main where import Control.Concurrent (forkIO) import Control.Monad (when,forever,void) import Network (PortID(PortNumber),listenOn) import Network.Socket hiding (listen,recv,send) import Network.Socket.ByteString (recv,sendAll) import qualified Data.ByteString as B import System type Host = String type Port = PortNumber main :: IO () main = do [lp,h,p] <- getArgs start (port lp) h (port p)
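The same proxy shape can be sketched in Python (an analog of the Haskell code, not a translation of it): accept on a local port, connect to the target, and run one byte-shuttling loop per direction.

```python
import socket
import threading

def pipe(src, dst):
    """Forward bytes from src to dst until src reaches end-of-stream."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)  # propagate the half-close
        except OSError:
            pass

def proxy(listen_port, host, port):
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", listen_port))
    server.listen(5)
    while True:
        client, _ = server.accept()
        target = socket.create_connection((host, port))
        # One thread per direction, as in the Haskell forkIO version.
        threading.Thread(target=pipe, args=(client, target),
                         daemon=True).start()
        threading.Thread(target=pipe, args=(target, client),
                         daemon=True).start()

# To proxy RDP: proxy(3390, "some-host", 3389)  -- hypothetical endpoints
```

Propagating the half-close via `shutdown` matters for protocols like RDP that rely on clean connection teardown; simply closing both sockets when one side finishes can cut off in-flight data.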

Application hangs on GetRequestStream() after first request

烈酒焚心 submitted on 2019-12-24 13:25:52
Question: I've googled and searched here. Some suggest that streams were not being closed; others suggested that it's a connection limit, with ServicePointManager.DefaultConnectionLimit being set to 1. However, none of these seem to work. My problem is, when I use this for the first time, it works: using (var stream = request.GetRequestStream()) { var data = Encoding.UTF8.GetBytes(post.ToString()); stream.Write(data, 0, data.Length); } When I use it a second time, it freezes. Yes, I'm disposing my stream

How many TCP connections on different ports can a server handle?

怎甘沉沦 submitted on 2019-12-24 13:07:25
Question: I am designing a server-client app in C#. The client connects and communicates with the server through a TCP socket. On the server side I am using the Socket.Accept() method in order to handle new connections from clients. When a client connects, the server uses a random port in order to communicate with the client. So my question is: how many clients can the server receive in this kind of setup? Is there another form that I should use in order to handle lots of clients? Answer 1: on the server side I am
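A small Python check of what actually happens here (this addresses the "random port" premise in the question): the accepted socket on the server side uses the same local port as the listener. Connections are told apart by the (source IP, source port, destination IP, destination port) 4-tuple, so the practical limit is resources like memory and file descriptors, not a pool of 65535 ports.

```python
import socket

def check_ports():
    """Accept one loopback connection and report the ports involved."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))       # OS picks a free listening port
    listener.listen(1)
    listen_port = listener.getsockname()[1]

    client = socket.create_connection(("127.0.0.1", listen_port))
    conn, peer = listener.accept()
    accepted_port = conn.getsockname()[1]  # server-side port of this conn

    client.close()
    conn.close()
    listener.close()
    return listen_port, accepted_port, peer

lp, ap, peer = check_ports()
assert ap == lp   # the accepted socket reuses the listening port
# peer is the client's endpoint, e.g. ('127.0.0.1', <ephemeral port>)
```

It is the client, not the server, that gets a fresh ephemeral port per connection; the server keeps answering on the one port it listens on.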

Exposing Docker Container Ports

落花浮王杯 submitted on 2019-12-24 12:17:19
Question: I understand that to expose ports in a Docker container, you can use the -p flag (e.g. -p 1-100:1-100). But is there a nice way to expose a large percentage of possible ports from the container to the host machine? For instance, if I am running a router of sorts in a container that lives in a VM, and I would like to expose all ports in the container from 32768 up to 65535, is there a nice way to do this? As it stands, I've tried using the -p flag and it complains about memory allocation
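A sketch of the two usual options (the image name `myimage` is a hypothetical placeholder): publishing a large `-p` range allocates one forwarding entry per port, which is what tends to fail for ranges of tens of thousands of ports, so for router-like workloads the host network mode is the common alternative.

```shell
# 1) Publish the range anyway -- workable for modest ranges:
docker run -p 32768-60999:32768-60999 myimage

# 2) Skip per-port publishing entirely and share the host's network stack
#    (Linux only; the container then sees the host's ports directly,
#    and -p is ignored in this mode):
docker run --network host myimage
```

Host networking trades isolation for scale: no per-port proxying overhead, but the container can bind any host port.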

Unix TCP servers and UDP Servers

自作多情 submitted on 2019-12-24 10:46:58
Question: Why are TCP servers mostly designed so that whenever a connection is accepted, a new process is invoked to handle it, but in the case of UDP servers there is mostly only a single process that handles all client requests? Answer 1: The main difference between TCP and UDP is, as stated before, that UDP is connectionless. A program using UDP has only one socket where it receives messages, so there's no problem if you just block and wait for a message. When using TCP you get one socket for
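The two shapes the answer describes can be sketched in Python (illustrative, not the answer's own code): a UDP server is one socket and one loop, because every client's datagram arrives on that same socket, while a TCP server's accept() yields a new socket per client, which is why servers spawn a process or thread per connection.

```python
import socket
import threading

def udp_server(sock):
    # Single loop, single socket: recvfrom tells us who each message
    # came from, so one process can serve every client.
    while True:
        data, addr = sock.recvfrom(4096)
        if data == b"quit":
            break
        sock.sendto(data.upper(), addr)

def handle_tcp_client(conn):
    # Per-connection work lives on the per-connection socket.
    with conn:
        data = conn.recv(4096)
        conn.sendall(data.upper())

def tcp_server(listener):
    while True:
        conn, _ = listener.accept()   # a fresh socket per client
        threading.Thread(target=handle_tcp_client, args=(conn,),
                         daemon=True).start()
```

(Threads stand in for the forked processes in the question; the structural point, one loop versus one handler per accepted socket, is the same.)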

Asynchronous client broadcast receiver

三世轮回 submitted on 2019-12-24 09:58:49
Question: I would appreciate any help/feedback on this issue. I'm developing an asynchronous socket connection in C#. I would like to set up a broadcast client/receiver such that it broadcasts to local network servers and then receives the messages from those servers. The main issue is that first I want to broadcast from one client to different servers and then retrieve the IP addresses of all the servers. Here is part of the client code; the server side works fine. public void

shared_ptr and logical pointer ownership use-case in a complex design

时光毁灭记忆、已成空白 submitted on 2019-12-24 09:54:06
Question: I have an object A that contains a shared resource (a shared_ptr) r; A is the creator/owner of r. Upon construction, the object "registers" its r with another object B. Object B holds a reference to A's r in a std::set. Object A is used as a boost::asio handler. I need to unregister r from B when A is being destructed and when A holds unique access to r, since A, as r's creator, is responsible for destroying it. Note: A is copy-constructed multiple times when it is used as a boost::asio
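One way to frame the ownership problem, sketched in Python rather than C++ (an analog, not the asker's design): if the registry B holds only weak references to the resource, then when the last owning A drops its strong reference, the resource disappears from the registry automatically, with no manual unregister call in a destructor. (This relies on CPython's immediate reference-counting collection; the C++ parallel would be B holding weak_ptr rather than shared_ptr.)

```python
import weakref

class Resource:
    def __init__(self, name):
        self.name = name

class Registry:
    """Plays the role of B: tracks live resources without owning them."""
    def __init__(self):
        self._items = weakref.WeakSet()   # weak analog of B's std::set
    def register(self, res):
        self._items.add(res)
    def live(self):
        return {r.name for r in self._items}

class Owner:
    """Plays the role of A: creates and strongly owns the resource r."""
    def __init__(self, registry, name):
        self.resource = Resource(name)    # the shared resource "r"
        registry.register(self.resource)

registry = Registry()
a = Owner(registry, "r1")
assert registry.live() == {"r1"}
del a                                     # last strong reference to r gone
assert registry.live() == set()           # B no longer sees r
```

Because copies of A would each hold the same strong reference, the resource only leaves the registry when the last copy dies, which matches the "unregister only when A holds unique access" requirement.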