node-cluster

Node.js on multi-core machines

Submitted by 时光毁灭记忆、已成空白 on 2019-12-16 19:58:09
Question: Node.js looks interesting, BUT I must be missing something - isn't Node.js tuned to run on only a single process and thread? Then how does it scale for multi-core CPUs and multi-CPU servers? After all, it is all great to make a single-threaded server as fast as possible, but for high loads I would want to use several CPUs. And the same goes for making applications faster - it seems the way today is to use multiple CPUs and parallelize the tasks. How does Node.js fit into this picture? Is its idea to somehow
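A minimal sketch of the usual answer, using only the built-in cluster module; the port number and the respawn-on-exit behaviour here are illustrative, not from the question:

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU core; all workers share the same listening socket.
  for (let i = 0; i < numCPUs; i++) cluster.fork();

  // Replace any worker that dies so the service keeps using every core.
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, forking a replacement`);
    cluster.fork();
  });
} else {
  http.createServer((req, res) => {
    res.end(`handled by pid ${process.pid}\n`);
  }).listen(8000);
}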

Is there a way to send a message from a worker to other workers in node.js?

Submitted by 不想你离开。 on 2019-12-12 02:42:41
Question: I can use process.send to send a message from a worker to the master. I can use the following code to send a message from the master to each worker: for (var id in cluster.workers) { cluster.workers[id].send({command: 'doSomething'}); } To send a message from a worker to other workers, I have to send a message to the master and then have it forward the message. This results in the original sender also receiving the message, which is something I'd like to avoid, but I can live with it! I also
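One way to get that forwarding, and to skip echoing the message back to the original sender, is to tag each broadcast with the sender's id and have the master relay it. A minimal sketch; the message shape ({ type, payload }) is an assumption, not something from the question:

const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();

  // The master relays broadcasts to every worker except the one that sent it.
  cluster.on('message', (sender, msg) => {
    if (!msg || msg.type !== 'broadcast') return;
    for (const id in cluster.workers) {
      if (id !== String(sender.id)) cluster.workers[id].send(msg);
    }
  });
} else {
  process.on('message', (msg) => {
    console.log(`worker ${cluster.worker.id} received`, msg.payload);
  });
  // Ask the master to relay this to the other workers.
  process.send({ type: 'broadcast', payload: `hello from worker ${cluster.worker.id}` });
}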

Using Node cluster module with SailsJs: EADDRINUSE

Submitted by 拟墨画扇 on 2019-12-08 02:43:16
Question: I have a SailsJs (http://sailsjs.org/) based application that has to deal with some CPU-intensive tasks. In short, I want to use the cluster (https://nodejs.org/api/cluster.html) module to delegate the processing of these tasks to worker processes so that the main event loop of the Sails application isn't blocked (and so it can respond to requests as normal). When creating a worker, I'm getting an EADDRINUSE error because Sails tries to run again and bind to the same port. Sample code: //
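One common way around the EADDRINUSE is to make sure only one process ever lifts Sails (and therefore binds the port), and to use the forked workers purely for the CPU-heavy jobs. A rough sketch under that assumption; sails.lift() is the real Sails API, while doHeavyWork is a hypothetical placeholder:

const cluster = require('cluster');

if (cluster.isMaster) {
  // Only the master lifts Sails, so the HTTP port is bound exactly once.
  const sails = require('sails');
  sails.lift();

  // Fork a worker that never touches the HTTP port; it only crunches jobs.
  const worker = cluster.fork();
  worker.on('message', (result) => console.log('worker finished:', result));
  // A controller could hand work off with: worker.send({ task: 'resize', data: ... })
} else {
  process.on('message', (job) => {
    const result = doHeavyWork(job); // doHeavyWork: hypothetical CPU-bound function
    process.send(result);
  });
}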

Clustering Node JS in Heavy Traffic Production Environment

Submitted by 僤鯓⒐⒋嵵緔 on 2019-12-06 11:27:55

Question: I have a web service handling HTTP requests that redirect to specific URLs. Right now the CPU is hammered at about 5 million hits per day, but I need to scale it up to handle 20 million plus. This is a production environment, so I am a little apprehensive about the new Node Cluster method because it is still listed as experimental. I need suggestions on how to cluster Node to handle the traffic on a Linux server. Any thoughts?

Answer 1: 5 million per day is equivalent to 57.87 per second, and 25 million is 289.4 per second. These numbers are not too much for a single server in your case. If you only want to
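For reference, those per-second figures are simply the daily totals divided by the 86,400 seconds in a day: 5,000,000 / 86,400 ≈ 57.9 requests per second and 25,000,000 / 86,400 ≈ 289.4 requests per second; actual peak load will of course be higher than the daily average.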

How to keep variables shared by all node processes in a node cluster?

Submitted by 岁酱吖の on 2019-12-03 08:40:52

Question: It seems like all the node worker processes run as if each is executing a new copy of the same application, but I would like to keep some variables that are shared by all node workers (child processes) in a node cluster. Is there a simple way to do this?

Answer 1: All worker processes are indeed new copies of your application. Each worker is a full-featured process created with child_process.fork. So no, they don't share variables, and it's probably best this way. If you want to share information between worker processes (typically sessions) you should look into storing that information in a database
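The message-passing alternative to a shared variable is to keep the value in the master and let workers read or update it over IPC. A minimal sketch; the command names are made up for illustration:

const cluster = require('cluster');

if (cluster.isMaster) {
  let sharedCounter = 0; // lives only in the master process
  cluster.fork();
  cluster.fork();

  cluster.on('message', (worker, msg) => {
    if (msg.cmd === 'increment') sharedCounter += 1;
    if (msg.cmd === 'read') worker.send({ value: sharedCounter });
  });
} else {
  process.send({ cmd: 'increment' });
  process.send({ cmd: 'read' });
  process.on('message', (msg) => {
    console.log(`worker ${process.pid} sees counter =`, msg.value);
  });
}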

Puppeteer - Protocol error (Page.navigate): Target closed

Submitted by 微笑、不失礼 on 2019-11-27 16:47:45
Question: As you can see in the sample code below, I'm using Puppeteer with a cluster of workers in Node to serve multiple requests for website screenshots of a given URL:

const cluster = require('cluster');
const express = require('express');
const bodyParser = require('body-parser');
const puppeteer = require('puppeteer');

async function getScreenshot(domain) {
  let screenshot;
  const browser = await puppeteer.launch({
    args: ['--no-sandbox', '--disable-setuid-sandbox', '--disable-dev-shm-usage']
  });
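The snippet above is cut off, so the asker's full handler is unknown. Purely as a hedged sketch of the usual shape of such a helper (not the actual code from the question), it wraps navigation in try/finally so the browser is always closed and awaits every page call so nothing races with browser.close(), a common source of "Target closed" errors:

// assumes the require('puppeteer') from the question above
async function getScreenshot(domain) {
  const browser = await puppeteer.launch({
    args: ['--no-sandbox', '--disable-setuid-sandbox', '--disable-dev-shm-usage']
  });
  try {
    const page = await browser.newPage();
    await page.goto(`http://${domain}`, { waitUntil: 'networkidle2' });
    return await page.screenshot({ type: 'png' });
  } finally {
    await browser.close(); // always close, even if navigation throws
  }
}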