worker

How do I create Celery queues at runtime so that tasks sent to those queues get picked up by workers?

Submitted on 2019-12-21 05:57:20
Question: I'm using Django 1.4, Celery 3.0 and RabbitMQ. To describe the problem: I have many content networks in the system, and I want a queue for processing the tasks related to each of these networks. However, content is created on the fly while the system is live, so I need to create queues on the fly and have the existing workers start picking up from them. I've tried scheduling tasks in the following way (where content is a Django model instance): queue_name = 'content.{}'.format(content.pk) # E.g. queue
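A minimal sketch of one way this is often wired up, assuming a Celery app object named app and a task named process_content (both names are illustrative, not from the question): send the task to a per-content queue and ask the already-running workers to start consuming from it.

# Illustrative sketch only: `app` and `process_content` are assumed names.
from myproject.celery import app             # hypothetical module holding the Celery app
from myproject.tasks import process_content  # hypothetical task

def dispatch(content):
    queue_name = 'content.{}'.format(content.pk)
    # Ask every running worker to start consuming from the new queue;
    # with Celery's default settings the queue is declared on demand.
    app.control.add_consumer(queue_name, reply=True)
    # Route the task to that queue.
    process_content.apply_async(args=[content.pk], queue=queue_name)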

How to share a worker between two different applications on Heroku?

Submitted on 2019-12-21 02:19:49
Question: I have two separate applications running on Heroku and pointing to the same database; the first is responsible for the user interface and the second for the admin interface. I am using Sidekiq with Redis for background job processing. I have added one worker, and I am able to share the Redis server by setting an environment variable pointing to the same Redis add-on. Now I would like to share the worker too, because adding an extra worker would double the cost. Is this even possible? Answer 1:

What's the best practice for HA Gearman job servers?

Submitted on 2019-12-20 19:38:14
Question: Gearman's main page mentions running multiple job servers so that, if a job server dies, clients can pick up a new one. Given the statement and diagram below, it seems that the job servers do not communicate with each other. Our question is: what happens to the jobs that were queued in the job server that died? What is the best practice for making these servers highly available, so that jobs aren't interrupted by a failure? You are able to run multiple job servers

Java Performance: Processes vs Threads

Submitted on 2019-12-20 12:34:27
Question: I am implementing a worker pool in Java. This is essentially a whole load of objects that pick up chunks of data, process the data and then store the result. Because of I/O latency there will be significantly more workers than processor cores. The server is dedicated to this task and I want to wring the maximum performance out of the hardware (but no, I don't want to implement it in C++). The simplest implementation would be to have a single Java process which creates and monitors a

Airflow Worker Daemon exits for no visible reason

Submitted on 2019-12-20 01:43:09
Question: I have Airflow 1.9 running inside a virtual environment, set up with Celery and Redis, and it works well. However, I wanted to daemonize the setup and used the instructions here. That works well for the webserver, scheduler and Flower, but fails for the worker, which is, of course, the core of it all. My airflow-worker.service file looks like this: [Unit] Description=Airflow celery worker daemon After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service Wants

Is it feasible to run multiple processes on a Heroku dyno?

Submitted on 2019-12-19 05:25:13
Question: I am aware of the memory limitations of the Heroku platform, and I know that it is far more scalable to separate an app into web and worker dynos. However, I would still like to run asynchronous tasks alongside the web process for testing purposes. Dynos are costly and I would like to prototype on the free instance that Heroku provides. Are there any issues with spawning a new job as a process or subprocess in the same dyno as the web process? Answer 1: On the newer Cedar stack, there are no issues
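As a rough illustration of the pattern being asked about (the question itself names no language), here is a minimal Python sketch of spawning a job as a subprocess from the web process on the same dyno; the script name run_job.py is made up for the example.

# Hypothetical sketch: launch a background job from the web process on the same dyno.
import subprocess

def start_background_job(job_id):
    # The child inherits the dyno's environment and its stdout/stderr,
    # so its output appears in the same Heroku logs as the web process.
    proc = subprocess.Popen(["python", "run_job.py", str(job_id)])
    return proc.pid  # the web request can return immediately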

Differentiate driver code and worker code in Apache Spark

Submitted on 2019-12-19 01:37:34
Question: In an Apache Spark program, how do we know which part of the code will execute in the driver program and which part will execute on the worker nodes? Regards. Answer 1: It is actually pretty simple. Everything that happens inside the closure created by a transformation happens on a worker. That means anything passed inside map(...), filter(...), mapPartitions(...), groupBy*(...) or aggregateBy*(...) is executed on the workers. This includes reading data from persistent storage or remote sources
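A short PySpark sketch of the split the answer describes (all names here, such as parse_line and data.txt, are illustrative, not from the original):

# Illustrative only: driver vs. worker execution in PySpark.
from pyspark import SparkContext

sc = SparkContext("local[*]", "driver-vs-worker")   # created on the driver

def parse_line(line):
    # Shipped inside the closure of map(), so it runs on the workers.
    return len(line)

lengths = sc.textFile("data.txt").map(parse_line)   # transformation: executed on workers
total = lengths.reduce(lambda a, b: a + b)          # action: result is brought back to the driver
print(total)                                        # plain Python, runs on the driver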

How to make each unicorn worker of my Rails application log to a different file?

Submitted on 2019-12-18 10:52:40
Question: How can I make each unicorn worker of my Rails application write to a different log file? The why: the problem of mixed log files. In its default configuration, Rails writes its log messages to a single log file: log/<environment>.log. Unicorn workers all write to that same log file at once, so the messages can get mixed up. This is a problem when request-log-analyzer parses the log file. An example: Processing Controller1#action1 ... Processing Controller2#action2 ... Completed in 100ms..

What happens to a QThread when the application is closed without a proper wait() call?

Submitted on 2019-12-18 06:13:37
Question: In the example below (inside a Qt GUI application) a new thread is started (with an event loop in which I want some work to be done): void doWork() { QThread* workerThread = new QThread(); Worker* worker = new Worker(); worker->moveToThread(workerThread); connect(workerThread, SIGNAL(started()), worker, SLOT(startWork())); connect(worker, SIGNAL(finished()), workerThread, SLOT(quit())); connect(workerThread, SIGNAL(finished()), worker, SLOT(deleteLater())); connect(workerThread, SIGNAL(finished

TCP socket communication between processes on a Heroku worker dyno

Submitted on 2019-12-18 04:08:38
Question: I'd like to know how to communicate between processes on a Heroku worker dyno. We want a Resque worker to read off a queue and send the data to another process running on the same dyno. The "other process" is an off-the-shelf piece of software that usually uses TCP sockets (port xyz) to listen for commands. It is set up to run as a background process before the Resque worker starts. However, when we try to connect locally to that TCP socket, we get nowhere. Our Rake task for setting up the
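To illustrate the intended pattern (sketched in Python here, although the question uses Ruby/Resque), the worker-side code connects to the co-located process over the loopback interface; the port number 9000 is a hypothetical stand-in for the unspecified "port xyz".

# Illustrative sketch: talk to the co-located background process over loopback TCP.
import socket

def send_command(command: bytes, port: int = 9000) -> bytes:
    # Both processes run on the same dyno, so 127.0.0.1 is the address to use;
    # worker dynos receive no inbound routing from a public hostname.
    with socket.create_connection(("127.0.0.1", port), timeout=5) as sock:
        sock.sendall(command + b"\n")
        return sock.recv(4096)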