delayed-job

Run multiple delayed_job instances per RAILS_ENV

Submitted by Deadly on 2019-12-04 06:09:41
I'm working on a Rails app with multiple RAILS_ENVs:

    env_name1:
      adapter: mysql
      username: root
      password:
      host: localhost
      database: db_name_1
    env_name2:
      adapter: mysql
      username: root
      password:
      host: localhost
      database: db_name_2
    ...

I'm using the delayed_job (2.0.5) plugin to manage asynchronous and background work, and I would like to start one delayed_job daemon per RAILS_ENV:

    RAILS_ENV=env_name1 script/delayed_job start
    RAILS_ENV=env_name2 script/delayed_job start
    ...

I noticed that I can only run one delayed_job instance; for the second one I get the error "ERROR: there is already one or more instance(s) of the
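The collision happens because both daemons try to write the same pid file under tmp/pids. A minimal sketch of one workaround, assuming your delayed_job command script supports the --pid-dir option (check script/delayed_job --help for your 2.0.x version); the per-environment directories below are made up and must exist before starting:

    mkdir -p tmp/pids/env_name1 tmp/pids/env_name2
    RAILS_ENV=env_name1 script/delayed_job --pid-dir=tmp/pids/env_name1 start
    RAILS_ENV=env_name2 script/delayed_job --pid-dir=tmp/pids/env_name2 start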

Monitoring multiple delayed_job workers with monit

Submitted by 送分小仙女□ on 2019-12-04 05:18:06
I have read a lot about monitoring delayed_job with monit. The implementation is pretty easy and straightforward. But when one worker is not enough, how do I set up monit to ensure that, let's say, 10 workers are constantly running? You can just replicate the same config you have for the first worker N times. Suppose you have 5 workers; you'll monitor all of them with the following:

    check process delayed_job.0 with pidfile /path/to/shared/pids/delayed_job.0.pid
      start program = "/bin/su -c '/usr/bin/env RAILS_ENV=production /path/to/current/script/delayed_job -n 5 start' - user"
      stop program = "
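For completeness, a hedged sketch of what the replicated blocks can look like, assuming each worker is started with its own -i identifier so it writes its own pid file (supported by recent delayed_job command scripts; the paths and the "user" account are the question's placeholders). Repeat the block up to delayed_job.9 for ten workers:

    check process delayed_job.0 with pidfile /path/to/shared/pids/delayed_job.0.pid
      start program = "/bin/su -c '/usr/bin/env RAILS_ENV=production /path/to/current/script/delayed_job -i 0 start' - user"
      stop program = "/bin/su -c '/usr/bin/env RAILS_ENV=production /path/to/current/script/delayed_job -i 0 stop' - user"

    check process delayed_job.1 with pidfile /path/to/shared/pids/delayed_job.1.pid
      start program = "/bin/su -c '/usr/bin/env RAILS_ENV=production /path/to/current/script/delayed_job -i 1 start' - user"
      stop program = "/bin/su -c '/usr/bin/env RAILS_ENV=production /path/to/current/script/delayed_job -i 1 stop' - user"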

Bundler with Capistrano doesn't generate a binary for DelayedJob

Submitted by 我与影子孤独终老i on 2019-12-04 05:01:41
I'm using Bundler for a Rails app deployed with Capistrano. I'm trying to add the DelayedJob gem, but the bin/delayed_job file is missing from the remote server after I deploy, even though it exists on my local machine. I tried creating it manually with bundle binstubs delayed_job, but that fails with: "There are no executables for the gem delayed_job." What am I missing here? The gems in question are Bundler 1.3.5, Capistrano 3.1.0, DelayedJob 4.0.0, and Rails 4.0.2. EDIT: Here is my full Gemfile: http://pastebin.com/WuE3eJrj I think you need to include the gem "daemons", according to the documentation: To do
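For reference, bin/delayed_job is not an executable shipped inside the gem (which is why binstubs refuses); it is a small wrapper you generate and commit, and it needs the daemons gem at runtime. A rough sketch of what that usually looks like; the wrapper below mirrors the typical output of rails generate delayed_job, so treat it as an approximation and regenerate rather than hand-copy if in doubt:

    # Gemfile
    gem 'delayed_job', '~> 4.0'
    gem 'daemons'   # required at runtime by bin/delayed_job

    #!/usr/bin/env ruby
    # bin/delayed_job -- check this file into git so Capistrano deploys it
    require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
    require 'delayed/command'
    Delayed::Command.new(ARGV).daemonize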

Rails + foreman + worker hangs server

Submitted by 偶尔善良 on 2019-12-04 04:31:29
On my local machine I'm trying to start my Rails app and a delayed_job worker using Foreman. My Procfile looks like this:

    web: bundle exec rails server -p $PORT
    worker: bundle exec rake jobs:work

When I start Foreman, only the first two web requests get executed; with the third request the server hangs. The first request is output in the console, the second isn't. If I leave out the worker in my Procfile, the server runs just fine and outputs everything to the console. Also, when I start the Rails server and the worker without Foreman, everything works fine. So it looks like there's
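One thing worth ruling out first is output buffering: under Foreman, STDOUT is a pipe rather than a TTY, so Rails output can be swallowed or delayed, which makes the missing console lines hard to interpret. A hedged first step (this is the common Foreman logging fix; it may or may not be related to the actual hang):

    # config/environments/development.rb
    # Flush STDOUT after every write so Foreman shows each log line immediately.
    $stdout.sync = true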

How can I force delayed_job to use a specific db connection?

Submitted by 江枫思渺然 on 2019-12-04 03:32:45
I have a Rails 3 application that uses different databases depending on the subdomain. I do this by calling "establish_connection" in the ApplicationController. Now I'm trying to use the delayed_job gem to do some background processing, however it uses whatever database connection is active at that moment, so it connects to the subdomain database. I'd like to force it to use the "common" database. I've done this for some models by calling "establish_connection" in the model, like this:

    class Customer < ActiveRecord::Base
      establish_connection ActiveRecord::Base.configurations["#{Rails.env}"]
      ...
    end
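Since the ActiveRecord backend stores jobs through the Delayed::Job model, the same establish_connection trick can be pointed at that class so the delayed_jobs table always lives in the common database. A minimal sketch, assuming the ActiveRecord backend; "common_#{Rails.env}" is a hypothetical database.yml key standing in for whichever entry points at your shared database:

    # config/initializers/delayed_job.rb
    # Pin the delayed_jobs table to the shared database, regardless of which
    # subdomain-specific connection ApplicationController establishes later.
    Delayed::Job.establish_connection(
      ActiveRecord::Base.configurations["common_#{Rails.env}"]
    )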

Error reporting when sending emails with delayed_job

Submitted by 对着背影说爱祢 on 2019-12-04 03:23:39
What's the proper way to get error reports from mailing delayed jobs when using a tool like Airbrake or ExceptionNotifier? I tried creating my own delayed job class, but the mail object created by Mailer.welcome() (or similar) is not serialized correctly. I also tried adding an error(job, exception) method to the PerformableMailer and PerformableMethod classes, but I got more errors, generally related to serialization I believe. I tried both psych and syck for the serialization. Updated Solution: Overall the solution is quite simple. If you are doing delayed_job on an Object (like MyClass
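Since delayed_job calls an error(job, exception) hook on the payload object when a run raises, one workable pattern is to wrap the delivery in a small custom job and report from that hook rather than patching PerformableMailer. A hedged sketch; WelcomeEmailJob is a made-up name, Mailer.welcome comes from the question, and Airbrake.notify assumes the Airbrake gem is configured:

    # A custom job that owns both the delivery and the error reporting.
    class WelcomeEmailJob < Struct.new(:user_id)
      def perform
        user = User.find(user_id)
        Mailer.welcome(user).deliver
      end

      # delayed_job invokes this hook when perform raises.
      def error(job, exception)
        Airbrake.notify(exception)
      end
    end

    Delayed::Job.enqueue WelcomeEmailJob.new(user.id)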

Running code after Rails is done loading?

Submitted by *爱你&永不变心* on 2019-12-04 02:35:10
I have a periodic task that needs to execute once a minute (using delayed_job). I would like Rails to automatically queue it up as soon as it has finished loading, if one such task isn't already present in the system. What is a good place for me to run some code right at the end of the entire Rails boot flow? Someone suggested config/environments/development.rb (or another environment), but delayed_job gives me ActiveRecord issues when I queue up jobs from there. I consulted http://guides.rubyonrails.org/initialization.html, and there doesn't seem to be a clear location for that kind of code
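One candidate spot is an after_initialize block registered from an initializer, which runs after the framework, gems, and application code are all loaded; guarding on the delayed_jobs table avoids the ActiveRecord errors you can hit when rake tasks boot the app before migrations. A hedged sketch; MinutelyJob and the handler LIKE check are illustrative, not a delayed_job API:

    # config/initializers/enqueue_minutely_job.rb
    Rails.application.config.after_initialize do
      # Skip when the app boots without a usable jobs table (e.g. during db:create).
      if ActiveRecord::Base.connection.table_exists?("delayed_jobs")
        queued = Delayed::Job.where("handler LIKE ?", "%MinutelyJob%").exists?
        Delayed::Job.enqueue(MinutelyJob.new) unless queued
      end
    end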

Starting multiple DelayedJob workers w/ specific queues via Capistrano tasks

Submitted by 感情迁移 on 2019-12-04 00:32:08
I'm looking into using queues with delayed_job. I've found this page, which outlines various ways of starting workers, however I'd like to keep my current Capistrano method:

    set :delayed_job_args, "-n 2 -p ecv2.production"
    after "deploy:start", "delayed_job:start"
    ...

I was wondering how I could modify delayed_job_args to spawn one worker with a specific queue and one worker for every other job. So far, all I have is overriding each task like so:

    namespace :delayed_job do
      task :restart, :roles => :app do
        run "cd #{current_path}; RAILS_ENV=#{rails_env} script/delayed_job -p ecv2
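One way to get there, assuming a delayed_job version whose command script supports --queue and the -i identifier flag (check script/delayed_job --help), is to drop the single delayed_job_args setting and issue two start commands from the overridden task. The queue name "tracking" and the identifiers below are placeholders:

    namespace :delayed_job do
      desc "One worker for the 'tracking' queue, one worker for other jobs"
      task :start, :roles => :app do
        run "cd #{current_path}; RAILS_ENV=#{rails_env} script/delayed_job -p ecv2.production -i tracking --queue=tracking start"
        # Note: a worker started without --queue works jobs from every queue;
        # to strictly exclude 'tracking', list the remaining queue names instead.
        run "cd #{current_path}; RAILS_ENV=#{rails_env} script/delayed_job -p ecv2.production -i default start"
      end
    end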

Can a Heroku app add/remove dynos or workers to/from itself?

Submitted by 微笑、不失礼 on 2019-12-03 21:24:46
Heroku allows you to add and remove dynos and workers on the fly and charges you per second that each is used. Is it possible to set up my app so that it can add/remove dynos and workers from itself depending on the load it's under? This paragraph on Heroku.com mentions an API, but I can't find out much more about it. Yes. What you want is something like this:

    require 'heroku'
    Heroku::Client.new("username", "password").set_workers("appname", num_workers)

Source: https://stackoverflow.com/questions/3039639/can-a-heroku-app-add-remove-dynos-or-workers-to-from-itself
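The Heroku::Client shown in the answer is the old, now-retired API client. A hedged modern equivalent using the platform-api gem, assuming an OAuth token with permission to scale the app; "appname" and the dyno count are placeholders:

    require 'platform-api'

    heroku = PlatformAPI.connect_oauth(ENV['HEROKU_OAUTH_TOKEN'])
    num_workers = 2  # desired worker dyno count
    # Scale the app's worker formation to num_workers dynos.
    heroku.formation.update('appname', 'worker', 'quantity' => num_workers)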

Logging doesn't work in production with delayed_job

Submitted by 爱⌒轻易说出口 on 2019-12-03 19:42:19
I'm running into a weird issue where my delayed jobs are failing in production. I finally narrowed it down to the logger: if I comment out my log calls, everything works, but if I try to log, I get this in the delayed_job handler:

    --- !ruby/struct:Delayed::PerformableMethod
    object: AR:User:1
    method: :load_and_update_without_send_later
    args: []
    | closed stream
    /opt/ruby/lib/ruby/1.8/logger.rb:504:in `write'
    /opt/ruby/lib/ruby/1.8/logger.rb:504:in `write'
    /opt/ruby/lib/ruby/1.8
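The "closed stream" usually points at the daemonization step: script/delayed_job forks via the daemons gem and closes or redirects the file descriptors the Rails logger was opened on, so the cached Logger writes into a dead stream. One common workaround, sketched here on the assumption that a separate log file for the workers is acceptable, is to give delayed_job its own logger from an initializer:

    # config/initializers/delayed_job.rb
    # Open a fresh log file inside the daemonized worker process instead of
    # reusing the stream that was closed when the daemon detached.
    Delayed::Worker.logger = Logger.new(File.join(Rails.root, 'log', 'delayed_job.log'))
    # Optionally route Rails.logger to the same file while jobs run:
    # Rails.logger = Delayed::Worker.logger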