Resque: time-critical jobs that are executed sequentially per user

Submitted on 2019-12-03 15:43:53

Thanks to the answer from @Isotope, I finally arrived at a solution that seems to work (using resque-retry and locks in Redis):

require 'resque'
require 'resque-retry'
require 'redis/objects'

class MyJob
  extend Resque::Plugins::Retry

  # re-enqueue the job immediately when a lock timeout occurs
  @retry_delay = 0
  # effectively no limit, because eventually the lock will clear
  @retry_limit = 10000
  # only retry on lock timeouts
  @retry_exceptions = [Redis::Lock::LockTimeout]

  def self.perform(user_id, ...)
    # Lock the job for the given user.
    # If another job for this user is already in progress,
    # Redis::Lock::LockTimeout is raised and the job is requeued.
    Redis::Lock.new("my_job.user##{user_id}",
      :expiration => 1,
      # We don't want to wait for the lock; just requeue the job as fast as possible
      :timeout => 0.1
    ).lock do
      # do your stuff here ...
    end
  end
end

Here I am using Redis::Lock from https://github.com/nateware/redis-objects (it encapsulates the locking pattern described at http://redis.io/commands/setex).
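To make the locking behaviour concrete without a running Redis server, here is a minimal sketch of the same acquire-or-raise pattern, with a plain Hash standing in for the Redis keyspace (the real Redis::Lock does this with SETNX and an expiry against the server; `with_lock` and `LockTimeout` here are illustrative stand-ins, not the library's API):

```ruby
# Hypothetical stand-in for Redis::Lock::LockTimeout.
class LockTimeout < StandardError; end

STORE = {}  # fake Redis keyspace: key => lock expiry (epoch seconds)

def with_lock(key, expiration:, timeout:)
  deadline = Time.now + timeout
  acquired = false
  until acquired
    now = Time.now.to_f
    if STORE[key].nil? || STORE[key] < now
      STORE[key] = now + expiration   # "SETNX" plus expiry in one step
      acquired = true
    elsif Time.now >= deadline
      raise LockTimeout, key          # give up quickly; Resque re-enqueues
    else
      sleep 0.01                      # brief pause before trying again
    end
  end
  begin
    yield
  ensure
    STORE.delete(key)                 # release the lock when the work is done
  end
end

ran = []
with_lock("my_job.user#42", expiration: 1, timeout: 0.1) { ran << :work }
```

The short `timeout` mirrors the answer above: rather than blocking a worker while waiting for the lock, the job fails fast and resque-retry puts it back on the queue.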

I've done this before.

The best way to guarantee sequential execution for things like this is to have the end of job1 enqueue job2. The job1s and job2s can then go in the same queue or different queues; it won't matter for ordering, so that part is up to you.

Any other approach, such as queuing job1 and job2 at the same time but telling job2 to start 0.5 s later, would invite race conditions, so it's not recommended.

Having job1 trigger job2 is also really easy to do.
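A minimal sketch of that chaining, with an Array standing in for the Resque queue (in real code the last line of `JobOne.perform` would be `Resque.enqueue(JobTwo, user_id)`; the class and method names here are placeholders):

```ruby
QUEUE = []  # stands in for a Resque queue

class JobTwo
  def self.perform(user_id)
    "job2 done for #{user_id}"
  end
end

class JobOne
  def self.perform(user_id)
    # ... job1 work happens here ...
    QUEUE << [JobTwo, user_id]  # enqueue job2 only after job1 has finished
    "job1 done for #{user_id}"
  end
end

# A worker drains the queue strictly in order, so job2 can never overtake job1.
JobOne.perform(42)
results = QUEUE.map { |klass, *args| klass.perform(*args) }
```

Because job2 only enters the queue once job1 completes, the ordering is enforced by construction rather than by timing.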

If you want one more option for completeness: my final suggestion would be to bundle both jobs into a single job and add a parameter indicating whether the second part should also run.

e.g.

def my_job(id, other_args, do_job_two = false)
  # ...job_1 stuff...
  if do_job_two
    # ...job_2 stuff...
  end
end
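A runnable sketch of that combined-job idea; the parameter names other than `do_job_two` are placeholders, and the string results merely mark which parts ran:

```ruby
def my_job(id, other_args, do_job_two = false)
  steps = []
  steps << "job1 for #{id}"               # ...job_1 stuff...
  steps << "job2 for #{id}" if do_job_two # ...job_2 stuff...
  steps
end

my_job(7, "payload")        # runs only part 1
my_job(7, "payload", true)  # runs part 1, then part 2, in one worker
```

Since both parts run inside a single `perform`, the per-user lock from the accepted answer covers them together, which is the simplest way to keep them sequential.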