Execute only one of many duplicate jobs with Sidekiq?

Submitted by 对着背影说爱祢 on 2019-12-04 19:33:19

Question


I have a background job that runs a map/reduce job on MongoDB. When the user sends in more data for the document, it kicks off a background job that runs on that document. If the user sends in multiple requests, it will kick off multiple background jobs for the same document, but only one really needs to run. Is there a way I can prevent duplicate instances? I was thinking of creating a queue for each document and making sure it is empty before I submit a new job. Or perhaps I can somehow set a job ID that is the same as my document ID, and check that no such job exists before submitting it?

Also, I just found the sidekiq-unique-jobs gem, but its documentation is practically non-existent. Does it do what I want?


Answer 1:


My initial suggestion would be a mutex for this specific job. But since you may have multiple application servers working the Sidekiq jobs, I would suggest something at the Redis level.

For instance, use redis-semaphore within your Sidekiq worker definition. An untested example:

def perform
  s = Redis::Semaphore.new(:map_reduce_semaphore, connection: "localhost")

  # Verify that this Sidekiq worker is the first to reach this semaphore;
  # if another worker already holds it, skip the duplicate run.
  unless s.locked?
    # Auto-unlocks in 90 seconds; set to what is reasonable for your worker.
    s.lock(90)
    begin
      your_map_reduce
    ensure
      # Release the semaphore even if the map/reduce raises.
      s.unlock
    end
  end
end

def your_map_reduce
  # ...
end
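
Note that the locked?/lock pair above is not atomic: two workers arriving at the same instant could both pass the locked? check. A minimal sketch of an atomic variant using Redis's single SET command with NX and EX options via the redis gem (the worker name, lock key, and 90-second TTL are illustrative assumptions, not part of the original answer):

require 'sidekiq'
require 'redis'

class MapReduceWorker
  include Sidekiq::Worker

  def perform(document_id)
    redis = Redis.new(host: "localhost")

    # SET with nx: true and ex: 90 is one atomic command: it succeeds only
    # if the key does not already exist, and the key expires after 90
    # seconds so a crashed worker cannot hold the lock forever.
    acquired = redis.set("map_reduce_lock:#{document_id}", 1, nx: true, ex: 90)
    return unless acquired

    begin
      your_map_reduce(document_id)
    ensure
      # Release the lock even if the map/reduce raises.
      redis.del("map_reduce_lock:#{document_id}")
    end
  end
end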



Answer 2:


https://github.com/krasnoukhov/sidekiq-middleware

UniqueJobs: provides uniqueness for jobs.

Usage

Example worker:

class UniqueWorker
  include Sidekiq::Worker

  sidekiq_options({
    # Should be set to true (enables uniqueness for async jobs)
    # or :all (enables uniqueness for both async and scheduled jobs)
    unique: :all,

    # Unique expiration (optional, default is 30 minutes)
    # For scheduled jobs calculates automatically based on schedule time and expiration period
    expiration: 24 * 60 * 60
  })

  def perform
    # Your code goes here
  end
end
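
With this middleware installed, enqueuing an identical job (same worker class and arguments) while one is already pending should simply be dropped. A hedged illustration, assuming the default class-plus-arguments uniqueness key:

UniqueWorker.perform_async  # enqueued
UniqueWorker.perform_async  # duplicate; ignored until the first job runs or the lock expires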



Answer 3:


There is also https://github.com/mhenrixon/sidekiq-unique-jobs (SidekiqUniqueJobs).
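
That gem's README suggests declaring a lock strategy on the worker. A minimal sketch, assuming a recent version of the gem (older versions used a unique: option instead of lock:, and MapReduceWorker is an illustrative name):

class MapReduceWorker
  include Sidekiq::Worker

  # Only one job with the same arguments may be queued or running at a
  # time; the lock is released when the job finishes executing.
  sidekiq_options lock: :until_executed

  def perform(document_id)
    # ...
  end
end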




Answer 4:


You can do this, assuming all the jobs get added to the Enqueued bucket.

class SidekiqUniqChecker
  def self.perform_unique_async(action, model_name, id)
    key = "#{action}:#{model_name}:#{id}"
    # Scan the queue and bail out if an identical job is already enqueued.
    queue = Sidekiq::Queue.new('elasticsearch')
    queue.each { |job| return if job.args.join(':') == key }
    Indexer.perform_async(action, model_name, id)
  end
end

The above code is just a sample; tweak it to your needs.
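
For example, a model callback could route through the checker instead of calling Indexer.perform_async directly (Post here is a hypothetical model; Indexer and the 'elasticsearch' queue come from the snippet above). Keep in mind that this only guards against jobs still sitting in the queue, not jobs already running, and the check-then-enqueue pair is not atomic:

SidekiqUniqChecker.perform_unique_async('index', 'Post', post.id)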

Source



Source: https://stackoverflow.com/questions/14713540/execute-only-one-of-many-duplicate-jobs-with-sidekiq
