Sidekiq is a great library for background job processing. It uses Redis as its backend, which makes queuing jobs extremely fast. In this article I will discuss various options for scaling job processing and managing it with greater control.
We will build a POC application with two jobs: SendEmailJob and UpdateStatsJob. What if we discover a bug in SendEmailJob and need to stop those jobs from running in production? We can easily stop the entire Sidekiq process via the GUI or CLI, but that will stop ALL jobs from running. We still want UpdateStatsJob to continue running.
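A minimal sketch of the two jobs might look like this (assumes the sidekiq gem; the method bodies are placeholders, not the article's actual code):

```ruby
require 'sidekiq'

# Hypothetical POC jobs; only the class names come from the article.
class SendEmailJob
  include Sidekiq::Worker

  def perform(user_id)
    # e.g. send a welcome email to the user -- placeholder body
  end
end

class UpdateStatsJob
  include Sidekiq::Worker

  def perform
    # e.g. recompute aggregate stats -- placeholder body
  end
end
```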
The first thing we will do is create separate queues for our jobs to run through. Another benefit of multiple queues is that jobs have different priorities and time urgencies: we do not want 10K low-priority jobs queued BEFORE 10 high-priority jobs. Here is a good overview.
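The priority problem can be seen with a plain-Ruby sketch (no gems needed): with a single FIFO queue, 10K low-priority jobs enqueued first make the high-priority work wait, while a worker that always checks a separate high-priority queue first picks up urgent jobs immediately.

```ruby
# One shared FIFO queue: the 10 high-priority jobs sit behind 10,000 low ones.
single_queue = Array.new(10_000, :low) + Array.new(10, :high)
single_queue.first  # the next job picked up is :low

# Separate queues, checked in priority order: urgent work goes first.
high_queue = Array.new(10, :high)
low_queue  = Array.new(10_000, :low)
next_job = high_queue.shift || low_queue.shift  # :high
```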
Now we will configure Sidekiq to run separate processes, each watching its own queue. Here is a sample configuration for capistrano-sidekiq:
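A sketch of what that deploy configuration could look like (option names changed between capistrano-sidekiq versions, so treat this as illustrative and check your version's README):

```ruby
# config/deploy.rb -- run two Sidekiq processes, one per queue.
set :sidekiq_processes, 2
set :sidekiq_options_per_process, [
  '--queue send_email',
  '--queue update_stats'
]
```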
We can now stop specific Sidekiq processes if we want jobs in those queues not to execute (the jobs will continue queuing in Redis). Jobs in other queues will run normally. When we deploy our fix, we will restart the Sidekiq process that watches the send_email queue, and it will then execute those jobs.
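To illustrate: enqueuing keeps working while the process is stopped, and the backlog is visible via Sidekiq's API (a sketch assuming the sidekiq gem, a running Redis, and the article's SendEmailJob class):

```ruby
require 'sidekiq/api'

SendEmailJob.perform_async(42)         # the push to Redis still succeeds
Sidekiq::Queue.new('send_email').size  # backlog grows until the process restarts
```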
This approach can also be extended to separate Sidekiq processes per server. To really scale our application we may need to create multiple servers so that generate_report jobs run completely separately.
Here is the corresponding configuration for capistrano-sidekiq:
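One way to express this is with Capistrano server roles (hostnames here are placeholders, and the exact mechanism for pinning queues to hosts depends on the capistrano-sidekiq version):

```ruby
# config/deploy/production.rb -- dedicate a server to report generation.
server 'app1.example.com',     roles: %w[app web db]
server 'worker1.example.com',  roles: %w[app sidekiq]  # send_email, update_stats
server 'reports1.example.com', roles: %w[app sidekiq]  # generate_report only
```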
Now we can deploy the same codebase to different servers and activate only some of the functionality on each. The other code files will just sit there unused.
This approach works well for a few applications derived from a shared codebase. Beyond that we might need to break things up into separate microservices but that’s a different blog post.