BackgrounDRb offers seamless integration with Rails. You can invoke arbitrary tasks defined in your workers from Rails: pass arguments, collect results, monitor the status of workers, and more.

Invoke a task asynchronously on a worker

Let’s say you have the following worker code:

class FooWorker < BackgrounDRb::MetaWorker
  set_worker_name :foo_worker

  def create(args = nil)
    # this method is called when the worker is loaded for the first time
  end

  def some_task(args)
    # perform a long-running task
  end
end
And you want to invoke the some_task method with appropriate arguments from Rails. The following snippet invokes some_task with argument data on foo_worker. The method is invoked asynchronously, so Rails won’t wait for a result from the BackgrounDRb server.

worker = MiddleMan.worker(:foo_worker)
worker.async_some_task(:arg => data) 

It should be noted that, since some_task is executed asynchronously, you should not expect any meaningful return value from the invocation. If you want to invoke a method on a worker and collect the results it returns, read the next section (Invoke method and collect results).

When you invoke MiddleMan.worker(:foo_worker) it returns a worker proxy, so you can combine the above two lines into one as follows:

MiddleMan.worker(:foo_worker).async_some_task(:arg => data)

# if the worker was started with a worker_key:
MiddleMan.worker(:foo_worker, worker_key).async_some_task(:arg => data)

The snippet also demonstrates that, if your worker was started with a worker_key, you can pass that key to MiddleMan.worker to get the correct worker proxy.
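For instance, a controller action that fires the task and returns immediately might look like this (a sketch; the action and parameter names are illustrative, not part of the BackgrounDRb API):

```ruby
# Hypothetical Rails controller action: fire-and-forget invocation.
# render happens immediately; the worker does its job in the background.
def start_task
  MiddleMan.worker(:foo_worker).async_some_task(:arg => params[:data])
  render :text => "Task started"
end
```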

Synchronous Task invocation (Invoke task and wait for results)

The following snippet invokes some_task with argument data on foo_worker, and blocks until the BackgrounDRb server returns a result.

worker = MiddleMan.worker(:foo_worker)
result = worker.some_task(:arg => data) 

Since you are now expecting a return value from your worker method, the worker code will look like:

class FooWorker < BackgrounDRb::MetaWorker
  set_worker_name :foo_worker

  def create(args = nil)
    # this method is called when the worker is loaded for the first time
  end

  def some_task(args)
    billing_result = UserPayment.bill!
    return billing_result
  end
end

As illustrated above, you can pass a worker_key here as well, or combine the calls into a single line.
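Putting this together, a controller action that blocks on the worker’s result might look like the following sketch (the action name and the use of the return value are illustrative assumptions):

```ruby
# Hypothetical controller action: blocks until the worker returns a result.
def bill
  worker = MiddleMan.worker(:foo_worker)
  result = worker.some_task(:arg => params[:user_id])
  if result
    render :text => "Billing succeeded"
  else
    render :text => "Billing failed"
  end
end
```

Because the call blocks the Rails process, synchronous invocation is best reserved for tasks that finish quickly.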

Retrieve Cached Worker results

If you are using the cache in your worker code to store result objects, you can retrieve them from Rails using:

status_obj = MiddleMan.worker(:foo_worker).ask_result(cache_key) 

You must pass the worker_key if the worker was started with one.

From a controller, you can also reset the result stored for a particular worker under a particular cache key. This is only applicable if you are using memcache for storing results:

MiddleMan.worker(:foo_worker).reset_memcache_result(cache_key)
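A common pattern is to poll for a cached result from a controller while the worker fills the cache in the background. A sketch follows; the cache key scheme and action name are assumptions, and it presumes the worker stored its status with something like cache[key] = status_object:

```ruby
# Hypothetical polling action: returns the cached status if the worker has
# stored one, otherwise tells the client to keep polling.
def task_status
  cache_key = "report_#{params[:user_id]}"   # illustrative key scheme
  status_obj = MiddleMan.worker(:foo_worker).ask_result(cache_key)
  if status_obj
    render :json => status_obj
  else
    render :text => "Still working...", :status => 202
  end
end
```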

Enqueue a task to the persistent job queue

Jobs executed via the synchronous and asynchronous APIs are fine, but those tasks are kept in memory (which makes them fast) and hence are not entirely failsafe.

To solve this, BackgrounDRb also lets you add jobs to a persistent job queue, which is automatically picked up by the responsible worker and invoked. To use this:

MiddleMan.worker(:hello_worker).enq_some_task(:arg => "hello_world", :job_key => "boy")

With BackgrounDRb version >= 1.1, you can also schedule a persistent task to be executed at a particular time:

MiddleMan.worker(:hello_worker).enq_some_task(:arg => "hello_world",
                      :job_key => "boy", :scheduled_at => (Time.now + 1.hour))

The above line adds the specified task to the job queue, set to be invoked at the specified time. For more information about scheduling, see the scheduling section.
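On the worker side, a method invoked through the persistent queue should mark the job as finished when it is done, so it is removed from the queue rather than retried. A sketch, assuming the persistent_job accessor that BackgrounDRb exposes inside worker methods handling queued jobs:

```ruby
# Inside your worker class (sketch). A job enqueued with enq_some_task is
# eventually picked up by this method.
def some_task(args)
  # ... perform the long-running work here ...
  persistent_job.finish!  # assumption: marks this queued job complete
end
```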

Start a new worker from controller

To start a worker from Rails:

used_job_key = MiddleMan.new_worker(:worker => :foo_worker,
     :worker_key => "my_secret_job_key")

The worker key passed here while starting the worker can be used later for invoking tasks on the started worker, accessing cached result objects, and so on.

An important thing to keep in mind: when you create a worker using the above approach, you must use a unique worker_key while starting it. And when invoking any of the other methods, such as ask_result, worker_info, or one of the worker methods, you must use the same worker_key.
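A typical use is one worker instance per logged-in user, keyed by the user id. A sketch (the key scheme and action name are illustrative):

```ruby
# Hypothetical pattern: start a dedicated worker for the current user,
# then reach that same instance later by passing the same key.
def start_user_worker
  key = "user_#{session[:user_id]}"
  MiddleMan.new_worker(:worker => :foo_worker, :worker_key => key)
  # subsequent calls must supply the same key to get this instance's proxy:
  MiddleMan.worker(:foo_worker, key).async_some_task(:arg => session[:user_id])
end
```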

Worker Info

You can get worker specific information using:

MiddleMan.worker(:foo_worker, "hello").worker_info
The return value will look something like:

{:worker=>:foo_worker, :status=>:running, :worker_key=>"hello"} 
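You can use this information, for example, to check that a worker is actually up before dispatching work to it. A sketch (the guard method is an illustrative assumption, built on the worker_info call shown above):

```ruby
# Hypothetical guard: only dispatch the task if the worker reports :running.
def dispatch_if_running(data)
  info = MiddleMan.worker(:foo_worker).worker_info
  if info && info[:status] == :running
    MiddleMan.worker(:foo_worker).async_some_task(:arg => data)
    true
  else
    false
  end
end
```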

Information about all currently running workers can be obtained using:

MiddleMan.all_worker_info

The return value is a hash keyed by the host:port of each configured BackgrounDRb server (a server that is down maps to nil), and will look like:

{"host:port"=>
 [{:worker_key=>"", :status=>:running, :worker=>:log_worker},
  {:worker_key=>"", :status=>:running, :worker=>:foo_worker}]}

BackgrounDRb Clustering

By using the :client option in your backgroundrb.yml, you can cluster more than one backgroundrb server. Its value is a comma-separated list of host:port pairs (host1:11006 and host2:11007 below are placeholders for your actual server addresses):

:backgroundrb:
  :port: 11006
  :environment: production
:client: "host1:11006,host2:11007"

So what happens here is that the BackgrounDRb client will now talk to the bdrb servers running on both of the configured hosts. So when you invoke a task like this:

MiddleMan.worker(:foo_worker).async_some_task(:arg => data) 

Your task gets executed on the specified servers in round-robin fashion by default. Also, if a server goes down, it automatically stops participating in clustering, and when it comes back up it automatically starts participating again.

In addition to the default round-robin task distribution, you can override this behaviour by passing an additional :host option while invoking the task from Rails. For example:

# run method 'some_task' on all backgroundrb servers
MiddleMan.worker(:hello_worker).async_some_task(:arg => data,
               :job_key => session[:user_id],:host => :all)

# run method 'some_task' on only locally configured server
MiddleMan.worker(:hello_worker).async_some_task(:arg => data,
               :job_key => session[:user_id],:host => :local)

# run the task on one specific server (pass its host:port as configured
# in the :client option)
MiddleMan.worker(:hello_worker).async_some_task(:arg => data,
               :job_key => session[:user_id], :host => "host2:11007")