In my opinion, queue workers are often overlooked and underutilized as a way of dealing with background tasks. In the bad ole days of Drupal, we were stuck with hook_cron and a single wget call to cron.php with a so-called “secure key.” This post is not a “history of cron in Drupal” post. Over time, modules were created to make cron more maintainable, and Drush commands made the wget call obsolete. Today’s Drupal core still has the core-cron Drush command, and some documentation exists on how best to set it up.

Queue workers are not cron, but cron can execute them. Queue workers work on a task queue and handle each task in a single process. You could add a job to a queue every time something happened and then process that task in the background. In the following example, we’ll set up a Queue Worker to process an audit log, so we can keep an eye on what’s going on in our system.

Setting up queues

Queue workers are plugins, defined using the annotation API. Place your queue worker class in the Plugin/QueueWorker directory for your module and give it a QueueWorker annotation, and you will have both a worker and a queue. In our example, we’ll create the audit log worker as follows.

namespace Drupal\queue_worker_examples\Plugin\QueueWorker;

use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Drupal\Core\Queue\QueueWorkerBase;

/**
 * Processes audit log jobs.
 *
 * @QueueWorker(
 *   id = "audit_log",
 *   title = @Translation("Audit log worker"),
 *   cron = {"time" = 60}
 * )
 */
class AuditLogQueueWorker extends QueueWorkerBase implements ContainerFactoryPluginInterface {}

In the annotation, we give the queue worker an ID, a human-readable title, and a cron time limit: the number of seconds cron may spend processing this queue on each run. Next, we extend the QueueWorkerBase base class, as this is where most of the magic happens. The base class implements QueueWorkerInterface, which requires an implementation of the processItem method. We’ll do the actual audit logging there, and since we’re keeping it simple, we’ll output it to a log channel.

Defining a logging channel is done in the [module_name].services.yml file, so we’ll define an audit log channel.

services:
  # The service name follows Drupal's logger.channel.[channel_name] convention.
  logger.channel.audit_log:
    parent: logger.channel_base
    arguments: [ 'audit_log' ]

We implement ContainerFactoryPluginInterface so we can add the logging channel to the worker class using dependency injection. This is good practice: reaching into the container through the \Drupal static is discouraged in favor of inversion of control.
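As a sketch of what that injection could look like (assuming the channel service is named logger.channel.audit_log per Drupal's naming convention, and with our own choice of property and parameter names):

```php
use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Drupal\Core\Queue\QueueWorkerBase;
use Psr\Log\LoggerInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

class AuditLogQueueWorker extends QueueWorkerBase implements ContainerFactoryPluginInterface {

  /**
   * The audit log channel.
   */
  protected LoggerInterface $logger;

  public function __construct(array $configuration, $plugin_id, $plugin_definition, LoggerInterface $logger) {
    parent::__construct($configuration, $plugin_id, $plugin_definition);
    $this->logger = $logger;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    return new static(
      $configuration,
      $plugin_id,
      $plugin_definition,
      // Pull the channel we defined in [module_name].services.yml.
      $container->get('logger.channel.audit_log')
    );
  }

}
```

The create() factory receives the container, so the class itself never touches \Drupal.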

Now that we have a logging channel for our audit log, we need to decide what to log. We’d need to log who, what and when for an audit log. “Who” is the user. “What” is the operation performed, so this could be any CRUD (Create, Read, Update, Delete) function, but it should also include on what entity it was performed. “When” is simply a timestamp for the operation.

Adding jobs

Since we’re creating an audit log, it makes sense to get the information from the permission layer. Drupal provides a hook, hook_entity_access, which is called whenever entity access is checked. Its arguments contain everything we need: account, entity, and operation. Hooks are placed in the module file, so we’ll add our hook_entity_access implementation there and create the job with the information we want to log.

/**
 * Implements hook_entity_access().
 */
function queue_worker_examples_entity_access(\Drupal\Core\Entity\EntityInterface $entity, $operation, \Drupal\Core\Session\AccountInterface $account) {
  $job = [
    'user' => $account->getAccountName(),
    'uid' => $account->id(),
    'entity' => $entity->getEntityTypeId() . ':' . $entity->bundle() . ':' . $entity->id(),
    'op' => $operation,
    'timestamp' => \Drupal::time()->getCurrentTime(),
  ];
  // Hand the job to the audit_log queue for later processing.
  \Drupal::queue('audit_log')->createItem($job);

  return \Drupal\Core\Access\AccessResult::neutral();
}

We implement an access hook whose sole job is to queue the audit log job; it makes no access decision of its own, hence the neutral result. It’s worth noting that automated operations also perform access checks, so if cron publishes a node (such as through the Scheduler module), you will likely see some interesting audit logs from your admin user.

Job processing

Moving back to the worker, we should add the processItem method and log the actual audit. We grab our logging channel and add the log message to it.

  /**
   * {@inheritdoc}
   */
  public function processItem($data) {
    $this->logger->info(
      '@user (uid: @uid) performed @op on @entity at @timestamp',
      [
        '@user' => $data['user'],
        '@uid' => $data['uid'],
        '@op' => $data['op'],
        '@entity' => $data['entity'],
        '@timestamp' => $data['timestamp'],
      ]
    );
  }

With this, our basic audit log is complete. Test it out by creating a node and then running your queue, either with drush cron or with drush queue:run audit_log, and check your log file for the messages. By default, the example Drupal installation logs to stdout, but you can enable dblog to log to your database.
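The test run above, assuming a working Drush installation in the project:

```
# Process all cron queues, including audit_log (capped at 60 seconds
# per run by the cron = {"time" = 60} annotation).
drush cron

# Or run just the audit_log queue directly.
drush queue:run audit_log

# List queues and their remaining item counts.
drush queue:list
```

Running queue:run repeatedly is also a handy way to watch the queue drain while debugging.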

Background processing

As a general rule, I prefer using background processes for anything I can offload that doesn’t need to happen immediately. The more real-time processing you do, the slower page requests become. Since you don’t need this information right this minute, doing it inline wastes the user’s time.

We could replace the audit log example with an API integration, where incoming requests to a Drupal-based API are validated and queued for further processing. Background processing makes the API faster to respond, but you might need to implement a ticket system if external sources need to track progress. As a rule, though, unless you need to do it right now, queue it. Having a dedicated queue or cron server lets you keep your main web servers focused on serving web responses rather than keeping their PHP and Nginx workers busy longer.
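That validate-then-queue pattern could look something like the following sketch; the api_import queue name, the controller method, and the ticket scheme are all hypothetical, not part of the original example:

```php
// Hypothetical controller method: validate the request quickly, queue the
// heavy lifting, and hand back a ticket ID the client can poll later.
public function receive(\Symfony\Component\HttpFoundation\Request $request) {
  $payload = json_decode($request->getContent(), TRUE);
  if ($payload === NULL) {
    // Reject invalid input immediately; only valid jobs reach the queue.
    return new \Symfony\Component\HttpFoundation\JsonResponse(['error' => 'Invalid JSON'], 400);
  }
  // Generate a ticket the caller can use to check progress later.
  $ticket = \Drupal::service('uuid')->generate();
  \Drupal::queue('api_import')->createItem([
    'ticket' => $ticket,
    'payload' => $payload,
  ]);
  // 202 Accepted: the work is queued, not done.
  return new \Symfony\Component\HttpFoundation\JsonResponse(['ticket' => $ticket], 202);
}
```

A matching queue worker would process api_import items and record the outcome against the ticket, wherever the ticket system stores it.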

In our example, we pushed the information into the queue at run-time and let a queue worker pick up the jobs. You can find the audit log example in the links below. Try it out, and consider adding a queue worker in your next project.