Failed Jobs

When a job fails permanently (exceeds max_tries), it is moved to failed-job storage for inspection and potential retry. This guide explains how to inspect, retry, and clean up failed jobs.

What Makes a Job Fail?

A job fails permanently when:

  1. Exceeds max_tries - the job has been retried the maximum number of times

  2. Unrecoverable error - the job raises an exception that retrying cannot resolve

  3. Repeated timeouts - the job exceeds its timeout limit on every attempt
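The conditions above can be illustrated with a minimal job definition. The attribute names `max_tries` and `timeout` mirror the terms used in this guide; the `SendEmailJob` class itself is hypothetical.

```python
# Sketch of a job whose retry/timeout limits determine when it fails
# permanently. The class and its handle() method are illustrative only.
class SendEmailJob:
    max_tries = 3   # after 3 failed attempts the job moves to failed storage
    timeout = 30    # seconds; repeated timeouts also consume attempts

    def handle(self):
        # Raising here consumes one attempt; once attempts reach max_tries,
        # the worker records the job in queue_failed_jobs.
        raise RuntimeError("SMTP connection refused")
```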

Where Failed Jobs Are Stored

Failed jobs are stored in the queue_failed_jobs table:

CREATE TABLE queue_failed_jobs (
    id INT AUTO_INCREMENT PRIMARY KEY,
    connection VARCHAR(255) NOT NULL,
    queue VARCHAR(255) NOT NULL,
    payload TEXT NOT NULL,
    exception TEXT NOT NULL,
    failed_at DATETIME NOT NULL,
    INDEX(queue)
);

Redis Storage (Fallback)

If MySQL is disabled, failed jobs are stored in Redis instead.
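The exact Redis layout depends on your configuration; one plausible scheme, assumed here purely for illustration, is a list of JSON entries mirroring the columns of `queue_failed_jobs` under a key such as `queue:failed`:

```python
import json
from datetime import datetime, timezone

# Hypothetical shape of a failed-job entry pushed onto a Redis list.
# The key name "queue:failed" and field names are assumptions that
# mirror the MySQL columns, not documented constants.
entry = {
    "connection": "redis",
    "queue": "default",
    "payload": json.dumps({"job": "SendEmailJob", "args": ["user@example.com"]}),
    "exception": "RuntimeError: SMTP connection refused",
    "failed_at": datetime.now(timezone.utc).isoformat(),
}
# A client would then run something like:
#   redis.rpush("queue:failed", json.dumps(entry))
```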

Viewing Failed Jobs

Using MySQL
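Failed jobs can be listed straight from the table defined above, most recent first:

```sql
-- Most recent failures first, with a trimmed error message
SELECT id, queue, failed_at, LEFT(exception, 120) AS error
FROM queue_failed_jobs
ORDER BY failed_at DESC
LIMIT 20;
```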

Using Redis CLI
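If failed jobs are in Redis, the CLI can inspect them directly. The key name below is an assumption; adjust it to your configuration.

```shell
redis-cli LLEN queue:failed          # how many failed jobs are stored
redis-cli LRANGE queue:failed 0 9    # inspect the first ten entries
```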

The failed() Method

Override the failed() method in your job to handle permanent failures:
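A minimal sketch, assuming the library calls `failed(exception)` once after the final attempt; the in-memory alert list stands in for a real notifier:

```python
class ProcessPaymentJob:
    max_tries = 3

    def __init__(self, order_id):
        self.order_id = order_id
        self.alerts = []  # stand-in for a real notifier, for illustration

    def handle(self):
        ...  # charge the payment provider

    def failed(self, exception):
        # Called once, after the final attempt. The job is already in
        # failed storage at this point, so only do side effects here:
        # alert an operator, refund, mark the order for manual review.
        self.alerts.append(
            f"payment for order {self.order_id} failed: {exception}"
        )
```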

Common Failed Job Scenarios

Scenario 1: External API Failure
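For a flaky external API, transient errors are worth retrying up to max_tries, while permanent rejections are not; letting the latter fail fast keeps attempts from being wasted. The exception classes here are hypothetical:

```python
class ApiUnavailable(Exception):
    """Transient: the provider is down; retrying may succeed."""

class ApiRejected(Exception):
    """Permanent: the request itself is invalid; retrying cannot help."""

def should_retry(exc):
    # Let transient failures burn through max_tries; fail fast otherwise.
    return isinstance(exc, ApiUnavailable)
```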

Scenario 2: Invalid Data
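Invalid data can never be fixed by retrying, so validating up front and raising immediately keeps `queue_failed_jobs` meaningful. The handler and its validation rule are illustrative:

```python
def handle_signup(payload):
    # Malformed input: fail permanently rather than waste retries.
    if "email" not in payload or "@" not in payload["email"]:
        raise ValueError(f"invalid signup payload: {payload!r}")
    return payload["email"].lower()
```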

Scenario 3: Resource Unavailable
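When a dependency (database, lock, downstream service) is temporarily unavailable, spacing retries out with a backoff delay is usually better than hammering it. A sketch of one common schedule, with illustrative base and cap values:

```python
def backoff_delay(attempt, base=5, cap=300):
    # Exponential backoff: 5s, 10s, 20s, ... capped at 5 minutes.
    # attempt is 1-based (first retry = attempt 1).
    return min(base * (2 ** (attempt - 1)), cap)
```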

Inspecting Failed Jobs

Get Job Details
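A single row holds everything recorded about a failure; in the mysql client, the `\G` terminator prints one column per line, which makes the long `payload` and `exception` fields readable:

```sql
-- Full detail for one failed job (id 42 is an example)
SELECT * FROM queue_failed_jobs WHERE id = 42\G
```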

Analyze Failure Patterns
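Grouping by queue and by the leading exception class (the text before the first `:` in the stored trace) surfaces recurring problems:

```sql
-- Which queues fail most, and with which exception class
SELECT queue,
       SUBSTRING_INDEX(exception, ':', 1) AS exception_class,
       COUNT(*) AS failures,
       MAX(failed_at) AS last_seen
FROM queue_failed_jobs
GROUP BY queue, exception_class
ORDER BY failures DESC;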

Retrying Failed Jobs

Manual Retry
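Retrying means pushing the stored payload back onto its original queue and then deleting the failed row. This sketch abstracts the storage behind callables so the flow is clear; the real library's API will differ:

```python
def retry_failed_job(job_row, push, delete):
    # job_row mirrors a queue_failed_jobs record; push/delete are
    # stand-ins for the queue client and a DELETE statement.
    push(job_row["queue"], job_row["payload"])
    delete(job_row["id"])

# Usage with in-memory stand-ins:
pushed, deleted = [], []
retry_failed_job(
    {"id": 7, "queue": "emails", "payload": '{"job": "SendEmailJob"}'},
    push=lambda q, p: pushed.append((q, p)),
    delete=deleted.append,
)
```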

Bulk Retry
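Bulk retry is the same flow applied to a filtered set, for example every failed job from one queue; returning a count makes the operation auditable:

```python
def retry_all(rows, push, delete):
    # rows would come from e.g.
    #   SELECT * FROM queue_failed_jobs WHERE queue = 'emails'
    retried = 0
    for row in rows:
        push(row["queue"], row["payload"])
        delete(row["id"])
        retried += 1
    return retried
```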

Cleaning Up Failed Jobs

Delete Old Failed Jobs
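A time-bounded DELETE keeps the table from growing without dropping recent failures you may still want to retry; the 30-day window below is an example, not a recommendation:

```sql
-- Drop failures older than 30 days (pick a window that suits your audit needs)
DELETE FROM queue_failed_jobs
WHERE failed_at < NOW() - INTERVAL 30 DAY;
```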

Automated Cleanup Script
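A cleanup script only needs to compute a cutoff timestamp and run the DELETE against it. The retention value and script structure here are assumptions; the database call is left as a comment since the client depends on your setup:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # assumption: tune to your audit requirements

def cleanup_cutoff(now=None, days=RETENTION_DAYS):
    # Everything that failed before this timestamp is eligible for deletion.
    now = now or datetime.now(timezone.utc)
    return now - timedelta(days=days)

# The script would then execute, with your DB client of choice:
#   DELETE FROM queue_failed_jobs WHERE failed_at < %s   -- cleanup_cutoff()
```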

Schedule with cron:
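A nightly run is usually enough. The path and interpreter below are placeholders; adjust them to wherever your cleanup script lives:

```
# Run the failed-job cleanup script every night at 02:30
30 2 * * * /usr/bin/python3 /opt/app/scripts/cleanup_failed_jobs.py
```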

Monitoring Failed Jobs

Alert on Failure Threshold
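A monitoring check only needs the failure count over a recent window; the threshold and notifier below are illustrative:

```python
FAILURE_THRESHOLD = 10  # assumption: alert past 10 failures per window

def check_failures(count, notify, threshold=FAILURE_THRESHOLD):
    # count would come from:
    #   SELECT COUNT(*) FROM queue_failed_jobs
    #   WHERE failed_at > NOW() - INTERVAL 1 HOUR
    if count > threshold:
        notify(f"{count} failed jobs in the last hour (threshold {threshold})")
        return True
    return False
```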

Dashboard Query
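For a dashboard, a per-queue, per-day breakdown charts cleanly:

```sql
-- Failures per queue per day, for charting
SELECT queue,
       DATE(failed_at) AS day,
       COUNT(*) AS failures
FROM queue_failed_jobs
GROUP BY queue, DATE(failed_at)
ORDER BY day DESC, failures DESC;
```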

Best Practices

1. Always Implement failed()

2. Set Appropriate max_tries

3. Regular Cleanup

4. Monitor Failure Patterns

Next Steps
