Job Pipeline
The JobPipeline is a simple, yet extremely powerful class that lets you convert any (series of) jobs into event listeners.
You may use a job pipeline like any other listener, so you can register it in the EventServiceProvider using the $listen array, or in any other place using Event::listen(); it's up to you.
Creating job pipelines
These code snippets will use examples from my multi-tenancy package.
To create a job pipeline, start by specifying the jobs you want to use:
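A sketch of this step, using JobPipeline::make() with the CreateDatabase, MigrateDatabase, and SeedDatabase job classes from the multi-tenancy package mentioned above (the job class names are illustrative):

use Stancl\JobPipeline\JobPipeline;
use Stancl\Tenancy\Jobs\CreateDatabase;
use Stancl\Tenancy\Jobs\MigrateDatabase;
use Stancl\Tenancy\Jobs\SeedDatabase;

JobPipeline::make([
    CreateDatabase::class,
    MigrateDatabase::class,
    SeedDatabase::class,
])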
Then, specify what variable you want to pass to the jobs. This will usually come from the event.
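Continuing the sketch, assuming a TenantCreated event with a public $tenant property, the value is returned from a send() callback:

->send(function (TenantCreated $event) {
    return $event->tenant;
})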
Next, decide if you want to queue the pipeline. By default, pipelines are synchronous (= not queued).
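For example, continuing the chain:

->shouldBeQueued(true)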
🔥 If you do want pipelines to be queued by default, you can do that by setting a static property:
\Stancl\JobPipeline\JobPipeline::$shouldBeQueuedByDefault = true;
If you wish to push the job to a different queue, you can pass a string as the second parameter:
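For example:

->shouldBeQueued(true, 'another-queue')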
This can be simplified by calling shouldBeQueued(queue: 'another-queue'), since the first parameter defaults to true.
Finally, convert the pipeline to a listener and bind it to an event:
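Putting the steps together, a sketch using Event::listen() and the same assumed tenancy classes, where toListener() converts the pipeline into a listener:

use Illuminate\Support\Facades\Event;
use Stancl\JobPipeline\JobPipeline;
use Stancl\Tenancy\Events\TenantCreated;
use Stancl\Tenancy\Jobs\CreateDatabase;
use Stancl\Tenancy\Jobs\MigrateDatabase;
use Stancl\Tenancy\Jobs\SeedDatabase;

Event::listen(TenantCreated::class, JobPipeline::make([
    CreateDatabase::class,
    MigrateDatabase::class,
    SeedDatabase::class,
])->send(function (TenantCreated $event) {
    return $event->tenant;
})->shouldBeQueued(true)->toListener());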
Note that you can use job pipelines even for converting single jobs to event listeners. That's useful if you have some logic in job classes and don't want to create listener classes just to be able to run these jobs as a result of an event being fired.
Tip: Returning false from a job cancels the execution of all following jobs in the pipeline. This can be useful to cancel a job pipeline that creates, migrates, and seeds databases if the database creation job fails (e.g. because it detects that a database already exists). It can therefore be good to separate jobs into multiple pipelines, so that each logical category of jobs can be stopped individually.
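A minimal sketch of a job that cancels the rest of the pipeline; the databaseExists() helper is hypothetical and stands in for your actual check:

class CreateDatabase
{
    public function handle(): ?bool
    {
        if ($this->databaseExists()) {
            // Returning false cancels the remaining jobs in the pipeline.
            return false;
        }

        // ... create the database here ...

        return null;
    }

    private function databaseExists(): bool
    {
        // Hypothetical check; replace with real logic.
        return false;
    }
}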