Download the PHP package ktomk/pipelines without Composer

On this page you can find all versions of the PHP package ktomk/pipelines. It is possible to download/install these versions without Composer. Any dependencies are resolved automatically.

FAQ

After the download, you only have to add a single include: require_once('vendor/autoload.php');. After that, import the classes with use statements.

Example:
If you use only one package, a project is not needed. But if you use more than one package, it is not possible to import the classes with use statements without a project.

In general, it is recommended to always use a project to download your libraries, as an application normally needs more than one library.
Some PHP packages are not free to download and are therefore hosted in private repositories. In this case, credentials are needed to access such packages. Please use the auth.json textarea to insert credentials if a package comes from a private repository. You can look here for more information.

  • Some hosting environments are not accessible via a terminal or SSH, making it impossible to use Composer.
  • Using Composer is sometimes complicated, especially for beginners.
  • Composer requires considerable resources, which are sometimes not available on a simple webspace.
  • If you are using private repositories, you don't need to share your credentials. You can set up everything on our site and then provide a simple download link to your team members.
  • Simplify your Composer build process. Use our own command line tool to download the vendor folder as a binary. This makes your build process faster, and you don't need to expose your credentials for private repositories.

Information about the package pipelines

Pipelines

Run Bitbucket Pipelines Wherever They Dock


Command line pipeline runner written in PHP. Available from Github or Packagist.

Environment | Details | References

Usage

From anywhere within a project or (Git) repository with a Bitbucket Pipeline file:

$ pipelines

Runs pipeline commands from bitbucket-pipelines.yml [BBPL].

Memory and time limits are ignored. Press ctrl + c to quit.

The Bitbucket limit of 100 (previously 10) steps per pipeline is ignored.

Exit status is taken from the last pipeline script command; if a command fails, the following script commands and steps are not executed.

The default pipeline is run; if there is no default pipeline in the file, pipelines says so and exits with a non-zero status.

To execute a different pipeline use the --pipeline <id> option, where <id> is one of the ids listed by the --list option. Even more information about the pipelines is available via --show. Both --list and --show print their output and exit.
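For example, first list the available pipelines, then run one of them (the id custom/deploy is exemplary; use an id from your own --list output):

$ pipelines --list
$ pipelines --pipeline custom/deploy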

Use --steps <steps> to specify which step(s) to execute (also in which order).

If the next pipeline step has a manual trigger, pipelines stops the execution and outputs a short message on standard error stating that fact. Manual triggers can be ignored with the --no-manual option.

Run the pipeline as if a tag, branch or bookmark had been pushed with --trigger <ref>, where <ref> is tag:<name>, branch:<name>, bookmark:<name> or pr:<branch-name>[:<destination-branch>]. If there is no tag, branch, bookmark or pull-request pipeline with that name, the name is compared against the patterns of the referenced type, and if one matches, that pipeline is run.

Otherwise the default pipeline is run, if there is no default pipeline, no pipeline at all is run and the command exits with non-zero status.
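For example, to run the pipeline as if the (exemplary) tag v1.0.0 or branch feature/login had been pushed:

$ pipelines --trigger tag:v1.0.0
$ pipelines --trigger branch:feature/login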

--pipeline and --trigger can be used together; --pipeline overrides the pipeline from --trigger, but --trigger still influences the container environment variables.

To specify a different file use the --basename <basename> or --file <path> option, and/or set the working directory with --working-dir <path>, in which the file is looked for unless an absolute path is set by --file <path>.
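For example (both paths are exemplary):

$ pipelines --file ci/bitbucket-pipelines.yml
$ pipelines --working-dir ../other-project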

By default pipelines operates on the current working tree which is copied into the container to isolate running the pipeline from the working directory (implicit --deploy copy).

Alternatively the working directory can be mounted into the pipelines container by using --deploy mount.

Use the --keep flag to keep containers after the pipeline has finished for further inspection. By default all containers are destroyed. Sometimes, for development, it is interesting to keep containers on error only; the --error-keep flag is for that.

In any case, if a pipeline runs again and finds an existing container with the same name (generated from the pipeline name etc.), the existing container is re-used. This can be very useful to iterate quickly.

Manage leftover containers with --docker-list to show all pipeline containers, --docker-kill to kill running containers, and --docker-clean to remove stopped pipeline containers. Use in combination to fully clean, e.g.:

$ pipelines --docker-list --docker-kill --docker-clean

Or, for a more concise clean-up, just run:

$ pipelines --docker-zap

to kill and remove all pipeline containers (without showing a list). "zap" is pipelines' "make clean" equivalent for --keep.

All containers run by pipelines are labeled to ease maintaining them.

Validate your bitbucket-pipelines.yml file with --show, which highlights any errors found.

For schema validation use --validate [<file>]. Schema validation might show errors that are not an issue when executing a pipeline (--show and/or --dry-run are better for that), but it validates against a schema aligned with the one Atlassian/Bitbucket provides (the schema is more lax compared to upstream in cases known to offer a better practical experience). E.g. use it for checks in your CI pipeline, or for linting files before push in a pre-commit hook or your local build.
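For example, to validate the pipelines file of the current project, or another file (the second path is exemplary):

$ pipelines --validate
$ pipelines --validate ci/bitbucket-pipelines.yml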

Inspect your pipeline with --dry-run, which will process the pipeline but not execute anything. Combine with -v (--verbose) to show verbatim the commands that would have run, which allows a better understanding of how pipelines actually works. Nothing to hide here.
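For example, to see verbatim what would run without executing anything:

$ pipelines --dry-run -v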

Use --no-run to not run the pipeline at all; this can be used to test the utility's options.

Pipeline environment variables can be passed/exported to or set for your pipeline by name or file with the -e, --env and --env-file options.

Environment variables are also loaded from the dot env files .env.dist and .env, processed in that order before the environment options. Use --no-dot-env-files to prevent automatic loading, or --no-dot-env-dot-dist for the .env.dist file only.
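For example (the variable name, value and file name are exemplary):

$ pipelines -e FOO=bar --env-file .env.local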

More information on pipelines environment variables can be found in the environment section below.

Help

A full display of the pipelines utility options and arguments is available via -h, --help:

Usage Scenario

Give your project and pipeline changes a quick test run from the staging area. As pipelines are normally executed far away, setting them up becomes cumbersome. The guide in the Bitbucket Pipelines documentation [BBPL-LOCAL-RUN] has some hints and is of help, but it is not about a Bitbucket pipelines runner.

This is where the pipelines command jumps in.

The pipelines command closes the gap between local development and remote pipeline execution by executing any pipeline configured on your local development box. As long as Docker is accessible locally, the bitbucket-pipelines.yml file is parsed and pipelines takes care of executing all steps and their commands within the container of choice.

Pipelines YAML file parsing, container creation and script execution match the Atlassian Bitbucket Pipelines service as closely as possible. Environment variables can be passed into each pipeline as needed. You can even switch to a different CI/CD service like Github/Travis with little integration work, fostering your agility and vendor independence.

Features

Features include:

Dev Mode

Pipeline from your working tree like never before. Pretend to be on any branch, tag or bookmark (--trigger) even in a different repository or none at all.

Check if the reference matches a pipeline or just run the default (default) or a specific one (--list, --pipeline). Use a different pipelines file (--file) or swap the "repository" by changing the working directory (--working-dir <path>).

If a pipeline step fails, the step's container can be kept for further inspection on error with the --error-keep option. The container id is then shown, which makes it easy to spawn a shell inside.
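A minimal sketch with the plain Docker CLI (replace <container-id> with the id shown):

$ docker exec -it <container-id> /bin/sh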

Containers can always be kept for debugging and manual testing of a pipeline with --keep, and with the aforementioned --error-keep on error only. Kept containers are re-used by their name regardless of any --keep (or --error-keep) option.

Continue on a (failed) step with the --steps <steps> argument; <steps> can be any step number or sequence (1-3), multiple separated by comma (3-,1-2), and you can even repeat steps or reverse their order (4,3,2,1).

For example, if the second step failed, continue with --steps 2- to re-run the second and all following steps (--steps 2 or --step 2 runs only the second step, for a step-by-step approach).
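For example:

$ pipelines --steps 2-      # the second and all following steps
$ pipelines --steps 4,3,2,1 # all four steps in reverse order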

Afterwards, manage leftovers with --docker-list|kill|clean or clean up with --docker-zap.

Debugging options to dream of; benefit from the local build, the pipeline container.

Container Isolation

There is one container per step, just like on Bitbucket.

Files are isolated by being copied into the container before the pipeline step script is executed (implicit --deploy copy).

Alternatively, files can be mounted into the container with --deploy mount, which is normally faster on Linux, but the working tree might be changed by the container script, which can cause unwanted side effects. Docker runs system-wide and containers do not isolate users (e.g. root is root).

Better with --deploy mount (and for peace of mind) is using Docker in rootless mode, where files manipulated in the pipeline container remain accessible to your own user account (root inside the container is automatically mapped to your user).

Pipeline Integration

Export files from the pipeline by making use of artifacts; these are copied back into the working tree while in (implicit) --deploy copy mode. Artifacts' files are always created by the user running pipelines. This also (near) perfectly emulates the artifacts section of the file format, with the benefit/downside that you might want to prepare a clean build in a pipeline step script while being able to keep artifacts from pipelines locally. This is a trade-off that has turned out to be acceptable over the years.

Wrap pipelines in a script for clean checkouts, or wait for future options to stage first (git-deployment feature). In any case, control your build first of all.

Ready for Offline

On the plane? Riding Deutsche Bahn? Or just a rainy day on a remote location with broken net? Coding while abroad? Or just Bitbucket down again?

Before going into offline mode, read about Working Offline; you'll love it.

Services? Check!

The local pipeline runner runs service containers on your local box/system (that is, your pipelines' host). This is similar to using services and databases in Bitbucket Pipelines [BBPL-SRV].

Even before any pipeline step makes use of a service, a service definition can already be tested with the --service option, turning setting up services in pipelines into a new experience. It is a good way to test service definitions and to get an impression of the additional resources being consumed.
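For example, to run a single service definition on its own (the service name redis is exemplary; use one defined in your pipelines file):

$ pipelines --service redis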

Default Image

The pipelines command uses the default image like Bitbucket Pipelines does ("atlassian/default-image"). Get started out of the box, but keep in mind it is roughly 1.4 GB in size.

Pipelines inside Pipeline

As a special feature, and by default, pipelines mounts the Docker socket into each container (on systems where the socket is available). This allows launching pipelines from within a pipeline, as long as pipelines and the Docker client are available in the pipeline's container. pipelines takes care of providing the Docker client as /usr/bin/docker as long as the pipeline has the docker service (services: [docker]).

This feature is similar to running Docker commands in Bitbucket Pipelines [BBPL-DCK].

The pipelines-inside-pipeline feature serves pipelines itself well for integration testing of the project's build. In combination with --deploy mount, the original working directory is mounted from the host (again). Additional protection against endless recursion is implemented to prevent accidental pipelines-inside-pipeline invocations that would otherwise be endlessly ongoing.

Environment

Pipelines mimics "all" of the Bitbucket Pipelines in-container environment variables [BBPL-ENV], also known as environment parameters:

All of these (but not BITBUCKET_CLONE_DIR) can be set within the environment pipelines runs in and are taken over into the container environment. Example:

$ BITBUCKET_BUILD_NUMBER=123 pipelines # build no. 123

More information on (Bitbucket) pipelines environment variables can be found in the Pipelines Environment Variable Usage Reference.

Additionally pipelines sets some environment variables for introspection:

These environment variables are managed by pipelines itself. Some of them can be injected which can lead to undefined behaviour and can have system security implications as well.

Next to these special purpose environment variables, any other environment variable can be imported into or set in the container via the -e, --env and --env-file options. These behave exactly as documented for the docker run command [DCK-RN].

Instead of specifying custom environment parameters for each invocation, pipelines by default automatically uses the .env.dist and .env files from each project, supporting the same file format for environment variables as docker.

Exit Status

Exit status on success is 0 (zero).

A non-zero exit status denotes an error:

Example

Not finding a file might cause exit status 2 (two), since a file was not found. With a switch like --show, however, the exit status might still be 1 (one): the error of the non-existing file only surfaces (indirectly), while the more prominent error is the failure to show all pipelines of that file.

Details

User Tests | Known Bugs | Todo

Requirements

Pipelines works best on a POSIX-compatible system with a PHP runtime.

Docker needs to be available locally as docker command as it is used to run the pipelines. Rootless Docker is supported.

A recent PHP version is favored; the pipelines command needs PHP to run. It should work with PHP 5.3.3+. A development environment should use PHP 7+, which is especially suggested for future releases. PHP 8+ is supported as well.

Installing the PHP YAML extension [PHP-YAML] is highly recommended, as it greatly improves parsing of the pipelines file; otherwise a YAML parser of pipelines' own serves as fall-back, which is not bad at all. There are subtle differences between these parsers, so why not have both at hand?

User Tests

Successful use on Ubuntu 16.04 LTS, Ubuntu 18.04 LTS, Ubuntu 20.04 LTS and Mac OS X Sierra and High Sierra with PHP and Docker installed.

Known Bugs

Installation

Phar (Download) | Composer | Phive | Source (also w/ Phar) | Full Project (Development)

Installation is available by downloading the phar archive from Github, via Composer/Packagist, or with Phive, and it should always work from source, which includes building the phar file.

Download the PHAR (PHP Archive) File

Downloads are available on Github. To obtain the latest released version, use the following URL:

https://github.com/ktomk/pipelines/releases/latest/download/pipelines.phar

Rename the phar file to just "pipelines", set the executable bit and move it into a directory where executables are found.
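A minimal sketch of these steps, assuming $HOME/bin is a directory in $PATH:

$ curl -fSL -o pipelines https://github.com/ktomk/pipelines/releases/latest/download/pipelines.phar
$ chmod +x pipelines
$ mv pipelines "$HOME/bin/"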

Downloads from Github are available since version 0.0.4. All releases are listed on the following website:

https://github.com/ktomk/pipelines/releases

Install with Composer

It is suggested to install it globally (and to have the global Composer vendor/bin in $PATH) so that it can be called with ease and there are no dependencies in a local project:

$ composer global require ktomk/pipelines

This will automatically install the latest available version. Verify the installation by invoking pipelines and outputting the version:

$ pipelines --version
pipelines version 0.0.19

To uninstall, remove the package:

$ composer global remove ktomk/pipelines

Take a look at Composer from getcomposer.org [COMPOSER], a dependency manager for PHP. Pipelines supports Composer-based installations, including upstream patches (Composer 2 is supported).

Install with Phive

Perhaps the easiest way to install when phive is available:

$ phive install pipelines

Even if your PHP version does not have the YAML extension, this should work out of the box. If you use Composer and you're a PHP aficionado, dig into phive for your systems and workflow.

Take a look at Phive from phar.io [PHARIO], the PHAR Installation and Verification Environment (PHIVE). Pipelines has full support for phar.io/phar based installations, which includes support for the phive utility, including upstream patches.

Install from Source

To install from source, check out the source repository and symlink the executable file bin/pipelines into a segment of $PATH, e.g. your $HOME/bin directory or similar. Verify the installation by invoking pipelines and outputting the version:

$ pipelines --version
pipelines version 0.0.19 # NOTE: the version is exemplary
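The checkout and symlink steps themselves might look like this (a sketch; the clone location and $HOME/bin being in $PATH are assumptions):

$ git clone https://github.com/ktomk/pipelines.git
$ ln -s "$PWD/pipelines/bin/pipelines" "$HOME/bin/pipelines"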

To create a phar archive from sources, invoke the build script from within the project's root directory:

$ composer build
building 0.0.19-1-gbba5a43 ...
pipelines version 0.0.19-1-gbba5a43
file.....: build/pipelines.phar
size.....: 240 191 bytes
SHA-1....: 9F118A276FC755C21EA548A77A9DBAF769B93524
SHA-256..: 0C38CBBB12E10E80F37ECA5C4C335BF87111AC8E8D0490D38683BB3DA7E82DEF
file.....: 1.1.0
api......: 1.1.1
extension: 2.0.2
php......: 7.2.16-[...]
uname....: [...]
count....: 62 file(s)
signature: SHA-1 E638E7B56FAAD7171AE9838DF6074714630BD486

The phar archive then is (as written in the output of the build):

build/pipelines.phar

Check the version by invoking it:

$ build/pipelines.phar --version
pipelines version 0.0.19-1-gbba5a43
# NOTE: the version is exemplary

PHP Compatibility and Undefined Behaviour

The pipelines project aims to support PHP 5.3.3 up to PHP 8.1.

Using any of its PHP functions or methods with named parameters falls into undefined behaviour.

Reproducible Phar Builds

The pipelines project has practiced reproducible builds since its first phar build. The build is self-contained, which means the repository ships with all files required to build with only few dependencies:

Reproducible builds of the phar file would be incomplete without the fine work from the Composer project's phar-utils (Seldaek/Jordi Boggiano), which is forked by the pipelines project in Timestamps.php, keeping the original license with the file (MIT) and providing bug-fixes to upstream under that license (see Phar-Utils #2 and Phar-Utils #3).

This file is used to set the timestamps inside the phar file to those of the release, as otherwise they would be the time of the build. This is the same as the Composer project does (see Composer #3927).

Additionally, in the pipelines project that file is used to change the access permissions of the files in the phar, because that behaviour has changed across PHP versions and the build is thus kept backwards and forwards compatible. As this was only noticed later in the project's history, the build might show different binaries depending on which PHP version is used (see PHP #77022 and PHP #79082) and the patch state of the timestamps file.

Install Full Project For Development

When working with git, clone the repository and then invoke composer install. The project is then set up for development.

Alternatively it's possible to do the same via composer directly:

$ composer create-project --prefer-source --keep-vcs ktomk/pipelines
...
$ cd pipelines

Verify the installation by invoking the local build:

$ composer ci

It should exit with status 0 when everything went fine and non-zero when there is an issue; Composer tells which individual script failed.

Follow the instructions in Install from Source to use the development version for pipelines.

Todo

References


All versions of pipelines with dependencies

Requires:
  • php: ^5.3.3 || ^7.0 || ^8.0
  • ext-json: *
  • justinrainbow/json-schema: ^5.2
  • ktomk/symfony-yaml: ~2.6.13
