Download the PHP package flownative/azure-blobstorage without Composer

On this page you can find all versions of the PHP package flownative/azure-blobstorage. You can download/install any of these versions without Composer; dependencies are resolved automatically.

FAQ

After the download, you only need a single include: require_once('vendor/autoload.php');. After that, import the classes you need with use statements.

Example:
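A minimal sketch (BlobRestProxy is just one class from the package's microsoft/azure-storage-blob dependency, used here purely for illustration):

    <?php
    // Load the Composer autoloader from the downloaded vendor folder
    require_once('vendor/autoload.php');

    // Import a class with a use statement, then refer to it by its short name
    use MicrosoftAzure\Storage\Blob\BlobRestProxy;

    $client = BlobRestProxy::createBlobService(
        'DefaultEndpointsProtocol=https;AccountName=myAccountName;AccountKey=myAccountKey'
    );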
If you use only one package, a project is not needed. But if you use more than one package, you cannot import the classes with use statements without a project.

In general, it is recommended to always use a project to download your libraries, since an application normally needs more than one library.
Some PHP packages are not free to download and are therefore hosted in private repositories. In that case, credentials are needed to access them. If a package comes from a private repository, please enter the credentials in the auth.json textarea.

  • Some hosting environments are not accessible via terminal or SSH, so Composer cannot be used there.
  • Using Composer can be complicated, especially for beginners.
  • Composer needs a lot of resources, which are sometimes not available on a simple webspace.
  • If you use private repositories, you don't need to share your credentials: set everything up on our site and then provide a simple download link to your team members.
  • Simplify your Composer build process: use our command line tool to download the vendor folder as a binary. This makes your build process faster, and you don't need to expose your credentials for private repositories.

Information about the package azure-blobstorage

License: MIT · Maintenance level: Love

Azure Blob Storage Adaptor for Neos and Flow

This Flow package allows you to store assets (resources) in Azure Blob Storage and publish resources there. Because Neos CMS is using Flow's resource management under the hood, this adaptor also works nicely for all kinds of assets in Neos.

Key Features

Using this connector, you can run a Neos website which does not store any asset (images, PDFs etc.) on your webserver.

Installation

The Flownative Azure Blob Storage connector is installed as a regular Flow package via Composer. For your existing project, simply add flownative/azure-blobstorage to the dependencies of your Flow or Neos distribution:
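For example, from the root directory of your Flow or Neos distribution:

    composer require flownative/azure-blobstorage

Composer will pick a version matching your distribution's Flow/Neos version constraints.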

Configuration

Credentials

In order to communicate with the Azure API, you need to provide the credentials of an account which has access to ABS. Add the following configuration to the Settings.yaml for your desired Flow context (for example in Configuration/Production/Settings.yaml) and make sure to replace the credentials with your own data:
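A sketch of what that block could look like, assuming the package uses a profile-based layout under a Flownative.Azure.BlobStorage settings path with name and key credential options (the option names are inferred from the description below and from Flownative's other storage connectors; verify against the package's own Settings.yaml):

    Flownative:
      Azure:
        BlobStorage:
          profiles:
            # only the "default" profile is supported at the moment (see below)
            default:
              credentials:
                # replace with your Azure storage account name and key
                name: 'myAccountName'
                key: 'myAccountKey'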

Instead of using name and key with the default connection string (which is DefaultEndpointsProtocol=https;AccountName=myAccountName;AccountKey=myAccountKey), the connection string can also be specified directly. This allows for providing the variations described in the Azure documentation.
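Under the same assumptions, that variant might use a single connectionString option in place of name and key (the option name is an assumption):

    Flownative:
      Azure:
        BlobStorage:
          profiles:
            default:
              credentials:
                connectionString: 'DefaultEndpointsProtocol=https;AccountName=myAccountName;AccountKey=myAccountKey'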

Right now, you can only define one connection profile, namely the "default" profile. Additional profiles may be supported in future versions.

Container Setup

You need two containers: one for use as the resource storage and one as the publishing target. How you name them is up to you. The container used as the storage should not be publicly accessible; the container used as the publishing target must have publicly accessible blobs. See the section on configuring anonymous public read access in the Azure documentation for instructions on how to do that.
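For illustration, creating the two containers with the Azure CLI could look like this (account and container names are placeholders):

    # private container used as the resource storage
    az storage container create --account-name myaccountname --name storage --public-access off

    # publishing target: blobs must be anonymously readable
    az storage container create --account-name myaccountname --name target --public-access blob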

Testing the Setup

You can test your settings by executing the connect command with a container of your choice.
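The exact command identifier is defined by this package; a call along the following lines should confirm that your credentials and container are reachable (the abs:connect name is an assumption, check ./flow help for the actual command):

    ./flow abs:connect storage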

Publish Assets to Azure Blob Storage

Once the connector package is in place, you add a new publishing target which uses that connector and assign this target to your collection.
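A sketch of such a target and collection configuration in Settings.yaml; the target class name Flownative\Azure\BlobStorage\BlobStorageTarget and the option names are assumptions modelled on Flownative's other storage connectors and should be verified against the package:

    Neos:
      Flow:
        resource:
          targets:
            azurePersistentResourcesTarget:
              target: 'Flownative\Azure\BlobStorage\BlobStorageTarget'
              targetOptions:
                container: 'target'
                keyPrefix: ''
          collections:
            # assign the new target to the standard persistent collection
            persistent:
              target: 'azurePersistentResourcesTarget'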

Since the new publishing target will be empty initially, you need to publish your assets to the new target by using the resource:publish command:
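For example, to publish the standard persistent collection:

    ./flow resource:publish --collection persistent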

This command will upload your files to the target and use the calculated remote URL for all your assets from now on.

Switching the Storage of a Collection

If you want to migrate from your default local filesystem storage to a remote storage, you need to copy all your existing persistent resources to that new storage and use that storage afterwards by default.

You start by adding a new storage with the ABS connector to your configuration. As you might also want to serve your assets from the remote storage system, you also add a target that contains your published resources.
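A sketch of such a storage and target definition; the class names Flownative\Azure\BlobStorage\BlobStorageStorage and Flownative\Azure\BlobStorage\BlobStorageTarget as well as the option names are assumptions and need to be checked against the package:

    Neos:
      Flow:
        resource:
          storages:
            azurePersistentResourcesStorage:
              storage: 'Flownative\Azure\BlobStorage\BlobStorageStorage'
              storageOptions:
                container: 'storage'
                keyPrefix: 'my-website/'
          targets:
            azurePersistentResourcesTarget:
              target: 'Flownative\Azure\BlobStorage\BlobStorageTarget'
              targetOptions:
                container: 'target'
                keyPrefix: 'my-website/'
                baseUri: 'https://assets.example.com/'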

Some words regarding the configuration options:

The keyPrefix option allows you to share one container across multiple websites or applications. All object keys will be prefixed by the given string.

The baseUri option defines the root of the publicly accessible address pointing to your published resources. In the example above, baseUri points to a subdomain which needs to be set up separately. If baseUri is empty, the Azure Blob Storage Publishing Target will determine a public URL automatically.

In order to copy the resources to the new storage, we need a temporary collection that uses the new storage and the new publication target.
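For example (tmpNewCollection is just a working name for the temporary collection):

    Neos:
      Flow:
        resource:
          collections:
            tmpNewCollection:
              storage: 'azurePersistentResourcesStorage'
              target: 'azurePersistentResourcesTarget'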

Now you can use the resource:copy command:
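For example, copying the persistent collection into the temporary one and publishing it in the same run:

    ./flow resource:copy --publish persistent tmpNewCollection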

This will copy all your files from your current storage (local filesystem) to the new remote storage. The --publish flag means that this command also publishes all the resources to the new target, and you have the same state on your current storage and publication target as on the new one.

Now you can overwrite your old collection configuration and remove the temporary one:
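The persistent collection then points at the new storage and target, and the tmpNewCollection entry can be deleted:

    Neos:
      Flow:
        resource:
          collections:
            persistent:
              storage: 'azurePersistentResourcesStorage'
              target: 'azurePersistentResourcesTarget'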

Clear caches and you're done.
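For example:

    ./flow flow:cache:flush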

Two-Container Setup

Due to the way public access for blobs is handled in Azure Blob Storage, only a two-container setup is possible: One container is private and one is publicly accessible.

In a two-container setup, resources will be duplicated: the original is stored in the "storage" container and then copied to the "target" container. Each time a new resource is created or imported, it will be stored in the storage container and then automatically published (i.e. copied) into the target container.

On the positive side, this allows you to have human- and SEO-friendly URLs pointing to your resources, because objects copied into the target container can have a more telling name which includes the original filename of the resource (check the publicPersistentResourceUris options further below).

Customizing the Public URLs

The Azure Blob Storage Target supports a way to customize the URLs which are presented to the user. Even though the paths and filenames used for objects in the containers are rather fixed (see above for the baseUri and keyPrefix options), you may want to use a reverse proxy or content delivery network to deliver resources stored in your target container. In that case, you can tell the Target to render URLs according to your own rules. It is then your responsibility to make sure that these URLs actually work.

Let's assume that we have set up a webserver acting as a reverse proxy. Requests to assets.flownative.com are re-written so that using a URI like https://assets.flownative.com/a817…cb1/logo.svg will actually deliver a file stored in the Storage container using the given SHA1.

You can tell the Target to render URIs like these by defining a pattern with placeholders:
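A sketch, using the publicPersistentResourceUris option and pattern setting mentioned in this README; the exact placeholder names are assumptions based on the example URL above:

    Neos:
      Flow:
        resource:
          targets:
            azurePersistentResourcesTarget:
              target: 'Flownative\Azure\BlobStorage\BlobStorageTarget'
              targetOptions:
                container: 'target'
                publicPersistentResourceUris:
                  pattern: 'https://assets.flownative.com/{sha1}/{filename}'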

The possible placeholders are:
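  • {sha1} – the SHA1 hash of the resource, as used in the example URL above
  • {filename} – the full filename of the resource, for example "logo.svg"
  • most likely further placeholders such as {baseUri}, {containerName}, {keyPrefix} and {fileExtension}, as offered by Flownative's comparable connectors – verify against the package documentation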

For legacy and convenience reasons, the default pattern depends on the setup being used:

The respective setup is auto-detected by the Target and the patterns are set accordingly. You may, of course, override the patterns by specifying the pattern setting as explained above.

Dynamic Custom Base Uri

Your application may take over responsibility for rendering the base URI by registering a custom method. Once the options are set, the Target calls your method and uses the returned string as the base URI.

This mechanism allows you to tweak the domain, or other parts of the base URI, depending on the current request. In the following example, we replace the domain "example.com" by "replaced.com", using a custom base URI method.
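A rough sketch of what such a method could look like; the class name, the exact signature and the options passed by the Target are assumptions, and the registration of the method with the Target (via its targetOptions) is not shown here:

    <?php
    // Illustrative class; register its method with the Azure Blob Storage Target
    // as described by the package (exact option names not shown here).
    namespace Acme\MyApp\Service;

    class ResourceUriService
    {
        /**
         * Returns the base URI to use for published resources.
         * $options is assumed to contain the statically configured baseUri.
         */
        public function renderBaseUri(array $options): string
        {
            // replace the domain "example.com" by "replaced.com"
            return str_replace('example.com', 'replaced.com', $options['baseUri'] ?? '');
        }
    }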

The following options are passed to your render method:

GZIP Compression

Azure Blob Storage supports GZIP compression for delivering files to the user; however, these files need to be compressed outside Azure Blob Storage and uploaded as GZIP-compressed data. This plugin supports transcoding resources on the fly while they are being published. Data in the storage is always stored uncompressed, as-is. Files of one of the media types configured for GZIP compression are automatically converted to GZIP while they are being published to the target.

You can configure the compression level and the media types which should be compressed as such:
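A sketch, assuming the options are called gzipCompressionLevel and gzipCompressionMediaTypes as in Flownative's other storage connectors (verify against the package):

    Neos:
      Flow:
        resource:
          targets:
            azurePersistentResourcesTarget:
              target: 'Flownative\Azure\BlobStorage\BlobStorageTarget'
              targetOptions:
                container: 'target'
                # 1 = fastest, 9 = best compression
                gzipCompressionLevel: 9
                gzipCompressionMediaTypes:
                  - 'text/plain'
                  - 'text/css'
                  - 'text/xml'
                  - 'application/javascript'
                  - 'application/json'
                  - 'image/svg+xml'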

Note that adding media types for data which is already compressed – for example images or movies – will likely increase the data size and should therefore be avoided.

Full Example Configuration for ABS
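A consolidated sketch combining the pieces discussed above (credentials, storage, target and collection); all class and option names carry the same caveats as in the earlier examples and should be checked against the package:

    Flownative:
      Azure:
        BlobStorage:
          profiles:
            default:
              credentials:
                name: 'myAccountName'
                key: 'myAccountKey'

    Neos:
      Flow:
        resource:
          storages:
            azurePersistentResourcesStorage:
              storage: 'Flownative\Azure\BlobStorage\BlobStorageStorage'
              storageOptions:
                container: 'storage'
                keyPrefix: 'my-website/'
          targets:
            azurePersistentResourcesTarget:
              target: 'Flownative\Azure\BlobStorage\BlobStorageTarget'
              targetOptions:
                container: 'target'
                keyPrefix: 'my-website/'
                baseUri: 'https://assets.example.com/'
          collections:
            persistent:
              storage: 'azurePersistentResourcesStorage'
              target: 'azurePersistentResourcesTarget'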


All versions of azure-blobstorage with dependencies

Package requirements:

  • php: ^8.0
  • neos/flow: ^7.0 || ^8.0 || ^9.0 || dev-main
  • microsoft/azure-storage-blob: ^1.5

Composer command for our command line client (download client): the client runs in any environment, no specific PHP version is required, and the first 20 API calls are free. Alternatively, use the standard Composer command.
