
LLM Chain

PHP library for building LLM-based features and applications.

This library is not stable yet, but rather experimental. Feel free to try it out, give feedback, ask questions, contribute, or share your use cases. Abstractions, concepts, and interfaces are not final and are potentially subject to change.

Requirements

  • PHP >= 8.2 (per the package's Composer constraint)
Installation

The recommended way to install LLM Chain is through Composer:
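The command itself was not preserved on this page; for a package named php-llm/llm-chain, the standard Composer command would be:

```shell
composer require php-llm/llm-chain
```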

When using Symfony Framework, check out the integration bundle php-llm/llm-chain-bundle.

Examples

See the examples folder to run example implementations using this library. Depending on the example, you need to export different environment variables for API keys or deployment configurations, or create a .env.local based on the .env file.

To run all examples, use make run-all-examples or php example.

For a more sophisticated demo, see the Symfony Demo Application.

Basic Concepts & Usage

Models & Platforms

LLM Chain categorizes two main types of models: Language Models and Embeddings Models.

Language Models, like GPT, Claude, and Llama, are the essential centerpiece of LLM applications, while Embeddings Models support them by providing vector representations of text.

Those models are provided by different platforms, like OpenAI, Azure, Google, Replicate, and others.

Example Instantiation
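The instantiation snippet did not survive extraction. As a hedged sketch (the namespaces, here assumed under PhpLlm\LlmChain\Bridge\OpenAI, and the model constant may differ between versions), an OpenAI platform with a GPT model and an embeddings model could look like:

```php
<?php

use PhpLlm\LlmChain\Bridge\OpenAI\Embeddings;
use PhpLlm\LlmChain\Bridge\OpenAI\GPT;
use PhpLlm\LlmChain\Bridge\OpenAI\PlatformFactory;

require_once __DIR__.'/vendor/autoload.php';

// The platform wraps HTTP transport and authentication for the provider
$platform = PlatformFactory::create($_ENV['OPENAI_API_KEY']);

// A language model and an embeddings model running on that platform
$llm = new GPT(GPT::GPT_4O_MINI);
$embeddings = new Embeddings();
```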

Supported Models & Platforms

See issue #28 for planned support of other models and platforms.

Chain & Messages

The core feature of LLM Chain is to interact with language models via messages. This interaction is done by sending a MessageBag to a Chain, which takes care of LLM invocation and response handling.

Messages can be of different types, most importantly UserMessage, SystemMessage, or AssistantMessage, and can also have different content types, like Text, Image or Audio.

Example Chain call with messages
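A sketch of such a call, following the concepts above (exact namespaces and constructor signatures are assumptions and may vary by version):

```php
<?php

use PhpLlm\LlmChain\Chain;
use PhpLlm\LlmChain\Model\Message\Message;
use PhpLlm\LlmChain\Model\Message\MessageBag;

// $platform and $llm as instantiated in the previous section
$chain = new Chain($platform, $llm);

$messages = new MessageBag(
    Message::forSystem('You are a helpful chatbot answering questions about LLM Chain.'),
    Message::ofUser('What is a MessageBag?'),
);

$response = $chain->call($messages);
echo $response->getContent().\PHP_EOL;
```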

The MessageInterface and Content interfaces help to customize this process if needed, e.g. for additional state handling.

Options

The second parameter of the call method is an array of options, which can be used to configure the behavior of the chain, like stream, output_structure, or response_format. This behavior is a combination of features provided by the underlying model and platform, or additional features provided by processors registered to the chain.
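For illustration, a call passing one of the options named above might look like this (whether an option is honored depends on the model, platform, and registered processors):

```php
// 'stream' is one of the options mentioned above; with it enabled,
// the response content is streamed instead of returned as one string
$response = $chain->call($messages, [
    'stream' => true,
]);
```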

The design of the options for additional features provided by LLM Chain can be found in this documentation. For model- and platform-specific options, please refer to the respective documentation.

Code Examples

  1. Anthropic's Claude: chat-claude-anthropic.php
  2. OpenAI's GPT with Azure: chat-gpt-azure.php
  3. OpenAI's GPT: chat-gpt-openai.php
  4. OpenAI's o1: chat-o1-openai.php
  5. Meta's Llama with Ollama: chat-llama-ollama.php
  6. Meta's Llama with Replicate: chat-llama-replicate.php
  7. Google's Gemini with OpenRouter: chat-gemini-openrouter.php

Tools

To integrate LLMs with your application, LLM Chain supports tool calling out of the box. Tools are services that can be called by the LLM to provide additional features or process data.

Tool calling can be enabled by registering the processors in the chain:
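A sketch of that registration (constructor signatures and namespaces are assumptions and have changed between versions; YourTool is a hypothetical placeholder for one of your tool services):

```php
use PhpLlm\LlmChain\Chain;
use PhpLlm\LlmChain\Chain\ToolBox\ChainProcessor;
use PhpLlm\LlmChain\Chain\ToolBox\ToolBox;

// The ToolBox holds the tool services; the processor hooks it into the chain
$toolBox = new ToolBox([new YourTool()]);
$toolProcessor = new ChainProcessor($toolBox);

// Registered both as input and output processor to handle the full tool loop
$chain = new Chain($platform, $llm, inputProcessors: [$toolProcessor], outputProcessors: [$toolProcessor]);
```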

Custom tools can basically be any class, but must be configured with the #[AsTool] attribute.
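A minimal example tool, following the attribute-based configuration described above (the AsTool namespace is an assumption; the attribute carries the tool name and a description for the LLM):

```php
<?php

use PhpLlm\LlmChain\Chain\ToolBox\Attribute\AsTool;

#[AsTool('company_name', 'Provides the name of your company')]
final class CompanyName
{
    public function __invoke(): string
    {
        return 'ACME Corp';
    }
}
```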

Tool Return Value

In the end, the tool's response needs to be a string, but LLM Chain converts arrays and objects that implement the JsonSerializable interface to JSON strings for you, so you can return arrays or objects directly from your tool.

Tool Methods

You can configure the method to be called by the LLM with the #[AsTool] attribute and have multiple tools per class:
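A sketch of one class exposing two tools via per-method #[AsTool] configuration (the method argument and the namespace are assumptions based on the description above; the actual weather lookups are left out):

```php
<?php

use PhpLlm\LlmChain\Chain\ToolBox\Attribute\AsTool;

#[AsTool('weather_current', 'Returns the current weather for a location', method: 'current')]
#[AsTool('weather_forecast', 'Returns the weather forecast for a location', method: 'forecast')]
final class OpenMeteo
{
    public function current(float $latitude, float $longitude): array
    {
        return []; // hypothetical API call left out
    }

    public function forecast(float $latitude, float $longitude): array
    {
        return []; // hypothetical API call left out
    }
}
```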

Tool Parameters

LLM Chain generates a JSON Schema representation for all tools in the ToolBox, based on the #[AsTool] attribute, method arguments, and param comments in the doc block. Additionally, JSON Schema supports validation rules, which are partially supported by LLMs like GPT.

To leverage this, configure the #[ToolParameter] attribute on the method arguments of your tool:
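An illustrative sketch (attribute options like pattern, minimum, and maximum mirror JSON Schema validation keywords; the exact option names and namespaces are assumptions):

```php
<?php

use PhpLlm\LlmChain\Chain\ToolBox\Attribute\AsTool;
use PhpLlm\LlmChain\Chain\ToolBox\Attribute\ToolParameter;

#[AsTool('my_tool', 'Example tool with parameter validation rules')]
final class MyTool
{
    /**
     * @param string $name   The name of an object
     * @param int    $number The number of an object
     */
    public function __invoke(
        #[ToolParameter(pattern: '/([a-z0-9]){5}/')]
        string $name,
        #[ToolParameter(minimum: 0, maximum: 10)]
        int $number,
    ): string {
        return sprintf('%s-%d', $name, $number);
    }
}
```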

See attribute class ToolParameter for all available options.

[!NOTE] Please be aware that this is only converted into a JSON Schema for the LLM to respect; it is not validated by LLM Chain.

Fault Tolerance

To gracefully handle errors that occur during tool calling, e.g. wrong tool names or runtime errors, you can use the FaultTolerantToolBox as a decorator for the ToolBox. It will catch the exceptions and return readable error messages to the LLM.

Tool Filtering

To limit the tools provided to the LLM in a specific chain call to a subset of the configured tools, you can use the tools option with a list of tool names:
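For example (the tool name 'tavily_search' is an assumption based on the built-in Tavily tool listed below):

```php
// Only the listed tools are exposed to the LLM for this call
$response = $chain->call($messages, ['tools' => ['tavily_search']]);
```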

Tool Result Interception

To react to the result of a tool, you can implement an EventListener or EventSubscriber that listens to the ToolCallsExecuted event. This event is dispatched after the ToolBox has executed all current tool calls and enables you to skip the next LLM call by setting a response yourself:
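A sketch of such a listener (the event's property names and the response class are assumptions; the idea is that assigning a response suppresses the follow-up LLM call):

```php
use PhpLlm\LlmChain\Chain\ToolBox\Event\ToolCallsExecuted;
use PhpLlm\LlmChain\Model\Response\StructuredResponse;

$eventDispatcher->addListener(ToolCallsExecuted::class, function (ToolCallsExecuted $event): void {
    foreach ($event->toolCallResults as $toolCallResult) {
        // If a weather tool was called, answer with its raw result directly
        if (str_starts_with($toolCallResult->toolCall->name, 'weather_')) {
            $event->response = new StructuredResponse($toolCallResult->result);
        }
    }
});
```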

Code Examples (with built-in tools)

  1. Clock Tool: toolbox-clock.php
  2. SerpAPI Tool: toolbox-serpapi.php
  3. Tavily Tool: toolbox-tavily.php
  4. Weather Tool with Event Listener: toolbox-weather-event.php
  5. Wikipedia Tool: toolbox-wikipedia.php
  6. YouTube Transcriber Tool: toolbox-youtube.php (with streaming)

Document Embedding, Vector Stores & Similarity Search (RAG)

LLM Chain supports document embedding and similarity search using vector stores like ChromaDB, Azure AI Search, MongoDB Atlas Search, or Pinecone.

For populating a vector store, LLM Chain provides the service DocumentEmbedder, which requires an instance of an EmbeddingsModel and one of StoreInterface, and works with a collection of Document objects as input:
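A sketch of populating a store (class names follow the text above; constructor and method signatures are assumptions):

```php
use PhpLlm\LlmChain\DocumentEmbedder;

// $platform, $embeddings (an EmbeddingsModel) and $store (a StoreInterface)
// are set up elsewhere; $documents is a collection of Document objects
$embedder = new DocumentEmbedder($platform, $embeddings, $store);
$embedder->embed($documents);
```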

The collection of Document instances is usually created by text input of your domain entities:

[!NOTE] Not all data needs to be stored in the vector store; you could also hydrate the original data entry based on the ID or metadata after retrieval from the store.

In the end the chain is used in combination with a retrieval tool on top of the vector store, e.g. the built-in SimilaritySearch tool provided by the library:

Code Examples

  1. MongoDB Store: store-mongodb-similarity-search.php
  2. Pinecone Store: store-pinecone-similarity-search.php

Supported Stores

See issue #28 for planned support of other stores.

Advanced Usage & Features

Structured Output

A typical use-case of LLMs is to classify and extract data from unstructured sources, which is supported by some models by features like Structured Output or providing a Response Format.

PHP Classes as Output

LLM Chain supports that use case by abstracting away the hassle of defining and providing schemas to the LLM and of converting the response back into PHP objects.

To achieve this, a specific chain processor needs to be registered:
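A hedged sketch of registering such a processor and requesting a PHP class as output (the processor and factory class names, the Symfony Serializer dependency, and the MathReasoning output class are all assumptions based on typical versions of the library):

```php
use PhpLlm\LlmChain\Chain;
use PhpLlm\LlmChain\Chain\StructuredOutput\ChainProcessor;
use PhpLlm\LlmChain\Chain\StructuredOutput\ResponseFormatFactory;

// $serializer: a Symfony Serializer instance used to denormalize the LLM response
$processor = new ChainProcessor(new ResponseFormatFactory(), $serializer);
$chain = new Chain($platform, $llm, [$processor], [$processor]);

// The response content is converted into an instance of the given class
$response = $chain->call($messages, ['output_structure' => MathReasoning::class]);
```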

Array Structures as Output

PHP array structures as response_format are also supported, which likewise requires the chain processor mentioned above:

Code Examples

  1. Structured Output (PHP class): structured-output-math.php
  2. Structured Output (array): structured-output-clock.php

Response Streaming

Since LLMs usually generate a response word by word, most of them also support streaming the response using Server-Sent Events. LLM Chain supports this by abstracting the conversion and returning a Generator as the content of the response.

In a terminal application this generator can be used directly; in a web app, an additional layer like Mercure needs to be used.
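In code, the streamed content might be consumed like this (a sketch based on the stream option and Generator behavior described above):

```php
$response = $chain->call($messages, ['stream' => true]);

// With streaming enabled, getContent() yields the response chunk by chunk
foreach ($response->getContent() as $word) {
    echo $word;
}
```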

Code Examples

  1. Streaming Claude: stream-claude-anthropic.php
  2. Streaming GPT: stream-gpt-openai.php

Image Processing

Some LLMs also support images as input, which LLM Chain supports as Content type within the UserMessage:
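A sketch of a user message carrying an image (the Image constructor signature and the namespaces are assumptions):

```php
use PhpLlm\LlmChain\Model\Message\Content\Image;
use PhpLlm\LlmChain\Model\Message\Message;
use PhpLlm\LlmChain\Model\Message\MessageBag;

$messages = new MessageBag(
    Message::ofUser(
        'Describe the image as a comedian would do it.',
        new Image(__DIR__.'/image.jpg'), // binary file; a URL variant is shown in the examples below
    ),
);
```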

Code Examples

  1. Image Description: image-describer-binary.php (with binary file)
  2. Image Description: image-describer-url.php (with URL)

Audio Processing

Similar to images, some LLMs also support audio as input, which is just another Content type within the UserMessage:

Code Examples

  1. Audio Description: audio-describer.php

Embeddings

Creating embeddings of words, sentences, or paragraphs is a typical use case around the interaction with LLMs, and therefore LLM Chain implements an EmbeddingsModel interface with various models, see above.

The standalone usage results in a Vector instance:
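A hedged sketch of that standalone usage, with $platform and $embeddings as instantiated earlier (the request()/getContent() shape and the getDimensions() method name are assumptions):

```php
// Request vector representations directly from the embeddings model
$vectors = $platform->request($embeddings, 'Hello World')->getContent();

// Each entry is a Vector instance; getDimensions() is an assumed accessor
echo 'Dimensions: '.$vectors[0]->getDimensions().\PHP_EOL;
```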

Code Examples

  1. OpenAI's Embeddings: embeddings-openai.php
  2. Voyage's Embeddings: embeddings-voyage.php

Parallel Platform Calls

Platform supports multiple model calls in parallel, which can be useful to speed up processing:
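A sketch of firing several calls concurrently (this relies on responses being lazy so the HTTP requests can overlap; the exact API shape is an assumption):

```php
use PhpLlm\LlmChain\Model\Message\Message;
use PhpLlm\LlmChain\Model\Message\MessageBag;

// Start several requests; the underlying HTTP client can run them concurrently
$responses = [];
foreach (['What is the capital of France?', 'What is the capital of Italy?'] as $question) {
    $responses[] = $platform->request($llm, new MessageBag(Message::ofUser($question)));
}

// Consuming the content blocks until each response has arrived
foreach ($responses as $response) {
    echo $response->getContent().\PHP_EOL;
}
```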

[!NOTE] This requires cURL and the ext-curl extension to be installed.

Code Examples

  1. Parallel GPT Calls: parallel-chat-gpt.php
  2. Parallel Embeddings Calls: parallel-embeddings.php

[!NOTE] Please be aware that some embeddings models also support batch processing out of the box.

Input & Output Processing

The behavior of the Chain is extendable with services that implement the InputProcessor and/or OutputProcessor interfaces. They are provided while instantiating the Chain instance:

InputProcessor

InputProcessor instances are called in the chain before handing over the MessageBag and the $options array to the LLM and are able to mutate both on top of the Input instance provided.
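A sketch of a custom input processor (the interface namespace and the exact mutation API on Input are assumptions):

```php
use PhpLlm\LlmChain\Chain\Input;
use PhpLlm\LlmChain\Chain\InputProcessor;

final class TemperatureDefaultProcessor implements InputProcessor
{
    public function processInput(Input $input): void
    {
        // Inject a default option into every call going through the chain;
        // getOptions()/setOptions() are assumed accessor names
        $options = $input->getOptions();
        $options['temperature'] ??= 0.7;
        $input->setOptions($options);
    }
}
```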

OutputProcessor

OutputProcessor instances are called after the LLM provided a response and can - on top of options and messages - mutate or replace the given response:

Chain Awareness

Both Input and Output instances provide access to the LLM used by the Chain, but the chain itself is only provided if the processor implements the ChainAwareProcessor interface, which can be combined with the ChainAwareTrait:

Contributions

Contributions are always welcome, so feel free to join the development of this library.

Current Contributors

LLM Chain Contributors

Made with contrib.rocks.

Fixture Licenses

For testing multi-modal features, the repository contains binary media content, with the following owners and licenses:


All versions of llm-chain with dependencies

  • php >=8.2
  • oskarstark/enum-helper ^1.5
  • phpdocumentor/reflection-docblock ^5.4
  • phpstan/phpdoc-parser ^2.1
  • psr/cache ^3.0
  • psr/log ^3.0
  • symfony/clock ^6.4 || ^7.1
  • symfony/http-client ^6.4 || ^7.1
  • symfony/property-access ^6.4 || ^7.1
  • symfony/property-info ^6.4 || ^7.1
  • symfony/serializer ^6.4 || ^7.1
  • symfony/type-info ^7.2.3
  • symfony/uid ^6.4 || ^7.1
  • webmozart/assert ^1.11