# LLM Agents PHP SDK
LLM Agents is a PHP library for building and managing Language Model (LLM) based agents. It provides a framework for creating autonomous agents that can perform complex tasks, make decisions, and interact with various tools and APIs.
The library enables developers to integrate LLM capabilities into PHP applications efficiently, allowing for the creation of intelligent systems that can understand and respond to user inputs, process information, and carry out actions based on that processing.
> For a comprehensive explanation of LLM agents and their applications, read the article:
> [A PHP dev's dream: An AI home that really gets you](https://butschster.medium.com/a-php-devs-dream-an-ai-home-that-really-gets-you-dd97ae2ca0b0)
[![PHP](https://img.shields.io/packagist/php-v/llm-agents/agents.svg?style=flat-square)](https://packagist.org/packages/llm-agents/agents)
[![Latest Version on Packagist](https://img.shields.io/packagist/v/llm-agents/agents.svg?style=flat-square)](https://packagist.org/packages/llm-agents/agents)
[![Total Downloads](https://img.shields.io/packagist/dt/llm-agents/agents.svg?style=flat-square)](https://packagist.org/packages/llm-agents/agents)
> For a complete example with sample agents and a CLI interface to interact with them, check out our sample application repository: https://github.com/llm-agents-php/sample-app.
>
> This sample app demonstrates practical implementations and usage patterns of the LLM Agents library.
The package does not include any specific LLM implementation. Instead, it provides a framework for creating agents that
can interact with any LLM service or API.
## ✨ Key Features
- **🤖 Agent Creation:** Create and configure **LLM-based agents** in PHP with customizable behaviors.
- **🔧 Tool Integration:** Seamlessly integrate various tools and APIs for agent use in PHP applications.
- **🧠 Memory Management:** Support for agent memory, enabling information retention and recall across interactions.
- **💡 Prompt Management:** Efficient handling of prompts and instructions to guide agent behavior.
- **🔌 Extensible Architecture:** Easily add new agent types, tools, and capabilities to your PHP projects.
- **🤝 Multi-Agent Support:** Build systems with multiple interacting agents for complex problem-solving scenarios in PHP.
## 🚀 Installation
You can install the LLM Agents package via Composer:
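Using the package name shown on the Packagist badges above:

```shell
composer require llm-agents/agents
```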
## 💻 Usage
### ✅ Creating an Agent
To create an agent, you'll need to define its behavior, tools, and configuration. Here's a basic example:
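The sketch below follows the patterns used in the sample application; the concrete names (`SiteStatusCheckerAgent`, `CheckSiteAvailabilityTool`) and the model identifier are illustrative, so adapt them to your project:

```php
<?php

declare(strict_types=1);

use LLM\Agents\Agent\Agent;
use LLM\Agents\Agent\AgentAggregate;
use LLM\Agents\Solution\Model;
use LLM\Agents\Solution\ToolLink;

final class SiteStatusCheckerAgent extends AgentAggregate
{
    public const NAME = 'site_status_checker';

    public static function create(): self
    {
        $agent = new Agent(
            key: self::NAME,
            name: 'Site Status Checker',
            description: 'This agent checks whether websites are online.',
            instruction: 'You are a site status checker. Use the provided tool to verify that a URL is reachable and report the result to the user.',
        );

        $aggregate = new self($agent);

        // Associate the LLM model and the tool the agent is allowed to call.
        $aggregate->addAssociation(new Model(model: 'gpt-4o-mini'));
        $aggregate->addAssociation(new ToolLink(name: CheckSiteAvailabilityTool::NAME));

        return $aggregate;
    }
}
```

The aggregate bundles the agent definition together with its model and tool associations, so the executor can resolve everything it needs from a single object.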
### ✅ Implementing a Tool
Now, let's implement the tool used by this agent:
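A possible implementation, assuming the `PhpTool` base class and an `execute()` method as in the sample application (the availability check itself is a simplified illustration):

```php
<?php

declare(strict_types=1);

use LLM\Agents\Tool\PhpTool;

/**
 * @extends PhpTool<CheckSiteAvailabilityInput>
 */
final class CheckSiteAvailabilityTool extends PhpTool
{
    public const NAME = 'check_site_availability';

    public function __construct()
    {
        parent::__construct(
            name: self::NAME,
            inputSchema: CheckSiteAvailabilityInput::class,
            description: 'Checks whether the given URL is reachable and returns its HTTP status line.',
        );
    }

    public function execute(object $input): string
    {
        // Fetch only the response headers to keep the check lightweight.
        $headers = @\get_headers($input->url);

        // Tools return strings; JSON keeps the result easy for the LLM to parse.
        return \json_encode([
            'available' => $headers !== false,
            'status' => $headers[0] ?? null,
        ]);
    }
}
```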
And the input schema for the tool:
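A matching input DTO, assuming the `Field` attribute from `spiral/json-schema-generator` (as used in the sample application) to describe the JSON schema:

```php
<?php

declare(strict_types=1);

use Spiral\JsonSchemaGenerator\Attribute\Field;

final class CheckSiteAvailabilityInput
{
    public function __construct(
        #[Field(title: 'URL', description: 'The full URL to check, including the scheme.')]
        public readonly string $url,
    ) {}
}
```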
### ✅ Linking Agents
LLM Agents supports creating complex systems by linking multiple agents together. This allows you to build hierarchical
or collaborative agent networks. Here's how you can link one agent to another:
#### Creating an Agent Link
To link one agent to another, you use the `AgentLink` class. Here's an example of how to modify our
`SiteStatusCheckerAgent` to include a link to another agent:
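A sketch of the association (the `NetworkDiagnosticsOutput` DTO is hypothetical):

```php
use LLM\Agents\Solution\AgentLink;

// Inside SiteStatusCheckerAgent::create(), alongside the other associations:
$aggregate->addAssociation(
    new AgentLink(
        name: 'network_diagnostics_agent',
        // A DTO class describing the output format we expect back.
        outputSchema: NetworkDiagnosticsOutput::class,
    ),
);
```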
In this example, we're linking a `network_diagnostics_agent`. The `outputSchema` parameter specifies the expected output
format from the linked agent. The output schema is used to standardize the data format that should be returned by the
linked agent.
#### Using a Linked Agent
We don't provide an implementation for the linked agent here, but you can call it from your agent's execution.
Here's an example of how you might call the linked agent:
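One way to do this is a tool that delegates a question to another agent through the executor. The class and method names below follow the sample application but should be treated as assumptions:

```php
<?php

declare(strict_types=1);

use LLM\Agents\AgentExecutor\ExecutorInterface;
use LLM\Agents\LLM\Response\ChatResponse;
use LLM\Agents\Tool\PhpTool;

final class AskAgentTool extends PhpTool
{
    public const NAME = 'ask_agent';

    public function __construct(
        private readonly ExecutorInterface $executor,
    ) {
        parent::__construct(
            name: self::NAME,
            inputSchema: AskAgentInput::class,
            description: 'Forwards a question to another agent and returns its answer.',
        );
    }

    public function execute(object $input): string|\Stringable
    {
        // Delegate the question to the linked agent via the executor.
        $execution = $this->executor->execute(
            agent: $input->name,
            prompt: $input->question,
        );

        $response = $execution->result;

        return $response instanceof ChatResponse
            ? (string) $response->content
            : 'The linked agent did not return a textual answer.';
    }
}
```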
And the input schema for the tool:
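Its input schema needs the linked agent's key and the question to forward (again assuming the `Field` attribute from `spiral/json-schema-generator`):

```php
use Spiral\JsonSchemaGenerator\Attribute\Field;

final class AskAgentInput
{
    public function __construct(
        #[Field(title: 'Agent key', description: 'The key of the linked agent to ask.')]
        public readonly string $name,
        #[Field(title: 'Question', description: 'The question to forward to the linked agent.')]
        public readonly string $question,
    ) {}
}
```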
Then add the tool to the agent that has linked agents. When the agent runs, it will call the linked agent if it decides to do so.
### ✅ Executing an Agent
To execute an agent, you'll use the `AgentExecutor` class:
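A minimal sketch, assuming the executor is resolved from a DI container and accepts the agent key and prompt as named arguments:

```php
use LLM\Agents\AgentExecutor\ExecutorInterface;

/** @var ExecutorInterface $executor Resolved from your DI container. */
$execution = $executor->execute(
    agent: SiteStatusCheckerAgent::NAME,
    prompt: 'Is https://example.com reachable right now?',
);

// The result holds the LLM's final response for this run.
$result = $execution->result;
```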
This example demonstrates how to create a simple agent that can perform a specific task using a custom tool.
### ✅ Agent Memory and Prompts
Agents can use memory and predefined prompts to guide their behavior:
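For example, memory entries and predefined prompts can be attached as metadata when building the aggregate (keys and contents below are illustrative):

```php
use LLM\Agents\Solution\MetadataType;
use LLM\Agents\Solution\SolutionMetadata;

// Inside the agent factory, alongside the other associations:
$aggregate->addMetadata(
    // A memory entry: an "experience" the agent retains across interactions.
    new SolutionMetadata(
        type: MetadataType::Memory,
        key: 'dns_hint',
        content: 'If a site is unreachable, suggest checking its DNS records first.',
    ),
    // A predefined prompt that a UI can offer to the user.
    new SolutionMetadata(
        type: MetadataType::Prompt,
        key: 'check_example',
        content: 'Check the status of https://example.com',
    ),
);
```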
## Executor Interceptors
The package includes a powerful interceptor system for the executor. This allows developers to inject data
into prompts, modify execution options, and handle LLM responses at various stages of the execution process. Here's a
detailed look at each available interceptor:
### Available Interceptors
1. **GeneratePromptInterceptor**
- **Purpose**: Generates the initial prompt for the agent.
- **Functionality**:
- Uses the `AgentPromptGeneratorInterface` to create a comprehensive prompt.
- Incorporates agent instructions, memory, and user input into the prompt.
- **When to use**: Always include this interceptor to ensure proper prompt generation.
2. **InjectModelInterceptor**
- **Purpose**: Injects the appropriate language model for the agent.
- **Functionality**:
- Retrieves the model associated with the agent.
- Adds the model information to the execution options.
- **When to use**: Include this interceptor when you want to ensure the correct model is used for each agent,
especially in multi-agent systems.
3. **InjectToolsInterceptor**
- **Purpose**: Adds the agent's tools to the execution options.
- **Functionality**:
- Retrieves all tools associated with the agent.
- Converts tool schemas into a format understood by the LLM.
- Adds tool information to the execution options.
- **When to use**: Include this interceptor when your agent uses tools and you want them available during execution.
4. **InjectOptionsInterceptor**
- **Purpose**: Incorporates additional configuration options for the agent.
- **Functionality**:
- Retrieves any custom configuration options defined for the agent.
- Adds these options to the execution options.
- **When to use**: Include this interceptor when you have agent-specific configuration that should be applied during
execution.
5. **InjectResponseIntoPromptInterceptor**
- **Purpose**: Adds the LLM's response back into the prompt for continuous conversation.
- **Functionality**:
- Takes the LLM's response from the previous execution.
- Appends this response to the existing prompt.
- **When to use**: Include this interceptor in conversational agents or when context from previous interactions is
important.
### Creating Custom Interceptors
You can create custom interceptors to add specialized behavior to your agent execution pipeline.
Here's an example of a custom interceptor that adds time-aware and user-specific context to the prompt:
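A sketch of such an interceptor. The interface and helper method names (`ExecutorInterceptorInterface`, `withAddedMessage`, `withPrompt`) are assumptions based on the package's structure; verify them against the source:

```php
<?php

declare(strict_types=1);

use LLM\Agents\Agent\Execution;
use LLM\Agents\AgentExecutor\ExecutionInput;
use LLM\Agents\AgentExecutor\ExecutorInterceptorInterface;
use LLM\Agents\AgentExecutor\InterceptorHandler;
use LLM\Agents\LLM\Prompt\Chat\MessagePrompt;
use LLM\Agents\LLM\Prompt\Chat\Prompt;

final class UserContextInterceptor implements ExecutorInterceptorInterface
{
    public function execute(ExecutionInput $input, InterceptorHandler $next): Execution
    {
        $prompt = $input->prompt;

        if ($prompt instanceof Prompt) {
            // Prepend a system message with time-aware, user-specific context.
            $prompt = $prompt->withAddedMessage(
                MessagePrompt::system(
                    \sprintf('Current server time is %s.', \date('Y-m-d H:i')),
                ),
            );
        }

        // Pass the modified input down the interceptor chain.
        return $next($input->withPrompt($prompt));
    }
}
```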
Then, you can add your custom interceptor to the executor:
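For instance (assuming a `withInterceptor()` method on the executor):

```php
$execution = $executor
    ->withInterceptor(new UserContextInterceptor())
    ->execute(
        agent: SiteStatusCheckerAgent::NAME,
        prompt: 'Is https://example.com reachable?',
    );
```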
This example demonstrates how an interceptor can enrich execution with dynamic context. Context-injection interceptors like this are valuable for personalizing responses, adding runtime state such as the current time or user profile, or adapting prompts per request.
**You can create various other types of interceptors to suit your specific needs, such as:**
- Caching interceptors to store and retrieve responses for identical prompts
- Rate limiting interceptors to control the frequency of API calls
- Error handling interceptors to gracefully manage and log exceptions
- Analytics interceptors to gather data on agent performance and usage patterns
## Implementing Required Interfaces
To use the LLM Agents package, you'll need to implement the required interfaces in your project.
### ✅ LLMInterface
It serves as a bridge between your application and the LLM provider you're using, such as OpenAI, Claude, etc.
Here is an example of `MessageMapper` that converts messages to the format required by the LLM API:
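The mapper's job is to translate the SDK's message objects into your provider's wire format. A rough sketch targeting an OpenAI-style chat API (the message object's `role` and `content` properties are assumed):

```php
final class MessageMapper
{
    /**
     * Converts the SDK's prompt messages into the array shape
     * expected by an OpenAI-style chat completions endpoint.
     */
    public function map(iterable $messages): array
    {
        $result = [];

        foreach ($messages as $message) {
            $result[] = [
                'role' => $message->role->value,
                'content' => $message->content,
            ];
        }

        return $result;
    }
}
```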
## Prompt Generation
The prompt generator prepares the context and instructions for an agent before it processes a user's request. It ensures that the agent has all necessary information, including its own instructions, memory, associated agents, and any relevant session context. A generated prompt typically contains:
- System message with the agent's instruction and important rules.
- System message with the agent's memory (experiences).
- System message about associated agents (if any).
- System message with session context (if provided).
- User message with the actual prompt.
You can customize the prompt generation logic to suit your specific requirements.
Instead of implementing the `AgentPromptGeneratorInterface` yourself, you can use the `llm-agents/prompt-generator`
package as an implementation. This package provides a flexible and extensible system for generating chat prompts with
all required system and user messages for LLM agents.
> **Note:** Read full documentation of the `llm-agents/prompt-generator`
> package [here](https://github.com/llm-agents-php/prompt-generator)
To use it, first install the package:
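Via Composer:

```shell
composer require llm-agents/prompt-generator
```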
Then, set it up in your project. Here's an example using Spiral Framework:
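A sketch of a bootloader wiring the pipeline; the interceptor class names are taken from the `prompt-generator` README and should be verified against the package:

```php
<?php

declare(strict_types=1);

use LLM\Agents\PromptGenerator\Interceptor\AgentMemoryInjector;
use LLM\Agents\PromptGenerator\Interceptor\InstructionGenerator;
use LLM\Agents\PromptGenerator\Interceptor\UserPromptInjector;
use LLM\Agents\PromptGenerator\PromptGeneratorPipeline;
use Spiral\Boot\Bootloader\Bootloader;

final class PromptGeneratorBootloader extends Bootloader
{
    protected const SINGLETONS = [
        PromptGeneratorPipeline::class => [self::class, 'initPromptGenerator'],
    ];

    protected function initPromptGenerator(): PromptGeneratorPipeline
    {
        $pipeline = new PromptGeneratorPipeline();

        // Each interceptor contributes one part of the final chat prompt.
        return $pipeline->withInterceptor(
            new InstructionGenerator(),
            new AgentMemoryInjector(),
            new UserPromptInjector(),
        );
    }
}
```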
### ✅ SchemaMapperInterface
This interface is responsible for handling conversions between JSON schemas and PHP objects.
We provide a schema mapper package that you can use to implement the `SchemaMapperInterface` in your project. This
package is a super handy JSON Schema Mapper for the LLM Agents project.
**To install the package:**
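```shell
composer require llm-agents/json-schema-mapper
```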
> **Note:** Read full documentation of the `llm-agents/json-schema-mapper`
> package [here](https://github.com/llm-agents-php/schema-mapper)
### ✅ ContextFactoryInterface
The context factory provides a clean way to pass execution-specific data through the system without tightly coupling components or overcomplicating method signatures.
### ✅ OptionsFactoryInterface
The options object is a simple key-value store that allows you to store and retrieve configuration options that can be passed
to LLM clients and other components. For example, you can pass a model name, max tokens, and other configuration options
to an LLM client.
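A sketch of how options might be created and populated (the fluent `with()` method is an assumption; check your implementation):

```php
// Create an options object via your OptionsFactoryInterface implementation
// and attach LLM-client configuration to it.
$options = $optionsFactory->create()
    ->with('model', 'gpt-4o-mini')
    ->with('max_tokens', 512);
```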
## 🏗️ Architecture
The LLM Agents package is built around several key components:
- **AgentInterface**: Defines the contract for all agents.
- **AgentAggregate**: Implements AgentInterface and aggregates an Agent instance with other Solution objects.
- **Agent**: Represents a single agent with its key, name, description, and instruction.
- **Solution**: Abstract base class for various components like Model and ToolLink.
- **AgentExecutor**: Responsible for executing agents and managing their interactions.
- **Tool**: Represents a capability that an agent can use to perform tasks.
For a visual representation of the architecture, refer to the class diagram in the documentation.
## 🎨 Class Diagram
Here's a class diagram illustrating the key components of the LLM Agents PHP SDK:
## 🙌 Want to Contribute?
Thank you for considering contributing to the llm-agents-php community! We are open to all kinds of contributions. If
you want to:
- 🤔 [Suggest a feature](https://github.com/llm-agents-php/agents/issues/new?assignees=&labels=type%3A+enhancement&projects=&template=2-feature-request.yml&title=%5BFeature%5D%3A+)
- 🐛 [Report an issue](https://github.com/llm-agents-php/agents/issues/new?assignees=&labels=type%3A+documentation%2Ctype%3A+maintenance&projects=&template=1-bug-report.yml&title=%5BBug%5D%3A+)
- 📖 [Improve documentation](https://github.com/llm-agents-php/agents/issues/new?assignees=&labels=type%3A+documentation%2Ctype%3A+maintenance&projects=&template=4-docs-bug-report.yml&title=%5BDocs%5D%3A+)
- 👨‍💻 Contribute to the code
You are more than welcome. Before contributing, kindly check our [contribution guidelines](.github/CONTRIBUTING.md).
[![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg?style=for-the-badge)](https://conventionalcommits.org)
## ⚖️ License
LLM Agents is open-source software licensed under the [MIT license](https://opensource.org/licenses/MIT).
[![Licence](https://img.shields.io/github/license/llm-agents-php/agents?style=for-the-badge&color=blue)](./LICENSE.md)