Package: filament-assistant
Short description: A Filament Assistant package that enables AI features, bringing advanced assistant capabilities directly into Filament.
License: MIT
Homepage: https://github.com/assistant-engine/filament-assistant
Filament Assistant
Free & Open Source Version
For more features and a cloud-based assistant experience, check out Assistant Engine.
Filament Assistant is an AI-powered plugin that seamlessly integrates conversational capabilities into your Laravel Filament projects. Developed by Assistant Engine, it provides a feature-rich chat sidebar, intelligent context resolution, and effortless integration with popular tools like Slack, Trello, Notion, Jira Service Desk, Bitbucket, and GitHub.
Key Features:
- ✅ Multiple Assistants – Create AI assistants for different use cases.
- ✅ Tool Calls – Let your assistant execute actions, retrieve data and integrate with external tools.
- ✅ Context Awareness – Pass relevant page data for smarter responses.
- ✅ Easy Setup – Configurable & ready to go in minutes.
- ✅ Privacy First – Runs entirely locally, ensuring full control over your data.
- ✅ Flexible AI Integration – Works with OpenAI & any LLM using the same API format.
Requirements
- PHP: 8.2 or higher
- Composer
- Filament: (See Filament Installation Guide)
- Filament Custom Theme: (See Installation Guide)
- OpenAI API Key: (See OpenAI Documentation)
Installation
You can install Filament Assistant via Composer:
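Using the package name from this page:

```shell
composer require assistant-engine/filament-assistant
```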
After installing the plugin, follow the instructions to create a custom theme and add the following lines to your new theme's tailwind.config.js:
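A sketch of the additions, assuming the plugin ships its Blade views under the standard vendor path (the exact globs are an assumption; check the plugin's docs):

```js
// tailwind.config.js
export default {
    content: [
        './app/Filament/**/*.php',
        './resources/views/**/*.blade.php',
        // hypothetical path – adjust to where the plugin actually ships its views
        './vendor/assistant-engine/filament-assistant/resources/**/*.blade.php',
    ],
};
```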
As well as enabling the plugin within your panel:
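A sketch of the panel registration; the plugin class name and namespace are assumptions based on the package name, so verify them against the package's docs:

```php
use AssistantEngine\Filament\FilamentAssistantPlugin; // namespace assumed

public function panel(Panel $panel): Panel
{
    return $panel
        // ... existing panel configuration
        ->plugin(FilamentAssistantPlugin::make());
}
```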
Now add your OPEN_AI_KEY to your .env file:
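Using the variable name from this guide, with a placeholder value:

```env
OPEN_AI_KEY=sk-your-openai-api-key
```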
Run the migrations, start a queue worker, and build the theme:
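With standard Laravel tooling, that amounts to something like the following (the theme build script may differ in your setup):

```shell
php artisan migrate
php artisan queue:work
npm run build
```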
After that, you can talk directly to one of the demo assistants (e.g., Frank) and have a conversation about food delivery. :)
Dark Mode Support
Filament Assistant also supports dark mode, based on Tailwind's dark mode concept.
Configuration
You can publish the configuration file using the command below:
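The publish tag below is an assumption based on common Laravel package conventions; check the package docs for the exact tag:

```shell
php artisan vendor:publish --tag=assistant-engine-config
```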
After publishing the configuration, you can find it in config/assistant-engine.php:
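An illustrative sketch of what such a configuration might contain; every key and class name here is an assumption, not the package's actual schema:

```php
<?php
// config/assistant-engine.php (illustrative only – keys are assumptions)
return [
    'assistants' => [
        'frank' => [
            'name'        => 'Frank',
            'instruction' => 'You are a friendly assistant for food delivery questions.',
            'tools'       => ['pizza'],
        ],
    ],
    'tools' => [
        'pizza' => [
            // a closure resolving the tool instance
            'tool' => fn () => new App\Tools\PizzaOrderFunction(), // hypothetical class
        ],
    ],
];
```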
If you need runtime information during tool calling, you can also inject the active run into the closure, giving you access to the current thread and user identifier.
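A hedged sketch of what that might look like; the Run class name and its properties are assumptions:

```php
'tools' => [
    'pizza' => [
        // the active run can be injected into the closure (names assumed)
        'tool' => function (Run $run) {
            $threadId = $run->threadId; // identifier of the current thread (assumed)
            $userId   = $run->userId;   // identifier of the current user (assumed)

            return new App\Tools\PizzaOrderFunction($threadId, $userId);
        },
    ],
],
```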
If you want to add a presenter to your tool, you can do so by setting the presenter key in the tool configuration. The presenter is an optional callable that returns an instance implementing the MessageListExtensionInterface. Learn more about extensions in the Core Repository.
If you want to cache your config, you need to convert the closures to callables. For example, create a ConfigFactory class with static methods and reference those methods in the tool configuration. Once all closures are converted to callables, you should be able to cache your config with php artisan config:cache.
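A sketch under these assumptions (the factory name, method, and tool class are illustrative):

```php
<?php

namespace App\Factories;

use App\Tools\PizzaOrderFunction; // hypothetical tool class

class ConfigFactory
{
    // a static method is a cacheable callable, unlike a closure
    public static function pizzaTool(): PizzaOrderFunction
    {
        return new PizzaOrderFunction();
    }
}
```

In the configuration, replace each closure with a callable array such as [ConfigFactory::class, 'pizzaTool'].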
Feel free to change the assistants, add new tools and also update the other configuration parameters as needed.
Tool Calling
If you want your assistant to access your application, all you need to do is extend AbstractOpenFunction to create a new tool and add it to your configuration file. Please also read the Open Function Repository to learn more about Open Functions.
An example implementation could be:
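A hedged sketch of a tool, loosely modeled on the Open Functions idea; the base-class namespace, the definition format, and the method names are all assumptions, so consult the Open Function Repository for the real API:

```php
<?php

namespace App\Tools;

use AssistantEngine\OpenFunctions\Core\Abstracts\AbstractOpenFunction; // namespace assumed

class PizzaOrderFunction extends AbstractOpenFunction
{
    /**
     * Describe the functions this tool exposes to the LLM.
     * (Method name and definition format are assumptions.)
     */
    public function generateFunctionDefinitions(): array
    {
        return [
            [
                'name'        => 'order_pizza',
                'description' => 'Order a pizza in a given size.',
                'parameters'  => [
                    'type'       => 'object',
                    'properties' => [
                        'size' => ['type' => 'string', 'description' => 'small, medium, or large'],
                    ],
                    'required'   => ['size'],
                ],
            ],
        ];
    }

    // invoked when the assistant calls order_pizza
    public function orderPizza(string $size): string
    {
        return "Ordered a {$size} pizza.";
    }
}
```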
Meta Mode
Enable MetaMode in the assistant configuration to expose only the meta registry functions, letting the assistant dynamically activate or deactivate additional functions as needed—ideal for when too many functions may overwhelm an LLM call. For more details, see the Open Functions Core repository.
Available Open Function Implementations
In addition to creating your own Open Functions, there are several ready-to-use implementations available to extend your assistant’s capabilities. Simply add the corresponding tool configuration in your config/filament-assistant.php file to integrate them. Here’s a quick overview:
- Memory: Provides a standardized API for storing, updating, retrieving, and removing conversational memories.
- Notion: Connects to your Notion workspace and enables functionalities such as listing databases, retrieving pages, and managing content blocks.
- GitHub: Integrates with GitHub to allow repository operations like listing branches, reading files, and committing changes.
- Bitbucket: Provides an interface similar to GitHub’s, enabling you to interact with Bitbucket repositories to list files, read file contents, and commit modifications.
- Trello: Enables interactions with Trello boards, lists, and cards, facilitating project management directly within your assistant.
- Slack: Seamlessly connects your assistant to Slack, letting it perform actions like listing channels, posting messages, replying to threads, adding reactions, and retrieving channel history and user profiles.
- Jira Service Desk: Integrates with Jira Service Desk to interact with service requests—enabling you to create, update, and manage requests (cards), list queues, add comments, transition statuses, and manage priorities.
Resolvers
Conversation Option Resolver
The Conversation Option Resolver is used to determine which conversation option should be used when initializing the assistant. It allows you to implement custom logic based on the current page or other factors to control whether an assistant should be displayed and how it should behave.
You can create a custom conversation option resolver by implementing the ConversationOptionResolverInterface. This gives you complete control over the behavior, including whether to return a conversation option at all. If you return null, no conversation or assistant will be shown on the page.
Example of the built-in Conversation Option Resolver:
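A sketch of what such a resolver could look like; the interface's method name and the option construction are assumptions:

```php
<?php

use Filament\Pages\Page;

class ConversationOptionResolver implements ConversationOptionResolverInterface
{
    // method name assumed from the interface's purpose
    public function resolve(Page $page): ?ConversationOption
    {
        // no assistant for guests – returning null hides the sidebar
        if (!auth()->check()) {
            return null;
        }

        return new ConversationOption(
            assistantKey: 'frank',           // one of the configured assistants
            userId: (string) auth()->id(),
        );
    }
}
```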
You can also customize the resolver logic to adapt to different pages or user roles, providing a tailored conversational experience, by extending the built-in ConversationOptionResolver or implementing the interface yourself.
ConversationOption Object
The ConversationOption object allows you to configure how a conversation is created or retrieved. The available fields include:
- assistantKey (required): Unique key identifying the assistant.
- userId (required): ID of the user associated with the conversation, allowing multiple users to have different conversations with the same assistant.
- additionalRunData (optional): Arbitrary data to provide context to the conversation. This context is included with the conversation data sent to the LLM.
- metadata (optional): Data intended for the front-end or client application, allowing additional operations based on its content.
- recreate (optional): If set to true, recreates the conversation, deactivating the previous one.
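Putting the fields together (constructor style and property access are assumptions):

```php
$option = new ConversationOption(
    assistantKey: 'frank',
    userId: (string) auth()->id(),
);

// arbitrary context included in the run sent to the LLM
$option->additionalRunData = ['page' => 'orders', 'open_orders' => 3];

// data intended for the front end only
$option->metadata = ['accent_color' => 'amber'];

// start fresh, deactivating any previous conversation
$option->recreate = true;
```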
Note: Filament Assistant will attempt to locate an existing conversation based on the combination of assistantKey and userId. If a match is found, that conversation will be retrieved; otherwise, a new one will be created.
Context Resolver
The Context Resolver is responsible for resolving context models that are visible on the page and providing them to the assistant. This helps the assistant understand the context of the current page and allows it to access relevant information during the conversation.
The default Context Resolver (ContextResolver) tries to collect models related to the page, such as records or list items, and injects them into the context of the ConversationOption object.
Example of a Context Resolver:
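A sketch of how such a resolver could gather page models; the interface and method names are assumptions:

```php
<?php

use Filament\Pages\Page;
use Filament\Resources\Pages\EditRecord;

class ContextResolver implements ContextResolverInterface
{
    // method name assumed
    public function resolve(Page $page): array
    {
        $context = [];

        // record pages expose the model currently being viewed or edited
        if ($page instanceof EditRecord) {
            $context['record'] = $page->getRecord()->toArray();
        }

        return $context;
    }
}
```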
The Context Resolver automatically gathers information about the page and its related models, enabling the assistant to leverage this information during a conversation.
Custom Context Resolvers
Sometimes you have fully custom pages where the standard Context Resolver doesn't pick up all the models visible to the user. In that case, you can either implement your own Context Resolver based on the interface, or extend the default one, as in the example below, to add more context. By extending the Context Resolver you can inject additional context per page, such as models or a description of the page, giving the LLM an even clearer picture of what the user is currently seeing.
Example of a Custom Context Resolver:
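For instance, a resolver for a hypothetical custom dashboard page (the page class, model, and method name are all illustrative):

```php
<?php

use Filament\Pages\Page;

class CustomContextResolver extends ContextResolver
{
    public function resolve(Page $page): array // method name assumed
    {
        $context = parent::resolve($page);

        if ($page instanceof DeliveryDashboard) { // hypothetical custom page
            $context['description'] = "Dashboard listing the user's open food deliveries.";
            $context['deliveries']  = Delivery::where('user_id', auth()->id())
                ->get()
                ->toArray();
        }

        return $context;
    }
}
```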
Custom Model Serialization
The standard resolving mechanism transforms models to arrays. But sometimes you want a different serialization: you may want to hide properties, or give the LLM a little more context about the models it sees. For this, another interface exists, the Context Model Interface, which defines a static function resolveModels that you can implement to resolve a list of models of the same type.
There is also a trait implementing this interface called Context Model, where you can group models from the same class inside a data object and provide the LLM with metadata as well as exclude properties from the model itself. This ensures that sensitive data is not sent to the LLM directly, but you can adjust it to your needs.
You can use this trait in your model classes and override the defined methods if needed:
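A sketch of a model using the trait; the overridable method names are assumptions:

```php
<?php

use Illuminate\Database\Eloquent\Model;

class Delivery extends Model
{
    use ContextModel;

    // keep sensitive columns out of the serialized context (method name assumed)
    public static function getContextExcludes(): array
    {
        return ['internal_notes', 'payment_token'];
    }

    // extra metadata handed to the LLM alongside the models (method name assumed)
    public static function getContextMetaData(): array
    {
        return ['description' => 'A food delivery order placed by a user.'];
    }
}
```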
Assistant Chat Page
In addition to the built-in chat sidebar, you can add a dedicated Assistant Chat Page in your Filament panel. This page allows you to interact with your assistants directly from a full-page interface. To enable this feature, simply add the chat page to your panel provider.
For example, you can update your panel provider as follows:
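A sketch; the chat page's class name and namespace are assumptions:

```php
use AssistantEngine\Filament\Pages\AssistantChat; // class & namespace assumed

public function panel(Panel $panel): Panel
{
    return $panel
        // ... existing panel configuration
        ->pages([
            AssistantChat::class,
        ]);
}
```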
Events
After a message is processed, the page component automatically refreshes so that you can see what the assistant updated for you. If you want, you can also listen to the event manually: implement a listener and process your custom logic there.
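A sketch of a Livewire listener; the event name below is a placeholder, since the package defines the actual one:

```php
class OrdersPage extends Page
{
    // 'assistant-message-processed' is a hypothetical event name
    protected $listeners = [
        'assistant-message-processed' => 'onAssistantMessage',
    ];

    public function onAssistantMessage(): void
    {
        // custom logic after the assistant finished processing a message
    }
}
```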
More Repositories
We’ve created more repositories to make AI integration even simpler and more powerful! Check them out:
- Open Functions Core: Powerful primitives that simplify LLM calling.
- Open Functions Actions: Serializes OpenFunctions into ChatGPT actions.
Ready-to-use OpenFunctions implementations: Memory, Notion, GitHub, Bitbucket, Trello, Slack, and Jira Service Desk (see the overview in the Available Open Function Implementations section above).
We are a young startup aiming to make it easy for developers to add AI to their applications. We welcome feedback, questions, comments, and contributions. Feel free to contact us at [email protected].
PRO Version
For users looking for enhanced functionality, the PRO Version offers advanced features beyond the standard Filament Assistant capabilities:
- Tool Call Confirmations: Require explicit user confirmation for specific tool calls before they are executed, ensuring an extra layer of safety.
- RAG: Index documents and make them accessible via a dedicated RAG tool.
- Conversational Memory Management: Enhance your assistants with dynamic conversational memory, featuring an intuitive admin panel that lets users view and manage stored memories for improved context awareness and more natural interactions.
- Assistant Admin Panel: Easily configure your assistants and tools.
- Monitoring and Analytics: Benefit from built-in monitoring and analytic capabilities to keep track of performance and usage.
If you are interested in the PRO Version or would like to learn more about its implementation, please contact us at [email protected] for further details and access options.
Consultancy & Support
Do you need assistance integrating Filament Assistant into your Laravel Filament application, or help setting it up?
We offer consultancy services to help you get the most out of our package, whether you’re just getting started or looking to optimize an existing setup.
Reach out to us at [email protected].
Contributing
We welcome contributions from the community! Feel free to submit pull requests, open issues, and help us improve the package.
License
This project is licensed under the MIT License. Please see License File for more information.
All versions of filament-assistant with dependencies:
- assistant-engine/open-functions-core ^1.1
- filament/filament ^3.3
- openai-php/client ^0.10.3
- spatie/laravel-package-tools ^1.16