OpenFunctions Core
This library provides a set of primitives that simplify LLM calling. It offers an easy way to define messages and message lists, create tool definitions, and execute tool calls. By abstracting these core functionalities, OpenFunctions Core helps you reduce boilerplate code and quickly integrate advanced tool-calling capabilities into your LLM-powered applications.
Key Features
- ✅ Structured Conversation Management - Offers robust primitives for messages and message lists.
- ✅ Unified LLM Tool Interface - OpenFunctions provide a standardized API for defining and invoking tools within LLM-powered applications.
- ✅ Function Registry & Meta-mode - Manages multiple OpenFunctions and namespaces seamlessly, with support for dynamic activation and deactivation to keep tool definitions clear and concise.
- ✅ Extensible Architecture - Designed to be highly customizable, allowing you to extend or replace components with your own implementations.
- ✅ Developer-Friendly Abstractions - Reduces boilerplate code through intuitive helper classes for defining functions, parameters, and responses.
- ✅ Pre-built Integrations - Includes ready-to-use implementations for popular platforms such as Slack, GitHub, Bitbucket, Trello, and Jira Service Desk.
Installation
Install the package via Composer:
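```bash
composer require assistant-engine/open-functions-core
```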
Usage
You can use this library for the following tasks:
Messages
Based on the OpenAI schema, the available message types are exposed as primitives. You can use these as building blocks to structure your conversation. The following example shows how to define an array of messages, add them to a MessageList, and convert that list to an array for use in an LLM call.
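A minimal sketch of that flow might look like this (the concrete class names, namespaces, and method signatures are assumptions based on the descriptions in this README; check the package source for the exact API):

```php
<?php

// Assumed namespaces and classes; see the package source for the real ones.
use AssistantEngine\OpenFunctions\Core\Models\Messages\MessageList;
use AssistantEngine\OpenFunctions\Core\Models\Messages\DeveloperMessage;
use AssistantEngine\OpenFunctions\Core\Models\Messages\UserMessage;
use AssistantEngine\OpenFunctions\Core\Models\Messages\Content\Text;

// Define an array of messages following the OpenAI schema.
$messages = [
    new DeveloperMessage(new Text('You are a helpful assistant.')),
    new UserMessage(new Text('What is the weather like in Berlin?')),
];

// Add them to a MessageList ...
$messageList = new MessageList();
$messageList->addMessages($messages);

// ... and convert the list to an array for use in an LLM call.
$payload = $messageList->toArray();
```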
Tool Calling
Tool calling poses a common challenge: you must both provide the function definitions and expose the methods that can actually be called. To address this, the library introduces the concept of an OpenFunction. Inside an OpenFunction, you generate the function definitions and implement the methods that the LLM can invoke. The AbstractOpenFunction class provides convenience methods such as callMethod() to wrap the output in a standardized response and handle errors consistently.
Function Definitions
Each class that extends AbstractOpenFunction must implement the generateFunctionDefinitions() method. This method is responsible for describing the functions that your tool exposes. To build these descriptions, you can use the provided helper classes:
- FunctionDefinition: This class is used to create a structured schema for a function. It accepts a function name and a short description and can include details about parameters.
- Parameter: This helper is used to define parameters for your function. It allows you to set the type (e.g., string, number, boolean) and additional details like description and whether the parameter is required.
For example, here’s a simple implementation:
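The sketch below follows that pattern. AbstractOpenFunction, FunctionDefinition, Parameter, and generateFunctionDefinitions() are named in this README; the namespaces and the fluent builder methods are assumptions:

```php
<?php

// Assumed namespaces; consult the package source for the exact ones.
use AssistantEngine\OpenFunctions\Core\Contracts\AbstractOpenFunction;
use AssistantEngine\OpenFunctions\Core\Helpers\FunctionDefinition;
use AssistantEngine\OpenFunctions\Core\Helpers\Parameter;
use AssistantEngine\OpenFunctions\Core\Models\Responses\TextResponse;

class WeatherOpenFunction extends AbstractOpenFunction
{
    /**
     * Describe the functions this tool exposes to the LLM.
     */
    public function generateFunctionDefinitions(): array
    {
        $definition = new FunctionDefinition(
            'getWeather',
            'Returns the current weather for a given city.'
        );

        // Assumed fluent builder for parameters.
        $definition->addParameter(
            Parameter::string('city')
                ->description('The city to fetch the weather for.')
                ->required()
        );

        return [$definition->createFunctionDescription()];
    }

    /**
     * The method the LLM can invoke via tool calling.
     */
    public function getWeather(string $city): TextResponse
    {
        // A real implementation would query a weather service here.
        return new TextResponse("The weather in {$city} is sunny.");
    }
}
```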
Once you have implemented your open functions (such as the WeatherOpenFunction), you can generate their function definitions and pass them to the OpenAI client as the tools parameter. For example:
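With the openai-php client, wiring this up might look as follows ($messageList comes from the messages sketch above; the model name is just an example):

```php
<?php

$weatherFunction = new WeatherOpenFunction();

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

$response = $client->chat()->create([
    'model'    => 'gpt-4o',
    'messages' => $messageList->toArray(),
    // Pass the generated definitions as the tools parameter.
    'tools'    => $weatherFunction->generateFunctionDefinitions(),
]);
```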
Open Function Registry
In some scenarios, you may want to use the same OpenFunction more than once with different configurations, or you might have different OpenFunctions that define methods with the same name. To handle these cases, the library provides an OpenFunction Registry, which itself is also an OpenFunction. This registry allows you to register each function under a unique namespace, ensuring that even if functions share the same underlying method name, they remain distinct.
For example, suppose you want one WeatherOpenFunction instance to operate in Celsius (the default) and another in Fahrenheit. You could register them as follows:
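A sketch of that registration (the registry class name, the registerOpenFunction() signature, and the unit constructor argument are assumptions):

```php
<?php

// Assumed namespace and registration signature.
use AssistantEngine\OpenFunctions\Core\Services\OpenFunctionRegistry;

$registry = new OpenFunctionRegistry();

// Same OpenFunction, two configurations, two namespaces.
$registry->registerOpenFunction(
    'celsius',
    'Weather information in Celsius (default).',
    new WeatherOpenFunction()
);

$registry->registerOpenFunction(
    'fahrenheit',
    'Weather information in Fahrenheit.',
    new WeatherOpenFunction('fahrenheit') // hypothetical unit argument
);

// The registry is itself an OpenFunction, so it can produce the
// combined, namespaced tool definitions.
$tools = $registry->generateFunctionDefinitions();
```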
In this example, the registry ensures that even though both WeatherOpenFunction instances share the same method names (like getWeather), they are uniquely identified by their namespaces (celsius and fahrenheit). This separation allows you to call the appropriate function based on the desired temperature unit without any naming collisions.
Meta Mode
The above example ensures that even if multiple functions are registered under similar method names, each one is uniquely namespaced and easily distinguishable. In some scenarios, however, the total number of methods you want to provide to the LLM may exceed the maximum number it can ingest in a single call. To address this, the OpenFunction Registry offers a meta-mode.
In meta-mode, only three core registry functions are initially registered with the LLM. These functions give the LLM the ability to activate and deactivate additional methods on the fly, keeping the number of tool definitions sent to the LLM both limited and understandable. Meta-mode can be seen in action in the Filament Assistant Plugin.
The three registry functions provided in meta-mode are:
| Method | Description | Parameters |
|---|---|---|
| listFunctions | Lists all available functions grouped by namespace, including their names, descriptions, and namespace details. | None |
| activateFunction | Activates one or more functions from the registry; duplicate activations are ignored. | `functionNames` (array of strings, required): the function names to activate. |
| deactivateFunction | Deactivates one or more functions from the registry. | `functionNames` (array of strings, required): the function names to deactivate. |
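How meta-mode is switched on is not shown in this excerpt; a hypothetical toggle could look like this:

```php
<?php

// Hypothetical: enable meta-mode on the registry (the actual method
// name may differ; consult the package source).
$registry->setMetaModeEnabled(true);

// With meta-mode active, only listFunctions, activateFunction, and
// deactivateFunction are initially exposed to the LLM.
$tools = $registry->generateFunctionDefinitions();
```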
Registry Presenter
It can be a good idea to inform the LLM about the different namespaces and give it a little more context. One way to do this is to add a dedicated developer message to the message list that explains the namespaces. To achieve this dynamically, the library implements the concept of a Message List Extension: extensions implement a specific interface and are invoked when the message list is built, giving you the opportunity to modify or extend the messages.
Message List Extensions
Sometimes it’s useful to add a message that explains context details to the LLM. For instance, you might want to inform the model about the namespaces registered by your tool. To achieve this dynamically, you can create an extension that implements the MessageListExtensionInterface. Once added to the message list, the extension is invoked automatically during the conversion process, allowing you to inject extra messages.
For example, the RegistryPresenter class implements this interface and in its extend() method it prepends a developer message that lists the namespaces:
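A simplified sketch of what such an extension could look like (the interface, method, and message class names follow the descriptions above but are assumptions):

```php
<?php

// Assumed interface and message classes.
use AssistantEngine\OpenFunctions\Core\Contracts\MessageListExtensionInterface;
use AssistantEngine\OpenFunctions\Core\Models\Messages\MessageList;
use AssistantEngine\OpenFunctions\Core\Models\Messages\DeveloperMessage;
use AssistantEngine\OpenFunctions\Core\Models\Messages\Content\Text;
use AssistantEngine\OpenFunctions\Core\Services\OpenFunctionRegistry;

class RegistryPresenter implements MessageListExtensionInterface
{
    public function __construct(private OpenFunctionRegistry $registry)
    {
    }

    public function extend(MessageList $messageList): void
    {
        // Prepend a developer message that describes the registered
        // namespaces (prependMessages() is an assumed helper).
        $messageList->prependMessages([
            new DeveloperMessage(new Text(
                'Available tool namespaces: celsius, fahrenheit.'
            )),
        ]);
    }
}
```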
To add an extension to your message list, simply use the addExtension() method on your MessageList instance. Here’s how you can register the RegistryPresenter as an extension:
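For instance (assuming the presenter is constructed from the registry):

```php
<?php

// Wrap the registry in a presenter ...
$presenter = new RegistryPresenter($registry);

// ... and register it as an extension on the message list.
$messageList->addExtension($presenter);

// extend() runs automatically when the list is converted.
$payload = $messageList->toArray();
```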
Function Calling
Implement all callable methods within your OpenFunction class. Each method should return a string, a text response, a binary response, or a list of responses; the callMethod() helper in the abstract class ensures the output is consistently wrapped.
For example, in WeatherOpenFunction:
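A possible implementation of the callable method (TextResponse is an assumed response class; the body is hypothetical):

```php
// Inside the WeatherOpenFunction class from the sketch above.
public function getWeather(string $city): TextResponse
{
    // A real implementation would call a weather API; callMethod()
    // wraps this return value in a standardized response.
    return new TextResponse("The weather in {$city} is 22°C and sunny.");
}
```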
To invoke the function:
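For example (the argument array shape is an assumption):

```php
// callMethod() wraps the result in a standardized response and
// handles errors consistently.
$response = $weatherFunction->callMethod('getWeather', ['city' => 'Paris']);
```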
And if you used the registry instead:
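Here the method name carries its namespace (the separator shown is an assumption):

```php
// The registry resolves the namespaced name to the right
// OpenFunction instance.
$response = $registry->callMethod('celsius_getWeather', ['city' => 'Paris']);
```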
And if you use it within the LLM loop, it could look something like this:
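A sketch of such a loop using the openai-php client; the response-to-message conversion helpers (AssistantMessage::fromResponse() and toToolMessage()) are hypothetical:

```php
<?php

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

do {
    $response = $client->chat()->create([
        'model'    => 'gpt-4o',
        'messages' => $messageList->toArray(),
        'tools'    => $registry->generateFunctionDefinitions(),
    ]);

    $message   = $response->choices[0]->message;
    $toolCalls = $message->toolCalls ?? [];

    if (empty($toolCalls)) {
        break; // No tool calls: the model produced its final answer.
    }

    // Keep the assistant's tool-call message in the conversation
    // (hypothetical helper for converting the SDK message to a primitive).
    $messageList->addMessage(AssistantMessage::fromResponse($message));

    foreach ($toolCalls as $toolCall) {
        $arguments = json_decode($toolCall->function->arguments, true);

        // Execute the tool and feed the wrapped result back to the model
        // (toToolMessage() is a hypothetical conversion helper).
        $result = $registry->callMethod($toolCall->function->name, $arguments);
        $messageList->addMessage($result->toToolMessage($toolCall->id));
    }
} while (true);

echo $message->content;
```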
Available Open Function Implementations
In addition to creating your own OpenFunction, there are several ready-to-use implementations available. Here’s a quick overview:
- Memory: Provides a standardized API for storing, updating, retrieving, and removing conversational memories.
- Notion: Connects to your Notion workspace and enables functionalities such as listing databases, retrieving pages, and managing content blocks.
- GitHub: Integrates with GitHub to allow repository operations like listing branches, reading files, and committing changes.
- Bitbucket: Provides an interface similar to GitHub’s, enabling you to interact with Bitbucket repositories to list files, read file contents, and commit modifications.
- Trello: Enables interactions with Trello boards, lists, and cards, facilitating project management directly within your assistant.
- Slack: Seamlessly connects your assistant to Slack to perform actions like listing channels, posting messages, replying to threads, adding reactions, and retrieving channel history and user profiles.
- Jira Service Desk: Integrates with Jira Service Desk to interact with service requests—enabling you to create, update, and manage requests (cards), list queues, add comments, transition statuses, and manage priorities.
More Repositories
We’ve created more repositories to make AI integration even simpler and more powerful! Check them out:
- Filament Assistant: Add conversational AI capabilities directly into Laravel Filament.
We are a young startup aiming to make it easy for developers to add AI to their applications. We welcome feedback, questions, comments, and contributions. Feel free to contact us at [email protected].
Consultancy & Support
Do you need assistance integrating Filament Assistant into your Laravel Filament application, or help setting it up?
We offer consultancy services to help you get the most out of our package, whether you’re just getting started or looking to optimize an existing setup.
Reach out to us at [email protected].
Contributing
We welcome contributions from the community! Feel free to submit pull requests, open issues, and help us improve the package.
License
This project is licensed under the MIT License. Please see the License File for more information.