Package neuron-ai
Short Description PHP AI Framework with built-in observability.
About the package neuron-ai
Create Full-Featured AI Agents As Standalone Components In Any PHP Application
Before moving on, support the community by giving the project a GitHub star ⭐️. Thank you!
Requirements
- PHP: ^8.1
Official documentation
Go to the official documentation
Guides & Tutorials
Check out the technical guides and tutorials archive to learn how to start creating your AI Agents with Neuron: https://docs.neuron-ai.dev/resources/guides-and-tutorials
Neuron AI Examples
- Install
- Create an Agent
- Talk to the Agent
- Monitoring
- Supported LLM Providers
- Tools & Function Calls
- MCP server connector
- Implement RAG systems
- Structured Output
- Official Documentation
Install
Install the latest version of the package:
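The package is distributed via Composer, so the standard install command is:

```shell
composer require inspector-apm/neuron-ai
```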
Create an Agent
Neuron provides the Agent class, which you can extend to inherit the core features of the framework and create fully functional agents. This class automatically manages advanced mechanisms for you, such as memory, tools and function calls, and RAG pipelines. You can go deeper into these aspects in the documentation.
In the meantime, let's create your first agent by extending the NeuronAI\Agent class:
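Here is a minimal sketch of an agent, assuming the `provider()` and `instructions()` hooks described in the documentation. The `YouTubeAgent` name, the Anthropic model string, and the API key handling are illustrative placeholders:

```php
<?php

use NeuronAI\Agent;
use NeuronAI\SystemPrompt;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Anthropic\Anthropic;

class YouTubeAgent extends Agent
{
    // Choose the LLM provider backing this agent.
    protected function provider(): AIProviderInterface
    {
        return new Anthropic(
            key: getenv('ANTHROPIC_API_KEY'), // placeholder: load from your environment
            model: 'claude-3-7-sonnet-latest' // placeholder model name
        );
    }

    // Base instructions; SystemPrompt is cast to string when the prompt is built.
    public function instructions(): string
    {
        return new SystemPrompt(
            background: ["You are an AI agent specialized in writing YouTube video summaries."],
            steps: ["Get the URL of a YouTube video, or ask the user to provide one."],
            output: ["Write the summary as a single paragraph without bullet points."]
        );
    }
}
```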
The SystemPrompt class is designed to take your base instructions and build a consistent prompt for the underlying model, reducing the effort required for prompt engineering.
Talk to the Agent
Send a prompt to the agent to get a response from the underlying LLM:
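A sketch of a conversation, assuming an Agent subclass like `YouTubeAgent` above and the `chat()`/`UserMessage` API from the documentation:

```php
<?php

use NeuronAI\Chat\Messages\UserMessage;

$agent = YouTubeAgent::make(); // any Agent subclass

$response = $agent->chat(new UserMessage("Hi, I'm Valerio. Who are you?"));
echo $response->getContent();

// The agent keeps the ongoing conversation in memory,
// so follow-up messages have the previous context available.
$response = $agent->chat(new UserMessage("Do you remember my name?"));
echo $response->getContent();
```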
As you can see in the example above, the Agent automatically has memory of the ongoing conversation. Learn more about memory in the documentation.
Monitoring
When you integrate AI Agents into your application, you are not working only with functions and deterministic code; you also program your agent by influencing probability distributions. Same input ≠ same output. That means reproducibility, versioning, and debugging become real problems.
Many of the Agents you build with NeuronAI will contain multiple steps with multiple invocations of LLM calls, tool usage, access to external memories, etc. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly your agent is doing and why.
Why is the model making certain decisions? What data is the model reacting to? Prompting is not programming in the common sense: there are no static types, small changes can break the output, long prompts cost latency, and no two models behave exactly the same with the same prompt.
The best way to gain this visibility is with Inspector. After you sign up, make sure to set the INSPECTOR_INGESTION_KEY variable in your application's environment file to start monitoring:
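The variable goes in your application's `.env` file (the key value below is a placeholder for the key generated in your Inspector account):

```shell
INSPECTOR_INGESTION_KEY=your-ingestion-key-here
```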
After configuring the environment variable, you will see the agent execution timeline in your Inspector dashboard.
Learn more about Monitoring in the documentation.
Supported LLM Providers
With NeuronAI, you can switch between LLM providers with just one line of code, without any impact on your agent implementation. Supported providers:
- Anthropic
- Ollama (also available as an embeddings provider)
- OpenAI (also available as an embeddings provider)
- Gemini
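Because the provider is isolated in the `provider()` method, switching vendors touches only that one return statement. For example, swapping Anthropic for a local Ollama instance might look like this (the Ollama class name, endpoint, and model are assumptions based on the provider list above):

```php
<?php

use NeuronAI\Agent;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Ollama\Ollama;

class MyAgent extends Agent
{
    // Only this method changes when you switch LLM providers.
    protected function provider(): AIProviderInterface
    {
        return new Ollama(
            url: 'http://localhost:11434/api', // placeholder local endpoint
            model: 'llama3.2'                  // placeholder model name
        );
    }
}
```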
Tools & Toolkits
You can add abilities to your agent to perform concrete tasks:
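A hedged sketch of declaring a tool, assuming the `Tool`/`ToolProperty` builder API from the documentation; the `get_article_content` tool and its callable body are purely illustrative:

```php
<?php

use NeuronAI\Agent;
use NeuronAI\Tools\Tool;
use NeuronAI\Tools\ToolProperty;

class MyAgent extends Agent
{
    // Tools returned here become callable by the underlying LLM.
    protected function tools(): array
    {
        return [
            Tool::make(
                'get_article_content',
                'Get the content of an article by ID.'
            )->addProperty(
                new ToolProperty(
                    name: 'article_id',
                    type: 'integer',
                    description: 'The ID of the article to retrieve.',
                    required: true
                )
            )->setCallable(function (int $article_id) {
                // Illustrative: fetch the article from your own storage here.
                return "Content of the article {$article_id}";
            }),
        ];
    }
}
```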
Learn more about Tools in the documentation.
MCP server connector
Instead of implementing tools manually, you can connect tools exposed by an MCP server with the McpConnector
component:
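A sketch of attaching an MCP server's tools, assuming the `McpConnector` component takes the server's launch command and exposes a `tools()` method; the example server package is illustrative:

```php
<?php

use NeuronAI\Agent;
use NeuronAI\MCP\McpConnector;

class MyAgent extends Agent
{
    // Merge the tools exposed by the MCP server into the agent's tool list.
    protected function tools(): array
    {
        return [
            ...McpConnector::make([
                'command' => 'npx',
                'args' => ['-y', '@modelcontextprotocol/server-everything'], // example server
            ])->tools(),
        ];
    }
}
```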
Learn more about MCP connector in the documentation.
Implement RAG systems
For RAG use cases, you must extend the NeuronAI\RAG\RAG class instead of the default Agent class.
To create a RAG, you need to attach additional components beyond the AI provider, such as a vector store and an embeddings provider.
Here is an example of a RAG implementation:
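A sketch of a RAG agent, assuming `vectorStore()` and `embeddings()` hooks alongside `provider()`. The concrete component classes (Pinecone store, Voyage embeddings) and their constructor arguments are assumptions drawn from the component families the documentation describes, and may differ in your setup:

```php
<?php

use NeuronAI\RAG\RAG;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Anthropic\Anthropic;
use NeuronAI\RAG\Embeddings\EmbeddingsProviderInterface;
use NeuronAI\RAG\Embeddings\VoyageEmbeddingsProvider;
use NeuronAI\RAG\VectorStore\VectorStoreInterface;
use NeuronAI\RAG\VectorStore\PineconeVectorStore;

class MyChatBot extends RAG
{
    // The LLM that answers questions.
    protected function provider(): AIProviderInterface
    {
        return new Anthropic(
            key: getenv('ANTHROPIC_API_KEY'),  // placeholder
            model: 'claude-3-7-sonnet-latest'  // placeholder
        );
    }

    // Turns documents and queries into vectors.
    protected function embeddings(): EmbeddingsProviderInterface
    {
        return new VoyageEmbeddingsProvider(
            key: getenv('VOYAGE_API_KEY'),     // placeholder
            model: 'voyage-3'                  // placeholder
        );
    }

    // Stores and searches the embedded documents.
    protected function vectorStore(): VectorStoreInterface
    {
        return new PineconeVectorStore(
            key: getenv('PINECONE_API_KEY'),   // placeholder
            indexUrl: getenv('PINECONE_URL')   // placeholder
        );
    }
}
```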
Learn more about RAG in the documentation.
Structured Output
For many applications, such as chatbots, Agents need to respond to users directly in natural language. However, there are scenarios where we need Agents to understand natural language but respond in a structured format.
One common use case is extracting data from text to insert into a database or use with some other downstream system. This guide covers a few strategies for getting structured outputs from the agent.
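A sketch of one such strategy: describing the target shape with a plain PHP class and asking the agent for an instance of it. The `SchemaProperty` attribute and the `structured()` method are assumptions based on the documented Structured Output feature, and `MyAgent` stands for any Agent subclass:

```php
<?php

use NeuronAI\Chat\Messages\UserMessage;
use NeuronAI\StructuredOutput\SchemaProperty;

// A plain PHP class describing the shape of the expected output.
class Person
{
    #[SchemaProperty(description: 'The name of the user', required: true)]
    public string $name;
}

// Ask the agent to answer with a Person instance instead of free text.
$person = MyAgent::make()->structured(
    new UserMessage("I'm John Doe, nice to meet you!"),
    Person::class
);

echo $person->name;
```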
Learn more about Structured Output in the documentation.
Official documentation
All versions of neuron-ai with dependencies:
- guzzlehttp/guzzle: ^7.0
- psr/log: ^1.0|^2.0|^3.0
- psr/http-message: ^1.0|^2.0
- inspector-apm/inspector-php: ^3.15.5