Download the PHP package php-graph/inference without Composer

On this page you can find all versions of the PHP package php-graph/inference. You can download and install these versions without Composer; any dependencies are resolved automatically.

FAQ

After the download, you only need a single include, require_once('vendor/autoload.php');. After that, import the classes with use statements.
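A minimal bootstrap following the step above might look like this (the imported class is just an example taken from symfony/http-client, one of this package's declared dependencies; the package's own class names are not listed on this page):

```php
<?php
// One include is enough: the downloaded vendor folder ships its own autoloader.
require_once 'vendor/autoload.php';

// Then import classes with use statements, e.g. from symfony/http-client:
use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();
```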

Example:
If you use only one package, a project is not needed. But if you use more than one package, it is not possible to import the classes with use statements without a project.

In general, it is recommended to always use a project to download your libraries, since an application normally needs more than one library.
Some PHP packages are not free to download and are therefore hosted in private repositories. In this case, credentials are needed to access such packages. Please use the auth.json textarea to enter the credentials if a package comes from a private repository. You can look here for more information.

  • Some hosting environments cannot be accessed via a terminal or SSH. In that case it is not possible to use Composer.
  • Using Composer can be complicated, especially for beginners.
  • Composer needs a lot of resources, which are sometimes not available on a simple web space.
  • If you are using private repositories, you don't need to share your credentials. You can set everything up on our site and then provide a simple download link to your team members.
  • Simplify your Composer build process. Use our command line tool to download the vendor folder as a binary. This makes your build process faster, and you don't need to expose your credentials for private repositories.

Information about the package inference

Inference

This project is a PHP component designed to simplify the integration and usage of Large Language Models (LLMs) from various providers, including Ollama, OpenAI, Mistral, DeepSeek, and more. It offers a unified interface for chat completions, streamed (token-by-token) responses, and embeddings.

The goal is to provide a simple, extensible, and developer-oriented abstraction for AI inference, enabling optimal performance and rapid integration into your PHP projects.

Get Started

First, install phpGraph Inference via the Composer package manager:
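The original README's command was not captured on this page; the standard Composer command, derived from the package name shown above, would be:

```shell
composer require php-graph/inference
```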

Usage

Ollama

Chat

The following example demonstrates how to use Ollama as a local LLM provider for a standard chat interaction. You instantiate a resource, build your message history, and get the assistant's response.
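The package's own code sample was not captured on this page. As an illustrative reference, here is the equivalent chat call made directly against Ollama's documented local HTTP API with symfony/http-client (a declared dependency of this package); the model name is an assumption, and this is not the package's own class API:

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();

// Ollama listens on localhost:11434 by default; "llama3.2" is just an example model.
$response = $client->request('POST', 'http://localhost:11434/api/chat', [
    'json' => [
        'model'    => 'llama3.2',
        'stream'   => false,
        'messages' => [
            ['role' => 'system', 'content' => 'You are a helpful assistant.'],
            ['role' => 'user',   'content' => 'Say hello in one sentence.'],
        ],
    ],
]);

// Ollama returns the assistant's reply under message.content.
$data = $response->toArray();
echo $data['message']['content'], PHP_EOL;
```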

Chat Stream

To stream the response token-by-token (useful for chat UIs and progress feedback), instantiate the resource with chatStreamed() and provide a handler that processes each chunk as it arrives.
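The streamed example was stripped from this page. Sketched with symfony/http-client against Ollama's raw streaming endpoint (not the package's chatStreamed() resource itself), the same idea looks like this:

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();
$response = $client->request('POST', 'http://localhost:11434/api/chat', [
    'json' => [
        'model'    => 'llama3.2', // example model
        'stream'   => true,
        'messages' => [['role' => 'user', 'content' => 'Tell me a short joke.']],
    ],
]);

// Ollama streams newline-delimited JSON objects; buffer and split on "\n".
$buffer = '';
foreach ($client->stream($response) as $chunk) {
    $buffer .= $chunk->getContent();
    while (($pos = strpos($buffer, "\n")) !== false) {
        $line   = substr($buffer, 0, $pos);
        $buffer = substr($buffer, $pos + 1);
        $part   = json_decode($line, true);
        echo $part['message']['content'] ?? ''; // print each token as it arrives
    }
}
echo PHP_EOL;
```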

Embeddings

To generate vector embeddings with Ollama, simply use the embed() method on the resource and pass your text or batch of texts. The returned array contains the embedding vector(s).
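With the package's sample missing here, an equivalent raw call against Ollama's /api/embed endpoint (via symfony/http-client; the embedding model is an example) would be:

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();

// "nomic-embed-text" is an example embedding model pulled into Ollama.
$response = $client->request('POST', 'http://localhost:11434/api/embed', [
    'json' => [
        'model' => 'nomic-embed-text',
        'input' => ['first text', 'second text'], // a single string also works
    ],
]);

// The response holds one vector per input under "embeddings".
$vectors = $response->toArray()['embeddings'];
echo count($vectors), ' vectors of dimension ', count($vectors[0]), PHP_EOL;
```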


OpenAI

Chat

For OpenAI's GPT models, the usage is nearly identical. After configuring your API key and provider, send your conversation history and extract the assistant's response from the returned structure.
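As a reference point for the missing sample, the equivalent direct call to OpenAI's chat completions API with symfony/http-client looks like this (the model is an example; set OPENAI_API_KEY in your environment):

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();
$response = $client->request('POST', 'https://api.openai.com/v1/chat/completions', [
    'auth_bearer' => getenv('OPENAI_API_KEY'),
    'json' => [
        'model'    => 'gpt-4o-mini', // example model
        'messages' => [
            ['role' => 'user', 'content' => 'Summarize PSR-4 in one sentence.'],
        ],
    ],
]);

// OpenAI nests the reply under choices[0].message.content.
echo $response->toArray()['choices'][0]['message']['content'], PHP_EOL;
```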

Chat Stream

OpenAI also supports streaming chat completions. Use the streaming resource and provide a handler that is called for each streamed content delta.
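A sketch of the same streaming flow against OpenAI's raw API (again with symfony/http-client rather than the package's own resource):

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();
$response = $client->request('POST', 'https://api.openai.com/v1/chat/completions', [
    'auth_bearer' => getenv('OPENAI_API_KEY'),
    'json' => [
        'model'    => 'gpt-4o-mini', // example model
        'stream'   => true,
        'messages' => [['role' => 'user', 'content' => 'Count to five.']],
    ],
]);

// The stream is server-sent events: lines of the form "data: {...}",
// terminated by "data: [DONE]". Buffer across chunks and split on "\n".
$buffer = '';
foreach ($client->stream($response) as $chunk) {
    $buffer .= $chunk->getContent();
    while (($pos = strpos($buffer, "\n")) !== false) {
        $line   = trim(substr($buffer, 0, $pos));
        $buffer = substr($buffer, $pos + 1);
        if (!str_starts_with($line, 'data: ') || $line === 'data: [DONE]') {
            continue;
        }
        $delta = json_decode(substr($line, 6), true);
        echo $delta['choices'][0]['delta']['content'] ?? ''; // content delta
    }
}
echo PHP_EOL;
```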

Embeddings

To get high-quality embeddings from OpenAI, specify the embedding model and input texts. The response contains an array of embeddings matching the input order.
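The equivalent raw embeddings request (model name is an example) would be:

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();
$response = $client->request('POST', 'https://api.openai.com/v1/embeddings', [
    'auth_bearer' => getenv('OPENAI_API_KEY'),
    'json' => [
        'model' => 'text-embedding-3-small', // example embedding model
        'input' => ['first text', 'second text'],
    ],
]);

// One embedding per input, in input order, under data[i].embedding.
foreach ($response->toArray()['data'] as $i => $item) {
    echo "input $i -> ", count($item['embedding']), " dimensions", PHP_EOL;
}
```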


Mistral

Chat

Mistral provides OpenAI-compatible chat endpoints. Simply use your API key and the right model. The format and usage remain consistent with the other providers.
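Since Mistral's endpoint is OpenAI-compatible, the raw-HTTP sketch differs only in the base URL, key, and model (model is an example):

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();
$response = $client->request('POST', 'https://api.mistral.ai/v1/chat/completions', [
    'auth_bearer' => getenv('MISTRAL_API_KEY'),
    'json' => [
        'model'    => 'mistral-small-latest', // example model
        'messages' => [['role' => 'user', 'content' => 'Bonjour !']],
    ],
]);

// Same response shape as OpenAI: choices[0].message.content.
echo $response->toArray()['choices'][0]['message']['content'], PHP_EOL;
```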

Chat Stream

Streaming responses are supported just like with OpenAI, making it easy to plug into chat UIs or progressive LLM pipelines.
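A compact sketch of the same server-sent-events flow against Mistral's API (for brevity this assumes chunks arrive line-aligned; buffer across chunks in production, as in the OpenAI example):

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();
$response = $client->request('POST', 'https://api.mistral.ai/v1/chat/completions', [
    'auth_bearer' => getenv('MISTRAL_API_KEY'),
    'json' => [
        'model'    => 'mistral-small-latest', // example model
        'stream'   => true,
        'messages' => [['role' => 'user', 'content' => 'Stream a haiku.']],
    ],
]);

// OpenAI-style SSE: "data: {...}" lines with the text under choices[0].delta.content.
foreach ($client->stream($response) as $chunk) {
    foreach (explode("\n", $chunk->getContent()) as $line) {
        $line = trim($line);
        if (str_starts_with($line, 'data: ') && $line !== 'data: [DONE]') {
            $delta = json_decode(substr($line, 6), true);
            echo $delta['choices'][0]['delta']['content'] ?? '';
        }
    }
}
echo PHP_EOL;
```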

Embeddings

For embeddings, use the appropriate Mistral model. The structure of the response is similar to OpenAI's, with each embedding in the data array.
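The raw equivalent, using Mistral's documented embedding model:

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();
$response = $client->request('POST', 'https://api.mistral.ai/v1/embeddings', [
    'auth_bearer' => getenv('MISTRAL_API_KEY'),
    'json' => [
        'model' => 'mistral-embed',
        'input' => ['first text', 'second text'],
    ],
]);

// As with OpenAI, each embedding sits in the "data" array, in input order.
foreach ($response->toArray()['data'] as $i => $item) {
    echo "input $i -> ", count($item['embedding']), " dimensions", PHP_EOL;
}
```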


DeepSeek

Chat

DeepSeek also exposes an OpenAI-compatible API. You can interact with it in the same way, making switching providers simple.
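Illustrating the provider switch: against DeepSeek's OpenAI-compatible API, only the base URL, key, and model change:

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();
$response = $client->request('POST', 'https://api.deepseek.com/chat/completions', [
    'auth_bearer' => getenv('DEEPSEEK_API_KEY'),
    'json' => [
        'model'    => 'deepseek-chat',
        'messages' => [['role' => 'user', 'content' => 'What is PHP?']],
    ],
]);

// Same response shape again: choices[0].message.content.
echo $response->toArray()['choices'][0]['message']['content'], PHP_EOL;
```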

Chat Stream

Streaming works identically: use the streamed resource and process each chunk via your handler.
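A compact raw-HTTP sketch of DeepSeek streaming (assumes line-aligned chunks for brevity; buffer across chunks in production):

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();
$response = $client->request('POST', 'https://api.deepseek.com/chat/completions', [
    'auth_bearer' => getenv('DEEPSEEK_API_KEY'),
    'json' => [
        'model'    => 'deepseek-chat',
        'stream'   => true,
        'messages' => [['role' => 'user', 'content' => 'Stream one sentence.']],
    ],
]);

// OpenAI-style SSE: "data: {...}" lines, text under choices[0].delta.content.
foreach ($client->stream($response) as $chunk) {
    foreach (explode("\n", $chunk->getContent()) as $line) {
        $line = trim($line);
        if (str_starts_with($line, 'data: ') && $line !== 'data: [DONE]') {
            $delta = json_decode(substr($line, 6), true);
            echo $delta['choices'][0]['delta']['content'] ?? '';
        }
    }
}
echo PHP_EOL;
```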

Embeddings

DeepSeek embeddings are retrieved with the same API contract. Specify the embedding model and input; parse the embeddings from the data array in the response.
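Assuming the OpenAI-compatible /embeddings route that the text describes, a sketch would look as follows. Note that both the route and the model id here are unverified placeholders; check DeepSeek's current API documentation and model list before relying on this:

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();
// "deepseek-embedding" is a hypothetical model id, used purely for illustration.
$response = $client->request('POST', 'https://api.deepseek.com/embeddings', [
    'auth_bearer' => getenv('DEEPSEEK_API_KEY'),
    'json' => [
        'model' => 'deepseek-embedding',
        'input' => ['first text', 'second text'],
    ],
]);

foreach ($response->toArray()['data'] as $i => $item) {
    echo "input $i -> ", count($item['embedding']), " dimensions", PHP_EOL;
}
```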

Example:

Symfony Command
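The Symfony example from the README was not captured on this page. A sketch of a console command wrapping a local Ollama chat call might look like this (the command name, model, and structure are illustrative, not the package's own code):

```php
<?php

namespace App\Command;

use Symfony\Component\Console\Attribute\AsCommand;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\HttpClient\HttpClient;

#[AsCommand(name: 'app:chat', description: 'Send a prompt to a local Ollama model')]
class ChatCommand extends Command
{
    protected function configure(): void
    {
        $this->addArgument('prompt', InputArgument::REQUIRED, 'The user prompt');
    }

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        $client = HttpClient::create();
        $response = $client->request('POST', 'http://localhost:11434/api/chat', [
            'json' => [
                'model'    => 'llama3.2', // example model
                'stream'   => false,
                'messages' => [
                    ['role' => 'user', 'content' => $input->getArgument('prompt')],
                ],
            ],
        ]);

        // Print the assistant's reply to the console.
        $output->writeln($response->toArray()['message']['content']);

        return Command::SUCCESS;
    }
}
```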


All versions of inference with dependencies

Requires:

  • php Version >=8.3
  • symfony/http-client-contracts Version ^3.6
  • symfony/http-client Version ^7.3
  • doctrine/collections Version ^2.3

Composer command for our command line client (download client): this client runs in every environment, so you don't need a specific PHP version. The first 20 API calls are free. Standard composer command:
