Package ollama-php-sdk
Short Description PHP SDK for interacting with the Ollama API, a service for running and deploying large language models.
License MIT
Homepage https://github.com/lewenbraun/ollama-php-sdk
Ollama PHP Client Library
A lightweight PHP client library to interact with the Ollama API. This library provides a simple, fluent interface to work with text and chat completions, manage models, handle blobs, and generate embeddings.
Installation
Install the library via Composer:
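From your project root (assuming Composer is on your PATH):

```shell
composer require lewenbraun/ollama-php-sdk
```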
Usage
Initialize the Client
Create a new client instance (the default host is http://localhost:11434):
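A minimal setup sketch. The namespace, class name, and constructor parameter below are assumptions; check the package source on GitHub for the exact API:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use LewenBraun\Ollama\Ollama; // assumed namespace and class name

// Defaults to http://localhost:11434
$client = new Ollama();

// Or target an explicit host (assumed constructor argument)
$client = new Ollama('http://192.168.1.10:11434');
```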
Completion
Generate a text completion using a model. Under the hood, this wraps the /api/generate endpoint.
Non-Streaming Completion
Returns a single response object:
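A hedged sketch, assuming a fluent resource API (the completion()->create() names are assumptions) and a configured $client; the request fields follow /api/generate:

```php
$response = $client->completion()->create([
    'model'  => 'llama3.2',
    'prompt' => 'Why is the sky blue?',
]);

// /api/generate returns the generated text in the `response` field
echo $response->response;
```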
Streaming Completion
Enables streaming to receive the response in chunks:
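With stream set to true, /api/generate emits newline-delimited JSON chunks. A sketch under the same assumed fluent API:

```php
$stream = $client->completion()->create([
    'model'  => 'llama3.2',
    'prompt' => 'Why is the sky blue?',
    'stream' => true,
]);

foreach ($stream as $chunk) {
    echo $chunk->response; // partial text from each chunk
}
```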
Chat Completion
Generate chat responses using a conversational context. This wraps the /api/chat endpoint.
Non-Streaming Chat Completion
Returns the final chat response as a single object:
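A sketch assuming a chat()->create() resource method (name assumed) on a configured $client; the messages array follows the /api/chat schema:

```php
$response = $client->chat()->create([
    'model'    => 'llama3.2',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user',   'content' => 'Hello!'],
    ],
]);

echo $response->message->content; // final assistant message per /api/chat
```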
Streaming Chat Completion
Stream the chat response as it is generated:
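Sketch of the streaming variant (method names assumed); each chunk carries an incremental message delta, as in the /api/chat streaming format:

```php
$stream = $client->chat()->create([
    'model'    => 'llama3.2',
    'messages' => [['role' => 'user', 'content' => 'Tell me a joke.']],
    'stream'   => true,
]);

foreach ($stream as $chunk) {
    echo $chunk->message->content;
}
```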
Model Management
Manage models through various endpoints (create, list, show, copy, delete, pull, push).
Create a Model
Create a new model from an existing model (or other sources):
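A sketch using the /api/create parameters (model, from, system); the models()->create() method name is an assumption:

```php
$client->models()->create([
    'model'  => 'mario',
    'from'   => 'llama3.2',
    'system' => 'You are Mario from Super Mario Bros.',
]);
```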
List Models
Retrieve a list of available models:
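Sketch (method name assumed) wrapping /api/tags, which returns the locally installed models:

```php
$list = $client->models()->list();

foreach ($list->models as $model) {
    echo $model->name . PHP_EOL;
}
```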
List Running Models
List models currently loaded into memory:
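Sketch wrapping /api/ps (the method name here is a guess; consult the SDK source):

```php
$running = $client->models()->runningList(); // assumed name; wraps /api/ps

foreach ($running->models as $model) {
    echo $model->name . PHP_EOL;
}
```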
Show Model Information
Retrieve details about a specific model:
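Sketch (method name assumed); /api/show takes a model name and returns its details, template, and parameters:

```php
$info = $client->models()->show(['model' => 'llama3.2']); // wraps /api/show
```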
Copy a Model
Create a copy of an existing model:
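Sketch using the /api/copy parameters source and destination (method name assumed):

```php
$client->models()->copy([
    'source'      => 'llama3.2',
    'destination' => 'llama3.2-backup',
]);
```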
Delete a Model
Delete a model by name:
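Sketch (method name assumed) wrapping /api/delete:

```php
$client->models()->delete(['model' => 'llama3.2-backup']);
```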
Pull a Model
Download a model from the Ollama library:
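Sketch (method name assumed) wrapping /api/pull; like the HTTP endpoint, a pull can also be streamed to report download progress:

```php
$client->models()->pull(['model' => 'llama3.2']);
```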
Push a Model
Upload a model to your model library:
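Sketch (method name assumed) wrapping /api/push; per the Ollama API, the model name must be namespaced as namespace/model:tag:

```php
$client->models()->push(['model' => 'your-namespace/llama3.2:latest']);
```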
Blobs
Work with binary large objects (blobs) used for model files.
Check if a Blob Exists
Verify if a blob (by its SHA256 digest) exists on the server:
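Sketch (method name and signature assumed) wrapping HEAD /api/blobs/:digest; the digest is the SHA256 hash of the local file:

```php
$digest = 'sha256:' . hash_file('sha256', 'model.gguf'); // hypothetical local file
$exists = $client->blobs()->exists($digest);             // true if the server has it
```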
Push a Blob
Upload a blob to the Ollama server:
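Sketch (method name and signature assumed) wrapping POST /api/blobs/:digest:

```php
$path   = 'model.gguf'; // hypothetical local file
$digest = 'sha256:' . hash_file('sha256', $path);

$client->blobs()->push($digest, file_get_contents($path));
```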
Embeddings
Generate embeddings from a model using the /api/embed endpoint.
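A sketch assuming an embed()->create() method (name assumed); the model and input fields follow /api/embed, where input may be a single string or an array of strings:

```php
$response = $client->embed()->create([
    'model' => 'all-minilm',
    'input' => ['Why is the sky blue?', 'Why is grass green?'],
]);

// $response->embeddings holds one float vector per input string
```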
Version
Retrieve the version of the Ollama server.
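Sketch (method name assumed) wrapping GET /api/version:

```php
$version = $client->version();
echo $version->version; // the server's version string
```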
Advanced Parameters
Each method accepts additional parameters (such as suffix, format, options, and keep_alive) that fine-tune the request. For example, you can pass options in the request arrays for structured outputs, reproducible responses, and more. For full details, refer to the Ollama API documentation.
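For instance, a completion request combining several such parameters (the create() call is the same assumed fluent API sketched above; the parameter names come from the Ollama API):

```php
$response = $client->completion()->create([
    'model'      => 'llama3.2',
    'prompt'     => 'List three primary colors as JSON.',
    'format'     => 'json',                             // structured output
    'options'    => ['temperature' => 0, 'seed' => 42], // reproducible response
    'keep_alive' => '5m',                               // keep the model loaded
]);
```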
Contributing
Contributions, issues, and feature requests are welcome. Please check the issues page for more details.
License
This project is licensed under the MIT License.