Download the PHP package theodo-group/llphant without Composer

On this page you can find all versions of the PHP package theodo-group/llphant. You can download/install these versions without Composer; possible dependencies are resolved automatically.

FAQ

After the download, you only need one include: require_once('vendor/autoload.php');. After that, you can import the classes with use statements.

Example:
If you use only one package, a project is not needed. But if you use more than one package, it is not possible to import the classes with use statements without a project.

In general, it is recommended to always use a project to download your libraries; an application normally needs more than one library.
Some PHP packages are not free to download and are therefore hosted in private repositories. In this case, credentials are needed to access such packages. Please use the auth.json textarea to insert credentials if a package comes from a private repository. You can look here for more information.

  • Some hosting environments are not accessible via a terminal or SSH, so Composer cannot be used there.
  • Using Composer can be complicated, especially for beginners.
  • Composer requires significant resources that are sometimes not available on a simple web space.
  • If you are using private repositories, you don't need to share your credentials. You can set up everything on our site and then provide a simple download link to your team members.
  • Simplify your Composer build process: use our own command-line tool to download the vendor folder as a binary. This makes your build process faster, and you don't need to expose your credentials for private repositories.

Information about the package llphant

LLPhant - A comprehensive PHP Generative AI Framework

We designed this framework to be as simple as possible, while still providing you with the tools you need to build powerful apps. It is compatible with Symfony and Laravel.

We are working to expand support for different LLMs. Right now we support OpenAI and Ollama, which can be used to run LLMs locally, such as Llama 2.

If you want to use other LLMs, you can use genossGPT as a proxy.

We want to thank a few amazing projects that we use here or that inspired us:

You can find great external resources on LLPhant (ping us to add yours):

Table of Contents

Get Started

Requires PHP 8.1+

First, install LLPhant via the Composer package manager:
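The install command was not included in this page; the standard Composer command for the package named above would be:

```shell
composer require theodo-group/llphant
```
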

You may also want to check the requirements for the OpenAI PHP SDK, as it is the main client.

Use Case

There are plenty of use cases for Generative AI, and new ones are emerging every day. Let's look at the most common ones. Based on a survey from the MLOps community and this survey from McKinsey, the most common use cases of AI are the following:

Not widely spread yet but with increasing adoption:

If you want to discover more usage from the community, you can see here a list of GenAI Meetups. You can also see other use cases on Qdrant's website.

Usage

You can use OpenAI or Ollama as LLM.

OpenAI

The simplest way to enable calls to OpenAI is to set the OPENAI_API_KEY environment variable.

You can also create an OpenAIConfig object and pass it to the constructor of the OpenAIChat or OpenAIEmbeddings.
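The original code sample is missing here; a minimal sketch of passing an explicit key via OpenAIConfig (the `apiKey` property name and namespaces are assumptions that may differ across LLPhant versions):

```php
<?php

use LLPhant\OpenAIConfig;
use LLPhant\Chat\OpenAIChat;

// Assumed: set the key on the config instead of relying on OPENAI_API_KEY.
$config = new OpenAIConfig();
$config->apiKey = 'your-api-key';

$chat = new OpenAIChat($config);
```
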

Ollama

If you want to use Ollama, just specify the model to use via the OllamaConfig object and pass it to the OllamaChat constructor.
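The code was stripped from this page; a sketch under the assumption that OllamaConfig exposes a public `model` property:

```php
<?php

use LLPhant\OllamaConfig;
use LLPhant\Chat\OllamaChat;

// Assumed property name; 'llama2' must already be pulled in your local Ollama.
$config = new OllamaConfig();
$config->model = 'llama2';

$chat = new OllamaChat($config);
```
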

Chat

💡 This class can be used to generate content, to create a chatbot or to create a text summarizer.

You can use the OpenAIChat or OllamaChat to generate text or to create a chat.

We can use it to simply generate text from a prompt; this directly asks the LLM for an answer.
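The example code is missing here; a minimal sketch (the `generateText` method name is an assumption based on the surrounding description):

```php
<?php

use LLPhant\Chat\OpenAIChat;

// Uses the OPENAI_API_KEY environment variable by default.
$chat = new OpenAIChat();
$response = $chat->generateText('What is one plus one?');
echo $response;
```
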

If you want to display a stream of text in your frontend, like in ChatGPT, you can use the following method.
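The streaming example was stripped from this page; a sketch assuming a `generateStreamOfText` method that returns a PSR-7 stream (both the method name and return type are assumptions):

```php
<?php

use LLPhant\Chat\OpenAIChat;

$chat = new OpenAIChat();
// Assumed: returns a PSR-7 StreamInterface you can flush chunk by chunk.
$stream = $chat->generateStreamOfText('Tell me a short story.');
while (!$stream->eof()) {
    echo $stream->read(1024);
    flush();
}
```
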

You can add instructions so the LLM will behave in a specific manner.
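The code sample is missing here; a sketch assuming the instruction is set as a system message via a `setSystemMessage` method (the method name is an assumption):

```php
<?php

use LLPhant\Chat\OpenAIChat;

$chat = new OpenAIChat();
// Assumed setter name: the instruction is sent to the model as a system message.
$chat->setSystemMessage('Whatever we ask you, you MUST answer "ok".');
$response = $chat->generateText('What is one plus one?');
```
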

Customizing System Messages in Question Answering

When using the QuestionAnswering class, it is possible to customize the system message to guide the AI's response style and context sensitivity according to your specific needs. This feature allows you to enhance the interaction between the user and the AI, making it more tailored and responsive to specific scenarios.

Here's how you can set a custom system message:
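The example was stripped from this page; a sketch of customizing the system message on QuestionAnswering. The namespaces, constructor order, and the `systemMessageTemplate` property are all assumptions and may differ in your LLPhant version:

```php
<?php

use LLPhant\Chat\OpenAIChat;
use LLPhant\Embeddings\EmbeddingGenerator\OpenAI\OpenAI3SmallEmbeddingGenerator;
use LLPhant\Embeddings\VectorStores\Memory\MemoryVectorStore;
use LLPhant\Query\SemanticSearch\QuestionAnswering;

$qa = new QuestionAnswering(
    new MemoryVectorStore(),
    new OpenAI3SmallEmbeddingGenerator(),
    new OpenAIChat()
);

// Assumed extension point: override the default system message used for answers.
$qa->systemMessageTemplate = 'Answer politely using only this context: {context}.';
```
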

Tools

This feature is amazing and is available only for OpenAI.

OpenAI has refined its model to determine whether tools should be invoked. To utilize this, simply send a description of the available tools to OpenAI, either as a single prompt or within a broader conversation.

In the response, the model will provide the names of the called tools along with the parameter values, if it deems that one or more tools should be called.

One potential application is to ascertain if a user has additional queries during a support interaction. Even more impressively, it can automate actions based on user inquiries.

We made it as simple as possible to use this feature.

Let's see an example of how to use it. Imagine you have a class that sends emails.
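The class from the original example is missing here; a minimal stand-in with the `sendMail` method referenced below:

```php
<?php

// A minimal class whose method we want to expose to the model as a tool.
class MailerExample
{
    /**
     * This function sends an email.
     */
    public function sendMail(string $subject, string $body): void
    {
        echo 'The email has been sent to '.$subject.' with body: '.$body;
    }
}
```
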

You can create a FunctionInfo object that describes your method to OpenAI, then add it to the OpenAIChat object. If the response from OpenAI contains a tool's name and parameters, LLPhant will call the tool.

This PHP script will most likely call the sendMail method that we pass to OpenAI.

If you want to have more control about the description of your function, you can build it manually:
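The manual-construction example was stripped from this page; a sketch assuming FunctionInfo takes the method name, an instance, a description, and a list of Parameter objects (constructor signatures are assumptions):

```php
<?php

use LLPhant\Chat\FunctionInfo\FunctionInfo;
use LLPhant\Chat\FunctionInfo\Parameter;
use LLPhant\Chat\OpenAIChat;

// Assumed Parameter signature: name, type, description.
$subject = new Parameter('subject', 'string', 'the subject of the mail');
$body = new Parameter('body', 'string', 'the body of the mail');

$function = new FunctionInfo(
    'sendMail',
    new MailerExample(),
    'send a mail',
    [$subject, $body]
);

$chat = new OpenAIChat();
$chat->addFunction($function); // assumed registration method
```
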

You can safely use the following types in the Parameter object: string, int, float, bool. The array type is supported but still experimental.

Embeddings

💡 Embeddings are used to compare two texts and see how similar they are. This is the base of semantic search.

An embedding is a vector representation of a text that captures its meaning. For OpenAI's small model, it is a float array of 1536 elements.

To manipulate embeddings we use the Document class, which contains the text and some metadata useful for the vector store. The creation of an embedding follows this flow:

Read data

The first part of the flow is to read data from a source. This can be a database, a CSV file, a JSON file, a text file, a website, a PDF, a Word document, an Excel file, and so on. The only requirement is that you can read the data and extract the text from it.

For now we only support text, PDF, and DOCX files, but we plan to support other data types in the future.

You can use the FileDataReader class to read a file. It takes a path to a file or a directory as a parameter. The second parameter is the class name of the entity that will be used to store the embedding. The class needs to extend the Document class, or the DoctrineEmbeddingEntityBase class (which extends Document) if you want to use the Doctrine vector store.
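The code sample is missing here; a sketch based on the description above (the `getDocuments` method name is an assumption):

```php
<?php

use LLPhant\Embeddings\DataReader\FileDataReader;
use LLPhant\Embeddings\Document;

// Path to a file or directory; the entity class defaults to Document.
$reader = new FileDataReader(__DIR__.'/docs', Document::class);
$documents = $reader->getDocuments();
```
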

To create your own data reader you need to create a class that implements the DataReader interface.

Document Splitter

Embedding models have a limit on the size of the string they can process. To avoid this problem, we split the document into smaller chunks. The DocumentSplitter class is used to split the document into smaller chunks.
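The example code was stripped from this page; a sketch assuming a static `splitDocuments` method that takes the documents and a maximum chunk size:

```php
<?php

use LLPhant\Embeddings\DocumentSplitter\DocumentSplitter;

// Assumed static API: split into chunks of at most ~800 characters.
$splitDocuments = DocumentSplitter::splitDocuments($documents, 800);
```
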

Embedding Formatter

The EmbeddingFormatter is an optional step that formats each chunk of text into a format with the most context. Adding a header and links to other documents can help the LLM understand the context of the text.

Embedding Generator

This is the step where we generate the embedding for each chunk of text by calling the LLM.

30 January 2024: Mistral embedding API added. You need a Mistral account to use this API; more information is available on the Mistral website. You also need to set the MISTRAL_API_KEY environment variable or pass the key to the constructor of the MistralEmbeddingGenerator class.

25 January 2024: New embedding models and API updates. OpenAI has two new models that can be used to generate embeddings. More information is available on the OpenAI blog.

Status Model Embedding size
Default text-embedding-ada-002 1536
New text-embedding-3-small 1536
New text-embedding-3-large 3072

You can embed the documents using the following code:
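The code was stripped from this page; a sketch using one of the generator classes named above (the namespace and the `embedDocuments` method name are assumptions):

```php
<?php

use LLPhant\Embeddings\EmbeddingGenerator\OpenAI\OpenAI3SmallEmbeddingGenerator;

$embeddingGenerator = new OpenAI3SmallEmbeddingGenerator();
// Assumes $splitDocuments from the document-splitting step above.
$embeddedDocuments = $embeddingGenerator->embedDocuments($splitDocuments);
```
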

You can also create an embedding from a text using the following code:
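The code is missing here; a sketch assuming an `embedText` method on the same generator:

```php
<?php

use LLPhant\Embeddings\EmbeddingGenerator\OpenAI\OpenAI3SmallEmbeddingGenerator;

$embeddingGenerator = new OpenAI3SmallEmbeddingGenerator();
$embedding = $embeddingGenerator->embedText('I love food');
```
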

VectorStores

Once you have embeddings, you need to store them in a vector store. A vector store is a database that can store vectors and perform a similarity search. There are currently four vector store classes:

Example of usage with the DoctrineVectorStore class to store the embeddings in a database:
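The example was stripped from this page; a sketch assuming the DoctrineVectorStore takes a Doctrine EntityManager and an entity class (constructor signature is an assumption):

```php
<?php

use LLPhant\Embeddings\VectorStores\Doctrine\DoctrineVectorStore;

// $entityManager is your configured Doctrine EntityManager;
// PlaceEntity is the entity class that holds the embeddings.
$vectorStore = new DoctrineVectorStore($entityManager, PlaceEntity::class);
$vectorStore->addDocuments($embeddedDocuments);
```
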

Once you have done that, you can perform a similarity search over your data. You need to pass the embedding of the text you want to search for and the number of results you want to get.
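The code is missing here; a sketch assuming a `similaritySearch` method and the `$vectorStore`/`$embeddingGenerator` variables from the previous steps:

```php
<?php

$embedding = $embeddingGenerator->embedText('France the country');
// Assumed signature: the query embedding and the number of results.
$results = $vectorStore->similaritySearch($embedding, 2);

foreach ($results as $document) {
    echo $document->content.PHP_EOL; // assumed property name
}
```
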

For a full example, you can have a look at the Doctrine integration test files.

Doctrine VectorStore

One simple solution for web developers is to use a PostgreSQL database as a vector store, with the pgvector extension. You can find all the information about the pgvector extension in its GitHub repository.

We suggest three simple ways to get a PostgreSQL database with the extension enabled:

In any case you will need to activate the extension:
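The SQL was stripped from this page; the standard pgvector activation statement, run once per database:

```sql
CREATE EXTENSION IF NOT EXISTS vector;
```
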

Then you can create a table and store vectors. This sql query will create the table corresponding to PlaceEntity in the test folder.
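The original SQL is missing here; a sketch of a table with a 1536-dimensional embedding column. The table and column names are assumptions and must match your entity mapping:

```sql
-- Sketch only: adjust table/column names to match your PlaceEntity mapping.
CREATE TABLE test_place (
    id SERIAL PRIMARY KEY,
    content TEXT,
    type VARCHAR(255),
    embedding VECTOR(1536)
);
```
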

⚠️ If the embedding length is not 1536 you will need to specify it in the entity by overriding the $embedding property. Typically, if you use the OpenAI3LargeEmbeddingGenerator class, you will need to set the length to 3072 in the entity. Or if you use the MistralEmbeddingGenerator class, you will need to set the length to 1024 in the entity.

The PlaceEntity
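The entity definition was stripped from this page; a sketch assuming Doctrine attribute mapping and the base class mentioned above (field names beyond the inherited embedding are assumptions):

```php
<?php

use Doctrine\ORM\Mapping as ORM;
use LLPhant\Embeddings\VectorStores\Doctrine\DoctrineEmbeddingEntityBase;

// Sketch: the base class is assumed to provide the $embedding property.
#[ORM\Entity]
#[ORM\Table(name: 'test_place')]
class PlaceEntity extends DoctrineEmbeddingEntityBase
{
    #[ORM\Column(type: 'string', nullable: true)]
    public ?string $type = null;
}
```
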

Redis VectorStore

Prerequisites:

Then create a new Redis client with your server credentials, and pass it to the RedisVectorStore constructor:
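The code is missing here; a sketch using the Predis client (the RedisVectorStore constructor signature is an assumption):

```php
<?php

use Predis\Client;
use LLPhant\Embeddings\VectorStores\Redis\RedisVectorStore;

$redisClient = new Client([
    'scheme' => 'tcp',
    'host'   => 'localhost',
    'port'   => 6379,
]);

$vectorStore = new RedisVectorStore($redisClient);
```
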

You can now use the RedisVectorStore as any other VectorStore.

Elasticsearch VectorStore

Prerequisites:

Then create a new Elasticsearch client with your server credentials, and pass it to the ElasticsearchVectorStore constructor:
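The code is missing here; a sketch using the official Elasticsearch PHP client's ClientBuilder (the ElasticsearchVectorStore constructor signature is an assumption):

```php
<?php

use Elastic\Elasticsearch\ClientBuilder;
use LLPhant\Embeddings\VectorStores\Elasticsearch\ElasticsearchVectorStore;

$client = ClientBuilder::create()
    ->setHosts(['http://localhost:9200'])
    ->build();

$vectorStore = new ElasticsearchVectorStore($client);
```
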


You can now use the ElasticsearchVectorStore as any other VectorStore.

Milvus VectorStore

Prerequisites: a running Milvus server (see the Milvus docs).

Then create a new Milvus client (LLPhant\Embeddings\VectorStores\Milvus\MiluvsClient) with your server credentials, and pass it to the MilvusVectorStore constructor:
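The code is missing here; a sketch keeping the class name exactly as written above. The constructor arguments (host, port, user, password) are assumptions:

```php
<?php

use LLPhant\Embeddings\VectorStores\Milvus\MiluvsClient;
use LLPhant\Embeddings\VectorStores\Milvus\MilvusVectorStore;

// Constructor arguments are assumptions; adjust to your Milvus deployment.
$client = new MiluvsClient('localhost', '19530', 'root', 'milvus');
$vectorStore = new MilvusVectorStore($client);
```
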


You can now use the MilvusVectorStore as any other VectorStore.

Question Answering

A popular use case for LLMs is to create a chatbot that can answer questions over your private data. You can build one with LLPhant using the QuestionAnswering class. It leverages the vector store to perform a similarity search, retrieves the most relevant information, and returns the answer generated by OpenAI.

Here is one example using the MemoryVectorStore:
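The example was stripped from this page; an end-to-end sketch combining the steps described earlier. Namespaces, constructor order, and method names are assumptions that may differ across LLPhant versions:

```php
<?php

use LLPhant\Chat\OpenAIChat;
use LLPhant\Embeddings\DataReader\FileDataReader;
use LLPhant\Embeddings\DocumentSplitter\DocumentSplitter;
use LLPhant\Embeddings\EmbeddingGenerator\OpenAI\OpenAI3SmallEmbeddingGenerator;
use LLPhant\Embeddings\VectorStores\Memory\MemoryVectorStore;
use LLPhant\Query\SemanticSearch\QuestionAnswering;

// Read and chunk the private data.
$reader = new FileDataReader(__DIR__.'/private-data.txt');
$documents = DocumentSplitter::splitDocuments($reader->getDocuments(), 800);

// Embed the chunks and load them into an in-memory store.
$embeddingGenerator = new OpenAI3SmallEmbeddingGenerator();
$vectorStore = new MemoryVectorStore();
$vectorStore->addDocuments($embeddingGenerator->embedDocuments($documents));

// Ask a question grounded in the stored documents.
$qa = new QuestionAnswering($vectorStore, $embeddingGenerator, new OpenAIChat());
$answer = $qa->answerQuestion('What does the document say about pricing?');
echo $answer;
```
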

AutoPHP

You can now make your AutoGPT clone in PHP using LLPhant.

Here is a simple example using the SerpApiSearch tool to create an autonomous PHP agent. You just need to describe the objective and add the tools you want to use. We will add more tools in the future.
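The example code was stripped from this page; a heavily hedged sketch. The AutoPHP and SerpApiSearch class locations and constructor signatures are assumptions:

```php
<?php

use LLPhant\Experimental\Agent\AutoPHP;
use LLPhant\Tool\SerpApiSearch;

// Assumed: the search tool reads a SERP API key from the environment.
$searchTool = new SerpApiSearch();

// Assumed constructor: an objective string plus the tools the agent may use.
$autoPHP = new AutoPHP(
    'Find the name of the current president of France',
    [$searchTool]
);
$autoPHP->run();
```
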

FAQ

Why use LLPhant and not the OpenAI PHP SDK directly?

The OpenAI PHP SDK is a great tool for interacting with the OpenAI API. LLPhant lets you perform complex tasks such as storing embeddings and performing similarity searches. It also simplifies use of the OpenAI API by providing a much simpler interface for everyday tasks.

Contributors

Thanks to our contributors:

Sponsor

LLPhant is sponsored by Theodo, a leading digital agency building web applications with Generative AI.

Theodo logo

All versions of llphant with dependencies

Requires php Version ^8.1.0
guzzlehttp/guzzle Version ^7.1.0
guzzlehttp/psr7 Version ^2.6
nunomaduro/termwind Version ^1.15 || ^2.0
openai-php/client Version ^v0.7.7 || ^v0.8.4
phpoffice/phpword Version ^1.1
psr/http-message Version ^2.0
smalot/pdfparser Version ^2.7
Composer command for our command-line client (download client). This client runs in any environment; you don't need a specific PHP version, etc. The first 20 API calls are free. Standard Composer command:

The package theodo-group/llphant contains the following files
