Download the PHP package nwilging/laravel-chatgpt without Composer

On this page you can find all versions of the PHP package nwilging/laravel-chatgpt. You can download or install these versions without Composer; any dependencies are resolved automatically.

FAQ

After the download, you only need to add a single include, require_once('vendor/autoload.php');, and then import the classes with use statements.

Example:
If you use only one package, a project is not needed. But if you use more than one package, it is not possible to import the classes with use statements without a project.

In general, it is recommended to always use a project when downloading your libraries, since an application normally needs more than one library.
Some PHP packages are not free to download and are therefore hosted in private repositories. In that case, credentials are needed to access them. If a package comes from a private repository, please enter your credentials in the auth.json textarea. You can look here for more information.

  • Some hosting environments cannot be reached via a terminal or SSH, which makes it impossible to use Composer.
  • Using Composer can be complicated, especially for beginners.
  • Composer needs considerable resources, which are not always available on a simple webspace.
  • If you use private repositories, you don't need to share your credentials: set everything up on our site, then give your team members a simple download link.
  • Simplify your Composer build process: use our command line tool to download the vendor folder as a binary. This makes your build faster, and you don't need to expose your credentials for private repositories.

Information about the package laravel-chatgpt

Laravel ChatGPT

Super simple wrapper for openai-php/client with error handling. Specifically for ChatGPT conversations.



About

This package is a very simple wrapper for interacting with OpenAI Chat Completions (ChatGPT). A common problem with larger conversations is "too many tokens", which happens when a prompt is sent to the API that contains a number of tokens greater than the specified model's token limit.

This package will attempt to prune messages from the conversation starting from the beginning, so that the most recent conversation context still exists in the prompt. Additionally, if an "initial prompt" or other "system" level instruction message is required, this message will be locked to the top of the message stack so that it is always the first message.


Installation

Pre Requisites

  1. Laravel v8+
  2. PHP 7.4+
  3. OpenAI API Key

Install with Composer
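The install command itself was not preserved on this page; the standard Composer command for this package is:

```shell
composer require nwilging/laravel-chatgpt
```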

Configuration

Two things must be configured for this package to work:

  1. OpenAI API key
  2. OpenAI Tokenizer

.env setup

First, get an API key from OpenAI.

Add this key to your .env as:
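The variable name was stripped from this page; the common Laravel convention (an assumption here, so check the package's configuration) is:

```
OPENAI_API_KEY=sk-your-key-here
```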

Tokenizer Setup

To publish the tokenizer files to storage/app/openai_tokenizer:
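The publish command itself was stripped from this page. It is presumably a vendor:publish call along these lines; the tag name is a guess, so run `php artisan vendor:publish` without arguments and pick the provider interactively if it differs:

```shell
php artisan vendor:publish --tag=openai-tokenizer
```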

This will add 3 files to the storage/app/openai_tokenizer directory:

  1. characters.json
  2. encoder.json
  3. vocab.bpe

These files must be present for the tokenizer to work! It is best to commit these files to your codebase since they are relatively small. You may also need to add the following to your storage/app/.gitignore:
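The exact lines were stripped from this page. Laravel's default storage/app/.gitignore ignores everything in that directory, so exceptions along these lines (an assumption) would let the tokenizer files be committed:

```
!openai_tokenizer/
!openai_tokenizer/*
```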

Usage

You may use this package to execute chat completions while automatically pruning message payloads that are too large for the given OpenAI model. Additionally you may use each component separately, for example if you wish to tokenize a prompt.

The ChatCompletionMessage Model

This is a helper model that must be used to generate chat completions. Since the chat completion API supports message objects, this class exists to help build lists of those message objects.

Example:
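The original snippet was not preserved on this page. A sketch of building messages (the namespace, constructor signature, and role constants shown here are assumptions; check the package source):

```php
<?php

use Nwilging\LaravelChatGpt\Models\ChatCompletionMessage;

// Roles mirror the OpenAI Chat Completions API: system, user, assistant/bot.
$messages = [
    new ChatCompletionMessage(ChatCompletionMessage::ROLE_SYSTEM, 'You are a helpful assistant.'),
    new ChatCompletionMessage(ChatCompletionMessage::ROLE_USER, 'Summarize our conversation so far.'),
];
```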

These messages may be sent in an array to the ChatGptService.

Automatic Chat Completions

Send any number of messages to the ChatGptService to generate a chat completion based on the conversation context; if the model's token limit is exceeded, messages are automatically pruned from the top of the stack.

Example:
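The example code was stripped from this page. A minimal sketch, assuming the service is resolved from Laravel's container (the namespace and the model argument are assumptions):

```php
<?php

use Nwilging\LaravelChatGpt\Services\ChatGptService;

$service = app(ChatGptService::class);

// $messages is an array of ChatCompletionMessage objects.
$response = $service->createChat($messages, 'gpt-3.5-turbo');
```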

In the above example, createChat will prune messages from the top of the stack when the payload is too large, disregarding the initial prompt.

If the initial prompt helps define parameters for the entire conversation, you should retain it in the payload. Use createChatRetainInitialPrompt to do this.
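A sketch of the retained-prompt call (the namespace and signature are assumptions, as only the method name appears on this page):

```php
<?php

use Nwilging\LaravelChatGpt\Services\ChatGptService;

// The first message in $messages is the initial "system" prompt;
// createChatRetainInitialPrompt keeps it while pruning older messages.
$response = app(ChatGptService::class)
    ->createChatRetainInitialPrompt($messages, 'gpt-3.5-turbo');
```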

Tokenizer

The Tokenizer is very similar to OpenAI's tokenizer and can be used to extract tokens from a prompt, for example to determine the number of tokens in a prompt before sending it.

The tokenizer has the ability to tokenize an array of ChatCompletionMessages, or just tokenize a basic string prompt.

Tokenizing Prompts:
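The snippet was stripped from this page. A sketch of tokenizing a plain string (the tokenizer's class name and method name are assumptions; check the package source):

```php
<?php

use Nwilging\LaravelChatGpt\Support\Tokenizer;

$tokenizer = app(Tokenizer::class);

// Returns the prompt's tokens; count() gives the token total.
$tokens = $tokenizer->tokenize('Hello, how are you today?');
$tokenCount = count($tokens);
```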

Tokenizing ChatCompletionMessages is slightly more complicated. The tokenizer wraps each message in ChatGPT directives denoting messages and their attributes. This differs from simple prompt tokenization because messages are more complex than plain text: each includes a role, a username, and the message content.

For bot and user messages, the format is as follows:

For user messages:

For bot messages:

Finally, system messages are treated slightly differently:
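The three format strings themselves were not preserved on this page. Purely as an illustration, modeled on the delimiter style OpenAI documents for counting chat tokens and not taken from this package's source, the wrapped messages might resemble:

```
<|start|>{role} {username}
{content}<|end|>

<|start|>system
{content}<|end|>
```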

The resulting formatted messages are what will be tokenized.


All versions of laravel-chatgpt with dependencies

Requires:

  • php: >=8.1
  • laravel/framework: >=8
  • guzzlehttp/guzzle: ^7.4
  • openai-php/client: ^0.3.5
Composer command for our command line client (download client): this client runs in every environment, you don't need a specific PHP version, and the first 20 API calls are free. Alternatively, use the standard Composer command.
