Download the PHP package kargnas/instructrice without Composer

On this page you can find all versions of the PHP package kargnas/instructrice. You can download or install these versions without Composer; possible dependencies are resolved automatically.

FAQ

After the download, you have to add a single include, require_once('vendor/autoload.php');. After that, you can import the classes with use statements.

Example:
If you use only one package, a project is not needed. But if you use more than one package, it is not possible to import the classes with use statements without a project.

In general, it is recommended to always use a project to download your libraries; an application normally needs more than one library.
Some PHP packages are not free to download and are therefore hosted in private repositories. In this case, credentials are needed to access such packages. If a package comes from a private repository, please use the auth.json textarea to enter the credentials. You can look here for more information.

  • Some hosting environments are not accessible via a terminal or SSH, so Composer cannot be used there.
  • Using Composer can be complicated, especially for beginners.
  • Composer needs considerable resources, which are sometimes not available on a simple webspace.
  • If you use private repositories, you don't need to share your credentials: set everything up on our site, then give your team members a simple download link.
  • Simplify your Composer build process: use our own command line tool to download the vendor folder as a binary. This makes your build process faster, and you don't need to expose credentials for private repositories.

Information about the package instructrice

👩‍🏫 adrienbrault/instructrice


Typing LLM completions

Best-in-class LLMs are able to output JSON following a schema you provide, usually JSON-Schema. This significantly expands the ways you can leverage LLMs in your application!

Think of the input as:

And the output/outcome is whichever structure best matches your use case and domain.

The Python instructor cookbook has interesting examples.

Introduction

Instructrice is a PHP library that simplifies working with structured output from LLMs in a type-safe manner.

Features:

A Symfony Bundle is also available.

Installation and Usage
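Installation is the standard Composer flow (`composer require kargnas/instructrice`). A minimal bootstrap could look like the sketch below; `InstructriceFactory::create` comes from this page, while the namespace and enum case name are assumptions:

```php
<?php

require_once 'vendor/autoload.php';

use AdrienBrault\Instructrice\InstructriceFactory;
use AdrienBrault\Instructrice\LLM\Provider\Ollama;

// Build an Instructrice instance backed by a local Ollama model.
// The enum case name is illustrative; use any case the enum actually exposes.
$instructrice = InstructriceFactory::create(llm: Ollama::HERMES2PRO);
```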

List of objects
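A common use case is extracting a list of typed objects from free text. The sketch below assumes a `getList`-style method and a plain PHP DTO; the method name and argument order are illustrative, not confirmed API:

```php
<?php

use AdrienBrault\Instructrice\InstructriceFactory;

// Plain DTO describing the shape of each extracted item.
class Character
{
    public string $name;
    public ?string $affiliation = null;
}

$instructrice = InstructriceFactory::create();

// Hypothetical list extraction: every Character mentioned in the
// context text is deserialized into a Character instance.
$characters = $instructrice->getList(
    Character::class,
    'Batman and Robin protect Gotham City.',
);
```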

Object
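Extracting a single object is the same idea with one result instead of a list. The `get` method name here is an assumption:

```php
<?php

use AdrienBrault\Instructrice\InstructriceFactory;

class Address
{
    public string $city;
    public ?string $country = null;
}

$instructrice = InstructriceFactory::create();

// Hypothetical single-object extraction call.
$address = $instructrice->get(
    Address::class,
    'The office moved to Lyon, France last year.',
);
```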

Dynamic Schema
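Instead of a class, you can describe the expected structure at runtime with a plain JSON-Schema array. The schema array below is standard JSON-Schema; the `get` method name is again an assumption:

```php
<?php

use AdrienBrault\Instructrice\InstructriceFactory;

$instructrice = InstructriceFactory::create();

// Hand-written JSON-Schema, built at runtime.
$schema = [
    'type' => 'object',
    'properties' => [
        'name' => ['type' => 'string'],
        'age' => ['type' => 'integer'],
    ],
    'required' => ['name'],
];

// Hypothetical call: returns data matching the schema instead of a typed object.
$person = $instructrice->get($schema, 'Anna turned 26 in March.');
```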

You can also use third party json schema libraries like goldspecdigital/oooas to generate the schema:
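A sketch with oooas's fluent builder; the conversion step assumes the schema object is JSON-serializable (oooas objects render OpenAPI documents as JSON), and the `get` call is a hypothetical method name:

```php
<?php

use AdrienBrault\Instructrice\InstructriceFactory;
use GoldSpecDigital\ObjectOrientedOAS\Objects\Schema;

$instructrice = InstructriceFactory::create();

// Build the schema fluently instead of writing the array by hand.
$schema = Schema::object()->properties(
    Schema::string('name'),
    Schema::integer('age'),
);

// Convert the fluent object to a plain array before passing it on.
$person = $instructrice->get(
    json_decode(json_encode($schema), true),
    'Anna turned 26 in March.',
);
```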

https://github.com/adrienbrault/instructrice/assets/611271/da69281d-ac56-4135-b2ef-c5e306a56de2

Supported providers

| Provider | Environment Variables | Enum | API Key Creation URL |
|---|---|---|---|
| Ollama | OLLAMA_HOST | Ollama | |
| OpenAI | OPENAI_API_KEY | OpenAi | API Key Management |
| Anthropic | ANTHROPIC_API_KEY | Anthropic | API Key Management |
| Mistral | MISTRAL_API_KEY | Mistral | API Key Management |
| Fireworks AI | FIREWORKS_API_KEY | Fireworks | API Key Management |
| Groq | GROQ_API_KEY | Groq | API Key Management |
| Together AI | TOGETHER_API_KEY | Together | API Key Management |
| Deepinfra | DEEPINFRA_API_KEY | Deepinfra | API Key Management |
| Perplexity | PERPLEXITY_API_KEY | Perplexity | API Key Management |
| Anyscale | ANYSCALE_API_KEY | Anyscale | API Key Management |
| OctoAI | OCTOAI_API_KEY | OctoAI | API Key Management |

The supported providers are Enums, which you can pass to the llm argument of InstructriceFactory::create:
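For example, selecting Anthropic (the enum namespace and case name below are assumptions; the required `ANTHROPIC_API_KEY` variable comes from the table above):

```php
<?php

use AdrienBrault\Instructrice\InstructriceFactory;
use AdrienBrault\Instructrice\LLM\Provider\Anthropic;

// Requires ANTHROPIC_API_KEY to be set in the environment.
// The case name is illustrative; use whichever cases the enum defines.
$instructrice = InstructriceFactory::create(llm: Anthropic::CLAUDE3_HAIKU);
```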

Supported models

| Strategy | 📄 Text | 🧩 JSON | 🚀 Function |
|---|---|---|---|
| Commercial usage 💼 | ✅ Yes | ⚠️ Yes, but | ❌ Nope |

Open Weights

Foundation

💼 ctx Ollama Mistral Fireworks Groq Together DeepInfra Perplexity Anyscale OctoAI
Mistral 7B 32k 🧩 🧩 68/s 📄 98/s 📄 88/s !ctx=16k! 🧩 🧩
Mixtral 8x7B 32k 🧩 🧩 44/s 🧩 237/s 🚀 560/s 🚀 99/s 📄 119/s !ctx=16k! 🧩 🧩
Mixtral 8x22B 65k 🧩 🧩 77/s 🧩 77/s 📄 52/s 🧩 40/s 📄 62/s !ctx=16k! 🧩 🧩
Phi-3-Mini-4K 4k 🧩
Phi-3-Mini-128K 128k 🧩
Phi-3-Medium-4K 4k 🧩
Phi-3-Medium-128K 128k 🧩
Qwen2 0.5B 32k 🧩
Qwen2 1.5B 32k 🧩
Qwen2 7B 128k 🧩
Llama3 8B ⚠️ 8k 📄 🧩 280/s 🚀 800/s 📄 194/s 🧩 133/s 📄 121/s 🧩 🧩
Llama3 70B ⚠️ 8k 🧩 🧩 116/s 🚀 270/s 📄 105/s 🧩 26/s 📄 42/s 🧩 🧩
Gemma 7B ⚠️ 8k 🚀 800/s 📄 118/s 🧩 64/s 🧩
DBRX ⚠️ 32k 🧩 50/s 📄 72/s 🧩
Qwen2 72B ⚠️ 128k 🧩
Qwen1.5 32B ⚠️ 32k 📄 🧩
Command R 128k 📄
Command R+ 128k 📄

Throughputs from https://artificialanalysis.ai/leaderboards/providers.

Fine Tune

💼 ctx Base Ollama Fireworks Together DeepInfra OctoAI
Hermes 2 Pro Mistral 7B Mistral 7B 🧩 🧩 🧩
FireFunction V1 Mixtral 8x7B 🚀
WizardLM 2 7B Mistral 7B 🧩
WizardLM 2 8x22B Mixtral 8x7B 📄 🧩 🧩
Capybara 34B 200k Yi 34B 🧩
Hermes 2 Pro Llama3 8B ⚠️ Llama3 8B 📄
Hermes 2 Theta Llama3 8B ⚠️ Llama3 8B 📄
Dolphin 2.9 ⚠️ 8k Llama3 8B 🧩 📄 🧩

Proprietary

| Provider | Model | ctx | Strategy / Throughput |
|---|---|---|---|
| Mistral | Large | 32k | ✅ 26/s |
| OpenAI | GPT-4o | 128k | 🚀 83/s |
| OpenAI | GPT-4o mini | 128k | 🚀 140/s |
| OpenAI | GPT-4 Turbo | 128k | 🚀 28/s |
| OpenAI | GPT-3.5 Turbo | 16k | 🚀 72/s |
| Anthropic | Claude 3 Haiku | 200k | 📄 88/s |
| Anthropic | Claude 3 Sonnet | 200k | 📄 59/s |
| Anthropic | Claude 3 Opus | 200k | 📄 26/s |
| Google | Gemini 1.5 Flash | 1000k | 🧩 136/s |
| Google | Gemini 1.5 Pro | 1000k | 🧩 57/s |
| Perplexity | Sonar Small Chat | 16k | 📄 |
| Perplexity | Sonar Small Online | 12k | 📄 |
| Perplexity | Sonar Medium Chat | 16k | 📄 |
| Perplexity | Sonar Medium Online | 12k | 📄 |

Throughputs from https://artificialanalysis.ai/leaderboards/providers.

Automating updates to these tables by scraping https://artificialanalysis.ai, along with the Chatbot Arena Elo, could be a good use case / showcase of this library/CLI.

Custom Models

Ollama

If you want to use an Ollama model that is not available in the enum, you can use the Ollama::create static method:
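A sketch of that call; the signature (model tag plus context length) is an assumption:

```php
<?php

use AdrienBrault\Instructrice\InstructriceFactory;
use AdrienBrault\Instructrice\LLM\Provider\Ollama;

// Pull the model first: ollama pull qwen2:1.5b
// Assumed signature: Ollama::create(string $modelTag, int $contextLength).
$instructrice = InstructriceFactory::create(
    llm: Ollama::create('qwen2:1.5b', 32_000),
);
```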

OpenAI

You can also use any OpenAI compatible api by passing an LLMConfig:
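For instance, pointing at a local vLLM or llama.cpp server. The `LLMConfig` class name comes from this page; its constructor arguments below are guesses:

```php
<?php

use AdrienBrault\Instructrice\InstructriceFactory;
use AdrienBrault\Instructrice\LLM\LLMConfig;

// Any OpenAI-compatible chat completions endpoint should work here.
// Constructor arguments are an assumed shape, not the confirmed API.
$instructrice = InstructriceFactory::create(
    llm: new LLMConfig(
        uri: 'http://localhost:8000/v1/chat/completions',
        model: 'my-local-model',
        contextWindow: 16_000,
    ),
);
```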

DSN

You may configure the LLM using a DSN:

Examples:
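The exact DSN grammar is not shown on this page; the string below follows a common provider-as-scheme convention and is illustrative only:

```php
<?php

use AdrienBrault\Instructrice\InstructriceFactory;

// Hypothetical DSN: the scheme selects the provider, the API key rides
// along as the password, and the model is a query parameter.
$instructrice = InstructriceFactory::create(
    llm: 'openai://:sk-your-key@api.openai.com?model=gpt-4o-mini',
);
```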

LLMInterface

You may also implement LLMInterface.
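Implementing the interface yourself is useful for tests or unsupported backends. The method shown below is a guess at its shape; check the real interface before relying on it:

```php
<?php

use AdrienBrault\Instructrice\LLM\LLMInterface;

// Sketch only: the real interface's method names and signatures may differ.
class StubLLM implements LLMInterface
{
    public function get(array $schema, string $context): mixed
    {
        // Return a canned structure matching $schema; handy in unit tests.
        return ['name' => 'stub'];
    }
}
```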

Acknowledgements

Obviously inspired by instructor-php and instructor.

How is it different from instructor-php?

Both libraries essentially do the same thing:

However, instructrice differs with:

Notes/Ideas

Things to look into:

DSPy is very interesting. There are great ideas to be inspired by.

Ideally this library is good to prototype with, but it can also support more advanced extraction workflows: few-shot examples, some sort of eval system, generating samples/outputs like DSPy, etc.

It would be cool to have a CLI that accepts an FQCN and a context.

Autosave all inputs/schemas/outputs in an SQLite DB, like llm does? Leverage that to test examples, add few-shots, evals?


All versions of instructrice with dependencies

Requires:

  • php: ^8.1
  • adrienbrault/partial-json: ^1.2
  • api-platform/json-schema: ^3.0
  • azjezz/psl: ^2.0
  • gioni06/gpt3-tokenizer: ^1.0
  • guzzlehttp/guzzle: ^7.0
  • nesbot/carbon: ^2 || ^3
  • phpdocumentor/reflection-docblock: ^5.0
  • phpdocumentor/type-resolver: ^1.0
  • symfony/property-access: ^6.0 || ^7.0
  • symfony/property-info: ^6.0 || ^7.0
Composer command for our command line client (download client): this client runs in every environment, and you don't need a specific PHP version. The first 20 API calls are free. Standard Composer command:

composer require kargnas/instructrice
