Gemini PHP for Laravel
Gemini PHP for Laravel is a community-maintained PHP API client that allows you to interact with the Gemini AI API.
- Fatih Aydin (github.com/aydinfatih)
For more information, take a look at the google-gemini-php/client repository.
Table of Contents
- Prerequisites
- Setup
- Installation
- Setup your API key
- Usage
- Chat Resource
- Text-only Input
- Text-and-image Input
- Multi-turn Conversations (Chat)
- Stream Generate Content
- Count tokens
- Configuration
- Embedding Resource
- Models
- List Models
- Get Model
- Testing
Prerequisites
To complete this quickstart, make sure that your development environment meets the following requirements:
- PHP 8.1+
- Laravel 9, 10, or 11
Setup
Installation
First, install Gemini via the Composer package manager:
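Assuming installation from Packagist, the command is:

```shell
composer require google-gemini-php/laravel
```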
Next, execute the install command:
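The Artisan command name below follows the package's documentation; treat it as an assumption if your version differs:

```shell
php artisan gemini:install
```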
This will create a config/gemini.php configuration file in your project, which you can modify to your needs using environment variables. A blank GEMINI_API_KEY environment variable is also appended to your .env file.
You can also define the following environment variables.
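A sketch of the resulting .env entries. GEMINI_API_KEY is the required one; GEMINI_BASE_URL and GEMINI_REQUEST_TIMEOUT are optional overrides whose names are assumed from the published config/gemini.php:

```
GEMINI_API_KEY=your-api-key
GEMINI_BASE_URL=
GEMINI_REQUEST_TIMEOUT=
```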
Setup your API key
To use the Gemini API, you'll need an API key. If you don't already have one, create a key in Google AI Studio.
Usage
Interact with Gemini's API:
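A minimal sketch using the package's Gemini facade (class and method names follow the package README for the 1.0 beta):

```php
use Gemini\Laravel\Facades\Gemini;

$result = Gemini::geminiPro()->generateContent('Hello!');

echo $result->text();
```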
Chat Resource
Text-only Input
Generate a response from the model given an input message. If the input contains only text, use the gemini-pro model.
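A text-only sketch (facade and method names per the package README):

```php
use Gemini\Laravel\Facades\Gemini;

$result = Gemini::geminiPro()->generateContent('Write a haiku about Laravel.');

echo $result->text();
```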
Text-and-image Input
If the input contains both text and image, use the gemini-pro-vision model.
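A sketch combining a prompt with an inline image. The Blob and MimeType classes are assumed to live under the client's Gemini\Data and Gemini\Enums namespaces:

```php
use Gemini\Data\Blob;
use Gemini\Enums\MimeType;
use Gemini\Laravel\Facades\Gemini;

$result = Gemini::geminiProVision()->generateContent([
    'What is this picture?',
    new Blob(
        mimeType: MimeType::IMAGE_JPEG,
        data: base64_encode(file_get_contents('image.jpg')) // any local JPEG
    ),
]);

echo $result->text();
```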
Multi-turn Conversations (Chat)
Using Gemini, you can build freeform conversations across multiple turns.
The gemini-pro-vision model (for text-and-image input) is not yet optimized for multi-turn conversations. Make sure to use gemini-pro and text-only input for chat use cases.
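A multi-turn sketch: startChat() accepts an optional history of Content items, and sendMessage() appends each new turn (names per the package README):

```php
use Gemini\Data\Content;
use Gemini\Enums\Role;
use Gemini\Laravel\Facades\Gemini;

$chat = Gemini::chat()->startChat(history: [
    Content::parse(part: 'Hello'),
    Content::parse(part: 'Great to meet you. What would you like to know?', role: Role::MODEL),
]);

$response = $chat->sendMessage('I have 2 dogs in my house.');
echo $response->text();

// Previous turns are sent along automatically
$response = $chat->sendMessage('How many paws are in my house?');
echo $response->text();
```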
Stream Generate Content
By default, the model returns a response after completing the entire generation process. You can achieve faster interactions by not waiting for the entire result, and instead use streaming to handle partial results.
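A streaming sketch: streamGenerateContent() returns an iterable of partial responses that can be consumed as they arrive:

```php
use Gemini\Laravel\Facades\Gemini;

$stream = Gemini::geminiPro()->streamGenerateContent('Write a long story about a magic backpack.');

foreach ($stream as $response) {
    echo $response->text(); // each chunk is printed as soon as it is generated
}
```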
Count tokens
When using long prompts, it might be useful to count tokens before sending any content to the model.
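A sketch (the totalTokens property name is taken from the client README):

```php
use Gemini\Laravel\Facades\Gemini;

$response = Gemini::geminiPro()->countTokens('Write a story about a magic backpack.');

echo $response->totalTokens;
```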
Configuration
Every prompt you send to the model includes parameter values that control how the model generates a response. The model can generate different results for different parameter values. Learn more about model parameters.
Also, you can use safety settings to adjust the likelihood of getting responses that may be considered harmful. By default, safety settings block content with medium and/or high probability of being unsafe content across all dimensions. Learn more about safety settings.
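A sketch combining both: withGenerationConfig() tunes sampling parameters and withSafetySetting() adjusts blocking thresholds. The data classes and enums are assumed to live under the client's Gemini\Data and Gemini\Enums namespaces:

```php
use Gemini\Data\GenerationConfig;
use Gemini\Data\SafetySetting;
use Gemini\Enums\HarmBlockThreshold;
use Gemini\Enums\HarmCategory;
use Gemini\Laravel\Facades\Gemini;

$safetySetting = new SafetySetting(
    category: HarmCategory::HARM_CATEGORY_HATE_SPEECH,
    threshold: HarmBlockThreshold::BLOCK_ONLY_HIGH
);

$generationConfig = new GenerationConfig(
    stopSequences: ['Title'],
    maxOutputTokens: 800,
    temperature: 1,
    topP: 0.8,
    topK: 10
);

$result = Gemini::geminiPro()
    ->withSafetySetting($safetySetting)
    ->withGenerationConfig($generationConfig)
    ->generateContent('Write a story about a magic backpack.');
```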
Embedding Resource
Embedding is a technique used to represent information as a list of floating point numbers in an array. With Gemini, you can represent text (words, sentences, and blocks of text) in a vectorized form, making it easier to compare and contrast embeddings. For example, two texts that share a similar subject matter or sentiment should have similar embeddings, which can be identified through mathematical comparison techniques such as cosine similarity.
Use the embedding-001 model with either embedContents or batchEmbedContents:
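A single-input sketch; embeddingModel() and the embedding->values shape follow the client README:

```php
use Gemini\Laravel\Facades\Gemini;

$response = Gemini::embeddingModel()->embedContent('Write a story about a magic backpack.');

print_r($response->embedding->values); // a list of floats
```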
Models
List Models
Use list models to see the available Gemini models:
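A sketch; the response exposes a models array of model metadata objects:

```php
use Gemini\Laravel\Facades\Gemini;

$response = Gemini::models()->list();

foreach ($response->models as $model) {
    echo $model->name, PHP_EOL;
}
```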
Get Model
Get information about a model, such as version, display name, input token limit, etc.
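A sketch retrieving one model's metadata (the ModelType enum is assumed from the client's Gemini\Enums namespace):

```php
use Gemini\Enums\ModelType;
use Gemini\Laravel\Facades\Gemini;

$response = Gemini::models()->retrieve(ModelType::GEMINI_PRO);

echo $response->model->displayName;
echo $response->model->inputTokenLimit;
```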
Testing
The package provides a fake implementation of the Gemini\Client class that allows you to fake the API responses. To test your code, swap the Gemini\Client class with the Gemini\Testing\ClientFake class in your test case.
The fake responses are returned in the order they are provided while creating the fake client.
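A sketch: Gemini::fake() swaps in the fake client behind the facade, and queued responses are served in order (the GenerateContentResponse namespace is an assumption):

```php
use Gemini\Laravel\Facades\Gemini;
use Gemini\Responses\GenerativeModel\GenerateContentResponse;

Gemini::fake([
    GenerateContentResponse::fake([
        'candidates' => [
            ['content' => ['parts' => [['text' => 'fake answer']]]],
        ],
    ]),
]);

// Served from the fake queue; no HTTP request is made
$result = Gemini::geminiPro()->generateContent('test');
echo $result->text();
```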
All responses have a fake() method that lets you easily create a response object by providing only the parameters relevant to your test case.
For a streamed response, you can optionally provide a resource holding the fake response data.
After the requests have been sent, various assertion methods let you verify that the expected requests were made:
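A sketch of one such assertion; the resource class and the callback signature mirror the fake client's assertSent() and should both be treated as assumptions:

```php
use Gemini\Laravel\Facades\Gemini;
use Gemini\Resources\GenerativeModel;
use Gemini\Responses\GenerativeModel\GenerateContentResponse;

Gemini::fake([GenerateContentResponse::fake()]);

Gemini::geminiPro()->generateContent('Hello!');

// Verify the method called and the prompt that was sent
Gemini::assertSent(resource: GenerativeModel::class, callback: function (string $method, array $parameters): bool {
    return $method === 'generateContent'
        && $parameters[0] === 'Hello!';
});
```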
To write tests expecting the API request to fail, you can provide a Throwable object as the response.
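A failure-test sketch: any Throwable queued as a response is thrown when the request is made (the plain RuntimeException here is just an example):

```php
use Gemini\Laravel\Facades\Gemini;

Gemini::fake([
    new \RuntimeException('API error'),
]);

// This call now throws the RuntimeException queued above
Gemini::geminiPro()->generateContent('test');
```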
Dependencies
- google-gemini-php/client: ^1.0.0-beta
- laravel/framework: ^9.0|^10.0|^11.0