Download the PHP package nlpcloud/nlpcloud-client without Composer

On this page you can find all versions of the PHP package nlpcloud/nlpcloud-client. You can download/install these versions without Composer. Dependencies are resolved automatically.

FAQ

After the download, you only need a single include: require_once('vendor/autoload.php');. After that, import the classes with use statements.

Example:
If you use only one package, a project is not needed. But if you use more than one package, it is not possible to import the classes with use statements without a project.

In general, it is recommended to always use a project to download your libraries, since an application normally needs more than one library.
Some PHP packages are not free to download and are therefore hosted in private repositories. In this case, credentials are needed to access such packages. Please use the auth.json textarea to insert credentials if a package comes from a private repository. You can look here for more information.

  • Some hosting environments are not accessible via a terminal or SSH, so Composer cannot be used there.
  • Using Composer can be complicated, especially for beginners.
  • Composer needs a lot of resources, which are sometimes not available on a simple webspace.
  • If you are using private repositories, you don't need to share your credentials. You can set up everything on our site and then provide a simple download link to your team members.
  • Simplify your Composer build process: use our command line tool to download the vendor folder as a binary. This makes your build process faster, and you don't need to expose your credentials for private repositories.

Information about the package nlpcloud-client

PHP Client For NLP Cloud

This is the PHP client for the NLP Cloud API. See the documentation for more details.

NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, dialogue summarization, paraphrasing, intent classification, product description and ad generation, chatbot, grammar and spelling correction, keywords and keyphrases extraction, text generation, image generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.

You can either use the NLP Cloud pre-trained models, fine-tune your own models, or deploy your own models.

If you face an issue, don't hesitate to raise it as a GitHub issue. Thanks!

Installation

Install via Composer.

Create a composer.json file containing at least the following:
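For instance, a minimal composer.json could look like this (the version constraint `*` is illustrative; pin a version in real projects):

```json
{
    "require": {
        "nlpcloud/nlpcloud-client": "*"
    }
}
```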

Then launch the following:
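Presumably the standard install command, run from the directory containing composer.json:

```shell
composer install
```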

Examples

Here is a full example that summarizes a text using Facebook's Bart Large CNN model, with a fake token:
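A sketch of what such a call might look like (the token below is a fake placeholder; replace it with your own):

```php
<?php
require 'vendor/autoload.php';

// Fake token for illustration only.
$client = new \NLPCloud\NLPCloud('bart-large-cnn', '4eC39HqLyjWDarjtT1zdp7dc');

// Summarize a block of text and print the raw JSON response.
echo json_encode($client->summarization('The field of natural language processing has advanced rapidly in recent years, driven by ever larger pre-trained models and cheaper compute. These models now power summarization, translation, and question answering in production systems.'));
```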

Here is a full example that does the same thing, but on a GPU:
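The same sketch, passing true as the 3rd constructor argument to request a GPU:

```php
<?php
require 'vendor/autoload.php';

// Fake token for illustration only; the 3rd argument enables GPU.
$client = new \NLPCloud\NLPCloud('bart-large-cnn', '4eC39HqLyjWDarjtT1zdp7dc', true);

echo json_encode($client->summarization('The field of natural language processing has advanced rapidly in recent years, driven by ever larger pre-trained models and cheaper compute.'));
```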

Here is a full example that does the same thing, but on a French text:
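The same sketch again, with the multilingual add-on enabled via the 4th constructor argument so a French text can be summarized:

```php
<?php
require 'vendor/autoload.php';

// Fake token for illustration only; 'fra_Latn' enables the multilingual add-on for French.
$client = new \NLPCloud\NLPCloud('bart-large-cnn', '4eC39HqLyjWDarjtT1zdp7dc', true, 'fra_Latn');

echo json_encode($client->summarization('Le traitement automatique du langage naturel a progressé rapidement ces dernières années, porté par des modèles pré-entraînés toujours plus grands.'));
```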

A json object is returned:
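For the summarization calls above, the returned object carries the summary in a summary_text field (shown here with a placeholder value):

```json
{
    "summary_text": "<the generated summary>"
}
```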

Usage

Client Initialization

Pass the model you want to use and the NLP Cloud token to the client during initialization.
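A minimal sketch, with model and token as placeholders:

```php
<?php
require 'vendor/autoload.php';

$client = new \NLPCloud\NLPCloud('<model>', '<your token>');
```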

The model can either be a pretrained model like en_core_web_lg, bart-large-mnli..., or one of your custom models, using custom_model/<model id> (e.g. custom_model/2568).

Your token can be retrieved from your NLP Cloud dashboard.

If you want to use a GPU, pass true as a 3rd argument.

If you want to use the multilingual add-on in order to process non-English texts, pass your language code as a 4th argument. For example, to process French text, pass 'fra_Latn'.

If you want to make asynchronous requests, pass true as a 5th argument.
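Putting the options together, the constructor calls might look like this (model and token are placeholders; in this sketch the asynchronous flag is assumed to follow the language code, so check the client source if in doubt):

```php
// GPU:
$client = new \NLPCloud\NLPCloud('<model>', '<your token>', true);

// GPU + multilingual add-on for French input:
$client = new \NLPCloud\NLPCloud('<model>', '<your token>', true, 'fra_Latn');

// Asynchronous mode (an empty string is passed when no language code is needed):
$client = new \NLPCloud\NLPCloud('<model>', '<your token>', true, '', true);
```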

If you are making asynchronous requests, you will always receive a quick response containing a URL. You should then poll this URL with asyncResult() on a regular basis (every 10 seconds for example) in order to check if the result is available. Here is an example:

The above command returns an object if the response is available. Otherwise, it returns nothing (NULL).
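A polling sketch, assuming the initial asynchronous request returned an object whose url property points at the result:

```php
// $response->url is the URL returned by the initial asynchronous request.
$result = $client->asyncResult($response->url);

if ($result === NULL) {
    // Result not ready yet: wait (10 seconds, for example) and poll again.
} else {
    echo json_encode($result);
}
```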

Automatic Speech Recognition (Speech to Text) Endpoint

Call the asr() method and pass the following arguments:

  1. (Optional: either this or the encoded file should be set) url: a URL where your audio or video file is hosted
  2. (Optional: either this or the url should be set) encodedFile: a base-64-encoded version of your file
  3. (Optional) inputLanguage: the language of your file, as an ISO code
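A sketch, assuming $client was initialized with an ASR model (whisper, for example) and passing NULL for the unused encodedFile argument:

```php
echo json_encode($client->asr('https://example.com/your-audio-file.mp3', NULL, 'en'));
```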

The above command returns an object.

Chatbot Endpoint

Call the chatbot() method and pass your input. As an option, you can also pass a context and a conversation history that is an array of named arrays. Each named array is made of an input and a response from the chatbot.
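A sketch, assuming $client was initialized with a chatbot-capable model; the history structure follows the description above (named arrays with input and response keys):

```php
$history = array(
    array('input' => 'Hello friend', 'response' => 'Hello! How can I help you?'),
);

echo json_encode($client->chatbot('Can you recommend a good book?', 'This is a friendly discussion between two people.', $history));
```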

The above command returns an object.

Classification Endpoint

Call the classification() method and pass 3 arguments:

  1. The text you want to classify, as a string
  2. The candidate labels for your text, as an array of strings
  3. Whether the classification should be multi-class or not, as a boolean
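A sketch, assuming $client was initialized with a classification model such as bart-large-mnli:

```php
echo json_encode($client->classification(
    'John Doe is a Go Developer at Google.',
    array('job', 'nature', 'space'),
    true // multi-class classification
));
```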

The above command returns an object.

Code Generation Endpoint

Call the codeGeneration() method and pass the description of your program:
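For instance (assuming $client holds a code generation model):

```php
echo json_encode($client->codeGeneration('a PHP function that sums two numbers'));
```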

The above command returns an object.

Dependencies Endpoint

Call the dependencies() method and pass the text you want to perform part of speech tagging (POS) + arcs on.
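For instance (assuming $client holds a spaCy-style model such as en_core_web_lg):

```php
echo json_encode($client->dependencies('John Doe is a Go Developer at Google.'));
```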

The above command returns an object.

Embeddings Endpoint

Call the embeddings() method and pass an array of blocks of text that you want to extract embeddings from.
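A sketch, assuming $client holds an embeddings model:

```php
echo json_encode($client->embeddings(array(
    'John does like soccer.',
    'Anna does not like soccer.',
)));
```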

The above command returns an object.

Entities Endpoint

Call the entities() method and pass the text you want to perform named entity recognition (NER) on.
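For instance (assuming $client holds an NER model):

```php
echo json_encode($client->entities('John Doe has been working for Microsoft in Seattle since 1999.'));
```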

The above command returns an object.

Generation Endpoint

Call the generation() method and pass the following arguments:

  1. The block of text that starts the generated text. 256 tokens maximum for GPT-J on CPU, 1024 tokens maximum for GPT-J and GPT-NeoX 20B on GPU, and 2048 tokens maximum for Fast GPT-J and Finetuned GPT-NeoX 20B on GPU.
  2. (Optional) max_length: The maximum number of tokens that the generated text should contain. 256 tokens maximum for GPT-J on CPU, 1024 tokens maximum for GPT-J and GPT-NeoX 20B on GPU, and 2048 tokens maximum for Fast GPT-J and Finetuned GPT-NeoX 20B on GPU. If length_no_input is false, the size of the generated text is the difference between max_length and the length of your input text. If length_no_input is true, the size of the generated text simply is max_length. Defaults to 50.
  3. (Optional) length_no_input: Whether min_length and max_length should not include the length of the input text, as a boolean. If false, min_length and max_length include the length of the input text. If true, min_length and max_length don't include the length of the input text. Defaults to false.
  4. (Optional) end_sequence: A specific token that should be the end of the generated sequence, as a string. For example, it could be . or \n or ### or anything else below 10 characters.
  5. (Optional) remove_input: Whether you want to remove the input text from the result, as a boolean. Defaults to false.
  6. (Optional) num_beams: Number of beams for beam search. 1 means no beam search. This is an integer. Defaults to 1.
  7. (Optional) num_return_sequences: The number of independently computed returned sequences for each element in the batch, as an integer. Defaults to 1.
  8. (Optional) top_k: The number of highest probability vocabulary tokens to keep for top-k-filtering, as an integer. Maximum 1000 tokens. Defaults to 0.
  9. (Optional) top_p: If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation. This is a float. Should be between 0 and 1. Defaults to 0.7.
  10. (Optional) temperature: The value used to module the next token probabilities, as a float. Should be between 0 and 1. Defaults to 1.
  11. (Optional) repetition_penalty: The parameter for repetition penalty, as a float. 1.0 means no penalty. Defaults to 1.0.
  12. (Optional) bad_words: List of tokens that are not allowed to be generated, as a list of strings. Defaults to null.
  13. (Optional) remove_end_sequence: Whether you want to remove the end_sequence string from the result. Defaults to false.
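Only the first argument is required. A minimal sketch, assuming $client holds a text generation model such as fast-gpt-j (the optional parameters above are taken to be positional in this client, an assumption worth checking against the source):

```php
echo json_encode($client->generation('GPT-J is a powerful NLP model because'));
```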

Grammar and Spelling Correction Endpoint

Call the gsCorrection() method and pass the text you want to correct:
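For instance (assuming $client holds a model suited to correction):

```php
echo json_encode($client->gsCorrection('me be the very best player of ping pong'));
```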

The above command returns an object.

Image Generation Endpoint

Call the imageGeneration() method and pass the text you want to use to generate your image:
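For instance (assuming $client holds an image generation model such as stable-diffusion):

```php
echo json_encode($client->imageGeneration('a fox in the snow, digital art'));
```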

The above command returns an object.

Intent Classification Endpoint

Call the intentClassification() method and pass the text you want to extract intents from:
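For instance (assuming $client holds a suitable model):

```php
echo json_encode($client->intentClassification('Hello, I want to cancel my subscription. How can I do that?'));
```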

The above command returns an object.

Keywords and Keyphrases Extraction Endpoint

Call the kwKpExtraction() method and pass the text you want to extract keywords and keyphrases from:
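For instance (assuming $client holds a suitable model):

```php
echo json_encode($client->kwKpExtraction('Semantic search is a search technique that understands the intent and contextual meaning of a query, rather than matching keywords literally.'));
```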

The above command returns an object.

Language Detection Endpoint

Call the langdetection() method and pass the text you want to analyze.
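For instance (assuming $client holds a language detection model):

```php
echo json_encode($client->langdetection('John Doe is a Go Developer at Google. Il a travaillé à Paris pendant dix ans.'));
```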

The above command returns an object.

Paraphrasing Endpoint

Call the paraphrasing() method and pass the text you want to paraphrase.
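For instance (assuming $client holds a paraphrasing model):

```php
echo json_encode($client->paraphrasing('Language models keep getting bigger and more capable.'));
```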

The above command returns an object.

Question Answering Endpoint

Call the question() method and pass the following:

  1. Your question
  2. A context that the model will use to try to answer your question
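A sketch, assuming $client holds a question answering model such as roberta-base-squad2:

```php
echo json_encode($client->question(
    'When can plans be stopped?',
    'All NLP Cloud plans can be stopped anytime. You only pay for the time you used the service.'
));
```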

The above command returns an object.

Semantic Search Endpoint

Call the semanticSearch() method and pass your search query:
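For instance (assuming $client holds a semantic search model trained on your own data):

```php
echo json_encode($client->semanticSearch('Which plans can I stop anytime?'));
```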

The above command returns an object.

Semantic Similarity Endpoint

Call the semanticSimilarity() method and pass an array made up of 2 blocks of text that you want to compare.
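A sketch, assuming $client holds a similarity model:

```php
echo json_encode($client->semanticSimilarity(array(
    'John does like soccer.',
    'Anna does not like soccer.',
)));
```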

The above command returns an object.

Sentence Dependencies Endpoint

Call the sentenceDependencies() method and pass a block of text made up of several sentences you want to perform POS + arcs on.
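For instance (assuming $client holds a spaCy-style model):

```php
echo json_encode($client->sentenceDependencies('John Doe is a Go Developer at Google. Before that, he worked at Microsoft.'));
```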

The above command returns an object.

Sentiment Analysis Endpoint

Call the sentiment() method and pass the text you want to analyze the sentiment of:
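For instance (assuming $client holds a sentiment model such as distilbert-base-uncased-finetuned-sst-2-english):

```php
echo json_encode($client->sentiment('NLP Cloud proposes an amazing service!'));
```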

The above command returns an object.

Speech Synthesis Endpoint

Call the speechSynthesis() method and pass the text you want to convert to audio:
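For instance (assuming $client holds a speech synthesis model):

```php
echo json_encode($client->speechSynthesis('Hello, this is a test of speech synthesis.'));
```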

The above command returns a JSON object.

Summarization Endpoint

Call the summarization() method and pass the text you want to summarize.
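For instance (assuming $client holds a summarization model such as bart-large-cnn):

```php
echo json_encode($client->summarization('The field of natural language processing has advanced rapidly in recent years, driven by ever larger pre-trained models and cheaper compute.'));
```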

The above command returns an object.

Tokenization Endpoint

Call the tokens() method and pass the text you want to tokenize.
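For instance (assuming $client holds a spaCy-style model):

```php
echo json_encode($client->tokens('John is a Go Developer at Google.'));
```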

The above command returns an object.

Translation Endpoint

Call the translation() method and pass the text you want to translate.
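For instance (assuming $client holds a translation model; depending on the model, source and target language codes may also be required):

```php
echo json_encode($client->translation('John Doe has been working for Microsoft in Seattle since 1999.'));
```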

The above command returns an object.


All versions of nlpcloud-client with dependencies

Requires php Version >=7.2
nategood/httpful Version *

Our command line client (download client) runs in every environment; you don't need a specific PHP version. The first 20 API calls are free.
