Download the PHP package patrickschur/language-detection without Composer
On this page you can find all versions of the PHP package patrickschur/language-detection. These versions can be downloaded and installed without Composer; any dependencies are resolved automatically.
Package: language-detection
Short Description: A language detection library for PHP. Detects the language from a given text string.
License: MIT
Homepage: https://github.com/patrickschur/language-detection
Information about the package language-detection
language-detection
[Badges: Build Status | Code Coverage | Version | Total Downloads | Minimum PHP Version | License]
This library can detect the language of a given text string. It can parse training text in many different languages into a sequence of N-grams and build a database file in PHP to be used in the detection phase. It can then take a given text and detect its language using the database generated in the training phase. The library comes with text samples used for training and detecting text in 110 languages.
Table of Contents
- Installation with Composer
- How to upgrade from 3.y.z to 4.y.z?
- Basic Usage
- API
- Method Chaining
- Array Access
- List of supported languages
- Other languages
- FAQ
- Contributing
- License
Installation with Composer
Note: This library requires the Multibyte String extension in order to work.
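The library can be installed with the usual Composer command:

```bash
composer require patrickschur/language-detection
```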
How to upgrade from 3.y.z to 4.y.z?
Important: Only for people who are using a custom directory with their own translation files.
Starting with version 4.y.z we have updated the resource files. For performance reasons we now use PHP instead of JSON as the format. That means anyone who wants to use 4.y.z and used 3.y.z before has to upgrade their JSON files to PHP. To upgrade your resource files, you must generate a language profile again. The JSON files are then no longer needed.
You can delete unnecessary JSON files under Linux with the following command.
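The original snippet is not reproduced here; assuming the default resources layout, a find invocation along these lines does the job:

```bash
find resources/*/*.json -type f -delete
```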
Basic Usage
To detect the language correctly, the input text should be at least a few sentences long.
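A minimal sketch of the basic usage (the scores in the comment are illustrative):

```php
use LanguageDetection\Language;

$ld = new Language;

$ld->detect('Mag het een onsje meer zijn?')->close();
// e.g. ['nl' => 0.66, 'af' => 0.58, 'nn' => 0.48, ...] (illustrative scores)
```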
API
__construct(array $result = [], string $dirname = '')
You can pass an array of languages to the constructor to compare the input sentence only against the given languages. This can dramatically increase performance. The second parameter is optional and names the directory where the translation files are located.
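A sketch of both parameters; the directory path is a hypothetical example:

```php
use LanguageDetection\Language;

// Compare the input only against Dutch, German and English, and load the
// translation files from a custom directory (hypothetical path).
$ld = new Language(['nl', 'de', 'en'], __DIR__ . '/translations');
```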
whitelist(string ...$whitelist)
Provide a whitelist. Only the given languages will be kept in the result.
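For example (illustrative scores):

```php
$ld->detect('Mag het een onsje meer zijn?')->whitelist('de', 'nn', 'nl', 'af')->close();
// e.g. ['nl' => 0.66, 'af' => 0.58, 'nn' => 0.48, 'de' => 0.48]
```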
blacklist(string ...$blacklist)
Provide a blacklist. Removes the given languages from the result.
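For example:

```php
$ld->detect('Mag het een onsje meer zijn?')->blacklist('nl', 'af')->close();
// the result no longer contains the entries for 'nl' and 'af'
```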
bestResults()
Returns the best results.
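For example (illustrative score):

```php
$ld->detect('Mag het een onsje meer zijn?')->bestResults()->close();
// e.g. ['nl' => 0.66]
```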
limit(int $offset, int $length = null)
You can specify the number of records to return. For example, the following code will return the top three entries.
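For example (illustrative scores):

```php
$ld->detect('Mag het een onsje meer zijn?')->limit(0, 3)->close();
// e.g. ['nl' => 0.66, 'af' => 0.58, 'nn' => 0.48]
```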
close()
Returns the result as an array.
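For example:

```php
$result = $ld->detect('Mag het een onsje meer zijn?')->close();
// $result is a plain PHP array mapping language codes to scores
```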
setTokenizer(TokenizerInterface $tokenizer)
The library uses a tokenizer to extract the words of a sentence. You can define your own tokenizer, for example to deal with numbers.
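A sketch of a custom tokenizer, assuming the package ships a TokenizerInterface under LanguageDetection\Tokenizer:

```php
use LanguageDetection\Tokenizer\TokenizerInterface;

$ld->setTokenizer(new class implements TokenizerInterface
{
    public function tokenize(string $str): array
    {
        // Split on everything that is not a lowercase letter or a digit.
        return preg_split('/[^a-z0-9]/u', strtolower($str), -1, PREG_SPLIT_NO_EMPTY);
    }
});
```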
This tokenizer will return only lowercase letters of the alphabet and the digits 0 to 9.
__toString()
Returns the top entry of the result. Note the echo at the beginning.
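For example:

```php
echo $ld->detect('Mag het een onsje meer zijn?');
// prints the top language code, e.g. "nl"
```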
jsonSerialize()
Serializes the data to JSON.
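A sketch; assuming the result object implements JsonSerializable, it can be passed straight to json_encode() (output illustrative):

```php
echo json_encode($ld->detect('Mag het een onsje meer zijn?'));
// e.g. {"nl":0.66,"af":0.58,...}
```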
Method chaining
You can also combine methods with each other. The following example will remove all entries specified in the blacklist and return only the top four entries.
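For example:

```php
$ld->detect('Mag het een onsje meer zijn?')
   ->blacklist('af')
   ->limit(0, 4)
   ->close();
// top four entries, with 'af' removed
```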
ArrayAccess
You can also access the object directly as an array.
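For example (illustrative score):

```php
$sentence = $ld->detect('Das Wasser ist sehr heiß.');

echo $sentence['de'];
// e.g. 0.86
```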
Supported languages
The library currently supports 110 languages. To get an overview of all supported languages, please have a look at the repository.
Other languages
The library is trainable, which means you can change, remove, and add your own language files. If your language is not supported, feel free to add your own language files. To do that, create a new directory in resources and add your training text to it.
Note: The training text should be a .txt file.
Example
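A sketch of the directory layout, with hypothetical spam/ham training files:

```
resources/
├── ham/
│   └── ham.txt
└── spam/
    └── spam.txt
```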
As you can see, we can also use it to detect spam or ham.
If you store your translation files outside of resources, you have to specify the path.
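The path goes into the second constructor parameter described in the API section above (hypothetical path):

```php
$ld = new Language([], __DIR__ . '/YOUR_DIRECTORY');
```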
Whenever you change one of the translation files you must first generate a language profile for it. This may take a few seconds.
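A sketch of generating a profile, assuming the package's Trainer class:

```php
use LanguageDetection\Trainer;

$t = new Trainer();

$t->learn();
```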
Remove these few lines after execution; after that we can classify texts by their language with our own training text.
FAQ
How can I improve the detection phase?
To improve the detection phase you have to use more n-grams. But be careful: this will slow down the script. I found that detection is much better when using around 9,000 n-grams (the default is 310). To do that, look at the code right below:
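A sketch of the training side, assuming a setMaxNgrams() setter on the trainer:

```php
use LanguageDetection\Trainer;

$t = new Trainer();

// Use roughly 9,000 n-grams instead of the default 310.
$t->setMaxNgrams(9000);

$t->learn();
```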
First you have to train it. Now you can classify texts like before but you must specify how many n-grams you want to use.
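A sketch of the detection side, assuming the same setter exists on the Language class (output illustrative):

```php
use LanguageDetection\Language;

$ld = new Language();

// Must match the number of n-grams used during training.
$ld->setMaxNgrams(9000);

echo $ld->detect('Mag het een onsje meer zijn?');
// e.g. "nl"
```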
Is the detection process slower if language files are very big?
No, it is not. The trainer class will only use the best 310 n-grams of each language. If you don't change this number or add more language files, it will not affect performance. Only creating the N-grams is slower, and that must be done only once. The detection phase is only affected when you try to detect big chunks of text.
Summary: The training phase will be slower but the detection phase remains the same.
Contributing
Feel free to contribute. Any help is welcome.
License
This project is licensed under the terms of the MIT license.
All versions of language-detection with dependencies
- ext-mbstring (version *)
- ext-json (version *)