GoLiveHost Brain
A comprehensive PHP neural network library for machine learning and artificial intelligence applications.
Developed by: Go Live Web Solutions (golive.host)
Author: Shubhdeep Singh (GitHub.com/shubhdeepdev)
⚠️ IMPORTANT NOTE
The examples provided are for demonstration purposes only and are trained on very limited datasets.
The outputs may not be accurate or reliable for real-world applications.
For production use, you must train models on larger, representative datasets and thoroughly validate their performance before deployment.
Table of Contents
- Installation
- Features
- Quick Start
- Neural Network Types
- Advanced Features
- Configuration Options
- Examples
- API Reference
- Contributing
- License
Installation
Install via Composer:
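The install command itself is missing from this page; for a package named golivehost/brain it would be:

```shell
composer require golivehost/brain
```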
Features
Neural Network Architectures
- Feedforward Neural Network - Traditional multi-layer perceptron
- Recurrent Neural Network (RNN) - For sequential data processing
- Long Short-Term Memory (LSTM) - Advanced RNN with memory cells
- Gated Recurrent Unit (GRU) - Simplified LSTM variant
- Liquid State Machine (LSM) - Reservoir computing for complex temporal patterns
Training Features
- Multiple Optimization Algorithms
  - Stochastic Gradient Descent (SGD) with momentum
  - Adam optimizer
  - RMSprop optimizer
  - AdaGrad optimizer
- Advanced Training Options
  - Batch training with configurable batch sizes
  - Learning rate decay
  - Dropout regularization
  - Early stopping with patience
  - Gradient clipping
  - Model checkpointing
- Activation Functions
  - Sigmoid
  - Tanh
  - ReLU
  - Leaky ReLU
  - Softmax
  - Linear
Data Processing
- Preprocessing Utilities
  - Data normalization
  - Data formatting for sequences
  - One-hot encoding
- Evaluation Tools
  - Cross-validation (k-fold, stratified k-fold, leave-one-out)
  - Train-test split
  - Multiple evaluation metrics (MSE, RMSE, R², accuracy, precision, recall, F1)
- Model Management
  - Model serialization (JSON)
  - Model export to standalone PHP
  - Model checkpointing during training
Additional Features
- Batch Normalization - For improved training stability
- Matrix and Tensor Operations - Comprehensive mathematical utilities
- Model Validation - Input validation and error handling
- GPU Support Consideration - Architecture designed for future GPU acceleration
- PHP 8.0+ Compatibility - Modern PHP features and type safety
Quick Start
Basic Neural Network (XOR Problem)
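The original code sample did not survive on this page; the following is a minimal sketch of what XOR training might look like. The class name `GoLiveHost\Brain\NeuralNetwork` and the `train()`/`run()` method names are assumptions, not the library's verified API; the option keys come from the configuration table below.

```php
<?php
// Hypothetical sketch -- class and method names are assumptions.
require 'vendor/autoload.php';

use GoLiveHost\Brain\NeuralNetwork;

$net = new NeuralNetwork([
    'hiddenLayers' => [3],
    'activation'   => 'sigmoid',
    'iterations'   => 20000,
    'errorThresh'  => 0.005,
]);

// XOR truth table as training data
$net->train([
    ['input' => [0, 0], 'output' => [0]],
    ['input' => [0, 1], 'output' => [1]],
    ['input' => [1, 0], 'output' => [1]],
    ['input' => [1, 1], 'output' => [0]],
]);

$output = $net->run([1, 0]); // should approach [1] after training
```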
Neural Network Types
Using the Brain Factory
The Brain class provides a convenient factory for creating different types of neural networks:
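The factory code itself is missing from this page; a sketch of how such a factory might be called (the static method names here are assumptions, not documented API):

```php
<?php
// Hypothetical sketch -- factory method names are assumptions.
require 'vendor/autoload.php';

use GoLiveHost\Brain\Brain;

$feedforward = Brain::neuralNetwork(['hiddenLayers' => [10]]);
$lstm        = Brain::lstm(['hiddenLayers' => [20]]);
$gru         = Brain::gru(['hiddenLayers' => [20]]);
$lsm         = Brain::liquidStateMachine(['reservoirSize' => 100]);
```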
LSTM for Time Series Prediction
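The example code is missing here; a sketch of sequence training, assuming a hypothetical `LSTM` class with `train()`/`run()` methods. The option keys and defaults follow the LSTM options table below.

```php
<?php
// Hypothetical sketch -- class and method names are assumptions.
require 'vendor/autoload.php';

use GoLiveHost\Brain\LSTM;

$lstm = new LSTM([
    'hiddenLayers' => [20],
    'activation'   => 'tanh',
    'learningRate' => 0.01,
    'clipGradient' => 5,
]);

// Each training sample is a sequence of input vectors with target vectors.
$lstm->train([
    ['input' => [[0.1], [0.2], [0.3]], 'output' => [[0.4]]],
    ['input' => [[0.2], [0.3], [0.4]], 'output' => [[0.5]]],
]);

$next = $lstm->run([[0.3], [0.4], [0.5]]);
```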
GRU for Sequential Data
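The GRU example is also missing; a short sketch assuming a hypothetical `GRU` class with the same sequence-based training format:

```php
<?php
// Hypothetical sketch -- class name and data format are assumptions.
require 'vendor/autoload.php';

use GoLiveHost\Brain\GRU;

$gru = new GRU(['hiddenLayers' => [20], 'learningRate' => 0.01]);

$gru->train([
    ['input' => [[0], [1], [0]], 'output' => [[1]]],
    ['input' => [[1], [0], [1]], 'output' => [[0]]],
]);

$prediction = $gru->run([[0], [1], [0]]);
```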
Liquid State Machine for Complex Patterns
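The LSM example is missing as well; a sketch assuming a hypothetical `LiquidStateMachine` class, with option keys and defaults taken from the Liquid State Machine options table below:

```php
<?php
// Hypothetical sketch -- class name is an assumption.
require 'vendor/autoload.php';

use GoLiveHost\Brain\LiquidStateMachine;

$lsm = new LiquidStateMachine([
    'reservoirSize'  => 100,
    'connectivity'   => 0.1,
    'spectralRadius' => 0.9,
    'leakingRate'    => 0.3,
    'washoutPeriod'  => 10,
]);

$lsm->train([
    ['input' => [[0.0], [0.5], [1.0]], 'output' => [[1]]],
]);

$response = $lsm->run([[0.5], [1.0], [0.5]]);
```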
Advanced Features
Cross-Validation
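The library's own cross-validation helper is not shown on this page; as a standalone illustration of how k-fold index splitting works (plain PHP, independent of the library's API):

```php
<?php
// Split n sample indices into k folds, distributing any remainder
// across the first folds so sizes differ by at most one.
function kFoldIndices(int $n, int $k): array
{
    $indices = range(0, $n - 1);
    $foldSize = intdiv($n, $k);
    $remainder = $n % $k;
    $folds = [];
    $offset = 0;
    for ($i = 0; $i < $k; $i++) {
        $size = $foldSize + ($i < $remainder ? 1 : 0);
        $folds[] = array_slice($indices, $offset, $size);
        $offset += $size;
    }
    return $folds;
}
```

Each fold serves once as the validation set while the remaining folds are used for training.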
Model Checkpointing
Batch Normalization
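The library's batch normalization layer is not shown here; the underlying computation for a single feature can be sketched in plain PHP (this is the math, not the library's class):

```php
<?php
// Normalize a batch of values for one feature to zero mean and unit
// variance; epsilon guards against division by zero.
function batchNormalize(array $values, float $epsilon = 1e-8): array
{
    $n = count($values);
    $mean = array_sum($values) / $n;
    $variance = array_sum(array_map(fn($v) => ($v - $mean) ** 2, $values)) / $n;
    return array_map(fn($v) => ($v - $mean) / sqrt($variance + $epsilon), $values);
}
```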
Custom Optimizers
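The optimizer classes themselves are not documented on this page; as a plain-PHP illustration of the update rule behind SGD with momentum (using the table's default learning rate and momentum, not the library's optimizer API):

```php
<?php
// One SGD-with-momentum update: the velocity accumulates a decayed
// history of past gradients, then is applied to the weight.
function sgdMomentumStep(
    float $weight,
    float $gradient,
    float &$velocity,
    float $learningRate = 0.3,
    float $momentum = 0.1
): float {
    $velocity = $momentum * $velocity - $learningRate * $gradient;
    return $weight + $velocity;
}
```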
Data Preprocessing
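The library's preprocessing utilities are not shown on this page; two of the listed steps, min-max normalization and one-hot encoding, can be illustrated in standalone PHP (independent of the library's own utility classes):

```php
<?php
// Min-max normalization to the [0, 1] range.
function normalize(array $values): array
{
    $min = min($values);
    $max = max($values);
    $range = ($max - $min) ?: 1;
    return array_map(fn($v) => ($v - $min) / $range, $values);
}

// One-hot encoding: map each label to a binary vector with a single 1
// at the index of its class (classes sorted for a stable ordering).
function oneHot(array $labels): array
{
    $classes = array_values(array_unique($labels));
    sort($classes);
    $index = array_flip($classes);
    return array_map(function ($label) use ($index, $classes) {
        $vector = array_fill(0, count($classes), 0);
        $vector[$index[$label]] = 1;
        return $vector;
    }, $labels);
}
```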
Model Validation
Configuration Options
Neural Network Options
Option | Description | Default |
---|---|---|
inputSize | Number of input neurons | 0 (auto-detect) |
hiddenLayers | Array of hidden layer sizes | [10] |
outputSize | Number of output neurons | 0 (auto-detect) |
activation | Activation function | 'sigmoid' |
learningRate | Initial learning rate | 0.3 |
momentum | Momentum for SGD | 0.1 |
iterations | Maximum training iterations | 20000 |
errorThresh | Error threshold to stop training | 0.005 |
log | Enable training progress logging | false |
logPeriod | Iterations between log outputs | 10 |
dropout | Dropout rate for regularization | 0 |
decayRate | Learning rate decay factor | 0.999 |
batchSize | Batch size for training | 10 |
praxis | Optimization algorithm ('sgd', 'adam') | 'adam' |
beta1 | Adam optimizer beta1 | 0.9 |
beta2 | Adam optimizer beta2 | 0.999 |
epsilon | Adam optimizer epsilon | 1e-8 |
normalize | Auto-normalize data | true |
LSTM Options
Option | Description | Default |
---|---|---|
inputSize | Size of input vectors | 0 (auto-detect) |
hiddenLayers | Array of LSTM layer sizes | [20] |
outputSize | Size of output vectors | 0 (auto-detect) |
activation | Activation function | 'tanh' |
learningRate | Initial learning rate | 0.01 |
iterations | Maximum training iterations | 20000 |
clipGradient | Gradient clipping threshold | 5 |
batchSize | Batch size for training | 10 |
Liquid State Machine Options
Option | Description | Default |
---|---|---|
inputSize | Number of input neurons | 0 (auto-detect) |
reservoirSize | Size of the reservoir | 100 |
outputSize | Number of output neurons | 0 (auto-detect) |
connectivity | Reservoir connectivity ratio | 0.1 |
spectralRadius | Spectral radius of reservoir | 0.9 |
leakingRate | Leaking rate for neurons | 0.3 |
regularization | L2 regularization parameter | 0.0001 |
washoutPeriod | Initial timesteps to discard | 10 |
Examples
The library includes comprehensive examples in the examples directory:
- basic.php - Simple XOR problem demonstration
- advanced.php - Character recognition and time series prediction
- advanced-examples.php - Non-linear regression, LSTM time series, and LSM classification
- image-recognition.php - Simple shape recognition with neural networks
- text-generation.php - Character-level text generation using LSTM
- stock-prediction.php - Time series prediction for financial data
- cross-validation-example.php - Demonstration of cross-validation techniques
- model-checkpoint-example.php - Model checkpointing during training
- batch-normalization-example.php - Using batch normalization layers
- optimizer-example.php - Comparing different optimization algorithms
API Reference
Matrix Operations
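The matrix utility class itself is not documented on this page; as a plain-PHP illustration of the kind of operation such a class provides (this is not the library's actual Matrix API):

```php
<?php
// Multiply an (m x n) matrix by an (n x p) matrix, both represented
// as nested arrays of rows.
function matMul(array $a, array $b): array
{
    $rows = count($a);
    $inner = count($b);
    $cols = count($b[0]);
    $result = [];
    for ($i = 0; $i < $rows; $i++) {
        for ($j = 0; $j < $cols; $j++) {
            $sum = 0.0;
            for ($k = 0; $k < $inner; $k++) {
                $sum += $a[$i][$k] * $b[$k][$j];
            }
            $result[$i][$j] = $sum;
        }
    }
    return $result;
}
```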
Tensor Operations
Model Export
Error Handling
The library uses custom exceptions for better error handling:
Performance Considerations
- Memory Usage: The library is optimized for memory efficiency, especially in LSTM implementations
- Batch Processing: Use batch training for better performance with large datasets
- Learning Rate Decay: Helps achieve better convergence
- Early Stopping: Prevents overfitting and reduces training time
- Model Checkpointing: Save progress during long training sessions
Contributing
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
Development Setup
License
This library is licensed under the MIT License. See LICENSE file for details.
Support
- Issues: GitHub Issues
- Documentation: Wiki
- Examples: See the examples directory
Credits
Developed by Go Live Web Solutions (golive.host)
Author: Shubhdeep Singh (GitHub.com/shubhdeepdev)
Building intelligent PHP applications with neural networks made simple.