Download the PHP package rubix/cifar-10 without Composer

On this page you can find all versions of the PHP package rubix/cifar-10. You can download/install these versions without Composer; dependencies are resolved automatically.

FAQ

After the download, you only need to add a single include, require_once('vendor/autoload.php');. After that, you can import the classes with use statements.

Example:
If you use only one package, a project is not needed. But if you use more than one package, it is not possible to import the classes with use statements without a project.

In general, it is recommended to always use a project when downloading your libraries, since an application normally needs more than one library.
Some PHP packages are not free to download and are therefore hosted in private repositories. In this case, credentials are needed to access such packages. Please use the auth.json textarea to enter credentials if a package comes from a private repository. You can look here for more information.

  • Some hosting environments are not accessible via a terminal or SSH. In that case it is not possible to use Composer.
  • Using Composer can be complicated, especially for beginners.
  • Composer requires significant resources, which are sometimes not available on a simple webspace.
  • If you are using private repositories, you don't need to share your credentials. You can set everything up on our site and then provide a simple download link to your team members.
  • Simplify your Composer build process: use our command line tool to download the vendor folder as a binary. This makes your build process faster, and you don't need to expose your credentials for private repositories.

Information about the package cifar-10

Rubix ML - CIFAR-10 Image Recognizer

CIFAR-10 (short for Canadian Institute For Advanced Research) is a famous dataset consisting of 60,000 32 x 32 color images in 10 classes (dog, cat, car, ship, etc.), with 6,000 images per class. In this tutorial, we'll use the CIFAR-10 dataset to train a feed-forward neural network to recognize the primary object in images.

Installation

Clone the project locally using Composer:
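The install command was not preserved on this page; the standard Composer command for a Rubix ML tutorial project is:

```shell
composer create-project rubix/cifar-10
```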

Note: Installation may take longer than usual due to the large dataset.

Requirements

Recommended

Tutorial

Introduction

Computer vision is one of the most fascinating use cases for deep learning because it allows a computer to see the world that we live in. Deep learning is a subset of machine learning concerned with breaking down raw data into higher order feature representations through layered computations. Neural networks are a type of deep learning model inspired by the human nervous system that uses structured computational units called hidden layers. In the case of image recognition, the hidden layers are able to break down an image into its component parts such that the network can readily comprehend the similarities and differences among objects by their characteristic features at the final output layer. Let's get started!

Extracting the Data

The CIFAR-10 dataset comes to us in the form of 60,000 32 x 32 pixel PNG image files, which we'll import as PHP resources into our project using the imagecreatefrompng() function provided by the GD extension. If you do not have the extension installed, you'll need to do so before running the project script. We also use preg_replace() to extract the label from the filename of the images in the train folder.
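The import loop might look like the following sketch. The `train/*.png` glob pattern and the filename convention (a label followed by a number, e.g. `frog_1.png`) are assumptions about the local dataset layout; adjust them to match your copy.

```php
$samples = $labels = [];

// Import each PNG as a GD image resource and derive the label
// from the filename (assumed to look like e.g. frog_1.png).
foreach (glob('train/*.png') as $file) {
    $samples[] = [imagecreatefrompng($file)];
    $labels[] = preg_replace('/[^a-z]/', '', basename($file, '.png'));
}
```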

Now, load the extracted samples and labels into a Labeled dataset object.
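A minimal sketch of this step, assuming `$samples` and `$labels` were populated as above:

```php
use Rubix\ML\Datasets\Labeled;

$dataset = new Labeled($samples, $labels);
```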

Dataset Preparation

The images we imported in the previous step will eventually need to be converted into samples of continuous features. An Image Resizer ensures that all images have the same dimensionality, just in case. The Image Vectorizer handles extracting the red, green, and blue (RGB) intensities (0 - 255) from the images. Finally, the Z Scale Standardizer scales and centers the vectorized color channel data to a mean of 0 and a standard deviation of 1. This last step helps the network converge quicker. We'll wrap the 3 transformers in a Pipeline so we can use them again in another process after we save the model.

Instantiating the Learner

The Multilayer Perceptron classifier is a type of neural network model we'll train to recognize images in the CIFAR-10 dataset. Under the hood, it uses Gradient Descent with Backpropagation to learn the weights of the network by gradually updating the signal that each neuron produces in response to an input. One of the key aspects of neural networks is the use of hidden layers that perform intermediate computations. In between Dense neuronal layers, we use an Activation layer to perform a non-linear transformation of the neuron's output using a user-defined activation function. The non-linearities introduced by the activation layer are crucial for learning complex patterns within the data. For the purpose of this tutorial we'll use the ELU activation function, which is a good default, but feel free to experiment with different activation functions on your own. A Dropout layer is added after the first two sets of Dense/Activation layers to act as a regularizer. Lastly, we'll add a Batch Norm layer to help the network train faster by re-normalizing the activations partway through the network.

Wrapping the learner and transformer pipeline in a Persistent Model meta-estimator allows us to save the model so we can use it in another process to make predictions.

There are a few more hyper-parameters of the MLP that we'll need to set in addition to the hidden layers. The batch size parameter is the number of samples that will be sent through the neural network at a time. We'll set this to 512. Next, the Gradient Descent optimizer and learning rate, which control the update step of the learning algorithm, will be set to Adam and 0.001 respectively. Feel free to experiment with these settings on your own.
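Putting the pipeline, hidden layers, and hyper-parameters together might look like the sketch below. The specific layer widths (200/100/50 neurons) are illustrative assumptions, not the tutorial's exact architecture; the transformers, batch size of 512, Adam optimizer at a 0.001 learning rate, and persister follow the text above.

```php
use Rubix\ML\PersistentModel;
use Rubix\ML\Pipeline;
use Rubix\ML\Transformers\ImageResizer;
use Rubix\ML\Transformers\ImageVectorizer;
use Rubix\ML\Transformers\ZScaleStandardizer;
use Rubix\ML\Classifiers\MultilayerPerceptron;
use Rubix\ML\NeuralNet\Layers\Dense;
use Rubix\ML\NeuralNet\Layers\Activation;
use Rubix\ML\NeuralNet\Layers\Dropout;
use Rubix\ML\NeuralNet\Layers\BatchNorm;
use Rubix\ML\NeuralNet\ActivationFunctions\ELU;
use Rubix\ML\NeuralNet\Optimizers\Adam;
use Rubix\ML\Persisters\Filesystem;

$estimator = new PersistentModel(
    new Pipeline([
        new ImageResizer(32, 32),     // enforce uniform dimensionality
        new ImageVectorizer(),        // extract RGB intensities
        new ZScaleStandardizer(),     // center and scale the features
    ], new MultilayerPerceptron([
        new Dense(200),
        new Activation(new ELU()),
        new Dropout(0.5),             // regularize the first two blocks
        new Dense(200),
        new Activation(new ELU()),
        new Dropout(0.5),
        new Dense(100),
        new Activation(new ELU()),
        new BatchNorm(),              // re-normalize activations mid-network
        new Dense(100),
        new Activation(new ELU()),
        new Dense(50),
        new Activation(new ELU()),
    ], 512, new Adam(0.001))),        // batch size 512, Adam @ 0.001
    new Filesystem('cifar10.rbx', true)
);
```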

Training

Now, pass the training dataset to the train() method to begin training the network.
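Assuming `$estimator` and the training `$dataset` from the previous steps:

```php
$estimator->train($dataset);
```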

Validation Score and Loss

We can visualize the training progress at each stage by dumping the values of the loss function and validation metric after training. The steps() method will output an iterator containing the loss values of the default Cross Entropy cost function and validation scores from the default F Beta metric.
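One way to dump the progress, sketched here with Rubix ML's CSV extractor writing to a hypothetical `progress.csv` file:

```php
use Rubix\ML\Extractors\CSV;

// Export the loss and validation score at each epoch to a CSV file.
$extractor = new CSV('progress.csv', true);

$extractor->export($estimator->steps());
```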

Note: You can change the cost function and validation metric by setting them as hyper-parameters of the learner.

Then, we can plot the values using our favorite plotting software such as Tableau or Excel. If all goes well, the value of the loss should go down as the value of the validation score goes up. Due to snapshotting, the epoch at which the validation score is highest and the loss is lowest is the point at which the values of the network parameters are taken.

Cross Entropy Loss

F1 Score

Saving

Before exiting the script, save the model so we can run cross validation on it in another process.
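Since the estimator is wrapped in a Persistent Model, saving is a single call:

```php
$estimator->save();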

Now we're ready to execute the training script from the command line.
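Assuming the training script is named `train.php` (the filename is an assumption based on the project's conventions):

```shell
php train.php
```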

Cross Validation

Cross validation is the process of testing a model using samples that the learner has never seen before. The goal is to be able to detect problems such as selection bias or overfitting. In addition to the training set, the CIFAR-10 dataset includes 10,000 testing samples that we'll use to score the model's generalization ability. We start by importing the testing samples and labels located in the test folder using the technique from earlier.

Instantiate a Labeled dataset object with the testing samples and labels.
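A sketch of both steps, mirroring the training import but reading from the `test` folder; the glob pattern and filename convention are assumptions about the local dataset layout:

```php
use Rubix\ML\Datasets\Labeled;

$samples = $labels = [];

foreach (glob('test/*.png') as $file) {
    $samples[] = [imagecreatefrompng($file)];
    $labels[] = preg_replace('/[^a-z]/', '', basename($file, '.png'));
}

$dataset = new Labeled($samples, $labels);
```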

Load Model from Storage

Since we saved our model after training in the last section, we can load it whenever we need to use it in another process. The static load() method on the Persistent Model class takes a pre-configured Persister object pointing to the location of the model in storage as its only argument and returns the wrapped estimator in the last known saved state.
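For example, assuming the model was saved to a file named `cifar10.rbx`:

```php
use Rubix\ML\PersistentModel;
use Rubix\ML\Persisters\Filesystem;

$estimator = PersistentModel::load(new Filesystem('cifar10.rbx'));
```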

Make Predictions

We'll need the predictions produced by the MLP on the testing set to pass to a report generator along with the ground-truth class labels. To return an array of predictions, pass the testing set to the predict() method on the estimator.
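Assuming `$estimator` and the testing `$dataset` from the previous steps:

```php
$predictions = $estimator->predict($dataset);
```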

Generate Reports

The Multiclass Breakdown and Confusion Matrix are cross validation reports that show performance of the model on a class by class basis. We'll wrap them both in an Aggregate Report and pass our predictions along with the ground-truth labels from the testing set to the generate() method to generate both reports at once.
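A sketch of the report generation, assuming `$predictions` and the testing `$dataset` from the previous steps:

```php
use Rubix\ML\CrossValidation\Reports\AggregateReport;
use Rubix\ML\CrossValidation\Reports\MulticlassBreakdown;
use Rubix\ML\CrossValidation\Reports\ConfusionMatrix;

$report = new AggregateReport([
    new MulticlassBreakdown(),
    new ConfusionMatrix(),
]);

// Compare the predictions to the ground-truth labels.
$results = $report->generate($predictions, $dataset->labels());

echo $results;
```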

To run the validation script, enter the following command at the command prompt.
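Assuming the validation script is named `validate.php` (the filename is an assumption based on the project's conventions):

```shell
php validate.php
```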

Take a look at the results to see how the model performed on inference. Below is an excerpt of a multiclass breakdown report showing the overall performance. As you can see, the model does a fair job at recognizing the objects in the images, however there is room for improvement.

This excerpt from the confusion matrix shows that the estimator does a good job identifying automobiles but sometimes confuses them for trucks, which makes sense since they are similar in many ways.

Next Steps

Congratulations on finishing the CIFAR-10 tutorial using Rubix ML! Now is your chance to experiment with other network architectures, activation functions, and learning rates on your own. Try adding additional hidden layers to deepen the network and add flexibility to the model. Is a fully-connected network the best architecture for this problem? Are there other network architectures that can use the spatial information of the images?

Original Dataset

Creator: Alex Krizhevsky Email: akrizhevsky '@' gmail.com

References

  • [1] A. Krizhevsky. (2009). Learning Multiple Layers of Features from Tiny Images.

License

The code is licensed CC BY-NC 4.0.


All versions of cifar-10 with dependencies

Requirements:
  • php: >=7.4
  • ext-gd: *
  • rubix/ml: ^1.0
