Download the PHP package ozcan39/ir_evaluation_php without Composer

On this page you can find all versions of the PHP package ozcan39/ir_evaluation_php. These versions can be downloaded and installed without Composer; any dependencies are resolved automatically.

FAQ

After the download, you only have to add one include, require_once('vendor/autoload.php');, and then import the classes with use statements.
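For instance, a minimal sketch of these two steps (the namespace and class names are placeholders, not taken from any real package):

    <?php
    // Load Composer's autoloader from the downloaded vendor folder.
    require_once('vendor/autoload.php');

    // Import a class with a use statement (placeholder name).
    use Acme\Demo\SomeClass;

    $object = new SomeClass();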

For example, if you use only one package, a project is not needed. But if you use more than one package, the classes cannot be imported with use statements without a project.

In general, it is recommended to always use a project to download your libraries, since an application normally needs more than one library.
Some PHP packages are not free to download and are therefore hosted in private repositories. In this case, credentials are needed to access such packages. Please use the auth.json textarea to insert credentials if a package comes from a private repository. You can look here for more information.

  • Some hosting environments are not accessible via a terminal or SSH, so Composer cannot be used there.
  • Using Composer is sometimes complicated, especially for beginners.
  • Composer needs a lot of resources, which are sometimes not available on a simple webspace.
  • If you are using private repositories, you don't need to share your credentials: set up everything on our site and then provide a simple download link to your team members.
  • Simplify your Composer build process: use our command line tool to download the vendor folder as a binary. This makes your build process faster, and you don't need to expose your credentials for private repositories.

Information about the package ir_evaluation_php

IREEL: Information Retrieval (IR) Effectiveness Evaluation Library for PHP

This library was created to evaluate any kind of algorithm used in IR systems and to analyze how well they perform. For this purpose, 14 different effectiveness measurements, covering the most commonly used ones in the literature, have been put together. They are as follows:

  • Precision based methods: AP@n, MAP, GMAP, IAP, R-Precision and F-Measure
  • Gain based methods: CG, NCG, DCG and NDCG
  • Mean Reciprocal Rank (MRR)
  • Rank-Biased Precision (RBP)
  • Expected Reciprocal Rank (ERR)
  • BPref

This library also includes 5 datasets, organized for learning and testing each method with its different parameters. Although the library was used dynamically on an online IR system with real user data, it can be used with static datasets as well.

Before Starting:

Some explanations about datasets

The shared datasets are plain text (txt) files, and the values in each row are separated by a pipe character (|). Even though the datasets have similar attributes, they were created in different formats and are used in different measurements, together or separately, to show how the methods work and what kind of attributes (parameters) they need. The attributes used in the datasets are as follows:

The parameters used in the methods

There are 5 different parameters used in the methods. Some of them are shared across methods, while others are method-specific. All parameters and their explanations are as follows:

Installing

The package can be installed using the code below:
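Presumably this is the standard Composer require command derived from the package name:

    composer require ozcan39/ir_evaluation_php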

Importing into a study

After installing the package, it is imported with the code below:
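A minimal sketch of that step; the namespace and class names are assumptions, since the original snippet is not reproduced on this page:

    <?php
    require_once('vendor/autoload.php');

    // Hypothetical names; check the package source for the actual ones.
    use ir_evaluation_php\ir_evaluation;    // the measurement methods
    use ir_evaluation_php\example_datasets; // optional: the bundled datasets

    $evaluation = new ir_evaluation();
    $datasets = new example_datasets();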

Importing the example datasets is optional; they can be left out if a different dataset will be used.

Viewing the datasets

The example below is for viewing dataset 1.
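A sketch, continuing from the import step above and assuming a hypothetical accessor for the bundled datasets:

    $dataset1 = $datasets->dataset1; // hypothetical accessor for dataset 1

    foreach ($dataset1 as $row) {
        echo $row . "\n"; // the values in each row are pipe-separated
    }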

The other datasets can be viewed by changing the number (1 to 5) at the end of the term dataset1.

Usage of the methods

As mentioned before, each method uses some of the datasets, together or separately. Accordingly, methods that need the same datasets are explained together.

Precision based methods: AP@n, MAP, GMAP, IAP, R-Precision and F-Measure

These methods use dataset3 and dataset4 together. Before calling the methods, a variable named interactions is created as follows:
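A sketch of this step; the exact structure the library expects is an assumption here, since the original snippet is not shown on this page:

    // Hypothetical: pair the two datasets for each evaluation run.
    $interactions = array();
    $interactions[] = $datasets->dataset3; // e.g. relevance judgements per query
    $interactions[] = $datasets->dataset4; // e.g. ranked retrieval results per query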

After creating the interactions variable, the related methods are used as shown below:
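A sketch with hypothetical method names and placeholder cut-off points; the actual names are in the package source:

    $boundaries = array(5, 10, 20); // cut-off points (placeholder values)

    $apn   = $evaluation->average_precision($interactions, $boundaries);                // AP@n
    $map   = $evaluation->mean_average_precision($interactions, $boundaries);           // MAP
    $gmap  = $evaluation->geometric_mean_average_precision($interactions, $boundaries); // GMAP
    $iap   = $evaluation->interpolated_average_precision($interactions);                // IAP
    $rprec = $evaluation->r_precision($interactions);                                   // R-Precision
    $fm    = $evaluation->f_measure($interactions, $boundaries);                        // F-Measure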

Gain based methods: CG, NCG, DCG and NDCG

These methods use only dataset1. Before calling the methods, a variable named interactions is created as follows:
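The same caveat applies; dataset1 presumably carries the graded relevance values these measures need:

    $interactions = array();
    $interactions[] = $datasets->dataset1; // hypothetical structure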

After creating the interactions variable, the related methods are used as shown below:
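Again with hypothetical method names:

    $boundaries = array(5, 10, 20); // cut-off points (placeholder values)

    $cg   = $evaluation->cumulative_gain($interactions, $boundaries);                       // CG
    $ncg  = $evaluation->normalized_cumulative_gain($interactions, $boundaries);            // NCG
    $dcg  = $evaluation->discounted_cumulative_gain($interactions, $boundaries);            // DCG
    $ndcg = $evaluation->normalized_discounted_cumulative_gain($interactions, $boundaries); // NDCG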

Mean Reciprocal Rank (MRR)

This method uses only dataset2. Before calling the method, a variable named interactions is created as follows:
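Sketched under the same assumptions as above:

    $interactions = array();
    $interactions[] = $datasets->dataset2; // e.g. rank of the first relevant result per query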

After creating the interactions variable, the method is used as shown below:
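A sketch with a hypothetical method name:

    $mrr = $evaluation->mean_reciprocal_rank($interactions); // MRR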

Rank-Biased Precision (RBP)

This method uses only dataset4. Before calling the method, a variable named interactions is created as follows:
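Sketched under the same assumptions as above:

    $interactions = array();
    $interactions[] = $datasets->dataset4; // hypothetical structure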

After creating the interactions variable, the method is used as shown below:
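A sketch; the method name and the persistence levels are placeholders:

    $boundaries  = array(5, 10, 20);      // cut-off points (placeholder values)
    $persistence = array(0.5, 0.8, 0.95); // persistence levels (placeholder values)

    $rbp = $evaluation->rank_biased_precision($interactions, $boundaries, $persistence); // RBP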

Expected Reciprocal Rank (ERR)

This method uses only dataset1. Before calling the method, a variable named interactions is created as follows:
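Sketched under the same assumptions as for the gain based methods:

    $interactions = array();
    $interactions[] = $datasets->dataset1; // hypothetical structure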

After creating the interactions variable, the method is used as shown below:
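A sketch; the method name and the probability levels are placeholders:

    $boundaries  = array(5, 10, 20); // cut-off points (placeholder values)
    $probability = array(0.5, 0.8);  // probability levels (placeholder values)

    $err = $evaluation->expected_reciprocal_rank($interactions, $boundaries, $probability); // ERR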

BPref

This method uses only dataset5. Before calling the method, a variable named interactions is created as follows:
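Sketched under the same assumptions as above; dataset5 presumably marks judged relevant and judged non-relevant documents, which is what BPref compares:

    $interactions = array();
    $interactions[] = $datasets->dataset5; // hypothetical structure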

After creating the interactions variable, the method is used as shown below:
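A sketch with a hypothetical method name:

    $bpref = $evaluation->bpref($interactions, $boundaries); // BPref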

How the analysis result is shown

If the method has a boundaries parameter, the results are shown separately for every cut-off point. If the method has both boundaries and persistence (or probability) levels parameters, the results are shown separately for every cut-off point and every persistence (or probability) level. If the method has only the data parameter, the results are shown as a single value, except for the IAP method. Examples of these output shapes are sketched below:
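Illustrative sketches of the three output shapes; the numbers are placeholders, not real results:

    // Boundaries only: one value per cut-off point, e.g.
    array(5 => 0.42, 10 => 0.38, 20 => 0.35)

    // Boundaries plus persistence (or probability) levels: one value per
    // cut-off point and level, e.g.
    array(5 => array(0.5 => 0.40, 0.8 => 0.44), 10 => array(0.5 => 0.37, 0.8 => 0.41))

    // Data only: a single value, e.g.
    0.41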

License

This library is distributed under the LGPL 2.1 license. Please read LICENSE for information on the library's availability and distribution.

Citing this library

This section is going to be updated.

For further reading about the measurements

Average Precision @n, Mean Average Precision (MAP), R-Precision:

Ricardo Baeza-Yates and Berthier Ribeiro-Neto. 2011. Modern Information Retrieval: The concepts and technology behind search (2nd. ed.). Addison-Wesley Publishing Company, USA.

Geometric Mean Average Precision: https://trec.nist.gov/pubs/trec15/appendices/CE.MEASURES06.pdf

Eleven-Point Interpolated Average Precision (IAP):

Bruce Croft, Donald Metzler, and Trevor Strohman. 2009. Search Engines: Information Retrieval in Practice (1st. ed.). Addison-Wesley Publishing Company, USA.

F-Measure:

C. J. Van Rijsbergen. 1979. Information Retrieval (2nd. ed.). Butterworth-Heinemann, USA.

Cumulative Gain, Normalized Cumulative Gain, Discounted Cumulative Gain, Normalized Discounted Cumulative Gain:

Kalervo Järvelin and Jaana Kekäläinen. 2000. IR evaluation methods for retrieving highly relevant documents. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval (SIGIR ’00). Association for Computing Machinery, New York, NY, USA, 41–48. DOI:https://doi.org/10.1145/345508.345545

Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Trans. Inf. Syst. 20, 4 (October 2002), 422–446. DOI:https://doi.org/10.1145/582415.582418

Mean Reciprocal Rank:

Ellen Voorhees. 1999. The TREC-8 Question Answering Track Report. Proceedings of the 8th Text Retrieval Conference. 77-82.

Rank-Biased Precision (RBP):

Alistair Moffat and Justin Zobel. 2008. Rank-biased precision for measurement of retrieval effectiveness. ACM Trans. Inf. Syst. 27, 1, Article 2 (December 2008), 27 pages. DOI:https://doi.org/10.1145/1416950.1416952

Expected Reciprocal Rank:

Olivier Chapelle, Donald Metzler, Ya Zhang, and Pierre Grinspan. 2009. Expected reciprocal rank for graded relevance. In Proceedings of the 18th ACM conference on Information and knowledge management (CIKM ’09). Association for Computing Machinery, New York, NY, USA, 621–630. DOI:https://doi.org/10.1145/1645953.1646033

BPref:

Chris Buckley and Ellen M. Voorhees. 2004. Retrieval evaluation with incomplete information. In Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval (SIGIR ’04). Association for Computing Machinery, New York, NY, USA, 25–32. DOI:https://doi.org/10.1145/1008992.1009000


All versions of ir_evaluation_php with dependencies

Requires PHP version: >=5.6.0
