Download the PHP package nadar/crawler without Composer

On this page you can find all versions of the PHP package nadar/crawler. You can download/install these versions without Composer; any dependencies are resolved automatically.

FAQ

After the download, you have to add a single include, require_once('vendor/autoload.php');. After that, you can import the classes with use statements.

Example:
If you use only one package, a project is not needed. But if you use more than one package, it is not possible to import the classes with use statements without a project.
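A minimal sketch of the include and import described above, assuming the download was unpacked into the project root (the imported class name is just an example from this package):

```php
<?php

// Example import from this package; any class shipped with the
// downloaded libraries can be imported the same way.
use Nadar\Crawler\Crawler;

// Load the downloaded autoloader (created alongside the vendor/ folder).
require_once __DIR__ . '/vendor/autoload.php';
```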

In general, it is recommended to always use a project to download your libraries, since an application normally needs more than one library.
Some PHP packages are not free to download and are therefore hosted in private repositories. In this case, credentials are needed to access such packages. If a package comes from a private repository, please use the auth.json textarea to enter the credentials. You can look here for more information.

  • Some hosting environments are not accessible via a terminal or SSH, so Composer cannot be used there.
  • Using Composer can be complicated, especially for beginners.
  • Composer needs considerable resources, which are sometimes not available on a simple webspace.
  • If you are using private repositories, you don't need to share your credentials. You can set up everything on our site and then provide a simple download link to your team members.
  • Simplify your Composer build process: use our command line tool to download the vendor folder as a binary. This makes your build process faster, and you don't need to expose your credentials for private repositories.

Information about the package nadar/crawler

Website Crawler for PHP


A highly extensible, dependency-free crawler for HTML, PDFs, or any other type of document.

Why another page crawler? Yes, indeed, there are already very good crawlers around, so these were my goals:

Installation

Composer is required to install this library:
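The install command is not shown on this page; it would be the standard Composer require for this package (when using the download from this site instead, Composer is not needed):

```shell
composer require nadar/crawler
```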

In order to use the PDF Parser, the optional library smalot/pdfparser must be installed:
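The corresponding Composer command for the optional PDF parser dependency would be:

```shell
composer require smalot/pdfparser
```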

Usage

  1. First, we need to tell the crawler what should be done with the results of a crawler run:

Create your handlers; these are the classes that interact with the crawler in order to store your content/results somewhere. The afterRun() method runs whenever a URL has been crawled and receives the results:
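A handler sketch, assuming handlers implement the package's HandlerInterface with afterRun(), onSetup(), and onEnd() hooks (interface, class, and method names are taken from the package's documentation; verify them against the installed version):

```php
<?php

require_once __DIR__ . '/vendor/autoload.php';

use Nadar\Crawler\Crawler;
use Nadar\Crawler\Interfaces\HandlerInterface;
use Nadar\Crawler\Result;

class MyCrawlHandler implements HandlerInterface
{
    // Called for every crawled URL; $result holds the parsed content.
    public function afterRun(Result $result)
    {
        echo $result->title . ' | ' . $result->url->getNormalized() . PHP_EOL;
    }

    // Called once before the crawler starts.
    public function onSetup(Crawler $crawler)
    {
        // e.g. prepare a database table that will store the results
    }

    // Called once after the crawl has finished.
    public function onEnd(Crawler $crawler)
    {
        // e.g. flush buffered writes or close connections
    }
}
```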

  2. Then we attach the handler and set up all the required information for the crawler:
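A setup-and-run sketch under the assumption that the constructor takes a start URL, a storage, and a runner (the ArrayStorage name appears in the benchmark below; the parser, runner, and namespace paths are taken from the package's documentation and should be checked against the installed version):

```php
<?php

require_once __DIR__ . '/vendor/autoload.php';

use Nadar\Crawler\Crawler;
use Nadar\Crawler\Parsers\HtmlParser;
use Nadar\Crawler\Runners\LoopRunner;
use Nadar\Crawler\Storage\ArrayStorage;

// The storage keeps the queue/index state between requests,
// the runner controls how the queued requests are executed.
$crawler = new Crawler('https://example.com', new ArrayStorage(), new LoopRunner());
$crawler->addParser(new HtmlParser());
$crawler->addHandler(new MyCrawlHandler());
$crawler->setup();
$crawler->run();
```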

Attention: Keep in mind that when you enable the PDF parser and have multiple concurrent requests, memory usage can increase drastically (especially with large PDFs)! It is therefore recommended to lower the concurrency value when enabling the PDF parser.

Benchmark

Of course, these benchmarks may vary depending on internet connection, bandwidth, and servers, but all tests were made under the same circumstances. The memory peak varies strongly when using the PDF parser, so we test only with the HTML parser:

Index Size   Concurrent Requests   Memory Peak   Time   Storage
308          30                    6 MB          19s    ArrayStorage
308          30                    6 MB          20s    FileStorage

Still looking for a good website to use for benchmarking. See the benchmark.php file for the test setup.

Developer Information

For a better understanding, here is an explanation of how the classes are encapsulated and what they are used for.

Lifecycle

Crawler -> Job -> (ItemQueue -> Storage) -> RequestResponse -> Parser -> ParserResult -> Result


All versions of crawler with dependencies

PHP Build Version
Package Version
Requires ext-curl Version *
Composer command for our command line client (download client): this client runs in every environment, and no specific PHP version is needed. The first 20 API calls are free. Standard composer command:

The package nadar/crawler contains the following files
