Download the PHP package coooold/crawler without Composer

This page lists all versions of the PHP package coooold/crawler. You can download and install these versions without Composer; dependencies are resolved automatically.

FAQ

After the download, include the autoloader once with require_once('vendor/autoload.php');. After that, import the classes with use statements.
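A minimal bootstrap after unpacking the download might look like this (the PHPCrawler\PHPCrawler class name is illustrative; check the package's own namespace):

```php
<?php
// Include the Composer-style autoloader shipped with the download once.
require_once('vendor/autoload.php');

// Then import the classes you need with use statements, e.g.:
use PHPCrawler\PHPCrawler;  // assumed namespace for this package
```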

Example:
If you use only one package, a project is not needed. But if you use more than one package, it is not possible to import the classes with use statements without a project.

In general, it is recommended to always use a project to download your libraries, since an application normally needs more than one library.
Some PHP packages are not free to download and are therefore hosted in private repositories. In that case, credentials are needed to access them. Please use the auth.json textarea to enter credentials if a package comes from a private repository. You can look here for more information.

  • Some hosting environments are not accessible via a terminal or SSH, so Composer cannot be used there.
  • Using Composer can be complicated, especially for beginners.
  • Composer needs considerable resources, which are sometimes not available on a simple webspace.
  • If you are using private repositories, you don't need to share your credentials. You can set up everything on our site and then provide a simple download link to your team members.
  • Simplify your Composer build process: use our command line tool to download the vendor folder as a binary. This makes your build process faster, and you don't need to expose your credentials for private repositories.

Information about the package crawler

PHPCrawler

A powerful, popular, production-ready crawling/scraping package for PHP. Happy hacking :)

Features:

Thanks to

node-crawler is really a great crawler, and PHPCrawler tries its best to stay close to its API.

Documentation in Chinese (中文说明)

Table of Contents

Get started

Install
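If you do use Composer, the standard install command for this package is:

```
composer require coooold/crawler
```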

Basic usage
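The original code sample did not survive extraction. The following sketch assumes an API that mirrors node-crawler, as the "Thanks to" section suggests; the class, event, and option names (PHPCrawler, Event::RESPONSE, maxConnections, getBody()) are assumptions except where the surrounding text confirms them:

```php
<?php
require_once('vendor/autoload.php');

use PHPCrawler\PHPCrawler;       // assumed class name
use PHPCrawler\Events\Event;     // assumed namespace; Event::RESPONSE is from the Events section

$crawler = new PHPCrawler([
    'maxConnections' => 10,      // assumed option: number of concurrent requests
]);

// Event::RESPONSE is triggered when a request is done.
$crawler->on(Event::RESPONSE, function ($res) {
    echo $res->getBody() . "\n"; // assumed accessor for the response body
});

$crawler->add('http://www.example.com/');
$crawler->run();
```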

Slow down

Use the rateLimit option to slow down when you are visiting web sites.
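For example (the rateLimit name is from the text above; the millisecond unit is an assumption borrowed from node-crawler, where rateLimit is the minimum gap between requests):

```php
$crawler = new PHPCrawler([
    'rateLimit' => 1000,  // assumed: wait at least 1000 ms between requests
]);
```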

Custom parameters

Sometimes you have to access variables from a previous request/response session. In that case, pass custom parameters the same way as options:

then access them in the callback via $res->task['parameter1'], $res->task['parameter2'], ...
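A sketch of this pattern ($res->task is from the text above; the add() and on() method names are assumptions mirroring node-crawler):

```php
// Pass custom parameters alongside the task options:
$crawler->add([
    'uri'        => 'http://www.example.com/page1',
    'parameter1' => 'value1',
    'parameter2' => 'value2',
]);

// ...and read them back in the response callback:
$crawler->on(Event::RESPONSE, function ($res) {
    echo $res->task['parameter1'];  // "value1"
});
```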

Raw body

If you are downloading files such as images, PDFs, or Word documents, you have to save the raw response body, which means the crawler shouldn't convert it to a string. To make that happen, set encoding to null.
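A sketch of a binary download (the encoding option is from the text above; getBody() is an assumed accessor):

```php
$crawler->add([
    'uri'      => 'http://www.example.com/logo.png',
    'encoding' => null,  // keep the raw bytes; don't convert to string
]);

$crawler->on(Event::RESPONSE, function ($res) {
    // Write the untouched response body to disk.
    file_put_contents('logo.png', $res->getBody());
});
```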

Events

Event::RESPONSE

Triggered when a request is done.

Event::DRAIN

Triggered when the queue is empty.
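The two events above can be wired up like this (Event::RESPONSE and Event::DRAIN are from the text; the on() registration method is an assumption mirroring node-crawler):

```php
// RESPONSE fires once per completed request.
$crawler->on(Event::RESPONSE, function ($res) {
    echo "fetched: " . $res->task['uri'] . "\n";
});

// DRAIN fires when the queue is empty, i.e. all tasks are done.
$crawler->on(Event::DRAIN, function () {
    echo "all done\n";
});
```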

Advanced

Encoding

The HTTP body will be converted to UTF-8 from the source encoding.
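Presumably the same encoding option used for raw bodies names the source charset; the specific value below is illustrative:

```php
$crawler->add([
    'uri'      => 'http://www.example.com/gbk-page.html',
    'encoding' => 'gbk',  // assumed: declare the source charset; body arrives as UTF-8
]);
```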

Logger

A PSR-3 logger instance can be used.

See Monolog Reference.
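For instance, with Monolog (the Monolog calls below are its real API; the 'logger' option name is an assumption):

```php
use Monolog\Logger;
use Monolog\Handler\StreamHandler;

// Build a PSR-3 logger that writes to stdout.
$logger = new Logger('crawler');
$logger->pushHandler(new StreamHandler('php://stdout', Logger::INFO));

// Assumed: the crawler accepts any PSR-3 logger via a 'logger' option.
$crawler = new PHPCrawler([
    'logger' => $logger,
]);
```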

Coroutine

PHPCrawler is based on the Amp non-blocking concurrency framework, so it can work with coroutines, ensuring excellent performance. Amp async packages should be used in callbacks; that is to say, neither PHP's native MySQL client nor its native file I/O is recommended. The yield keyword, like await in ES6, introduces non-blocking I/O.
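A sketch of a coroutine callback, assuming callbacks may be generators and that amphp/file (not listed in this package's dependencies) is installed for async file I/O:

```php
use Amp\File;

$crawler->on(Event::RESPONSE, function ($res) {
    // yield suspends the coroutine until the Amp promise resolves,
    // without blocking the event loop.
    yield File\put('page.html', $res->getBody());
});
```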

Work with DomParser

Symfony\Component\DomCrawler is a handy tool for parsing crawled pages. Response::dom is injected with an instance of Symfony\Component\DomCrawler\Crawler.

See DomCrawler Reference.
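For example, extracting all link targets ($res->dom is from the text above; filter(), each(), and attr() are the real DomCrawler API, and CSS selectors work because symfony/css-selector is among the dependencies):

```php
$crawler->on(Event::RESPONSE, function ($res) {
    // $res->dom is a Symfony\Component\DomCrawler\Crawler instance.
    $res->dom->filter('a')->each(function ($node) {
        echo $node->attr('href') . "\n";
    });
});
```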

Other

API reference

Configuration


All versions of crawler with dependencies

The package requires:

  • php: >=7.0
  • ext-json: *
  • amphp/amp: ^2.4
  • amphp/artax: ^3.0
  • symfony/dom-crawler: ^5.0
  • psr/log: ^1.0.1
  • symfony/css-selector: ^5.0

Composer command for our command line client (download client): the client runs in every environment, so you don't need a specific PHP version. The first 20 API calls are free.

The package coooold/crawler contains the following files
