Libraries tagged with large data

behzaddev/searchable

0 Favers
0 Downloads

Search Package Overview

The Search package is a powerful tool designed to facilitate efficient and effective search operations within various datasets or databases. It provides a set of functions and classes that enable users to perform complex search queries, filter results, and retrieve relevant data with ease. The package is highly customizable, allowing users to define their own search criteria, implement sorting mechanisms, and handle large volumes of data seamlessly.

Key Features:
- Customizable Search Queries: Users can create tailored search queries using various operators and conditions, making it possible to perform both simple and advanced searches.
- Sorting and Filtering: The package includes built-in methods to sort and filter search results, enabling users to organize data based on specific parameters such as date, relevance, or custom fields.
- Scalability: Designed to handle large datasets, the Search package is optimized for performance, ensuring quick response times even with millions of records.
- Integration: The package is compatible with a variety of databases and data sources, making it a versatile solution for different types of projects.
- User-Friendly Interface: It offers a straightforward API that is easy to use, even for those who are not experts in programming, allowing a broader audience to leverage advanced search capabilities.

Use Cases:
- Data Analysis: Quickly find and retrieve specific information from large datasets for analysis.
- Content Management Systems: Implement efficient search functionality in content-heavy websites or applications.
- E-commerce: Enhance product search features in online stores, improving the user experience by providing relevant results swiftly.
- Knowledge Bases: Help users find relevant articles or documentation based on keyword searches.

Overall, the Search package is an essential tool for anyone needing to implement or enhance search functionality in their applications, providing both power and flexibility in managing and retrieving data.
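
To make the feature list concrete, here is a plain-PHP sketch of criteria-based filtering and sorting over an in-memory dataset. It illustrates the concepts only; the names and structure are hypothetical, not the searchable package's actual API.

```php
<?php
// Generic illustration of criteria-based filtering and sorting.
// Hypothetical example; not the searchable package's API.

$records = [
    ['title' => 'Alpha', 'views' => 120, 'date' => '2024-01-03'],
    ['title' => 'Beta',  'views' => 45,  'date' => '2024-02-11'],
    ['title' => 'Gamma', 'views' => 300, 'date' => '2023-12-20'],
];

// Filter: keep records matching a condition (views >= 100).
$results = array_filter($records, fn(array $r): bool => $r['views'] >= 100);

// Sort: order the remaining records by date, newest first.
usort($results, fn(array $a, array $b): int => strcmp($b['date'], $a['date']));

print_r($results);
```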

Go to Download


mr4-lc/php-json-exporter

0 Favers
28 Downloads

Export large datasets to a JSON file without memory exhaustion
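
The usual way to achieve this is to stream rows through a generator and write them out one JSON-encoded element at a time, so the full dataset never sits in memory. A minimal sketch of that general technique, not this package's actual interface:

```php
<?php
// Minimal sketch of memory-safe JSON export: rows are produced lazily
// and written one element at a time. Not mr4-lc/php-json-exporter's API.

function rows(): Generator {
    // Stand-in for a database cursor or any row-by-row source.
    for ($i = 1; $i <= 1_000_000; $i++) {
        yield ['id' => $i, 'value' => "row-$i"];
    }
}

$out = fopen('export.json', 'w');
fwrite($out, '[');
$first = true;
foreach (rows() as $row) {
    if (!$first) {
        fwrite($out, ',');
    }
    fwrite($out, json_encode($row));
    $first = false;
}
fwrite($out, ']');
fclose($out);
```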

Go to Download


jhhb01/silverstripe_ajax_dropdown

0 Favers
10 Downloads

Use Ajax in Dropdowns for large datasets

Go to Download


iliain/silverstripe-queuedexport

0 Favers
249 Downloads

Provides a GridField button to queue a CSV export, good for handling large datasets

Go to Download


epocsquadron/gather-content-streaming-client

0 Favers
742 Downloads

Get large datasets from GatherContent API without memory overload.
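
The underlying idea is to consume the HTTP response in chunks rather than buffering the whole body. A sketch of that general pattern using PHP's cURL write callback, with a hypothetical endpoint URL; this is not the client's actual API:

```php
<?php
// Stream a large HTTP response to disk chunk by chunk so memory stays flat.
// Hypothetical URL and filenames; not this client's API.

$ch   = curl_init('https://example.com/large-dataset.json');
$file = fopen('dataset.json', 'w');

curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, string $chunk) use ($file): int {
    // Each chunk is written as it arrives; returning the byte count
    // tells cURL the chunk was consumed successfully.
    return (int) fwrite($file, $chunk);
});

curl_exec($ch);
curl_close($ch);
fclose($file);
```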

Go to Download


andrewandante/silverstripe-async-publisher

2 Favers
189 Downloads

An asynchronous publishing hook for large datasets.

Go to Download


o-p/json-logic

1 Favers
32 Downloads

Make it easier to process large amounts of data with JsonLogic
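
JsonLogic rules are plain data structures, which makes them easy to serialize and apply record by record across a large dataset. The toy evaluator below exists only to show the rule shape; it is not o-p/json-logic's API:

```php
<?php
// Toy evaluator for two JsonLogic operators, to show the rule shape.
// Illustrative only; not o-p/json-logic's entry point.

function evaluate(mixed $rule, array $data): mixed {
    if (!is_array($rule)) {
        return $rule; // literals evaluate to themselves
    }
    $op   = array_key_first($rule);
    $args = array_map(fn($a) => evaluate($a, $data), (array) $rule[$op]);
    return match ($op) {
        'var' => $data[$args[0]],
        '>'   => $args[0] > $args[1],
        'and' => $args[0] && $args[1],
    };
}

$rule = ['and' => [['>' => [['var' => 'qty'], 10]], ['>' => [['var' => 'price'], 5]]]];
var_dump(evaluate($rule, ['qty' => 15, 'price' => 9])); // bool(true)
```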

Go to Download


woonoz/slimdump

0 Favers
162 Downloads

slimdump is a little tool to help you create dumps of large MySQL databases. Patched version.

Go to Download


tubber/model-indexer

0 Favers
117 Downloads

This package makes it simple to index large amounts of data to Solr.
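
The standard pattern here is to send documents to Solr's JSON update endpoint in fixed-size chunks rather than one request per document. A generic sketch with an assumed local Solr core name; this is not tubber/model-indexer's actual interface:

```php
<?php
// Generic batched Solr indexing: fixed-size chunks keep request payloads
// bounded. Hypothetical core name and host; not this package's API.

function indexBatch(array $docs, string $core = 'mycore'): void {
    $ch = curl_init("http://localhost:8983/solr/$core/update?commit=true");
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_POSTFIELDS     => json_encode($docs),
        CURLOPT_RETURNTRANSFER => true,
    ]);
    curl_exec($ch);
    curl_close($ch);
}

$all = []; // imagine a large set of ['id' => ..., 'title_s' => ...] documents
foreach (array_chunk($all, 500) as $batch) {
    indexBatch($batch); // 500 documents per request
}
```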

Go to Download


coderofsalvation/sql-table-diff

0 Favers
13 Downloads

Find differences between two similar SQL tables. Compare large collections of data quickly and elegantly.
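
One common way to express such a diff in SQL itself is a LEFT JOIN that keeps rows present in one table but missing from the other. A generic PDO sketch with hypothetical connection details and table names; not this package's output format:

```php
<?php
// Rows that exist in table_a but are missing from table_b.
// Connection details and table names are placeholders.

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$sql = <<<SQL
SELECT a.*
FROM table_a a
LEFT JOIN table_b b ON a.id = b.id
WHERE b.id IS NULL
SQL;

foreach ($pdo->query($sql) as $missingRow) {
    print_r($missingRow); // rows present only in table_a
}
```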

Go to Download


perry-rylance/split-batch

0 Favers
7 Downloads

This library provides a means of doing large batch processing when Unix cron jobs aren't available, such as on low-tier shared hosting. This is useful when you need to process a large amount of data or carry out a time-consuming task without access to cron.
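
The cron-less batching idea usually looks like this: each web request processes one slice of the workload and records its progress, so a long task is spread over many short requests. A hypothetical sketch of that pattern, not split-batch's API:

```php
<?php
// One slice of work per request, with progress persisted between requests.
// Hypothetical state file and workload; not split-batch's API.

$stateFile = 'progress.txt';
$offset    = is_file($stateFile) ? (int) file_get_contents($stateFile) : 0;
$chunkSize = 100;

$items = range(1, 10_000); // stand-in for the real workload
$slice = array_slice($items, $offset, $chunkSize);

foreach ($slice as $item) {
    // ... do the expensive per-item work here ...
}

file_put_contents($stateFile, (string) ($offset + count($slice)));

if ($offset + count($slice) < count($items)) {
    // Re-trigger ourselves (a redirect, meta refresh, or JS poll also works).
    header('Refresh: 1'); // reload this script after one second
}
```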

Go to Download


tcgunel/xml-aligner

0 Favers
56 Downloads

Converts small or large XML files according to the data structure of a given array, with minimal memory consumption.
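
In PHP, the usual minimum-memory approach is XMLReader, which walks the file node by node instead of loading the whole tree. A generic sketch (the file and element names are assumptions, and the package's map-by-array feature is not reproduced here):

```php
<?php
// Stream-parse a large XML file: only one <item> element is materialized
// at a time. Generic technique; not tcgunel/xml-aligner's interface.

$reader = new XMLReader();
$reader->open('large.xml');

while ($reader->read()) {
    if ($reader->nodeType === XMLReader::ELEMENT && $reader->name === 'item') {
        $item = new SimpleXMLElement($reader->readOuterXml());
        echo (string) $item->title, "\n";
    }
}

$reader->close();
```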

Go to Download


cbeyersdorf/easybill

3 Favers
1 Downloads

The first version of the easybill REST API. [CHANGELOG](https://api.easybill.de/rest/v1/CHANGELOG.md)

## Authentication

You can choose between two available methods: `Basic Auth` or `Bearer Token`. In each HTTP request, one of the following HTTP headers is required:

```
# Basic Auth
Authorization: Basic base64_encode(<login>:<password>)

# Bearer Token
Authorization: Bearer <token>
```

## Limitations

### Request Limit

* PLUS: 10 requests per minute
* BUSINESS: 60 requests per minute

If the limit is exceeded, you will receive the HTTP error: `429 Too Many Requests`

### Result Limit

All result lists are limited to 100 by default. This limit can be increased by the query parameter `limit` to a maximum of 1000.

## Query filter

Many list resources can be filtered. In `/documents` you can filter e.g. by number with `/documents?number=111028654`. If you want to filter multiple numbers, you can either enter them separated by commas `/documents?number=111028654,222006895` or as an array `/documents?number[]=111028654&number[]=222006895`.

**Warning**: The maximum size of an HTTP request line in bytes is 4094. If this limit is exceeded, you will receive the HTTP error: `414 Request-URI Too Large`

### Escape commas in query

You can escape commas in a query (`name=Patrick\, Peter`) if you submit the header `X-Easybill-Escape: true` in your request.

## Property login_id

This is the login of your admin or employee account.

## Date and Date-Time format

Please use the timezone `Europe/Berlin`.

* **date** = *Y-m-d* = `2016-12-31`
* **date-time** = *Y-m-d H:i:s* = `2016-12-31 03:13:37`

Date or datetime can be `null` because the attributes were added later and the entry is older.
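
A request sketch based only on the description above: Basic Auth plus the array form of the number filter. Credentials and error handling are placeholders; consult the API documentation for specifics:

```php
<?php
// Fetch /documents filtered by multiple numbers, per the description above.
// Credentials are placeholders; error handling omitted for brevity.

$auth  = base64_encode('API-LOGIN:API-PASSWORD'); // your easybill credentials
$query = http_build_query(['number' => ['111028654', '222006895']]);
// produces number[0]=111028654&number[1]=222006895 (URL-encoded)

$ch = curl_init("https://api.easybill.de/rest/v1/documents?$query");
curl_setopt_array($ch, [
    CURLOPT_HTTPHEADER     => ["Authorization: Basic $auth"],
    CURLOPT_RETURNTRANSFER => true,
]);
$documents = json_decode(curl_exec($ch), true);
curl_close($ch);
```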

Go to Download


ianrothmann/laravel-model-convention

1 Favers
4413 Downloads

A model trait to override standard Laravel 5 model conventions for use with larger-scale databases
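
In practice, "overriding model conventions" usually means replacing Laravel's guessed table or key names with ones that fit an existing large-scale schema. A hypothetical trait illustrating the idea; it is not this package's actual trait:

```php
<?php
// Hypothetical illustration of overriding an Eloquent naming convention;
// not ianrothmann/laravel-model-convention's trait.

use Illuminate\Support\Str;

trait UsesLegacyTableNames
{
    // Replace Eloquent's convention-derived table name with a legacy
    // scheme, e.g. UserProfile -> TBL_USER_PROFILE.
    public function getTable()
    {
        return 'TBL_' . strtoupper(Str::snake(class_basename($this)));
    }
}
```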

Go to Download


vcian/laravel-data-bringin

12 Favers
465 Downloads

Dynamically import CSV files, with column mapping, into a Laravel application's MySQL database.
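
A chunked import is the usual shape of such a tool: read the CSV row by row, map columns, and insert in batches so memory use and query counts stay bounded. An illustrative Laravel sketch with hypothetical file, mapping, and table names; not this package's interface:

```php
<?php
// Chunked CSV import with column mapping, assuming a Laravel app context.
// File, mapping, and table names are hypothetical.

use Illuminate\Support\Facades\DB;

$handle  = fopen(storage_path('app/users.csv'), 'r');
$header  = fgetcsv($handle);            // e.g. ['name', 'email']
$mapping = ['name' => 0, 'email' => 1]; // CSV column index -> table column

$buffer = [];
while (($row = fgetcsv($handle)) !== false) {
    $buffer[] = [
        'name'  => $row[$mapping['name']],
        'email' => $row[$mapping['email']],
    ];
    if (count($buffer) === 500) {
        DB::table('users')->insert($buffer); // one query per 500 rows
        $buffer = [];
    }
}
if ($buffer !== []) {
    DB::table('users')->insert($buffer);
}
fclose($handle);
```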

Go to Download

