Libraries tagged with meta_data
jwohlfert23/laravel-seo
5351 Downloads
SEO tools to insert meta data in Laravel projects
josh-hornby/sauce
16 Downloads
Codeception extension to support automated testing via Sauce Labs with meta data
jobcerto/laravel-metable
523 Downloads
A package that makes it possible to add meta data to Eloquent models
hauerheinrich/hh-simple-job-posts
92 Downloads
Adds plugins for listing and showing job postings with meta-data and schema.org markup. Uses tt_address for job contacts and job locations.
fieldwork/socialmeta
35 Downloads
Rule-based social tags and meta data.
fhusquinet/laravel-model-seoable
9 Downloads
Easily add SEO meta data to your Eloquent models.
fabianblum/flysystem-cached-adapter
7 Downloads
Fork of an adapter decorator to enable meta-data caching.
drongotech/iplocationmanager
35 Downloads
Uses IP Stack to look up the request IP and save the IP meta data
cupracode/wp-activity-log
37 Downloads
A WordPress activity log designed to record meta data through custom field adapters such as ACF
composedcreative/expandedsearch
27 Downloads
Craft CMS search results with expanded meta data including the matched field, matched value and related values for certain field types
augmentedlogic/socialpreview
9 Downloads
Retrieve social media preview data (Open Graph, Twitter Card, meta data) from a web page
arraypress/edd-product-query
0 Downloads
This library extends Easy Digital Downloads (EDD) by providing an advanced product query class that supports complex filtering, sorting, and meta queries. It enhances the querying capabilities within WordPress environments by utilizing custom taxonomy and meta data operations, along with transient caching for optimized performance. Ideal for developers needing refined control over EDD product data retrieval.
piedweb/qwanturank
2877 Downloads
Harvests statistics and meta data from a URL or its source code (SEO-oriented).
riodevnet/elephscraper
1 Download
ElephScraper is a lightweight and PHP-native web scraping toolkit built using Guzzle and Symfony DomCrawler. It provides a clean and powerful interface to extract HTML content, metadata, and structured data from any website.
numeno/api-art-rec
2 Downloads
## Introduction

Use the Numeno Article Recommender API to receive a curated selection of articles from across the web. See below for the steps to creating a Feed, as well as an introduction to the top-level concepts making up the Article Recommender API.

## Steps to creating a Feed

1. Create a Feed - [`/feeds`](create-feed)
2. Create a number of Stream queries associated with the Feed - [`/feeds/:feedId/streams`](create-stream)
3. Pull from the Feed as the Feed refreshes - [`/feeds/:feedId/articles`](get-articles-in-feed)
4. Use those Article IDs to look up metadata for the Articles - [`/articles/:id`](get-article-by-id)
5. Visit the Article links and render them to your server DB or client app.

## Sources, Articles and Topics

A **Source** is a place where Articles come from, typically a website, a blog, or a knowledgebase endpoint. Sources can be queried for activity via the [`/sources`](get-sources) endpoint. Beyond the Sources Numeno regularly indexes, additional Sources can be associated with Stream queries, and Sources can be `allowlist`/`denylist`'d.

**Articles** are the documents produced by Sources, typically pages from a blogpost or website, articles from a news source, or posts from a social platform or company intranet. See the [`/articles`](search-articles) endpoint.

**Topics** - Numeno has millions of Topics that it associates with Articles when they are sourced. Topics are used in Stream queries, which themselves are composed to create Feeds. Get topics via the [`/topics`](get-topics) endpoint.

## Feeds

**A Feed is a collection of Streams.** Feeds are configured to refresh on a regular schedule. No new Articles are published to a Feed except when it's refreshed. Feeds can be refreshed manually if the API Key Scopes allow. You can ask for Articles chronologically or by decreasing score. You can also limit Articles to a date range, meaning that you can produce Feeds from historical content. Interact with Feeds via the [`/feeds`](create-feed) endpoint.
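The Feed-creation steps above can be sketched as a sequence of HTTP calls. This is a minimal illustration only: the endpoint paths are taken from the README, but the base URL, the placeholder IDs, and the helper function are assumptions, not part of the documented client.

```python
# Sketch of the Feed-creation flow. Endpoint paths come from the README;
# BASE_URL, the IDs, and the endpoint() helper are hypothetical.

BASE_URL = "https://api.numeno.example"  # assumed base URL for illustration

def endpoint(path: str, **params: str) -> str:
    """Expand ':param' placeholders in a path like '/feeds/:feedId/streams'."""
    for name, value in params.items():
        path = path.replace(f":{name}", value)
    return BASE_URL + path

# Step 1: create a Feed             -> POST /feeds
# Step 2: add Stream queries        -> POST /feeds/:feedId/streams
# Step 3: pull refreshed Articles   -> GET  /feeds/:feedId/articles
# Step 4: look up Article metadata  -> GET  /articles/:id
flow = [
    ("POST", endpoint("/feeds")),
    ("POST", endpoint("/feeds/:feedId/streams", feedId="feed-123")),
    ("GET", endpoint("/feeds/:feedId/articles", feedId="feed-123")),
    ("GET", endpoint("/articles/:id", id="article-456")),
]

for method, url in flow:
    print(method, url)
```

Step 5 (rendering the Article links) happens in your own server or client code and is omitted here.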
## Streams

Think of a **Stream** as a search query with a "volume control knob". It's a collection of Topics that you're interested in and a collection of Sources you'd explicitly like to include or exclude. Streams are associated with a Feed, and a collection of Streams produces the sequence of Articles that appear when a Feed is refreshed. The "volume control knob" on a Stream is a way to decide how many of the search results from the Stream query are included in the Feed. Our searches are "soft", and with such a rich `Article x Topic` space to draw on, the "volume control" allows you to put a cutoff on what you'd like included. Streams are a nested resource of `/feeds` - get started by exploring [`/feeds/:feedId/streams`](create-stream).
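A Stream query can be pictured as a small structured payload combining the three concepts above: Topics, Source include/exclude lists, and the "volume control knob" cutoff. The field names below are assumptions for illustration, not the documented request schema.

```python
# Hypothetical Stream payload illustrating the concepts from the README.
# Field names ("topics", "sources", "volume") are assumed, not documented.
stream_query = {
    "topics": ["seo", "web-scraping"],          # Topics the Stream searches for
    "sources": {
        "allowlist": ["https://blog.example.com"],  # Sources to force-include
        "denylist": [],                             # Sources to exclude
    },
    # The "volume control knob": a cutoff on how many soft-match
    # results from this Stream make it into the Feed.
    "volume": 0.7,
}

print(sorted(stream_query))
```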