
scrapedia / scrapy-pipelines

License: GPL-3.0
A collection of pipelines for Scrapy

Projects that are alternatives of or similar to scrapy-pipelines

WebSocketPipe
System.IO.Pipelines API adapter for System.Net.WebSockets
Stars: ✭ 17 (+6.25%)
Mutual labels:  pipelines
GPlayCrawler
No description or website provided.
Stars: ✭ 47 (+193.75%)
Mutual labels:  scrapy
julia-workshop
"Integrating Julia in real-world, distributed pipelines" for JuliaCon 2017
Stars: ✭ 39 (+143.75%)
Mutual labels:  pipelines
painless-continuous-delivery
A cookiecutter for projects with continuous delivery baked in.
Stars: ✭ 46 (+187.5%)
Mutual labels:  pipelines
scrapy facebooker
Collection of scrapy spiders which can scrape posts, images, and so on from public Facebook Pages.
Stars: ✭ 22 (+37.5%)
Mutual labels:  scrapy
dspatch
The Refreshingly Simple Cross-Platform C++ Dataflow / Pipelining / Stream Processing / Reactive Programming Framework
Stars: ✭ 124 (+675%)
Mutual labels:  pipelines
restaurant-finder-featureReviews
Build a Flask web application to help users retrieve key restaurant information and feature-based reviews (generated by applying market-basket model – Apriori algorithm and NLP on user reviews).
Stars: ✭ 21 (+31.25%)
Mutual labels:  scrapy
Python Master Courses
Life is short, I use Python
Stars: ✭ 61 (+281.25%)
Mutual labels:  scrapy
scrapy-admin
A django admin site for scrapy
Stars: ✭ 44 (+175%)
Mutual labels:  scrapy
pythonSpider
🕷️ Some Python spiders using BeautifulSoup or scrapy
Stars: ✭ 28 (+75%)
Mutual labels:  scrapy
ImageGrabber
A Scrapy demo: download all images from a site
Stars: ✭ 33 (+106.25%)
Mutual labels:  scrapy
hk0weather
Web scraper project to collect the useful Hong Kong weather data from HKO website
Stars: ✭ 49 (+206.25%)
Mutual labels:  scrapy
XMQ-BackUp
XMQ (Xiaomiquan) backup: circles/topics/images/files.
Stars: ✭ 22 (+37.5%)
Mutual labels:  scrapy
codeflare
Simplifying the definition and execution, scaling and deployment of pipelines on the cloud.
Stars: ✭ 163 (+918.75%)
Mutual labels:  pipelines
allitebooks.com
Download all the ebooks with indexed csv of "allitebooks.com"
Stars: ✭ 24 (+50%)
Mutual labels:  scrapy
tibanna
Tibanna helps you run your genomic pipelines on Amazon cloud (AWS). It is used by the 4DN DCIC (4D Nucleome Data Coordination and Integration Center) to process data. Tibanna supports CWL/WDL (w/ docker), Snakemake (w/ conda) and custom Docker/shell command.
Stars: ✭ 61 (+281.25%)
Mutual labels:  pipelines
scrapy xiuren
Crawlers for the Xiuren site and 55156
Stars: ✭ 43 (+168.75%)
Mutual labels:  scrapy
dannyAVgleDownloader
Downloader for the well-known website avgle
Stars: ✭ 27 (+68.75%)
Mutual labels:  scrapy
SpiderManager
Crawler management platform
Stars: ✭ 27 (+68.75%)
Mutual labels:  scrapy
scrapy-zyte-smartproxy
Zyte Smart Proxy Manager (formerly Crawlera) middleware for Scrapy
Stars: ✭ 317 (+1881.25%)
Mutual labels:  scrapy


Scrapy-Pipelines

Overview

Since Scrapy doesn't provide enough pipeline examples for different backends and databases, this repository provides several pipelines that demonstrate recommended usage, including:

  • MongoDB
  • Redis (todo)
  • InfluxDB (todo)
  • LevelDB (todo)
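As a sketch of how one of these pipelines would be wired in, a Scrapy project typically enables pipelines through the ITEM_PIPELINES setting. The class path and MONGO_* setting names below are assumptions for illustration; consult this package's documentation for the actual values.

```python
# settings.py (sketch): enabling a MongoDB pipeline in a Scrapy project.
# The class path and MONGO_* setting names are assumptions, not this
# repository's confirmed API; check the package documentation.
ITEM_PIPELINES = {
    "scrapy_pipelines.mongodb.MongoPipeline": 300,  # hypothetical path
}
MONGO_URI = "mongodb://localhost:27017"
MONGO_DATABASE = "scraped_items"
```

The integer value (300) is the standard Scrapy pipeline ordering: lower numbers run earlier in the item-processing chain.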

These pipelines also provide multiple ways to save or update items, and they return the IDs created by the backends.
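The "return the id created by the backend" idea can be sketched as a minimal pipeline. The class name and constructor below are illustrative, not this repository's actual API; the collection only needs a pymongo-style insert_one() method.

```python
# Minimal sketch of a pipeline that saves an item and hands back the id
# generated by the storage backend.  Names are illustrative; `collection`
# can be any object exposing a pymongo-style insert_one() method.
class MongoSavePipeline:
    def __init__(self, collection):
        self.collection = collection

    def process_item(self, item, spider):
        # Insert a plain dict copy of the item into the backend.
        result = self.collection.insert_one(dict(item))
        # Attach the backend-generated id so downstream code can use it.
        item["_id"] = result.inserted_id
        return item
```

With a real pymongo collection, inserted_id would be a bson ObjectId; a test double can return any placeholder value.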

Requirements

  • Python 3.6+
  • Works on Linux, Windows, and macOS

Installation

The quick way:

pip install scrapy-pipelines

For more details, see the installation section of the documentation: https://scrapy-pipelines.readthedocs.io/en/latest/intro/installation.html

Documentation

Documentation is available online at https://scrapy-pipelines.readthedocs.io/en/latest/ and in the docs directory.

Community (blog, Twitter, mailing list, IRC)

This section is kept the same as Scrapy's, so that any improvements benefit Scrapy as well.

See https://scrapy.org/community/

Contributing

This section is kept the same as Scrapy's, to make it easier to merge this repository back into Scrapy.

See https://doc.scrapy.org/en/master/contributing.html

Code of Conduct

Please note that this project is released with a Contributor Code of Conduct (see https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md).

By participating in this project you agree to abide by its terms. Please report unacceptable behavior to [email protected].

Companies using Scrapy

This section is kept the same as Scrapy's, so that any improvements benefit Scrapy as well.

See https://scrapy.org/companies/

Commercial Support

This section is kept the same as Scrapy's, so that any improvements benefit Scrapy as well.

See https://scrapy.org/support/

TODO

  • [X] Add index creation in open_spider()
  • [X] Add an item_completed method
  • [X] Add signals for returning the MongoDB document id
  • [ ] Add MongoDB document update
  • [ ] Add Percona Server for MongoDB docker support
  • [ ] Add Redis support
  • [ ] Add InfluxDB support
  • [ ] Add LevelDB support