
Insutanto / scrapy-distributed

Licence: other
A series of distributed components for Scrapy, including RabbitMQ-based, Kafka-based, and RedisBloom-based components.

Programming Languages

Python

Projects that are alternatives to, or similar to, scrapy-distributed

Easy Scraping Tutorial
Simple but useful Python web scraping tutorial code.
Stars: ✭ 583 (+1434.21%)
Mutual labels:  scraping, crawling, scrapy
scrapy facebooker
Collection of scrapy spiders which can scrape posts, images, and so on from public Facebook Pages.
Stars: ✭ 22 (-42.11%)
Mutual labels:  spider, scraping, scrapy
ARGUS
ARGUS is an easy-to-use web scraping tool. The program is based on the Scrapy Python framework and is able to crawl a broad range of different websites. On the websites, ARGUS is able to perform tasks like scraping texts or collecting hyperlinks between websites. See: https://link.springer.com/article/10.1007/s11192-020-03726-9
Stars: ✭ 68 (+78.95%)
Mutual labels:  scraping, crawling, scrapy
scrapy-fieldstats
A Scrapy extension to log items coverage when the spider shuts down
Stars: ✭ 17 (-55.26%)
Mutual labels:  scraping, crawling, scrapy
wget-lua
Wget-AT is a modern Wget with Lua hooks, Zstandard (+dictionary) WARC compression and URL-agnostic deduplication.
Stars: ✭ 52 (+36.84%)
Mutual labels:  spider, scraping, crawling
Gopa
[WIP] GOPA, a spider written in Golang, for Elasticsearch. DEMO: http://index.elasticsearch.cn
Stars: ✭ 277 (+628.95%)
Mutual labels:  spider, scraping, crawling
Dotnetcrawler
DotnetCrawler is a straightforward, lightweight web crawling/scraping library for Entity Framework Core output, based on .NET Core. The library is designed like other strong crawler libraries such as WebMagic and Scrapy, but aims to be extensible for your custom requirements. Medium link: https://medium.com/@mehmetozkaya/creating-custom-web-crawler-with-dotnet-core-using-entity-framework-core-ec8d23f0ca7c
Stars: ✭ 100 (+163.16%)
Mutual labels:  scraping, crawling, scrapy
Colly
Elegant Scraper and Crawler Framework for Golang
Stars: ✭ 15,535 (+40781.58%)
Mutual labels:  spider, scraping, crawling
Django Dynamic Scraper
Creating Scrapy scrapers via the Django admin interface
Stars: ✭ 1,024 (+2594.74%)
Mutual labels:  spider, scraping, scrapy
Crawly
Crawly, a high-level web crawling & scraping framework for Elixir.
Stars: ✭ 440 (+1057.89%)
Mutual labels:  spider, scraping, crawling
Linkedin Profile Scraper
🕵️‍♂️ LinkedIn profile scraper returning structured profile data in JSON. Works in 2020.
Stars: ✭ 171 (+350%)
Mutual labels:  spider, scraping, crawling
double-agent
A test suite of common scraper detection techniques. See how detectable your scraper stack is.
Stars: ✭ 123 (+223.68%)
Mutual labels:  scraping, crawling, scrapy
diffbot-php-client
[Deprecated - Maintenance mode - use APIs directly please!] The official Diffbot client library
Stars: ✭ 53 (+39.47%)
Mutual labels:  scraping, crawling
small-spider-project
Everyday crawlers.
Stars: ✭ 14 (-63.16%)
Mutual labels:  spider, scrapy
zcrawl
An open source web crawling platform
Stars: ✭ 21 (-44.74%)
Mutual labels:  scraping, crawling
RARBG-scraper
With Selenium headless browsing and CAPTCHA solving
Stars: ✭ 38 (+0%)
Mutual labels:  scraping, scrapy
crawling-framework
Easily crawl news portals or blog sites using Storm Crawler.
Stars: ✭ 22 (-42.11%)
Mutual labels:  scraping, crawling
NScrapy
NScrapy is a cross-platform distributed spider framework on .NET Core which provides an easy way to write your own spider.
Stars: ✭ 88 (+131.58%)
Mutual labels:  spider, scrapy
proxycrawl-python
ProxyCrawl Python library for scraping and crawling
Stars: ✭ 51 (+34.21%)
Mutual labels:  scraping, crawling
InstaBot
Simple and friendly Bot for Instagram, using Selenium and Scrapy with Python.
Stars: ✭ 32 (-15.79%)
Mutual labels:  scraping, scrapy

Scrapy-Distributed

Scrapy-Distributed is a series of components that make it easy to develop a distributed crawler based on Scrapy.

Scrapy-Distributed currently supports a RabbitMQ Scheduler, a Kafka Scheduler, and a RedisBloom DupeFilter. You can use any of them in your Scrapy project very easily.

Features

  • RabbitMQ Scheduler
    • Supports custom declaration of a RabbitMQ queue, including passive, durable, exclusive, auto_delete, and all other options.
  • RabbitMQ Pipeline
    • Supports custom declaration of a RabbitMQ queue for the spider's items, with the same options (passive, durable, exclusive, auto_delete, and so on).
  • Kafka Scheduler
    • Supports custom declaration of a Kafka topic, including num_partitions and replication_factor; other options will be supported later.
  • RedisBloom DupeFilter
    • Supports customizing the key, errorRate, capacity, expansion, and auto-scaling (noScale) of a bloom filter. See the sketch after this list.
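
For intuition, the RedisBloom DupeFilter is essentially a bloom filter stored in Redis. The sketch below shows the same errorRate/capacity options applied through the redisbloom client directly; it is a standalone illustration, not code from this project, and the key name is hypothetical.

from redisbloom.client import Client

# Connect to the RedisBloom container used in the examples below.
rb = Client(host="localhost", port=6379)

# Create a bloom filter: 0.1% error rate, capacity of 1,000,000 items.
# "example:dupefilter" is a hypothetical key name; bfCreate raises an
# error if the key already exists.
rb.bfCreate("example:dupefilter", 0.001, 1_000_000)

# bfAdd returns 1 the first time a fingerprint is seen, 0 for duplicates.
print(rb.bfAdd("example:dupefilter", "https://example.com/page/1"))  # 1
print(rb.bfAdd("example:dupefilter", "https://example.com/page/1"))  # 0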

Requirements

  • Python >= 3.6
  • Scrapy >= 1.8.0
  • Pika >= 1.0.0
  • RedisBloom >= 0.2.0
  • Redis >= 3.0.1
  • kafka-python >= 1.4.7

TODO

  • RabbitMQ Item Pipeline (done)
  • Support Delayed Message in RabbitMQ Scheduler
  • Support Scheduler Serializer
  • Custom Interface for DupeFilter
  • RocketMQ Scheduler
  • RocketMQ Item Pipeline
  • SQLAlchemy Item Pipeline
  • MongoDB Item Pipeline
  • Kafka Scheduler (done)
  • Kafka Item Pipeline

Usage

Step 0:

pip install scrapy-distributed

OR

git clone https://github.com/Insutanto/scrapy-distributed.git && cd scrapy-distributed && python setup.py install

There is a simple demo in examples/simple_example. Here is the quickest way to try Scrapy-Distributed.

Examples of RabbitMQ

# pull and run a RabbitMQ container.
docker run -d --name rabbitmq -p 0.0.0.0:15672:15672 -p 0.0.0.0:5672:5672 rabbitmq:3
# enable rabbitmq_management
docker exec -it <rabbitmq-container-id> /bin/bash
cd /etc/rabbitmq/
rabbitmq-plugins enable rabbitmq_management

# pull and run a RedisBloom container.
docker run -d --name redis-redisbloom -p 6379:6379 redislabs/rebloom:latest

cd examples/rabbitmq_example
python run_simple_example.py
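
If the spider is running correctly, its queues should appear under the Queues tab of the RabbitMQ management UI at http://localhost:15672 (default credentials: guest/guest).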

Examples of Kafka

# make sure you have a Kafka broker running on localhost:9092
# pull and run a RedisBloom container.
docker run -d --name redis-redisbloom -p 6379:6379 redislabs/rebloom:latest

cd examples/kafka_example
python run_simple_example.py

RabbitMQ Support

If you don't have the required environment for tests, start the RabbitMQ and RedisBloom containers as shown in the RabbitMQ example above.

Step 1:

By changing just SCHEDULER and DUPEFILTER_CLASS and adding a few settings, you get a distributed crawler in a moment.

SCHEDULER = "scrapy_distributed.schedulers.DistributedScheduler"
SCHEDULER_QUEUE_CLASS = "scrapy_distributed.queues.amqp.RabbitQueue"
RABBITMQ_CONNECTION_PARAMETERS = "amqp://guest:guest@localhost:5672/example/?heartbeat=0"
DUPEFILTER_CLASS = "scrapy_distributed.dupefilters.redis_bloom.RedisBloomDupeFilter"
BLOOM_DUPEFILTER_REDIS_URL = "redis://:@localhost:6379/0"
BLOOM_DUPEFILTER_REDIS_HOST = "localhost"
BLOOM_DUPEFILTER_REDIS_PORT = 6379
REDIS_BLOOM_PARAMS = {
    "redis_cls": "redisbloom.client.Client"
}
BLOOM_DUPEFILTER_ERROR_RATE = 0.001
BLOOM_DUPEFILTER_CAPACITY = 1_000_000

# disable the RedirectMiddleware, because the RabbitMiddleware can handle redirected requests.
DOWNLOADER_MIDDLEWARES = {
    ...
    "scrapy.downloadermiddlewares.redirect.RedirectMiddleware": None,
    "scrapy_distributed.middlewares.amqp.RabbitMiddleware": 542
}

# add RabbitPipeline; it pushes your items to a RabbitMQ queue.
ITEM_PIPELINES = {
    ...
    "scrapy_distributed.pipelines.amqp.RabbitPipeline": 301,
}
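
To check that requests and items are actually flowing, you can peek at a queue with pika directly. This is a minimal sketch, not part of the project; the queue name is hypothetical, so substitute whatever queue your spider declares.

import pika

# Same broker and vhost as RABBITMQ_CONNECTION_PARAMETERS above.
connection = pika.BlockingConnection(
    pika.URLParameters("amqp://guest:guest@localhost:5672/example")
)
channel = connection.channel()

# passive=True only inspects the queue and raises if it does not exist.
# "example.com" is a hypothetical queue name.
frame = channel.queue_declare(queue="example.com", passive=True)
print(f"{frame.method.message_count} messages waiting")
connection.close()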


Step 2:

scrapy crawl <your_spider>
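
Note that the spider itself stays a plain Scrapy spider; the distribution comes entirely from the settings above. A minimal sketch (the spider name, seed URL, and parse logic are placeholders):

import scrapy

class SimpleSpider(scrapy.Spider):
    name = "simple"  # run with: scrapy crawl simple
    start_urls = ["http://example.com"]  # placeholder seed URL

    def parse(self, response):
        # Follow every link; the distributed scheduler and dupefilter
        # decide which requests are queued and which are duplicates.
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse)
        # Items are pushed to RabbitMQ by the RabbitPipeline configured above.
        yield {"url": response.url, "title": response.css("title::text").get()}

Because scheduling state lives in the message queue and deduplication in RedisBloom, you can run the same crawl command on several machines and the processes will share one crawl.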

Kafka Support

Step 1:

SCHEDULER = "scrapy_distributed.schedulers.DistributedScheduler"
SCHEDULER_QUEUE_CLASS = "scrapy_distributed.queues.kafka.KafkaQueue"
KAFKA_CONNECTION_PARAMETERS = "localhost:9092"
DUPEFILTER_CLASS = "scrapy_distributed.dupefilters.redis_bloom.RedisBloomDupeFilter"
BLOOM_DUPEFILTER_REDIS_URL = "redis://:@localhost:6379/0"
BLOOM_DUPEFILTER_REDIS_HOST = "localhost"
BLOOM_DUPEFILTER_REDIS_PORT = 6379
REDIS_BLOOM_PARAMS = {
    "redis_cls": "redisbloom.client.Client"
}
BLOOM_DUPEFILTER_ERROR_RATE = 0.001
BLOOM_DUPEFILTER_CAPACITY = 1_000_000

DOWNLOADER_MIDDLEWARES = {
    ...
   "scrapy_distributed.middlewares.kafka.KafkaMiddleware": 542
}

Step 2:

scrapy crawl <your_spider>
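
As with RabbitMQ, you can verify that requests are flowing by reading the topic with kafka-python. This is a minimal sketch; the topic name is hypothetical, so substitute the topic your spider uses.

from kafka import KafkaConsumer

# "example.com" is a hypothetical topic name.
consumer = KafkaConsumer(
    "example.com",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating after 5s of silence
)
for message in consumer:
    print(message.offset, message.value)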

Reference Projects

scrapy-rabbitmq-link

scrapy-redis
