
hellock / Icrawler

License: MIT
A multi-threaded crawler framework with many built-in image crawlers.

Programming Languages

Python: 139,335 projects (#7 most used programming language)

Projects that are alternatives of or similar to Icrawler

Crawlab Lite
Lite version of Crawlab, the crawler management platform.
Stars: ✭ 122 (-80.6%)
Mutual labels:  crawler, spider, scrapy
Haipproxy
💖 Highly available distributed IP proxy pool, powered by Scrapy and Redis
Stars: ✭ 4,993 (+693.8%)
Mutual labels:  crawler, spider, scrapy
Crawlab
Distributed web crawler admin platform for spider management, supporting any language and framework.
Stars: ✭ 8,392 (+1234.18%)
Mutual labels:  crawler, spider, scrapy
Python3 Spider
Practical Python crawlers: simulated logins to major websites, including but not limited to slider CAPTCHAs, Pinduoduo, Meituan, Baidu, bilibili, Dianping and Taobao. If you like it, please star it ❤️
Stars: ✭ 2,129 (+238.47%)
Mutual labels:  crawler, spider, scrapy
Fbcrawl
A Facebook crawler
Stars: ✭ 536 (-14.79%)
Mutual labels:  crawler, spider, scrapy
Marmot
💐 Marmot | Web crawler / HTTP download package 🐭
Stars: ✭ 186 (-70.43%)
Mutual labels:  crawler, spider, scrapy
Scrapingoutsourcing
ScrapingOutsourcing focuses on sharing crawler code, aiming to publish a new example every week.
Stars: ✭ 164 (-73.93%)
Mutual labels:  crawler, spider, scrapy
Goribot
[Crawler/Scraper for Golang] 🕷 A lightweight, distributed-friendly Golang crawler framework.
Stars: ✭ 190 (-69.79%)
Mutual labels:  crawler, spider, scrapy
Newcrawler
A free web scraping tool written in Java.
Stars: ✭ 589 (-6.36%)
Mutual labels:  crawler, spider
Learnpython
Basic Python practice code and various crawler examples.
Stars: ✭ 451 (-28.3%)
Mutual labels:  crawler, spider
Easy Scraping Tutorial
Simple but useful Python web scraping tutorial code.
Stars: ✭ 583 (-7.31%)
Mutual labels:  crawler, scrapy
Wechatsogou
A crawler API for WeChat official accounts, based on Sogou WeChat search.
Stars: ✭ 5,220 (+729.89%)
Mutual labels:  crawler, scrapy
Crawly
Crawly, a high-level web crawling & scraping framework for Elixir.
Stars: ✭ 440 (-30.05%)
Mutual labels:  crawler, spider
Scrapple
A framework for creating semi-automatic web content extractors
Stars: ✭ 464 (-26.23%)
Mutual labels:  crawler, scrapy
Html2article
Extracts the main article text from HTML pages.
Stars: ✭ 441 (-29.89%)
Mutual labels:  crawler, spider
Python Spider
Assorted Python spider demos: Douban Top 250 movies; Douyu JSON data and image crawling; Taobao; Youyuan; CrawlSpider crawling of member profiles from the Hongniang matchmaking site, plus distributed crawling with Redis storage; small crawler demos; Selenium; Django API development; simulated logins to Zhihu, GitHub and Tuchong; full-site crawling of the Duodian mall; crawling WeChat official-account article history and articles shared in WeChat groups or by friends; and itchat monitoring of articles shared by specified official accounts.
Stars: ✭ 615 (-2.23%)
Mutual labels:  spider, scrapy
Gosint
OSINT Swiss Army Knife
Stars: ✭ 401 (-36.25%)
Mutual labels:  crawler, spider
Awesome Crawler
A collection of awesome web crawlers and spiders in different languages
Stars: ✭ 4,793 (+662%)
Mutual labels:  crawler, spider
Netdiscovery
NetDiscovery is a general-purpose crawler framework/middleware built on Vert.x, RxJava 2 and other frameworks.
Stars: ✭ 573 (-8.9%)
Mutual labels:  crawler, spider
Xsrfprobe
The Prime Cross Site Request Forgery (CSRF) Audit and Exploitation Toolkit.
Stars: ✭ 532 (-15.42%)
Mutual labels:  crawler, spider

icrawler
========

.. image:: https://img.shields.io/pypi/v/icrawler.svg
   :target: https://pypi.python.org/pypi/icrawler
   :alt: PyPI Version

.. image:: https://anaconda.org/hellock/icrawler/badges/version.svg
   :target: https://anaconda.org/hellock/icrawler
   :alt: Anaconda Version

.. image:: https://img.shields.io/pypi/pyversions/icrawler.svg
   :alt: Python Version

.. image:: https://img.shields.io/github/license/hellock/icrawler.svg
   :alt: License

Introduction
------------

Documentation: http://icrawler.readthedocs.io/

Try it with ``pip install icrawler`` or ``conda install -c hellock icrawler``.

This package is a mini framework of web crawlers. With a modular design, it is easy to use and extend. It supports media data such as images and videos very well, and can also be applied to texts and other types of files. Scrapy is heavy and powerful, while icrawler is tiny and flexible.

With this package, you can easily write a multi-threaded crawler by focusing on the content you want to crawl, keeping away from troublesome problems like exception handling, thread scheduling and communication.

It also provides built-in crawlers for popular image sites like Flickr and search engines such as Google, Bing and Baidu. (Thanks to all the contributors; pull requests are always welcome!)

Requirements
------------

Python 2.7+ or 3.5+ (3.5+ recommended).

Examples
--------

Using the built-in crawlers is very simple; a minimal example is shown below.

.. code:: python

    from icrawler.builtin import GoogleImageCrawler

    google_crawler = GoogleImageCrawler(storage={'root_dir': 'your_image_dir'})
    google_crawler.crawl(keyword='cat', max_num=100)
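
The other built-in search-engine crawlers follow the same interface. As a sketch, assuming ``BingImageCrawler`` and ``BaiduImageCrawler`` from ``icrawler.builtin`` take the same arguments as the Google crawler above:

.. code:: python

    from icrawler.builtin import BaiduImageCrawler, BingImageCrawler

    # Same storage/crawl interface, different search engine.
    bing_crawler = BingImageCrawler(storage={'root_dir': 'bing_images'})
    bing_crawler.crawl(keyword='cat', max_num=100)

    baidu_crawler = BaiduImageCrawler(storage={'root_dir': 'baidu_images'})
    baidu_crawler.crawl(keyword='cat', max_num=100)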

You can also configure the number of threads and apply advanced search options (note: this example is compatible with version 0.6.0 and later).

.. code:: python

    from icrawler.builtin import GoogleImageCrawler

    google_crawler = GoogleImageCrawler(
        feeder_threads=1,
        parser_threads=2,
        downloader_threads=4,
        storage={'root_dir': 'your_image_dir'})
    filters = dict(
        size='large',
        color='orange',
        license='commercial,modify',
        date=((2017, 1, 1), (2017, 11, 30)))
    google_crawler.crawl(keyword='cat', filters=filters, max_num=1000, file_idx_offset=0)

For more advanced usage of the built-in crawlers, please refer to the `documentation <http://icrawler.readthedocs.io/en/latest/builtin.html>`_.

Writing your own crawlers with this framework is also convenient; see the `tutorials <http://icrawler.readthedocs.io/en/latest/extend.html>`_ and the sketch below.
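
As a rough illustration, here is a minimal sketch of a custom crawler. The ``Feeder``/``Parser`` hooks and the ``Crawler`` keyword arguments are assumptions based on the extension tutorial; the gallery URLs and the regex-based parsing are purely hypothetical stand-ins for real page logic.

.. code:: python

    import re

    from icrawler import Crawler, Feeder, ImageDownloader, Parser

    class MyFeeder(Feeder):
        def feed(self):
            # Put the (hypothetical) page URLs to be parsed into url_queue.
            for i in range(1, 6):
                self.out_queue.put('http://example.com/gallery/page{}'.format(i))

    class MyParser(Parser):
        def parse(self, response, **kwargs):
            # Extract image URLs from the fetched page; each task is a
            # dictionary that must contain the field img_url.
            html = response.content.decode('utf-8', 'ignore')
            for img_url in re.findall(r'<img[^>]+src="([^"]+)"', html):
                yield dict(img_url=img_url)

    crawler = Crawler(
        feeder_cls=MyFeeder,
        parser_cls=MyParser,
        downloader_cls=ImageDownloader,
        downloader_threads=4,
        storage={'root_dir': 'my_images'})
    crawler.crawl()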

Architecture
------------

A crawler consists of 3 main components (Feeder, Parser and Downloader), connected to one another by FIFO queues. The workflow is shown in the following figure.

.. figure:: http://7xopqn.com1.z0.glb.clouddn.com/workflow.png
   :alt: workflow

- ``url_queue`` stores the URLs of pages that may contain images
- ``task_queue`` stores the image URLs together with any metadata you like; each element in the queue is a dictionary and must contain the field ``img_url``
- ``Feeder`` puts page URLs into ``url_queue``
- ``Parser`` requests and parses the page, then extracts the image URLs and puts them into ``task_queue``
- ``Downloader`` gets tasks from ``task_queue``, requests the images, and saves them to the given path

The feeder, parser and downloader are all thread pools, so you can specify the number of threads each pool uses; the sketch below models this workflow in plain Python.
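
Conceptually, the pipeline is three worker pools connected by two FIFO queues. The following sketch is an illustration of that workflow only, not icrawler's internal implementation, and the URLs are made up:

.. code:: python

    import queue
    import threading

    url_queue = queue.Queue()   # Feeder -> Parser: page URLs
    task_queue = queue.Queue()  # Parser -> Downloader: image tasks

    def feeder():
        # Feeder: put page URLs into url_queue.
        for i in range(1, 4):
            url_queue.put('http://example.com/page{}'.format(i))

    def parser():
        # Parser: take a page, extract image tasks into task_queue.
        while True:
            page_url = url_queue.get()
            # A real parser would fetch and parse the page here; every
            # task dict must contain the field img_url.
            task_queue.put({'img_url': page_url + '/image.jpg'})
            url_queue.task_done()

    def downloader():
        # Downloader: take an image task and save the file.
        while True:
            task = task_queue.get()
            print('downloading {}'.format(task['img_url']))
            task_queue.task_done()

    # Each component is a pool of threads whose size you can choose.
    for _ in range(2):
        threading.Thread(target=parser, daemon=True).start()
    for _ in range(4):
        threading.Thread(target=downloader, daemon=True).start()

    feeder()
    url_queue.join()   # wait until every page has been parsed
    task_queue.join()  # wait until every image task has been handled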
