xianhu / Pspider

License: BSD-2-Clause
A simple and easy-to-use Python spider framework. QQ discussion group: 597510560

Programming Languages

Python
139,335 projects; the #7 most used programming language

Projects that are alternatives to or similar to Pspider

Spoon
🥄 A package for building site-specific proxy pools.
Stars: ✭ 173 (-89.26%)
Mutual labels:  crawler, spider, proxies
flink-crawler
Continuous scalable web crawler built on top of Flink and crawler-commons
Stars: ✭ 48 (-97.02%)
Mutual labels:  crawler, spider, web-crawler
Zhihu Crawler People
A simple distributed crawler for Zhihu, with data analysis
Stars: ✭ 182 (-88.7%)
Mutual labels:  crawler, spider, web-crawler
Fooproxy
A robust and efficient scored, site-targeted IP proxy pool with an API service. You can plug in your own collectors to crawl proxy IPs and build a database of proxies validated separately against each of your spider's target sites. Supports MongoDB 4.0; built with Python 3.7. (Scored IP proxy pool; custom proxy data crawlers can be added at any time.)
Stars: ✭ 195 (-87.9%)
Mutual labels:  crawler, spider, multiprocessing
Awesome Crawler
A collection of awesome web crawlers and spiders in different languages
Stars: ✭ 4,793 (+197.52%)
Mutual labels:  crawler, spider, web-crawler
Abot
Cross-platform C# web crawler framework built for speed and flexibility. Please star this project! +1.
Stars: ✭ 1,961 (+21.73%)
Mutual labels:  crawler, spider, web-crawler
Zhihuspider
A multi-threaded Zhihu user crawler, based on Python 3
Stars: ✭ 201 (-87.52%)
Mutual labels:  multi-threading, crawler, spider
Crawlab Lite
Lite version of Crawlab, a crawler management platform
Stars: ✭ 122 (-92.43%)
Mutual labels:  crawler, spider, web-crawler
Spider Flow
A new-generation spider platform that defines crawling workflows graphically, so spiders can be built without writing any code.
Stars: ✭ 365 (-77.34%)
Mutual labels:  crawler, spider, web-crawler
Crawlertutorial
A minimal crawler tutorial (fetch, parse, search, multiprocessing, API), using PTT as the example
Stars: ✭ 282 (-82.5%)
Mutual labels:  crawler, spider, multiprocessing
Gopa
[WIP] GOPA, a spider written in Golang, for Elasticsearch. DEMO: http://index.elasticsearch.cn
Stars: ✭ 277 (-82.81%)
Mutual labels:  crawler, spider, web-crawler
Maman
Rust Web Crawler saving pages on Redis
Stars: ✭ 39 (-97.58%)
Mutual labels:  crawler, spider, web-crawler
Spidr
A versatile Ruby web spidering library that can spider a site, multiple domains, certain links or infinitely. Spidr is designed to be fast and easy to use.
Stars: ✭ 656 (-59.28%)
Mutual labels:  crawler, spider, web-crawler
Crawlab
Distributed web crawler admin platform for managing spiders, regardless of language or framework.
Stars: ✭ 8,392 (+420.92%)
Mutual labels:  crawler, spider, web-crawler
Geziyor
Geziyor, a fast web crawling & scraping framework for Go. Supports JS rendering.
Stars: ✭ 1,246 (-22.66%)
Mutual labels:  crawler, spider
Puppeteer Walker
a puppeteer walker 🕷 🕸
Stars: ✭ 78 (-95.16%)
Mutual labels:  crawler, spider
Infinitycrawler
A simple but powerful web crawler library for .NET
Stars: ✭ 97 (-93.98%)
Mutual labels:  crawler, web-crawler
Bilibili member crawler
A Bilibili user crawler. Yay, it's a crawler!
Stars: ✭ 115 (-92.86%)
Mutual labels:  crawler, spider
Crawler examples
Some classic web crawler projects.
Stars: ✭ 74 (-95.41%)
Mutual labels:  crawler, spider
Gopa Abandoned
GOPA, a spider written in Go. (NOTE: this project has moved to https://github.com/infinitbyte/gopa)
Stars: ✭ 98 (-93.92%)
Mutual labels:  crawler, spider

PSpider

A simple web spider framework written in Python. Requires Python 3.5+.

Features of PSpider

  1. Supports a multi-threaded crawling mode (using threading)
  2. Supports crawling through proxies (using threading and queue)
  3. Provides utility functions and classes such as UrlFilter and get_string_num (a sketch of the UrlFilter idea follows this list)
  4. Few lines of code; easy to read, understand, and extend
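
For example, the UrlFilter named in feature 3 can be pictured as a URL de-duplication helper. The sketch below is a minimal, set-based illustration of the idea, not the class shipped in the utilities module (the real one may add pattern options or capacity limits):

    import re

    class SimpleUrlFilter:
        # Minimal sketch of a URL filter: regex black/white lists plus a
        # seen-set for de-duplication. Illustrative only; the real UrlFilter
        # in the utilities module may differ.

        def __init__(self, black_patterns=(), white_patterns=(r"^https?://",)):
            self._black = [re.compile(p) for p in black_patterns]
            self._white = [re.compile(p) for p in white_patterns]
            self._seen = set()

        def check_and_add(self, url):
            # return True only the first time an acceptable url is seen
            if any(p.search(url) for p in self._black):
                return False
            if not any(p.search(url) for p in self._white):
                return False
            if url in self._seen:
                return False
            self._seen.add(url)
            return True

A fetch loop would call check_and_add(url) on each newly discovered URL before queueing it, dropping duplicates and unwanted links.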

Modules of PSpider

  1. utilities module: defines utility functions and classes for the multi-threaded spider
  2. instances module: defines the Fetcher, Parser, and Saver classes for the multi-threaded spider (a subclassing sketch follows this list)
  3. concurrent module: defines the WebSpiderFrame of the multi-threaded spider
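
The typical workflow with these modules is to subclass the instance classes and hand them to the frame from the concurrent module. The sketch below only illustrates the shape of that wiring; the method names (url_fetch, htm_parse, item_save), their signatures, and the start_working call are assumptions for illustration, so compare them with the actual definitions in the instances and concurrent modules (and with test.py):

    import urllib.request

    import spider  # the "spider" package from this repository

    # NOTE: the method names, signatures, and start_working call below are
    # illustrative assumptions, not the verified PSpider API.

    class MyFetcher(spider.Fetcher):
        def url_fetch(self, url, keys, proxies=None):
            # fetch the page for url and return its content
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read().decode("utf-8", errors="ignore")

    class MyParser(spider.Parser):
        def htm_parse(self, url, content):
            # parse content; return new URLs to crawl plus an extracted item
            new_urls = []                     # e.g. links found in content
            item = (url, len(content))
            return new_urls, item

    class MySaver(spider.Saver):
        def item_save(self, url, item):
            # persist the item, e.g. append one line per item to a file
            with open("output.txt", "a", encoding="utf-8") as f:
                f.write("%s\t%s\n" % item)

    web_spider = spider.WebSpiderFrame(MyFetcher(), MyParser(), MySaver())
    web_spider.start_working("https://www.example.com/")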

Procedure of PSpider

①: A Fetcher gets a URL from UrlQueue and makes a request based on it
②: The result (content) of ① is put into HtmlQueue, so a Parser can get it
③: A Parser gets content from HtmlQueue and parses it into new URLs and an item
④: The new URLs are put into UrlQueue, so Fetchers can get them
⑤: The item is put into ItemQueue, so the Saver can get it
⑥: The Saver gets items from ItemQueue and saves them to the filesystem or a database
⑦: The Proxieser gets proxies from the web or a database and puts them into ProxiesQueue
⑧: A Fetcher gets proxies from ProxiesQueue if needed and makes requests through them
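
To make steps ①–⑥ concrete, here is a compressed, standard-library-only model of the same queue pipeline. It is a simplified sketch of the architecture, not PSpider's actual implementation; Proxieser and ProxiesQueue (steps ⑦–⑧) are omitted, and the HTTP fetch is stubbed out:

    import queue
    import threading

    url_queue = queue.Queue()    # UrlQueue: URLs waiting to be fetched
    html_queue = queue.Queue()   # HtmlQueue: fetched content waiting to be parsed
    item_queue = queue.Queue()   # ItemQueue: parsed items waiting to be saved

    def fetcher():
        # steps ① and ②: take a URL, fetch it, hand the content to a parser
        while True:
            url = url_queue.get()
            content = "<html>stub for %s</html>" % url  # stand-in for a real request
            html_queue.put((url, content))
            url_queue.task_done()

    def parser():
        # steps ③, ④ and ⑤: parse content into new URLs and an item
        while True:
            url, content = html_queue.get()
            new_urls, item = [], {"url": url, "size": len(content)}
            for new_url in new_urls:
                url_queue.put(new_url)
            item_queue.put(item)
            html_queue.task_done()

    def saver():
        # step ⑥: persist each item (here: just print it)
        while True:
            item = item_queue.get()
            print("saved:", item)
            item_queue.task_done()

    for worker in (fetcher, parser, saver):
        threading.Thread(target=worker, daemon=True).start()

    url_queue.put("https://www.example.com/")
    url_queue.join()
    html_queue.join()
    item_queue.join()

Running several fetcher threads against the shared queues is what gives the frame its multi-threaded crawling mode.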

Tutorials of PSpider

Installation (the first method is recommended):
(1) Copy the "spider" directory into your project directory, then import spider
(2) Install spider into your Python environment with python3 setup.py install

See test.py for a working example.

TodoList

  1. More demos
  2. Distributed spider support
  3. JavaScript execution

If you have any questions or suggestions, please open an issue or submit a pull request.
