
mazzzystar / BaiduCrawler

Sample of using proxies to crawl Baidu search results.

Programming Languages

Python

Projects that are alternatives to, or similar to, BaiduCrawler

Proxybroker
Proxy [Finder | Checker | Server]. HTTP(S) & SOCKS 🎭
Stars: ✭ 2,767 (+2285.34%)
Mutual labels:  crawler, proxy, proxies
Spoon
🥄 A package for building specific Proxy Pool for different Sites.
Stars: ✭ 173 (+49.14%)
Mutual labels:  crawler, proxy, proxies
Ppspider
Web spider built on Puppeteer; supports task queues and task scheduling via decorators, nedb/mongodb storage, and data visualization.
Stars: ✭ 237 (+104.31%)
Mutual labels:  crawler, proxy
Decryptlogin
APIs for logging in to some websites using requests.
Stars: ✭ 1,861 (+1504.31%)
Mutual labels:  baidu, crawler
Scrapy Crawlera
Crawlera middleware for Scrapy
Stars: ✭ 281 (+142.24%)
Mutual labels:  crawler, proxy
Marmot
💐Marmot | Web Crawler/HTTP protocol Download Package 🐭
Stars: ✭ 186 (+60.34%)
Mutual labels:  crawler, proxy
Ok ip proxy pool
🍿 A Python crawler proxy IP pool (proxy pool) 🍟 that works reasonably well.
Stars: ✭ 196 (+68.97%)
Mutual labels:  crawler, proxy
Python3Webcrawler
🌈 Python 3 web crawling in practice: QQ Music songs, JD.com product info, Fang.com, cracking Youdao Translate, building a proxy pool, Douban Books, Baidu Images, cracking NetEase login, Bilibili simulated QR-code login, Xiaoetong, Lizhi Weike.
Stars: ✭ 208 (+79.31%)
Mutual labels:  crawler, baidu
Free proxy website
A collection of websites offering free SOCKS/HTTPS/HTTP proxies.
Stars: ✭ 119 (+2.59%)
Mutual labels:  crawler, proxy
Udpx
A fast UDP proxy written in Golang.
Stars: ✭ 56 (-51.72%)
Mutual labels:  proxy, proxies
Proxy Scraper
Proxy-Scraper is a simple Perl script for scraping proxies from multiple websites.
Stars: ✭ 24 (-79.31%)
Mutual labels:  proxy, proxies
Hproxy
hproxy - an asynchronous IP proxy pool that aims to make getting a proxy as convenient as possible.
Stars: ✭ 62 (-46.55%)
Mutual labels:  crawler, proxy
Proxy pool
Python crawler proxy IP pool (proxy pool).
Stars: ✭ 13,964 (+11937.93%)
Mutual labels:  crawler, proxy
Http request randomizer
Proxying Python Requests
Stars: ✭ 110 (-5.17%)
Mutual labels:  proxy, proxies
Pspider
An easy-to-use Python crawler framework. QQ group: 597510560
Stars: ✭ 1,611 (+1288.79%)
Mutual labels:  crawler, proxies
Ecommercecrawlers
Gitee repository: AJay13/ECommerceCrawlers; GitHub repository: DropsDevopsOrg/ECommerceCrawlers; project showcase: http://wechat.doonsec.com
Stars: ✭ 3,073 (+2549.14%)
Mutual labels:  baidu, crawler
Smartproxy
HTTP(S) Rotating Residential proxies - Code examples & General information
Stars: ✭ 205 (+76.72%)
Mutual labels:  proxy, proxies
Baiduimagespider
A super-lightweight Baidu Images crawler.
Stars: ✭ 591 (+409.48%)
Mutual labels:  baidu, crawler
Scrapoxy
Scrapoxy hides your scraper behind a cloud. It starts a pool of proxies to send your requests. Now, you can crawl without thinking about blacklisting!
Stars: ✭ 1,322 (+1039.66%)
Mutual labels:  crawler, proxy
Baiduspider
BaiduSpider, a crawler for Baidu search results; currently supports Baidu web, image, Zhidao (Q&A), video, news, Wenku (document), Jingyan (how-to), and Baike (encyclopedia) search.
Stars: ✭ 105 (-9.48%)
Mutual labels:  baidu, crawler

BaiduCrawler

Crawls the text inside the c-abstract elements of Baidu search result pages, and bypasses Baidu's anti-crawler measures by continuously rotating proxy IPs, enabling uninterrupted crawling of Baidu search results for hundreds of thousands of query terms.
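As a rough sketch of the approach (not the project's actual code; the request parameters, selector usage, and function name below are assumptions), fetching one results page through a proxy and pulling out the c-abstract snippets might look like this:

# Hypothetical sketch: fetch a Baidu results page via a proxy and
# extract the text of each c-abstract snippet.
import requests
from bs4 import BeautifulSoup

def fetch_abstracts(query, proxy):
    # proxy is an "ip:port" string, rotated by the caller on failure
    resp = requests.get(
        "https://www.baidu.com/s",
        params={"wd": query},
        proxies={"http": "http://" + proxy, "https": "http://" + proxy},
        timeout=5,
    )
    soup = BeautifulSoup(resp.text, "lxml")
    # Baidu wraps each result snippet in an element with class "c-abstract"
    return [node.get_text(strip=True) for node in soup.select(".c-abstract")]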

Proxy IP acquisition strategy

    1. Scrape all [ip:port] pairs on the proxy listing pages and test their availability (some proxy IPs cannot be connected to at all).
    2. Use a "multi-round testing" strategy: each IP goes through N rounds of connection tests spaced duration apart, and each round discards any IP whose connection time exceeds timeout. The IPs that survive all N rounds are those that connected within timeout every single time, which avoids the "glorious 15 minutes" effect; see the sketch after this list.
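A minimal sketch of the multi-round idea, assuming a simple availability check against a test URL (the helper names and the default values of N, duration, and timeout are illustrative, not the project's actual settings):

# Hypothetical sketch of multi-round proxy validation.
import time
import requests

TEST_URL = "http://www.baidu.com"  # assumed availability-check target

def connects_within(proxy, timeout):
    try:
        requests.get(TEST_URL,
                     proxies={"http": "http://" + proxy},
                     timeout=timeout)
        return True
    except requests.RequestException:
        return False

def multi_round_filter(proxies, n_rounds=3, duration=60, timeout=2):
    survivors = list(proxies)
    for round_no in range(n_rounds):
        # Each round keeps only the IPs that answered within `timeout`.
        survivors = [p for p in survivors if connects_within(p, timeout)]
        if round_no < n_rounds - 1:
            time.sleep(duration)  # wait before the next round
    return survivors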

Crawling strategy

There are three policies (sketched after the list):

    1. Whenever a download_error occurs, switch to a new IP.
    2. After every 200 texts crawled, switch to a new IP.
    3. After every 20,000 crawl requests, refresh the IP pool.
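A hedged sketch of how the three policies could fit together; the thresholds mirror the defaults above, while rebuild_ip_pool, save, and fetch_abstracts (from the earlier sketch) are hypothetical helpers:

# Hypothetical sketch of the three IP-rotation policies.
import requests

def crawl_all(queries, pool):
    proxy = pool.pop()
    texts_on_this_ip = 0
    for total_crawls, query in enumerate(queries, start=1):
        if total_crawls % 20000 == 0:
            pool = rebuild_ip_pool()          # policy 3: refresh the pool
        try:
            save(query, fetch_abstracts(query, proxy))
            texts_on_this_ip += 1
            if texts_on_this_ip >= 200:       # policy 2: new IP every 200 texts
                proxy, texts_on_this_ip = pool.pop(), 0
        except requests.RequestException:     # policy 1: new IP on download_error
            proxy = pool.pop()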

All of the above parameters can be adjusted manually. The IP pool is currently used in a one-shot fashion; if you need more high-quality IPs, see my other project Proxy, an integrated tool for scraping, testing, evaluating, and storing proxy IPs, which may help.

TODO

    1. Re-crawl terms that failed due to network issues, until a user-specified crawl rate is reached.
    2. Give higher weight to fast, high-quality IPs, forming a priority-ordered IP pool.
    3. Rewrite IP evaluation to be multithreaded.

Usage

Prerequisites

pip install requests
pip install lxml
pip install beautifulsoup4

git clone https://github.com/fancoo/BaiduCrawler
cd BaiduCrawler

Python 2.7

python baidu_crawler.py

Python 3

The Python 3 version has only been tested on Windows with Python 3.6.

cd Py3
python baidu_crawler.py

Update 2017/5/4

  • The website previously used to check whether a proxy IP is valid went down; it has been replaced.
  • Added more proxy IP source websites.
  • Improved configurability.

Update 2017/6/13

  • Scraped proxy IPs are now stored in MySQL; the next run reads from the database first and only then scrapes the websites (sketched below).
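A minimal sketch of that database-first flow; the connection settings, table, and column names are assumptions, not the project's actual schema:

# Hypothetical sketch: read cached proxy IPs from MySQL before scraping anew.
import pymysql

def load_cached_proxies():
    conn = pymysql.connect(host="localhost", user="root",
                           password="secret", database="baidu_crawler")
    with conn.cursor() as cur:
        cur.execute("SELECT ip_port FROM proxies")  # assumed table/column
        rows = [row[0] for row in cur.fetchall()]
    conn.close()
    return rows

def get_proxies():
    cached = load_cached_proxies()
    # Fall back to scraping the proxy sites only when the cache is empty.
    return cached if cached else scrape_proxy_sites()  # hypothetical helper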

Update 2017/6/18

  • Revised parts of the PR submitted by BoBoGithub and refactored the code in ip_pool.py.
  • The current version only saves valid IPs to the database; IP quality ranking and multithreaded crawling are not implemented. Given limited time and energy, these are left for the future.

Update 2017/7/25

  • Added support for Python 3.6.