
Litreily / Capturer

License: MIT
Capture pictures from websites such as Sina, LOFTER, Huaban, and so on

Programming Languages

python
139335 projects - #7 most used programming language
python3
1442 projects

Projects that are alternatives of or similar to Capturer

Alipayspider Scrapy
AlipaySpider on Scrapy (uses the Chrome driver); an Alipay spider based on Scrapy
Stars: ✭ 70 (-7.89%)
Mutual labels:  spider, scrapy
Image Downloader
Download images from Google, Bing, and Baidu.
Stars: ✭ 1,173 (+1443.42%)
Mutual labels:  spider, scrapy
Icrawler
A multi-threaded crawler framework with many built-in image crawlers.
Stars: ✭ 629 (+727.63%)
Mutual labels:  spider, scrapy
Haipproxy
💖 Highly available distributed IP proxy pool, powered by Scrapy and Redis
Stars: ✭ 4,993 (+6469.74%)
Mutual labels:  spider, scrapy
App comments spider
Crawls game reviews from Baidu Tieba, TapTap, the App Store, and official Weibo bloggers (based on redis_scrapy); deduplication uses a Bloom filter.
Stars: ✭ 38 (-50%)
Mutual labels:  spider, scrapy
Fbcrawl
A Facebook crawler
Stars: ✭ 536 (+605.26%)
Mutual labels:  spider, scrapy
Funpyspidersearchengine
Word2vec personalized search + Scrapy 2.3.0 (data crawling) + ElasticSearch 7.9.1 (data storage with an external RESTful API) + Django 3.1.1 search
Stars: ✭ 782 (+928.95%)
Mutual labels:  spider, scrapy
Happy Spiders
🔧 🔩 🔨 A curated collection of crawler-related tools, simulated-login techniques, proxy IPs, Scrapy template code, and more.
Stars: ✭ 261 (+243.42%)
Mutual labels:  spider, scrapy
Jspider
JSpider adds at least one website's JS decryption method every week; Stars welcome. WeChat for discussion: 13298307816
Stars: ✭ 914 (+1102.63%)
Mutual labels:  spider, scrapy
Mailinglistscraper
A Python web scraper for public email lists.
Stars: ✭ 19 (-75%)
Mutual labels:  spider, scrapy
Gosint
OSINT Swiss Army Knife
Stars: ✭ 401 (+427.63%)
Mutual labels:  spider, telegram
Django Dynamic Scraper
Creating Scrapy scrapers via the Django admin interface
Stars: ✭ 1,024 (+1247.37%)
Mutual labels:  spider, scrapy
Elves
🎊 Design and implementation of a lightweight crawler framework.
Stars: ✭ 315 (+314.47%)
Mutual labels:  spider, scrapy
Python Spider
Douban Top 250 movies; Douyu JSON data and beauty-photo crawling; Taobao; Youyuan; CrawlSpider crawling of basic profile info from the Hongniang dating site, plus distributed Hongniang crawling with Redis storage; small spider demos; Selenium; Duodian crawling; Django API development; Youyuan info crawling; simulated logins for Zhihu, GitHub, and Tuchong; full-site crawling of the Duodian mall; WeChat official-account article history; articles shared in WeChat groups or by WeChat friends; itchat monitoring of articles shared by a specified WeChat official account
Stars: ✭ 615 (+709.21%)
Mutual labels:  spider, scrapy
Alltheplaces
A set of spiders and scrapers to extract location information from places that post their location on the internet.
Stars: ✭ 277 (+264.47%)
Mutual labels:  spider, scrapy
Darknet chinesetrading
🚇 A monitoring spider for the Chinese-language dark-web site DEEPMIX
Stars: ✭ 649 (+753.95%)
Mutual labels:  spider, telegram
Douban Crawler
A crawler for https://douban.com
Stars: ✭ 13 (-82.89%)
Mutual labels:  spider, scrapy
Tieba spider
A Baidu Tieba spider (based on Scrapy and MySQL)
Stars: ✭ 257 (+238.16%)
Mutual labels:  spider, scrapy
Seeker
Seeker - another job board aggregator.
Stars: ✭ 16 (-78.95%)
Mutual labels:  spider, scrapy
Crawlab
Distributed web crawler admin platform for spider management, regardless of language and framework.
Stars: ✭ 8,392 (+10942.11%)
Mutual labels:  spider, scrapy

What's Capturer

A capture tool for downloading pictures from websites such as Sina, LOFTER, Huaban, and others.

If you have any suggestions, or know of awesome picture websites you would like captured, please let me know!

Supported Websites

How to use

  • Install Python 3 and the required libraries
  • Update the parameters for each website (see Parameters below)
  • Run ./capturer, main.py, or ***_spider.py to capture images from
    • sina
    • lofter
    • toutiao
    • qqzone
    • telegram
    • netbian
  • Run huaban/run.py to capture images from huaban
  • Run vmgirls/run.py to capture images from vmgirls
  • Run fabiaoqing/fabiaoqing_spider.py key1 [key2] [key3] ...

Notices

Almost all file paths are based on ~/Pictures/python, where ~ is the home directory.
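As a sketch, the base directory described above can be resolved in Python like this (the `save_dir` helper is hypothetical; each spider may lay out its subdirectories differently):

```python
import os

# All spiders store files under ~/Pictures/python,
# where ~ expands to the current user's home directory.
BASE_DIR = os.path.join(os.path.expanduser("~"), "Pictures", "python")

def save_dir(site: str) -> str:
    """Hypothetical helper: a per-site subdirectory under the base path."""
    return os.path.join(BASE_DIR, site)

print(save_dir("sina"))
```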

Parameters

huaban

  • USERNAME: the huaban username you want to capture
  • ROOT_DIR: the directory where the images are stored
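A minimal sketch of these settings, assuming the names above map directly to variables (the values are illustrative, and the actual layout in huaban/run.py may differ):

```python
import os

# huaban spider settings (illustrative values; "example_user" is hypothetical)
USERNAME = "example_user"  # huaban user whose images you want to capture
ROOT_DIR = os.path.join(os.path.expanduser("~"), "Pictures", "python", "huaban")

print(ROOT_DIR)
```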

Sina

  • uid: the 10-digit user ID of the Sina Weibo account you want to capture
  • cookies: your cookies after logging in to Sina Weibo
  • path: the directory where the pictures are saved
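As a sketch under the same assumption that these names map to variables, the Sina settings could look like this (all values are placeholders, not real credentials):

```python
import os

uid = "1234567890"  # hypothetical 10-digit Sina Weibo user id
cookies = "name1=value1; name2=value2"  # placeholder; paste your own cookies after login
path = os.path.join(os.path.expanduser("~"), "Pictures", "python", "sina")

# Sanity-check the uid format before running the spider.
assert uid.isdigit() and len(uid) == 10
```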

Lofter

  • username: the LOFTER username you want to capture
  • path: the directory where the pictures are saved; see the _get_path function in lofter_spider.py
  • query_number: the number of blogs in each query packet (default: 40)
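Since blogs are fetched in packets of query_number posts, the number of query requests needed for a given blog count can be sketched as follows (a hypothetical helper, not code from lofter_spider.py):

```python
import math

query_number = 40  # blogs per query packet (the default noted above)

def num_queries(total_blogs: int) -> int:
    """How many query packets are needed to cover total_blogs posts."""
    return math.ceil(total_blogs / query_number)

print(num_queries(100))  # 100 blogs -> 3 packets (40 + 40 + 20)
```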

Telegram

Blogs

You can find all the related blog posts at https://www.litreily.top/tags/spider/.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].