
howie6879 / talospider

Licence: other
talospider - A simple, lightweight scraping micro-framework

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives of or similar to talospider

Gopa
[WIP] GOPA, a spider written in Golang, for Elasticsearch. DEMO: http://index.elasticsearch.cn
Stars: ✭ 277 (+385.96%)
Mutual labels:  spider, crawling
Crawly
Crawly, a high-level web crawling & scraping framework for Elixir.
Stars: ✭ 440 (+671.93%)
Mutual labels:  spider, crawling
Skycaiji
Skycaiji (蓝天采集器) is a free data-collection and publishing crawler built with PHP + MySQL and deployable on a cloud server. It can scrape almost any type of web page, integrates seamlessly with all kinds of CMS site builders, publishes data in real time without login, and runs fully automatically with no manual intervention. A completely cross-platform, cloud-based crawler system for web big-data collection.
Stars: ✭ 1,514 (+2556.14%)
Mutual labels:  spider, crawling
flink-crawler
Continuous scalable web crawler built on top of Flink and crawler-commons
Stars: ✭ 48 (-15.79%)
Mutual labels:  spider, crawling
Colly
Elegant Scraper and Crawler Framework for Golang
Stars: ✭ 15,535 (+27154.39%)
Mutual labels:  spider, crawling
Arachnid
Powerful web scraping framework for Crystal
Stars: ✭ 68 (+19.3%)
Mutual labels:  spider, crawling
Webster
a reliable high-level web crawling & scraping framework for Node.js.
Stars: ✭ 364 (+538.6%)
Mutual labels:  spider, crawling
wget-lua
Wget-AT is a modern Wget with Lua hooks, Zstandard (+dictionary) WARC compression and URL-agnostic deduplication.
Stars: ✭ 52 (-8.77%)
Mutual labels:  spider, crawling
Linkedin Profile Scraper
🕵️‍♂️ LinkedIn profile scraper returning structured profile data in JSON. Works in 2020.
Stars: ✭ 171 (+200%)
Mutual labels:  spider, crawling
Pspider
An easy-to-use Python crawler framework. QQ group: 597510560
Stars: ✭ 1,611 (+2726.32%)
Mutual labels:  spider, web-spider
BaiduSpider
The project has moved to https://github.com/BaiduSpider/BaiduSpider !! A crawler for Baidu search results, currently supporting Baidu web search, image search, Zhidao (Q&A) search, video search, news search, Wenku (library) search, Jingyan (experience) search, and Baike (encyclopedia) search.
Stars: ✭ 29 (-49.12%)
Mutual labels:  spider, crawling
scrapy-distributed
A series of distributed components for Scrapy. Including RabbitMQ-based components, Kafka-based components, and RedisBloom-based components for Scrapy.
Stars: ✭ 38 (-33.33%)
Mutual labels:  spider, crawling
SpiderDemo
Crawler demos implemented in Python
Stars: ✭ 56 (-1.75%)
Mutual labels:  spider
kasthack.osp
A generator of raw dumps of VK user data.
Stars: ✭ 15 (-73.68%)
Mutual labels:  crawling
Infect
Create your virus in Termux!
Stars: ✭ 33 (-42.11%)
Mutual labels:  crawling
Z-Spider
Tips and examples for crawler development
Stars: ✭ 33 (-42.11%)
Mutual labels:  spider
pomp
Screen scraping and web crawling framework
Stars: ✭ 61 (+7.02%)
Mutual labels:  crawling
FofaMap
FofaMap is a cross-platform FOFA data collector developed in Python 3. It supports site-icon queries, batch queries, and custom queries of FOFA data, automatically deduplicates the results, and exports them to an Excel spreadsheet. The Spring Festival special edition can also invoke Nuclei to scan targets for vulnerabilities, giving you a head start in bug hunting.
Stars: ✭ 118 (+107.02%)
Mutual labels:  spider
douyin-api
Douyin interfaces and APIs: data crawler, live-stream data and API, video API, watermark removal, video download, video parsing, live-stream monitoring, and data collection.
Stars: ✭ 41 (-28.07%)
Mutual labels:  spider
Scrapy-Spiders
A codebase of Scrapy-based data-collection spiders
Stars: ✭ 34 (-40.35%)
Mutual labels:  spider

talospider


1. Why write this?

For simple pages there is no need for a heavyweight framework to do the scraping, yet writing everything from scratch by hand is tedious. talospider is aimed at writing single-page crawlers.

A micro crawler framework - small, convenient, and good for practice and learning

So talospider was written for this need:

  • 1. Item extraction for a single page - see the detailed introduction here
  • 2. The spider module - see the detailed introduction here

Note: this project has been deprecated. If you need something like it, please switch to ruia, my newly written asynchronous framework.

2. Introduction & Usage

(process flow diagram)

Usage

pip install talospider

2.1. item

This module can be used on its own. For sites where the requests are simple (for example, only GET requests are needed), this module alone is enough to quickly write the crawler you want, for example (the code below uses Python 3; for Python 2 see the examples directory):

2.1.1. Single page, single target

For example, to get the book information, cover, and so on from http://book.qidian.com/info/1004608738, you can simply write:

from pprint import pprint

from talospider import Item, TextField, AttrField

class QidianSpider(Item):
    # Each field declares a CSS selector for the value to extract
    title = TextField(css_select='.book-info>h1>em')
    author = TextField(css_select='a.writer')
    cover = AttrField(css_select='a#bookImg>img', attr='src')

    # tal_<field> methods post-process the extracted value of <field>
    def tal_title(self, title):
        # Identity hook, kept for illustration
        return title

    def tal_cover(self, cover):
        # The src attribute is protocol-relative; prepend the scheme
        return 'http:' + cover

if __name__ == '__main__':
    item_data = QidianSpider.get_item(url='http://book.qidian.com/info/1004608738')
    pprint(item_data)

See qidian_details_by_item.py for the full example.
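
The tal_<field> methods above behave as per-field post-processing hooks: talospider extracts the raw value, then passes it through the matching hook before it lands on the item. A minimal sketch of such a hook (the AuthorItem class and its whitespace cleanup are illustrative, not part of the original example):

from talospider import Item, TextField

class AuthorItem(Item):
    author = TextField(css_select='a.writer')

    # Hypothetical cleaner: receives the raw extracted value of
    # `author` and returns the normalized result
    def tal_author(self, author):
        return author.strip() if isinstance(author, str) else author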

2.1.2. Single page, multiple targets

For example, to get the 25 movies shown on the first page of the Douban Top 250, a single page containing 25 targets, you can simply write:

from pprint import pprint
from talospider import Item, TextField, AttrField

class DoubanSpider(Item):
    # An Item subclass; target_item marks the repeated container,
    # and the other fields are extracted inside each container
    target_item = TextField(css_select='div.item')
    title = TextField(css_select='span.title')
    cover = AttrField(css_select='div.pic>a>img', attr='src')
    abstract = TextField(css_select='span.inq')

    def tal_title(self, title):
        # A movie may match several <span class="title"> elements;
        # join their texts when more than one is returned
        if isinstance(title, str):
            return title
        else:
            return ''.join([i.text.strip().replace('\xa0', '') for i in title])

if __name__ == '__main__':
    items_data = DoubanSpider.get_items(url='https://movie.douban.com/top250')
    result = []
    for item in items_data:
        result.append({
            'title': item.title,
            'cover': item.cover,
            'abstract': item.abstract,
        })
    pprint(result)

See douban_page_by_item.py for the full example.
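
The target_item field is what makes multiple targets per page work: every element matching its selector becomes one item, and the remaining fields are extracted inside that element. A minimal, self-contained sketch (assuming get_items also accepts an html keyword argument, as it does in section 2.2 below; the HTML snippet is illustrative):

from talospider import Item, TextField

HTML = '''
<div class="item"><span class="title">Movie A</span></div>
<div class="item"><span class="title">Movie B</span></div>
'''

class MiniItem(Item):
    target_item = TextField(css_select='div.item')  # one item per match
    title = TextField(css_select='span.title')      # extracted per item

if __name__ == '__main__':
    for item in MiniItem.get_items(html=HTML):
        print(item.title)  # expected: Movie A, then Movie B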

2.2. spider

When you need to crawl pages with multiple levels, for example all of the Douban Top 250 movies rather than just the first page, the spider part comes into play:

#!/usr/bin/env python
from talospider import AttrField, Request, Spider, Item, TextField
from talospider.utils import get_random_user_agent


class DoubanItem(Item):
    # An Item subclass; target_item marks each repeated movie block
    target_item = TextField(css_select='div.item')
    title = TextField(css_select='span.title')
    cover = AttrField(css_select='div.pic>a>img', attr='src')
    abstract = TextField(css_select='span.inq')

    def tal_title(self, title):
        if isinstance(title, str):
            return title
        else:
            return ''.join([i.text.strip().replace('\xa0', '') for i in title])


class DoubanSpider(Spider):
    # The start urls, required
    start_urls = ['https://movie.douban.com/top250']
    # Request configuration: retries, delay between requests, timeout
    request_config = {
        'RETRIES': 3,
        'DELAY': 0,
        'TIMEOUT': 20
    }

    def parse(self, res):
        # The parse function, required
        # Convert the html to an etree
        etree = self.e_html(res.html)
        # Extract the pagination links and build the page urls
        pages = [i.get('href') for i in etree.cssselect('.paginator>a')]
        pages.insert(0, '?start=0&filter=')
        headers = {
            "User-Agent": get_random_user_agent()
        }
        for page in pages:
            url = self.start_urls[0] + page
            yield Request(url, request_config=self.request_config, headers=headers, callback=self.parse_item)

    def parse_item(self, res):
        items_data = DoubanItem.get_items(html=res.html)
        for item in items_data:
            # Append each title to the output file
            with open('douban250.txt', 'a+') as f:
                f.writelines(item.title + '\n')


if __name__ == '__main__':
    DoubanSpider.start()

Console output:

2018-01-02 09:33:34 - [talospider ]: talospider started
2018-01-02 09:33:35 - [downloading]: GET: https://movie.douban.com/top250
2018-01-02 09:33:35 - [downloading]: GET: https://movie.douban.com/top250?start=0&filter=
2018-01-02 09:33:35 - [downloading]: GET: https://movie.douban.com/top250?start=25&filter=
2018-01-02 09:33:36 - [downloading]: GET: https://movie.douban.com/top250?start=50&filter=
2018-01-02 09:33:36 - [downloading]: GET: https://movie.douban.com/top250?start=75&filter=
2018-01-02 09:33:36 - [downloading]: GET: https://movie.douban.com/top250?start=100&filter=
2018-01-02 09:33:37 - [downloading]: GET: https://movie.douban.com/top250?start=125&filter=
2018-01-02 09:33:37 - [downloading]: GET: https://movie.douban.com/top250?start=150&filter=
2018-01-02 09:33:37 - [downloading]: GET: https://movie.douban.com/top250?start=175&filter=
2018-01-02 09:33:37 - [downloading]: GET: https://movie.douban.com/top250?start=200&filter=
2018-01-02 09:33:38 - [downloading]: GET: https://movie.douban.com/top250?start=225&filter=
2018-01-02 09:33:38 - [talospider ]: Time usage:0:00:03.367604

A douban250.txt file is now generated in the current directory. See douban_page_by_spider.py for the full example.
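
If you want structured output instead of a plain text file, parse_item can just as well emit JSON lines; a minimal sketch reusing the classes above (the DoubanJsonSpider subclass and the douban250.jsonl filename are illustrative):

import json

class DoubanJsonSpider(DoubanSpider):
    def parse_item(self, res):
        # One JSON object per movie, appended as a line
        with open('douban250.jsonl', 'a+', encoding='utf-8') as f:
            for item in DoubanItem.get_items(html=res.html):
                f.write(json.dumps({
                    'title': item.title,
                    'cover': item.cover,
                    'abstract': item.abstract,
                }, ensure_ascii=False) + '\n')

Run it the same way, with DoubanJsonSpider.start().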

3. Notes

This is a learning project; there is still plenty of room for improvement.

Examples written with talospider:
