
clemfromspace / Scrapy Selenium

License: WTFPL
Scrapy middleware to handle JavaScript pages using Selenium

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Scrapy Selenium

Python3 Spider
Python crawlers in practice - simulated login for major websites, including but not limited to: slider CAPTCHAs, Pinduoduo, Meituan, Baidu, bilibili, Dianping, and Taobao. If you like it, please star ❤️
Stars: ✭ 2,129 (+287.09%)
Mutual labels:  scrapy, selenium
python-crawler
A repository for learning web crawling, suitable for complete beginners and friendly to newcomers
Stars: ✭ 37 (-93.27%)
Mutual labels:  selenium, scrapy
double-agent
A test suite of common scraper detection techniques. See how detectable your scraper stack is.
Stars: ✭ 123 (-77.64%)
Mutual labels:  crawling, scrapy
Dotnetcrawler
DotnetCrawler is a straightforward, lightweight web crawling/scraping library for Entity Framework Core output, based on .NET Core. The library is designed like other strong crawler libraries such as WebMagic and Scrapy, but can be extended to your custom requirements. Medium link: https://medium.com/@mehmetozkaya/creating-custom-web-crawler-with-dotnet-core-using-entity-framework-core-ec8d23f0ca7c
Stars: ✭ 100 (-81.82%)
Mutual labels:  scrapy, crawling
bots-zoo
No description or website provided.
Stars: ✭ 59 (-89.27%)
Mutual labels:  crawling, selenium
Wswp
Code for the second edition of the Web Scraping with Python book from Packt Publishing
Stars: ✭ 112 (-79.64%)
Mutual labels:  scrapy, selenium
scrapy-fieldstats
A Scrapy extension to log items coverage when the spider shuts down
Stars: ✭ 17 (-96.91%)
Mutual labels:  crawling, scrapy
Python Spider
Douban top-250 movies; scraping JSON data and photos from Douyu; Taobao; Youyuan; using CrawlSpider to scrape basic profile information from the Hongniang dating site, plus distributed crawling of Hongniang with storage in Redis; small crawler demos; Selenium; scraping Duodian; building APIs with Django; scraping Youyuan data; simulated logins to Zhihu, GitHub, and Tuchong; scraping the entire Duodian mall site; scraping the article history of WeChat official accounts; scraping articles shared in WeChat groups or by WeChat friends; using itchat to monitor articles shared by a specified WeChat official account
Stars: ✭ 615 (+11.82%)
Mutual labels:  scrapy, selenium
XMQ-BackUp
A backup tool for Xiaomiquan: circles, topics, images, and files.
Stars: ✭ 22 (-96%)
Mutual labels:  selenium, scrapy
scrapy-distributed
A series of distributed components for Scrapy, including RabbitMQ-based, Kafka-based, and RedisBloom-based components.
Stars: ✭ 38 (-93.09%)
Mutual labels:  crawling, scrapy
Alipayspider Scrapy
AlipaySpider on Scrapy (using Chrome driver); an Alipay crawler based on Scrapy
Stars: ✭ 70 (-87.27%)
Mutual labels:  scrapy, selenium
Post Tuto Deployment
Build and deploy a machine learning app from scratch 🚀
Stars: ✭ 368 (-33.09%)
Mutual labels:  scrapy, selenium
Pdf downloader
A Scrapy Spider for downloading PDF files from a webpage.
Stars: ✭ 18 (-96.73%)
Mutual labels:  scrapy, crawling
Seleniumcrawler
An example using Selenium WebDriver for Python and the Scrapy framework to create a web scraper that crawls an ASP site
Stars: ✭ 117 (-78.73%)
Mutual labels:  scrapy, selenium
Scrapyrt
HTTP API for Scrapy spiders
Stars: ✭ 637 (+15.82%)
Mutual labels:  scrapy, crawling
RARBG-scraper
A RARBG scraper with Selenium headless browsing and CAPTCHA solving
Stars: ✭ 38 (-93.09%)
Mutual labels:  selenium, scrapy
Easy Scraping Tutorial
Simple but useful Python web scraping tutorial code.
Stars: ✭ 583 (+6%)
Mutual labels:  scrapy, crawling
Pythonspidernotes
The essentials of web crawling for Python beginners
Stars: ✭ 5,634 (+924.36%)
Mutual labels:  scrapy, selenium
InstaBot
Simple and friendly Bot for Instagram, using Selenium and Scrapy with Python.
Stars: ✭ 32 (-94.18%)
Mutual labels:  selenium, scrapy
ARGUS
ARGUS is an easy-to-use web scraping tool. The program is based on the Scrapy Python framework and can crawl a broad range of different websites, performing tasks such as scraping text or collecting hyperlinks between websites. See: https://link.springer.com/article/10.1007/s11192-020-03726-9
Stars: ✭ 68 (-87.64%)
Mutual labels:  crawling, scrapy

Scrapy with Selenium


Scrapy middleware to handle JavaScript pages using Selenium.

Installation

$ pip install scrapy-selenium

You should use Python >= 3.6. You will also need one of the Selenium-compatible browsers.
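
Before configuring Scrapy, you can quickly verify that both the package and a driver binary are reachable (a minimal check sketch, assuming Firefox's geckodriver):

from shutil import which

import scrapy_selenium  # raises ImportError if the package is missing

assert which('geckodriver') is not None, 'geckodriver not found on PATH'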

Configuration

  1. Add the browser to use, the path to the driver executable, and the arguments to pass to the executable to the Scrapy settings:
    from shutil import which
    
    SELENIUM_DRIVER_NAME = 'firefox'
    SELENIUM_DRIVER_EXECUTABLE_PATH = which('geckodriver')
    SELENIUM_DRIVER_ARGUMENTS = ['-headless']  # use '--headless' if using Chrome instead of Firefox
    

Optionally, set the path to the browser executable:

SELENIUM_BROWSER_EXECUTABLE_PATH = which('firefox')

In order to use a remote Selenium driver, specify SELENIUM_COMMAND_EXECUTOR instead of SELENIUM_DRIVER_EXECUTABLE_PATH:

SELENIUM_COMMAND_EXECUTOR = 'http://localhost:4444/wd/hub'

  2. Add the SeleniumMiddleware to the downloader middlewares (a combined settings sketch follows this list):
    DOWNLOADER_MIDDLEWARES = {
        'scrapy_selenium.SeleniumMiddleware': 800
    }
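
Putting both steps together, a minimal settings.py might look like this (a sketch, assuming Firefox with geckodriver available on the PATH):

# settings.py -- combined sketch of the configuration above
from shutil import which

SELENIUM_DRIVER_NAME = 'firefox'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('geckodriver')
SELENIUM_DRIVER_ARGUMENTS = ['-headless']

DOWNLOADER_MIDDLEWARES = {
    'scrapy_selenium.SeleniumMiddleware': 800
}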
    

Usage

Use scrapy_selenium.SeleniumRequest instead of the Scrapy built-in Request, as below:

from scrapy_selenium import SeleniumRequest

yield SeleniumRequest(url=url, callback=self.parse_result)
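
A complete minimal spider might look like this (a sketch; the spider name and start URL are illustrative):

import scrapy
from scrapy_selenium import SeleniumRequest

class ExampleSpider(scrapy.Spider):
    name = 'example'  # illustrative name

    def start_requests(self):
        # SeleniumRequest routes the fetch through the Selenium middleware
        yield SeleniumRequest(url='https://example.com', callback=self.parse_result)

    def parse_result(self, response):
        # the response body is the HTML as rendered by the browser
        yield {'title': response.selector.xpath('//title/text()').get()}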

The request will be handled by Selenium, and the request will gain an additional meta key named driver, containing the Selenium driver that processed it.

def parse_result(self, response):
    print(response.request.meta['driver'].title)

For more information about the available driver methods and attributes, refer to the Selenium Python documentation.
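
For instance, the driver exposes the usual Selenium attributes (a sketch using attributes available in both Selenium 3 and 4):

def parse_result(self, response):
    driver = response.request.meta['driver']
    print(driver.current_url)        # final URL after any JavaScript-driven navigation
    print(driver.page_source[:200])  # rendered HTML as the browser sees it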

The selector response attribute works as usual (but contains the HTML processed by the Selenium driver).

def parse_result(self, response):
    print(response.selector.xpath('//title/text()'))

Additional arguments

The scrapy_selenium.SeleniumRequest accepts 4 additional arguments:

wait_time / wait_until

When used, Selenium will perform an explicit wait before returning the response to the spider.

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    wait_time=10,
    wait_until=EC.element_to_be_clickable((By.ID, 'someid'))
)

screenshot

When used, Selenium will take a screenshot of the page, and the binary data of the captured .png will be added to the response meta:

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    screenshot=True
)

def parse_result(self, response):
    with open('image.png', 'wb') as image_file:
        image_file.write(response.meta['screenshot'])

script

When used, Selenium will execute the custom JavaScript code on the page before returning the response.

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    script='window.scrollTo(0, document.body.scrollHeight);',
)
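
Since the page source is captured after the script has run, the callback can parse content the script revealed, such as lazily loaded items (a sketch; the CSS selector is illustrative):

def parse_result(self, response):
    # the response body reflects the page after the scroll script executed
    for text in response.selector.css('.lazy-item::text').getall():
        yield {'text': text}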