sangaline / scrapy-wayback-machine

License: ISC
A Scrapy middleware for scraping time series data from Archive.org's Wayback Machine.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to scrapy-wayback-machine

scraping-ebay
Scraping Ebay's products using Scrapy Web Crawling Framework
Stars: ✭ 79 (-14.13%)
Mutual labels:  web-scraping, scrapy
Faster Than Requests
Faster requests on Python 3
Stars: ✭ 639 (+594.57%)
Mutual labels:  web-scraping, scrapy
restaurant-finder-featureReviews
Build a Flask web application to help users retrieve key restaurant information and feature-based reviews (generated by applying market-basket model – Apriori algorithm and NLP on user reviews).
Stars: ✭ 21 (-77.17%)
Mutual labels:  web-scraping, scrapy
scrapy plus
A toolkit of essential, commonly used utilities for Scrapy crawling
Stars: ✭ 18 (-80.43%)
Mutual labels:  scrapy, scrapy-extension
Netflix Clone
Netflix-like full-stack application with an SPA client and a backend implemented in a service-oriented architecture
Stars: ✭ 156 (+69.57%)
Mutual labels:  web-scraping, scrapy
IMDB-Scraper
Scrapy project for scraping data from IMDB with Movie Dataset including 58,623 movies' data.
Stars: ✭ 37 (-59.78%)
Mutual labels:  web-scraping, scrapy
Scrapy Fake Useragent
Random User-Agent middleware based on fake-useragent
Stars: ✭ 520 (+465.22%)
Mutual labels:  web-scraping, scrapy
OLX Scraper
📻 An OLX scraper using Scrapy + MongoDB. It scrapes recent ads posted for the requested product and dumps them to a NoSQL MongoDB database.
Stars: ✭ 15 (-83.7%)
Mutual labels:  web-scraping, scrapy
Juno crawler
Scrapy crawler to collect data on the back catalog of songs listed for sale.
Stars: ✭ 150 (+63.04%)
Mutual labels:  web-scraping, scrapy
Scrapyd Cluster On Heroku
Set up a free and scalable Scrapyd cluster for distributed web crawling with just a few clicks. DEMO 👉
Stars: ✭ 106 (+15.22%)
Mutual labels:  web-scraping, scrapy
scrapy-fieldstats
A Scrapy extension to log items coverage when the spider shuts down
Stars: ✭ 17 (-81.52%)
Mutual labels:  scrapy, scrapy-extension
City Scrapers
Scrape, standardize and share public meetings from local government websites
Stars: ✭ 220 (+139.13%)
Mutual labels:  web-scraping, scrapy
Scrapple
A framework for creating semi-automatic web content extractors
Stars: ✭ 464 (+404.35%)
Mutual labels:  web-scraping, scrapy
Scrapy Craigslist
Web Scraping Craigslist's Engineering Jobs in NY with Scrapy
Stars: ✭ 54 (-41.3%)
Mutual labels:  web-scraping, scrapy
Scrapy Training
Scrapy Training companion code
Stars: ✭ 157 (+70.65%)
Mutual labels:  web-scraping, scrapy
wayback
⏪ Tools to Work with the Various Internet Archive Wayback Machine APIs
Stars: ✭ 52 (-43.48%)
Mutual labels:  web-scraping, wayback-machine
crawlzone
Crawlzone is a fast asynchronous internet crawling framework for PHP.
Stars: ✭ 70 (-23.91%)
Mutual labels:  web-scraping
scrapy-LBC
A LeBonCoin spider built with Scrapy and ElasticSearch
Stars: ✭ 14 (-84.78%)
Mutual labels:  scrapy
scrapy helper
Dynamically configurable crawler
Stars: ✭ 84 (-8.7%)
Mutual labels:  scrapy
2017-summer-workshop
Exercises, data, and more for our 2017 summer workshop (funded by the Estes Fund and in partnership with Project Jupyter and Berkeley's D-Lab)
Stars: ✭ 33 (-64.13%)
Mutual labels:  web-scraping

The Scrapy Wayback Machine Logo

Scrapy Wayback Machine Middleware

This project provides a Scrapy middleware for scraping archived snapshots of webpages as they appear on archive.org's Wayback Machine. It can be useful if you're trying to scrape a site whose anti-scraping measures make direct scraping impossible or prohibitively slow. It's also useful if you want to scrape a website as it appeared at some point in the past, or to scrape information that changes over time.

If you're just interested in mirroring page content, or would like to parse the HTML content in a language other than Python, then you should check out the Wayback Machine Scraper. It's a command-line utility that uses the middleware provided here to crawl through historical snapshots of a website and save them to disk. It's highly configurable in terms of what it scrapes, but it only saves the unparsed content of the pages on the site. This may or may not suit your needs.

If you're using Scrapy already or interested in parsing the data that is crawled then this WaybackMachineMiddleware is probably what you want. This middleware handles all of the tricky parts and passes normal response objects to your Scrapy spiders with archive timestamp information attached. The middleware is very unobtrusive and should work seamlessly with existing Scrapy middlewares, extensions, and spiders.

Installation

The package can be installed using pip.

pip install scrapy-wayback-machine

Usage

To enable the middleware you simply have to add

DOWNLOADER_MIDDLEWARES = {
    'scrapy_wayback_machine.WaybackMachineMiddleware': 5,
}

WAYBACK_MACHINE_TIME_RANGE = (start_time, end_time)

to your Scrapy settings. The start and end times can be specified as datetime.datetime objects, Unix timestamps, YYYYmmdd timestamps, or YYYYmmddHHMMSS timestamps. The type is inferred automatically from the content, and the range limits which snapshots will be crawled. You can also pass a single time if you would like to scrape pages as they appeared at that time.
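For example, a settings.py that crawls all snapshots captured during 2016 might look like the following sketch; the commented alternatives only illustrate the accepted formats listed above, and their exact literal forms are assumptions rather than a definitive reference.

from datetime import datetime

DOWNLOADER_MIDDLEWARES = {
    'scrapy_wayback_machine.WaybackMachineMiddleware': 5,
}

# Crawl snapshots captured during 2016.
WAYBACK_MACHINE_TIME_RANGE = (datetime(2016, 1, 1), datetime(2017, 1, 1))

# Illustrative alternatives using the other accepted formats:
# WAYBACK_MACHINE_TIME_RANGE = (1451606400, 1483228800)              # Unix timestamps
# WAYBACK_MACHINE_TIME_RANGE = (20160101, 20170101)                  # YYYYmmdd
# WAYBACK_MACHINE_TIME_RANGE = (20160101000000, 20170101000000)      # YYYYmmddHHMMSS

# A single time scrapes pages as they appeared at that moment:
# WAYBACK_MACHINE_TIME_RANGE = datetime(2016, 6, 15)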

After configuration, responses will be passed to your spiders as they normally would. Both response.url and all links within response.body point to the unarchived content so your parsing code should work the same regardless of whether or not the middleware is enabled. If you need to access either the time of the snapshot or the archive.org URL for a response then this information is easily available as metadata attached to the response. Namely, response.meta['wayback_machine_time'] contains a datetime.datetime corresponding to the time of the crawl and response.meta['wayback_machine_url'] contains the actual URL that was requested. Unless you're scraping a single point in time, you will almost certainly want to include the timestamp in the items that your spiders produce to differentiate items scraped from the same URL.
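As a minimal sketch of that pattern (the spider name, start URL, and CSS selector below are hypothetical), a spider might attach the snapshot timestamp to each item like this:

import scrapy

class HeadlineSpider(scrapy.Spider):
    name = 'headlines'                        # hypothetical spider
    start_urls = ['http://example.com/news']  # hypothetical page

    def parse(self, response):
        # Metadata attached to each response by the middleware:
        snapshot_time = response.meta['wayback_machine_time']  # datetime.datetime of the snapshot
        archive_url = response.meta['wayback_machine_url']     # archive.org URL actually requested

        for headline in response.css('h2.headline::text').getall():
            yield {
                'headline': headline,
                'url': response.url,                      # unarchived URL
                'timestamp': snapshot_time.isoformat(),   # differentiates snapshots of the same URL
            }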

Examples

The Wayback Machine Scraper command-line utility is a good example of how to use the middleware. The necessary settings are defined in __main__.py and the handling of responses is done in mirror_spider.py. The MirrorSpider class simply uses the response.meta['wayback_machine_time'] information attached to each response to construct the snapshot filenames and is otherwise a fairly generic spider.
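As a rough illustration of that idea (not the actual MirrorSpider code), a mirroring spider could derive a per-snapshot path from the attached timestamp along these lines:

from urllib.parse import urlparse

def snapshot_path(response):
    # Hypothetical helper: key each saved snapshot by its capture time
    # so that snapshots of the same URL don't overwrite one another.
    time = response.meta['wayback_machine_time']
    parsed = urlparse(response.url)
    return '{}{}/{}.snapshot'.format(parsed.netloc, parsed.path.rstrip('/'),
                                     time.strftime('%Y%m%d%H%M%S'))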

There's also an article Internet Archeology: Scraping time series data from Archive.org that discusses the development of the middleware and includes an example of scraping time series data from Reddit.
