
pykong / Pypergrabber

Licence: MIT
Fetches PubMed article IDs (PMIDs) from email inbox, then crawls PubMed, Google Scholar and Sci-Hub for respective PDF files.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Pypergrabber

Bookcorpus
Crawl BookCorpus
Stars: ✭ 443 (+3064.29%)
Mutual labels:  crawler, scraper
Awesome Crawler
A collection of awesome web crawlers and spiders in different languages
Stars: ✭ 4,793 (+34135.71%)
Mutual labels:  crawler, scraper
Scrapedin
LinkedIn Scraper (currently working 2020)
Stars: ✭ 453 (+3135.71%)
Mutual labels:  crawler, scraper
Freshonions Torscraper
Fresh Onions is an open source TOR spider / hidden service onion crawler hosted at zlal32teyptf4tvi.onion
Stars: ✭ 348 (+2385.71%)
Mutual labels:  crawler, scraper
Spidr
A versatile Ruby web spidering library that can spider a site, multiple domains, certain links or infinitely. Spidr is designed to be fast and easy to use.
Stars: ✭ 656 (+4585.71%)
Mutual labels:  crawler, scraper
Gosint
OSINT Swiss Army Knife
Stars: ✭ 401 (+2764.29%)
Mutual labels:  crawler, scraper
Ferret
Declarative web scraping
Stars: ✭ 4,837 (+34450%)
Mutual labels:  crawler, scraper
Python Automation Scripts
Simple yet powerful automation stuffs.
Stars: ✭ 292 (+1985.71%)
Mutual labels:  crawler, pdf
Scrapyrt
HTTP API for Scrapy spiders
Stars: ✭ 637 (+4450%)
Mutual labels:  crawler, scraper
Headless Chrome Crawler
Distributed crawler powered by Headless Chrome
Stars: ✭ 5,129 (+36535.71%)
Mutual labels:  crawler, scraper
Xcrawler
A fast, concise, and powerful PHP crawler framework
Stars: ✭ 344 (+2357.14%)
Mutual labels:  crawler, scraper
Lulu
[Unmaintained] A simple and clean video/music/image downloader 👾
Stars: ✭ 789 (+5535.71%)
Mutual labels:  crawler, scraper
Autoscraper
A Smart, Automatic, Fast and Lightweight Web Scraper for Python
Stars: ✭ 4,077 (+29021.43%)
Mutual labels:  crawler, scraper
Crawly
Crawly, a high-level web crawling & scraping framework for Elixir.
Stars: ✭ 440 (+3042.86%)
Mutual labels:  crawler, scraper
Hquery.php
An extremely fast web scraper that parses megabytes of invalid HTML in a blink of an eye. PHP5.3+, no dependencies.
Stars: ✭ 295 (+2007.14%)
Mutual labels:  crawler, scraper
Nintendo Switch Eshop
Crawler for Nintendo Switch eShop
Stars: ✭ 463 (+3207.14%)
Mutual labels:  crawler, scraper
Weibo terminator workflow
Updated version of weibo_terminator; this workflow version aims to get the job done!
Stars: ✭ 259 (+1750%)
Mutual labels:  crawler, scraper
Rcrawler
An R web crawler and scraper
Stars: ✭ 274 (+1857.14%)
Mutual labels:  crawler, scraper
Fbcrawl
A Facebook crawler
Stars: ✭ 536 (+3728.57%)
Mutual labels:  crawler, scraper
Crawler
A high performance web crawler in Elixir.
Stars: ✭ 781 (+5478.57%)
Mutual labels:  crawler, scraper

PyperGrabber

Fetches PubMed article IDs (PMIDs) from email inbox, then crawls PubMed, Google Scholar and Sci-Hub for respective PDF files.

PubMed can send you regular updates on new articles matching your specified search criteria. PyperGrabber automatically downloads those papers, saving you the time of tracking down and downloading them manually. When no PDF is found, PyperGrabber saves the PubMed abstract of the respective article as a PDF instead. All files are named after their PMID for convenience.
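
A minimal sketch of the PMID-harvesting step, assuming the PubMed alert emails come from ncbi.nlm.nih.gov and contain lines of the form "PMID: 12345678"; the sender filter, regular expression and credentials below are illustrative assumptions, not PyperGrabber's actual code:

import imaplib
import re

def pmids_from_inbox(host, user, password):
    # Connect over IMAP and scan messages from NCBI for PMIDs.
    mail = imaplib.IMAP4_SSL(host)
    mail.login(user, password)
    mail.select("INBOX")
    _, data = mail.search(None, '(FROM "ncbi.nlm.nih.gov")')
    pmids = set()
    for num in data[0].split():
        _, msg_data = mail.fetch(num, "(RFC822)")
        body = msg_data[0][1].decode("utf-8", errors="ignore")
        # Rough heuristic: grab the digits following a "PMID:" label.
        pmids.update(re.findall(r"PMID:\s*(\d+)", body))
    mail.logout()
    return sorted(pmids)

# Each PMID would then be handed to the crawler, which tries PubMed, Google Scholar
# and Sci-Hub in turn and falls back to the abstract when no PDF is found.
print(pmids_from_inbox("imap.example.com", "user@example.com", "password"))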

NOTES:

  • Messy code ahead!
  • The program may halt without an error message; the source of this bug has not yet been determined.
  • The web crawler function can also work with sources of PMIDs other than email, e.g. a command-line parameter or a file holding a list of PMIDs (see the sketch after this list).
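
For instance, a small front end like the following could feed PMIDs from the command line or a text file instead of an inbox; the option names are hypothetical, not actual PyperGrabber flags:

import argparse

def read_pmid_list(path):
    # One PMID per line; blank lines and '#' comments are skipped.
    with open(path) as fh:
        return [line.strip() for line in fh
                if line.strip() and not line.lstrip().startswith("#")]

parser = argparse.ArgumentParser(description="Feed PMIDs from the command line or a file")
parser.add_argument("--pmids", nargs="*", default=[], help="PMIDs given directly")
parser.add_argument("--pmid-file", help="text file with one PMID per line")
args = parser.parse_args()

pmids = list(args.pmids) + (read_pmid_list(args.pmid_file) if args.pmid_file else [])
print(pmids)  # hand this list to the crawler instead of the inbox result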

Required dependencies:

sudo apt-get install wkhtmltopdf
sudo pip install pypdf
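
wkhtmltopdf is what makes the abstract-to-PDF fallback possible. A rough sketch of how it could be driven from Python is shown below; the PubMed URL pattern and PMID-based file name follow the description above, the rest is an assumption:

import subprocess

def abstract_to_pdf(pmid, out_dir="."):
    # Render the PubMed abstract page to a PDF named after the PMID.
    url = "https://pubmed.ncbi.nlm.nih.gov/{}/".format(pmid)
    out_path = "{}/{}.pdf".format(out_dir, pmid)
    subprocess.run(["wkhtmltopdf", url, out_path], check=True)
    return out_path

abstract_to_pdf("12345678")  # placeholder PMID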

USAGE:

  • Step 1 - Enter your email access data in config.ini, or be prepared to be prompted for it (works with IMAP); a configuration sketch is shown after these steps.
  • Step 2 - Start with: python ./PyperGrabber.py
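
A minimal sketch of Step 1, assuming config.ini has an [email] section with host, user and password keys; the section and key names are illustrative, so check the config.ini shipped with the project:

import configparser
import getpass
import imaplib

config = configparser.ConfigParser()
config.read("config.ini")
section = config["email"] if "email" in config else {}

# Fall back to interactive prompts when a value is missing from config.ini.
host = section.get("host") or input("IMAP host: ")
user = section.get("user") or input("Email address: ")
password = section.get("password") or getpass.getpass("Password: ")

mail = imaplib.IMAP4_SSL(host)
mail.login(user, password)
print("IMAP login OK")
mail.logout()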