Memorious - Distributed crawling framework for documents and structured data.
Cdp4j - Chrome DevTools Protocol for Java.
Colly - Elegant scraper and crawler framework for Golang.
Antch - A fast, powerful, and extensible web crawling & scraping framework for Go.
Nutch - Apache Nutch, an extensible and scalable web crawler.
Crawler - Go process used to crawl websites.
Massivedl - Download a large list of files concurrently.
Instagram Bot - An Instagram bot developed using the Selenium framework.
Newspaper - News, full-text, and article metadata extraction in Python 3.
Bhban rpa - Example code for the book "Work Automation: Finish Six Months of Work in a Single Day" (생능출판사, 2020). The examples are aimed at readers who have never learned Python and cover a wide range of office-automation topics, from Excel to design, macros, and crawling.
Squidwarc - A high-fidelity, user-scriptable archival crawler that uses Chrome or Chromium, with or without a head.
Scrapy - A fast, high-level web crawling & scraping framework for Python.
Dotnetcrawler - A straightforward, lightweight web crawling/scraping library for .NET Core with Entity Framework Core output. Designed in the spirit of established crawler libraries such as WebMagic and Scrapy, while remaining extensible for custom requirements. Medium article: https://medium.com/@mehmetozkaya/creating-custom-web-crawler-with-dotnet-core-using-entity-framework-core-ec8d23f0ca7c
Grawler - A PHP tool with a web interface that automates Google dorking, scrapes the results, and stores them in a file.
Arachnid - Powerful web scraping framework for Crystal.
Pdf downloader - A Scrapy spider for downloading PDF files from a webpage.
Lulu - [Unmaintained] A simple and clean video/music/image downloader. 👾
Ferret - Declarative web scraping.
Dataflowkit - Extract structured data from websites.
Crawly - A high-level web crawling & scraping framework for Elixir.
Webster - A reliable, high-level web crawling & scraping framework for Node.js.
Spidermon - Scrapy extension for monitoring spider execution.
Gopa - [WIP] A spider written in Golang, for Elasticsearch. Demo: http://index.elasticsearch.cn
Apify Js - Apify SDK, the scalable web scraping and crawling library for JavaScript/Node.js. Enables development of data extraction and web automation jobs (not only) with headless Chrome and Puppeteer.
Spidy - A simple, easy-to-use command-line web crawler.
Skycaiji - A free data collection and publishing crawler, built with PHP + MySQL and deployable on cloud servers. It can collect almost any type of web page, integrates seamlessly with common CMS platforms, publishes data in real time without requiring login, and runs fully automatically with no manual intervention. A fully cross-platform cloud crawler system for large-scale web data collection.
ARGUS - An easy-to-use web scraping tool based on the Scrapy framework. It can crawl a broad range of websites and perform tasks such as scraping text or collecting hyperlinks between sites. See: https://link.springer.com/article/10.1007/s11192-020-03726-9
bots-zoo - No description or website provided.
flink-crawler - Continuous, scalable web crawler built on top of Flink and crawler-commons.
img-cli - An interactive command-line interface, built in Node.js, for downloading one or more images to disk from a URL.
talospider - A simple, lightweight scraping micro-framework.
pomp - Screen scraping and web crawling framework.
Infect - Create your own virus in Termux!
Mimo-Crawler - A web crawler, written in Node.js, that uses Firefox and JavaScript injection to interact with webpages and crawl their content.
crawlkit - A crawler based on Phantom. Allows discovery of dynamic content and supports custom scrapers.
scrapy-distributed - A series of distributed components for Scrapy, including RabbitMQ-based, Kafka-based, and RedisBloom-based components.
custom-crawler - 🌌 High-productivity semi-automatic crawler generator 🛠️🧰
go-scrapy - Web crawling and scraping framework for Golang.
wget-lua - Wget-AT, a modern Wget with Lua hooks, Zstandard (+dictionary) WARC compression, and URL-agnostic deduplication.
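Nearly every crawler above is built around the same core step: fetch a page, extract its links, and feed newly discovered URLs back into a queue. A minimal, stdlib-only Python sketch of the link-extraction step (a hypothetical illustration, not code from any listed project):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collect absolute URLs from <a href="..."> attributes."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's base URL.
                    self.links.append(urljoin(self.base_url, value))


def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links


# Static example; a real crawler would fetch `page` over HTTP and
# enqueue each extracted URL for its next fetch cycle.
page = '<a href="/docs">Docs</a> <a href="https://example.org/x">X</a>'
print(extract_links(page, "https://example.com"))
# → ['https://example.com/docs', 'https://example.org/x']
```

The frameworks listed here layer scheduling, politeness (robots.txt, rate limits), deduplication, and storage on top of this loop; headless-browser tools such as Squidwarc or Mimo-Crawler replace the HTML parser with a real browser so that script-generated links are also discovered.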