
evyatarmeged / Humanoid

License: MIT
Node.js package to bypass CloudFlare's anti-bot JavaScript challenges

Programming Languages

javascript
184084 projects - #8 most used programming language

Projects that are alternatives of or similar to Humanoid

selectorlib
A library that reads a YAML file of XPath or CSS selectors and extracts data from HTML pages using them
Stars: ✭ 53 (-39.77%)
Mutual labels:  scraping, web-scraping
raspagem-de-dados-fatec
πŸ““ Mini-course on web data scraping with Python, given at the FATEC JundiaΓ­ Technology Week
Stars: ✭ 22 (-75%)
Mutual labels:  scraping, web-scraping
browser-pool
A Node.js library to easily manage and rotate a pool of web browsers, using any of the popular browser automation libraries like Puppeteer, Playwright, or SecretAgent.
Stars: ✭ 71 (-19.32%)
Mutual labels:  scraping, web-scraping
PythonScrapyBasicSetup
Basic setup with random user agents and IP addresses for Python Scrapy Framework.
Stars: ✭ 57 (-35.23%)
Mutual labels:  scraping, web-scraping
Autoscraper
A Smart, Automatic, Fast and Lightweight Web Scraper for Python
Stars: ✭ 4,077 (+4532.95%)
Mutual labels:  scraping, web-scraping
trafilatura
Python & command-line tool to gather text on the Web: web crawling/scraping, extraction of text, metadata, comments
Stars: ✭ 711 (+707.95%)
Mutual labels:  scraping, web-scraping
papercut
Papercut is a scraping/crawling library for Node.js built on top of JSDOM. It provides basic selector features together with features like Page Caching and Geosearch.
Stars: ✭ 15 (-82.95%)
Mutual labels:  scraping, web-scraping
top-github-scraper
Scrape top GitHub repositories and users based on keywords
Stars: ✭ 40 (-54.55%)
Mutual labels:  scraping, web-scraping
Linkedin
Linkedin Scraper using Selenium Web Driver, Chromium headless, Docker and Scrapy
Stars: ✭ 309 (+251.14%)
Mutual labels:  bot, scraping
Gopa
[WIP] GOPA, a spider written in Golang, for Elasticsearch. DEMO: http://index.elasticsearch.cn
Stars: ✭ 277 (+214.77%)
Mutual labels:  scraping, web-scraping
Scrape Linkedin Selenium
`scrape_linkedin` is a Python package that allows you to scrape personal LinkedIn profiles & company pages - turning the data into structured JSON.
Stars: ✭ 239 (+171.59%)
Mutual labels:  scraping, web-scraping
Arachnid
Powerful web scraping framework for Crystal
Stars: ✭ 68 (-22.73%)
Mutual labels:  bot, web-scraping
Phpscraper
PHP Scraper - a highly opinionated web-scraping interface for PHP
Stars: ✭ 148 (+68.18%)
Mutual labels:  scraping, web-scraping
ioweb
Web Scraping Framework
Stars: ✭ 31 (-64.77%)
Mutual labels:  scraping, web-scraping
Sqrape
Simple Query Scraping with CSS and Go Reflection (MOVED to Gitlab)
Stars: ✭ 144 (+63.64%)
Mutual labels:  scraping, web-scraping
Apify Js
Apify SDK β€” The scalable web scraping and crawling library for JavaScript/Node.js. Enables development of data extraction and web automation jobs (not only) with headless Chrome and Puppeteer.
Stars: ✭ 3,154 (+3484.09%)
Mutual labels:  scraping, web-scraping
Scrapple
A framework for creating semi-automatic web content extractors
Stars: ✭ 464 (+427.27%)
Mutual labels:  scraping, web-scraping
Detect Cms
PHP Library for detecting CMS
Stars: ✭ 78 (-11.36%)
Mutual labels:  scraping, web-scraping
Is Google
Verify that a request is from Google crawlers using Google's DNS verification steps
Stars: ✭ 82 (-6.82%)
Mutual labels:  bot
Omeglemiddleman
Lets you connect strangers to each other and intercept their messages, i.e. a man-in-the-middle attack
Stars: ✭ 85 (-3.41%)
Mutual labels:  bot

Humanoid

Build Status license version tested with jest

A Node.js package to bypass WAF anti-bot JS challenges.

About

Humanoid is a Node.js package that solves and bypasses CloudFlare's JavaScript anti-bot challenges (and, hopefully, other WAFs' challenges in the future).
While anti-bot pages can be solved with headless browsers, those are resource-heavy and usually overkill for scraping.
Humanoid solves these challenges using the Node.js runtime alone and returns the protected HTML page.
The session cookies can also be handed off to other bots, letting them continue scraping without facing the JS challenges at all.

Features

  β€’ Random browser User-Agent
  β€’ Auto-retry on failed challenges
  β€’ Highly configurable - set custom cookies, headers, etc.
  β€’ Clearing cookies and rotating the User-Agent are supported (see the sketch after this list)
  β€’ Supports decompression of Brotli content-encoding, which Node.js' request does not handle by default!
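
For example, a minimal sketch of starting a fresh session between requests, using the rotateUA() and clearCookies() methods documented under "Humanoid API Methods" below; the target URL is a placeholder:

const Humanoid = require("humanoid-js");

(async function() {
  let humanoid = new Humanoid();

  // First request - auto-bypass solves the JS challenge if one is served
  let first = await humanoid.get("https://www.cloudflare-protected.com");
  console.log(first.statusCode); // 200

  // Start over with an empty cookie jar and a different User-Agent
  humanoid.clearCookies();
  humanoid.rotateUA();

  // The next request is challenged (and solved) again under the new identity
  let second = await humanoid.get("https://www.cloudflare-protected.com");
  console.log(second.statusCode); // 200
}())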

Installation

via npm:

npm install --save humanoid-js

Usage

Basic usage with promises:

const Humanoid = require("humanoid-js");

let humanoid = new Humanoid();
humanoid.get("https://www.cloudflare-protected.com")
    .then(res => {
    	console.log(res.body) // <!DOCTYPE html><html lang="en">...
    })
    .catch(err => {
    	console.error(err)
    })
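
get() also accepts optional query string and header objects (see "Humanoid API Methods" below). A hedged sketch with placeholder values - the exact serialization of the query object is an assumption:

let humanoid = new Humanoid();

humanoid.get(
    "https://www.cloudflare-protected.com/search",
    { q: "humanoid", page: 2 },                           // assumed to end up as ?q=humanoid&page=2
    { Referer: "https://www.cloudflare-protected.com/" }  // extra request headers
  )
  .then(res => {
    console.log(res.statusCode) // 200
    console.log(res.body)       // <!DOCTYPE html><html lang="en">...
  })
  .catch(err => {
    console.error(err)
  })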

Humanoid uses auto-bypass by default. You can override it on instance creation:

let humanoid = new Humanoid(false) // autoBypass = false

humanoid.get("https://canyoupwn.me")
  .then(res => {
  	console.log(res.statusCode) // 503
  	console.log(res.isSessionChallenged) // true
    humanoid.bypassJSChallenge(res)
      .then(challengeResponse => {
      	// Note that challengeResponse.isChallengeSolved won't be set to true when doing manual bypassing.
      	console.log(challengeResponse.body) // <!DOCTYPE html><html lang="en">...
      })
    }
  )
	.catch(err => {
		console.error(err)
	})

async/await is also supported, and is the preferred way to go:

(async function() {
  let humanoid = new Humanoid();
  let response = await humanoid.sendRequest("https://www.cloudflare-protected.com")
  console.log(response.body) // <!DOCTYPE html><html lang="en">...
}())
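
With async/await, errors can be handled with an ordinary try/catch; a minimal sketch using the documented get() method:

const Humanoid = require("humanoid-js");

(async function() {
  let humanoid = new Humanoid();
  try {
    let response = await humanoid.get("https://www.cloudflare-protected.com");
    console.log(response.statusCode)        // 200 once the challenge is solved
    console.log(response.isChallengeSolved) // expected to be true when auto-bypass solved a challenge
  } catch (err) {
    // Network failures and unsolved challenges end up here
    console.error(err)
  }
}())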

Humanoid API Methods

  rotateUA() // Replace the currently set user agent with a different one
  
  clearCookies() // "Set a new, empty cookie jar for the humanoid instance"
  
  get(url, queryString=undefined, headers=undefined) // Send a GET request to `url`.
  // if passed, queryString and headers should be objects 
  
  post(url, postBody=undefined, headers=undefined, dataType=undefined) // Send a POST request to `url`.
  // `dataType` should be either "form" or "json" - based on the content type of the POST request.
  
  sendRequest(url, method=undefined, data=undefined, headers=undefined, dataType=undefined) 
  // Send a request of method `method` to `url`
  
  bypassJSChallenge(response) // Bypass the anti-bot JS challenge found in response.body
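
For instance, a hedged sketch of a form-encoded POST and the equivalent sendRequest() call, based only on the signatures above; the URL, field names, and the "POST" method string are placeholders/assumptions:

const Humanoid = require("humanoid-js");

(async function() {
  let humanoid = new Humanoid();

  // dataType "form" sends the body form-encoded; "json" would send a JSON payload instead
  let formRes = await humanoid.post(
    "https://www.cloudflare-protected.com/login",
    { username: "user", password: "pass" },
    { Referer: "https://www.cloudflare-protected.com/" },
    "form"
  );
  console.log(formRes.statusCode);

  // The same request expressed through the generic sendRequest()
  let genericRes = await humanoid.sendRequest(
    "https://www.cloudflare-protected.com/login",
    "POST",
    { username: "user", password: "pass" },
    undefined,
    "form"
  );
  console.log(genericRes.statusCode);
}())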

TODOs

  • [ ] Add command line support
    β€’ Support a flag to return the cookie jar after a challenge is solved - for better integration with other tools and scrapers
    β€’ Have an option to simply bypass and return the protected HTML
  β€’ [ ] Solve other WAFs' similar anti-bot challenges
  • [ ] Add tests for request sending and challenge solving
  • [ ] Add Docker support 🐳

Issues and Contributions

Anti-bot challenges are likely to change over time. When that happens, please open an issue describing the problem and, if possible, include the target page. I'll do my best to keep the code up to date with new challenges.
Any and all contributions are welcome - and are highly appreciated.
