
redco / Goose Parser

License: MIT
A universal scraping tool that lets you extract data using multiple environments

Programming Languages

JavaScript
184,084 projects - #8 most used programming language

Projects that are alternatives of or similar to Goose Parser

papercut
Papercut is a scraping/crawling library for Node.js built on top of JSDOM. It provides basic selector features together with features like Page Caching and Geosearch.
Stars: ✭ 15 (-92.89%)
Mutual labels:  crawler, scraper, scraping
Crawly
Crawly, a high-level web crawling & scraping framework for Elixir.
Stars: ✭ 440 (+108.53%)
Mutual labels:  crawler, scraper, scraping
bots-zoo
No description or website provided.
Stars: ✭ 59 (-72.04%)
Mutual labels:  crawler, scraper, scraping
Scrapysharp
reborn of https://bitbucket.org/rflechner/scrapysharp
Stars: ✭ 226 (+7.11%)
Mutual labels:  scraper, parsing, scraping
Lulu
[Unmaintained] A simple and clean video/music/image downloader 👾
Stars: ✭ 789 (+273.93%)
Mutual labels:  crawler, scraper, scraping
Colly
Elegant Scraper and Crawler Framework for Golang
Stars: ✭ 15,535 (+7262.56%)
Mutual labels:  crawler, scraper, scraping
Autoscraper
A Smart, Automatic, Fast and Lightweight Web Scraper for Python
Stars: ✭ 4,077 (+1832.23%)
Mutual labels:  crawler, scraper, scraping
angel.co-companies-list-scraping
No description or website provided.
Stars: ✭ 54 (-74.41%)
Mutual labels:  scraper, parsing, scraping
Headless Chrome Crawler
Distributed crawler powered by Headless Chrome
Stars: ✭ 5,129 (+2330.81%)
Mutual labels:  crawler, scraper, scraping
Nickjs
Web scraping library made by the Phantombuster team. Modern, simple & works on all websites. (Deprecated)
Stars: ✭ 494 (+134.12%)
Mutual labels:  scraping, browser, phantomjs
Hquery.php
An extremely fast web scraper that parses megabytes of invalid HTML in a blink of an eye. PHP5.3+, no dependencies.
Stars: ✭ 295 (+39.81%)
Mutual labels:  parser, crawler, scraper
Geziyor
Geziyor, a fast web crawling & scraping framework for Go. Supports JS rendering.
Stars: ✭ 1,246 (+490.52%)
Mutual labels:  crawler, scraper, scraping
Ferret
Declarative web scraping
Stars: ✭ 4,837 (+2192.42%)
Mutual labels:  crawler, scraper, scraping
Parser Javascript
Browser sniffing gone too far — A useragent parser library for JavaScript
Stars: ✭ 66 (-68.72%)
Mutual labels:  parser, parsing, browser
Linkedin Profile Scraper
🕵️‍♂️ LinkedIn profile scraper returning structured profile data in JSON. Works in 2020.
Stars: ✭ 171 (-18.96%)
Mutual labels:  crawler, scraper, scraping
Anime Dl
Anime-dl is a command-line program to download anime from CrunchyRoll and Funimation.
Stars: ✭ 190 (-9.95%)
Mutual labels:  scraper, scraping
Goribot
[Crawler/Scraper for Golang]🕷A lightweight, distributed-friendly Golang crawler framework.
Stars: ✭ 190 (-9.95%)
Mutual labels:  crawler, scraper
Thepiratebay
💀 The Pirate Bay node.js client
Stars: ✭ 191 (-9.48%)
Mutual labels:  parser, scraper
Parse Xml
A fast, safe, compliant XML parser for Node.js and browsers.
Stars: ✭ 184 (-12.8%)
Mutual labels:  parser, parsing
Jvppeteer
Headless Chrome for Java (a Java crawler)
Stars: ✭ 193 (-8.53%)
Mutual labels:  crawler, scraper

goose-parser

This tool takes the routine out of crawling: it lets you parse a web page in a moment. All you need to do is specify parsing rules based on CSS selectors; it's as simple as that. The library can parse data structured as grids, collections, and simple values. Pagination is supported via the goose-paginator extension. It also offers actions to interact with the page and transforms to convert the parsed data into a friendly format.
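
As a rough illustration, the shape of the rules follows the sketch below. It only reuses the fields shown in the Usage section; the full schema, including grids and transforms, is described in the documentation.

// A simple value: just a scope (and optionally an attr to read instead of the text).
// Assumption: a standalone rule accepts the same fields as the collection items below.
const simpleRule = {
  scope: 'h1.title',
  attr: 'data-id'
};

// A collection: an array of named rules evaluated inside every element matching the scope.
const collectionRule = {
  scope: '.result',
  collection: [[
    { name: 'title', scope: 'h3' },
    { name: 'link', scope: 'a', attr: 'href' }
  ]]
};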

Goose Starter Kit

The easiest way to get started with Goose is the goose-starter-kit.

Key features

  • Declarative definition of parsing rules, actions, and transformations.
  • Multiple environments to run the parser in the browser, PhantomJS, Chrome, JSDom, and more.
  • Clean code using the latest ES6 features.
  • Clear and consistent API with promises all the way.
  • Improved Sizzle selector format.
  • AJAX and multi-page parsing modes.
  • Docker support.
  • Easily extendable.

Installation

yarn add goose-parser goose-chrome-environment
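
If you use npm instead of yarn, the equivalent command is:

npm install goose-parser goose-chrome-environment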

Usage

const Parser = require('goose-parser');
const ChromeEnvironment = require('goose-chrome-environment');

// The environment opens the target URL and lets the parser evaluate JS on that page
const env = new ChromeEnvironment({
  url: 'https://www.google.com/search?q=goose-parser',
});

const parser = new Parser({ environment: env });

(async function () {
  try {
    const results = await parser.parse({
      // Wait until the search results appear before applying the rules
      actions: [
        {
          type: 'wait',
          timeout: 10 * 1000,
          scope: '.srg>.g',
          parentScope: 'body'
        }
      ],
      // Collect a {url, text} pair from each search result
      rules: {
        scope: '.srg>.g',
        collection: [[
          {
            name: 'url',
            scope: 'h3.r>a',
            attr: 'href',
          },
          {
            name: 'text',
            scope: 'h3.r>a',
          }
        ]]
      }
    });
    console.log(results);
  } catch (e) {
    console.log('Error occurred:');
    console.log(e.stack);
  }
})();

Environment

An environment is the context in which the Parser is executed. Its main purpose is to provide a way to evaluate JS on the page. Goose supports the following environments:

  • PhantomJS (executes in NodeJS)
  • Chrome (executes in NodeJS)
  • JSDom (executes in NodeJS)
  • Firefox (coming soon)
  • Browser (executes in Browser)
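
Switching environments only means swapping the environment package passed to the Parser. A minimal sketch, assuming the JSDom environment is published as goose-jsdom-environment (by analogy with goose-chrome-environment used above):

const Parser = require('goose-parser');
// Assumption: package name chosen by analogy with goose-chrome-environment
const JsDomEnvironment = require('goose-jsdom-environment');

const parser = new Parser({
  environment: new JsDomEnvironment({
    url: 'https://www.google.com/search?q=goose-parser',
  }),
});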

Docker usage

goose-parser can also be run as a Docker service.

Params:

  • url - the first parameter is the URL to parse
  • Parsing rules [optional] - the rules to apply; not required if --rules-file is specified

Options:

  • -e "DEBUG=*" - to enable debug mode and see all what happens inside the goose-parser. Reed more about debug here.
  • --rules-file - to specify rules file. Be aware that you need to mount a folder with rules as a volume to the docker container.

There are two ways to run it:

Parse with rules passed on the command line

docker run -it --rm -e "DEBUG=*,-puppeteer:*" redcode/goose-parser:chrome-1.1.3-parser-0.6.0\
    https://www.google.com/search?q=goose-parser\
    '{
      "actions": [
        {
          "type": "wait",
          "scope": ".g"
        }
      ],
      "rules": {
        "scope": ".g",
        "collection": [
          [
            {
              "scope": ".r>a h3",
              "name": "name"
            },
            {
              "scope": ".r>a:eq(0)",
              "name": "link",
              "attr": "href"
            }
          ]
        ]
      }
    }'

Parse with rules from a mounted rules file

Create a file rules/rules.json containing the parser rules and run the following command:

docker run -it --rm --volume="`pwd`/rules:/app/rules:ro" -e "DEBUG=*,-puppeteer:*" redcode/goose-parser:chrome-1.1.3-parser-0.6.0 --rules-file="/app/rules/rules.json" 'https://www.google.com/search?q=goose-parser'
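
For this example, rules/rules.json would simply contain the same rules object that was passed inline above:

{
  "actions": [
    {
      "type": "wait",
      "scope": ".g"
    }
  ],
  "rules": {
    "scope": ".g",
    "collection": [
      [
        {
          "scope": ".r>a h3",
          "name": "name"
        },
        {
          "scope": ".r>a:eq(0)",
          "name": "link",
          "attr": "href"
        }
      ]
    ]
  }
}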

Documentation

Detailed documentation on actions and transformations, generated from the code, is available.

API reference - coming soon
