christophebe / Serp

Google Search SERP Scraper

Programming Languages

javascript
184084 projects - #8 most used programming language

Projects that are alternatives of or similar to Serp

Awesome Seo
Google SEO research and traffic monetization
Stars: ✭ 942 (+2255%)
Mutual labels:  google, seo
site-audit-seo
Web service and CLI tool for SEO site audit: crawl site, lighthouse all pages, view public reports in browser. Also output to console, json, csv, xlsx, Google Drive.
Stars: ✭ 91 (+127.5%)
Mutual labels:  scraper, seo
Covid19 mobility
COVID-19 Mobility Data Aggregator. Scraper of Google, Apple, Waze and TomTom COVID-19 Mobility Reports🚶🚘🚉
Stars: ✭ 156 (+290%)
Mutual labels:  google, scraper
Search Engine Optimization
🔍 A helpful checklist/collection of Search Engine Optimization (SEO) tips and techniques.
Stars: ✭ 1,798 (+4395%)
Mutual labels:  google, seo
Sitemap Generator
Easily create XML sitemaps for your website.
Stars: ✭ 273 (+582.5%)
Mutual labels:  google, seo
Youtube Projects
This repository contains all the code I use in my YouTube tutorials.
Stars: ✭ 144 (+260%)
Mutual labels:  google, scraper
lopez
Crawling and scraping the Web for fun and profit
Stars: ✭ 20 (-50%)
Mutual labels:  scraper, seo
Serpscrap
SEO Python scraper to extract data from major search engine result pages. Extract data like URL, title, snippet, rich snippet and the type from search results for given keywords. Detect ads or take automated screenshots. You can also fetch the text content of URLs provided in search results or your own. It's useful for SEO and business-related research tasks.
Stars: ✭ 153 (+282.5%)
Mutual labels:  scraper, seo
Laravel Robots Middleware
Enable or disable the indexing of your app
Stars: ✭ 259 (+547.5%)
Mutual labels:  google, seo
Google-rank-tracker
SEO: Python script + shell script and cronjob to check ranks on a daily basis
Stars: ✭ 124 (+210%)
Mutual labels:  google, seo
Curatedseotools
Best SEO Tools Stash
Stars: ✭ 128 (+220%)
Mutual labels:  google, seo
Googledictionaryapi
Google does not provide a Google Dictionary API, so I created one.
Stars: ✭ 528 (+1220%)
Mutual labels:  google, scraper
Laravel Sitemap
Create and generate sitemaps with ease
Stars: ✭ 1,325 (+3212.5%)
Mutual labels:  google, seo
Google2csv
Google2Csv is a simple Google scraper that saves the results in a csv/xlsx/jsonl file
Stars: ✭ 145 (+262.5%)
Mutual labels:  google, scraper
Image search
Python Library to download images and metadata from popular search engines.
Stars: ✭ 86 (+115%)
Mutual labels:  google, scraper
Sitemap Generator Cli
Creates an XML-Sitemap by crawling a given site.
Stars: ✭ 214 (+435%)
Mutual labels:  google, seo
SearchScraperAPI
Aiohttp web server API, which scrapes Google and returns scrape results as response. Supports proxies, multiple geos and number of results.
Stars: ✭ 31 (-22.5%)
Mutual labels:  scraper, seo
Katana
A Python tool for Google hacking
Stars: ✭ 355 (+787.5%)
Mutual labels:  google, scraper
Schema Org
A fluent builder for Schema.org types and ld+json generator
Stars: ✭ 894 (+2135%)
Mutual labels:  google, seo
Mlkit
A collection of sample apps to demonstrate how to use Google's ML Kit APIs on Android and iOS
Stars: ✭ 949 (+2272.5%)
Mutual labels:  google

serp

This module allows you to get the results of a Google search based on a keyword.

It provides different options for scraping the Google results, called SERP (Search Engine Result Page):

  • delay between requests
  • retry on error
  • with or without a proxy, a list of proxies, or a scrape API.

Installation

$ npm install serp -S

Simple usage

const serp = require("serp");

var options = {
  host : "google.be",
  qs : {
    q : "test",
    filter : 0,
    pws : 0
  },
  num : 100
};

const links = await serp.search(options);

Understanding the options structure:

  • For google.com, the host param is not necessary.
  • qs can contain the usual Google search parameters: https://moz.com/ugc/the-ultimate-guide-to-the-google-search-parameters.
  • options.qs.q is the keyword.
  • num is the number of desired results (default is 10).
  • The options object can also contain all request options such as HTTP headers (see the sketch below). SERP is using the request module: https://github.com/request/request
  • The user agent is not mandatory. The default value is: 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1'
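
As an illustration, here is a minimal sketch of an options object that also carries request options. The headers and timeout values below are arbitrary assumptions, only there to show where such request options would go:

const serp = require("serp");

const options = {
  qs : {
    q : "test"
  },
  num : 20,
  // Any request option can be added alongside the serp-specific options;
  // these values are only illustrative assumptions.
  headers : {
    "Accept-Language" : "en-US,en;q=0.9"
  },
  timeout : 10000 // request timeout in ms
};

const links = await serp.search(options);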

Delay between requests

It is possible to add a delay between each request made on Google with the option delay (value in ms). The delay is also applied when the tool reads the next result page on Google.

const serp = require("serp");

var options = {

  qs : {
    q : "test"
  },
  num : 100,
  delay : 2000 // in ms
};

const links = await serp.search(options);

Retry if error

If an error occurs (timeout, network issue, invalid HTTP status, ...), it is possible to retry the same request on Google. If a proxyList is set in the options, a different proxy will be used for the retry.

const serp = require("serp");

var options = {

  qs : {
    q : "test"
  },
  num : 100,
  retry : 3,
  proxyList : proxyList
};

const links = await serp.search(options);

Get the number of results

You can get the number of indexed pages in Google by using the following code.

const serp = require("serp");

var options = {
  host : "google.fr",
  numberOfResults : true,
  qs : {
    q   : "site:yoursite.com"
  },
  proxyList : proxyList
};

const numberOfResults = await serp.search(options);

With proxy

You can add the proxy reference in the options.

const serp = require("serp");

var options = {
  qs : {
    q : "test",
  },
  proxy : "http://username:[email protected]:port"  
};


const links = await serp.search(options);

With multiple proxies

You can also use the module simple-proxies if you have several proxies (see: https://github.com/christophebe/simple-proxies). In this case, a different proxy (chosen randomly) will be used for each serp.search call.

See this unit test to get the complete code.

const  serp = require("serp");

var options = {
  qs : {
    q : "test",
  },
  proxyList : proxyList
};

const links = await serp.search(options);
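
The proxyList variable used above is expected to come from simple-proxies. The snippet below is only a hedged sketch of loading proxies from a file; the loader names (proxyfileloader, setProxyFile, loadProxyFile, ...) are assumptions, so refer to the simple-proxies documentation and the unit test mentioned above for the exact API.

const serp = require("serp");
// Assumption: simple-proxies exposes a file-based loader along these lines.
const proxyLoader = require("simple-proxies/lib/proxyfileloader");

const proxyFile = proxyLoader.config()
  .setProxyFile("./proxies.txt") // path to a file listing your proxies
  .setCheckProxies(false)
  .setRemoveInvalidProxies(false);

const proxyList = await proxyLoader.loadProxyFile(proxyFile);

const links = await serp.search({
  qs : { q : "test" },
  num : 100,
  proxyList : proxyList
});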

With a scrape API

This module can use a scrape API instead of a list of proxies.

This is an example with scraperapi.com:

const serp = require("serp");
const { expect } = require("chai"); // this example comes from a unit test

// accessKey is your scraperapi.com API key
const options = {
  num: 10,
  qs: {
    q: 'test'
  },
  scrapeApiUrl: `http://api.scraperapi.com/?api_key=${ accessKey }`
};

try {
  const links = await serp.search(options);

  // console.log(links);
  expect(links).to.have.lengthOf(10);
} catch (e) {
  console.log('Error', e);
  expect(e).to.be.null;
}

Proxies or Scrape API?

If you make many requests at the same time or over a short period of time, Google may ban your IP address. This can happen even faster if you use particular search commands such as intitle:, inurl:, site:, ...

It is therefore recommended to use proxies. The SERP module supports two solutions:

  • Datacenter proxies, for example those offered by Mexela. Shared proxies are more than enough.

  • Scrape APIs such as scraperapi.com

What to choose? Datacenter proxies or a scrape API?

It all depends on what you are looking for. Datacenter proxies provide the best performance and are generally very reliable. You can use the retry option for even more reliability. They also offer a good quality/price ratio, but they require more development effort, especially for proxy rotation. If you want to use rotation with datacenter proxies, see this unit test.

Although slower, scrape APIs offer other features such as IP geolocation across a larger number of countries and the ability to scrape dynamic pages. Using such an API can also simplify the code. Unfortunately, this solution is often more expensive than datacenter proxies. So scrape APIs become interesting if you also have other scraping needs.
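
As a recap, here is a sketch that combines the options discussed above for a more robust scrape (the delay, retry and num values are arbitrary examples):

const serp = require("serp");

const options = {
  qs : { q : "test" },
  num : 100,
  delay : 3000,          // wait 3 s between requests
  retry : 3,             // retry a failed request up to 3 times
  proxyList : proxyList  // rotate over your datacenter proxies
};

const links = await serp.search(options);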

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].