ecoron / Serpscrap

Licence: MIT
SEO Python scraper to extract data from major search engine result pages. Extract data like URL, title, snippet, rich snippet and the type from search results for given keywords. Detect ads or make automated screenshots. You can also fetch the text content of URLs found in search results or provided by you. It's useful for SEO and business-related research tasks.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Serpscrap

Osint collection
Maintained collection of OSINT related resources. (All Free & Actionable)
Stars: ✭ 809 (+428.76%)
Mutual labels:  search, research
Django Dynamic Scraper
Creating Scrapy scrapers via the Django admin interface
Stars: ✭ 1,024 (+569.28%)
Mutual labels:  scraper, scraping
Duckduckgo
An unofficial DuckDuckGo search API.
Stars: ✭ 6 (-96.08%)
Mutual labels:  search, scraper
Phpscraper
PHP Scraper - a highly opinionated web interface for PHP
Stars: ✭ 148 (-3.27%)
Mutual labels:  scraper, scraping
Jobfunnel
Scrape job websites into a single spreadsheet with no duplicates.
Stars: ✭ 1,528 (+898.69%)
Mutual labels:  search, scraper
Imagescraper
✂️ High performance, multi-threaded image scraper
Stars: ✭ 630 (+311.76%)
Mutual labels:  scraper, scraping
Serp
Google Search SERP Scraper
Stars: ✭ 40 (-73.86%)
Mutual labels:  scraper, seo
Crawly
Crawly, a high-level web crawling & scraping framework for Elixir.
Stars: ✭ 440 (+187.58%)
Mutual labels:  scraper, scraping
Geziyor
Geziyor, a fast web crawling & scraping framework for Go. Supports JS rendering.
Stars: ✭ 1,246 (+714.38%)
Mutual labels:  scraper, scraping
Email Extractor
The main functionality is to extract all the emails from one or several URLs - La funcionalidad principal es extraer todos los correos electrónicos de una o varias Url
Stars: ✭ 81 (-47.06%)
Mutual labels:  scraper, scraping
Headless Chrome Crawler
Distributed crawler powered by Headless Chrome
Stars: ✭ 5,129 (+3252.29%)
Mutual labels:  scraper, scraping
Search Engine Optimization
🔍 A helpful checklist/collection of Search Engine Optimization (SEO) tips and techniques.
Stars: ✭ 1,798 (+1075.16%)
Mutual labels:  search, seo
Ferret
Declarative web scraping
Stars: ✭ 4,837 (+3061.44%)
Mutual labels:  scraper, scraping
Lulu
[Unmaintained] A simple and clean video/music/image downloader 👾
Stars: ✭ 789 (+415.69%)
Mutual labels:  scraper, scraping
Dataflowkit
Extract structured data from web sites. Web sites scraping.
Stars: ✭ 456 (+198.04%)
Mutual labels:  scraper, scraping
Pypatent
Search for and retrieve US Patent and Trademark Office Patent Data
Stars: ✭ 31 (-79.74%)
Mutual labels:  scraper, scraping
Autoscraper
A Smart, Automatic, Fast and Lightweight Web Scraper for Python
Stars: ✭ 4,077 (+2564.71%)
Mutual labels:  scraper, scraping
Katana
A Python Tool For google Hacking
Stars: ✭ 355 (+132.03%)
Mutual labels:  scraper, scraping
Spam Bot 3000
Social media research and promotion, semi-autonomous CLI bot
Stars: ✭ 79 (-48.37%)
Mutual labels:  scraper, research
Seleniumcrawler
An example using Selenium webdrivers for python and Scrapy framework to create a web scraper to crawl an ASP site
Stars: ✭ 117 (-23.53%)
Mutual labels:  scraper, scraping

SerpScrap
=========

.. image:: https://img.shields.io/pypi/v/SerpScrap.svg
   :target: https://pypi.python.org/pypi/SerpScrap

.. image:: https://readthedocs.org/projects/serpscrap/badge/?version=latest
   :target: http://serpscrap.readthedocs.io/en/latest/
   :alt: Documentation Status

.. image:: https://travis-ci.org/ecoron/SerpScrap.svg?branch=master
   :target: https://travis-ci.org/ecoron/SerpScrap

.. image:: https://img.shields.io/docker/pulls/ecoron/serpscrap.svg
   :target: https://hub.docker.com/r/ecoron/serpscrap

SEO Python scraper to extract data from major search engine result pages. Extract data like URL, title, snippet, rich snippet and the type from search results for given keywords. Detect ads or make automated screenshots. You can also fetch the text content of URLs found in search results or provided by you. It's useful for SEO and business-related research tasks.

Extract these result types
--------------------------

  • ads_main - advertisements within regular search results
  • image - result from image search
  • news - news teaser within regular search results
  • results - standard search result
  • shopping - shopping teaser within regular search results
  • videos - video teaser within regular search results

For each result of a result page get
------------------------------------

  • domain
  • rank
  • rich snippet
  • site links
  • snippet
  • title
  • type
  • url
  • visible url

Also get a screenshot of each result page. You can also scrape the text content of each result URL, and save the results as CSV for later analysis. If required, you can use your own proxy list.
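As an illustration, a single parsed result can be thought of as a plain dict carrying the fields listed above. The key names and values below are hypothetical stand-ins, not guaranteed to match SerpScrap's actual output:

```python
# Hypothetical sketch of one parsed search result; key names and values
# are illustrative only and may differ from SerpScrap's real output.
result = {
    'serp_rank': 1,                      # rank on the result page
    'serp_type': 'results',              # one of the result types above
    'serp_url': 'https://example.com/',
    'serp_domain': 'example.com',
    'serp_visible_link': 'example.com',  # visible url
    'serp_title': 'Example Domain',
    'serp_snippet': 'This domain is for use in examples ...',
    'serp_rating': None,                 # rich snippet, if any
    'serp_sitelinks': [],                # site links, if any
}

print(sorted(result))
```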

Resources
---------

See http://serpscrap.readthedocs.io/en/latest/ for documentation.

Source is available at https://github.com/ecoron/SerpScrap

Install
-------

The easy way to do it:

.. code-block:: bash

   pip uninstall SerpScrap -y
   pip install SerpScrap --upgrade

More details in the install_ section of the documentation.

Usage
-----

SerpScrap in your applications

.. code-block:: python

   #!/usr/bin/python3
   # -*- coding: utf-8 -*-

   import pprint
   import serpscrap

   keywords = ['example']

   config = serpscrap.Config()
   config.set('scrape_urls', False)

   scrap = serpscrap.SerpScrap()
   scrap.init(config=config.get(), keywords=keywords)
   results = scrap.run()

   for result in results:
       pprint.pprint(result)

More details in the examples_ section of the documentation.
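The results returned above can be post-processed with plain Python, for example to separate organic results from ads. The dicts and the ``serp_type``/``serp_url`` key names below are hypothetical stand-ins for entries returned by ``scrap.run()``:

```python
from collections import defaultdict

# Hypothetical stand-ins for entries returned by scrap.run();
# the key names 'serp_type' and 'serp_url' are assumptions.
results = [
    {'serp_type': 'results', 'serp_url': 'https://example.com/'},
    {'serp_type': 'ads_main', 'serp_url': 'https://ads.example.com/'},
    {'serp_type': 'results', 'serp_url': 'https://example.org/'},
]

# Group result URLs by their result type (organic vs. ads, etc.).
by_type = defaultdict(list)
for entry in results:
    by_type[entry['serp_type']].append(entry['serp_url'])

print(dict(by_type))
```

Each bucket can then be counted or exported on its own.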

To avoid encode/decode issues on Windows, run these commands before you start using SerpScrap in your CLI.

.. code-block:: bash

   chcp 65001
   set PYTHONIOENCODING=utf-8

.. image:: https://raw.githubusercontent.com/ecoron/SerpScrap/master/docs/logo.png
   :target: https://github.com/ecoron/SerpScrap

Supported OS
------------

  • SerpScrap should work on Linux, Windows and Mac OS with Python >= 3.4 installed
  • SerpScrap requires lxml
  • Doesn't work on iOS

Changes
-------

Notes about major changes between releases

0.13.0

  • Updated dependencies: chromedriver >= 76.0.3809.68 to use a current driver, sqlalchemy >= 1.3.7 to fix security issues, and other minor dependency updates
  • minor changes install_chrome.sh

0.12.0

I recommend updating to the latest version of SerpScrap, because the search engine has updated the markup of its search result pages (SERPs).

  • Update and cleanup of selectors to fetch results
  • New result type: videos

0.11.0

  • Chrome headless is now the default browser; usage of PhantomJS is deprecated
  • chromedriver is installed on the first run (tested on Linux and Windows; Mac OS should also work)
  • The behavior of scraping raw text contents from SERP URLs, and of course given URLs, has changed
  • Scraping of SERP results and contents can run at once
  • CSV output format changed; it is now tab-separated and quoted
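Since the CSV export is tab-separated and quoted, the standard library ``csv`` module can read it back. The column names and file contents below are hypothetical, assuming a small in-memory export:

```python
import csv
import io

# Hypothetical stand-in for a SerpScrap CSV export:
# tab-separated, with quoted fields.
raw = 'serp_title\tserp_url\n"Example, Inc."\t"https://example.com/"\n'

# Read it back with matching delimiter and quote character.
rows = list(csv.reader(io.StringIO(raw), delimiter='\t', quotechar='"'))
print(rows)
```

For a real export, replace ``io.StringIO(raw)`` with an open file handle.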

0.10.0

  • support for headless chrome, adjusted default time between scrapes

0.9.0

  • result types added (news, shopping, image)
  • Image search is supported

0.8.0

  • Text processing tools removed
  • Fewer requirements

References
----------

SerpScrap uses Chrome headless_ and lxml_ to scrape SERP results. For the raw text contents of fetched URLs it uses beautifulsoup4_. SerpScrap also supports PhantomJs_ (deprecated), a scriptable headless WebKit, which is installed automatically on the first run (Linux, Windows). The scrape core was based on GoogleScraper_, an outdated project, and has seen many changes and improvements.

.. target-notes::

.. _install: http://serpscrap.readthedocs.io/en/latest/install.html
.. _examples: http://serpscrap.readthedocs.io/en/latest/examples.html
.. _Chrome headless: http://chromedriver.chromium.org/
.. _lxml: https://lxml.de/
.. _beautifulsoup4: https://www.crummy.com/software/BeautifulSoup/
.. _PhantomJs: https://github.com/ariya/phantomjs
.. _GoogleScraper: https://github.com/NikolaiT/GoogleScraper
