
alirezamika / Autoscraper

License: MIT
A Smart, Automatic, Fast and Lightweight Web Scraper for Python

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives to or similar to Autoscraper

papercut
Papercut is a scraping/crawling library for Node.js built on top of JSDOM. It provides basic selector features together with features like Page Caching and Geosearch.
Stars: ✭ 15 (-99.63%)
Mutual labels:  crawler, scraper, scraping, web-scraping
Rod
A Devtools driver for web automation and scraping
Stars: ✭ 1,392 (-65.86%)
Mutual labels:  automation, scraper, web-scraping
Apify Js
Apify SDK — The scalable web scraping and crawling library for JavaScript/Node.js. Enables development of data extraction and web automation jobs (not only) with headless Chrome and Puppeteer.
Stars: ✭ 3,154 (-22.64%)
Mutual labels:  automation, scraping, web-scraping
Rcrawler
An R web crawler and scraper
Stars: ✭ 274 (-93.28%)
Mutual labels:  crawler, scraper, webscraping
Huginn
Create agents that monitor and act on your behalf. Your agents are standing by!
Stars: ✭ 33,694 (+726.44%)
Mutual labels:  automation, scraper, webscraping
Goapy
Goal-Oriented Action Planning implementation in Python
Stars: ✭ 33 (-99.19%)
Mutual labels:  automation, artificial-intelligence, ai
Gopa
[WIP] GOPA, a spider written in Golang, for Elasticsearch. DEMO: http://index.elasticsearch.cn
Stars: ✭ 277 (-93.21%)
Mutual labels:  crawler, scraping, web-scraping
Linkedin Profile Scraper
🕵️‍♂️ LinkedIn profile scraper returning structured profile data in JSON. Works in 2020.
Stars: ✭ 171 (-95.81%)
Mutual labels:  crawler, scraper, scraping
BookingScraper
🌎 🏨 Scrape Booking.com 🏨 🌎
Stars: ✭ 68 (-98.33%)
Mutual labels:  scraper, web-scraping, webscraping
diffbot-php-client
[Deprecated - Maintenance mode - use APIs directly please!] The official Diffbot client library
Stars: ✭ 53 (-98.7%)
Mutual labels:  scraper, scraping, scrape
ioweb
Web Scraping Framework
Stars: ✭ 31 (-99.24%)
Mutual labels:  scraping, web-scraping, webscraping
Polite
Be nice on the web
Stars: ✭ 253 (-93.79%)
Mutual labels:  crawler, scraper, webscraping
Goose Parser
Universal scraping tool that allows you to extract data using multiple environments
Stars: ✭ 211 (-94.82%)
Mutual labels:  crawler, scraper, scraping
Sillynium
Automate the creation of Python Selenium Scripts by drawing coloured boxes on webpage elements
Stars: ✭ 100 (-97.55%)
Mutual labels:  automation, scraper, web-scraping
Colly
Elegant Scraper and Crawler Framework for Golang
Stars: ✭ 15,535 (+281.04%)
Mutual labels:  crawler, scraper, scraping
Anime Dl
Anime-dl is a command-line program to download anime from CrunchyRoll and Funimation.
Stars: ✭ 190 (-95.34%)
Mutual labels:  automation, scraper, scraping
Youtube Projects
This repository contains all the code I use in my YouTube tutorials.
Stars: ✭ 144 (-96.47%)
Mutual labels:  crawler, scraper, webscraping
Instagram Scraper
Scrapes media, likes, followers, tags, and all metadata. Inspired by instagram-php-scraper, bot
Stars: ✭ 2,209 (-45.82%)
Mutual labels:  crawler, scraper, scrape
scrapers
scrapers for building your own image databases
Stars: ✭ 46 (-98.87%)
Mutual labels:  scraper, scraping, scrape
ha-multiscrape
Home Assistant custom component for scraping (html, xml or json) multiple values (from a single HTTP request) with a separate sensor/attribute for each value. Support for (login) form-submit functionality.
Stars: ✭ 103 (-97.47%)
Mutual labels:  scraper, scraping, scrape

AutoScraper: A Smart, Automatic, Fast and Lightweight Web Scraper for Python


This project makes automatic web scraping easy. It takes a URL or the HTML content of a web page, plus a list of sample data we want to scrape from that page. The data can be text, a URL, or any HTML tag value on that page. AutoScraper learns the scraping rules and returns the similar elements. You can then use the learned object with new URLs to get similar content, or the exact same elements, from those pages.

Installation

AutoScraper is compatible with Python 3.

  • Install latest version from git repository using pip:
$ pip install git+https://github.com/alirezamika/autoscraper.git
  • Install from PyPI:
$ pip install autoscraper
  • Install from source:
$ python setup.py install

How to use

Getting similar results

Say we want to fetch all related post titles on a Stack Overflow page:

from autoscraper import AutoScraper

url = 'https://stackoverflow.com/questions/2081586/web-scraping-with-python'

# We can add one or multiple candidates here.
# You can also put urls here to retrieve urls.
wanted_list = ["What are metaclasses in Python?"]

scraper = AutoScraper()
result = scraper.build(url, wanted_list)
print(result)

Here's the output:

[
    'How do I merge two dictionaries in a single expression in Python (taking union of dictionaries)?', 
    'How to call an external command?', 
    'What are metaclasses in Python?', 
    'Does Python have a ternary conditional operator?', 
    'How do you remove duplicates from a list whilst preserving order?', 
    'Convert bytes to a string', 
    'How to get line count of a large file cheaply in Python?', 
    "Does Python have a string 'contains' substring method?", 
    'Why is “1000000000000000 in range(1000000000000001)” so fast in Python 3?'
]

Now you can use the scraper object to get the related topics of any Stack Overflow page:

scraper.get_result_similar('https://stackoverflow.com/questions/606191/convert-bytes-to-a-string')

Getting exact results

Say we want to scrape live stock prices from Yahoo Finance:

from autoscraper import AutoScraper

url = 'https://finance.yahoo.com/quote/AAPL/'

wanted_list = ["124.81"]

scraper = AutoScraper()

# Here we can also pass html content via the html parameter instead of the url (html=html_content)
result = scraper.build(url, wanted_list)
print(result)

Note that you should update wanted_list if you want to copy this code, as the content of the page changes dynamically.

You can also pass any custom parameter of the requests module. For example, you may want to use proxies or custom headers:

proxies = {
    "http": 'http://127.0.0.1:8001',
    "https": 'https://127.0.0.1:8001',
}

result = scraper.build(url, wanted_list, request_args=dict(proxies=proxies))
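
Custom headers go through request_args the same way; anything the requests module accepts can be passed. A minimal sketch (the User-Agent string below is just an example):

headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",
}

result = scraper.build(url, wanted_list, request_args=dict(headers=headers))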

Now we can get the price of any symbol:

scraper.get_result_exact('https://finance.yahoo.com/quote/MSFT/')

You may want to get other info as well. For example, if you also want the market cap, just append it to the wanted list. The get_result_exact method then retrieves the data in the exact same order as the wanted list, as the sketch below shows.
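
Here is a sketch of fetching the price and the market cap together. The sample values are placeholders and the "2.08T" market cap is hypothetical; both must match what the page currently shows:

from autoscraper import AutoScraper

url = 'https://finance.yahoo.com/quote/AAPL/'

# The second sample value is a hypothetical market cap; update both from the live page.
wanted_list = ["124.81", "2.08T"]

scraper = AutoScraper()
scraper.build(url, wanted_list)

# Results come back in the same order as wanted_list: [price, market cap]
scraper.get_result_exact('https://finance.yahoo.com/quote/MSFT/')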

Another example: say we want to scrape the about text, the number of stars, and the link to the issues page of GitHub repos:

from autoscraper import AutoScraper

url = 'https://github.com/alirezamika/autoscraper'

wanted_list = ['A Smart, Automatic, Fast and Lightweight Web Scraper for Python', '2.5k', 'https://github.com/alirezamika/autoscraper/issues']

scraper = AutoScraper()
scraper.build(url, wanted_list)
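
Once built, the same model should work on other repository pages as well (a minimal sketch; the URL is just an example):

# Returns [about text, stars count, issues link] for the given repo
scraper.get_result_exact('https://github.com/psf/requests')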

Simple, right?

Saving the model

We can now save the built model for later use. To save:

# Give it a file path
scraper.save('yahoo-finance')

And to load:

scraper.load('yahoo-finance')
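
In a new session, load the saved model into a fresh AutoScraper instance before querying. A minimal sketch:

from autoscraper import AutoScraper

scraper = AutoScraper()
scraper.load('yahoo-finance')
result = scraper.get_result_exact('https://finance.yahoo.com/quote/MSFT/')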

Tutorials

Issues

Feel free to open an issue if you have any problems using the module.

Support the project

Buy Me A Coffee

Happy Coding ♥️

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].