
proxycrawl / proxycrawl-python

License: Apache-2.0
ProxyCrawl Python library for scraping and crawling

Programming Languages

python

Projects that are alternatives of or similar to proxycrawl-python

Ferret
Declarative web scraping
Stars: ✭ 4,837 (+9384.31%)
Mutual labels:  scraper, scraping, crawling, scraping-websites
Crawly
Crawly, a high-level web crawling & scraping framework for Elixir.
Stars: ✭ 440 (+762.75%)
Mutual labels:  scraper, scraping, crawling
bots-zoo
No description or website provided.
Stars: ✭ 59 (+15.69%)
Mutual labels:  scraper, scraping, crawling
gochanges
**[ARCHIVED]** website changes tracker 🔍
Stars: ✭ 12 (-76.47%)
Mutual labels:  scraper, scraping, scraping-websites
wget-lua
Wget-AT is a modern Wget with Lua hooks, Zstandard (+dictionary) WARC compression and URL-agnostic deduplication.
Stars: ✭ 52 (+1.96%)
Mutual labels:  scraper, scraping, crawling
Colly
Elegant Scraper and Crawler Framework for Golang
Stars: ✭ 15,535 (+30360.78%)
Mutual labels:  scraper, scraping, crawling
Dataflowkit
Extract structured data from web sites. Web sites scraping.
Stars: ✭ 456 (+794.12%)
Mutual labels:  scraper, scraping, crawling
document-dl
Command line program to download documents from web portals.
Stars: ✭ 14 (-72.55%)
Mutual labels:  scraper, scraping, scraping-websites
Linkedin Profile Scraper
🕵️‍♂️ LinkedIn profile scraper returning structured profile data in JSON. Works in 2020.
Stars: ✭ 171 (+235.29%)
Mutual labels:  scraper, scraping, crawling
Lulu
[Unmaintained] A simple and clean video/music/image downloader 👾
Stars: ✭ 789 (+1447.06%)
Mutual labels:  scraper, scraping, crawling
Headless Chrome Crawler
Distributed crawler powered by Headless Chrome
Stars: ✭ 5,129 (+9956.86%)
Mutual labels:  scraper, scraping, crawling
scrapman
Retrieve real (with Javascript executed) HTML code from an URL, ultra fast and supports multiple parallel loading of webs
Stars: ✭ 21 (-58.82%)
Mutual labels:  scraper, scraping, scraping-websites
diffbot-php-client
[Deprecated - Maintenance mode - use APIs directly please!] The official Diffbot client library
Stars: ✭ 53 (+3.92%)
Mutual labels:  scraper, scraping, crawling
Instagram-to-discord
Monitor instagram user account and automatically post new images to discord channel via a webhook. Working 2022!
Stars: ✭ 113 (+121.57%)
Mutual labels:  scraper, scraping, scraping-websites
zcrawl
An open source web crawling platform
Stars: ✭ 21 (-58.82%)
Mutual labels:  scraping, crawling
Pahe.ph-Scraper
Pahe.ph [Pahe.in] Movies Website Scraper
Stars: ✭ 57 (+11.76%)
Mutual labels:  scraper, scraping
readability-cli
A CLI for Mozilla Readability. Get clean, uncluttered, ready-to-read HTML from any webpage!
Stars: ✭ 41 (-19.61%)
Mutual labels:  scraping, scraping-websites
crawler-chrome-extensions
Chrome extensions commonly used by crawler engineers
Stars: ✭ 53 (+3.92%)
Mutual labels:  scraper, scraping
google-scraper
This class can retrieve search results from Google.
Stars: ✭ 33 (-35.29%)
Mutual labels:  scraper, scraping
scrape-github-trending
Tutorial for web scraping / crawling with Node.js.
Stars: ✭ 42 (-17.65%)
Mutual labels:  scraping, crawling

ProxyCrawl API Python class

A lightweight, dependency-free Python class that acts as a wrapper for the ProxyCrawl API.

Installing

Choose a way of installing:

  • Download the Python class from GitHub.
  • Or use the PyPI Python package manager: pip install proxycrawl

Then import CrawlingAPI, ScraperAPI, etc. as needed.

from proxycrawl import CrawlingAPI, ScraperAPI, LeadsAPI, ScreenshotsAPI, StorageAPI

Upgrading to version 3

Version 3 deprecates the usage of ProxyCrawlAPI in favour of CrawlingAPI (the old class is still usable). Please test the upgrade before deploying to production.
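
For reference, a minimal before/after sketch of the import change (assuming the deprecated class keeps the same constructor signature):

# Version 2 style (deprecated but still usable)
from proxycrawl import ProxyCrawlAPI
api = ProxyCrawlAPI({ 'token': 'YOUR_PROXYCRAWL_TOKEN' })

# Version 3 style (preferred)
from proxycrawl import CrawlingAPI
api = CrawlingAPI({ 'token': 'YOUR_PROXYCRAWL_TOKEN' })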

Crawling API

First initialize the CrawlingAPI class.

api = CrawlingAPI({ 'token': 'YOUR_PROXYCRAWL_TOKEN' })

GET requests

Pass the URL that you want to scrape plus any of the options available in the API documentation.

api.get(url, options = {})

Example:

response = api.get('https://www.facebook.com/britneyspears')
if response['status_code'] == 200:
    print(response['body'])

You can pass any of the options from the ProxyCrawl API.

Example:

response = api.get('https://www.reddit.com/r/pics/comments/5bx4bx/thanks_obama/', {
    'user_agent': 'Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20121202 Firefox/30.0',
    'format': 'json'
})
if response['status_code'] == 200:
    print(response['body'])

POST requests

Pass the URL that you want to scrape, the data that you want to send (which can be either JSON or a string), plus any of the options available in the API documentation.

api.post(url, dictionary or string data, options = {})

Example:

response = api.post('https://producthunt.com/search', { 'text': 'example search' })
if response['status_code'] == 200:
    print(response['body'])

You can send the data as application/json instead of x-www-form-urlencoded by setting the post_content_type option to json.

import json
response = api.post('https://httpbin.org/post', json.dumps({ 'some_json': 'with some value' }), { 'post_content_type': 'json' })
if response['status_code'] == 200:
    print(response['body'])

Javascript requests

If you need to scrape a website built with JavaScript (React, Angular, Vue, etc.), just pass your JavaScript token and use the same calls. Note that only .get is available for JavaScript; .post is not.

api = CrawlingAPI({ 'token': 'YOUR_JAVASCRIPT_TOKEN' })
response = api.get('https://www.nfl.com')
if response['status_code'] == 200:
    print(response['body'])

In the same way, you can pass additional JavaScript options.

response = api.get('https://www.freelancer.com', { 'page_wait': 5000 })
if response['status_code'] == 200:
    print(response['body'])

Original status

You can always get the original status and the ProxyCrawl status from the response. Read the ProxyCrawl documentation to learn more about those statuses.

response = api.get('https://craiglist.com')
print(response['headers']['original_status'])
print(response['headers']['pc_status'])
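
As an illustrative pattern (a sketch only; it assumes a pc_status of 200 indicates a successful crawl, as described in the ProxyCrawl documentation), you can check both statuses before trusting the body:

def get_body_if_successful(api, url):
    # Sketch: only return the body when both the HTTP layer and ProxyCrawl
    # report success (pc_status 200 is assumed to mean a successful crawl).
    response = api.get(url)
    if response['status_code'] == 200 and str(response['headers']['pc_status']) == '200':
        return response['body']
    print('original_status:', response['headers']['original_status'])
    print('pc_status:', response['headers']['pc_status'])
    return None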

If you have questions or need help using the library, please open an issue or contact us.

Scraper API

The usage of the Scraper API is very similar; just change the class name when initializing.

scraper_api = ScraperAPI({ 'token': 'YOUR_NORMAL_TOKEN' })

response = scraper_api.get('https://www.amazon.com/DualSense-Wireless-Controller-PlayStation-5/dp/B08FC6C75Y/')
if response['status_code'] == 200:
    print(response['json']['name']) # Will print the name of the Amazon product

Leads API

To find email leads you can use the Leads API; check the full API documentation if needed.

leads_api = LeadsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })

response = leads_api.get_from_domain('microsoft.com')

if response['status_code'] == 200:
    print(response['json']['leads'])

Screenshots API

Initialize with your Screenshots API token and call the get method.

screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com')
if response['status_code'] == 200:
    print(response['headers']['success'])
    print(response['headers']['url'])
    print(response['headers']['remaining_requests'])
    print(response['file'])

or specifying a file path

screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com', { 'save_to_path': 'apple.jpg' })
if response['status_code'] == 200:
    print(response['headers']['success'])
    print(response['headers']['url'])
    print(response['headers']['remaining_requests'])
    print(response['file'])

or if you set store=true then screenshot_url is set in the returned headers

screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com', { 'store': 'true' })
if response['status_code'] == 200:
    print(response['headers']['success'])
    print(response['headers']['url'])
    print(response['headers']['remaining_requests'])
    print(response['file'])
    print(response['headers']['screenshot_url'])

Note that the screenshots_api.get(url, options) method accepts an options dictionary, as shown above.

Storage API

Initialize the Storage API using your private token.

storage_api = StorageAPI({ 'token': 'YOUR_NORMAL_TOKEN' })

Pass the URL that you want to get from ProxyCrawl Storage.

response = storage_api.get('https://www.apple.com')
if response['status_code'] == 200:
    print(response['headers']['original_status'])
    print(response['headers']['pc_status'])
    print(response['headers']['url'])
    print(response['headers']['rid'])
    print(response['headers']['stored_at'])
    print(response['body'])

or you can use the RID

response = storage_api.get('RID_REPLACE')
if response['status_code'] == 200:
    print(response['headers']['original_status'])
    print(response['headers']['pc_status'])
    print(response['headers']['url'])
    print(response['headers']['rid'])
    print(response['headers']['stored_at'])
    print(response['body'])

Note: either the RID or the URL must be sent; each is optional on its own, but it is mandatory to send one of the two.

Delete request

To delete a storage item from your storage area, use the correct RID.

if storage_api.delete('RID_REPLACE'):
    print('delete success')
else:
    print('Unable to delete')

Bulk request

To do a bulk request with a list of RIDs, send the list of RIDs as an array.

response = storage_api.bulk(['RID1', 'RID2', 'RID3', ...])
if response['status_code'] == 200:
    for item in response['json']:
        print(item['original_status'])
        print(item['pc_status'])
        print(item['url'])
        print(item['rid'])
        print(item['stored_at'])
        print(item['body'])

RIDs request

To request a bulk list of RIDs from your storage area

rids = storage_api.rids()
print(rids)

You can also specify a limit as a parameter

storage_api.rids(100)

Total Count

To get the total number of documents in your storage area

total_count = storage_api.totalCount()
print(total_count)

Custom timeout

If you need a custom timeout, you can pass it when creating the class instance, like the following:

api = CrawlingAPI({ 'token': 'TOKEN', 'timeout': 120 })

Timeout is in seconds.


Copyright 2022 ProxyCrawl

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].