
pr0gramista / memes-api

License: MIT License
API for scraping common meme sites

Programming Languages

python
139335 projects - #7 most used programming language
Dockerfile
14818 projects

Projects that are alternatives of or similar to memes-api

scrapy-zyte-smartproxy
Zyte Smart Proxy Manager (formerly Crawlera) middleware for Scrapy
Stars: ✭ 317 (+1764.71%)
Mutual labels:  scraping, scrapy
double-agent
A test suite of common scraper detection techniques. See how detectable your scraper stack is.
Stars: ✭ 123 (+623.53%)
Mutual labels:  scraping, scrapy
Email Extractor
The main functionality is to extract all the emails from one or several URLs
Stars: ✭ 81 (+376.47%)
Mutual labels:  scraping, scrapy
Easy Scraping Tutorial
Simple but useful Python web scraping tutorial code.
Stars: ✭ 583 (+3329.41%)
Mutual labels:  scraping, scrapy
torchestrator
Spin up Tor containers and then proxy HTTP requests via these Tor instances
Stars: ✭ 32 (+88.24%)
Mutual labels:  scraping, scrapy
Scrapy Cluster
This Scrapy project uses Redis and Kafka to create a distributed on demand scraping cluster.
Stars: ✭ 921 (+5317.65%)
Mutual labels:  scraping, scrapy
Seleniumcrawler
An example using Selenium webdrivers for python and Scrapy framework to create a web scraper to crawl an ASP site
Stars: ✭ 117 (+588.24%)
Mutual labels:  scraping, scrapy
Scrapy Crawlera
Crawlera middleware for Scrapy
Stars: ✭ 281 (+1552.94%)
Mutual labels:  scraping, scrapy
InstaBot
Simple and friendly Bot for Instagram, using Selenium and Scrapy with Python.
Stars: ✭ 32 (+88.24%)
Mutual labels:  scraping, scrapy
scrapy-fieldstats
A Scrapy extension to log items coverage when the spider shuts down
Stars: ✭ 17 (+0%)
Mutual labels:  scraping, scrapy
Scrapple
A framework for creating semi-automatic web content extractors
Stars: ✭ 464 (+2629.41%)
Mutual labels:  scraping, scrapy
proxi
Proxy pool. Finds and checks proxies with rest api for querying results. Can find over 25k proxies in under 5 minutes.
Stars: ✭ 32 (+88.24%)
Mutual labels:  scraping, scrapy
Post Tuto Deployment
Build and deploy a machine learning app from scratch 🚀
Stars: ✭ 368 (+2064.71%)
Mutual labels:  scraping, scrapy
Django Dynamic Scraper
Creating Scrapy scrapers via the Django admin interface
Stars: ✭ 1,024 (+5923.53%)
Mutual labels:  scraping, scrapy
Linkedin
Linkedin Scraper using Selenium Web Driver, Chromium headless, Docker and Scrapy
Stars: ✭ 309 (+1717.65%)
Mutual labels:  scraping, scrapy
Dotnetcrawler
DotnetCrawler is a straightforward, lightweight web crawling/scraping library for Entity Framework Core output, based on .NET Core. The library is designed like other strong crawler libraries such as WebMagic and Scrapy, but can be extended for your custom requirements. Medium link: https://medium.com/@mehmetozkaya/creating-custom-web-crawler-with-dotnet-core-using-entity-framework-core-ec8d23f0ca7c
Stars: ✭ 100 (+488.24%)
Mutual labels:  scraping, scrapy
policy-data-analyzer
Building a model to recognize incentives for landscape restoration in environmental policies from Latin America, the US and India. Bringing NLP to the world of policy analysis through an extensible framework that includes scraping, preprocessing, active learning and text analysis pipelines.
Stars: ✭ 22 (+29.41%)
Mutual labels:  scraping, scrapy
ARGUS
ARGUS is an easy-to-use web scraping tool. The program is based on the Scrapy Python framework and is able to crawl a broad range of different websites. On the websites, ARGUS is able to perform tasks like scraping texts or collecting hyperlinks between websites. See: https://link.springer.com/article/10.1007/s11192-020-03726-9
Stars: ✭ 68 (+300%)
Mutual labels:  scraping, scrapy
RARBG-scraper
With Selenium headless browsing and CAPTCHA solving
Stars: ✭ 38 (+123.53%)
Mutual labels:  scraping, scrapy
scrapy-distributed
A series of distributed components for Scrapy. Including RabbitMQ-based components, Kafka-based components, and RedisBloom-based components for Scrapy.
Stars: ✭ 38 (+123.53%)
Mutual labels:  scraping, scrapy

Memes API

API for scraping common meme sites, written in Python using parsel and Flask. The currently supported sites are listed by the root endpoint (see below).

Want another site supported? Suggest one!

API

GET /

Response: the list of available sites

[
  "/kwejk",
  "/jbzd",
  "/9gag",
  "/9gagnsfw",
  "/demotywatory",
  "/mistrzowie",
  "/anonimowe",
  "/ifunnyco"
]
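A client can combine these paths from the root listing with the API's base URL to build full request URLs. A minimal sketch, assuming the API is deployed locally on port 5000 (the base URL and helper name are hypothetical, not part of the project):

```python
from urllib.parse import urljoin

# Hypothetical base URL where the API is running
BASE_URL = "http://localhost:5000"

def site_urls(sites):
    """Turn the site paths from the root endpoint into absolute URLs."""
    return [urljoin(BASE_URL, path) for path in sites]

print(site_urls(["/kwejk", "/9gag"]))
# → ['http://localhost:5000/kwejk', 'http://localhost:5000/9gag']
```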

You can then access any of them by requesting its path, e.g. /kwejk:

(shortened response)
{
  "memes": [
    {
      "title": "Czasy się zmieniają",
      "url": "https://kwejk.pl/obrazek/3387625/czasy-sie-zmieniaja.html",
      "view_url": "/kwejk/3387625",
      "author": {
        "name": "Torendil",
        "url": "https://kwejk.pl/uzytkownik/torendil"
      },
      "comment_count": 18,
      "content": {
        "contentType": "IMAGE",
        "url": "https://i1.kwejk.pl/k/obrazki/2019/05/lJUqdnyKqJf1Katl.jpg"
      },
      "points": 205,
      "tags": [
        {
          "name": "#obrazek",
          "url": "https://kwejk.pl/tag/obrazek"
        },
        {
          "name": "#humor",
          "url": "https://kwejk.pl/tag/humor"
        },
        {
          "name": "#mem",
          "url": "https://kwejk.pl/tag/mem"
        },
        {
          "name": "#true",
          "url": "https://kwejk.pl/tag/true"
        }
      ]
    }
  ],
  "next_page_url": "/kwejk/page/40878",
  "title": "Ministerstwo memów, zdjęć i innych śmiesznych obrazków - KWEJK.pl"
}
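The next_page_url field lets a client page through a site's feed. A minimal pagination sketch, assuming a fetch_json callable that takes a path like "/kwejk" and returns the decoded JSON response in the shape shown above (the helper and function names are hypothetical):

```python
def collect_memes(fetch_json, start_path, max_pages=3):
    """Follow next_page_url links, collecting memes from each page.

    fetch_json: hypothetical callable mapping a path such as "/kwejk"
    to the decoded JSON response for that page.
    """
    memes, path = [], start_path
    for _ in range(max_pages):
        page = fetch_json(path)
        memes.extend(page.get("memes", []))
        path = page.get("next_page_url")
        if not path:  # last page reached
            break
    return memes
```

In tests or offline use, fetch_json can be as simple as a dict lookup keyed by path; in production it would wrap an HTTP GET against the deployed API.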

Development

  1. Install dependencies with pipenv
  2. Run the development server with python main.py
  3. Make your changes
  4. Write and run tests with pytest in the project directory
  5. Format your code using black
  6. If you added new packages, run pipenv run pipenv_to_requirements -f
  7. Make a pull request and be happy :)

Deploying Memes API

There are a couple of ways to deploy Memes API. The currently supported options are:

  • Docker image (Dockerfile)
  • ZEIT Now (now.json)
  • Google App Engine (app.yaml)