
dyweb / Scrala

Unmaintained 🐳 ☕️ 🕷 Scala crawler (spider) framework, inspired by Scrapy, created by @gaocegege


Projects that are alternatives to or similar to Scrala

Funpyspidersearchengine
Word2vec-powered personalized search: Scrapy 2.3.0 (data crawling) + Elasticsearch 7.9.1 (storage, exposed via a RESTful API) + Django 3.1.1 (search)
Stars: ✭ 782 (+592.04%)
Mutual labels:  spider, scrapy
App comments spider
Crawls game reviews from Baidu Tieba, TapTap, the App Store, and official Weibo accounts (based on redis_scrapy), using a Bloom filter for deduplication.
Stars: ✭ 38 (-66.37%)
Mutual labels:  spider, scrapy
Seeker
Seeker - another job board aggregator.
Stars: ✭ 16 (-85.84%)
Mutual labels:  spider, scrapy
Fbcrawl
A Facebook crawler
Stars: ✭ 536 (+374.34%)
Mutual labels:  spider, scrapy
Alipayspider Scrapy
AlipaySpider, an Alipay spider built on Scrapy (uses ChromeDriver)
Stars: ✭ 70 (-38.05%)
Mutual labels:  spider, scrapy
Python Spider
Assorted spider demos: Douban movie Top 250, Douyu JSON data, image crawling, Taobao, Youyuan, a CrawlSpider for the Hongniang dating site plus distributed crawling with Redis storage, Selenium, Django API development, simulated logins for Zhihu, GitHub, and Tuchong, full-site crawling of the Duodian mall, WeChat official-account article history, articles shared in WeChat groups or by friends, and itchat monitoring of a given official account's shared articles
Stars: ✭ 615 (+444.25%)
Mutual labels:  spider, scrapy
Jspider
JSpider posts at least one site's JS decryption write-up every week. Stars welcome; WeChat for discussion: 13298307816
Stars: ✭ 914 (+708.85%)
Mutual labels:  spider, scrapy
Happy Spiders
🔧 🔩 🔨 A curated collection of crawler tools, simulated-login techniques, proxy IPs, Scrapy template code, and more.
Stars: ✭ 261 (+130.97%)
Mutual labels:  spider, scrapy
Reptile
🏀 Python 3 web-crawling projects in practice (some with detailed tutorials): Maoyan, Tencent Video, Douban, the graduate-admissions site Yanzhaowang, Weibo, Biquge novels, Baidu trending topics, Bilibili, CSDN, NetEase Cloud Reading, Ali Literature, Baidu stocks, Toutiao, WeChat official accounts, NetEase Cloud Music, Lagou, Youdao, Unsplash, Shixiseng, Autohome, LoL Box, Dianping, Lianjia, LPL schedules, typhoon tracking, the Fantasy Westward Journey and Onmyoji auction houses, weather, Nowcoder, Baidu Wenku, bedtime stories, Zhihu, Wish
Stars: ✭ 1,048 (+827.43%)
Mutual labels:  spider, scrapy
Django Dynamic Scraper
Creating Scrapy scrapers via the Django admin interface
Stars: ✭ 1,024 (+806.19%)
Mutual labels:  spider, scrapy
Haipproxy
💖 Highly available distributed IP proxy pool, powered by Scrapy and Redis
Stars: ✭ 4,993 (+4318.58%)
Mutual labels:  spider, scrapy
Capturer
Captures pictures from websites such as Sina, LOFTER, Huaban, and others
Stars: ✭ 76 (-32.74%)
Mutual labels:  spider, scrapy
Elves
🎊 Design and implementation of a lightweight crawler framework.
Stars: ✭ 315 (+178.76%)
Mutual labels:  spider, scrapy
Icrawler
A multi-threaded crawler framework with many built-in image crawlers.
Stars: ✭ 629 (+456.64%)
Mutual labels:  spider, scrapy
Alltheplaces
A set of spiders and scrapers to extract location information from places that post their location on the internet.
Stars: ✭ 277 (+145.13%)
Mutual labels:  spider, scrapy
Mailinglistscraper
A python web scraper for public email lists.
Stars: ✭ 19 (-83.19%)
Mutual labels:  spider, scrapy
Douban Crawler
A crawler for https://douban.com
Stars: ✭ 13 (-88.5%)
Mutual labels:  spider, scrapy
Tieba spider
A Baidu Tieba spider (based on Scrapy and MySQL)
Stars: ✭ 257 (+127.43%)
Mutual labels:  spider, scrapy
Crawlab
Distributed web-crawler admin platform for managing spiders, regardless of language or framework.
Stars: ✭ 8,392 (+7326.55%)
Mutual labels:  spider, scrapy
Image Downloader
Download images from Google, Bing, and Baidu.
Stars: ✭ 1,173 (+938.05%)
Mutual labels:  spider, scrapy

scrala

Codacy Badge Build Status License scrala published Docker Pulls Join the chat at https://gitter.im/gaocegege/scrala

scrala is a web-crawling framework for Scala, inspired by Scrapy.

Installation

From Docker

gaocegege/scrala on Docker Hub

Create a Dockerfile in your project:

FROM gaocegege/scrala:latest

# COPY build.sbt and the src directory into the container

Then run it with a single Docker command:

docker run -v <your src>:/app/src -v <your ivy2 directory>:/root/.ivy2  gaocegege/scrala
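For reference, a fleshed-out Dockerfile might look like this; the /app destination paths are an assumption, chosen to mirror the mount point used by the docker run command above:

```dockerfile
FROM gaocegege/scrala:latest

# Copy the sbt build definition and the sources into the image
# (/app mirrors the mount point used by the docker run command)
COPY build.sbt /app/build.sbt
COPY src /app/src
```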

From SBT

Step 1. Add the JitPack resolver at the end of the resolvers in your build.sbt:

resolvers += "jitpack" at "https://jitpack.io"

Step 2. Add the dependency:

libraryDependencies += "com.github.gaocegege" % "scrala" % "0.1.5"
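Putting the two steps together, a minimal build.sbt might look like this; the project name, version, and Scala version are illustrative, not prescribed by scrala:

```scala
name := "my-crawler"      // illustrative project name
version := "0.1.0"
scalaVersion := "2.11.8"  // assumed; match the Scala version your project targets

resolvers += "jitpack" at "https://jitpack.io"

libraryDependencies += "com.github.gaocegege" % "scrala" % "0.1.5"
```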

From Source Code

git clone https://github.com/gaocegege/scrala.git
cd ./scrala
sbt assembly

You will get the jar in ./target/scala-<version>/.

Example

import com.gaocegege.scrala.core.spider.impl.DefaultSpider
import com.gaocegege.scrala.core.common.response.impl.HttpResponse

class TestSpider extends DefaultSpider {
  def startUrl = List[String]("http://www.gaocegege.com/resume")

  def parse(response: HttpResponse): Unit = {
    val links = (response getContentParser) select ("a")
    for (i <- 0 until links.size()) {
      request(((links get (i)) attr ("href")), printIt)
    }
  }

  def printIt(response: HttpResponse): Unit = {
    println((response getContentParser) title)
  }
}

object Main {
  def main(args: Array[String]): Unit = {
    val test = new TestSpider
    test begin
  }
}

Just as in Scrapy, all you need to do is define startUrl to tell the framework where to start, and override parse(...) to handle the responses from those URLs. The request(...) function plays the role of yield scrapy.Request(...) in Scrapy.
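The callback-driven flow described above can be sketched in plain Scala, independent of scrala; Engine, fetch, and Demo here are hypothetical stand-ins for illustration, not part of scrala's API:

```scala
// A minimal sketch of callback-driven crawling: each request pairs a URL
// with a handler, and the engine drains the queue, invoking handlers that
// may themselves enqueue further requests. Engine, fetch, and Demo are
// illustrative names, not scrala classes.
import scala.collection.mutable

object Engine {
  private val queue = mutable.Queue[(String, String => Unit)]()

  // Analogous in shape to scrala's request(url, callback)
  def request(url: String, callback: String => Unit): Unit =
    queue.enqueue((url, callback))

  // Stub fetcher: a real engine would issue an HTTP GET here
  private def fetch(url: String): String =
    s"<html><title>$url</title></html>"

  def run(): Unit =
    while (queue.nonEmpty) {
      val (url, callback) = queue.dequeue()
      callback(fetch(url))
    }
}

object Demo {
  // Crawl a seed "page" whose handler enqueues one follow-up request,
  // mirroring how parse(...) issues a request for each link it extracts
  def crawl(): Int = {
    val seen = mutable.ListBuffer[String]()
    Engine.request("http://example.com", body => {
      seen += body
      Engine.request("http://example.com/next", body2 => seen += body2)
    })
    Engine.run()
    seen.size
  }

  def main(args: Array[String]): Unit =
    println(crawl()) // prints 2: the seed page plus the followed link
}
```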

You can find an example project in ./example/.

For Developer

scrala is under active development; feel free to contribute documentation, test cases, pull requests, issues, or anything else you'd like. I'm a newcomer to Scala, so the code may not be idiomatic; I'd be glad if someone familiar with Scala coding standards could review it for the repo. :)
