
poozhu / Crawler For Github Trending

🕷️ A node crawler for github trending.

Programming Languages

javascript
184084 projects - #8 most used programming language

Labels

Projects that are alternatives of or similar to Crawler For Github Trending

Jlitespider
A lite distributed Java spider framework :-)
Stars: ✭ 151 (-12.21%)
Mutual labels:  crawler
Yispider
A distributed crawler platform that helps you manage and develop crawlers. It ships with a set of crawler definition rules (templates), so you can define crawlers quickly from templates or use it as a framework to write crawlers by hand. (A hobby project, updated whenever it feels lacking.)
Stars: ✭ 158 (-8.14%)
Mutual labels:  crawler
Douyin crawler
Douyin (TikTok) crawler: data-collection API and video watermark removal, with no server or proxy IP required.
Stars: ✭ 169 (-1.74%)
Mutual labels:  crawler
Python3 Spider
Hands-on Python crawlers: simulated login for major sites, including (but not limited to) slider CAPTCHAs, Pinduoduo, Meituan, Baidu, bilibili, Dianping, and Taobao. Please star if you like it ❤️
Stars: ✭ 2,129 (+1137.79%)
Mutual labels:  crawler
Abot
Cross Platform C# web crawler framework built for speed and flexibility. Please star this project! +1.
Stars: ✭ 1,961 (+1040.12%)
Mutual labels:  crawler
Datmusic Api
Alternative for VK Audio API
Stars: ✭ 160 (-6.98%)
Mutual labels:  crawler
Dxy Covid 19 Crawler
2019新型冠状病毒疫情实时爬虫及API | COVID-19/2019-nCoV Realtime Infection Crawler and API
Stars: ✭ 1,865 (+984.3%)
Mutual labels:  crawler
Gain
Web crawling framework based on asyncio.
Stars: ✭ 2,002 (+1063.95%)
Mutual labels:  crawler
Downzemall
DownZemAll! is a download manager for Windows, MacOS and Linux
Stars: ✭ 157 (-8.72%)
Mutual labels:  crawler
Bitextor
Bitextor generates translation memories from multilingual websites.
Stars: ✭ 168 (-2.33%)
Mutual labels:  crawler
Weibo wordcloud
Scrapes Weibo data by keyword, then generates a word cloud.
Stars: ✭ 154 (-10.47%)
Mutual labels:  crawler
Instagram Scraper
scrapes medias, likes, followers, tags and all metadata. Inspired by instagram-php-scraper,bot
Stars: ✭ 2,209 (+1184.3%)
Mutual labels:  crawler
Gocrawl
Polite, slim and concurrent web crawler.
Stars: ✭ 1,962 (+1040.7%)
Mutual labels:  crawler
Ngmeta
Dynamic meta tags in your AngularJS single page application
Stars: ✭ 152 (-11.63%)
Mutual labels:  crawler
Sitemap Generator Crawler
Script that generates a sitemap by crawling a given URL
Stars: ✭ 169 (-1.74%)
Mutual labels:  crawler
Ptt Alertor
📢 A Ptt article notification bot! Notifies you of Ptt articles in real time.
Stars: ✭ 150 (-12.79%)
Mutual labels:  crawler
Js Reverse
JavaScript reverse-engineering research.
Stars: ✭ 159 (-7.56%)
Mutual labels:  crawler
Proxy pool
A proxy IP pool for Python crawlers.
Stars: ✭ 13,964 (+8018.6%)
Mutual labels:  crawler
Fun crawler
Crawl some picture for fun
Stars: ✭ 169 (-1.74%)
Mutual labels:  crawler
Scrapingoutsourcing
ScrapingOutsourcing shares crawler code, aiming for about one new example per week.
Stars: ✭ 164 (-4.65%)
Mutual labels:  crawler

Crawler-for-Github-Trending

A 50-line, minimalist Node.js crawler for GitHub Trending — a small demo project built with axios, express, and cheerio.

Usage

A short introduction (in Chinese): https://juejin.im/post/5cbab247e51d45789024d7cb
A simple demo application: http://zy2071.com/Fun/todayGithub.html

First make sure Node.js 10.0+ is installed, then:

1. Clone this project:

git clone https://github.com/ZY2071/Crawler-for-Github-Trending.git
cd Crawler-for-Github-Trending
npm i
node index.js

2. Or download the project archive and extract it:

cd Crawler-for-Github-Trending-master  // enter the project folder
npm i
node index.js

Examples

Once the project starts, the console prints:

Listening on port 3000!

Now open the local service at http://localhost:3000/daily in a browser.

http://localhost:3000/time-language // "time" is the period, "language" the programming language. For example:

http://localhost:3000/daily  // today's trending; other period options: weekly, monthly
http://localhost:3000/daily-JavaScript  // today's JavaScript trending; any language can be used
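As a sketch of how such a route scheme can map onto GitHub's public Trending pages (the helper names below are illustrative, not the project's actual code):

```javascript
// Illustrative sketch, not the project's actual code: split a request
// path like "/daily-JavaScript" into period and language, then build
// the GitHub Trending URL a crawler would fetch.
function parseRoute(path) {
  const [time, ...rest] = path.replace(/^\//, '').split('-');
  return { time, language: rest.length ? rest.join('-') : null };
}

function trendingUrl({ time, language }) {
  const lang = language ? `/${encodeURIComponent(language.toLowerCase())}` : '';
  return `https://github.com/trending${lang}?since=${time}`;
}

console.log(trendingUrl(parseRoute('/daily')));
// https://github.com/trending?since=daily
console.log(trendingUrl(parseRoute('/weekly-JavaScript')));
// https://github.com/trending/javascript?since=weekly
```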

After a short wait, the scraped data is returned:

[
 {
  "title": "lib-pku / libpku",
  "links": "https://github.com/lib-pku/libpku",
  "description": "贵校课程资料民间整理",
  "language": "JavaScript",
  "stars": "14,297",
  "forks": "4,360",
  "info": "3,121 stars this week"
 },
 {
  "title": "SqueezerIO / squeezer",
  "links": "https://github.com/SqueezerIO/squeezer",
  "description": "Squeezer Framework - Build serverless dApps",
  "language": "JavaScript",
  "stars": "3,212",
  "forks": "80",
  "info": "2,807 stars this week"
 },
 ...
]
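Each record above comes from parsing the Trending page's HTML. The project does this with cheerio; as a dependency-free illustration of the same extraction step, here is a sketch that pulls the title, link, and description out of a hand-written fragment (the markup below is a simplified stand-in, not GitHub's actual markup):

```javascript
// Simplified stand-in for one repository entry on the Trending page.
// GitHub's real markup is more complex; the project uses cheerio
// selectors against it rather than regexes.
const fragment = `
<article>
  <h2><a href="/lib-pku/libpku">lib-pku / libpku</a></h2>
  <p>Course materials collected by students</p>
</article>`;

// Extract the fields that appear in the JSON output above.
function parseEntry(html) {
  const link = html.match(/<a href="([^"]+)">([^<]+)<\/a>/);
  const desc = html.match(/<p>([^<]+)<\/p>/);
  return {
    title: link ? link[2].trim() : '',
    links: link ? `https://github.com${link[1]}` : '',
    description: desc ? desc[1].trim() : '',
  };
}

console.log(parseEntry(fragment));
```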

More

This project crawls the data live on every request, so responses are slow. If you want to use it as a real API, crawl on a schedule and store the results in a database instead.

Still, reading the code covers the basics of each of the node modules above and of crawling in general — hopefully it helps.
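One lightweight way to follow that advice, short of a full database, is to refresh an in-memory cache on a timer and serve requests from it. In this sketch, `fetchTrending` is a hypothetical stand-in for whatever crawl function you use:

```javascript
// Minimal caching sketch: crawl on a schedule, serve from memory.
// `fetchTrending` is a hypothetical stand-in for the crawl step.
let cache = { data: [], updatedAt: 0 };

async function refresh(fetchTrending) {
  try {
    cache = { data: await fetchTrending(), updatedAt: Date.now() };
  } catch (err) {
    // Keep the last good data if a crawl fails.
    console.error('refresh failed:', err.message);
  }
}

// Example wiring: refresh every 10 minutes, and let request handlers
// read `cache.data` instead of crawling per request.
// setInterval(() => refresh(fetchTrending), 10 * 60 * 1000);
```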

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].