
TurboWay / Bigdata_practice

Big data analysis and visualization practice

Programming Languages

Python

Projects that are alternatives to or similar to Bigdata_practice

God Of Bigdata
Focused on big data study and interviews; the road to big data mastery starts here. Flink/Spark/Hadoop/HBase/Hive...
Stars: ✭ 6,008 (+3519.28%)
Mutual labels:  kafka, bigdata, hive
Bigdata Notes
A getting-started guide to big data ⭐
Stars: ✭ 10,991 (+6521.08%)
Mutual labels:  kafka, bigdata, hive
Bigdataguide
Learn big data from scratch; includes study videos for every learning stage and interview materials
Stars: ✭ 817 (+392.17%)
Mutual labels:  kafka, bigdata, hive
Datafaker
Datafaker is a large-scale test data and flow test data generation tool. It fakes data and inserts it into a variety of data sources.
Stars: ✭ 327 (+96.99%)
Mutual labels:  kafka, bigdata, hive
Azure Event Hubs Spark
Enabling Continuous Data Processing with Apache Spark and Azure Event Hubs
Stars: ✭ 140 (-15.66%)
Mutual labels:  stream, kafka, bigdata
Apache Spark Hands On
Educational notes and hands-on problems with solutions for the Hadoop ecosystem
Stars: ✭ 74 (-55.42%)
Mutual labels:  bigdata, hive
Repository
A personal study knowledge base covering data warehouse modeling, real-time computing, big data, Java, algorithms, and more.
Stars: ✭ 92 (-44.58%)
Mutual labels:  kafka, hive
Springboot Templates
Spring Boot integration with Dubbo and Netty; NoSQL templates for Redis and MongoDB; MQ templates for Kafka, RocketMQ, and RabbitMQ; Solr, SolrCloud, and Elasticsearch query engines
Stars: ✭ 100 (-39.76%)
Mutual labels:  kafka, hive
Illuminati
A platform that collects all the data occurring in your application and shows it in real time using Kibana or other tools.
Stars: ✭ 106 (-36.14%)
Mutual labels:  stream, kafka
Bigdata Interview
🎯 🌟 [Big data interview questions] A collection of big-data interview questions gathered from around the web, with the author's own answer summaries. Currently covers Hadoop/Hive/Spark/Flink/HBase/Kafka/Zookeeper.
Stars: ✭ 857 (+416.27%)
Mutual labels:  kafka, bigdata
Bigdata Notebook
Stars: ✭ 100 (-39.76%)
Mutual labels:  kafka, bigdata
Flinkstreamsql
Extends real-time SQL on top of open source Flink; mainly implements joins between streams and dimension tables, and supports all native Flink SQL syntax
Stars: ✭ 1,682 (+913.25%)
Mutual labels:  stream, bigdata
Ksql Fork With Deep Learning Function
Deep Learning UDF for KSQL, the Streaming SQL Engine for Apache Kafka with Elasticsearch Sink Example
Stars: ✭ 64 (-61.45%)
Mutual labels:  stream, kafka
Reddit sse stream
A Server-Sent Events stream that delivers Reddit comments and submissions to a client in near real time.
Stars: ✭ 39 (-76.51%)
Mutual labels:  stream, bigdata
Docs4dev
Documentation for frameworks commonly used in back-end development, with Chinese translations: the Spring family (Spring, Spring Boot, Spring Cloud, Spring Security, Spring Session), big data (Apache Hive, HBase, Apache Flume), logging (Log4j2, Logback), HTTP servers (NGINX, Apache), Python, and databases (OpenTSDB, MySQL, PostgreSQL), covering the latest official docs and their Chinese translations.
Stars: ✭ 974 (+486.75%)
Mutual labels:  hive, nginx
Nginx Vod Module
NGINX-based MP4 Repackager
Stars: ✭ 1,378 (+730.12%)
Mutual labels:  stream, nginx
Goka
Goka is a compact yet powerful distributed stream processing library for Apache Kafka written in Go.
Stars: ✭ 1,862 (+1021.69%)
Mutual labels:  stream, kafka
Hadoopcryptoledger
Hadoop Crypto Ledger - Analyzing CryptoLedgers, such as Bitcoin Blockchain, on Big Data platforms, such as Hadoop/Spark/Flink/Hive
Stars: ✭ 126 (-24.1%)
Mutual labels:  bigdata, hive
Eel Sdk
Big Data Toolkit for the JVM
Stars: ✭ 140 (-15.66%)
Mutual labels:  kafka, hive
Szt Bigdata
A big data passenger flow analysis system for the Shenzhen Metro 🚇🚄🌟
Stars: ✭ 826 (+397.59%)
Mutual labels:  kafka, hive

bigdata_practice

A big data practice project - nginx log analysis and visualization

Features

Analyzes nginx logs in both streaming and batch modes, and visualizes the results with Flask + ECharts.

Data collection and analysis pipeline

(Pipeline diagram)

Approach 1: offline batch processing with hive + datax + mysql

Approach 2: real-time stream processing with flume + kafka + python + mysql (sketched below)
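
The actual pipeline code lives in this repository; as a rough, hedged sketch of what the streaming path (approach 2) could look like, the snippet below consumes raw access-log lines from Kafka and parses them into the fields stored in fact_nginx_log. The topic name, broker address, and log-format regex are assumptions for illustration, not values taken from the project.

# Hedged sketch of the streaming path: Kafka -> parse -> fact_nginx_log fields.
# Topic name, broker address, and the regex are illustrative assumptions.
import re

from kafka import KafkaConsumer  # pip install kafka-python

# Matches the default nginx "combined" log format.
LOG_PATTERN = re.compile(
    r'(?P<remote_addr>\S+) \S+ \S+ \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<body_bytes>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)"'
)


def parse_line(line):
    """Parse one access-log line into the columns used by fact_nginx_log."""
    m = LOG_PATTERN.match(line)
    if not m:
        return None
    return {
        'remote_addr': m.group('remote_addr'),
        'time_local': m.group('time_local'),
        'request': m.group('request'),
        # the user agent would later be split into device / os / browser
        'user_agent': m.group('user_agent'),
    }


if __name__ == '__main__':
    consumer = KafkaConsumer(
        'nginx_log',                            # assumed topic name
        bootstrap_servers='localhost:9092',     # assumed broker address
        value_deserializer=lambda b: b.decode('utf-8', 'ignore'),
    )
    for msg in consumer:
        record = parse_line(msg.value)
        if record:
            print(record)  # the real pipeline would write this to MySQL instead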

Configuration

  • Install dependencies
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt
  • Edit the database connection string in ironman/data_db.py (a usage sketch follows the table DDL below)
ENGINE_CONFIG = 'mysql+pymysql://root:[email protected]:3306/test?charset=utf8'
  • Create the MySQL tables
-- fact table: parsed nginx access-log records
CREATE TABLE fact_nginx_log(
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `remote_addr` VARCHAR(20),
  `time_local` TIMESTAMP(0),
  `province` VARCHAR(20),
  `request` VARCHAR(300),
  `device` VARCHAR(50),
  `os` VARCHAR(50),
  `browser` VARCHAR(100),
  PRIMARY KEY (`id`)
) DEFAULT CHARSET=utf8;

-- dimension table: IP-to-province mapping
CREATE TABLE dim_ip(
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `ip` VARCHAR(20),
  `province` VARCHAR(20),
  `addtime` TIMESTAMP(0) DEFAULT now(),
  PRIMARY KEY (`id`)
) DEFAULT CHARSET=utf8;
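
As a minimal sketch of how the configuration pieces fit together, the snippet below builds a SQLAlchemy engine from an ENGINE_CONFIG-style connection string and inserts one parsed record into fact_nginx_log. The credentials and sample values are placeholders; the real configuration lives in ironman/data_db.py.

# Minimal sketch: write one parsed log record into fact_nginx_log.
# The connection string below is a placeholder, not the project's real config.
from sqlalchemy import create_engine, text

ENGINE_CONFIG = 'mysql+pymysql://user:password@localhost:3306/test?charset=utf8'
engine = create_engine(ENGINE_CONFIG)

row = {
    'remote_addr': '1.2.3.4',
    'time_local': '2020-11-04 09:35:41',
    'province': 'Guangdong',
    'request': 'GET / HTTP/1.1',
    'device': 'pc',
    'os': 'Windows',
    'browser': 'Chrome',
}

with engine.begin() as conn:
    conn.execute(
        text(
            "INSERT INTO fact_nginx_log "
            "(remote_addr, time_local, province, request, device, os, browser) "
            "VALUES (:remote_addr, :time_local, :province, :request, :device, :os, :browser)"
        ),
        row,
    )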

Run

Run cd ironman; python app.py

Then open http://127.0.0.1:5000/ in a browser.
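
The real ironman/app.py is not reproduced here; the snippet below is only a hypothetical, minimal version of such an app: one Flask route that queries MySQL and returns JSON that an ECharts chart can consume. The route name, SQL, and connection string are assumptions. A chart on the page would then point its xAxis.data and series data at the two returned arrays.

# Hypothetical minimal Flask app in the spirit of ironman/app.py (not project code).
from flask import Flask, jsonify
from sqlalchemy import create_engine, text

app = Flask(__name__)
# Placeholder connection string; the project reads its own ENGINE_CONFIG from data_db.py.
engine = create_engine('mysql+pymysql://user:password@localhost:3306/test?charset=utf8')


@app.route('/api/hourly')
def hourly_trend():
    """Visits per hour, shaped for a 24-hour trend chart."""
    sql = text(
        "SELECT HOUR(time_local) AS hr, COUNT(*) AS pv "
        "FROM fact_nginx_log GROUP BY hr ORDER BY hr"
    )
    with engine.connect() as conn:
        rows = conn.execute(sql).fetchall()
    # ECharts typically wants parallel arrays for the x axis and the series values.
    return jsonify({'xAxis': [r[0] for r in rows], 'series': [r[1] for r in rows]})


if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000)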

Screenshots

24-hour visit trend (screenshot)

Daily visits (screenshot)

Client device share (screenshot)

User distribution (screenshot)

Crawler word cloud (screenshot)
