
gglinux / Wifi

License: Apache-2.0
A big data query and analysis system based on information captured over WiFi

Programming Languages

Java

Projects that are alternatives to or similar to Wifi

Bigdata
💎🔥 Big data study notes
Stars: ✭ 488 (+424.73%)
Mutual labels:  hadoop, hive, hbase, hdfs
Bigdata Notes
A beginner's guide to big data ⭐
Stars: ✭ 10,991 (+11718.28%)
Mutual labels:  hadoop, hive, hbase, hdfs
aaocp
A big data project for analyzing user behavior logs
Stars: ✭ 53 (-43.01%)
Mutual labels:  hive, hadoop, hbase, hdfs
Bigdata docker
Big Data Ecosystem Docker
Stars: ✭ 161 (+73.12%)
Mutual labels:  hadoop, hive, hbase, hdfs
Repository
A personal knowledge base covering data warehouse modeling, real-time computing, big data, Java, algorithms, and more.
Stars: ✭ 92 (-1.08%)
Mutual labels:  hadoop, hive, hbase, hdfs
God Of Bigdata
Focused on big data study and interviews; the road to big data mastery starts here. Flink/Spark/Hadoop/Hbase/Hive...
Stars: ✭ 6,008 (+6360.22%)
Mutual labels:  hadoop, hive, hbase, hdfs
Bigdataguide
Learning big data from scratch, including study videos for each stage and interview materials
Stars: ✭ 817 (+778.49%)
Mutual labels:  hadoop, hive, hbase
BigInsights-on-Apache-Hadoop
Example projects for 'BigInsights for Apache Hadoop' on IBM Bluemix
Stars: ✭ 21 (-77.42%)
Mutual labels:  hive, hadoop, hbase
Szt Bigdata
A big data passenger-flow analysis system for the Shenzhen Metro 🚇🚄🌟
Stars: ✭ 826 (+788.17%)
Mutual labels:  hadoop, hive, hbase
Bigdata Interview
🎯 🌟 [Big data interview questions] Big-data interview questions collected from around the web, together with my own answer summaries. Currently covers the Hadoop/Hive/Spark/Flink/HBase/Kafka/Zookeeper frameworks.
Stars: ✭ 857 (+821.51%)
Mutual labels:  hadoop, hbase, hdfs
dockerfiles
Multiple Docker container images for the main big data tools (Hadoop, Spark, Kafka, HBase, Cassandra, Zookeeper, Zeppelin, Drill, Flink, Hive, Hue, Mesos, ...)
Stars: ✭ 29 (-68.82%)
Mutual labels:  hive, hadoop, hbase
BigDataTools
Tools for big data
Stars: ✭ 36 (-61.29%)
Mutual labels:  hive, hbase, hdfs
swordfish
Open-source distributed workflow scheduling tool that also supports streaming tasks.
Stars: ✭ 35 (-62.37%)
Mutual labels:  hive, hadoop, hbase
wasp
WASP is a framework for building complex real-time big data applications. It relies on a kind of Kappa/Lambda architecture, mainly leveraging Kafka and Spark. If you need to ingest huge amounts of heterogeneous data and analyze them through complex pipelines, this is the framework for you.
Stars: ✭ 19 (-79.57%)
Mutual labels:  hadoop, hbase, hdfs
xxhadoop
Data analysis using Hadoop/Spark/Storm/Elasticsearch/machine learning, etc. These are my daily notes/code/demos. Don't fork, just star!
Stars: ✭ 37 (-60.22%)
Mutual labels:  hive, hadoop, hbase
cloud
Environment setup and configuration files for cloud computing with Hadoop, Hive, Hue, Oozie, Sqoop, HBase, and Zookeeper
Stars: ✭ 48 (-48.39%)
Mutual labels:  hive, hadoop, hbase
Learning Spark
Learning Spark from scratch; big data learning
Stars: ✭ 37 (-60.22%)
Mutual labels:  hadoop, hbase, hdfs
bigdata-fun
A complete (distributed) BigData stack, running in containers
Stars: ✭ 14 (-84.95%)
Mutual labels:  hadoop, hbase, hdfs
Hadoop cookbook
Cookbook to install Hadoop 2.0+ using Chef
Stars: ✭ 82 (-11.83%)
Mutual labels:  hadoop, hive, hbase
bigdata-doc
Big data study notes, a learning roadmap, and a collection of technical case studies.
Stars: ✭ 37 (-60.22%)
Mutual labels:  hive, hadoop, hdfs

wifi

[Travis CI build badge]

A big data query system based on information captured over WiFi. Its main components are HBase table creation and data import, user trajectory queries, collision analysis, and detail and summary statistics over the query results.

Usage

Command-line mode

  1. Import the HBase and Hive projects separately
  2. Start the Hive server: "/mnt/hgfs/yyx/apache-hive-1.0.1-bin/hiveserver2" (this path will vary with your environment)
  3. Start the HBase server: "/mnt/hgfs/yyx/hbase-0.98.18-hadoop1/bin/start-hbase.sh" (this path will vary with your environment)
  4. Adjust the parameters to match your test setup and run the project (a connectivity-check sketch follows this list)
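Before running the project, it can help to confirm that both servers are actually reachable. The sketch below is not part of the project: it assumes HiveServer2 on its default port 10000 and ZooKeeper/HBase on localhost, and uses the standard Hive JDBC driver together with the HBase 0.98 client API. Adjust the host names to your environment.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Minimal sketch: verify that HiveServer2 and HBase are reachable before
// running the actual queries. Host names and ports are assumptions.
public class ConnectivityCheck {
    public static void main(String[] args) throws Exception {
        // HiveServer2 (default port 10000); empty user/password for a test setup
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection hive = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "", "");
             Statement stmt = hive.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println("Hive table: " + rs.getString(1));
            }
        }

        // HBase (0.98-era client API): checkHBaseAvailable throws if the cluster is down
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");
        HBaseAdmin.checkHBaseAvailable(conf);
        System.out.println("HBase is reachable");
    }
}
```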

Web mode

  1. Import the webApp project into MyEclipse
  2. Start the Hive server: "/mnt/hgfs/yyx/apache-hive-1.0.1-bin/hiveserver2" (this path will vary with your environment)
  3. Start the HBase server: "/mnt/hgfs/yyx/hbase-0.98.18-hadoop1/bin/start-hbase.sh" (this path will vary with your environment)
  4. Open index.jsp and change the parameters to query the corresponding results (a hypothetical servlet sketch follows this list)
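The webApp project itself is not reproduced here. Purely as an illustration, a hypothetical servlet behind index.jsp could read the query parameters and hand them to the same query code used in command-line mode; every class and parameter name below is an assumption, not the project's actual code.

```java
import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch only: reads the parameters a form on index.jsp might submit
// and echoes them back; a real implementation would pass them to the Hive/HBase query code.
public class TrackQueryServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String startTime = req.getParameter("startTime");
        String endTime = req.getParameter("endTime");
        String userMac = req.getParameter("userMac");

        resp.setContentType("text/plain;charset=UTF-8");
        PrintWriter out = resp.getWriter();
        out.printf("querying track of %s between %s and %s%n",
                userMac, startTime, endTime);
        // ... delegate to the command-line query code here ...
    }
}
```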

Test results

Command-line mode

The parameters used are as follows:

String startTime = "'2016-04-13 10:43:36'";
String endTime = "'2016-04-14 11:43:36'";
String userMac = "'ff:ff:ff:ff:ff:ff'";
String deviceMac = "'b8:27:eb:1c:c0:09'";
String time1 = "'2016-04-13 10:43:36'";
String time2 = "'2016-04-14 11:43:36'";

HBase table creation and data import

[Screenshot: Hbase_insert]
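For reference, here is a rough sketch of what the table-creation and import step can look like with the HBase 0.98 client API that this setup ships with. The table name, column family, and row-key layout below are assumptions for illustration, not necessarily the project's actual schema.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of table creation plus a single-row import, HBase 0.98-style API.
public class HbaseInsertSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Create a "wifi" table with one column family (names are assumptions)
        HBaseAdmin admin = new HBaseAdmin(conf);
        if (!admin.tableExists("wifi")) {
            HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("wifi"));
            desc.addFamily(new HColumnDescriptor("info"));
            admin.createTable(desc);
        }
        admin.close();

        // Import one captured record: row key = userMac + capture time
        HTable table = new HTable(conf, "wifi");
        Put put = new Put(Bytes.toBytes("ff:ff:ff:ff:ff:ff_2016-04-13 10:43:36"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("deviceMac"),
                Bytes.toBytes("b8:27:eb:1c:c0:09"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("time"),
                Bytes.toBytes("2016-04-13 10:43:36"));
        table.put(put);
        table.close();
    }
}
```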

User trajectory query

[Screenshot: track]
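The trajectory query itself can be issued over Hive JDBC with the parameters listed above: which capture devices saw a given user MAC, and when, within a time window. In the sketch below the table name wifi_log and its columns are assumptions, not the project's actual schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: list which capture devices saw a given user MAC in a time window.
// Table name "wifi_log" and its columns are assumptions.
public class TrackQuerySketch {
    public static void main(String[] args) throws Exception {
        String startTime = "'2016-04-13 10:43:36'";
        String endTime = "'2016-04-14 11:43:36'";
        String userMac = "'ff:ff:ff:ff:ff:ff'";

        Class.forName("org.apache.hive.jdbc.HiveDriver");
        String sql = "SELECT devicemac, capturetime FROM wifi_log"
                + " WHERE usermac = " + userMac
                + " AND capturetime BETWEEN " + startTime + " AND " + endTime
                + " ORDER BY capturetime";

        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getString(2));
            }
        }
    }
}
```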

Accompaniment analysis

[Screenshot: accompany]
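One plausible shape for the accompaniment query, again against the assumed wifi_log schema: count how often any other MAC was captured by the same devices as the target MAC inside the same time window. The SQL string built here would be executed with the same Hive JDBC boilerplate as the trajectory sketch above.

```java
// Sketch: build the accompaniment query over an assumed wifi_log table; run it
// with the same Hive JDBC boilerplate shown in the trajectory-query sketch.
public class AccompanySqlSketch {
    static String buildSql(String userMac, String time1, String time2) {
        return "SELECT b.usermac, COUNT(*) AS together_count"
                + " FROM wifi_log a JOIN wifi_log b ON a.devicemac = b.devicemac"
                + " WHERE a.usermac = " + userMac
                + " AND b.usermac <> " + userMac
                + " AND a.capturetime BETWEEN " + time1 + " AND " + time2
                + " AND b.capturetime BETWEEN " + time1 + " AND " + time2
                + " GROUP BY b.usermac"
                + " ORDER BY together_count DESC";
    }

    public static void main(String[] args) {
        System.out.println(buildSql("'ff:ff:ff:ff:ff:ff'",
                "'2016-04-13 10:43:36'", "'2016-04-14 11:43:36'"));
    }
}
```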

Collision analysis

[Screenshot: crash]
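Similarly, one possible reading of the collision analysis is "which MACs were captured by the given device in both time windows". The sketch below is purely illustrative and reuses the assumed wifi_log schema and the Hive JDBC boilerplate from above.

```java
// Sketch: MAC addresses captured by the given device in both time windows
// (one plausible interpretation of the collision analysis; assumed schema).
public class CrashSqlSketch {
    static String buildSql(String deviceMac,
                           String startTime, String endTime,
                           String time1, String time2) {
        return "SELECT a.usermac"
                + " FROM wifi_log a JOIN wifi_log b ON a.usermac = b.usermac"
                + " WHERE a.devicemac = " + deviceMac
                + " AND b.devicemac = " + deviceMac
                + " AND a.capturetime BETWEEN " + startTime + " AND " + endTime
                + " AND b.capturetime BETWEEN " + time1 + " AND " + time2
                + " GROUP BY a.usermac";
    }

    public static void main(String[] args) {
        System.out.println(buildSql("'b8:27:eb:1c:c0:09'",
                "'2016-04-13 10:43:36'", "'2016-04-14 11:43:36'",
                "'2016-04-13 10:43:36'", "'2016-04-14 11:43:36'"));
    }
}
```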

Web mode

HBase table creation and data import

[Screenshot: Hbase_insert]

User trajectory query

[Screenshot: track]

Accompaniment analysis

[Screenshot: accompany]

Collision analysis

[Screenshot: crash]
