georgia-tech-db / eva

License: Apache-2.0
Exploratory Video Analytics System

Programming Languages

Python
139335 projects - #7 most used programming language
ANTLR
299 projects
CSS
56736 projects
Jupyter Notebook
11667 projects
HTML
75241 projects
Shell
77523 projects

Projects that are alternatives of or similar to eva

Hydro Serving
MLOps Platform
Stars: ✭ 213 (+587.1%)
Mutual labels:  serving
Boostcamp-AI-Tech-Product-Serving
[Machine Learning Engineer Basic Guide] Boostcamp AI Tech - Product Serving materials
Stars: ✭ 280 (+803.23%)
Mutual labels:  serving
dingo
A Hybrid Serving & Analytical Processing Database.
Stars: ✭ 108 (+248.39%)
Mutual labels:  serving
mux-go
Official Mux API wrapper for golang projects, supporting both Mux Data and Mux Video.
Stars: ✭ 69 (+122.58%)
Mutual labels:  video-analytics
fritz-tools
Useful tools for AVM devices
Stars: ✭ 22 (-29.03%)
Mutual labels:  eva
fasttext-serving
Serve your fastText models for text classification and word vectors
Stars: ✭ 21 (-32.26%)
Mutual labels:  serving
Tensorflow template application
TensorFlow template application for deep learning
Stars: ✭ 1,851 (+5870.97%)
Mutual labels:  serving
tfModelServing4s
Reasonable API for serving TensorFlow models using Scala
Stars: ✭ 29 (-6.45%)
Mutual labels:  serving
bodywork-ml-pipeline-project
Deployment template for a continuous training pipeline.
Stars: ✭ 22 (-29.03%)
Mutual labels:  serving
tensorflow-serving-arm
TensorFlow Serving ARM - A project for cross-compiling TensorFlow Serving targeting popular ARM cores
Stars: ✭ 75 (+141.94%)
Mutual labels:  serving
mux-python
Official Mux API wrapper for python projects, supporting both Mux Data and Mux Video.
Stars: ✭ 34 (+9.68%)
Mutual labels:  video-analytics
pyextremes
Extreme Value Analysis (EVA) in Python
Stars: ✭ 89 (+187.1%)
Mutual labels:  eva
tator
Video analytics web platform
Stars: ✭ 66 (+112.9%)
Mutual labels:  video-analytics
Ray
An open source framework that provides a simple, universal API for building distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library.
Stars: ✭ 18,547 (+59729.03%)
Mutual labels:  serving
sagemaker-sparkml-serving-container
This code is used to build & run a Docker container for performing predictions against a Spark ML Pipeline.
Stars: ✭ 44 (+41.94%)
Mutual labels:  serving
Seldon Core
An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
Stars: ✭ 2,815 (+8980.65%)
Mutual labels:  serving
CrowdFlow
Optical Flow Dataset and Benchmark for Visual Crowd Analysis
Stars: ✭ 87 (+180.65%)
Mutual labels:  video-analytics
eva icons flutter
Flutter package for Eva Icons. Eva Icons is a pack of more than 480 beautifully crafted Open Source icons for common actions and items. https://pub.dartlang.org/packages/eva_icons_flutter
Stars: ✭ 80 (+158.06%)
Mutual labels:  eva
spark-ml-serving
Spark ML Lib serving library
Stars: ✭ 49 (+58.06%)
Mutual labels:  serving
VIAME
Video and Image Analytics for Multiple Environments
Stars: ✭ 200 (+545.16%)
Mutual labels:  video-analytics

EVA (Exploratory Video Analytics)

Badges: Build Status · Coverage Status · License · Documentation Status · Join the chat at https://gitter.im/georgia-tech-db/eva

What is EVA?

EVA is a visual data management system (think MySQL for videos). It supports a declarative language similar to SQL and a wide range of commonly used computer vision models.

What does EVA do?

  • EVA enables querying of visual data in user-facing applications by providing a simple SQL-like interface for a wide range of commonly used computer vision models (a sample query is sketched after this list).

  • EVA improves throughput by introducing sampling, filtering, and caching techniques.

  • EVA improves accuracy by introducing state-of-the-art model specialization and selection algorithms.
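
For example, once a video has been loaded (see Verify Installation below), a query in this SQL-like dialect can apply a vision model to its frames. The UDF name here is illustrative (it mirrors the object detector used in the tutorial notebook); substitute whichever model UDFs are registered in your installation.

SELECT id, FastRCNNObjectDetector(data) FROM MyVideo WHERE id < 20;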

Installation

Dependency

EVA requires Python 3.7 or later and Java 8. On Ubuntu, you can install Java with sudo -E apt install -y openjdk-8-jdk openjdk-8-jre. A quick environment check is sketched below.
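
A minimal sketch for checking these requirements from Python before installing (it only verifies the interpreter version and that a java executable is on the PATH, not the Java version itself):

import shutil
import sys

# EVA requires Python 3.7 or later.
print("Python:", sys.version.split()[0])
assert sys.version_info >= (3, 7), "EVA requires Python 3.7 or later"

# Only confirms that a `java` executable is on the PATH, not that it is Java 8.
print("java:", shutil.which("java") or "not found")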

Recommended

To install EVA, we recommend using a virtual environment and pip:

python3 -m venv env37
. env37/bin/activate
pip install --upgrade pip
pip install evatestdb

Install From Source

git clone https://github.com/georgia-tech-db/eva.git && cd eva
python3 -m venv env37
. env37/bin/activate
pip install --upgrade pip
sh script/antlr4/generate_parser.sh
pip install .
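
With either installation path, a quick sanity check before the fuller verification below is to import the package inside the activated environment (the top-level module name eva is an assumption based on the repository layout):

# Post-install sanity check; adjust the module name if the import fails.
import eva
print("EVA installed at:", eva.__file__)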

Verify Installation

  1. Set up the server and client
  • Activate the virtual environment: . env37/bin/activate

  • Launch EVA database Server: eva_server

  • Launch CLI: eva_client

  2. Run the UPLOAD command in the client terminal (using ua_detrac.mp4 as an example):
UPLOAD INFILE 'data/ua_detrac/ua_detrac.mp4' PATH 'test_video.mp4';
  3. Run the LOAD command in the client terminal (this may take a while):
LOAD DATA INFILE 'test_video.mp4' INTO MyVideo;
  4. Below is a basic query that should work on the client:
SELECT id, data FROM MyVideo WHERE id < 5;

Quickstart Tutorial

Configure GPU (Recommended)

  1. If your workstation has a GPU, you first need to set it up and configure it. Run the following command to check your hardware capabilities.

    ubuntu-drivers devices
    

    If you do have an NVIDIA GPU and it has not been configured yet, carefully follow all the steps at this link: https://towardsdatascience.com/deep-learning-gpu-installation-on-ubuntu-18-4-9b12230a1d31.

    Some pointers:

    • When installing NVIDIA drivers, check the correct driver version for your GPU to avoid compatibility issues.
    • When installing cuDNN, you will have to create an account. Make sure you get the correct deb files for your OS and architecture.
  2. You can run the following code in a Jupyter instance to verify that PyTorch can see your GPU.

    import torch
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    print(device)
    

    An output of cuda:0 indicates that a GPU is available. (Note: 0 is the index of the GPU in the system. If you have multiple GPUs, change the index accordingly.)

  3. Now configure the executor section in ~/.eva/eva.yml as follows:

    gpus: {'127.0.1.1': [0]}
    

    127.0.1.1 is the loopback address on which the EVA server is started, and 0 is the GPU index to be used. One way to list the GPU indices available on your machine is sketched below.
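
    A minimal sketch, using PyTorch, for listing the GPU indices on your machine to fill into the gpus mapping above (the '127.0.1.1' key simply mirrors the sample configuration and may differ in your setup):

    import torch

    # GPU indices visible to PyTorch; these are the values for the `gpus` list.
    indices = list(range(torch.cuda.device_count()))
    for i in indices:
        print(i, torch.cuda.get_device_name(i))

    # Mapping in the shape used by ~/.eva/eva.yml, per the snippet above.
    print({'127.0.1.1': indices})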

Sample Notebook

  1. Open a terminal instance and start the server:

    eva_server
    
  2. Open another terminal instance. Start a Jupyter Lab/Notebook instance and navigate to tutorials/object_detection.ipynb.

  3. You might have to install ipywidgets to visualize the input video and output. Follow the steps at https://ipywidgets.readthedocs.io/en/latest/user_install.html for your Jupyter environment (a quick import check is sketched after this list).

  4. Run the cells one by one; each cell is self-explanatory. If everything has been configured correctly, you should see an ipywidgets Video instance showing the bounding-box output of the executed query.
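
A quick check, run in a notebook cell, that ipywidgets is installed and can build a Video widget; the file name below is only a placeholder for a video on your machine:

    import os
    import ipywidgets
    from IPython.display import display

    print("ipywidgets", ipywidgets.__version__)

    # Placeholder file name; point this at any local video to test rendering.
    sample = "test_video.mp4"
    if os.path.exists(sample):
        display(ipywidgets.Video.from_file(sample))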

Documentation

You can find documentation and code snippets for EVA here.

Contributing

To file a bug or request a feature, please file a GitHub issue. Pull requests are welcome.

For information on installing from source and contributing to EVA, see our contributing guidelines.

Contributors

See the people page for the full listing of contributors.

License

Copyright (c) 2018-2020 Georgia Tech Database Group. Licensed under the Apache License.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].