aoso3 / Real-Time-Abnormal-Events-Detection-and-Tracking-in-Surveillance-System

License: MIT

Programming Languages

C#

Projects that are alternatives to or similar to Real-Time-Abnormal-Events-Detection-and-Tracking-in-Surveillance-System

Modosc
A set of Max abstractions designed for computing motion descriptors from raw motion capture data in real time.
Stars: ✭ 24 (-31.43%)
Mutual labels:  real-time, motion
Opticalflow visualization
Python optical flow visualization following Baker et al. (ICCV 2007) as used by the MPI-Sintel challenge
Stars: ✭ 183 (+422.86%)
Mutual labels:  motion, optical-flow
Hidden Two Stream
Caffe implementation for "Hidden Two-Stream Convolutional Networks for Action Recognition"
Stars: ✭ 179 (+411.43%)
Mutual labels:  real-time, optical-flow
Fastmot
High-performance multiple object tracking based on YOLO, Deep SORT, and optical flow
Stars: ✭ 284 (+711.43%)
Mutual labels:  real-time, optical-flow
briefmatch
BriefMatch real-time GPU optical flow
Stars: ✭ 36 (+2.86%)
Mutual labels:  real-time, optical-flow
platyplus
Low-code, offline-first apps with Hasura
Stars: ✭ 22 (-37.14%)
Mutual labels:  real-time
motion-transitioning-objc
Light-weight API for building UIViewController transitions.
Stars: ✭ 24 (-31.43%)
Mutual labels:  motion
optimo
Keyframe-based motion editing system using numerical optimization [CHI 2018]
Stars: ✭ 22 (-37.14%)
Mutual labels:  motion
Custom-Object-Detection-using-Darkflow
Build a custom object dataset and detect the objects using Darkflow, a TensorFlow translation of Darknet.
Stars: ✭ 21 (-40%)
Mutual labels:  real-time
face-recognition
A GPU-accelerated real-time face recognition system based on classical machine learning algorithms
Stars: ✭ 24 (-31.43%)
Mutual labels:  pattern-recognition
mute-structs
MUTE-structs is a TypeScript library that provides an implementation of the LogootSplit CRDT algorithm.
Stars: ✭ 14 (-60%)
Mutual labels:  real-time
asana-webhooks-manager
Asana Webhooks Manager (AWM) is a free and open source management and event handling server, written in JavaScript (NodeJS, Angular) for Asana's webhooks API. Use AWM to manage webhooks subscriptions and accept event payloads from Asana in real-time. Want to create your own Asana Dashboard? Consider AWM as your starting point!
Stars: ✭ 23 (-34.29%)
Mutual labels:  real-time
ChatService
ChatService (SignalR).
Stars: ✭ 26 (-25.71%)
Mutual labels:  real-time
buildings-wave
🏤 A tutorial on how to create a 3D building wave animation with three.js and TweenMax
Stars: ✭ 66 (+88.57%)
Mutual labels:  motion
ByteTrack
ByteTrack: Multi-Object Tracking by Associating Every Detection Box
Stars: ✭ 1,991 (+5588.57%)
Mutual labels:  real-time
reactors
Maintain state, incorporate change, broadcast deltas. Reboot on error.
Stars: ✭ 17 (-51.43%)
Mutual labels:  real-time
pyconvsegnet
Semantic Segmentation PyTorch code for our paper: Pyramidal Convolution: Rethinking Convolutional Neural Networks for Visual Recognition (https://arxiv.org/pdf/2006.11538.pdf)
Stars: ✭ 32 (-8.57%)
Mutual labels:  pattern-recognition
Metaboverse
Visualizing and Analyzing Metabolic Networks with Reaction Pattern Recognition
Stars: ✭ 17 (-51.43%)
Mutual labels:  pattern-recognition
TorrentsDuck
A multi-user BitTorrent client with a responsive web UI that quacks 🦆
Stars: ✭ 42 (+20%)
Mutual labels:  real-time
SiamFC-tf
A TensorFlow implementation of the SiamFC tracker; use it with your own camera and video, or integrate it into your own project (real-time object tracking with a wrapped API, ready to integrate into your own projects)
Stars: ✭ 22 (-37.14%)
Mutual labels:  real-time

Real-Time-Abnormal-Event-Detection-And-Tracking-In-Video

The main abnormal behaviors that this project can detect are violence, covering the camera, choking, lying down, running, and motion in restricted areas. The system is flexible: users choose which abnormal behaviors should be detected, and every detected abnormal event is logged so that it can be reviewed later. We used three methods to detect abnormal behaviors: the motion influence map, pattern recognition models, and the state event model. For multi-camera tracking, we combined a single-camera tracking algorithm with a spatially based algorithm.

Video

[Demo video]

Requirements

Emgu
Telerik
Accord
Accord.MachineLearning
Accord.Math
MediaToolkit
Newtonsoft.Json
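
Most of these dependencies are available on NuGet; one possible setup from the Package Manager Console is sketched below. The package IDs reflect current NuGet listings and may differ from the exact versions this project was built against, and the Telerik UI components are distributed through Telerik's own installer rather than public NuGet.

Install-Package Emgu.CV
Install-Package Accord.MachineLearning
Install-Package Accord.Math
Install-Package MediaToolkit
Install-Package Newtonsoft.Json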


System Implementation

The following class diagram describes the system:

[Class diagram]

1- Motion Influence Map

Introduction

Suspicious movement can be divided into two types: internal and external. An internal event occurs in a small area of the scene, such as the sudden appearance of an object (for example, a bicycle or a car) in an area where people normally walk, or one person moving rapidly while everyone else moves slowly. An external event, by contrast, involves the scene as a whole, as when many people suddenly flee together.

[Diagram: general framework of the proposed system]

The diagram illustrates the general framework of the proposed system. Given an input video sequence, motion information is computed first at the pixel level and then at the block level. The kinetic energy of each block is then used to build the motion influence map, a structure that captures both the temporal and spatial properties of the scene in a feature vector. To separate normal events from suspicious ones, we apply the K-means algorithm to learn the centers of the normal events in the scene; a monitored scene is then compared against these centers using the Euclidean distance and is flagged as suspicious when the distance exceeds a certain threshold.

Algorithm Steps

1- Motion Descriptor: We estimate the motion information indirectly from the optical flow. Specifically, after computing the optical flow for every pixel within a frame, we partition the frame into M-by-N uniform blocks, indexed {B1, B2, ..., BMN}, and compute a representative optical flow for each block by averaging the optical flows of the pixels within it.
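
As a concrete illustration, here is a minimal C# sketch of this block-averaging step. It assumes the per-pixel flow components flowX and flowY have already been computed (for example with Emgu CV's CvInvoke.CalcOpticalFlowFarneback); the BlockFlow struct and all names are illustrative rather than the project's actual code.

public struct BlockFlow { public float Dx, Dy; }

public static class MotionDescriptor
{
    // Average the dense per-pixel optical flow over an M x N grid of blocks.
    public static BlockFlow[,] AverageFlowPerBlock(float[,] flowX, float[,] flowY, int m, int n)
    {
        int rows = flowX.GetLength(0), cols = flowX.GetLength(1);
        int blockH = rows / m, blockW = cols / n;   // assumes the frame divides evenly
        var blocks = new BlockFlow[m, n];

        for (int bi = 0; bi < m; bi++)
            for (int bj = 0; bj < n; bj++)
            {
                double sumX = 0, sumY = 0;
                int count = blockH * blockW;
                for (int y = bi * blockH; y < (bi + 1) * blockH; y++)
                    for (int x = bj * blockW; x < (bj + 1) * blockW; x++)
                    {
                        sumX += flowX[y, x];
                        sumY += flowY[y, x];
                    }
                // Representative optical flow of block B(bi, bj).
                blocks[bi, bj] = new BlockFlow { Dx = (float)(sumX / count), Dy = (float)(sumY / count) };
            }
        return blocks;
    }
}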

2- Motion Influence Map: The direction of a pedestrian's movement within a crowd is affected by several factors, such as obstacles on the road, neighboring pedestrians, and moving vehicles. This characteristic reaction is called motion influence. A block is considered to be under the influence of another moving object, and the influence is determined by two factors: the direction of movement and the speed of movement. The faster an object moves, the larger the number of neighboring blocks affected by its movement, and nearby blocks are affected more strongly than remote ones. The influence weights are computed pairwise between blocks; after computing them for all blocks, we accumulate, for each block, the weights contributed by every influencing block into a single motion influence vector, and these vectors together form the motion influence map, which expresses the patterns of motion influence within the scene.

[Diagram: stages of the motion influence map algorithm]
The diagram above briefly illustrates the stages of the motion influence map algorithm (a sketch of the weighting step follows the list):
A- Compute the optical flow.
B- Compute the motion influence between blocks.
C- Compute the influence weights between each pair of blocks.
D- Accumulate the influence weights into a vector for each block.
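
A minimal C# sketch of the core computation in steps C and D: the influence weight of one moving block on another. The 45-degree cone around the motion direction and the exponential decay with distance are illustrative assumptions that match the properties described above, not necessarily the exact formula of the original paper.

using System;

public static class MotionInfluence
{
    // Influence of moving block i on block j: zero unless j lies inside a cone
    // around i's motion direction; it decays with distance and grows with speed.
    public static double InfluenceWeight(
        double cxI, double cyI,   // centre of block i
        double dxI, double dyI,   // representative flow of block i
        double cxJ, double cyJ)   // centre of block j
    {
        double speed = Math.Sqrt(dxI * dxI + dyI * dyI);
        if (speed < 1e-6) return 0;                    // a static block influences nothing

        double tx = cxJ - cxI, ty = cyJ - cyI;
        double dist = Math.Sqrt(tx * tx + ty * ty);
        if (dist < 1e-6) return 0;                     // skip the block itself

        // Only blocks roughly in the direction of motion are affected.
        double cosAngle = (tx * dxI + ty * dyI) / (dist * speed);
        if (cosAngle < Math.Cos(Math.PI / 4)) return 0;

        // Faster motion reaches farther; nearby blocks are affected more.
        return Math.Exp(-dist / speed);
    }
}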

3- Feature Extraction: Once the motion influence map of a scene has been built, a block containing a suspicious event can be located by its distinctive motion influence vector. An activity, however, spans several consecutive frames, so we group adjacent blocks over a certain number of frames into a mega block. Each frame is thus divided into a set of mega blocks, each carrying the motion influence of its blocks, and finally we extract a spatio-temporal feature for each mega block by accumulating the motion influence vectors of its blocks over the frames of the window.

[Diagram: mega-block feature extraction]
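
A sketch of this accumulation in C#, assuming each block carries an 8-bin orientation histogram of its motion influence per frame; the data layout is illustrative.

public static class MegaBlockFeatures
{
    // Spatio-temporal feature of one mega block: the 8-bin motion-influence
    // histograms of its blocks, summed over every frame of the temporal window.
    // hist[t][bi, bj] is the 8-bin histogram of block (bi, bj) at frame t.
    public static float[] MegaBlockFeature(float[][,][] hist, int row0, int col0, int size)
    {
        var feature = new float[8];
        for (int t = 0; t < hist.Length; t++)
            for (int bi = row0; bi < row0 + size; bi++)
                for (int bj = col0; bj < col0 + size; bj++)
                    for (int k = 0; k < 8; k++)
                        feature[k] += hist[t][bi, bj][k];
        return feature;
    }
}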


4- Detection and Localization: For each mega block, we then perform K-means clustering on the spatio-temporal features. Note that in the training stage we use only video clips of normal activities; the codewords of a mega block therefore model the patterns of the usual activities that can occur in the respective area, and a test feature that lies far from every codeword indicates an unusual event in that area.
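
A minimal sketch of this stage using Accord.MachineLearning, which is already in the requirements list; the number of codewords and the distance threshold are illustrative.

using System;
using Accord.MachineLearning;

public static class MegaBlockModel
{
    // Learn k codewords per mega block from features of normal clips only.
    public static double[][] LearnCodewords(double[][] normalFeatures, int k)
    {
        var kmeans = new KMeans(k);
        return kmeans.Learn(normalFeatures).Centroids;
    }

    // A test feature is unusual when it is far from every normal codeword.
    public static bool IsSuspicious(double[] feature, double[][] codewords, double threshold)
    {
        double best = double.MaxValue;
        foreach (double[] c in codewords)
        {
            double d = 0;
            for (int i = 0; i < feature.Length; i++)
            {
                double diff = feature[i] - c[i];
                d += diff * diff;
            }
            best = Math.Min(best, Math.Sqrt(d));
        }
        return best > threshold;
    }
}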

2- Pattern Recognition and State Event Model

Steps

1- Corner Detector: Apply the "Good Features to Track" algorithm.

[Detected corners]
2- Lucas-Kanade Optical Flow: Apply pyramidal Lucas-Kanade optical flow to the extracted corners (a sketch of these two steps appears below).

[Optical flow on the extracted corners]
3- Classification
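
A sketch of steps 1 and 2 with Emgu CV, which the project lists among its requirements. The CvInvoke calls mirror OpenCV's goodFeaturesToTrack and calcOpticalFlowPyrLK; the parameter values (corner count, window size, termination criteria) are illustrative, not the project's actual settings.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;
using Emgu.CV.Util;

public static class SparseFlow
{
    // prevGray and currGray are consecutive grayscale frames.
    public static void TrackCorners(Mat prevGray, Mat currGray)
    {
        // Step 1: Good Features to Track corner detection.
        var prevPts = new VectorOfPointF();
        CvInvoke.GoodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 10);

        // Step 2: pyramidal Lucas-Kanade optical flow on the detected corners.
        var currPts = new VectorOfPointF();
        var status = new VectorOfByte();
        var error = new VectorOfFloat();
        CvInvoke.CalcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts,
            status, error, new Size(21, 21), 3, new MCvTermCriteria(30, 0.01));

        // status[i] == 1 where corner i was tracked; the displacements
        // currPts[i] - prevPts[i] feed the classification step.
    }
}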

This project is based on the following papers:


https://ieeexplore.ieee.org/document/7024902
https://www.researchgate.net/publication/221362278_Anomaly_Detection_in_Crowded_Scenes
https://www.researchgate.net/publication/220278263_Motion-based_unusual_event_detection_in_human_crowds
