
fgadaleta / Deeplearning Ahem Detector

License: MIT

Projects that are alternatives of or similar to Deeplearning Ahem Detector

Getcards
Notebook to download machine learning flashcards
Stars: ✭ 435 (-2.03%)
Mutual labels:  jupyter-notebook
China job survey
Stats of Chinese developers (statistics on the employment of Chinese programmers)
Stars: ✭ 441 (-0.68%)
Mutual labels:  jupyter-notebook
Tcav
Code for the TCAV ML interpretability project
Stars: ✭ 442 (-0.45%)
Mutual labels:  jupyter-notebook
Code search
Code For Medium Article: "How To Create Natural Language Semantic Search for Arbitrary Objects With Deep Learning"
Stars: ✭ 436 (-1.8%)
Mutual labels:  jupyter-notebook
Dsp Theory
Theory of digital signal processing (DSP): signals, filtration (IIR, FIR, CIC, MAF), transforms (FFT, DFT, Hilbert, Z-transform) etc.
Stars: ✭ 437 (-1.58%)
Mutual labels:  jupyter-notebook
Deeplearningzerotoall
TensorFlow Basic Tutorial Labs
Stars: ✭ 4,239 (+854.73%)
Mutual labels:  jupyter-notebook
Tensorflow Lstm Regression
Sequence prediction using recurrent neural networks(LSTM) with TensorFlow
Stars: ✭ 433 (-2.48%)
Mutual labels:  jupyter-notebook
Modsimpy
Text and supporting code for Modeling and Simulation in Python
Stars: ✭ 443 (-0.23%)
Mutual labels:  jupyter-notebook
Generative Models
Annotated, understandable, and visually interpretable PyTorch implementations of: VAE, BIRVAE, NSGAN, MMGAN, WGAN, WGANGP, LSGAN, DRAGAN, BEGAN, RaGAN, InfoGAN, fGAN, FisherGAN
Stars: ✭ 438 (-1.35%)
Mutual labels:  jupyter-notebook
Reinforcement learning tutorial with demo
Reinforcement Learning Tutorial with Demo: DP (Policy and Value Iteration), Monte Carlo, TD Learning (SARSA, QLearning), Function Approximation, Policy Gradient, DQN, Imitation, Meta Learning, Papers, Courses, etc..
Stars: ✭ 442 (-0.45%)
Mutual labels:  jupyter-notebook
Tigramite
Tigramite is a time series analysis python module for causal discovery. The Tigramite documentation is at
Stars: ✭ 435 (-2.03%)
Mutual labels:  jupyter-notebook
Monk object detection
A one-stop repository for low-code easily-installable object detection pipelines.
Stars: ✭ 437 (-1.58%)
Mutual labels:  jupyter-notebook
Nglview
Jupyter widget to interactively view molecular structures and trajectories
Stars: ✭ 440 (-0.9%)
Mutual labels:  jupyter-notebook
Finbert
Financial Sentiment Analysis with BERT
Stars: ✭ 433 (-2.48%)
Mutual labels:  jupyter-notebook
Publaynet
Stars: ✭ 442 (-0.45%)
Mutual labels:  jupyter-notebook
Pandas Cookbook
Pandas Cookbook, published by Packt
Stars: ✭ 434 (-2.25%)
Mutual labels:  jupyter-notebook
Lucid
A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+878.38%)
Mutual labels:  jupyter-notebook
Pytorch Maml
PyTorch implementation of MAML: https://arxiv.org/abs/1703.03400
Stars: ✭ 444 (+0%)
Mutual labels:  jupyter-notebook
Python Ml Course
Curso de Introducción a Machine Learning con Python
Stars: ✭ 442 (-0.45%)
Mutual labels:  jupyter-notebook
Practical Deep Learning Book
Official code repo for the O'Reilly Book - Practical Deep Learning for Cloud, Mobile & Edge
Stars: ✭ 441 (-0.68%)
Mutual labels:  jupyter-notebook

Deep Learning 'ahem' detector


The ahem detector is a deep convolutional neural network, trained on transformed audio signals, that recognizes "ahem" sounds. The network has been trained to detect such sounds in episodes of Data Science at Home, the podcast about data science at podcast.datascienceathome.com.
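The exact transformation is implemented in the repository's `make_data` scripts; conceptually, each audio chunk becomes a time-frequency image that the convolutional network treats like a picture. A minimal sketch of such a spectrogram transform (hypothetical frame and hop sizes, assuming mono audio and NumPy, not the project's actual preprocessing):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Split a 1-D audio signal into overlapping windowed frames and take
    the magnitude of the FFT of each frame, yielding a 2-D time-frequency
    image suitable as CNN input."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    # keep only the positive-frequency half of each real-input spectrum
    mags = np.abs(np.fft.rfft(frames, axis=1))
    # log scaling compresses the dynamic range, as is common for audio
    return np.log1p(mags)

# one second of synthetic audio at 16 kHz
sig = np.random.randn(16000)
img = spectrogram(sig)
print(img.shape)  # (124, 129): time frames x frequency bins
```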

Slides and some technical details are provided here.

Two sets of audio files are required, much like in a cohort study:

  • a negative sample with clean voice/sound, and

  • a positive one with concatenated "ahem" sounds

While the detector was built for the aforementioned audio files, it can be generalized to any other audio input, provided enough data are available. The minimum required is roughly 10 seconds for the positive samples and about 3 minutes for the negative cohort. The network adapts to the training data and can then perform detection on different speakers' voices.
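The two cohorts can then be cut into fixed-length, labelled windows for training. A sketch of that step, with hypothetical window sizes and the durations mentioned above (~3 minutes negative, ~10 seconds positive), on synthetic stand-in audio:

```python
import numpy as np

def make_windows(signal, label, win=4000, hop=2000):
    """Cut a recording into fixed-length, half-overlapping windows,
    each paired with the class label of the recording it came from."""
    xs = [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
    return np.stack(xs), np.full(len(xs), label)

# stand-in cohort sizes from the text: ~3 min clean, ~10 s "ahem", 16 kHz
clean = np.random.randn(16000 * 180)   # negative cohort
ahem = np.random.randn(16000 * 10)     # positive cohort

x0, y0 = make_windows(clean, 0)
x1, y1 = make_windows(ahem, 1)
X = np.concatenate([x0, x1])
y = np.concatenate([y0, y1])
print(X.shape, y.shape)  # (1518, 4000) (1518,)
```

The heavy class imbalance (far more negative than positive windows) is exactly why the text asks for only a short positive recording but several minutes of clean audio.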

How do I get set up?

Once the training audio files are in place, load the training set and train the network with the code in the IPython notebook. Make sure to create the local folder that is hardcoded in the script files below, and build the training/testing sets before running the notebook. First execute

% python make_data_class_0.py
% python make_data_class_1.py

A GPU is recommended: under the conditions specific to this example, at least 5 epochs are required to reach ~81% accuracy.
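The actual model is the convolutional network defined in the notebook. As a framework-free illustration of the epoch-based training loop, here is a logistic-regression stand-in on toy data (NumPy only; this is not the real CNN or the real spectrograms):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in data: two separable clusters instead of real spectrograms
X = np.concatenate([rng.normal(-1, 1, (200, 10)), rng.normal(1, 1, (200, 10))])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b = np.zeros(10), 0.0
for epoch in range(5):                      # the README reports ~5 epochs
    p = 1 / (1 + np.exp(-(X @ w + b)))      # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)         # logistic-loss gradients
    grad_b = (p - y).mean()
    w -= 0.5 * grad_w                       # full-batch gradient step
    b -= 0.5 * grad_b

acc = ((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"accuracy after 5 epochs: {acc:.2f}")
```

The real training replaces the logistic model with a convolutional network over spectrogram images, which is where the GPU pays off.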

How do I clean a new dirty audio file?

A new audio file must be transformed in the same way as the training files. This can be done with

% python make_data_newsample.py

Then follow the IPython notebook, which is commented thoroughly enough to proceed without particular issues.
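Once the new file is transformed and each window is classified, cleaning amounts to dropping the windows flagged as positive and stitching the rest back together. A rough sketch of that final step (the `classify` callback and the energy-threshold stand-in below are hypothetical, not the trained network):

```python
import numpy as np

def clean_audio(signal, classify, win=4000):
    """Slide a non-overlapping window over the recording, drop the
    windows the classifier flags as 'ahem', and concatenate the rest."""
    kept = [signal[i:i + win]
            for i in range(0, len(signal) - win + 1, win)
            if not classify(signal[i:i + win])]
    return np.concatenate(kept) if kept else np.array([])

# stand-in classifier: flag windows whose mean energy exceeds a threshold
noisy = np.concatenate([np.zeros(8000), np.ones(4000) * 5, np.zeros(4000)])
cleaned = clean_audio(noisy, lambda w: (w ** 2).mean() > 1.0)
print(len(cleaned))  # 12000: the loud 4000-sample segment is removed
```

In the real pipeline, `classify` would run each window's spectrogram through the trained network and threshold its output probability.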

License and Copyright Notice

MIT License Copyright (c) 2016 Francesco Gadaleta

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
