audeering / opensmile
The Munich Open-Source Large-Scale Multimedia Feature Extractor

Projects that are alternatives of or similar to opensmile

SPHORB
feature detector and descriptor for spherical panorama
Stars: ✭ 66 (-76.43%)
Mutual labels:  feature-extraction
pyHSICLasso
Versatile Nonlinear Feature Selection Algorithm for High-dimensional Data
Stars: ✭ 125 (-55.36%)
Mutual labels:  feature-extraction
NTFk.jl
Unsupervised Machine Learning: Nonnegative Tensor Factorization + k-means clustering
Stars: ✭ 36 (-87.14%)
Mutual labels:  feature-extraction
image features
Extract deep learning features from images using simple python interface
Stars: ✭ 84 (-70%)
Mutual labels:  feature-extraction
50-days-of-Statistics-for-Data-Science
This repository consists of a 50-day program covering all the statistics required for a complete understanding of data science.
Stars: ✭ 19 (-93.21%)
Mutual labels:  feature-extraction
Bike-Sharing-Demand-Kaggle
Top 5th percentile solution to the Kaggle knowledge problem - Bike Sharing Demand
Stars: ✭ 33 (-88.21%)
Mutual labels:  feature-extraction
pyAudioProcessing
Audio feature extraction and classification
Stars: ✭ 165 (-41.07%)
Mutual labels:  feature-extraction
antropy
AntroPy: entropy and complexity of (EEG) time-series in Python
Stars: ✭ 111 (-60.36%)
Mutual labels:  feature-extraction
time-series-classification
Classifying time series using feature extraction
Stars: ✭ 75 (-73.21%)
Mutual labels:  feature-extraction
towhee
Towhee is a framework that is dedicated to making neural data processing pipelines simple and fast.
Stars: ✭ 821 (+193.21%)
Mutual labels:  feature-extraction
SIFT-BoF
Feature extraction using SIFT+BoF.
Stars: ✭ 22 (-92.14%)
Mutual labels:  feature-extraction
mildnet
Visual Similarity research at Fynd. Contains code to reproduce 2 of our research papers.
Stars: ✭ 76 (-72.86%)
Mutual labels:  feature-extraction
PyTorch-Model-Compare
Compare neural networks by their feature similarity
Stars: ✭ 119 (-57.5%)
Mutual labels:  feature-extraction
Graph-Based-TC
Graph-based framework for text classification
Stars: ✭ 24 (-91.43%)
Mutual labels:  feature-extraction
Bag-of-Visual-Words
🎒 Bag of Visual words (BoW) approach for object classification and detection in images together with SIFT feature extractor and SVM classifier.
Stars: ✭ 39 (-86.07%)
Mutual labels:  feature-extraction
Python Computer Vision from Scratch
This repository explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply…
Stars: ✭ 219 (-21.79%)
Mutual labels:  feature-extraction
GeobitNonrigidDescriptor ICCV 2019
C++ implementation of the nonrigid descriptor Geobit presented at ICCV 2019 "GEOBIT: A Geodesic-Based Binary Descriptor Invariant to Non-Rigid Deformations for RGB-D Images"
Stars: ✭ 11 (-96.07%)
Mutual labels:  feature-extraction
autoencoders tensorflow
Automatic feature engineering using deep learning and Bayesian inference using TensorFlow.
Stars: ✭ 66 (-76.43%)
Mutual labels:  feature-extraction
Speech Feature Extraction
Feature extraction of speech signal is the initial stage of any speech recognition system.
Stars: ✭ 78 (-72.14%)
Mutual labels:  feature-extraction
lung-image-analysis
A basic framework for pulmonary nodule detection and characterization in CT
Stars: ✭ 26 (-90.71%)
Mutual labels:  feature-extraction


openSMILE (open-source Speech and Music Interpretation by Large-space Extraction) is a complete, open-source toolkit for audio analysis, processing, and classification, especially targeted at speech and music applications such as automatic speech recognition, speaker identification, emotion recognition, beat tracking, and chord detection.

It is written purely in C++, has a fast, efficient, and flexible architecture, and runs on desktop, mobile, and embedded platforms such as Linux, Windows, macOS, Android, iOS and Raspberry Pi.

See also the standalone opensmile Python package for an easy-to-use wrapper if you are working in Python.
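As a sketch of what the Python wrapper looks like (assuming the opensmile package is installed via pip; the chosen feature set and the input file name are illustrative, not prescribed by this README):

```python
# Minimal sketch using the standalone `opensmile` Python package
# (pip install opensmile). FeatureSet/FeatureLevel names follow the
# package's published API; 'speech.wav' is a placeholder input file.
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,
    feature_level=opensmile.FeatureLevel.Functionals,
)
features = smile.process_file('speech.wav')  # returns a pandas DataFrame
print(features.shape)  # functionals produce one row per input file
```

The wrapper bundles the openSMILE binaries, so no separate build step is needed when working from Python.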

What's new

Please see our blog post on audeering.com for a summary of the new features in version 3.0.

Quick start

Pre-built x64 binaries for Windows, Linux, and macOS are provided on the Releases page. Alternatively, follow the steps below to build openSMILE yourself.

For more details on how to customize builds, build for other platforms, and use openSMILE, see Section Get started in the documentation.

Linux/macOS

Prerequisites:

  • gcc/g++ or Clang with C++11 support must be installed.
  • CMake 3.5.1 or later needs to be installed and in the PATH.
  1. In build_flags.sh, set build flags and options as desired.
  2. Run bash build.sh.

Build files will be generated in the ./build subdirectory. You can find the main SMILExtract binary in ./build/progsrc/smilextract.
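Once built, a typical invocation runs the binary with one of the bundled configuration files (a sketch; the config path and file names below are examples and may differ between versions):

```shell
# Run the compiled SMILExtract with a bundled demo config.
# Paths are illustrative; adjust them to your checkout.
cd build/progsrc/smilextract
./SMILExtract -C ../../../config/demo/demo1_energy.conf \
              -I input.wav -O energy.csv
```

Here -C selects the feature-extraction configuration, -I the input audio file, and -O the output file.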

Windows

Prerequisites:

  • Visual Studio 2017 or higher with C++ components is required.
  • CMake 3.15 or later needs to be installed and in the PATH.
  1. In build_flags.ps1, set build flags and options as desired.
  2. Run powershell -ExecutionPolicy Bypass -File build.ps1.

Build files will be generated in the ./build subdirectory. You can find the main SMILExtract.exe binary in ./build/progsrc/smilextract.

Documentation

You can find extensive documentation with step-by-step instructions on how to build openSMILE and get started at https://audeering.github.io/opensmile/.

History

The toolkit was first developed at the Institute for Human-Machine Communication at the Technische Universität München in Munich, Germany. It was started within the SEMAINE EU-FP7 research project. The toolkit is now owned and maintained by audEERING GmbH, who provide intelligent audio analysis solutions, automatic speech emotion recognition, and paralinguistic speech analysis software packages as well as consulting and development services on these topics.

Contributing and Support

We welcome contributions! For feedback and technical support, please use the issue tracker.

Licensing

openSMILE follows a dual-licensing model. Since the main goal of the project is widespread use of the software to facilitate research in the field of machine learning from audio-visual signals, the source code and binaries are freely available for private, research, and educational use under an open-source license (see LICENSE). The open-source version of openSMILE may not be used in any sort of commercial product. Fundamental research in companies, for example, is permitted, but if a product results from that research, we require you to buy a commercial development license. Contact us at [email protected] (or visit us at https://www.audeering.com) for more information.

Original authors: Florian Eyben, Felix Weninger, Martin Wöllmer, Björn Schuller
Copyright © 2008-2013, Institute for Human-Machine Communication, Technische Universität München, Germany
Copyright © 2013-2015, audEERING UG (haftungsbeschränkt)
Copyright © 2016-2022, audEERING GmbH

Citing

Please cite openSMILE in your publications by citing the following paper:

Florian Eyben, Martin Wöllmer, Björn Schuller: "openSMILE - The Munich Versatile and Fast Open-Source Audio Feature Extractor", Proc. ACM Multimedia (MM), ACM, Florence, Italy, ISBN 978-1-60558-933-6, pp. 1459-1462, 25.-29.10.2010.
