NeuroTechX / Moabb

License: BSD-3-Clause
Mother of All BCI Benchmarks


Mother of All BCI Benchmarks

banner

Build a comprehensive benchmark of popular BCI algorithms applied to an extensive list of freely available EEG datasets.

Disclaimer

This is a work in progress. The API will change significantly (as will the results of the benchmark).


Welcome!

First and foremost, Welcome! 🎉 Willkommen! 🎊 Bienvenue! 🎈🎈🎈

Thank you for visiting the Mother of All BCI Benchmarks repository.

This document (the README file) is a hub to give you some information about the project. Jump straight to one of the sections below, or just scroll down to find out more.

We also have a recent paper in JNE.

What are we doing?

The problem

  • Reproducible Research in BCI has a long way to go.
  • While many BCI datasets are made freely available, researchers do not publish code, and reproducing the results required to benchmark new algorithms is trickier than it should be.
  • Performance can be significantly affected by the parameters of the preprocessing steps, the toolboxes used, and implementation “tricks” that are almost never reported in the literature.

As a result, there is no comprehensive benchmark of BCI algorithms, and newcomers spend a tremendous amount of time browsing the literature to find out which algorithm works best and on which dataset.

The solution

The Mother of All BCI Benchmarks will:

  • Build a comprehensive benchmark of popular BCI algorithms applied to an extensive list of freely available EEG datasets.
  • The code will be made available on GitHub, serving as a reference point for future algorithmic developments.
  • Algorithms can be ranked and promoted on a website, providing a clear picture of the different solutions available in the field.

This project will be successful when we read in an abstract “ … the proposed method obtained a score of 89% on the MOABB (Mother of All BCI Benchmark), outperforming the state of the art by 5% ...”.

Who are we?

The founder of the Mother of all BCI Benchmark is Alexander Barachant. He is currently working with Vinay Jayaram to update and maintain the codebase. This project is under the umbrella of NeuroTechX, the international community for NeuroTech enthusiasts.

What do we need?

You! In whatever way you can help.

We need expertise in programming, user experience, software sustainability, documentation and technical writing and project management.

We'd love your feedback along the way.

Our primary goal is to build a comprehensive benchmark of popular BCI algorithms applied to an extensive list of freely available EEG datasets, and we're excited to support the professional development of any and all of our contributors. If you're looking to learn to code, try out working collaboratively, or translate your skills to the digital domain, we're here to help.

Get involved

If you think you can help in any of the areas listed above (and we bet you can) or in any of the many areas that we haven't yet thought of (and here we're sure you can) then please check out our contributors' guidelines and our roadmap.

Please note that it's very important to us that we maintain a positive and supportive environment for everyone who wants to participate. When you join us we ask that you follow our code of conduct in all interactions both on and offline.

Contact us

If you want to report a problem or suggest an enhancement we'd love for you to open an issue at this github repository because then we can get right on it. But you can also reach us on the NeuroTechX slack #moabb channel where we are happy to help!

Find out more

You might be interested in:

And of course, you'll want to know our:

Thank you

Thank you so much (Danke schön! Merci beaucoup!) for visiting the project, and we do hope that you'll join us on this amazing journey to build a comprehensive benchmark of popular BCI algorithms applied to an extensive list of freely available EEG datasets.

Installation:

You must be running Python 3.6.

To install, fork or clone the repository and go to the downloaded directory, then run

pip install -r requirements.txt
python setup.py develop    # because no stable release yet

Requirements we use

mne numpy scipy scikit-learn matplotlib seaborn pandas pyriemann h5py

Running:

Verify Installation

Once it is installed, you can verify that everything runs correctly with

python -m unittest moabb.tests

Run MOABB

python -m moabb.run --verbose

Documentation:

http://moabb.neurotechx.com/docs/

Supported datasets:

The list of supported datasets can be found here: http://moabb.neurotechx.com/docs/datasets.html

Submit a new dataset

You can submit a new dataset by filling out this form. The datasets currently on our radar can be seen here.

Architecture and main concepts:

banner

There are four main concepts in MOABB: datasets, paradigms, evaluations, and pipelines. In addition, we offer statistical and visualization utilities to simplify the workflow.

Datasets

A dataset handles and abstracts low-level access to the data. The dataset takes data stored locally, in the format in which it was downloaded, and converts it into an MNE Raw object. There are options to pool all the different recording sessions per subject or to evaluate them separately.
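As a rough illustration (not MOABB's actual API; the function and key names here are hypothetical stand-ins), the data a dataset exposes can be pictured as a nested mapping from subject to session to run, where pooling sessions simply means concatenating recordings:

```python
# Illustrative sketch only: shows the nested subject -> session -> run
# structure. Real MOABB datasets return MNE Raw objects; plain NumPy
# arrays stand in for them here.
import numpy as np

def fake_get_data(subjects):
    """Hypothetical stand-in for a dataset accessor: 2 sessions, 1 run each."""
    rng = np.random.default_rng(0)
    return {
        subject: {
            session: {"run_0": rng.standard_normal((8, 1000))}  # 8 chans x 1000 samples
            for session in ("session_0", "session_1")
        }
        for subject in subjects
    }

data = fake_get_data(subjects=[1, 2])

# Evaluate sessions separately...
per_session = [data[1][s]["run_0"] for s in data[1]]
# ...or pool all sessions of a subject into one long recording.
pooled = np.concatenate(per_session, axis=1)
print(pooled.shape)  # (8, 2000)
```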

Paradigm

A paradigm defines how the raw data will be converted into trials ready to be processed by a decoding algorithm. This depends on the experimental paradigm: in motor imagery, for example, one can have two-class, multi-class, or continuous paradigms; similarly, different preprocessing is necessary for ERP versus ERD paradigms.

Evaluations

An evaluation defines how we go from trials per subject and session to a generalization statistic (AUC, F1-score, accuracy, etc.). It can be within-session accuracy, across-session within-subject accuracy, across-subject accuracy, or another transfer learning setting.
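The difference between two of these schemes can be illustrated with plain scikit-learn primitives on synthetic features (a conceptual sketch, not MOABB's evaluation code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic features for two sessions of one subject (50 trials, 10 features).
X_s1, y_s1 = rng.standard_normal((50, 10)), rng.integers(0, 2, 50)
X_s2, y_s2 = rng.standard_normal((50, 10)), rng.integers(0, 2, 50)

clf = LogisticRegression()

# Within-session: cross-validate inside a single session.
within = cross_val_score(clf, X_s1, y_s1, cv=5).mean()

# Across-session, within-subject: train on session 1, test on session 2.
across = clf.fit(X_s1, y_s1).score(X_s2, y_s2)
print(within, across)  # both near chance (0.5) on random data
```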

Pipelines

A pipeline defines all the steps required by an algorithm to obtain predictions. Pipelines are typically a chain of sklearn-compatible transformers ending with an sklearn-compatible estimator. See Pipelines for more info.
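A minimal sklearn-compatible pipeline of the kind described above might look as follows. This is a generic sketch on synthetic trials; in practice one would use EEG-specific transformers (e.g. covariance estimation and Riemannian classifiers from pyriemann) rather than naive flattening:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.linear_model import LogisticRegression

# Trials arrive as (n_trials, n_channels, n_samples); flatten them so a
# generic sklearn estimator can consume them.
flatten = FunctionTransformer(lambda X: X.reshape(len(X), -1))
pipeline = make_pipeline(flatten, StandardScaler(), LogisticRegression())

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 8, 100))  # 40 trials, 8 channels, 100 samples
y = rng.integers(0, 2, 40)

pipeline.fit(X, y)
preds = pipeline.predict(X)
print(preds.shape)  # (40,)
```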

Statistics and visualization

Once an evaluation has been run, the raw results are returned as a DataFrame. This can be further processed via the following commands to generate some basic visualization and statistical comparisons:

from moabb.analysis import analyze

results = evaluation.process(pipeline_dict)
analyze(results)

Generate the documentation

To generate the documentation:

cd docs
make html