
TSception

This is the PyTorch implementation of TSception, proposed in our paper:

Yi Ding, Neethu Robinson, Qiuhao Zeng, Dou Chen, Aung Aung Phyo Wai, Tih-Shih Lee, Cuntai Guan, "TSception: A Deep Learning Framework for Emotion Detection Using EEG", in IJCNN 2020 (WCCI'20), available on arXiv and IEEE Xplore.

It is an end-to-end deep learning framework that performs classification directly from raw EEG signals. A journal version of TSception using the DEAP dataset can be found at this website.

Requirement

python >= 3.6
torch >= 1.2.0
numpy == 1.16.4
h5py == 2.9.0
pathlib

Run the code

Please save the data into a folder and set the path of the data in 'PrepareData.py':

python PrepareData.py

After running the above script, a file named 'data_split.hdf' will be generated in the same directory as the script. Please set the location of 'data_split.hdf' in 'Train.py' before running it:

python Train.py

Acknowledgment

This code was double-checked by Qiuhao Zeng and Ravikiran Mane.

EEG data

Different from images, EEG data can be treated as a 2D time series whose dimensions are channels (EEG electrodes) and time (Fig. 1). The channels here are EEG electrodes rather than the RGB channels of an image or the input/output channels of convolutional layers. Because the electrodes are located at different positions on the surface of the head, the channel dimension carries the spatial information of the EEG, while the time dimension carries the temporal information. To train a classifier, the EEG signal is split into shorter time segments by a sliding window with a certain overlap along the time dimension; each segment becomes one input sample for the classifier.

Fig.1 EEG data. The height is the channel dimension and the width is the time dimension.
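The sliding-window segmentation described above can be sketched in NumPy (the window length and overlap below are illustrative values, not necessarily the ones used in the paper):

```python
import numpy as np

def segment_eeg(signal, window, overlap):
    """Split a (channels x time) EEG recording into overlapping segments.

    signal  : ndarray of shape (channels, time)
    window  : segment length in samples
    overlap : fraction of overlap between consecutive windows (0 <= overlap < 1)
    Returns an ndarray of shape (n_segments, channels, window).
    """
    step = int(window * (1 - overlap))
    starts = range(0, signal.shape[1] - window + 1, step)
    return np.stack([signal[:, s:s + window] for s in starts])

# Example: 4 channels, 10 s at 256 Hz, 1024-sample windows with 50% overlap.
eeg = np.random.randn(4, 2560)
segments = segment_eeg(eeg, window=1024, overlap=0.5)
print(segments.shape)  # (4, 4, 1024): 4 segments of 4 channels x 1024 samples
```

Each row of the resulting array is then one input sample for the classifier.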

Data to use

There are 2 subjects' data available for researchers to run the code. Please find the data in the folder named 'data' in this repo. The data was cleaned with a 0.3-45 Hz band-pass filter and ICA (using MNE). The files are in '.hdf' format. To load the data, please use:

import h5py

dataset = h5py.File('NAME.hdf', 'r')

After loading, the keys are 'data' for the data and 'label' for the labels. The dimension of the data is (trials x channels x data points); the dimension of the label is (trials x data points). To access the data and label, please use:

data = dataset['data']

label = dataset['label']
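As a self-contained illustration of this loading pattern (the file name 'demo.hdf' and the array sizes below are made up for the example, not those of the shipped data):

```python
import numpy as np
import h5py

# Write a small dummy file with the same key layout as the repo's '.hdf' files
# (the sizes here are illustrative only).
with h5py.File('demo.hdf', 'w') as f:
    f['data'] = np.random.randn(10, 4, 1024)  # trials x channels x data points
    f['label'] = np.zeros((10, 1))            # trials x label

# Read it back the same way as above, loading into memory as ndarrays.
with h5py.File('demo.hdf', 'r') as dataset:
    data = dataset['data'][()]
    label = dataset['label'][()]

print(data.shape, label.shape)  # (10, 4, 1024) (10, 1)
```

Using the file as a context manager ensures the handle is closed after reading.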

The visualizations of the 2 subjects' data are shown in Fig.3:

Fig.3 Visualizations of the 2 subjects' data. Amplitudes of the data are in µV.

Structure of TSception

TSception can be divided into 3 main parts: the temporal learner, the spatial learner, and the classifier (Fig. 2). The input is fed into the temporal learner first, followed by the spatial learner. Finally, the feature vector is passed through 2 fully connected layers to map it to the corresponding label. The dimension of an input EEG segment is (channels x 1 x timepoints_per_segment); in our case it is (4 x 1 x 1024), since there are 4 channels and 1024 data points per channel. There are 9 kernels for each type of temporal kernel in the temporal learner, and 6 kernels for each type of spatial kernel in the spatial learner. The multi-scale temporal convolutional kernels operate on the input data in parallel. After each convolution, ReLU and average pooling are applied to the feature map. The outputs of the temporal kernels at each scale are concatenated along the feature dimension, after which batch normalization is applied. In the spatial learner, a global kernel and a hemisphere kernel are used to extract spatial information. Specifically, the outputs of the two spatial kernels are concatenated along the channel dimension after ReLU and average pooling. The flattened feature map is fed into a fully connected layer; after the dropout layer and softmax activation function, the classification result is generated. For more details, please see the comments in the code and our paper.

Fig.2 TSception structure
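To make the structure above concrete, here is a minimal PyTorch sketch of the idea. Everything in it is an illustrative assumption rather than the repository's actual implementation: the class name MiniTSception, the temporal kernel lengths (64/32/16), the pooling sizes, and the use of LazyLinear to infer the flattened size.

```python
import torch
import torch.nn as nn

class MiniTSception(nn.Module):
    """Sketch of multi-scale temporal kernels plus global/hemisphere
    spatial kernels; hyperparameters are illustrative, not the paper's."""

    def __init__(self, n_channels=4, n_classes=2, n_temporal=9, n_spatial=6):
        super().__init__()
        # Multi-scale temporal kernels: 1 x k convolutions along time.
        self.temporal = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, n_temporal, kernel_size=(1, k)),
                nn.ReLU(),
                nn.AvgPool2d(kernel_size=(1, 8), stride=(1, 8)),
            )
            for k in (64, 32, 16)
        ])
        self.bn_t = nn.BatchNorm2d(n_temporal)
        # Spatial kernels: one over all channels (global) and one over
        # half of the channels at a time (hemisphere).
        self.spatial_global = nn.Sequential(
            nn.Conv2d(n_temporal, n_spatial, kernel_size=(n_channels, 1)),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=(1, 4), stride=(1, 4)),
        )
        self.spatial_hemi = nn.Sequential(
            nn.Conv2d(n_temporal, n_spatial,
                      kernel_size=(n_channels // 2, 1),
                      stride=(n_channels // 2, 1)),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=(1, 4), stride=(1, 4)),
        )
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.LazyLinear(n_classes)  # infers the flattened size

    def forward(self, x):                    # x: (batch, 1, channels, time)
        # Run the temporal kernels in parallel, concat along the time axis.
        t = torch.cat([conv(x) for conv in self.temporal], dim=-1)
        t = self.bn_t(t)
        # Global and hemisphere kernels, concat along the height axis.
        s = torch.cat([self.spatial_global(t), self.spatial_hemi(t)], dim=2)
        out = self.dropout(torch.flatten(s, start_dim=1))
        return torch.softmax(self.fc(out), dim=1)

model = MiniTSception()
probs = model(torch.randn(8, 1, 4, 1024))   # batch of 8 EEG segments
print(probs.shape)  # torch.Size([8, 2])
```

Each segment of shape (1 x 4 x 1024) thus yields a probability over the emotion classes; the real layer widths and kernel scales are in the repository's code.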

CBCR License

Permissions: Modification, Distribution, Private use
Limitations: Commercial use ⚠️
Conditions: License and copyright notice

Cite

Please cite our paper if you use our code in your own work:

@INPROCEEDINGS{9206750,
  author={Y. {Ding} and N. {Robinson} and Q. {Zeng} and D. {Chen} and A. A. {Phyo Wai} and T. -S. {Lee} and C. {Guan}},
  booktitle={2020 International Joint Conference on Neural Networks (IJCNN)}, 
  title={TSception: A Deep Learning Framework for Emotion Detection Using EEG}, 
  year={2020},
  volume={},
  number={},
  pages={1-7},
  doi={10.1109/IJCNN48605.2020.9206750}}