s3prl / S3PRL

License: MIT
Self-Supervised Speech Pre-training and Representation Learning Toolkit.

Programming Languages

Python

Projects that are alternatives to or similar to S3PRL

dropclass speaker
DropClass and DropAdapt - repository for the paper accepted to Speaker Odyssey 2020
Stars: ✭ 20 (-95.26%)
Mutual labels:  representation-learning
Pykg2vec
Python library for knowledge graph embedding and representation learning.
Stars: ✭ 280 (-33.65%)
Mutual labels:  representation-learning
Contrastive Predictive Coding
Keras implementation of Representation Learning with Contrastive Predictive Coding
Stars: ✭ 369 (-12.56%)
Mutual labels:  representation-learning
MaskedFaceRepresentation
Masked face recognition focuses on identifying people using their facial features while they are wearing masks. We introduce benchmarks on face verification based on masked face images for the development of COVID-safe protocols in airports.
Stars: ✭ 17 (-95.97%)
Mutual labels:  representation-learning
Decagon
Graph convolutional neural network for multirelational link prediction
Stars: ✭ 268 (-36.49%)
Mutual labels:  representation-learning
Simclr
PyTorch implementation of SimCLR: A Simple Framework for Contrastive Learning of Visual Representations by T. Chen et al.
Stars: ✭ 293 (-30.57%)
Mutual labels:  representation-learning
RG-Flow
This is project page for the paper "RG-Flow: a hierarchical and explainable flow model based on renormalization group and sparse prior". Paper link: https://arxiv.org/abs/2010.00029
Stars: ✭ 58 (-86.26%)
Mutual labels:  representation-learning
Modelsgenesis
Official Keras & PyTorch Implementation and Pre-trained Models for Models Genesis - MICCAI 2019
Stars: ✭ 416 (-1.42%)
Mutual labels:  representation-learning
Swem
The Tensorflow code for this ACL 2018 paper: "Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms"
Stars: ✭ 279 (-33.89%)
Mutual labels:  representation-learning
Gatne
Source code and dataset for KDD 2019 paper "Representation Learning for Attributed Multiplex Heterogeneous Network"
Stars: ✭ 343 (-18.72%)
Mutual labels:  representation-learning
VFS
Rethinking Self-Supervised Correspondence Learning: A Video Frame-level Similarity Perspective, in ICCV 2021 (Oral)
Stars: ✭ 109 (-74.17%)
Mutual labels:  representation-learning
HiCMD
[CVPR2020] Hi-CMD: Hierarchical Cross-Modality Disentanglement for Visible-Infrared Person Re-Identification
Stars: ✭ 64 (-84.83%)
Mutual labels:  representation-learning
Smore
SMORe: Modularize Graph Embedding for Recommendation
Stars: ✭ 307 (-27.25%)
Mutual labels:  representation-learning
srVAE
VAE with RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.
Stars: ✭ 56 (-86.73%)
Mutual labels:  representation-learning
Disentangling Vae
Experiments for understanding disentanglement in VAE latent representations
Stars: ✭ 398 (-5.69%)
Mutual labels:  representation-learning
disent
🧶 Modular VAE disentanglement framework for python built with PyTorch Lightning ▸ Including metrics and datasets ▸ With strongly supervised, weakly supervised and unsupervised methods ▸ Easily configured and run with Hydra config ▸ Inspired by disentanglement_lib
Stars: ✭ 41 (-90.28%)
Mutual labels:  representation-learning
Representation Learning On Heterogeneous Graph
Representation-Learning-on-Heterogeneous-Graph
Stars: ✭ 289 (-31.52%)
Mutual labels:  representation-learning
Awesome Vaes
A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
Stars: ✭ 418 (-0.95%)
Mutual labels:  representation-learning
Graphwaveletneuralnetwork
A PyTorch implementation of "Graph Wavelet Neural Network" (ICLR 2019)
Stars: ✭ 404 (-4.27%)
Mutual labels:  representation-learning
Self Label
Self-labelling via simultaneous clustering and representation learning. (ICLR 2020)
Stars: ✭ 324 (-23.22%)
Mutual labels:  representation-learning




What's New

  • Jan 2021: Readme updated with detailed instructions on how to use our latest version!
  • Dec 2020: We are migrating to a newer version for more general, flexible, and scalable code. See the introduction below for more information! The legacy version can be accessed by checking out the tag v0.1.0: git checkout v0.1.0.

Introduction

  • This is an open source toolkit called S3PRL, which stands for Self-Supervised Speech Pre-training and Representation Learning.
  • In this toolkit, various upstream self-supervised speech models are available with easy-to-load setups, and downstream evaluation tasks are available with easy-to-use scripts.
  • Feel free to use or modify our toolkit in your research; any bug report or improvement suggestion will be appreciated.
  • If you have any questions, please open a new issue.
  • If you find this toolkit helpful to your research, please do consider citing our papers, thanks!
List of papers that used our toolkit (Feel free to add your own paper by making a pull request)


Table of Contents

  • Installation
  • Using upstreams
  • Using downstreams
  • Train upstream models
  • Development pattern for contributors
  • Reference Repos
  • Citation

Installation

  • Python >= 3.6
  • PyTorch version >= 1.7.0
  • For pre-training new upstream models, you'll also need high-end GPU(s).
  • To develop locally, install s3prl by:
git clone https://github.com/s3prl/s3prl.git
cd s3prl
pip install -r requirements.txt
  • If you encounter errors with a specific upstream model, look into the README.md under that upstream's folder.
  • To use upstream models through the hub interface, cloning this repo is not required; only the requirements.txt in the root directory and the one located in each upstream folder are needed. A minimal loading example is sketched under Using upstreams below.

Back to Top


Using upstreams
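As a quick orientation, below is a minimal sketch of loading an upstream model through the torch.hub interface mentioned in the Installation section. The upstream name wav2vec2 and the exact output format are assumptions here; see the hub documentation and each upstream's README for the definitive usage.

import torch

# Load an upstream model via torch.hub; cloning this repo is not required.
# 'wav2vec2' is an assumed example name; other upstreams are exposed by name.
model = torch.hub.load('s3prl/s3prl', 'wav2vec2')
model.eval()

# Upstreams consume a list of 1-D waveform tensors sampled at 16 kHz.
wavs = [torch.randn(16000) for _ in range(4)]

with torch.no_grad():
    reps = model(wavs)  # extracted representations; format depends on the upstream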

Back to Top


Using downstreams

  • Warning: we are still developing and testing some downstream tasks; documentation for a task will be added once it has been fully tested.
  • Instructions are documented here: Downstream README
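As a rough preview of those instructions, a representative invocation is sketched below; the script name and flags are assumptions, so treat the Downstream README as authoritative.

# Assumed example (verify against the Downstream README):
#   -m train          run in training mode
#   -u wav2vec2       upstream providing the speech representations
#   -d example        downstream task to solve
#   -n my_first_exp   experiment name used for logs and checkpoints
python run_downstream.py -m train -u wav2vec2 -d example -n my_first_exp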

Back to Top


Train upstream models

  • If you wish to train your own upstream models, please follow the instructions here: Pretrain README
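For orientation, a hypothetical pretraining invocation is sketched below; the script name, flags, and config path are all assumptions, so treat the Pretrain README as the authoritative reference.

# Assumed example (verify against the Pretrain README):
#   -u mockingjay    upstream architecture to pretrain
#   -g <config>      model/training configuration file (path is illustrative)
#   -n my_pretrain   experiment name used for logs and checkpoints
python run_pretrain.py -u mockingjay -g pretrain/mockingjay/config_model.yaml -n my_pretrain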

Back to Top


Development pattern for contributors

  1. Create a personal fork of the main S3PRL repository on GitHub.
  2. Make your changes in a branch named something other than master, e.g. new-awesome-feature (see the command sketch below).
  3. Contact us if you have any questions during development.
  4. Generate a pull request through the web interface of GitHub.
  5. Please verify that your code is free of basic mistakes; we appreciate any contribution!
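For reference, the workflow above corresponds roughly to the following standard git commands; the <your-username> placeholder and the branch name are illustrative:

git clone https://github.com/<your-username>/s3prl.git
cd s3prl
git checkout -b new-awesome-feature
# ...edit and commit your changes...
git push origin new-awesome-feature
# then open a pull request against s3prl/s3prl on GitHub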

Back to Top


Reference Repos

Back to Top

Citation

  • The S3PRL Toolkit:
@misc{S3PRL,
  author = {Andy T. Liu and Yang Shu-wen},
  title = {S3PRL: The Self-Supervised Speech Pre-training and Representation Learning Toolkit},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  url = {https://github.com/s3prl/s3prl}
}