
rwth-i6 / Returnn

Licence: other
The RWTH extensible training framework for universal recurrent neural networks


Projects that are alternatives of or similar to Returnn

Machine Learning Curriculum
💻 Make machines learn so that you don't have to struggle to program them; The ultimate list
Stars: ✭ 761 (+162.41%)
Mutual labels:  recurrent-neural-networks, theano
Rnn ctc
Recurrent Neural Network and Long Short-Term Memory (LSTM) with Connectionist Temporal Classification implemented in Theano. Includes a toy training example.
Stars: ✭ 220 (-24.14%)
Mutual labels:  recurrent-neural-networks, theano
Parrot
RNN-based generative models for speech.
Stars: ✭ 601 (+107.24%)
Mutual labels:  recurrent-neural-networks, theano
Punctuator2
A bidirectional recurrent neural network model with attention mechanism for restoring missing punctuation in unsegmented text
Stars: ✭ 483 (+66.55%)
Mutual labels:  recurrent-neural-networks, theano
rnn benchmarks
RNN benchmarks of pytorch, tensorflow and theano
Stars: ✭ 85 (-70.69%)
Mutual labels:  theano, recurrent-neural-networks
Komputation
Komputation is a neural network framework for the Java Virtual Machine written in Kotlin and CUDA C.
Stars: ✭ 295 (+1.72%)
Mutual labels:  gpu, recurrent-neural-networks
Theano Kaldi Rnn
THEANO-KALDI-RNNs is a project implementing various Recurrent Neural Networks (RNNs) for RNN-HMM speech recognition. The Theano Code is coupled with the Kaldi decoder.
Stars: ✭ 31 (-89.31%)
Mutual labels:  recurrent-neural-networks, theano
STORN-keras
This is a STORN (Stochastical Recurrent Neural Network) implementation for keras!
Stars: ✭ 23 (-92.07%)
Mutual labels:  theano, recurrent-neural-networks
Hdltex
HDLTex: Hierarchical Deep Learning for Text Classification
Stars: ✭ 191 (-34.14%)
Mutual labels:  gpu, recurrent-neural-networks
Aiopen
AIOpen is an aggregation project that categorizes AI open-source projects by the three elements of artificial intelligence (data, algorithms, computing power). It tracks current deep learning (DL) open-source projects, lists them as comprehensively as possible, and also includes some previously studied code, so that newcomers to AI gain a clearer and more complete picture of artificial intelligence (deep learning).
Stars: ✭ 62 (-78.62%)
Mutual labels:  gpu, theano
sequence-rnn-py
Sequence analyzing using Recurrent Neural Networks (RNN) based on Keras
Stars: ✭ 28 (-90.34%)
Mutual labels:  theano, recurrent-neural-networks
rindow-neuralnetworks
Neural networks library for machine learning on PHP
Stars: ✭ 37 (-87.24%)
Mutual labels:  gpu, recurrent-neural-networks
Lstm Human Activity Recognition
Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM RNN. Classifying the type of movement amongst six activity categories - Guillaume Chevalier
Stars: ✭ 2,943 (+914.83%)
Mutual labels:  recurrent-neural-networks
Hemi
Simple utilities to enable code reuse and portability between CUDA C/C++ and standard C/C++.
Stars: ✭ 275 (-5.17%)
Mutual labels:  gpu
Vuh
Vulkan compute for people
Stars: ✭ 264 (-8.97%)
Mutual labels:  gpu
Climatemachine.jl
Climate Machine: an Earth System Model that automatically learns from data
Stars: ✭ 266 (-8.28%)
Mutual labels:  gpu
Adaptnlp
An easy to use Natural Language Processing library and framework for predicting, training, fine-tuning, and serving up state-of-the-art NLP models.
Stars: ✭ 278 (-4.14%)
Mutual labels:  gpu
Recurrent Entity Networks
TensorFlow implementation of "Tracking the World State with Recurrent Entity Networks".
Stars: ✭ 276 (-4.83%)
Mutual labels:  recurrent-neural-networks
Ergo
🧠 A tool that makes AI easier.
Stars: ✭ 264 (-8.97%)
Mutual labels:  gpu
Carrot
🥕 Evolutionary Neural Networks in JavaScript
Stars: ✭ 261 (-10%)
Mutual labels:  recurrent-neural-networks

Welcome to RETURNN
==================

`GitHub repository <https://github.com/rwth-i6/returnn>`__. `RETURNN paper 2016 <https://arxiv.org/abs/1608.00895>`__, `RETURNN paper 2018 <https://arxiv.org/abs/1805.05225>`__.

RETURNN, the RWTH extensible training framework for universal recurrent neural networks, is a Theano/TensorFlow-based implementation of modern recurrent neural network architectures. It is optimized for fast and reliable training of recurrent neural networks in a multi-GPU environment.

The high-level features and goals of RETURNN are:

  • Simplicity

    • Writing a config / code is simple and straightforward (setting up an experiment, defining a model)
    • Debugging in case of problems is simple
    • Reading a config / code is simple (the defined model, training, and decoding all become clear)
  • Flexibility

    • Allow for many different kinds of experiments / models
  • Efficiency

    • Training speed
    • Decoding speed
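As an illustration of the config-driven setup, here is a minimal sketch of what a RETURNN config (a plain Python file) can look like. This is a hedged sketch, not a complete working setup: the layer names, dimensions, and model path below are placeholders, and the dataset definition is omitted entirely; see the basic-usage documentation for real configs.

```python
# Sketch of a RETURNN config file (placeholders throughout; not runnable as-is).
use_tensorflow = True
task = "train"

num_inputs = 40      # placeholder: input feature dimension
num_outputs = 2000   # placeholder: number of target classes

# The network is a dict of layers; "from" wires layers together.
network = {
    "lstm_fw": {"class": "rec", "unit": "lstm", "direction": 1, "n_out": 512},
    "lstm_bw": {"class": "rec", "unit": "lstm", "direction": -1, "n_out": 512},
    "output": {"class": "softmax", "loss": "ce", "from": ["lstm_fw", "lstm_bw"]},
}

batch_size = 5000
learning_rate = 0.001
num_epochs = 80
model = "/tmp/returnn-model/net"  # placeholder checkpoint prefix
```

Because the config is plain Python, setting up an experiment, reading it back later, and debugging it all happen in one familiar language.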

All of these are important for research; decoding speed is especially important for production.

See our `slides of the Interspeech 2020 tutorial "Efficient and Flexible Implementation of Machine Learning for ASR and MT" <https://www-i6.informatik.rwth-aachen.de/publications/download/1154/Zeyer--2020.pdf>`__, which introduce the core concepts.

More specific features include:

  • Mini-batch training of feed-forward neural networks
  • Sequence-chunking based batch training for recurrent neural networks
  • Long short-term memory recurrent neural networks including our own fast CUDA kernel
  • Multidimensional LSTM (GPU only, there is no CPU version)
  • Memory management for large data sets
  • Work distribution across multiple devices
  • Flexible and fast architecture which allows all kinds of encoder-attention-decoder models
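The sequence-chunking idea above can be sketched as follows. This is an illustrative simplification under assumed parameters (chunk size and step are free choices), not RETURNN's actual implementation, which additionally pads and packs the chunks into mini-batches.

```python
def chunk_sequence(seq, chunk_size, chunk_step):
    """Split one sequence into overlapping chunks of at most chunk_size frames.

    A sequence shorter than chunk_size yields a single chunk (the whole sequence).
    """
    chunks = []
    for start in range(0, max(len(seq) - chunk_size, 0) + 1, chunk_step):
        chunks.append(seq[start:start + chunk_size])
    return chunks

# Two toy "sequences" of different lengths become a flat list of
# similar-length chunks, which can then be batched together.
seqs = [list(range(10)), list(range(3))]
batch = [c for s in seqs for c in chunk_sequence(s, chunk_size=4, chunk_step=2)]
```

Chunking keeps batch shapes uniform even when sequence lengths vary widely, which is what makes efficient mini-batch training of recurrent networks possible.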

See the `documentation <http://returnn.readthedocs.io/>`__. See `basic usage <https://returnn.readthedocs.io/en/latest/basic_usage.html>`__ and the `technological overview <https://returnn.readthedocs.io/en/latest/tech_overview.html>`__.

Here is the `video recording of a RETURNN overview talk <https://www-i6.informatik.rwth-aachen.de/web/Software/returnn/downloads/workshop-2019-01-29/01.recording.cut.mp4>`__ (`slides <https://www-i6.informatik.rwth-aachen.de/web/Software/returnn/downloads/workshop-2019-01-29/01.returnn-overview.session1.handout.v1.pdf>`__, `exercise sheet <https://www-i6.informatik.rwth-aachen.de/web/Software/returnn/downloads/workshop-2019-01-29/01.exercise_sheet.pdf>`__; hosted by eBay).

There are many `example demos <https://github.com/rwth-i6/returnn/blob/master/demos/>`__ which work on artificially generated data, i.e. they should work as-is.

There are some `real-world examples <https://github.com/rwth-i6/returnn-experiments>`__, such as setups for speech recognition on the Switchboard or LibriSpeech corpus.

Some benchmark setups against other frameworks can be found `here <https://github.com/rwth-i6/returnn-benchmarks>`__. The results are in the `RETURNN paper 2016 <https://arxiv.org/abs/1608.00895>`__. Performance benchmarks of our LSTM kernel vs. CuDNN and other TensorFlow kernels are in the `TensorFlow LSTM benchmark <https://returnn.readthedocs.io/en/latest/tf_lstm_benchmark.html>`__.

There is also a `wiki <https://github.com/rwth-i6/returnn/wiki>`__. Questions can also be asked on StackOverflow using the `RETURNN tag <https://stackoverflow.com/questions/tagged/returnn>`__.

.. image:: https://github.com/rwth-i6/returnn/workflows/CI/badge.svg
   :target: https://github.com/rwth-i6/returnn/actions
