
Ravoxsg / A_chronology_of_deep_learning

Licence: other
Tracing back and exposing in chronological order the main ideas in the field of deep learning, to help everyone better understand the current intense research in AI.


A chronology of deep learning

Hey everyone who is reading this!

So what is all the hype about deep learning? Why is everyone talking about it? What happened? Well, over the last three decades, a lot of great ideas have come out, leading to exceptional breakthroughs on the general benchmark tasks used to evaluate the performance of AI systems, such as image classification, speech recognition, etc. To give the bigger picture, this repository lists the main papers about deep learning in chronological order. This list of 78 selected papers covers all deep learning applications and research areas, including image recognition, machine translation, speech recognition, optimization and meta-learning. Citation counts are taken from Google Scholar.

Before the 1980s

1980s

1990s

Despite promising breakthroughs in the late 1980s, in the 1990s AI entered a new Winter era, during which there were few developments (especially compared to what happened in the 2010s). Deep learning approaches fell out of favor due to their mediocre performance, caused largely by a lack of training data and computational power.

  • Bengio's team was among the first to show how hard it can be to learn dependencies over long time spans:
    Learning long-term dependencies with gradient descent is difficult, Bengio et al., 1994, IEEE, 2418 citations
  • The wake-sleep algorithm inspired the autoencoder family of neural networks:
    The wake-sleep algorithm for unsupervised neural networks, Hinton et al., 1995, Science, 942 citations
  • Convolutional neural networks (CNNs) were developed in the early 1990s, mostly by Yann LeCun, and their broad application was described here:
    Convolutional networks for images, speech, and time-series, Yann LeCun & Yoshua Bengio, 1995, The Handbook of Brain Theory and Neural Networks, 1550 citations
  • LSTMs, still widely used today for sequence modeling, are actually quite an old invention:
    Long short-term memory, Hochreiter & Schmidhuber, 1997, Neural Computation, 9811 citations
  • Around the same time as LSTMs came the idea of training RNNs in both directions, so that hidden states have access to input elements from both the past and the future:
    Bidirectional recurrent neural networks, Schuster & Paliwal, 1997, IEEE Transactions on Signal Processing, 1167 citations
  • At the end of the 1990s, Yoshua Bengio and Yann LeCun, regarded today as two of the godfathers of deep learning, generalized document recognition via neural networks trained by gradient descent, and introduced Graph Transformer Networks:
    Gradient-based learning applied to document recognition, LeCun et al., 1998, IEEE, 12546 citations (!)
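The vanishing-gradient problem highlighted in the Bengio et al. 1994 entry above is easy to reproduce numerically. The sketch below is an illustrative toy, not code from any of the listed papers: it backpropagates through a small tanh RNN with a modest recurrent weight scale (the dimensions and 0.1 scale are arbitrary choices) and shows the gradient norm shrinking geometrically with the number of time steps.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 8, 50
W = 0.1 * rng.standard_normal((n, n))  # small recurrent weights
x = rng.standard_normal((T, n))        # random input sequence

h = np.zeros(n)
grad = np.eye(n)  # accumulates d h_t / d h_0
norms = []
for t in range(T):
    pre = W @ h + x[t]
    h = np.tanh(pre)
    # One-step Jacobian d h_t / d h_{t-1} = diag(1 - tanh(pre)^2) @ W.
    # Its norm is well below 1 here, so the product of Jacobians
    # (the long-range gradient) decays geometrically.
    J = np.diag(1.0 - h ** 2) @ W
    grad = J @ grad
    norms.append(np.linalg.norm(grad))

print(f"gradient norm after 1 step: {norms[0]:.3f}, after {T} steps: {norms[-1]:.2e}")
```

LSTMs address exactly this decay with an additively updated cell state whose gradient path avoids repeated multiplication by the recurrent weight matrix.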

2000s

This AI Winter continued until roughly 2006, when research in deep learning started to flourish again.

2010s

2010-2011

2012

2013

2014

2014 was really a seminal year for deep learning, with major contributions from a broad variety of groups.

2015

2016

2017
