Phrasal - A large-scale statistical machine translation system written in Java.
Stars: ✭ 190 (-10.8%)
Visdial Rl - PyTorch code for Learning Cooperative Visual Dialog Agents using Deep Reinforcement Learning
Stars: ✭ 157 (-26.29%)
Cogcomp Nlpy - CogComp's light-weight Python NLP annotators
Stars: ✭ 115 (-46.01%)
Geotext - Geotext extracts country and city mentions from text
Stars: ✭ 91 (-57.28%)
Awesome Nlp Resources - This repository contains landmark research papers in Natural Language Processing that came out in this century.
Stars: ✭ 145 (-31.92%)
Deep Learning Drizzle - Drench yourself in Deep Learning, Reinforcement Learning, Machine Learning, Computer Vision, and NLP by learning from these exciting lectures!
Stars: ✭ 9,717 (+4461.97%)
Deep Math Machine Learning.ai - A blog about machine learning and deep learning algorithms and the math behind them, with machine learning algorithms written from scratch.
Stars: ✭ 173 (-18.78%)
Bert As Service - Mapping a variable-length sentence to a fixed-length vector using the BERT model
Stars: ✭ 9,779 (+4491.08%)
Unified Summarization - Official code for the paper "A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss".
Stars: ✭ 114 (-46.48%)
Seq2seq chatbot new - A TensorFlow implementation of a simple seq2seq-based dialogue system with embedding, attention, and beam search, trained on the Cornell Movie Dialogs corpus
Stars: ✭ 144 (-32.39%)
Bible text gcn - PyTorch implementation of "Graph Convolutional Networks for Text Classification"
Stars: ✭ 90 (-57.75%)
Knockknock - 🚪✊ Knock Knock: Get notified when your training ends with only two additional lines of code
Stars: ✭ 2,304 (+981.69%)
Multiffn Nli - Implementation of the multi-feed-forward network architecture of Parikh et al. (2016) for Natural Language Inference.
Stars: ✭ 89 (-58.22%)
Attribute Aware Attention - [ACM MM 2018] Attribute-Aware Attention Model for Fine-grained Representation Learning
Stars: ✭ 143 (-32.86%)
Germanwordembeddings - Toolkit to obtain and preprocess German corpora, train models using word2vec (gensim), and evaluate them with generated test sets
Stars: ✭ 189 (-11.27%)
Virtual Assistant - A Linux-based AI virtual assistant written in C
Stars: ✭ 88 (-58.69%)
Stanza Old - Stanford NLP group's shared Python tools.
Stars: ✭ 142 (-33.33%)
Neural kbqa - Knowledge Base Question Answering using memory networks
Stars: ✭ 87 (-59.15%)
Dive Into Dl Pytorch - Ports the MXNet implementations in the book "Dive into Deep Learning" (《动手学深度学习》) to PyTorch.
Stars: ✭ 14,234 (+6582.63%)
Semantic Texual Similarity Toolkits - Semantic Textual Similarity (STS) measures the degree of equivalence in the underlying semantics of paired snippets of text.
Stars: ✭ 87 (-59.15%)
Nlpaug - Data augmentation for NLP
Stars: ✭ 2,761 (+1196.24%)
Attention unet - Raw implementation of attention-gated U-Net in Keras
Stars: ✭ 85 (-60.09%)
Csa Inpainting - Coherent Semantic Attention for image inpainting (ICCV 2019)
Stars: ✭ 202 (-5.16%)
Eval On Nn Of Rc - Empirical Evaluation on Current Neural Networks on Cloze-style Reading Comprehension
Stars: ✭ 84 (-60.56%)
Efaqa Corpus Zh - ❤️ Emotional First Aid Dataset: a Chinese corpus of mental-health counseling Q&A and chatbot dialogue
Stars: ✭ 170 (-20.19%)
Greek Bert - A Greek edition of the BERT pre-trained language model
Stars: ✭ 84 (-60.56%)
Stringi - THE String Processing Package for R (with ICU)
Stars: ✭ 204 (-4.23%)
Texar - Toolkit for Machine Learning, Natural Language Processing, and Text Generation, in TensorFlow. This is part of the CASL project: http://casl-project.ai/
Stars: ✭ 2,236 (+949.77%)
Picanet Implementation - PyTorch implementation of "PiCANet: Learning Pixel-wise Contextual Attention for Saliency Detection"
Stars: ✭ 157 (-26.29%)
Geoman - TensorFlow implementation of GeoMAN (IJCAI-18)
Stars: ✭ 113 (-46.95%)
Vec4ir - Word Embeddings for Information Retrieval
Stars: ✭ 188 (-11.74%)
Document Classifier Lstm - A bidirectional LSTM with attention for multiclass/multilabel text classification.
Stars: ✭ 136 (-36.15%)
Attention Transfer - Improving Convolutional Networks via Attention Transfer (ICLR 2017)
Stars: ✭ 1,231 (+477.93%)
Open Sesame - A frame-semantic parsing system based on a softmax-margin SegRNN.
Stars: ✭ 170 (-20.19%)
Opennmt Tf - Neural machine translation and sequence learning using TensorFlow
Stars: ✭ 1,223 (+474.18%)
Deepmoji - State-of-the-art deep learning model for analyzing sentiment, emotion, sarcasm, etc.
Stars: ✭ 1,215 (+470.42%)
Minerva - Meandering In Networks of Entities to Reach Verisimilar Answers
Stars: ✭ 205 (-3.76%)
Adnet - Attention-guided CNN for image denoising (Neural Networks, 2020)
Stars: ✭ 135 (-36.62%)
Chinese Xlnet - Pre-trained Chinese XLNet language models
Stars: ✭ 1,213 (+469.48%)
Ernie - Simple State-of-the-Art BERT-Based Sentence Classification with Keras / TensorFlow 2. Built with HuggingFace's Transformers.
Stars: ✭ 170 (-20.19%)
Abigsurvey - A collection of 500+ survey papers on Natural Language Processing (NLP) and Machine Learning (ML)
Stars: ✭ 1,203 (+464.79%)
Rasa - 💬 Open-source machine learning framework to automate text- and voice-based conversations: NLU, dialogue management, connections to Slack, Facebook, and more. Create chatbots and voice assistants.
Stars: ✭ 13,219 (+6106.1%)
Monkeylearn Ruby - Official Ruby client for the MonkeyLearn API. Build and consume machine learning models for language processing from your Ruby apps.
Stars: ✭ 76 (-64.32%)
Hunspell Dict Ko - Korean spellchecking dictionary for Hunspell
Stars: ✭ 187 (-12.21%)
Rbert - Implementation of BERT in R
Stars: ✭ 114 (-46.48%)
Holiday Cn - 📅🇨🇳 Chinese statutory holiday data, scraped automatically every day from State Council announcements
Stars: ✭ 157 (-26.29%)
Tensorflow Nlp - NLP and Text Generation Experiments in TensorFlow 2.x / 1.x
Stars: ✭ 1,487 (+598.12%)
Pygat - PyTorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903)
Stars: ✭ 1,853 (+769.95%)
Speech signal processing and classification - Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification: developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian Mixture Model classifiers, k-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
Stars: ✭ 155 (-27.23%)
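The all-pole model behind the LPC features described in the entry above can be sketched with the autocorrelation method (solving the Yule-Walker equations). This is a minimal illustrative example, not code from the project; the function name and the synthetic AR(2) "frame" are assumptions for demonstration.

```python
import numpy as np

def lpc(frame, order):
    """Estimate linear prediction coefficients of one frame via the
    autocorrelation method (Yule-Walker normal equations)."""
    n = len(frame)
    # Autocorrelation of the frame up to lag `order`.
    r = np.array([np.dot(frame[:n - k], frame[k:]) for k in range(order + 1)])
    # Toeplitz system R a = r[1:], where R[i, j] = r[|i - j|].
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Synthetic AR(2) signal with known predictor coefficients, standing in
# for a short-term speech frame generated by an all-pole system.
rng = np.random.default_rng(0)
a_true = (0.75, -0.5)
x = np.zeros(4096)
noise = rng.standard_normal(4096)
for t in range(2, 4096):
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] + noise[t]

a_est = lpc(x, order=2)  # should land close to (0.75, -0.5)
```

In a real pipeline, `frame` would be a windowed 20-30 ms segment of speech, and the LPCs (or their cepstral transform) would feed the downstream GMM/kNN/DNN classifiers the entry mentions.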
Declutr - The corresponding code from our paper "DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations". Do not hesitate to open an issue if you run into any trouble!
Stars: ✭ 111 (-47.89%)