TextFeatureSelection: Python library for feature selection on text features. It provides a filter method, a genetic algorithm, and TextFeatureSelectionEnsemble for improving text classification models.
Stars: ✭ 42 (-31.15%)
core: WIP - A personal life helper providing solutions and happiness
Stars: ✭ 17 (-72.13%)
PersianQA: Persian (Farsi) Question Answering Dataset (+ Models)
Stars: ✭ 114 (+86.89%)
domain-attention: Code for the paper "Domain Attention Model for Multi-Domain Sentiment Classification"
Stars: ✭ 22 (-63.93%)
Image-Caption: Using LSTM or Transformer to solve image captioning in PyTorch
Stars: ✭ 36 (-40.98%)
denspi: Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index (DenSPI)
Stars: ✭ 188 (+208.2%)
PAM: [TPAMI 2020] Parallax Attention for Unsupervised Stereo Correspondence Learning
Stars: ✭ 62 (+1.64%)
SpeakerDiarization RNN CNN LSTM: Speaker diarization is the problem of separating speakers in an audio recording. There can be any number of speakers, and the final result should state when each speaker starts and ends. In this project, we analyze a given audio file with 2 channels and 2 speakers (on separate channels).
Stars: ✭ 56 (-8.2%)
Patient2Vec: A Personalized Interpretable Deep Representation of the Longitudinal Electronic Health Record
Stars: ✭ 85 (+39.34%)
VoiceNET.Library: .NET library to easily create a Voice Command Control feature.
Stars: ✭ 14 (-77.05%)
Hutoma-Conversational-AI-Platform: Hu:toma AI is an open-source stack designed to help you create compelling conversational interfaces with little effort and above-industry accuracy
Stars: ✭ 35 (-42.62%)
AoA-pytorch: A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering
Stars: ✭ 33 (-45.9%)
AttentionGatedVNet3D: Attention Gated VNet3D model for KiTS19, the 2019 Kidney Tumor Segmentation Challenge
Stars: ✭ 35 (-42.62%)
Video-Cap: 🎬 Video Captioning: ICCV '15 paper implementation
Stars: ✭ 44 (-27.87%)
minimal-nmt: A minimal NMT example to serve as a seq2seq+attention reference.
Stars: ✭ 36 (-40.98%)
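The core step such a seq2seq+attention reference revolves around can be sketched generically. The following is an illustrative NumPy version of dot-product attention (not code from the repo above), with made-up shapes for the encoder states and decoder query:

```python
import numpy as np

def attention(query, keys, values):
    """Dot-product attention: score each key against the query,
    softmax the scores, and return the weighted sum of values."""
    scores = keys @ query / np.sqrt(query.shape[-1])  # (T,) similarity scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax: non-negative, sums to 1
    context = weights @ values                        # (D,) weighted sum of values
    return context, weights

rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 8))    # 5 encoder states, dimension 8
values = rng.normal(size=(5, 8))  # often identical to the keys
query = rng.normal(size=(8,))     # current decoder state
context, weights = attention(query, keys, values)
```

At each decoding step the resulting `context` vector is concatenated with (or added to) the decoder state before predicting the next token.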
ntua-slp-semeval2018: Deep-learning models of the NTUA-SLP team submitted to SemEval 2018 tasks 1, 2 and 3.
Stars: ✭ 79 (+29.51%)
squadgym: Environment that can be used to evaluate the reasoning capabilities of artificial agents
Stars: ✭ 27 (-55.74%)
Nuts: A testbed for solutions to common natural language processing tasks (mainly text classification, sequence labeling, automatic question answering, etc.)
Stars: ✭ 21 (-65.57%)
LMFD-PAD: Learnable Multi-level Frequency Decomposition and Hierarchical Attention Mechanism for Generalized Face Presentation Attack Detection
Stars: ✭ 27 (-55.74%)
DAF3D: Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound
Stars: ✭ 60 (-1.64%)
lingvo--Ner-ru: Named entity recognition (NER) in Russian texts
Stars: ✭ 38 (-37.7%)
verseagility: Ramp up your custom natural language processing (NLP) task: bring your own data, use your preferred frameworks, and take models into production.
Stars: ✭ 23 (-62.3%)
memology: Memes - why so popular?
Stars: ✭ 32 (-47.54%)
Shukongdashi: An expert system for fault diagnosis in the CNC (computer numerical control) domain, built in Python using knowledge graphs, natural language processing, and convolutional neural networks
Stars: ✭ 109 (+78.69%)
OverlapPredator: [CVPR 2021, Oral] PREDATOR: Registration of 3D Point Clouds with Low Overlap.
Stars: ✭ 293 (+380.33%)
FreebaseQA: The release of the FreebaseQA dataset (NAACL 2019).
Stars: ✭ 55 (-9.84%)
ake-datasets: Large, curated set of benchmark datasets for evaluating automatic keyphrase extraction algorithms.
Stars: ✭ 125 (+104.92%)
iPerceive: Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention | Published in the IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Stars: ✭ 52 (-14.75%)
TransTQA: Transfer Learning for Technical Question Answering. Author: Wenhao Yu ([email protected]). EMNLP'20.
Stars: ✭ 12 (-80.33%)
phd-resources: Internet-Delivered Treatment using Adaptive Technology
Stars: ✭ 37 (-39.34%)
Multi-task-Conditional-Attention-Networks: A prototype version of our submitted paper "Conversion Prediction Using Multi-task Conditional Attention Networks to Support the Creation of Effective Ad Creatives".
Stars: ✭ 21 (-65.57%)
SANET: Arbitrary Style Transfer with Style-Attentional Networks
Stars: ✭ 105 (+72.13%)
PororoQA: PororoQA dataset, https://arxiv.org/abs/1707.00836
Stars: ✭ 26 (-57.38%)
embeddings: State-of-the-art text representations for natural language processing tasks; the initial version of the library focuses on the Polish language
Stars: ✭ 27 (-55.74%)
visdial: Visual Dialog: Light-weight Transformer for Many Inputs (ECCV 2020)
Stars: ✭ 27 (-55.74%)
TeBaQA: A question answering system which utilises machine learning.
Stars: ✭ 17 (-72.13%)
SelfAttentive: Implementation of "A Structured Self-attentive Sentence Embedding"
Stars: ✭ 107 (+75.41%)
dialogbot: Provides search-based, task-based, and generative dialogue models. Supports web-search question answering, domain-knowledge question answering, task-guided dialogue, and chit-chat; ready to use out of the box.
Stars: ✭ 96 (+57.38%)
strategyqa: The official code of the TACL 2021 paper "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies".
Stars: ✭ 27 (-55.74%)
FragmentVC: Any-to-any voice conversion by end-to-end extracting and fusing fine-grained voice fragments with attention
Stars: ✭ 134 (+119.67%)
resolutions-2019: A list of data mining and machine learning papers that I implemented in 2019.
Stars: ✭ 19 (-68.85%)
word2vec-tsne: Google News and Leo Tolstoy: Visualizing Word2Vec Word Embeddings using t-SNE.
Stars: ✭ 59 (-3.28%)
Medi-CoQA: Conversational Question Answering on Clinical Text
Stars: ✭ 22 (-63.93%)
enformer-pytorch: Implementation of Enformer, DeepMind's attention network for predicting gene expression, in PyTorch
Stars: ✭ 146 (+139.34%)
nystrom-attention: Implementation of Nyström self-attention, from the paper "Nyströmformer"
Stars: ✭ 83 (+36.07%)
egfr-att: Drug effect prediction using a neural network
Stars: ✭ 17 (-72.13%)
ibm-ai-day: Presentation for IBM Community Day AI
Stars: ✭ 13 (-78.69%)