context2vec: PyTorch implementation of context2vec from Melamud et al., CoNLL 2016
Stars: ✭ 18 (-89.71%)
wikidata-corpus: Train word2vec on a Wikidata corpus for word-embedding tasks
Stars: ✭ 109 (-37.71%)
lda2vec: Mixing Dirichlet Topic Models and Word Embeddings to Make lda2vec, from the paper https://arxiv.org/abs/1605.02019
Stars: ✭ 27 (-84.57%)
QuestionClustering: Question classifier written in Python 3, implemented in the following video: https://youtu.be/qnlW1m6lPoY
Stars: ✭ 15 (-91.43%)
word2vec-tsne: Google News and Leo Tolstoy: visualizing Word2Vec word embeddings using t-SNE.
Stars: ✭ 59 (-66.29%)
SiameseCBOW: Implementation of Siamese CBOW in Keras with a TensorFlow backend.
Stars: ✭ 14 (-92%)
SIFRank: Code for the paper "SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model"
Stars: ✭ 96 (-45.14%)
datastories-semeval2017-task6: Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison".
Stars: ✭ 20 (-88.57%)
JoSH: [KDD 2020] Hierarchical Topic Mining via Joint Spherical Tree and Text Embedding
Stars: ✭ 55 (-68.57%)
Active-Explainable-Classification: A set of tools for leveraging pre-trained embeddings, active learning, and model explainability for efficient document classification
Stars: ✭ 28 (-84%)
compress-fasttext: Tools for shrinking fastText models (in gensim format)
Stars: ✭ 124 (-29.14%)
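To give a flavor of what "shrinking" an embedding model means, here is a minimal numpy sketch (not compress-fasttext's actual API, which additionally uses product quantization): keep only the most frequent vocabulary rows and store them in half precision. The `shrink_embeddings` helper and its inputs are hypothetical.

```python
import numpy as np

def shrink_embeddings(vectors, freqs, keep=2):
    """Illustrative shrinking, not compress-fasttext's real API:
    keep only the `keep` most frequent rows, stored as float16."""
    order = np.argsort(freqs)[::-1][:keep]  # indices of most frequent words
    return {int(i): vectors[i].astype(np.float16) for i in order}

# Hypothetical 4-word vocabulary with per-word frequencies.
vecs = np.ones((4, 3), dtype=np.float32)
freqs = np.array([10, 40, 30, 20])
small = shrink_embeddings(vecs, freqs, keep=2)
```

Pruning plus reduced precision alone already cuts memory by well over half; the real library layers quantization on top of this.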
pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference
Stars: ✭ 62 (-64.57%)
word-benchmarks: Benchmarks for intrinsic evaluation of word embeddings.
Stars: ✭ 45 (-74.29%)
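Intrinsic word-similarity benchmarks of the kind collected in word-benchmarks typically score a model by correlating its cosine similarities with human ratings via Spearman's rho. A minimal numpy sketch with hypothetical embeddings and ratings:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(a, b):
    # Spearman rho = Pearson correlation of rank-transformed scores
    # (double argsort yields 0-based ranks; assumes no ties).
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Toy embeddings and human similarity ratings (hypothetical data).
emb = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.9, 0.1, 0.05]),
    "car": np.array([0.0, 0.9, 0.4]),
    "bus": np.array([0.1, 0.8, 0.5]),
}
pairs = [("cat", "dog", 9.0), ("car", "bus", 8.5),
         ("cat", "car", 1.5), ("dog", "bus", 2.0)]
model_scores = np.array([cosine(emb[a], emb[b]) for a, b, _ in pairs])
human_scores = np.array([gold for _, _, gold in pairs])
rho = spearman(model_scores, human_scores)
```

On this toy data the model's similarity ranking matches the human ranking exactly, so rho is 1.0; real benchmarks report rho over hundreds of pairs.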
contextualLSTM: Contextual LSTM for deep-learning NLP tasks such as word prediction and word-embedding creation
Stars: ✭ 28 (-84%)
dasem: Danish semantic analysis
Stars: ✭ 17 (-90.29%)
word2vec-on-wikipedia: A pipeline for training word embeddings with word2vec on a Wikipedia corpus.
Stars: ✭ 68 (-61.14%)
S-WMD: Code for Supervised Word Mover's Distance (SWMD)
Stars: ✭ 90 (-48.57%)
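The supervised variant learns a metric on top of the optimal-transport problem; a cheaper lower bound used throughout the WMD literature is the relaxed WMD, where each word simply travels to its nearest neighbor in the other document. A numpy sketch with uniform word weights (hypothetical embeddings, not the S-WMD code):

```python
import numpy as np

def relaxed_wmd(doc_a, doc_b, emb):
    """Relaxed Word Mover's Distance lower bound: each word in one
    document moves all of its mass to its nearest word in the other;
    the tighter bound is the max over the two directions."""
    A = np.stack([emb[w] for w in doc_a])
    B = np.stack([emb[w] for w in doc_b])
    # Pairwise Euclidean distances between the documents' word vectors.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    a_to_b = d.min(axis=1).mean()  # uniform word weights
    b_to_a = d.min(axis=0).mean()
    return max(a_to_b, b_to_a)
```

The full (supervised) WMD replaces the min with an optimal transport plan and, in S-WMD, learns a linear transform of the embedding space from labeled documents.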
PersianNER: Named-entity recognition in the Persian language
Stars: ✭ 48 (-72.57%)
fuzzymax: Code for the paper "Don't Settle for Average, Go for the Max: Fuzzy Sets and Max-Pooled Word Vectors", ICLR 2019.
Stars: ✭ 43 (-75.43%)
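The core idea behind fuzzymax is simple enough to sketch: a sentence vector built by elementwise max over its word vectors keeps, per dimension, the strongest activation of any word, which the paper reads as a fuzzy-set union. A minimal numpy illustration (not the repo's code; embeddings are hypothetical):

```python
import numpy as np

def max_pool_sentence(words, emb):
    # Elementwise max over word vectors: per dimension, the sentence
    # keeps the strongest activation of any of its words.
    return np.stack([emb[w] for w in words]).max(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Compared with averaging, max-pooling lets a single strongly activated word dominate a dimension instead of being diluted by the rest of the sentence.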
wefe: WEFE, the Word Embeddings Fairness Evaluation Framework, standardizes bias measurement and mitigation in word embedding models. Feel welcome to open an issue if you have any questions, or a pull request if you want to contribute to the project!
Stars: ✭ 164 (-6.29%)
sister: SImple SenTence EmbeddeR
Stars: ✭ 66 (-62.29%)
Word2VecfJava: Java implementation of Dependency-Based Word Embeddings and extensions
Stars: ✭ 14 (-92%)
two-stream-cnn: A two-stream convolutional neural network for learning arbitrary similarity functions over two sets of training data
Stars: ✭ 24 (-86.29%)
HiCE: Code for ACL'19 "Few-Shot Representation Learning for Out-Of-Vocabulary Words"
Stars: ✭ 56 (-68%)
Pytorch Sentiment Analysis: Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Stars: ✭ 3,209 (+1733.71%)
Spanish Word Embeddings: Spanish word embeddings computed with different methods and from different corpora
Stars: ✭ 236 (+34.86%)
Koan: A word2vec negative-sampling implementation with the correct CBOW update.
Stars: ✭ 232 (+32.57%)
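The "correct CBOW update" Koan refers to is that, because the hidden state is the *average* of the context vectors, the input-side gradient must be divided by the context size before being distributed back to each context row. A hedged numpy sketch of one CBOW negative-sampling step (a simplified illustration, not Koan's C++ code; assumes unique context ids):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbow_neg_step(W_in, W_out, context_ids, target_id, neg_ids, lr=0.05):
    """One CBOW step with negative sampling. The input-side gradient is
    divided by the context size, matching the averaged hidden state."""
    h = W_in[context_ids].mean(axis=0)  # averaged context vector
    grad_h = np.zeros_like(h)
    for out_id, label in [(target_id, 1.0)] + [(i, 0.0) for i in neg_ids]:
        score = sigmoid(h @ W_out[out_id])
        g = score - label               # gradient of the logistic loss
        grad_h += g * W_out[out_id]
        W_out[out_id] -= lr * g * h
    # Distribute the averaged gradient back to every context row
    # (context_ids assumed unique for this fancy-indexed update).
    W_in[context_ids] -= lr * grad_h / len(context_ids)
```

Omitting the division by `len(context_ids)` (as some ports do) effectively multiplies the input-side learning rate by the window size.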
Wordgcn: ACL 2019: Incorporating Syntactic and Semantic Information in Word Embeddings using Graph Convolutional Networks
Stars: ✭ 230 (+31.43%)
Question Generation: Generating multiple-choice questions from text using machine learning.
Stars: ✭ 227 (+29.71%)
Chameleon recsys: Source code of CHAMELEON, a deep learning meta-architecture for news recommender systems
Stars: ✭ 202 (+15.43%)
Shallowlearn: An experiment in re-implementing supervised learning models based on shallow neural-network approaches (e.g. fastText), with some additional exclusive features and a nice API. Written in Python and fully compatible with scikit-learn.
Stars: ✭ 196 (+12%)
Jfasttext: Java interface for fastText
Stars: ✭ 193 (+10.29%)
Germanwordembeddings: Toolkit to obtain and preprocess German corpora, train models with word2vec (gensim), and evaluate them on generated test sets
Stars: ✭ 189 (+8%)
Vec4ir: Word Embeddings for Information Retrieval
Stars: ✭ 188 (+7.43%)
Datastories Semeval2017 Task4: Deep-learning model presented in "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis".
Stars: ✭ 184 (+5.14%)
Texthero: Text preprocessing, representation and visualization from zero to hero.
Stars: ✭ 2,407 (+1275.43%)
Debiaswe: Remove problematic gender bias from word embeddings.
Stars: ✭ 175 (+0%)
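The central operation in this style of debiasing (hard debiasing, from Bolukbasi et al.) is projecting out the component of a word vector along a learned bias direction. A minimal numpy sketch of that single step (hypothetical vectors; the real method also identifies the direction via PCA over definitional pairs and equalizes pair distances):

```python
import numpy as np

def neutralize(v, g):
    """Remove the component of v along the bias direction g,
    the core projection step of hard debiasing."""
    g = g / np.linalg.norm(g)   # unit bias direction
    return v - (v @ g) * g

# Hypothetical bias direction from a definitional pair, e.g. he - she.
he, she = np.array([0.6, 0.4, 0.1]), np.array([0.2, 0.4, 0.5])
g = he - she
debiased = neutralize(np.array([0.5, 0.3, 0.2]), g)
```

After neutralizing, the vector is orthogonal to the bias direction, so similarity along that axis no longer distinguishes it.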