DocSum: A tool to abstractively summarize documents using the BART or PreSumm machine learning models.
Stars: ✭ 58 (-4.92%)
Transformer-in-PyTorch: Transformer/Transformer-XL/R-Transformer examples and explanations.
Stars: ✭ 21 (-65.57%)
AutoTabular: Automatic machine learning for tabular data. ⚡🔥⚡
Stars: ✭ 51 (-16.39%)
simple NER: Simple rule-based named entity recognition (see the sketch below).
Stars: ✭ 29 (-52.46%)
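As a rough illustration of what "rule-based NER" typically means in practice, here is a hypothetical sketch (none of it is this project's code): a small gazetteer plus a regex pattern.

```python
# Hypothetical rule-based NER: a gazetteer plus a regex pattern.
# Entries and labels are made up for illustration.
import re

GAZETTEER = {"Alice": "PER", "Google": "ORG", "London": "LOC"}
DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def tag(sentence: str) -> list[tuple[str, str]]:
    """Return (surface form, label) pairs found by the rules."""
    entities = [(m.group(), "DATE") for m in DATE_RE.finditer(sentence)]
    for token in re.findall(r"\w+", sentence):
        if token in GAZETTEER:
            entities.append((token, GAZETTEER[token]))
    return entities

print(tag("Alice joined Google in London on 2020-01-15."))
# [('2020-01-15', 'DATE'), ('Alice', 'PER'), ('Google', 'ORG'), ('London', 'LOC')]
```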
transformers-interpret: Model explainability that works seamlessly with 🤗 Transformers. Explain your transformer model in just two lines of code (see the sketch below).
Stars: ✭ 861 (+1311.48%)
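A minimal sketch of the advertised two-line usage, based on the SequenceClassificationExplainer shown in the project's README; the model name is just an example.

```python
# Sketch of transformers-interpret's advertised usage; model name is an example.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# The "2 lines": build an explainer, then explain a prediction.
cls_explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = cls_explainer("I love this movie!")
print(word_attributions)  # per-token attribution scores
```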
DCGCN: Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning (the authors' MXNet implementation for the TACL 2019 paper).
Stars: ✭ 73 (+19.67%)
WellcomeML: Repository for machine learning utils at the Wellcome Trust.
Stars: ✭ 31 (-49.18%)
mirror-bert: [EMNLP 2021] Mirror-BERT: Converting pretrained language models to universal text encoders without labels.
Stars: ✭ 56 (-8.2%)
SA-BERT: [CIKM 2020] Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots.
Stars: ✭ 71 (+16.39%)
elastic transformers: Making BERT stretchy. Semantic Elasticsearch with Sentence Transformers (see the sketch below).
Stars: ✭ 153 (+150.82%)
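For context, a sketch of the underlying pattern only (Sentence Transformers embeddings ranked by cosine similarity in Elasticsearch), not this library's own API. It assumes the elasticsearch 8.x Python client, and the index and field names are made up.

```python
# Semantic search pattern: embed with sentence-transformers, rank in Elasticsearch.
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer

es = Elasticsearch("http://localhost:9200")
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

# Index with a dense_vector field to hold the sentence embeddings.
es.indices.create(index="docs", mappings={"properties": {
    "text": {"type": "text"},
    "embedding": {"type": "dense_vector", "dims": 384},
}})

doc = "BERT is a transformer-based language model."
es.index(index="docs", document={"text": doc,
                                 "embedding": encoder.encode(doc).tolist()})
es.indices.refresh(index="docs")

# Semantic search: score every document by cosine similarity to the query.
qv = encoder.encode("what is BERT?").tolist()
hits = es.search(index="docs", query={"script_score": {
    "query": {"match_all": {}},
    "script": {"source": "cosineSimilarity(params.qv, 'embedding') + 1.0",
               "params": {"qv": qv}},
}})
print(hits["hits"]["hits"][0]["_source"]["text"])
```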
bllip-parser: BLLIP reranking parser (also known as the Charniak-Johnson parser, Charniak parser, or Brown reranking parser). See http://pypi.python.org/pypi/bllipparser/ for the Python module.
Stars: ✭ 217 (+255.74%)
quickvision: An easy-to-use PyTorch computer vision library.
Stars: ✭ 49 (-19.67%)
jax-models: Unofficial JAX implementations of deep learning research papers.
Stars: ✭ 108 (+77.05%)
pysentimiento: A multilingual Python toolkit for sentiment analysis and social NLP tasks.
Stars: ✭ 274 (+349.18%)
TransQuest: Transformer-based translation quality estimation.
Stars: ✭ 85 (+39.34%)
NER-FunTool: An NER project covering multiple Chinese datasets; models include BiLSTM+CRF, BERT+Softmax, BERT+Cascade, and BERT+WOL, with final deployment via TF Serving for both online and offline inference.
Stars: ✭ 56 (-8.2%)
Giveme5W: Extraction of the five journalistic W-questions (5W) from news articles.
Stars: ✭ 16 (-73.77%)
bert experimental: Code and supplementary materials for a series of Medium articles about the BERT model.
Stars: ✭ 72 (+18.03%)
KitanaQA: Adversarial training and data augmentation for neural question-answering models.
Stars: ✭ 58 (-4.92%)
BERTOverflow: A BERT model pre-trained on a StackOverflow corpus.
Stars: ✭ 40 (-34.43%)
personality-prediction: Experiments in automated personality detection using language models and psycholinguistic features on various well-known personality datasets, including the Essays (Big Five) dataset.
Stars: ✭ 109 (+78.69%)
sentence2vec: Deep sentence embeddings using sequence-to-sequence learning.
Stars: ✭ 23 (-62.3%)
LAMB Optimizer TF: LAMB optimizer for large-batch training (TensorFlow version).
Stars: ✭ 119 (+95.08%)
fiction generator: A fiction generator built with TensorFlow that imitates the writing style of Wang Xiaobo.
Stars: ✭ 27 (-55.74%)
numberwords: Convert a number to an approximate text expression, e.g. from '0.23' to 'less than a quarter' (see the sketch below).
Stars: ✭ 191 (+213.11%)
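A hypothetical sketch of the idea (not the package's actual API): snap a fraction to the nearest landmark phrase and prefix "less than" or "more than".

```python
# Hypothetical illustration of numberwords' idea; landmarks are made up.
LANDMARKS = [
    (0.0, "none"),
    (0.25, "a quarter"),
    (0.5, "half"),
    (0.75, "three quarters"),
    (1.0, "all"),
]

def approximate(x: float) -> str:
    """Return an approximate text expression for a number in [0, 1]."""
    nearest, phrase = min(LANDMARKS, key=lambda lm: abs(lm[0] - x))
    if abs(x - nearest) < 1e-9:
        return phrase
    return ("less than " if x < nearest else "more than ") + phrase

print(approximate(0.23))  # -> "less than a quarter"
```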
seq2seq-autoencoder: Theano implementation of a sequence-to-sequence autoencoder.
Stars: ✭ 12 (-80.33%)
Discovery: Mining discourse markers for unsupervised sentence representation learning.
Stars: ✭ 48 (-21.31%)
rsmorphy: Morphological analyzer / inflection engine for the Russian and Ukrainian languages, rewritten in Rust.
Stars: ✭ 27 (-55.74%)
NLPDataAugmentation: Chinese NLP data augmentation, including BERT contextual augmentation.
Stars: ✭ 94 (+54.1%)
optimum: 🏎️ Accelerate training and inference of 🤗 Transformers with easy-to-use hardware optimization tools (see the sketch below).
Stars: ✭ 567 (+829.51%)
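A minimal sketch of ONNX Runtime acceleration with optimum, assuming a recent release where `export=True` triggers the ONNX export; the model name is just an example.

```python
# Sketch: swap a transformers model for its ONNX Runtime counterpart.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The ORT model drops into the usual transformers pipeline API.
clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("Accelerated inference feels great."))
```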
bert-tensorflow-pytorch-spacy-conversion: Instructions for converting a BERT TensorFlow model to work with HuggingFace's pytorch-transformers and spaCy. This walk-through uses DeepPavlov's RuBERT as an example.
Stars: ✭ 26 (-57.38%)
Fast-AgingGAN: A deep learning model to age faces in the wild; currently runs at 60+ FPS on GPUs.
Stars: ✭ 133 (+118.03%)
neuro-comma: 🇷🇺 Production-ready punctuation restoration model for the Russian language 🇷🇺
Stars: ✭ 46 (-24.59%)
ConveRT: Dual encoders for state-of-the-art natural language processing.
Stars: ✭ 44 (-27.87%)
TwinBert: PyTorch implementation of the TwinBERT paper.
Stars: ✭ 36 (-40.98%)
wisdomify: A BERT-based reverse dictionary of Korean proverbs.
Stars: ✭ 95 (+55.74%)
Natural-Language-Processing: Various architectures and novel paper implementations for natural language processing tasks such as sequence modelling and neural machine translation.
Stars: ✭ 48 (-21.31%)
ForestCoverChange: Detecting and predicting forest cover change in Pakistani areas using remote sensing imagery.
Stars: ✭ 23 (-62.3%)
nlg-markovify-api: An API built on Plumber (R) utilizing Markovify, a Python package, wrapped in markovifyR (R). It builds a Markov chain model from user-supplied text and generates new text from that model (see the sketch below).
Stars: ✭ 19 (-68.85%)
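For reference, a minimal sketch of the underlying Markovify (Python) usage that the R layers wrap; `corpus.txt` is a placeholder path.

```python
# Sketch of the Markovify core that the R API wraps.
import markovify

with open("corpus.txt") as f:
    corpus = f.read()

# Build a Markov chain model from the text (state_size = words of context).
model = markovify.Text(corpus, state_size=2)

# Generate new sentences from the model.
print(model.make_sentence())
print(model.make_short_sentence(max_chars=140))
```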
ganbert-pytorch: Enhancing BERT training with semi-supervised generative adversarial networks in PyTorch/HuggingFace.
Stars: ✭ 60 (-1.64%)
cmrc2019: A sentence cloze dataset for Chinese machine reading comprehension (CMRC 2019).
Stars: ✭ 118 (+93.44%)
long-short-transformer: Implementation of the Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch.
Stars: ✭ 103 (+68.85%)
LIT: [AAAI 2022] The official PyTorch implementation of "Less is More: Pay Less Attention in Vision Transformers".
Stars: ✭ 79 (+29.51%)
Embedding: Code and study notes for embedding models.
Stars: ✭ 25 (-59.02%)
sepia-assist-server: Core server of the SEPIA Framework, responsible for NLU, conversation, smart-service integration, user accounts, and more.
Stars: ✭ 81 (+32.79%)
language-planner: Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents".
Stars: ✭ 84 (+37.7%)
JointIDSF: BERT-based joint intent detection and slot filling with an intent-slot attention mechanism (INTERSPEECH 2021).
Stars: ✭ 55 (-9.84%)
ttt: A package for fine-tuning Transformers on TPUs, written in TensorFlow 2.0+.
Stars: ✭ 35 (-42.62%)
transformer generalization: The official repository for the paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers", which significantly improves the systematic generalization of transformer models on a variety of datasets using simple tricks and careful considerations.
Stars: ✭ 58 (-4.92%)