Fastai: The fastai deep learning library
Stars: ✭ 21,718 (+1194.28%)
T3: [EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, Bo Li
Stars: ✭ 25 (-98.51%)
syntaxdot: Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.
Stars: ✭ 32 (-98.09%)
Vit Pytorch: Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
Stars: ✭ 7,199 (+329.02%)
n-grammer-pytorch: Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch
Stars: ✭ 50 (-97.02%)
keras-bert-ner: Keras solution for the Chinese NER task using BiLSTM-CRF/BiGRU-CRF/IDCNN-CRF models with a pretrained language model, supporting BERT/RoBERTa/ALBERT
Stars: ✭ 7 (-99.58%)
Chinese Bert Wwm: Pre-training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series)
Stars: ✭ 6,357 (+278.84%)
NER-FunTool: This NER project covers multiple Chinese datasets, with models including BiLSTM+CRF, BERT+Softmax, BERT+Cascade, and BERT+WOL, and uses TF Serving for model deployment with both online and offline inference.
Stars: ✭ 56 (-96.66%)
icedata: IceData, the datasets hub for the *IceVision* framework
Stars: ✭ 41 (-97.56%)
VideoBERT: Using VideoBERT to tackle video prediction
Stars: ✭ 56 (-96.66%)
Reformer Pytorch: Reformer, the efficient Transformer, in Pytorch
Stars: ✭ 1,644 (-2.03%)
minicons: Utility for analyzing Transformer-based representations of language.
Stars: ✭ 28 (-98.33%)
knowledge-graph-nlp-in-action: From model training to deployment, hands-on knowledge graph and natural language processing (NLP). Uses TensorFlow, BERT+Bi-LSTM+CRF, Neo4j, etc., covering tasks such as named entity recognition, text classification, information extraction, and relation extraction.
Stars: ✭ 58 (-96.54%)
NSP-BERT: Code for the paper "NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original Pre-training Task: Next Sentence Prediction"
Stars: ✭ 166 (-90.11%)
hashformers: A framework for hashtag segmentation with transformers.
Stars: ✭ 18 (-98.93%)
ADL2019: Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-98.81%)
KAREN: Unifying Hatespeech Detection and Benchmarking
Stars: ✭ 18 (-98.93%)
textgo: Text preprocessing, representation, similarity calculation, text search, and classification. Let's go and play with text!
Stars: ✭ 33 (-98.03%)
Mengzi: Mengzi pretrained models
Stars: ✭ 238 (-85.82%)
nuwa-pytorch: Implementation of NÜWA, a state-of-the-art attention network for text-to-video synthesis, in Pytorch
Stars: ✭ 347 (-79.32%)
Filipino-Text-Benchmarks: Open-source benchmark datasets and pretrained transformer models for the Filipino language.
Stars: ✭ 22 (-98.69%)
FewCLUE: A few-shot learning evaluation benchmark, Chinese version
Stars: ✭ 251 (-85.04%)
Product-Categorization-NLP: Multi-class text classification of products based on their descriptions, using machine learning algorithms and neural networks (MLP, CNN, DistilBERT).
Stars: ✭ 30 (-98.21%)
SIGIR2021 Conure: One Person, One Model, One World: Learning Continual User Representation without Forgetting
Stars: ✭ 23 (-98.63%)
MobileQA: Offline reading-comprehension QA app for mobile, Android & iPhone
Stars: ✭ 49 (-97.08%)
Transformers: 🤗 State-of-the-art machine learning for PyTorch, TensorFlow, and JAX.
Stars: ✭ 55,742 (+3221.93%)
spark-transformers: Library for exporting Apache Spark MLlib models for use in any Java application with no other dependencies.
Stars: ✭ 39 (-97.68%)
trapper: State-of-the-art NLP through transformer models, in a modular design with consistent APIs.
Stars: ✭ 28 (-98.33%)
bern: A neural named entity recognition and multi-type normalization tool for biomedical text mining
Stars: ✭ 151 (-91%)
fast.ai notes: 📓 Notes for the fast.ai courses: Intro to ML, Practical DL, and Cutting Edge DL.
Stars: ✭ 65 (-96.13%)
deepflash2: A deep-learning pipeline for segmentation of ambiguous microscopic images.
Stars: ✭ 34 (-97.97%)
WSDM-Cup-2019: [ACM WSDM] 3rd-place solution at WSDM Cup 2019, fake news classification on Kaggle.
Stars: ✭ 62 (-96.31%)
Bert Pytorch: Google AI 2018 BERT pytorch implementation
Stars: ✭ 4,642 (+176.64%)
deepprojects: An ever-growing collection of Jupyter notebooks
Stars: ✭ 30 (-98.21%)
ttt: A package for fine-tuning Transformers on TPUs, written in TensorFlow 2.0+
Stars: ✭ 35 (-97.91%)
PIE: Fast, non-autoregressive grammatical error correction using BERT. Code and pre-trained models for the paper "Parallel Iterative Edit Models for Local Sequence Transduction" (EMNLP-IJCNLP 2019): www.aclweb.org/anthology/D19-1435.pdf
Stars: ✭ 164 (-90.23%)
semantic-document-relations: Implementation, trained models, and result data for the paper "Pairwise Multi-Class Document Classification for Semantic Relations between Wikipedia Articles"
Stars: ✭ 21 (-98.75%)
text2keywords: Trained T5 and T5-large models for generating keywords from text
Stars: ✭ 53 (-96.84%)
small-text: Active learning for text classification in Python
Stars: ✭ 241 (-85.64%)
Nlp Tutorial: Natural language processing tutorial for deep learning researchers
Stars: ✭ 9,895 (+489.69%)
Ernie: Official implementations of the ERNIE family of pre-training models, covering language understanding & generation, multimodal understanding & generation, and beyond.
Stars: ✭ 4,659 (+177.65%)
Yuno: A context-based search engine for anime.
Stars: ✭ 320 (-80.93%)
simple transformers: Simple transformer implementations that I can understand
Stars: ✭ 18 (-98.93%)
DocProduct: Medical Q&A with deep language models
Stars: ✭ 527 (-68.59%)
eve-bot: EVE Bot, a customer service chatbot to enhance virtual engagement for Twitter Apple Support
Stars: ✭ 31 (-98.15%)
CLUE pytorch: PyTorch version of the CLUE baselines
Stars: ✭ 72 (-95.71%)
FinBERT: A pretrained BERT model for financial communications. https://arxiv.org/abs/2006.08097
Stars: ✭ 193 (-88.5%)
ark-nlp: A personal NLP coding package that quickly implements SOTA solutions.
Stars: ✭ 232 (-86.17%)
robustness-vit: Code for the paper "Vision Transformers are Robust Learners" (AAAI 2022).
Stars: ✭ 78 (-95.35%)