VideoBERT: Using VideoBERT to tackle video prediction
Stars: ✭ 56 (-98.4%)
TextPair: Text-pair relation comparison - semantic similarity, literal similarity, textual entailment, and more
Stars: ✭ 44 (-98.74%)
ModelZoo.pytorch: Hands-on ImageNet training. Unofficial ModelZoo project in PyTorch. MobileNetV3 Top-1 75.64 🌟, GhostNet 1.3x 75.78 🌟
Stars: ✭ 42 (-98.8%)
iamQA: Chinese Wikipedia QA reading-comprehension system, using an NER model trained on CCKS2016 data and a reading-comprehension model from CMRC2018, plus W2V word-vector search; deployed with TorchServe
Stars: ✭ 46 (-98.69%)
lambda.pytorch: PyTorch implementation of Lambda Network and pretrained Lambda-ResNet
Stars: ✭ 54 (-98.46%)
NER-FunTool: This NER project covers multiple Chinese datasets; models include BiLSTM+CRF, BERT+Softmax, BERT+Cascade, and BERT+WOL, with final deployment via TF Serving for both online and offline inference.
Stars: ✭ 56 (-98.4%)
DeepNER: An easy-to-use, modular, and extensible package of deep-learning-based Named Entity Recognition models.
Stars: ✭ 9 (-99.74%)
ADL2019: Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-99.43%)
SIGIR2021 Conure: One Person, One Model, One World: Learning Continual User Representation without Forgetting
Stars: ✭ 23 (-99.34%)
Mengzi: Mengzi Pretrained Models
Stars: ✭ 238 (-93.2%)
textgo: Text preprocessing, representation, similarity calculation, text search, and classification. Let's go and play with text!
Stars: ✭ 33 (-99.06%)
rasa milktea chatbot: Chatbot with a Chinese BERT model for intent analysis, built on the Rasa framework
Stars: ✭ 97 (-97.23%)
T3: [EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, Bo Li
Stars: ✭ 25 (-99.29%)
PDN: The official PyTorch implementation of "Pathfinder Discovery Networks for Neural Message Passing" (WebConf '21)
Stars: ✭ 44 (-98.74%)
HugsVision: HugsVision is an easy-to-use HuggingFace wrapper for state-of-the-art computer vision
Stars: ✭ 154 (-95.6%)
ai web RISKOUT BTS: National defense risk-management platform (🏅 Minister of National Defense Award)
Stars: ✭ 18 (-99.49%)
NSP-BERT: The code for our paper "NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original Pre-training Task —— Next Sentence Prediction"
Stars: ✭ 166 (-95.26%)
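The core trick behind NSP-BERT is scoring candidate label sentences with BERT's next-sentence-prediction head. A minimal sketch of that idea (not the paper's own code), assuming `bert-base-uncased` via HuggingFace transformers:

```python
# Hedged sketch of the NSP idea: score each candidate label sentence
# as the "next sentence" of the input, pick the most probable one.
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

premise = "The new phone's battery lasts two days."
for label in ["This text is about technology.", "This text is about sports."]:
    inputs = tokenizer(premise, label, return_tensors="pt")
    logits = model(**inputs).logits
    # Index 0 of the NSP head is the "sentence B continues sentence A" logit.
    prob = torch.softmax(logits, dim=1)[0, 0].item()
    print(f"{label!r}: {prob:.3f}")
```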
bert experimental: Code and supplementary materials for a series of Medium articles about the BERT model
Stars: ✭ 72 (-97.94%)
OpenPrompt: An open-source framework for prompt-learning.
Stars: ✭ 1,769 (-49.46%)
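For flavor, prompt-learning in its simplest form can be approximated with a plain fill-mask pipeline; this sketch is illustrative only and does not use OpenPrompt's own API:

```python
# Bare-bones prompt-based classification with a masked-LM fill-in,
# the pattern that OpenPrompt generalizes with templates and verbalizers.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
template = "The movie was wonderful. Overall it was [MASK]."
# Restrict predictions to the verbalizer words for each class.
for pred in fill(template, targets=["good", "bad"]):
    print(pred["token_str"], round(pred["score"], 4))
```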
syntaxdot: Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.
Stars: ✭ 32 (-99.09%)
BERT-embedding: A simple wrapper class for extracting features (embeddings) and comparing them using BERT in TensorFlow
Stars: ✭ 24 (-99.31%)
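The repo itself is TensorFlow-based; as a rough illustration of the same extract-and-compare idea, here is a hedged PyTorch sketch using HuggingFace transformers:

```python
# Illustrative only (not this repo's API): extract sentence embeddings
# with a BERT encoder and compare them by cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the token embeddings of the last hidden layer.
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

a, b = embed("BERT extracts features."), embed("BERT produces embeddings.")
print(f"cosine similarity: {torch.cosine_similarity(a, b, dim=0).item():.3f}")
```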
Bert-text-classification: Shows how to fine-tune the BERT language model and use PyTorch-Transformers for text classification
Stars: ✭ 54 (-98.46%)
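A single fine-tuning step for BERT text classification looks roughly like the following; this assumes the current `transformers` package (the successor of pytorch-transformers) rather than the repo's exact code:

```python
# Minimal fine-tuning sketch: one optimizer step on a toy batch.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

batch = tokenizer(["great movie", "terrible plot"], padding=True,
                  return_tensors="pt")
labels = torch.tensor([1, 0])

model.train()
outputs = model(**batch, labels=labels)  # loss is computed internally
outputs.loss.backward()
optimizer.step()
```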
task-transferability: Data and code for our paper "Exploring and Predicting Transferability across NLP Tasks", to appear at EMNLP 2020.
Stars: ✭ 35 (-99%)
LightLM: High-performance small-model evaluation. Shared Tasks in NLPCC 2020, Task 1: Light Pre-Training Chinese Language Model for NLP Task
Stars: ✭ 54 (-98.46%)
berserker: Berserker - BERt chineSE woRd toKenizER
Stars: ✭ 17 (-99.51%)
bert-tensorflow-pytorch-spacy-conversion: Instructions for converting a BERT TensorFlow model to work with HuggingFace's pytorch-transformers and spaCy. This walk-through uses DeepPavlov's RuBERT as an example.
Stars: ✭ 26 (-99.26%)
trove: Weakly supervised medical named entity classification
Stars: ✭ 55 (-98.43%)
PoLitBert: Polish RoBERTa model trained on Polish literature, Wikipedia, and Oscar. The major assumption is that high-quality text yields a good model.
Stars: ✭ 25 (-99.29%)
label-studio-transformers: Label data using HuggingFace's transformers and automatically get a prediction service
Stars: ✭ 117 (-96.66%)
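The underlying pattern is pre-annotating text with a transformers pipeline so the labeling UI starts from model predictions. A hedged sketch of that pattern (not the project's own integration code):

```python
# Pre-annotate text with a NER pipeline; the resulting entity spans
# could be fed to a labeling tool as machine predictions.
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")
for text in ["Label Studio was founded in San Francisco."]:
    print(ner(text))  # entity spans to pre-fill as labeling predictions
```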
robo-vln: PyTorch code for the ICRA'21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
Stars: ✭ 34 (-99.03%)
consistency: Implementation of the models in our EMNLP 2019 paper "A Logic-Driven Framework for Consistency of Neural Models"
Stars: ✭ 26 (-99.26%)
bangla-bert: Bangla-Bert is a pretrained BERT model for the Bengali language
Stars: ✭ 41 (-98.83%)
anonymisation: Anonymization of French legal cases based on Flair embeddings
Stars: ✭ 85 (-97.57%)
Transformers-Tutorials: This repository contains demos I made with the Transformers library by HuggingFace.
Stars: ✭ 2,828 (-19.2%)
PIE: Fast, non-autoregressive grammatical error correction using BERT. Code and pre-trained models for the paper "Parallel Iterative Edit Models for Local Sequence Transduction" (EMNLP-IJCNLP 2019): www.aclweb.org/anthology/D19-1435.pdf
Stars: ✭ 164 (-95.31%)
kwx: BERT-, LDA-, and TF-IDF-based keyword extraction in Python
Stars: ✭ 33 (-99.06%)
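Of the three methods kwx combines, the TF-IDF leg is the simplest to show; a hedged scikit-learn sketch (not kwx's own API, whose interface may differ):

```python
# Rank words in one document by TF-IDF weight against a small corpus;
# the top-weighted terms serve as its keywords.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "BERT embeddings power keyword extraction.",
    "LDA topics also suggest keywords.",
    "TFIDF weights highlight distinctive terms.",
]
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)

# Top terms for the first document, sorted by descending weight.
row = tfidf[0].toarray().ravel()
terms = vectorizer.get_feature_names_out()
print(sorted(zip(terms, row), key=lambda t: -t[1])[:3])
```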
KAREN: Unifying Hatespeech Detection and Benchmarking
Stars: ✭ 18 (-99.49%)
Kevinpro-NLP-demo: All the NLP you need, here. Personal implementations of some fun NLP demos, currently including PyTorch implementations of 13 NLP applications.
Stars: ✭ 117 (-96.66%)
bern: A neural named entity recognition and multi-type normalization tool for biomedical text mining
Stars: ✭ 151 (-95.69%)
AnnA Anki neuronal Appendix: Using machine learning on your Anki collection to enhance scheduling via semantic clustering and semantic similarity
Stars: ✭ 39 (-98.89%)
openroberta-lab: The programming environment »Open Roberta Lab« by Fraunhofer IAIS enables children and adolescents to program robots. A variety of programming blocks is provided for controlling the robot's motors and sensors. Open Roberta Lab uses a graphical-programming approach so that beginners can start coding seamlessly. As a cloud-based applica…
Stars: ✭ 98 (-97.2%)
FinBERT: A Pretrained BERT Model for Financial Communications. https://arxiv.org/abs/2006.08097
Stars: ✭ 193 (-94.49%)
WSDM-Cup-2019: [ACM-WSDM] 3rd-place solution at WSDM Cup 2019, Fake News Classification on Kaggle.
Stars: ✭ 62 (-98.23%)
hard-label-attack: Natural Language Attacks in a Hard Label Black Box Setting.
Stars: ✭ 26 (-99.26%)
Cross-Lingual-MRC: Cross-Lingual Machine Reading Comprehension (EMNLP 2019)
Stars: ✭ 66 (-98.11%)
NAG-BERT: [EACL'21] Non-Autoregressive Text Generation with Pre-trained Language Models
Stars: ✭ 47 (-98.66%)