
226 open source projects that are alternatives to or similar to Albert_zh

VideoBERT
Using VideoBERT to tackle video prediction
Stars: ✭ 56 (-98.4%)
Mutual labels:  bert
GoEmotions-pytorch
PyTorch implementation of GoEmotions 😍😢😱
Stars: ✭ 95 (-97.29%)
Mutual labels:  bert
TextPair
Text-pair relation comparison: semantic similarity, surface (literal) similarity, textual entailment, and more
Stars: ✭ 44 (-98.74%)
Mutual labels:  bert
ModelZoo.pytorch
Hands-on ImageNet training. Unofficial ModelZoo project in PyTorch. MobileNetV3 top-1 75.64 🌟 GhostNet 1.3x 75.78 🌟
Stars: ✭ 42 (-98.8%)
Mutual labels:  pre-trained
iamQA
Chinese Wikipedia QA reading-comprehension system, using an NER model trained on CCKS2016 data and a reading-comprehension model trained on CMRC2018, plus W2V word-vector search; deployed with TorchServe
Stars: ✭ 46 (-98.69%)
Mutual labels:  bert
lambda.pytorch
PyTorch implementation of Lambda Network and pretrained Lambda-ResNet
Stars: ✭ 54 (-98.46%)
Mutual labels:  pre-trained-model
NER-FunTool
This NER project covers multiple Chinese datasets, with models including BiLSTM+CRF, BERT+Softmax, BERT+Cascade, and BERT+WOL; models are deployed with TF Serving for both online and offline inference.
Stars: ✭ 56 (-98.4%)
Mutual labels:  bert
DeepNER
An easy-to-use, modular, and extensible package of deep-learning-based Named Entity Recognition models.
Stars: ✭ 9 (-99.74%)
Mutual labels:  bert
ADL2019
Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-99.43%)
Mutual labels:  bert
Quality-Estimation2
Machine-translation subtask, translation quality estimation: a Bi-LSTM is added on top of the BERT model for fine-tuning (see the sketch after this entry)
Stars: ✭ 31 (-99.11%)
Mutual labels:  bert
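
The entry above names a concrete architecture: a Bi-LSTM stacked on BERT's token outputs with a regression head for a quality score. As a rough illustration only, not the repository's code, here is a minimal PyTorch sketch assuming a HuggingFace transformers encoder; the class name, hidden size, and checkpoint are all illustrative.

```python
# Minimal sketch of a BERT + Bi-LSTM quality-estimation head
# (illustrative only; not Quality-Estimation2's actual code).
import torch.nn as nn
from transformers import AutoModel

class BertBiLSTMRegressor(nn.Module):
    def __init__(self, encoder_name="bert-base-multilingual-cased", hidden=256):
        super().__init__()
        self.bert = AutoModel.from_pretrained(encoder_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # one scalar quality score

    def forward(self, input_ids, attention_mask):
        token_states = self.bert(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(token_states)         # (batch, seq, 2*hidden)
        return self.head(lstm_out[:, 0]).squeeze(-1)  # score read at [CLS]
```

Fine-tuning then proceeds as usual: an MSE loss against human quality labels, backpropagated through both the LSTM and the BERT encoder.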
SIGIR2021 Conure
One Person, One Model, One World: Learning Continual User Representation without Forgetting
Stars: ✭ 23 (-99.34%)
Mutual labels:  bert
text-generation-transformer
Text generation based on the Transformer
Stars: ✭ 36 (-98.97%)
Mutual labels:  bert
Mengzi
Mengzi Pretrained Models
Stars: ✭ 238 (-93.2%)
Mutual labels:  bert
textgo
Text preprocessing, representation, similarity calculation, text search and classification. Let's go and play with text!
Stars: ✭ 33 (-99.06%)
Mutual labels:  bert
rasa milktea chatbot
Chatbot with a Chinese BERT model, based on the Rasa framework (a Chinese chatbot combining BERT intent analysis with Rasa)
Stars: ✭ 97 (-97.23%)
Mutual labels:  bert
viewpoint-mining
BERT-based opinion mining and sentiment analysis of e-commerce reviews, modeled after NER
Stars: ✭ 31 (-99.11%)
Mutual labels:  bert
T3
[EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, Bo Li
Stars: ✭ 25 (-99.29%)
Mutual labels:  bert
MRC Competition Dureader
Machine reading comprehension: champion/runner-up competition code and a Chinese pretrained MRC model
Stars: ✭ 552 (-84.23%)
Mutual labels:  bert
Text and Audio classification with Bert
Text classification on Turkish texts with BERT
Stars: ✭ 34 (-99.03%)
Mutual labels:  bert
PDN
The official PyTorch implementation of "Pathfinder Discovery Networks for Neural Message Passing" (WebConf '21)
Stars: ✭ 44 (-98.74%)
Mutual labels:  bert
HugsVision
HugsVision is an easy-to-use HuggingFace wrapper for state-of-the-art computer vision
Stars: ✭ 154 (-95.6%)
Mutual labels:  bert
ai web RISKOUT BTS
A defense risk-management platform (🏅 Minister of National Defense Award)
Stars: ✭ 18 (-99.49%)
Mutual labels:  bert
NSP-BERT
The code for our paper "NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original Pre-training Task —— Next Sentence Prediction"
Stars: ✭ 166 (-95.26%)
Mutual labels:  bert
bert experimental
Code and supplementary materials for a series of Medium articles about the BERT model
Stars: ✭ 72 (-97.94%)
Mutual labels:  bert
OpenPrompt
An Open-Source Framework for Prompt-Learning.
Stars: ✭ 1,769 (-49.46%)
Mutual labels:  pre-trained-model
syntaxdot
Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.
Stars: ✭ 32 (-99.09%)
Mutual labels:  bert
BERT-embedding
A simple wrapper class for extracting features (embeddings) and comparing them using BERT in TensorFlow (a generic sketch follows below)
Stars: ✭ 24 (-99.31%)
Mutual labels:  bert
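
BERT-embedding itself is TensorFlow-based; the following is a generic sketch of the same idea (extract a sentence embedding, then compare two texts by cosine similarity) written with PyTorch and HuggingFace transformers. The function name and checkpoint are illustrative, not the project's API.

```python
# Generic BERT sentence-embedding extraction and comparison
# (illustrative sketch; not BERT-embedding's actual interface).
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc).last_hidden_state      # (1, seq_len, 768)
    mask = enc["attention_mask"].unsqueeze(-1)   # ignore padding positions
    return (out * mask).sum(1) / mask.sum(1)     # mean-pooled embedding

a, b = embed("how are you?"), embed("how do you do?")
print(torch.cosine_similarity(a, b).item())      # similarity in [-1, 1]
```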
Bert-text-classification
Shows how to fine-tune the BERT language model and use PyTorch-Transformers for text classification
Stars: ✭ 54 (-98.46%)
Mutual labels:  xlnet
task-transferability
Data and code for our paper "Exploring and Predicting Transferability across NLP Tasks", to appear at EMNLP 2020.
Stars: ✭ 35 (-99%)
Mutual labels:  bert
LightLM
Evaluation of high-performance small models. Shared Tasks in NLPCC 2020, Task 1: Light Pre-Training Chinese Language Model for NLP Task
Stars: ✭ 54 (-98.46%)
Mutual labels:  bert
berserker
Berserker - BERt chineSE woRd toKenizER
Stars: ✭ 17 (-99.51%)
Mutual labels:  bert
bert-tensorflow-pytorch-spacy-conversion
Instructions for converting a BERT TensorFlow model to work with HuggingFace's pytorch-transformers and spaCy. This walk-through uses DeepPavlov's RuBERT as an example.
Stars: ✭ 26 (-99.26%)
Mutual labels:  bert
trove
Weakly supervised medical named entity classification
Stars: ✭ 55 (-98.43%)
Mutual labels:  bert
BERT-for-Chinese-Question-Answering
No description or website provided.
Stars: ✭ 75 (-97.86%)
Mutual labels:  bert
PoLitBert
Polish RoBERTA model trained on Polish literature, Wikipedia, and Oscar. The major assumption is that quality text will give a good model.
Stars: ✭ 25 (-99.29%)
Mutual labels:  roberta
bert quora question pairs
BERT model fine-tuning on Quora Question Pairs (the usual recipe is sketched after this entry)
Stars: ✭ 28 (-99.2%)
Mutual labels:  bert
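
As a hedged sketch of the usual recipe, not this repository's training script: the two questions are encoded jointly as one `[CLS] q1 [SEP] q2 [SEP]` sequence and fed to a two-label sequence classifier. The checkpoint and example data below are illustrative.

```python
# Sentence-pair fine-tuning setup for Quora Question Pairs
# (illustrative sketch using HuggingFace transformers).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # duplicate / not duplicate

# Passing two text lists encodes each pair as [CLS] q1 [SEP] q2 [SEP].
batch = tok(["How do I learn Python?"],
            ["What is the best way to learn Python?"],
            truncation=True, padding=True, return_tensors="pt")
labels = torch.tensor([1])                    # 1 = duplicate pair
loss = model(**batch, labels=labels).loss     # cross-entropy loss
loss.backward()                               # then step an optimizer as usual
```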
label-studio-transformers
Label data using HuggingFace's transformers and automatically get a prediction service
Stars: ✭ 117 (-96.66%)
Mutual labels:  bert
BERT-BiLSTM-CRF
A Keras implementation of BERT-BiLSTM-CRF (the architecture is sketched below)
Stars: ✭ 40 (-98.86%)
Mutual labels:  bert
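
The repository is a Keras implementation; purely as an illustration of the same BERT → BiLSTM → CRF tagging stack, here is a PyTorch sketch using the third-party pytorch-crf package (`pip install pytorch-crf`). All names, sizes, and the checkpoint are assumptions, not the repo's code.

```python
# BERT -> BiLSTM -> CRF sequence-tagging sketch in PyTorch
# (illustrative only; the listed project is written in Keras).
import torch.nn as nn
from torchcrf import CRF
from transformers import AutoModel

class BertBiLSTMCRF(nn.Module):
    def __init__(self, num_tags, encoder_name="bert-base-chinese", hidden=256):
        super().__init__()
        self.bert = AutoModel.from_pretrained(encoder_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, num_tags)  # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        states, _ = self.lstm(self.bert(
            input_ids=input_ids,
            attention_mask=attention_mask).last_hidden_state)
        emissions = self.emit(states)
        mask = attention_mask.bool()
        if tags is not None:
            return -self.crf(emissions, tags, mask=mask)  # NLL training loss
        return self.crf.decode(emissions, mask=mask)      # best tag paths
```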
SemEval2019Task3
Code for ANA at SemEval-2019 Task 3
Stars: ✭ 41 (-98.83%)
Mutual labels:  bert
robo-vln
PyTorch code for the ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
Stars: ✭ 34 (-99.03%)
Mutual labels:  bert
consistency
Implementation of models in our EMNLP 2019 paper: A Logic-Driven Framework for Consistency of Neural Models
Stars: ✭ 26 (-99.26%)
Mutual labels:  bert
bangla-bert
Bangla-Bert is a pretrained BERT model for the Bengali language
Stars: ✭ 41 (-98.83%)
Mutual labels:  bert
anonymisation
Anonymization of legal cases (Fr) based on Flair embeddings
Stars: ✭ 85 (-97.57%)
Mutual labels:  bert
bert tokenization for java
This is a Java version of the Chinese tokenization described in BERT.
Stars: ✭ 39 (-98.89%)
Mutual labels:  bert
Transformers-Tutorials
This repository contains demos I made with the Transformers library by HuggingFace.
Stars: ✭ 2,828 (-19.2%)
Mutual labels:  bert
PIE
Fast + Non-Autoregressive Grammatical Error Correction using BERT. Code and pre-trained models for the paper "Parallel Iterative Edit Models for Local Sequence Transduction": www.aclweb.org/anthology/D19-1435.pdf (EMNLP-IJCNLP 2019)
Stars: ✭ 164 (-95.31%)
Mutual labels:  bert
kwx
BERT, LDA, and TFIDF based keyword extraction in Python
Stars: ✭ 33 (-99.06%)
Mutual labels:  bert
are-16-heads-really-better-than-1
Code for the paper "Are Sixteen Heads Really Better than One?"
Stars: ✭ 128 (-96.34%)
Mutual labels:  bert
KAREN
KAREN: Unifying Hatespeech Detection and Benchmarking
Stars: ✭ 18 (-99.49%)
Mutual labels:  bert
NLP-Review-Scorer
Score your NLP paper review
Stars: ✭ 25 (-99.29%)
Mutual labels:  bert
Kevinpro-NLP-demo
All the NLP you need, here. Personal implementations of some fun NLP demos; currently contains PyTorch implementations of 13 NLP applications
Stars: ✭ 117 (-96.66%)
Mutual labels:  bert
bern
A neural named entity recognition and multi-type normalization tool for biomedical text mining
Stars: ✭ 151 (-95.69%)
Mutual labels:  bert
AnnA Anki neuronal Appendix
Using machine learning on your Anki collection to enhance scheduling via semantic clustering and semantic similarity
Stars: ✭ 39 (-98.89%)
Mutual labels:  bert
openroberta-lab
The programming environment »Open Roberta Lab« by Fraunhofer IAIS enables children and adolescents to program robots. A variety of different programming blocks are provided to program motors and sensors of the robot. Open Roberta Lab uses an approach of graphical programming so that beginners can seamlessly start coding. As a cloud-based applica…
Stars: ✭ 98 (-97.2%)
Mutual labels:  roberta
FinBERT
A Pretrained BERT Model for Financial Communications. https://arxiv.org/abs/2006.08097
Stars: ✭ 193 (-94.49%)
Mutual labels:  bert
WSDM-Cup-2019
[ACM-WSDM] 3rd place solution at WSDM Cup 2019, Fake News Classification on Kaggle.
Stars: ✭ 62 (-98.23%)
Mutual labels:  bert
hard-label-attack
Natural Language Attacks in a Hard Label Black Box Setting.
Stars: ✭ 26 (-99.26%)
Mutual labels:  bert
SQUAD2.Q-Augmented-Dataset
Augmented version of SQuAD 2.0 for questions
Stars: ✭ 31 (-99.11%)
Mutual labels:  bert
Cross-Lingual-MRC
Cross-Lingual Machine Reading Comprehension (EMNLP 2019)
Stars: ✭ 66 (-98.11%)
Mutual labels:  bert
NAG-BERT
[EACL'21] Non-Autoregressive Text Generation with Pre-trained Language Models
Stars: ✭ 47 (-98.66%)
Mutual labels:  bert
61-120 of 226 similar projects