VinAIResearch / BERTweet

License: MIT
BERTweet: A pre-trained language model for English Tweets (EMNLP 2020)


Table of contents

  1. Introduction
  2. Main results
  3. Using BERTweet with transformers
  4. Using BERTweet with fairseq

BERTweet: A pre-trained language model for English Tweets

  • BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained following the RoBERTa pre-training procedure, using the same model configuration as BERT-base.
  • The pre-training corpus consists of 850M English Tweets (16B word tokens, ~80GB), comprising 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the COVID-19 pandemic.
  • BERTweet outperforms the strong baselines RoBERTa-base and XLM-R-base, improving on previous state-of-the-art models across three downstream Tweet NLP tasks: part-of-speech tagging, named-entity recognition and text classification.

The general architecture and experimental results of BERTweet can be found in our paper:

@inproceedings{bertweet,
title     = {{BERTweet: A pre-trained language model for English Tweets}},
author    = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year      = {2020},
pages     = {9--14}
}

Please CITE our paper when BERTweet is used to help produce published results or is incorporated into other software.

Using BERTweet with transformers

Installation

  • Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
  • Install transformers:
    • git clone https://github.com/huggingface/transformers.git
    • cd transformers
    • pip3 install --upgrade .
  • Install emoji: pip3 install emoji

Pre-trained models

Model                                #params  Arch.  Pre-training data
vinai/bertweet-base                  135M     base   845M English Tweets (cased)
vinai/bertweet-covid19-base-cased    135M     base   23M COVID-19 English Tweets (cased)
vinai/bertweet-covid19-base-uncased  135M     base   23M COVID-19 English Tweets (uncased)

As of 09/2020, we had collected a corpus of about 23M "cased" COVID-19 English Tweets and also generated an "uncased" version of this corpus. We then continued pre-training from vinai/bertweet-base on each of the "cased" and "uncased" 23M-Tweet corpora for 40 additional epochs, resulting in two BERTweet variants: vinai/bertweet-covid19-base-cased and vinai/bertweet-covid19-base-uncased, respectively.

Example usage

import torch
from transformers import AutoModel, AutoTokenizer 

bertweet = AutoModel.from_pretrained("vinai/bertweet-base")

# For transformers v4.x+: 
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)

# For transformers v3.x: 
# tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")

# INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER 😢"

input_ids = torch.tensor([tokenizer.encode(line)])

with torch.no_grad():
    features = bertweet(input_ids)  # Model outputs are tuples; features[0] holds the last hidden states
    
## With TensorFlow 2.0+:
# from transformers import TFAutoModel
# bertweet = TFAutoModel.from_pretrained("vinai/bertweet-base")
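The features returned above are token-level hidden states. A common downstream pattern is to mean-pool them (masking out padding) into one fixed-size vector per Tweet. Here is a minimal sketch with NumPy, using a random dummy array in place of the real features[0]:

```python
import numpy as np

# Dummy stand-in for the model's last hidden states:
# shape (batch=1, seq_len=5, hidden=768); in practice this comes from bertweet(input_ids)
hidden = np.random.rand(1, 5, 768)
mask = np.array([[1, 1, 1, 1, 0]])  # attention mask; the last position is padding

# Mean-pool over non-padded positions to get one 768-d vector per Tweet
summed = (hidden * mask[..., None]).sum(axis=1)
counts = mask.sum(axis=1, keepdims=True)
tweet_vec = summed / counts
print(tweet_vec.shape)  # (1, 768)
```

Mean pooling is just one option; taking the hidden state of the first (<s>) token is another common choice for classification heads.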

Normalize raw input Tweets

Before applying fastBPE to the pre-training corpus of 850M English Tweets, we tokenized these Tweets using TweetTokenizer from the NLTK toolkit and used the emoji package to translate emotion icons into text strings (here, each icon is treated as one word token). We also normalized the Tweets by converting user mentions and web/URL links into the special tokens @USER and HTTPURL, respectively. We therefore recommend applying the same pre-processing to raw input Tweets in BERTweet-based downstream applications. BERTweet provides this pre-processing step via the normalization argument.

import torch
from transformers import AutoTokenizer

# Load the AutoTokenizer with a normalization mode if the input Tweet is raw
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)

# from transformers import BertweetTokenizer
# tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)

line = "SC has first two presumptive cases of coronavirus, DHEC confirms https://postandcourier.com/health/covid19/sc-has-first-two-presumptive-cases-of-coronavirus-dhec-confirms/article_bddfe4ae-5fd3-11ea-9ce4-5f495366cee6.html?utm_medium=social&utm_source=twitter&utm_campaign=user-share… via @postandcourier"

input_ids = torch.tensor([tokenizer.encode(line)])
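For illustration, the mention/URL part of this normalization can be approximated with two regular expressions. This is a simplified sketch with a hypothetical helper name (normalize_mentions_urls); the actual tokenizer normalization also applies NLTK's TweetTokenizer and emoji translation:

```python
import re

def normalize_mentions_urls(text):
    # Convert web/URL links into the special token HTTPURL
    text = re.sub(r"https?://\S+", "HTTPURL", text)
    # Convert user mentions into the special token @USER
    text = re.sub(r"@\w+", "@USER", text)
    return text

print(normalize_mentions_urls(
    "SC has first two presumptive cases of coronavirus https://t.co/x via @postandcourier"
))
# → SC has first two presumptive cases of coronavirus HTTPURL via @USER
```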

Using BERTweet with fairseq

Please see the usage instructions in the BERTweet GitHub repository.

License

MIT License

Copyright (c) 2020 VinAI Research

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.