Awesome BERT & Transfer Learning in NLP

This repository contains a hand-curated list of great machine (deep) learning resources for Natural Language Processing (NLP), with a focus on Bidirectional Encoder Representations from Transformers (BERT), the attention mechanism, Transformer architectures/networks, and transfer learning in NLP.


Figure: Transformer (BERT) (Source)


Table of Contents

  • Papers
  • Articles
  • Tutorials
  • Videos
  • Official Implementations
  • Other Implementations
  • Transfer Learning in NLP
  • Books
  • Other Resources
  • Tools
  • Tasks
  • License

Papers

  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
  2. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai, Zhilin Yang, Yiming Yang, William W. Cohen, Jaime Carbonell, Quoc V. Le and Ruslan Salakhutdinov.
  • Uses smart caching to improve the learning of long-term dependencies in the Transformer. Key results: state-of-the-art on 5 language modeling benchmarks, including a perplexity of 21.8 on One Billion Word (LM1B) and 0.99 bits per character on enwik8. The authors claim the method is more flexible, faster during evaluation (up to 1,874x speedup), generalizes well on small datasets, and is effective at modeling short and long sequences.
  3. Conditional BERT Contextual Augmentation by Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han and Songlin Hu.
  4. SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering by Chenguang Zhu, Michael Zeng and Xuedong Huang.
  5. Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever.
  6. The Evolved Transformer by David R. So, Chen Liang and Quoc V. Le.
  • They used architecture search to improve the Transformer architecture. The key idea is to use evolutionary search and to seed the initial population with the Transformer itself. The resulting architecture is better and more efficient, especially for small models.
  7. XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
  • A new pretraining method for NLP that significantly improves upon BERT on 20 tasks (e.g., SQuAD, GLUE, RACE).

  • "Transformer-XL is a shifted model (each hyper-column ends with next token) while XLNet is a direct model (each hyper-column ends with contextual representation of same token)." — Thomas Wolf.

  • Comments from HN:

    A clever dual masking-and-caching algorithm.
    • This is NOT "just throwing more compute" at the problem.
    • The authors have devised a clever dual-masking-plus-caching mechanism to induce an attention-based model to learn to predict tokens from all possible permutations of the factorization order of all other tokens in the same input sequence.
    • In expectation, the model learns to gather information from all positions on both sides of each token in order to predict the token.
      • For example, if the input sequence has four tokens, ["The", "cat", "is", "furry"], in one training step the model will try to predict "is" after seeing "The", then "cat", then "furry".
      • In another training step, the model might see "furry" first, then "The", then "cat".
      • Note that the original sequence order is always retained, e.g., the model always knows that "furry" is the fourth token.
    • The masking-and-caching algorithm that accomplishes this does not seem trivial to me.
    • The improvements to SOTA performance in a range of tasks are significant -- see tables 2, 3, 4, 5, and 6 in the paper.
  8. CTRL: Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar, Richard Socher et al. [Code].
  9. PLMpapers - BERT (Transformer, transfer learning) has catalyzed research in pretrained language models (PLMs) and has sparked many extensions. This repo contains a list of papers on PLMs.
  10. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Google Brain.
  • The group performs a systematic study of transfer learning for NLP using a unified Text-to-Text Transfer Transformer (T5) model and pushes the limits to achieve SoTA on the SuperGLUE (approaching the human baseline), SQuAD, and CNN/DM benchmarks. [Code].
  11. Reformer: The Efficient Transformer by Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya.
  • "They present techniques to reduce the time and memory complexity of Transformer, allowing batches of very long sequences (64K) to fit on one GPU. Should pave way for Transformer to be really impactful beyond NLP domain." — @hardmaru
  12. Supervised Multimodal Bitransformers for Classifying Images and Text (MMBT) by Facebook AI.
  13. A Primer in BERTology: What we know about how BERT works by Anna Rogers et al.
  • "Have you been drowning in BERT papers?". The group survey over 40 papers on BERT's linguistic knowledge, architecture tweaks, compression, multilinguality, and so on.
  14. tomohideshibata/BERT-related papers
  15. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity by Google Brain. [Code] | [Blog post (unofficial)]
  • Key idea: the architecture uses only a subset of its parameters on each training step and for each example, so the model trains much faster. Downside: the full model is so large that it won't fit in many environments. (A toy sketch of the routing idea follows this list.)
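
To make the routing idea concrete, here is a toy sketch of top-1 ("switch") expert routing in PyTorch. This is an illustration under assumptions of my own (layer sizes and names are invented, and the auxiliary load-balancing loss from the paper is omitted), not the paper's implementation:

    import torch
    import torch.nn as nn

    class Top1Routing(nn.Module):
        """Toy switch layer: each token is routed to exactly one feed-forward expert."""
        def __init__(self, d_model=64, d_ff=256, num_experts=4):
            super().__init__()
            self.router = nn.Linear(d_model, num_experts)
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
                for _ in range(num_experts)
            )

        def forward(self, x):                        # x: (num_tokens, d_model)
            probs = self.router(x).softmax(dim=-1)   # routing probabilities per token
            chosen = probs.argmax(dim=-1)            # index of the single chosen expert
            out = torch.zeros_like(x)
            for i, expert in enumerate(self.experts):
                sel = chosen == i                    # tokens assigned to expert i
                if sel.any():
                    # scale by the routing probability so the router receives gradients
                    out[sel] = probs[sel, i].unsqueeze(-1) * expert(x[sel])
            return out

    print(Top1Routing()(torch.randn(10, 64)).shape)  # torch.Size([10, 64])

Only one expert's parameters are exercised per token, which is why such models can grow very large while keeping per-token compute roughly constant.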

Articles

BERT and Transformer

  1. Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing from Google AI.
  2. The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning).
  3. Dissecting BERT by Miguel Romero and Francisco Ingham - Understand BERT in depth with an intuitive, straightforward explanation of the relevant concepts.
  4. A Light Introduction to Transformer-XL.
  5. Generalized Language Models by Lilian Weng, Research Scientist at OpenAI.
  6. What is XLNet and why it outperforms BERT
  • The Permutation Language Modeling objective is the core of XLNet.
  7. DistilBERT (from Hugging Face), released together with the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT.
  8. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper from Google Research and Toyota Technological Institute. — Improvements for more efficient parameter usage: factorized embedding parameterization, cross-layer parameter sharing, and a Sentence Order Prediction (SOP) loss to model inter-sentence coherence. [Blog post | Code]
  9. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators by Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning - A BERT variant that, like ALBERT, costs less to train. The authors trained a model that outperforms GPT using only one GPU, and matched the performance of RoBERTa with a quarter of the compute. It uses a new pre-training approach, called replaced token detection (RTD), that trains a bidirectional model while learning from all input positions (see the toy sketch after this list). [Blog post | Code]
  10. Visual Paper Summary: ALBERT (A Lite BERT)
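
To illustrate the replaced-token-detection objective, here is a heavily simplified toy sketch (my own code, not the ELECTRA implementation): a small generator proposes tokens at masked positions, and the discriminator is trained to predict, for every position, whether the token was replaced. Model sizes and the 15% masking rate are assumptions for the example.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab, d_model, batch, seq_len = 1000, 64, 8, 16

    # Toy stand-ins: the real generator and discriminator are Transformers.
    generator = nn.Sequential(nn.Embedding(vocab, d_model), nn.Linear(d_model, vocab))
    discriminator = nn.Sequential(nn.Embedding(vocab, d_model), nn.Linear(d_model, 1))

    tokens = torch.randint(0, vocab, (batch, seq_len))
    masked = torch.rand(batch, seq_len) < 0.15                 # positions to corrupt

    # The generator samples plausible replacements at the masked positions.
    with torch.no_grad():
        sampled = torch.distributions.Categorical(logits=generator(tokens)).sample()
    corrupted = torch.where(masked, sampled, tokens)

    # The discriminator learns from every position: original or replaced?
    labels = (corrupted != tokens).float()
    logits = discriminator(corrupted).squeeze(-1)
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    print(loss.item())

In the real setup the generator is trained jointly with a masked-LM loss; the point here is only that the discriminator receives a learning signal at every input position, not just the roughly 15% that were masked.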

Attention Concept

  1. The Annotated Transformer by Harvard NLP Group - Further reading to understand the "Attention Is All You Need" paper. (A minimal sketch of scaled dot-product attention, the operation at the heart of it all, follows this list.)
  2. Attention? Attention! - Attention guide by Lilian Weng from OpenAI.
  3. Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention) by Jay Alammar, an Instructor from Udacity ML Engineer Nanodegree.
  4. Making Transformer networks simpler and more efficient - FAIR released an all-attention layer to simplify the Transformer model and an adaptive attention span method to make it more efficient (reduce computation time and memory footprint).
  5. What Does BERT Look At? An Analysis of BERT’s Attention paper by Stanford NLP Group.
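
All of the resources above build on the same core operation from "Attention Is All You Need": scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. A minimal PyTorch version, with shapes chosen only for illustration:

    import math
    import torch

    def scaled_dot_product_attention(q, k, v, mask=None):
        """Attention(Q, K, V) = softmax(Q Kᵀ / sqrt(d_k)) V."""
        d_k = q.size(-1)
        scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)     # (..., q_len, k_len)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = scores.softmax(dim=-1)                      # attention weights
        return weights @ v, weights

    q = k = v = torch.randn(2, 5, 64)                         # (batch, seq_len, d_k)
    out, attn = scaled_dot_product_attention(q, k, v)
    print(out.shape, attn.shape)                              # (2, 5, 64) and (2, 5, 5)

Multi-head attention simply runs several of these in parallel on learned projections of Q, K, and V and concatenates the results.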

Transformer Architecture

  1. The Transformer blog post.
  2. The Illustrated Transformer by Jay Alammar, an Instructor from Udacity ML Engineer Nanodegree.
  3. Watch Łukasz Kaiser’s talk walking through the model and its details.
  4. Transformer-XL: Unleashing the Potential of Attention Models by Google Brain.
  5. Generative Modeling with Sparse Transformers by OpenAI - an algorithmic improvement of the attention mechanism to extract patterns from sequences 30x longer than possible previously.
  6. Stabilizing Transformers for Reinforcement Learning paper by DeepMind and CMU - they propose architectural modifications to the original Transformer and its XL variant: moving the layer norm and adding gating yields the Gated Transformer-XL (GTrXL), which substantially improves stability and learning speed (integrating experience through time) in RL. (A rough sketch of the gating layer follows this list.)
  7. The Transformer Family by Lilian Weng - since the paper "Attention Is All You Need", many new things have happened to improve the Transformer model. This post is about that.
  8. DETR (DEtection TRansformer): End-to-End Object Detection with Transformers by FAIR - 🔥 Computer vision has not yet been swept up by the Transformer revolution. DETR completely changes the architecture compared with previous object detection systems. (PyTorch code and pretrained models). "A solid swing at (non-autoregressive) end-to-end detection. Anchor boxes + Non-Max Suppression (NMS) is a mess. I was hoping detection would go end-to-end back in ~2013." — Andrej Karpathy
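
As a companion to item 6 above, here is a rough sketch of the GRU-style gating layer that GTrXL uses in place of the Transformer's residual additions. This is my reading of the paper's equations; the class name and the bias initialisation value are assumptions.

    import torch
    import torch.nn as nn

    class GRUGate(nn.Module):
        """GRU-style gate g(x, y) that replaces the residual connection x + y."""
        def __init__(self, d_model, gate_bias=2.0):
            super().__init__()
            self.Wr = nn.Linear(d_model, d_model, bias=False)
            self.Ur = nn.Linear(d_model, d_model, bias=False)
            self.Wz = nn.Linear(d_model, d_model, bias=False)
            self.Uz = nn.Linear(d_model, d_model, bias=False)
            self.Wg = nn.Linear(d_model, d_model, bias=False)
            self.Ug = nn.Linear(d_model, d_model, bias=False)
            # A positive bias pushes the gate toward the identity map early in training.
            self.bg = nn.Parameter(torch.full((d_model,), gate_bias))

        def forward(self, x, y):
            # x: residual stream input, y: sub-layer output (attention or feed-forward)
            r = torch.sigmoid(self.Wr(y) + self.Ur(x))
            z = torch.sigmoid(self.Wz(y) + self.Uz(x) - self.bg)
            h = torch.tanh(self.Wg(y) + self.Ug(r * x))
            return (1 - z) * x + z * h

    gate = GRUGate(64)
    x, y = torch.randn(2, 5, 64), torch.randn(2, 5, 64)
    print(gate(x, y).shape)                                   # torch.Size([2, 5, 64])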

Generative Pre-Training Transformer (GPT)

  1. Better Language Models and Their Implications.
  2. Improving Language Understanding with Unsupervised Learning - this is an overview of the original OpenAI GPT model.
  3. 🦄 How to build a State-of-the-Art Conversational AI with Transfer Learning by Hugging Face.
  4. The Illustrated GPT-2 (Visualizing Transformer Language Models) by Jay Alammar.
  5. MegatronLM: Training Billion+ Parameter Language Models Using GPU Model Parallelism by NVIDIA ADLR.
  6. OpenGPT-2: We Replicated GPT-2 Because You Can Too - the authors trained a 1.5 billion parameter GPT-2 model on a similarly sized text dataset and reported results comparable to the original model's.
  7. MSBuild demo of an OpenAI generative text model generating Python code [video] - The model was trained on GitHub OSS repos. It uses English-language code comments or simply function signatures to generate entire Python functions. Cool!
  8. GPT-3: Language Models are Few-Shot Learners (paper) by Tom B. Brown (OpenAI) et al. - "We train GPT-3, an autoregressive language model with 175 billion parameters 😱, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting."
  9. elyase/awesome-gpt3 - A collection of demos and articles about the OpenAI GPT-3 API.
  10. How GPT3 Works - Visualizations and Animations by Jay Alammar.
  11. GPT-Neo - An effort to replicate a GPT-3-sized model and open-source it for free. GPT-Neo is "an implementation of model parallel GPT2 & GPT3-like models, with the ability to scale up to full GPT3 sizes (and possibly more!), using the mesh-tensorflow library." [Code].

Additional Reading

  1. How to Build OpenAI's GPT-2: "The AI That's Too Dangerous to Release".
  2. OpenAI’s GPT2 - Food to Media hype or Wake Up Call?
  3. How the Transformers broke NLP leaderboards by Anna Rogers. 🔥🔥🔥
  • A well-written summary of the problems with the large models that dominate NLP these days.
  • Larger models + more data = progress in Machine Learning research ❓
  4. Transformers From Scratch tutorial by Peter Bloem.
  5. Real-time Natural Language Understanding with BERT using NVIDIA TensorRT on Google Cloud T4 GPUs achieves 2.2 ms latency for inference. Optimizations are open source on GitHub.
  6. NLP's Clever Hans Moment has Arrived by The Gradient.
  7. Language, trees, and geometry in neural networks - a series of expository notes accompanying the paper, "Visualizing and Measuring the Geometry of BERT" by Google's People + AI Research (PAIR) team.
  8. Benchmarking Transformers: PyTorch and TensorFlow by Hugging Face - a comparison of inference time (on CPU and GPU) and memory usage for a wide range of transformer architectures.
  9. Evolution of representations in the Transformer - An accessible article that presents the insights of their EMNLP 2019 paper. They look at how the representations of individual tokens in Transformers trained with different objectives change.
  10. The dark secrets of BERT - This post probes fine-tuned BERT models for linguistic knowledge. In particular, the authors analyse how many self-attention patterns with some linguistic interpretation are actually used to solve downstream tasks. TL;DR: They are unable to find evidence that linguistically interpretable self-attention maps are crucial for downstream performance.
  11. A Visual Guide to Using BERT for the First Time - Tutorial by Jay Alammar on using BERT in practice, such as for sentiment analysis on movie reviews.
  12. Turing-NLG: A 17-billion-parameter language model by Microsoft that outperforms the state of the art on many downstream NLP tasks. This work would not be possible without breakthroughs produced by the DeepSpeed library (compatible with PyTorch) and ZeRO optimizer, which can be explored more in this accompanying blog post.

Tutorials

  1. How to train a new language model from scratch using Transformers and Tokenizers tutorial by Hugging Face. 🔥

Videos

BERTology

  1. XLNet Explained by NLP Breakfasts.
  • Clear explanation. Also covers the two-stream self-attention idea.
  2. The Future of NLP by 🤗
  • A dense overview of what is currently going on in transfer learning in NLP, its limits, and future directions.
  3. The Transformer neural network architecture explained by AI Coffee Break with Letitia Parcalabescu.
  • A high-level explanation, best suited for those unfamiliar with Transformers.

Official Implementations

  1. google-research/bert - TensorFlow code and pre-trained models for BERT.

Other Implementations

PyTorch and TensorFlow

  1. 🤗 Hugging Face Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides state-of-the-art general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, CTRL...) for Natural Language Understanding (NLU) and Natural Language Generation (NLG), with over 32 pretrained models in 100+ languages and deep interoperability between TensorFlow 2.0 and PyTorch. [Paper] (A minimal usage sketch follows this list.)
  2. spacy-transformers - a library that wraps Hugging Face's Transformers in order to extract features to power NLP pipelines. It also calculates an alignment so that the Transformer features can be related back to actual words instead of just wordpieces.
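
As a flavour of the Transformers API, here is a minimal usage sketch; the checkpoint name, task, and sentence are only examples, and the classification head below is untrained, so the probabilities are arbitrary:

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

    inputs = tokenizer("BERT made transfer learning in NLP practical.", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    print(logits.softmax(dim=-1))        # class probabilities from the (untrained) head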

PyTorch

  1. codertimo/BERT-pytorch - Google AI 2018 BERT pytorch implementation.
  2. innodatalabs/tbert - PyTorch port of BERT ML model.
  3. kimiyoung/transformer-xl - Code repository associated with the Transformer-XL paper.
  4. dreamgonfly/BERT-pytorch - A PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding".
  5. dhlee347/pytorchic-bert - A Pytorch implementation of Google BERT.
  6. pingpong-ai/xlnet-pytorch - A Pytorch implementation of Google Brain XLNet.
  7. facebook/fairseq - RoBERTa: A Robustly Optimized BERT Pretraining Approach by Facebook AI Research. SoTA results on GLUE, SQuAD and RACE.
  8. NVIDIA/Megatron-LM - Ongoing research training transformer language models at scale, including: BERT.
  9. deepset-ai/FARM - Simple & flexible transfer learning for the industry.
  10. NervanaSystems/nlp-architect - NLP Architect by Intel AI. Among other things, it provides quantized versions of Transformer models and an efficient training method.
  11. kaushaltrivedi/fast-bert - A super-easy library for BERT-based NLP models, built on 🤗 Transformers and inspired by fast.ai.
  12. NVIDIA/NeMo - Neural Modules is a toolkit for conversational AI by NVIDIA. They are trying to improve speech recognition with BERT post-processing.
  13. facebook/MMBT from Facebook AI - A multimodal transformer model that combines a transformer and a computer vision model for classifying image and text.
  14. dbiir/UER-py from Tencent and RUC - Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo (with more focus on Chinese).

Keras

  1. Separius/BERT-keras - Keras implementation of BERT with pre-trained weights.
  2. CyberZHG/keras-bert - Implementation of BERT that could load official pre-trained models for feature extraction and prediction.
  3. bojone/bert4keras - Light reimplement of BERT for Keras.

TensorFlow

  1. guotong1988/BERT-tensorflow - BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
  2. kimiyoung/transformer-xl - Code repository associated with the Transformer-XL paper.
  3. zihangdai/xlnet - Code repository associated with the XLNet paper.

Chainer

  1. soskek/bert-chainer - Chainer implementation of "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding".

Transfer Learning in NLP

As Jay Alammar put it:

The year 2018 has been an inflection point for machine learning models handling text (or more accurately, Natural Language Processing or NLP for short). Our conceptual understanding of how best to represent words and sentences in a way that best captures underlying meanings and relationships is rapidly evolving. Moreover, the NLP community has been putting forward incredibly powerful components that you can freely download and use in your own models and pipelines (It's been referred to as NLP's ImageNet moment, referencing how years ago similar developments accelerated the development of machine learning in Computer Vision tasks).

One of the latest milestones in this development is the release of BERT, an event described as marking the beginning of a new era in NLP. BERT is a model that broke several records for how well models can handle language-based tasks. Soon after the release of the paper describing the model, the team also open-sourced the code of the model, and made available for download versions of the model that were already pre-trained on massive datasets. This is a momentous development since it enables anyone building a machine learning model involving language processing to use this powerhouse as a readily-available component – saving the time, energy, knowledge, and resources that would have gone to training a language-processing model from scratch.

BERT builds on top of a number of clever ideas that have been bubbling up in the NLP community recently – including but not limited to Semi-supervised Sequence Learning (by Andrew Dai and Quoc Le), ELMo (by Matthew Peters and researchers from AI2 and UW CSE), ULMFiT (by fast.ai founder Jeremy Howard and Sebastian Ruder), the OpenAI transformer (by OpenAI researchers Radford, Narasimhan, Salimans, and Sutskever), and the Transformer (Vaswani et al).

ULMFiT: Nailing down Transfer Learning in NLP

ULMFiT introduced methods to effectively utilize a lot of what the model learns during pre-training – more than just embeddings, and more than contextualized embeddings. ULMFiT introduced a language model and a process to effectively fine-tune that language model for various tasks.

NLP finally had a way to do transfer learning probably as well as Computer Vision could.
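
A condensed sketch of that recipe using the fastai library is shown below. The CSV file and column names are placeholders, and fine_tune is a simplified stand-in for the paper's full schedule (discriminative learning rates, slanted triangular schedules, and gradual unfreezing, which can also be driven manually with freeze_to):

    import pandas as pd
    from fastai.text.all import *

    df = pd.read_csv("reviews.csv")      # assumed columns: "text" and "label"

    # 1) Fine-tune a pretrained AWD-LSTM language model on the target corpus.
    dls_lm = TextDataLoaders.from_df(df, text_col="text", is_lm=True)
    lm = language_model_learner(dls_lm, AWD_LSTM, metrics=Perplexity())
    lm.fine_tune(1)
    lm.save_encoder("ft_encoder")

    # 2) Reuse the adapted encoder in a downstream classifier.
    dls_clf = TextDataLoaders.from_df(df, text_col="text", label_col="label",
                                      text_vocab=dls_lm.vocab)
    clf = text_classifier_learner(dls_clf, AWD_LSTM, metrics=accuracy)
    clf.load_encoder("ft_encoder")
    clf.fine_tune(3)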

MultiFiT: Efficient Multi-lingual Language Model Fine-tuning by Sebastian Ruder et al. MultiFiT extends ULMFiT to make it more efficient and more suitable for language modelling beyond English. (EMNLP 2019 paper)

Books

  1. Transfer Learning for Natural Language Processing - A book that is a practical primer to transfer learning techniques capable of delivering huge improvements to your NLP models.

Other Resources

  1. hanxiao/bert-as-service - Mapping a variable-length sentence to a fixed-length vector using a pretrained BERT model.
  2. brightmart/bert_language_understanding - Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN.
  3. algteam/bert-examples - BERT examples.
  4. JayYip/bert-multiple-gpu - A multiple GPU support version of BERT.
  5. HighCWu/keras-bert-tpu - Implementation of BERT that could load official pre-trained models for feature extraction and prediction on TPU.
  6. whqwill/seq2seq-keyphrase-bert - Add BERT to encoder part for https://github.com/memray/seq2seq-keyphrase-pytorch
  7. xu-song/bert_as_language_model - BERT as language model, a fork from Google official BERT implementation.
  8. Y1ran/NLP-BERT--Chinese version
  9. yuanxiaosc/Deep_dynamic_word_representation - TensorFlow code and pre-trained models for deep dynamic word representation (DDWR). It combines the BERT model and ELMo's deep context word representation.
  10. yangbisheng2009/cn-bert
  11. Willyoung2017/Bert_Attempt
  12. Pydataman/bert_examples - Some examples of BERT. run_classifier.py is based on Google BERT for the Kaggle Quora Insincere Questions Classification challenge. run_ner.py is based on the first season of the Ruijin Hospital AI contest and implements NER with BERT.
  13. guotong1988/BERT-chinese - Pre-training of deep bidirectional transformers for Chinese language understanding.
  14. zhongyunuestc/bert_multitask - Multi-task.
  15. Microsoft/AzureML-BERT - End-to-end walk through for fine-tuning BERT using Azure Machine Learning.
  16. bigboNed3/bert_serving - Export BERT model for serving.
  17. yoheikikuta/bert-japanese - BERT with SentencePiece for Japanese text.
  18. nickwalton/AIDungeon - AI Dungeon 2 is a completely AI-generated text adventure built with OpenAI's largest 1.5B-parameter GPT-2 model. It's a first-of-its-kind game that allows you to enter, and will react to, any action you can imagine.
  19. turtlesoupy/this-word-does-not-exist - "This Word Does Not Exist" is a project that allows people to train a variant of GPT-2 that makes up words, definitions and examples from scratch. We've never seen fake text so real.

Tools

  1. jessevig/bertviz - Tool for visualizing attention in the Transformer model.
  2. FastBert - A simple deep learning library that allows developers and data scientists to train and deploy BERT based models for NLP tasks beginning with text classification. The work on FastBert is inspired by fast.ai.

Tasks

Named-Entity Recognition (NER)

  1. kyzhouhzau/BERT-NER - Use Google BERT to do CoNLL-2003 NER.
  2. zhpmatrix/bert-sequence-tagging - Chinese sequence labeling.
  3. JamesGu14/BERT-NER-CLI - Bert NER command line tester with step by step setup guide.
  4. sberbank-ai/ner-bert
  5. mhcao916/NER_Based_on_BERT - Chinese NER based on the Google BERT model.
  6. macanv/BERT-BiLSMT-CRF-NER - TensorFlow solution of NER task using Bi-LSTM-CRF model with Google BERT fine-tuning.
  7. ProHiryu/bert-chinese-ner - Use the pre-trained language model BERT to do Chinese NER.
  8. FuYanzhe2/Name-Entity-Recognition - Lstm-CRF, Lattice-CRF, recent NER related papers.
  9. king-menin/ner-bert - NER task solution (BERT-Bi-LSTM-CRF) with Google BERT https://github.com/google-research.

Classification

  1. brightmart/sentiment_analysis_fine_grain - Multi-label classification with BERT; Fine Grained Sentiment Analysis from AI challenger.
  2. zhpmatrix/Kaggle-Quora-Insincere-Questions-Classification - Kaggle baseline—fine-tuning BERT and tensor2tensor based Transformer encoder solution.
  3. maksna/bert-fine-tuning-for-chinese-multiclass-classification - Fine-tune Google's pre-trained BERT model for Chinese multiclass classification.
  4. NLPScott/bert-Chinese-classification-task - BERT Chinese classification practice.
  5. fooSynaptic/BERT_classifer_trial - BERT trial for Chinese corpus classification.
  6. xiaopingzhong/bert-finetune-for-classfier - Fine-tuning the BERT model while building your own dataset for classification.
  7. Socialbird-AILab/BERT-Classification-Tutorial - Tutorial.
  8. malteos/pytorch-bert-document-classification - Enriching BERT with Knowledge Graph Embedding for Document Classification (PyTorch)

Text Generation

  1. asyml/texar - Toolkit for Text Generation and Beyond. Texar is a general-purpose text generation toolkit; it also implements BERT for classification and for text generation applications by combining it with Texar's other modules.
  2. Plug and Play Language Models: a Simple Approach to Controlled Text Generation (PPLM) paper by Uber AI.

Question Answering (QA)

  1. matthew-z/R-net - R-net in PyTorch, with BERT and ELMo.
  2. vliu15/BERT - TensorFlow implementation of BERT for QA.
  3. benywon/ChineseBert - A Chinese BERT model specifically for question answering.
  4. xzp27/BERT-for-Chinese-Question-Answering
  5. facebookresearch/SpanBERT - Question Answering on SQuAD; improving pre-training by representing and predicting spans.

Knowledge Graph

  1. sakuranew/BERT-AttributeExtraction - Using BERT for attribute extraction in knowledge graphs, via both fine-tuning and feature extraction. The methods are used to extract knowledge attributes of Baidu Encyclopedia characters.
  2. lvjianxin/Knowledge-extraction - Chinese knowledge extraction. Baseline: bi-LSTM+CRF; upgrade: BERT pre-training.

License


This repository contains a variety of content; some developed by Cedric Chee, and some from third parties. The third-party content is distributed under the license provided by those parties.

I am providing code and resources in this repository to you under an open source license. Because this is my personal repository, the license you receive to my code and resources is from me and not my employer.

The content developed by Cedric Chee is distributed under the following license:

Code

The code in this repository, including all code samples in the notebooks listed above, is released under the MIT license. Read more at the Open Source Initiative.

Text

The text content of this project is released under the CC-BY-NC-ND license. Read more at Creative Commons.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].