
kpe / Bert For Tf2

License: MIT
A Keras TensorFlow 2.0 implementation of BERT, ALBERT and adapter-BERT.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Bert For Tf2

Omninet
Official Pytorch implementation of "OmniNet: A unified architecture for multi-modal multi-task learning" | Authors: Subhojeet Pramanik, Priyanka Agrawal, Aman Hussain
Stars: ✭ 448 (-34.41%)
Mutual labels:  transformer
Athena
an open-source implementation of sequence-to-sequence based speech processing engine
Stars: ✭ 542 (-20.64%)
Mutual labels:  transformer
Typescript Is
Stars: ✭ 595 (-12.88%)
Mutual labels:  transformer
Nlp Paper
NLP Paper
Stars: ✭ 484 (-29.14%)
Mutual labels:  transformer
Former
Simple transformer implementation from scratch in pytorch.
Stars: ✭ 500 (-26.79%)
Mutual labels:  transformer
Bert paper chinese translation
Chinese translation of the paper "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
Stars: ✭ 564 (-17.42%)
Mutual labels:  transformer
Bert Pytorch
Google AI 2018 BERT pytorch implementation
Stars: ✭ 4,642 (+579.65%)
Mutual labels:  transformer
Deep Ctr Prediction
CTR prediction models based on deep learning (deep learning based CTR estimation models for ad recommendation)
Stars: ✭ 628 (-8.05%)
Mutual labels:  transformer
Rust Bert
Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
Stars: ✭ 510 (-25.33%)
Mutual labels:  transformer
React Native Svg Transformer
Import SVG files in your React Native project the same way that you would in a Web application.
Stars: ✭ 568 (-16.84%)
Mutual labels:  transformer
Awesome Visual Transformer
Collect some papers about transformer with vision. Awesome Transformer with Computer Vision (CV)
Stars: ✭ 475 (-30.45%)
Mutual labels:  transformer
Lightseq
LightSeq: A High Performance Inference Library for Sequence Processing and Generation
Stars: ✭ 501 (-26.65%)
Mutual labels:  transformer
Speech Transformer
A PyTorch implementation of Speech Transformer, an End-to-End ASR with Transformer network on Mandarin Chinese.
Stars: ✭ 565 (-17.28%)
Mutual labels:  transformer
Seq2seqchatbots
A wrapper around tensor2tensor to flexibly train, interact, and generate data for neural chatbots.
Stars: ✭ 466 (-31.77%)
Mutual labels:  transformer
Wenet
Production First and Production Ready End-to-End Speech Recognition Toolkit
Stars: ✭ 617 (-9.66%)
Mutual labels:  transformer
Jukebox
Code for the paper "Jukebox: A Generative Model for Music"
Stars: ✭ 4,863 (+612.01%)
Mutual labels:  transformer
Android Viewpager Transformers
A collection of view pager transformers
Stars: ✭ 546 (-20.06%)
Mutual labels:  transformer
Laravel Responder
A Laravel Fractal package for building API responses, giving you the power of Fractal with Laravel's elegancy.
Stars: ✭ 673 (-1.46%)
Mutual labels:  transformer
Awesome Fast Attention
list of efficient attention modules
Stars: ✭ 627 (-8.2%)
Mutual labels:  transformer
Awesome Bert Nlp
A curated list of NLP resources focused on BERT, attention mechanism, Transformer networks, and transfer learning.
Stars: ✭ 567 (-16.98%)
Mutual labels:  transformer

BERT for TensorFlow v2

|Build Status| |Coverage Status| |Version Status| |Python Versions| |Downloads|

This repo contains a TensorFlow 2.0_ Keras_ implementation of google-research/bert_, with support for loading the original pre-trained weights_ and producing activations numerically identical to those calculated by the original model.

ALBERT_ and adapter-BERT_ are also supported by setting the corresponding configuration parameters (shared_layer=True and embedding_size for ALBERT_, adapter_size for adapter-BERT_). Setting both results in an adapter-ALBERT: the BERT parameters are shared across all layers, while every layer is adapted with a layer-specific adapter.
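
For example, a minimal sketch of such an adapter-ALBERT configuration (the parameter values below are illustrative, not taken from a released model):

.. code:: python

from bert import BertModelLayer

# illustrative values: share the encoder weights (ALBERT), factorize the
# embeddings (ALBERT), and add small per-layer adapters (adapter-BERT)
l_bert = BertModelLayer(**BertModelLayer.Params(
    vocab_size     = 30000,
    shared_layer   = True,    # ALBERT: share the transformer encoder across layers
    embedding_size = 128,     # ALBERT: wordpiece embedding (projection) size
    adapter_size   = 64,      # adapter-BERT: per-layer adapter size
    name           = "adapter_albert"
))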

The implementation is built from scratch using only basic TensorFlow operations, following the code in google-research/bert/modeling.py_ (but skipping dead code and applying some simplifications). It also utilizes kpe/params-flow_ to reduce common Keras boilerplate code (related to passing model and layer configuration arguments).

bert-for-tf2_ should work with both TensorFlow 2.0_ and TensorFlow 1.14_ or newer.

NEWS

  • 30.Jul.2020 - added a VERBOSE=0 environment variable for suppressing stdout output (see the sketch after this list).

  • 06.Apr.2020 - using the latest py-params, introducing a WithParams base for Layer and Model. See the news in kpe/py-params_ for how to update (the _construct() signature has changed and requires calling super()._construct()).

  • 06.Jan.2020 - support for loading the tar format weights from google-research/ALBERT.

  • 18.Nov.2019 - ALBERT tokenization added (make sure to import as from bert import albert_tokenization or from bert import bert_tokenization).

  • 08.Nov.2019 - using v2 by default when loading the TFHub/albert_ weights of google-research/ALBERT_.

  • 05.Nov.2019 - minor ALBERT word embeddings refactoring (word_embeddings_2 -> word_embeddings_projector) and related parameter freezing fixes.

  • 04.Nov.2019 - support for extra (task specific) token embeddings using negative token ids.

  • 29.Oct.2019 - support for loading of the pre-trained ALBERT weights released by google-research/ALBERT_ at TFHub/albert_.

  • 11.Oct.2019 - support for loading of the pre-trained ALBERT weights released by brightmart/albert_zh ALBERT for Chinese_.

  • 10.Oct.2019 - support for ALBERT_ through the shared_layer=True and embedding_size=128 params.

  • 03.Sep.2019 - walkthrough on fine tuning with adapter-BERT and storing the fine tuned fraction of the weights in a separate checkpoint (see tests/test_adapter_finetune.py).

  • 02.Sep.2019 - support for extending the token type embeddings of a pre-trained model by returning the mismatched weights in load_stock_weights() (see tests/test_extend_segments.py).

  • 25.Jul.2019 - there are now two colab notebooks under examples/ showing how to fine-tune an IMDB Movie Reviews sentiment classifier from pre-trained BERT weights using an adapter-BERT_ model architecture on a GPU or TPU in Google Colab.

  • 28.Jun.2019 - v.0.3.0 supports adapter-BERT_ (google-research/adapter-bert_) for "Parameter-Efficient Transfer Learning for NLP", i.e. fine-tuning small overlay adapter layers over BERT's transformer encoders without changing the frozen BERT weights.
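
Regarding the VERBOSE=0 item above, a minimal sketch (assuming the variable is simply read from the process environment, so it can also be exported in the shell before starting Python):

.. code:: python

import os

# suppress bert-for-tf2 stdout output; set before importing/using bert
os.environ["VERBOSE"] = "0"

import bert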

LICENSE

MIT. See License File <https://github.com/kpe/bert-for-tf2/blob/master/LICENSE.txt>_.

Install

bert-for-tf2 is on the Python Package Index (PyPI):

::

pip install bert-for-tf2
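
A quick way to verify the installation (assuming the package exposes a __version__ attribute, which is not documented here):

.. code:: python

import bert

print(bert.__version__)   # hypothetical sanity check that the install worked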

Usage

BERT in bert-for-tf2 is implemented as a Keras layer. You could instantiate it like this:

.. code:: python

from bert import BertModelLayer

l_bert = BertModelLayer(**BertModelLayer.Params(
  vocab_size               = 16000,        # embedding params
  use_token_type           = True,
  use_position_embeddings  = True,
  token_type_vocab_size    = 2,

  num_layers               = 12,           # transformer encoder params
  hidden_size              = 768,
  hidden_dropout           = 0.1,
  intermediate_size        = 4*768,
  intermediate_activation  = "gelu",

  adapter_size             = None,         # see arXiv:1902.00751 (adapter-BERT)

  shared_layer             = False,        # True for ALBERT (arXiv:1909.11942)
  embedding_size           = None,         # None for BERT, wordpiece embedding size for ALBERT

  name                     = "bert"        # any other Keras layer params
))

or by using the bert_config.json from a pre-trained google model_:

.. code:: python

import bert

model_dir = ".models/uncased_L-12_H-768_A-12"

bert_params = bert.params_from_pretrained_ckpt(model_dir)
l_bert = bert.BertModelLayer.from_params(bert_params, name="bert")

now you can use the BERT layer in your Keras model like this:

.. code:: python

from tensorflow import keras

max_seq_len = 128
l_input_ids      = keras.layers.Input(shape=(max_seq_len,), dtype='int32')
l_token_type_ids = keras.layers.Input(shape=(max_seq_len,), dtype='int32')

# using the default token_type/segment id 0
output = l_bert(l_input_ids)                              # output: [batch_size, max_seq_len, hidden_size]
model = keras.Model(inputs=l_input_ids, outputs=output)
model.build(input_shape=(None, max_seq_len))

# provide a custom token_type/segment id as a layer input
output = l_bert([l_input_ids, l_token_type_ids])          # [batch_size, max_seq_len, hidden_size]
model = keras.Model(inputs=[l_input_ids, l_token_type_ids], outputs=output)
model.build(input_shape=[(None, max_seq_len), (None, max_seq_len)])

if you choose to use adapter-BERT_ by setting the adapter_size parameter, you will probably also want to freeze all the original BERT layers by calling:

.. code:: python

l_bert.apply_adapter_freeze()
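
For context, a minimal sketch of where this call fits in an adapter fine-tuning setup (the classification head and optimizer settings below are illustrative, not prescribed by the library):

.. code:: python

from tensorflow import keras

# illustrative classification head on top of the [CLS] token
model = keras.models.Sequential([
    keras.layers.InputLayer(input_shape=(128,)),
    l_bert,
    keras.layers.Lambda(lambda x: x[:, 0, :]),
    keras.layers.Dense(2)
])
model.build(input_shape=(None, 128))

# freeze the original BERT weights so only the adapters (and the new head) are trained
l_bert.apply_adapter_freeze()

model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))

After compiling, the pre-trained weights are loaded as shown in the next step; see tests/test_adapter_finetune.py for the complete walkthrough.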

and once the model has been built or compiled, the original pre-trained weights can be loaded into the BERT layer:

.. code:: python

import os
import bert

bert_ckpt_file = os.path.join(model_dir, "bert_model.ckpt")
bert.load_stock_weights(l_bert, bert_ckpt_file)

N.B. see tests/test_bert_activations.py_ for a complete example.

FAQ

  1. In all the examples below, please note the line:

.. code:: python

# use in a Keras Model here, and call model.build()

for a quick test, you can replace it with something like:

.. code:: python

model = keras.models.Sequential([
  keras.layers.InputLayer(input_shape=(128,)),
  l_bert,
  keras.layers.Lambda(lambda x: x[:, 0, :]),
  keras.layers.Dense(2)
])
model.build(input_shape=(None, 128))
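
To sanity-check the resulting model, a small illustrative run with dummy token ids (not part of the original example):

.. code:: python

import numpy as np

dummy_token_ids = np.zeros((1, 128), dtype=np.int32)   # a single, all-padding sequence
logits = model.predict(dummy_token_ids)                # shape: (1, 2)
print(logits.shape)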

  2. How to use BERT with the google-research/bert_ pre-trained weights?

.. code:: python

model_name = "uncased_L-12_H-768_A-12" model_dir = bert.fetch_google_bert_model(model_name, ".models") model_ckpt = os.path.join(model_dir, "bert_model.ckpt")

bert_params = bert.params_from_pretrained_ckpt(model_dir) l_bert = bert.BertModelLayer.from_params(bert_params, name="bert")

use in a Keras Model here, and call model.build()

bert.load_bert_weights(l_bert, model_ckpt) # should be called after model.build()

  3. How to use ALBERT with the google-research/ALBERT_ pre-trained weights (fetching from TFHub)?

see tests/nonci/test_load_pretrained_weights.py <https://github.com/kpe/bert-for-tf2/blob/master/tests/nonci/test_load_pretrained_weights.py>_:

.. code:: python

model_name = "albert_base" model_dir = bert.fetch_tfhub_albert_model(model_name, ".models") model_params = bert.albert_params(model_name) l_bert = bert.BertModelLayer.from_params(model_params, name="albert")

use in a Keras Model here, and call model.build()

bert.load_albert_weights(l_bert, albert_dir) # should be called after model.build()

  4. How to use ALBERT with the google-research/ALBERT_ pre-trained weights (non TFHub)?

see tests/nonci/test_load_pretrained_weights.py <https://github.com/kpe/bert-for-tf2/blob/master/tests/nonci/test_load_pretrained_weights.py>_:

.. code:: python

model_name = "albert_base_v2" model_dir = bert.fetch_google_albert_model(model_name, ".models") model_ckpt = os.path.join(albert_dir, "model.ckpt-best")

model_params = bert.albert_params(model_dir) l_bert = bert.BertModelLayer.from_params(model_params, name="albert")

use in a Keras Model here, and call model.build()

bert.load_albert_weights(l_bert, model_ckpt) # should be called after model.build()

  5. How to use ALBERT with the brightmart/albert_zh_ pre-trained weights?

see tests/nonci/test_albert.py <https://github.com/kpe/bert-for-tf2/blob/master/tests/nonci/test_albert.py>_:

.. code:: python

model_name = "albert_base" model_dir = bert.fetch_brightmart_albert_model(model_name, ".models") model_ckpt = os.path.join(model_dir, "albert_model.ckpt")

bert_params = bert.params_from_pretrained_ckpt(model_dir) l_bert = bert.BertModelLayer.from_params(bert_params, name="bert")

use in a Keras Model here, and call model.build()

bert.load_albert_weights(l_bert, model_ckpt) # should be called after model.build()

  6. How to tokenize the input for the google-research/bert_ models?

.. code:: python

do_lower_case = not (model_name.find("cased") == 0 or model_name.find("multi_cased") == 0)
bert.bert_tokenization.validate_case_matches_checkpoint(do_lower_case, model_ckpt)
vocab_file = os.path.join(model_dir, "vocab.txt")
tokenizer = bert.bert_tokenization.FullTokenizer(vocab_file, do_lower_case)
tokens = tokenizer.tokenize("Hello, BERT-World!")
token_ids = tokenizer.convert_tokens_to_ids(tokens)
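
To feed these ids into the Keras models shown above, one would typically also add the special tokens and pad to max_seq_len. A hedged sketch (the [CLS]/[SEP] handling and padding id 0 are standard BERT conventions, not taken from this README):

.. code:: python

max_seq_len = 128
tokens    = ["[CLS]"] + tokenizer.tokenize("Hello, BERT-World!") + ["[SEP]"]
token_ids = tokenizer.convert_tokens_to_ids(tokens)
token_ids = token_ids + [0] * (max_seq_len - len(token_ids))   # pad with id 0 up to max_seq_len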

  7. How to tokenize the input for brightmart/albert_zh?

.. code:: python

import params_flow as pf

# fetch the vocab file
albert_zh_vocab_url = "https://raw.githubusercontent.com/brightmart/albert_zh/master/albert_config/vocab.txt"
vocab_file = pf.utils.fetch_url(albert_zh_vocab_url, model_dir)

tokenizer = bert.albert_tokenization.FullTokenizer(vocab_file)
tokens = tokenizer.tokenize("你好世界")
token_ids = tokenizer.convert_tokens_to_ids(tokens)

  8. How to tokenize the input for the google-research/ALBERT_ models?

.. code:: python

import os
import sentencepiece as spm

spm_model = os.path.join(model_dir, "assets", "30k-clean.model")
sp = spm.SentencePieceProcessor()
sp.load(spm_model)
do_lower_case = True

processed_text = bert.albert_tokenization.preprocess_text("Hello, World!", lower=do_lower_case)
token_ids = bert.albert_tokenization.encode_ids(sp, processed_text)

  9. How to tokenize the input for the Chinese google-research/ALBERT_ models?

.. code:: python

import os
import bert

vocab_file = os.path.join(model_dir, "vocab.txt")
tokenizer = bert.albert_tokenization.FullTokenizer(vocab_file=vocab_file)
tokens = tokenizer.tokenize(u"你好世界")
token_ids = tokenizer.convert_tokens_to_ids(tokens)

Resources

  • BERT_ - BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  • adapter-BERT_ - adapter-BERT: Parameter-Efficient Transfer Learning for NLP
  • ALBERT_ - ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations
  • google-research/bert_ - the original BERT_ implementation
  • google-research/ALBERT_ - the original ALBERT_ implementation by Google
  • google-research/albert(old)_ - the old location of the original ALBERT_ implementation by Google
  • brightmart/albert_zh_ - pre-trained ALBERT_ weights for Chinese
  • kpe/params-flow_ - A Keras coding style for reducing Keras_ boilerplate code in custom layers by utilizing kpe/py-params_

.. _kpe/params-flow: https://github.com/kpe/params-flow
.. _kpe/py-params: https://github.com/kpe/py-params
.. _bert-for-tf2: https://github.com/kpe/bert-for-tf2

.. _Keras: https://keras.io
.. _pre-trained weights: https://github.com/google-research/bert#pre-trained-models
.. _google-research/bert: https://github.com/google-research/bert
.. _google-research/bert/modeling.py: https://github.com/google-research/bert/blob/master/modeling.py
.. _BERT: https://arxiv.org/abs/1810.04805
.. _pre-trained google model: https://github.com/google-research/bert
.. _tests/test_bert_activations.py: https://github.com/kpe/bert-for-tf2/blob/master/tests/test_compare_activations.py
.. _TensorFlow 2.0: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf
.. _TensorFlow 1.14: https://www.tensorflow.org/versions/r1.14/api_docs/python/tf

.. _google-research/adapter-bert: https://github.com/google-research/adapter-bert/
.. _adapter-BERT: https://arxiv.org/abs/1902.00751
.. _ALBERT: https://arxiv.org/abs/1909.11942
.. _brightmart/albert_zh ALBERT for Chinese: https://github.com/brightmart/albert_zh
.. _brightmart/albert_zh: https://github.com/brightmart/albert_zh
.. _google ALBERT weights: https://github.com/google-research/google-research/tree/master/albert
.. _google-research/albert(old): https://github.com/google-research/google-research/tree/master/albert
.. _google-research/ALBERT: https://github.com/google-research/ALBERT
.. _TFHub/albert: https://tfhub.dev/google/albert_base/2

.. |Build Status| image:: https://travis-ci.com/kpe/bert-for-tf2.svg?branch=master
   :target: https://travis-ci.com/kpe/bert-for-tf2
.. |Coverage Status| image:: https://coveralls.io/repos/kpe/bert-for-tf2/badge.svg?branch=master
   :target: https://coveralls.io/r/kpe/bert-for-tf2?branch=master
.. |Version Status| image:: https://badge.fury.io/py/bert-for-tf2.svg
   :target: https://badge.fury.io/py/bert-for-tf2
.. |Python Versions| image:: https://img.shields.io/pypi/pyversions/bert-for-tf2.svg
.. |Downloads| image:: https://img.shields.io/pypi/dm/bert-for-tf2.svg
.. |Twitter| image:: https://img.shields.io/twitter/follow/siddhadev?logo=twitter&label=&style=
   :target: https://twitter.com/intent/user?screen_name=siddhadev
