
nguyenvulebinh / Vietnamese Electra

Electra pre-trained model using Vietnamese corpus

Projects that are alternatives of or similar to Vietnamese Electra

Question generation
Neural question generation using transformers
Stars: ✭ 356 (+547.27%)
Mutual labels:  jupyter-notebook, natural-language-processing, transformer
Transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Stars: ✭ 55,742 (+101249.09%)
Mutual labels:  natural-language-processing, language-model, transformer
Gpt2
PyTorch Implementation of OpenAI GPT-2
Stars: ✭ 64 (+16.36%)
Mutual labels:  natural-language-processing, language-model, transformer
Nlp Tutorial
Natural Language Processing Tutorial for Deep Learning Researchers
Stars: ✭ 9,895 (+17890.91%)
Mutual labels:  jupyter-notebook, natural-language-processing, transformer
Gpt2 French
GPT-2 French demo | Démo française de GPT-2
Stars: ✭ 47 (-14.55%)
Mutual labels:  jupyter-notebook, language-model, transformer
Bert Sklearn
a sklearn wrapper for Google's BERT model
Stars: ✭ 182 (+230.91%)
Mutual labels:  jupyter-notebook, natural-language-processing, language-model
Indonesian Language Models
Indonesian Language Models and its Usage
Stars: ✭ 64 (+16.36%)
Mutual labels:  jupyter-notebook, language-model, transformer
Bertviz
Tool for visualizing attention in the Transformer model (BERT, GPT-2, Albert, XLNet, RoBERTa, CTRL, etc.)
Stars: ✭ 3,443 (+6160%)
Mutual labels:  jupyter-notebook, natural-language-processing, transformer
Awesome Bert Nlp
A curated list of NLP resources focused on BERT, attention mechanism, Transformer networks, and transfer learning.
Stars: ✭ 567 (+930.91%)
Mutual labels:  natural-language-processing, language-model, transformer
Spacy Transformers
🛸 Use pretrained transformers like BERT, XLNet and GPT-2 in spaCy
Stars: ✭ 919 (+1570.91%)
Mutual labels:  natural-language-processing, language-model
Syntree2vec
An algorithm to augment syntactic hierarchy into word embeddings
Stars: ✭ 9 (-83.64%)
Mutual labels:  jupyter-notebook, natural-language-processing
Nlp tutorials
Overview of NLP tools and techniques in python
Stars: ✭ 14 (-74.55%)
Mutual labels:  jupyter-notebook, natural-language-processing
Covid 19 Bert Researchpapers Semantic Search
BERT semantic search engine for searching literature research papers for coronavirus covid-19 in google colab
Stars: ✭ 23 (-58.18%)
Mutual labels:  jupyter-notebook, natural-language-processing
Awesome Ai Ml Dl
Awesome Artificial Intelligence, Machine Learning and Deep Learning as we learn it. Study notes and a curated list of awesome resources of such topics.
Stars: ✭ 831 (+1410.91%)
Mutual labels:  jupyter-notebook, natural-language-processing
Spago
Self-contained Machine Learning and Natural Language Processing library in Go
Stars: ✭ 854 (+1452.73%)
Mutual labels:  natural-language-processing, language-model
Nlp In Practice
Starter code to solve real world text data problems. Includes: Gensim Word2Vec, phrase embeddings, Text Classification with Logistic Regression, word count with pyspark, simple text preprocessing, pre-trained embeddings and more.
Stars: ✭ 790 (+1336.36%)
Mutual labels:  jupyter-notebook, natural-language-processing
Tensorflow In Practice Specialization
DeepLearning.AI TensorFlow Developer Professional Certificate Specialization
Stars: ✭ 29 (-47.27%)
Mutual labels:  jupyter-notebook, natural-language-processing
Crnn Pytorch
✍️ Convolutional Recurrent Neural Network in Pytorch | Text Recognition
Stars: ✭ 31 (-43.64%)
Mutual labels:  jupyter-notebook, natural-language-processing
Coursera
Quiz & Assignment of Coursera
Stars: ✭ 774 (+1307.27%)
Mutual labels:  jupyter-notebook, natural-language-processing
Mongolian Bert
Pre-trained Mongolian BERT models
Stars: ✭ 21 (-61.82%)
Mutual labels:  jupyter-notebook, natural-language-processing

ELECTRA pre-trained model using a Vietnamese corpus

Overview

ELECTRA is a method for self-supervised language representation learning. This repository contains a pre-trained ELECTRA small model (TensorFlow 2.1.0) trained on a large Vietnamese corpus (~50 GB of text).

According to the authors' description:

Inspired by generative adversarial networks (GANs), ELECTRA trains the model to distinguish between “real” and “fake” input data. Instead of corrupting the input by replacing tokens with “[MASK]” as in BERT, our approach corrupts the input by replacing some input tokens with incorrect, but somewhat plausible, fakes. For example, in the below figure, the word “cooked” could be replaced with “ate”. While this makes a bit of sense, it doesn’t fit as well with the entire context. The pre-training task requires the model (i.e., the discriminator) to then determine which tokens from the original input have been replaced or kept the same.

Figure: the ELECTRA replaced-token-detection idea
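
To make the pre-training objective concrete, here is a minimal, hypothetical sketch of how replaced-token-detection examples are built: some positions are swapped for plausible "fake" tokens and the discriminator must predict, per token, whether it was replaced. In ELECTRA the replacements come from a small masked-LM generator; the toy random replacement and the function name below are illustrative only and are not part of this repository.

import random

def make_rtd_example(tokens, fake_vocab, replace_ratio=0.15, seed=0):
    """Toy illustration of replaced-token detection.

    Returns the corrupted token sequence and per-token labels
    (1 = token was replaced, 0 = original token kept).
    """
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < replace_ratio:
            corrupted.append(rng.choice(fake_vocab))  # plausible but wrong token
            labels.append(1)
        else:
            corrupted.append(tok)                     # original token kept
            labels.append(0)
    return corrupted, labels

tokens = ["the", "chef", "cooked", "the", "meal"]
corrupted, labels = make_rtd_example(tokens, fake_vocab=["ate", "ran", "house"])
# The discriminator receives `corrupted` and is trained to predict `labels` per token.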

The entire corpus was tokenized with coccoc-tokenizer. To use this pre-trained model correctly, install the coccoc-tokenizer library first.

Prepare the environment using conda

# Create new env
conda create -n electra-tf python=3.7
conda activate electra-tf
pip install -r requirements.txt

# Install coc coc tokenizer
git clone https://github.com/coccoc/coccoc-tokenizer.git
cd coccoc-tokenizer
mkdir build && cd build
cmake -DBUILD_PYTHON=1 ..
make install
cd ../python
python setup.py install
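
After installation, a quick smoke test can confirm the tokenizer is available. This is a sketch that assumes the Python binding exposes PyTokenizer in the CocCocTokenizer module, as described in the coccoc-tokenizer README; if your version of the binding differs, adjust accordingly.

# Assumes the coccoc-tokenizer Python binding installed above;
# module/class names follow the coccoc-tokenizer README and may differ between versions.
from CocCocTokenizer import PyTokenizer

T = PyTokenizer(load_nontone_data=True)
# tokenize_option=0 is the normal word-segmentation mode
print(T.word_tokenize("Sinh viên trường Đại học Bách Khoa Hà Nội", tokenize_option=0))
# Expected output, roughly: ['Sinh_viên', 'trường', 'Đại_học', 'Bách_Khoa', 'Hà_Nội']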

Using the trained model

Follow this tutorial to use the trained model. You can also play around with it in this Colab notebook.

Extract features from ELECTRA:

from electra_model_tf2 import TFElectraDis
from tokenizers.implementations import SentencePieceBPETokenizer
import tensorflow as tf

# Create tokenizer
tokenizer = SentencePieceBPETokenizer(
    "./vocab/vocab.json",
    "./vocab/merges.txt",
)

vi_electra = TFElectraDis.from_pretrained('./model_pretrained/dis/')

tokenizer.add_special_tokens(["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"])
text = "Sinh_viên trường Đại_học Bách_Khoa Hà_Nội"
text_encode = tokenizer.encode(text)
indices = [tokenizer.token_to_id("[CLS]")] + text_encode.ids + [tokenizer.token_to_id("[SEP]")]
assert indices == [64002, 15429, 1782, 5111, 29625, 2052, 64003]

features = vi_electra(tf.constant([indices]))
assert features[0].shape == (7,)  # discriminator detect replaced word
assert len(features[1]) == 13  # discriminator features (12 hidden layers + 1 output layer)
assert features[1][-1].shape == (1, 7, 256)  # 1 sample, 7 words, 256 features dimensions
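
As a follow-up, the hidden states returned above can be pooled into a fixed-size sentence vector, for example by mean-pooling the last layer. This is a minimal sketch building on the `features` value from the snippet above; it is an illustrative pooling choice, not an API provided by the repository.

# Mean-pool the last hidden layer (shape (1, 7, 256)) into one 256-d sentence vector.
last_hidden = features[1][-1]                           # (batch, seq_len, hidden)
sentence_vector = tf.reduce_mean(last_hidden, axis=1)   # (batch, hidden)
assert sentence_vector.shape == (1, 256)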


Training

Please follow the root repository to train the model.
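
For orientation, pre-training typically involves building the pre-training dataset and then running the pre-training script. The commands below are a rough sketch assuming the root repository follows the google-research/electra interface; check its README for the exact flags and hyperparameters.

# Rough sketch, assuming a google-research/electra style interface
# 1) Turn the (tokenized) corpus into tfrecords
python build_pretraining_dataset.py \
  --corpus-dir data/corpus \
  --vocab-file data/vocab.txt \
  --output-dir data/pretrain_tfrecords \
  --max-seq-length 128 \
  --num-processes 4

# 2) Pre-train the small model
python run_pretraining.py \
  --data-dir data \
  --model-name electra_small_vi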

Convert the model

Because the root repository is based on TensorFlow 1.x, run the following commands to convert the model to TensorFlow 2.x:

pip install torch==1.4.0
python convert_tf2.py

Before running this script, make sure the necessary files are in the following folders (a sketch of the expected layout follows the list):

  • ./model_pretrained/raw_model — the checkpoint model trained with this repository
  • ./model_pretrained/config_files — the config file describing the model architecture (generator and discriminator models); the default config is the ELECTRA small model
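
A sketch of the expected layout is shown below. The checkpoint file names follow the standard TensorFlow 1.x pattern and the location of the converted output is an assumption based on the loading path used earlier; verify both against convert_tf2.py.

model_pretrained/
├── raw_model/        # TF1 checkpoint from pre-training (standard TF1 file names assumed)
│   ├── checkpoint
│   ├── model.ckpt-*.data-00000-of-00001
│   ├── model.ckpt-*.index
│   └── model.ckpt-*.meta
├── config_files/     # architecture config for generator and discriminator (ELECTRA small by default)
└── dis/              # converted TF2 discriminator loaded above via TFElectraDis.from_pretrained (output location assumed)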