
kamalkraj / minGPT-TF

License: MIT
A minimal TF2 re-implementation of the OpenAI GPT training

Programming Languages

Jupyter Notebook
Python

Projects that are alternatives to or similar to minGPT-TF

gdc
Code for the ICLR 2021 paper "A Distributional Approach to Controlled Text Generation"
Stars: ✭ 94 (+161.11%)
Mutual labels:  language-model, gpt-2, gpt3
gpt-j
A GPT-J API to use with python3 to generate text, blogs, code, and more
Stars: ✭ 101 (+180.56%)
Mutual labels:  language-model, gpt3
gpt-j-api
API for the GPT-J language model 🦜. Including a FastAPI backend and a streamlit frontend
Stars: ✭ 248 (+588.89%)
Mutual labels:  gpt, language-model
few-shot-lm
The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021)
Stars: ✭ 32 (-11.11%)
Mutual labels:  gpt, language-model
download-tweets-ai-text-gen-plus
Python script to download public Tweets from a given Twitter account into a format suitable for AI text generation
Stars: ✭ 26 (-27.78%)
Mutual labels:  gpt, gpt-2
ke-dialogue
KE-Dialogue: Injecting knowledge graph into a fully end-to-end dialogue system.
Stars: ✭ 39 (+8.33%)
Mutual labels:  gpt, gpt-2
Tokenizers
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
Stars: ✭ 5,077 (+14002.78%)
Mutual labels:  gpt, language-model
TF2-GAN
🐳 GANs implemented in TensorFlow 2.x
Stars: ✭ 61 (+69.44%)
Mutual labels:  tf2
tensorflow-ml-nlp-tf2
Hands-on materials for "Natural Language Processing with TensorFlow 2 and Machine Learning (from logistic regression to BERT and GPT-3)"
Stars: ✭ 245 (+580.56%)
Mutual labels:  tf2
mlp-gpt-jax
A GPT, made only of MLPs, in Jax
Stars: ✭ 53 (+47.22%)
Mutual labels:  language-model
dasher-web
Dasher text entry in HTML, CSS, JavaScript, and SVG
Stars: ✭ 34 (-5.56%)
Mutual labels:  language-model
fortress-royale
Team Fortress 2 battle royale gamemode
Stars: ✭ 48 (+33.33%)
Mutual labels:  tf2
subword-lstm-lm
LSTM Language Model with Subword Units Input Representations
Stars: ✭ 45 (+25%)
Mutual labels:  language-model
CallAdmin
CallAdmin is a multilingual sourcemod plugin which provides in-game report functionality
Stars: ✭ 52 (+44.44%)
Mutual labels:  tf2
wechsel
Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
Stars: ✭ 39 (+8.33%)
Mutual labels:  language-model
deepstory
Deepstory turns a text/generated text into a video where the character is animated to speak your story using his/her voice.
Stars: ✭ 61 (+69.44%)
Mutual labels:  gpt-2
NLP-paper
🎨 🎨 NLP (natural language processing) tutorial 🎨🎨 https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (-36.11%)
Mutual labels:  gpt
PCPM
Presenting Collection of Pretrained Models. Links to pretrained models in NLP and voice.
Stars: ✭ 21 (-41.67%)
Mutual labels:  language-model
ml
machine learning
Stars: ✭ 29 (-19.44%)
Mutual labels:  language-model
backprop
Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.
Stars: ✭ 229 (+536.11%)
Mutual labels:  language-model

minGPT-TF

A TensorFlow re-implementation of minGPT.

Notebooks

play_math.ipynb and play_char.ipynb were trained in Colab; links at the top of each notebook open them for training in Colab. In play_char.ipynb the batch_size is reduced to fit into Colab GPU memory, so adjust the parameters to match your GPU memory.
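
A minimal sketch of that adjustment, assuming the TF port keeps the same TrainerConfig fields as the PyTorch example further down in this readme:

# reduce batch_size so play_char fits in Colab GPU memory (illustrative values;
# check mingpt/trainer.py for the actual field names and defaults)
from mingpt.trainer import TrainerConfig
tconf = TrainerConfig(max_epochs=2, batch_size=128)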

minGPT - Readme

minGPT

A PyTorch re-implementation of GPT training. minGPT tries to be small, clean, interpretable and educational, as most of the currently available ones are a bit sprawling. GPT is not a complicated model and this implementation is appropriately about 300 lines of code, including boilerplate and a totally unnecessary custom causal self-attention module. Anyway, all that's going on is that a sequence of indices goes into a sequence of transformer blocks, and a probability distribution over the next index comes out. The rest of the complexity is just being clever with batching (both across examples and over sequence length) so that training is efficient.
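
As a minimal sketch of that flow (illustrative PyTorch, not the actual mingpt/model.py code; layer sizes and class names here are made up), one causal decoder step boils down to:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCausalSelfAttention(nn.Module):
    """Illustrative causal self-attention: each position can only attend to earlier positions."""
    def __init__(self, n_embd=64, n_head=4, block_size=128):
        super().__init__()
        self.attn = nn.MultiheadAttention(n_embd, n_head, batch_first=True)
        # upper-triangular boolean mask blocks attention to future tokens
        mask = torch.triu(torch.ones(block_size, block_size, dtype=torch.bool), diagonal=1)
        self.register_buffer("mask", mask)

    def forward(self, x):  # x: (batch, seq, n_embd)
        T = x.size(1)
        y, _ = self.attn(x, x, x, attn_mask=self.mask[:T, :T])
        return y

# indices -> embeddings -> attention (real blocks add MLPs, layernorms, residuals) -> logits
vocab_size, n_embd = 100, 64
tok_emb = nn.Embedding(vocab_size, n_embd)
head = nn.Linear(n_embd, vocab_size)
idx = torch.randint(0, vocab_size, (1, 10))              # a batch of token indices
logits = head(TinyCausalSelfAttention()(tok_emb(idx)))   # (1, 10, vocab_size)
probs = F.softmax(logits[:, -1, :], dim=-1)              # distribution over the next index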

The core minGPT "library" (hah) is two files: mingpt/model.py contains the actual Transformer model definition and mingpt/trainer.py is (GPT-independent) PyTorch boilerplate that trains the model. The attached Jupyter notebooks then show how the "library" (hah) can be used to train sequence models:

  • play_math.ipynb trains a GPT focused on addition (inspired by the addition section in the GPT-3 paper)
  • play_char.ipynb trains a GPT to be a character-level language model on arbitrary text, similar to my older char-rnn but with a transformer instead of an RNN
  • play_words.ipynb a BPE version that does not yet exist

With a bpe encoder, distributed training and maybe fp16 this implementation may be able to reproduce GPT-1/GPT-2 results, though I haven't tried $$$. GPT-3 is likely out of reach as my understanding is that it does not fit into GPU memory and requires a more careful model-parallel treatment.

Example usage

This code is simple enough to just hack inline, not "used", but the current API looks something like:

# you're on your own to define a class that returns individual examples as PyTorch LongTensors
import torch
from torch.utils.data import Dataset
train_dataset = MyDataset(...)
test_dataset = MyDataset(...)

# construct a GPT model
from mingpt.model import GPT, GPTConfig
mconf = GPTConfig(vocab_size, block_size, n_layer=12, n_head=12, n_embd=768) # a GPT-1
model = GPT(mconf)

# construct a trainer
from mingpt.trainer import Trainer, TrainerConfig
tconf = TrainerConfig(max_epochs=10, batch_size=256)
trainer = Trainer(model, train_dataset, test_dataset, tconf)
trainer.train()
# (... enjoy the show for a while... )

# sample from the model (the [None, ...] and [0] are to push/pop a needed dummy batch dimension)
from mingpt.utils import sample
x = torch.tensor([1, 2, 3], dtype=torch.long)[None, ...] # context conditioning
y = sample(model, x, steps=30, temperature=1.0, sample=True, top_k=5)[0]
print(y) # our model filled in the integer sequence with 30 additional likely integers
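
Under the hood, sample() takes the logits at the last position, applies temperature, optionally keeps only the top_k most likely tokens, and draws the next index; roughly like this (an illustrative restatement, see mingpt/utils.py for the actual code):

import torch
import torch.nn.functional as F

def sample_step(logits, temperature=1.0, top_k=None):
    # logits: (batch, vocab_size) for the last position in the sequence
    logits = logits / temperature
    if top_k is not None:
        v, _ = torch.topk(logits, top_k)
        logits[logits < v[:, [-1]]] = -float('inf')  # drop everything outside the top k
    probs = F.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1)   # (batch, 1) sampled next indices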

References

Code:

  • openai/gpt-2 has the model but not the training code, and in TensorFlow
  • openai/image-gpt has some more modern GPT-3-like modifications in its code; a good reference as well
  • huggingface/transformers has a language-modeling example. It is full-featured but as a result also somewhat challenging to trace. E.g. some large functions have as much as 90% of their code behind various branching statements that is unused in the default setting of simple language modeling.

Papers + some implementation notes:

Improving Language Understanding by Generative Pre-Training (GPT-1)

  • Our model largely follows the original transformer work
  • We trained a 12-layer decoder-only transformer with masked self-attention heads (768 dimensional states and 12 attention heads). For the position-wise feed-forward networks, we used 3072 dimensional inner states.
  • Adam max learning rate of 2.5e-4. (later GPT-3 for this model size uses 6e-4)
  • LR decay: increased linearly from zero over the first 2000 updates and annealed to 0 using a cosine schedule (a short sketch of this schedule follows the list)
  • We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens.
  • Since layernorm is used extensively throughout the model, a simple weight initialization of N(0, 0.02) was sufficient
  • bytepair encoding (BPE) vocabulary with 40,000 merges
  • residual, embedding, and attention dropouts with a rate of 0.1 for regularization.
  • modified version of L2 regularization proposed in (37), with w = 0.01 on all non bias or gain weights
  • For the activation function, we used the Gaussian Error Linear Unit (GELU).
  • We used learned position embeddings instead of the sinusoidal version proposed in the original work
  • For finetuning: We add dropout to the classifier with a rate of 0.1. learning rate of 6.25e-5 and a batchsize of 32. 3 epochs. We use a linear learning rate decay schedule with warmup over 0.2% of training. λ was set to 0.5.
  • GPT-1 model is 12 layers and d_model 768, ~117M params
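
The warmup-then-cosine learning-rate schedule from the bullet above, in plain Python (illustrative, not the paper's code; total_steps is a made-up placeholder, the paper only specifies warmup over 2000 updates):

import math

def gpt1_lr(step, max_lr=2.5e-4, warmup_steps=2000, total_steps=100_000):
    if step < warmup_steps:
        return max_lr * step / warmup_steps  # linear warmup from zero
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return max_lr * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine anneal to 0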

Language Models are Unsupervised Multitask Learners (GPT-2)

  • LayerNorm was moved to the input of each sub-block, similar to a pre-activation residual network
  • an additional layer normalization was added after the final self-attention block.
  • modified initialization which accounts for the accumulation on the residual path with model depth is used. We scale the weights of residual layers at initialization by a factor of 1/√N where N is the number of residual layers. (weird because in their released code I can only find a simple use of the old 0.02... in their release of image-gpt I found it used for c_proj, and even then only for attn, not for mlp. huh. https://github.com/openai/image-gpt/blob/master/src/model.py) A sketch of this scaling follows the list.
  • the vocabulary is expanded to 50,257
  • increase the context size from 512 to 1024 tokens
  • larger batchsize of 512 is used
  • GPT-2 used 48 layers and d_model 1600 (vs. original 12 layers and d_model 768). ~1.542B params
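
The residual-path initialization note above can be sketched like this (illustrative PyTorch, not minGPT's exact code; the "c_proj" name follows OpenAI's released code):

import math
import torch.nn as nn

def init_gpt2_style(model, n_residual_layers):
    # N(0, 0.02) for weight matrices, then scale residual-path output
    # projections by 1/sqrt(N), where N = number of residual layers
    for name, p in model.named_parameters():
        if p.dim() >= 2:
            nn.init.normal_(p, mean=0.0, std=0.02)
        if name.endswith("c_proj.weight"):
            p.data.mul_(1.0 / math.sqrt(n_residual_layers))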

Language Models are Few-Shot Learners (GPT-3)

  • GPT-3: 96 layers, 96 heads, with d_model of 12,288 (175B parameters).
  • GPT-1-like: 12 layers, 12 heads, d_model 768 (125M)
  • We use the same model and architecture as GPT-2, including the modified initialization, pre-normalization, and reversible tokenization described therein
  • we use alternating dense and locally banded sparse attention patterns in the layers of the transformer, similar to the Sparse Transformer
  • we always have the feedforward layer four times the size of the bottleneck layer, d_ff = 4 * d_model
  • all models use a context window of n_ctx = 2048 tokens.
  • Adam with β1 = 0.9, β2 = 0.95, and eps = 10^-8
  • All models use weight decay of 0.1 to provide a small amount of regularization. (NOTE: GPT-1 used 0.01 I believe, see above)
  • clip the global norm of the gradient at 1.0
  • Linear LR warmup over the first 375 million tokens. Then use cosine decay for learning rate down to 10% of its value, over 260 billion tokens. (sketched in code after this list)
  • gradually increase the batch size linearly from a small value (32k tokens) to the full value over the first 4-12 billion tokens of training, depending on the model size.
  • full 2048-sized time context window is always used, with a special END OF DOCUMENT token delimiter
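
The token-based learning-rate warmup/decay and batch-size ramp can be sketched as follows (illustrative Python; 6e-4 is the max LR quoted above for the GPT-1-sized model, and the 0.5M-token full batch is an assumed example since the paper uses different batch sizes per model):

import math

def gpt3_lr(tokens_seen, max_lr=6e-4, warmup_tokens=375e6, decay_tokens=260e9):
    # linear warmup over the first 375M tokens, then cosine decay to 10% of max over 260B tokens
    if tokens_seen < warmup_tokens:
        return max_lr * tokens_seen / warmup_tokens
    progress = min(1.0, (tokens_seen - warmup_tokens) / (decay_tokens - warmup_tokens))
    return max_lr * (0.1 + 0.9 * 0.5 * (1.0 + math.cos(math.pi * progress)))

def gpt3_batch_tokens(tokens_seen, start=32_000, full=500_000, ramp_tokens=4e9):
    # linearly ramp the batch size (measured in tokens) over the first few billion tokens
    frac = min(1.0, tokens_seen / ramp_tokens)
    return int(start + frac * (full - start))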

License

MIT
