
rish-16 / Gpt2client

Licence: MIT
✍🏻 gpt2-client: Easy-to-use TensorFlow Wrapper for GPT-2 117M, 345M, 774M, and 1.5B Transformer Models 🤖 📝

Programming Languages

Python

Projects that are alternatives of or similar to Gpt2client

Gpt2 Chitchat
GPT2 model for Chinese chitchat (implements the MMI idea from DialoGPT)
Stars: ✭ 1,230 (+281.99%)
Mutual labels:  text-generation, transformer
Dialogpt
Large-scale pretraining for dialogue
Stars: ✭ 1,177 (+265.53%)
Mutual labels:  text-generation, transformer
Gpt2 Chinese
Chinese version of GPT2 training code, using BERT tokenizer.
Stars: ✭ 4,592 (+1326.09%)
Mutual labels:  text-generation, transformer
amrlib
A python library that makes AMR parsing, generation and visualization simple.
Stars: ✭ 107 (-66.77%)
Mutual labels:  text-generation, transformer
Gpt2 Newstitle
Chinese news-title generation with GPT2, with extensively commented code.
Stars: ✭ 235 (-27.02%)
Mutual labels:  text-generation, transformer
Onnxt5
Summarization, translation, sentiment-analysis, text-generation and more at blazing speed using a T5 version implemented in ONNX.
Stars: ✭ 143 (-55.59%)
Mutual labels:  text-generation, transformer
Gpt2 French
GPT-2 French demo | Démo française de GPT-2
Stars: ✭ 47 (-85.4%)
Mutual labels:  text-generation, transformer
Gpt 2 Tensorflow2.0
OpenAI GPT2 pre-training and sequence prediction implementation in Tensorflow 2.0
Stars: ✭ 172 (-46.58%)
Mutual labels:  text-generation, transformer
pytorch-transformer-chatbot
A simple chitchat chatbot built with the Transformer API introduced in PyTorch v1.2
Stars: ✭ 44 (-86.34%)
Mutual labels:  text-generation, transformer
text-generation-transformer
text generation based on transformer
Stars: ✭ 36 (-88.82%)
Mutual labels:  text-generation, transformer
Nlp Interview Notes
Study notes and materials for natural language processing (NLP) interview preparation, compiled from the authors' own interview experience; currently a collection of interview questions across NLP subfields.
Stars: ✭ 207 (-35.71%)
Mutual labels:  transformer
Allrank
allRank is a framework for training learning-to-rank neural models based on PyTorch.
Stars: ✭ 269 (-16.46%)
Mutual labels:  transformer
Dab
Data Augmentation by Backtranslation (DAB) ヽ( •_-)ᕗ
Stars: ✭ 294 (-8.7%)
Mutual labels:  transformer
Laravel5 Jsonapi
Laravel 5 JSON API Transformer Package
Stars: ✭ 313 (-2.8%)
Mutual labels:  transformer
Accelerated Text
Accelerated Text is a no-code natural language generation platform. It will help you construct document plans which define how your data is converted to textual descriptions varying in wording and structure.
Stars: ✭ 256 (-20.5%)
Mutual labels:  text-generation
Transformer Tensorflow
Implementation of Transformer Model in Tensorflow
Stars: ✭ 286 (-11.18%)
Mutual labels:  transformer
Kenlg Reading
Reading list for knowledge-enhanced text generation, with a survey
Stars: ✭ 257 (-20.19%)
Mutual labels:  text-generation
Textbox
TextBox is an open-source library for building text generation system.
Stars: ✭ 257 (-20.19%)
Mutual labels:  text-generation
ai challenger 2018 sentiment analysis
Fine-grained Sentiment Analysis of User Reviews --- AI CHALLENGER 2018
Stars: ✭ 16 (-95.03%)
Mutual labels:  transformer
Rezero
Official PyTorch Repo for "ReZero is All You Need: Fast Convergence at Large Depth"
Stars: ✭ 317 (-1.55%)
Mutual labels:  transformer

gpt2-client

Easy-to-use Wrapper for GPT-2 117M, 345M, 774M, and 1.5B Transformer Models

PyPI package · GitHub license

Buy Me A Coffee

What is it • Installation • Getting Started

Made by Rishabh Anand • https://rish-16.github.io

What is it

GPT-2 is a Natural Language Processing model developed by OpenAI for text generation. It is the successor to the GPT (Generative Pre-trained Transformer) model and was trained on 40GB of text from the internet. It is built on the Transformer architecture introduced in the 2017 paper Attention Is All You Need. The model comes in four sizes - 117M, 345M, 774M, and 1558M - which differ in the number of parameters they contain.

The 1.5B model is currently the largest model released by OpenAI.

gpt2-client is a wrapper around the original gpt-2 repository that offers the same functionality with greater accessibility, comprehensibility, and utility. You can play around with all four GPT-2 models in fewer than five lines of code.

Note: This client wrapper is in no way liable for any damage caused directly or indirectly. Any names, places, and objects referenced by the model are fictional and bear no resemblance to real-life entities or organisations. Samples are unfiltered and may contain offensive content. User discretion is advised.

Installation

Install the client via pip. gpt2-client is supported on Python >= 3.5 and TensorFlow 1.X. If you are running Python 2.X, some libraries may need to be reinstalled or upgraded using pip's --upgrade flag.

pip install gpt2-client

Note: gpt2-client is not compatible with TensorFlow 2.0; use TensorFlow 1.14.0 instead.
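
If you are setting up a fresh environment, a minimal sketch of a compatible install, pinning TensorFlow per the note above:

pip install tensorflow==1.14.0 # or tensorflow-gpu==1.14.0 for GPU machines
pip install gpt2-client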

Getting started

1. Download the model weights and checkpoints

from gpt2_client import GPT2Client

gpt2 = GPT2Client('117M') # This could also be `345M`, `774M`, or `1558M`. Optionally set `save_dir` to change the download directory.
gpt2.load_model(force_download=False) # Use cached versions if available.

This creates a directory called models in the current working directory and downloads the weights, checkpoints, model JSON, and hyper-parameters the model needs. Once load_model() has run and the files have finished downloading into models, you need not call it again.

Note: Set force_download=True to overwrite the existing cached model weights and checkpoints
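
For example, if a download was interrupted and you want to discard the cached copy:

gpt2.load_model(force_download=True) # Re-downloads everything into `models/`, overwriting the cache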

2. Start generating text!

from gpt2_client import GPT2Client

gpt2 = GPT2Client('117M') # This could also be `345M`, `774M`, or `1558M`
gpt2.load_model()

gpt2.generate(interactive=True) # Asks user for prompt
gpt2.generate(n_samples=4) # Generates 4 pieces of text
text = gpt2.generate(return_text=True) # Generates text and returns it in an array
gpt2.generate(interactive=True, n_samples=3) # A different prompt each time

As the samples above show, the generation options are highly flexible. You can mix and match based on the kind of text you need generated, be it multiple chunks at once or one at a time with prompts.
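
As a small illustration (a sketch: the file names are arbitrary, and it assumes `n_samples` and `return_text` compose the way the individual examples above suggest), you can persist the returned array of samples to disk:

texts = gpt2.generate(n_samples=3, return_text=True) # Array of 3 generated samples

for i, sample in enumerate(texts):
    with open('sample_{}.txt'.format(i), 'w') as f: # Arbitrary output file names
        f.write(sample)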

3. Generating text from a batch of prompts

from gpt2_client import GPT2Client

gpt2 = GPT2Client('117M') # This could also be `345M`, `774M`, or `1558M`
gpt2.load_model()

prompts = [
  "This is a prompt 1",
  "This is a prompt 2",
  "This is a prompt 3",
  "This is a prompt 4"
]

text = gpt2.generate_batch_from_prompts(prompts) # returns an array of generated text
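
Assuming the returned array is parallel to prompts (one generated string per prompt, which the array return suggests), you can pair each prompt with its output:

for prompt, generated in zip(prompts, text):
    print('PROMPT:', prompt)
    print('OUTPUT:', generated)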

4. Fine-tuning GPT-2 to custom datasets

from gpt2_client import GPT2Client

gpt2 = GPT2Client('117M') # This could also be `345M`, `774M`, or `1558M`
gpt2.load_model()

my_corpus = './data/shakespeare.txt' # path to corpus
custom_text = gpt2.finetune(my_corpus, return_text=True) # Fine-tune on your custom corpus and return generated text

To fine-tune GPT-2 on your custom corpus or dataset, it's ideal to have a GPU or TPU at hand. Google Colab is one such tool you can use to re-train or fine-tune your custom model.
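
Before kicking off a long fine-tuning run, it's worth checking that TensorFlow can actually see your GPU; tf.test.is_gpu_available() is part of the TensorFlow 1.x API:

import tensorflow as tf

print(tf.test.is_gpu_available()) # True if a CUDA-capable GPU is visible to TensorFlow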

5. Encoding and decoding text sequences

from gpt2_client import GPT2Client

gpt2 = GPT2Client('117M') # This could also be `345M`, `774M`, or `1558M`
gpt2.load_model()

# encoding a sentence
encs = gpt2.encode_seq("Hello world, this is a sentence")
# [15496, 995, 11, 428, 318, 257, 6827]

# decoding an encoded sequence
decs = gpt2.decode_seq(encs)
# Hello world, this is a sentence
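
Encoding followed by decoding should round-trip. A minimal sanity check, assuming decode_seq returns a plain string as the comment above suggests:

original = "Hello world, this is a sentence"
assert gpt2.decode_seq(gpt2.encode_seq(original)) == original # Round-trip check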

Contributing

Suggestions, improvements, and enhancements are always welcome! If you have any issues, please do raise one in the Issues section. If you have an improvement, do file an issue to discuss the suggestion before creating a PR.

All ideas - no matter how outrageous - are welcome!

Donations

Open-source is really fun. Your donations motivate me to bring fresh ideas to life. If you're interested in supporting my open-source endeavours, please do donate - it means a lot to me!

Buy Me A Coffee

Licence

MIT
