
openai / gpt-2

License: Modified MIT
Code for the paper "Language Models are Unsupervised Multitask Learners"

Programming Languages

Python
139,335 projects; the #7 most used programming language

Labels

paper

Projects that are alternatives of or similar to gpt-2

Signatureview
SignatureView is an open source Android library that allows developers to produce a pen-and-paper-like effect for creating signatures on Android
Stars: ✭ 185 (-98.78%)
Mutual labels:  paper
Epg
Code for the paper "Evolved Policy Gradients"
Stars: ✭ 204 (-98.65%)
Mutual labels:  paper
Cardboard
The Bukkit/Spigot/Paper API implementation for Fabric
Stars: ✭ 220 (-98.55%)
Mutual labels:  paper
Acl Papers
Paper summaries from the Association for Computational Linguistics (ACL)
Stars: ✭ 189 (-98.75%)
Mutual labels:  paper
Drl4recsys
Courses on Deep Reinforcement Learning (DRL) and DRL papers for recommender systems
Stars: ✭ 196 (-98.71%)
Mutual labels:  paper
Awesome Deeplearning Resources
Deep learning and deep reinforcement learning research papers, with some code
Stars: ✭ 2,483 (-83.6%)
Mutual labels:  paper
Awesome Privacy
A repository collecting research papers on privacy.
Stars: ✭ 175 (-98.84%)
Mutual labels:  paper
Vehicle reid Collection
🚗 A collection of vehicle re-ID papers and datasets. 🚗
Stars: ✭ 225 (-98.51%)
Mutual labels:  paper
Papers
Summaries of machine learning papers
Stars: ✭ 2,362 (-84.4%)
Mutual labels:  paper
Nfnets Pytorch
NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch
Stars: ✭ 215 (-98.58%)
Mutual labels:  paper
Hustpapertemp
LaTeX template for Huazhong University of Science and Technology undergraduate theses (2017)
Stars: ✭ 189 (-98.75%)
Mutual labels:  paper
Paper
Paper is a fast NoSQL-like storage for Java/Kotlin objects on Android with automatic schema migration support.
Stars: ✭ 2,263 (-85.05%)
Mutual labels:  paper
Survey Computer Vision
Computer vision survey papers from 2020-2021, organized by research direction
Stars: ✭ 207 (-98.63%)
Mutual labels:  paper
Dragan
A stable algorithm for GAN training
Stars: ✭ 189 (-98.75%)
Mutual labels:  paper
Triplet Attention
Official PyTorch Implementation for "Rotate to Attend: Convolutional Triplet Attention Module." [WACV 2021]
Stars: ✭ 222 (-98.53%)
Mutual labels:  paper
Anms Codes
Efficient adaptive non-maximal suppression algorithms for homogeneous spatial keypoint distribution
Stars: ✭ 174 (-98.85%)
Mutual labels:  paper
Research In Production
A collection of research papers categorized by real-world systems that enact them
Stars: ✭ 205 (-98.65%)
Mutual labels:  paper
Machine Learning Resources
A curated list of awesome machine learning frameworks, libraries, courses, books, and more.
Stars: ✭ 226 (-98.51%)
Mutual labels:  paper
Awesome Gans And Deepfakes
A curated list of GAN & Deepfake papers and repositories.
Stars: ✭ 224 (-98.52%)
Mutual labels:  paper
Research Paper Notes
Notes and Summaries on ML-related Research Papers (with optional implementations)
Stars: ✭ 218 (-98.56%)
Mutual labels:  paper

Status: Archive (code is provided as-is, no updates expected)

gpt-2

Code and models from the paper "Language Models are Unsupervised Multitask Learners".

You can read about GPT-2 and its staged release in our original blog post, 6-month follow-up post, and final post.

We have also released a dataset of model outputs for researchers to study the models' behaviors.
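
The outputs are distributed as JSON-lines files. Below is a minimal loading sketch; the filename small-117M.train.jsonl and the "text" field name are assumptions about the release layout, so adjust them to whichever split you actually downloaded:

import json

# Hypothetical filename and field name from the outputs release; adjust to
# the split you downloaded. Each line is assumed to be one JSON object with
# a "text" field holding a single document.
path = "small-117M.train.jsonl"

with open(path, encoding="utf-8") as f:
    texts = [json.loads(line)["text"] for line in f]

print(f"loaded {len(texts)} documents")
print(texts[0][:200])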

* Note that our original parameter counts were wrong due to an error in our previous blog posts and paper. Thus you may have seen the small model referred to as 117M and the medium model as 345M.

Usage

This repository is meant to be a starting point for researchers and engineers to experiment with GPT-2.

For basic information, see our model card.
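
For a quick first experiment, here is a minimal sampling sketch. Note that it uses the Hugging Face transformers port of the released weights rather than this repository's own TensorFlow scripts, and that the "gpt2" checkpoint name there corresponds to the small (124M) model:

# Minimal sampling sketch using the Hugging Face `transformers` port of
# GPT-2, not this repository's TensorFlow pipeline.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # "gpt2" == small (124M)
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Language models are"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling; k=40 matches the setting used for many released samples.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))

Because sampling is stochastic, rerunning the script produces different continuations for the same prompt.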

Some caveats

  • GPT-2 models' robustness and worst-case behaviors are not well understood. As with any machine-learned model, carefully evaluate GPT-2 for your use case, especially if used without fine-tuning or in safety-critical applications where reliability is important (a minimal evaluation sketch follows this list).
  • The dataset our GPT-2 models were trained on contains many texts with biases and factual inaccuracies, and thus GPT-2 models are likely to be biased and inaccurate as well.
  • To avoid having samples mistaken as human-written, we recommend clearly labeling samples as synthetic before wide dissemination. Our models are often incoherent or inaccurate in subtle ways, which takes more than a quick read for a human to notice.
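
As one concrete starting point for that evaluation, the sketch below measures GPT-2's perplexity on a sample of your own domain's text, again via the transformers port as a convenience. High perplexity suggests the model finds your domain unfamiliar, though this is only a rough proxy for suitability:

# Rough evaluation sketch: perplexity of GPT-2 on your own text. Lower
# means "less surprising to the model"; this is a coarse proxy, not a
# substitute for task-specific evaluation.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "Replace this with a representative sample of your domain's text."
input_ids = tokenizer.encode(text, return_tensors="pt")

with torch.no_grad():
    # With labels == input_ids, the model returns the mean token-level
    # cross-entropy; its exponential is the perplexity.
    loss = model(input_ids, labels=input_ids).loss

print(f"perplexity: {torch.exp(loss).item():.1f}")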

Work with us

Please let us know if you're doing interesting research with or working on applications of GPT-2! We're especially interested in hearing from and potentially working with those who are studying:

  • Potential malicious use cases and defenses against them (e.g. the detectability of synthetic text; a toy detection baseline is sketched after this list)
  • The extent of problematic content (e.g. bias) being baked into the models and effective mitigations
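
As a toy illustration of the detectability question, one very simple baseline is a bag-of-words classifier over human-written versus model-generated text. The sketch below is not the detector released alongside the outputs dataset, and the example strings are placeholders; in practice you would load WebText documents and GPT-2 samples:

# Toy synthetic-text detection baseline: TF-IDF features plus logistic
# regression. Illustrative only; not the released detector.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpora; substitute real human-written and GPT-2-generated text.
human_texts = [
    "The committee met on Tuesday to review the proposed budget.",
    "Rainfall this year has been well below the regional average.",
]
model_texts = [
    "The moon is a great place to visit if you like the moon.",
    "Scientists say the discovery of the discovery is important.",
]

texts = human_texts + model_texts
labels = [0] * len(human_texts) + [1] * len(model_texts)  # 1 == synthetic

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# predict_proba(...)[0][1] is the estimated probability the text is synthetic.
print(detector.predict_proba(["The discovery of the discovery was great."])[0][1])

Simple baselines like this tend to degrade as sampling strategies change, which is part of why the detectability question remains open.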

Development

See DEVELOPERS.md

Contributors

See CONTRIBUTORS.md

Citation

Please use the following BibTeX entry:

@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}

Future work

We may release code for evaluating the models on various benchmarks.

We are still considering release of the larger models.

License

Modified MIT
