semiosis / prompts-v1

License: GPL-3.0
A free and open-source curation of prompts for OpenAI's GPT-3.

Programming Languages

YASnippet

Projects that are alternatives to or similar to prompts-v1

go-gpt3
OpenAI GPT-3 API wrapper for Go
Stars: ✭ 107 (+494.44%)
Mutual labels:  openai, gpt-3
zsh codex
This is a ZSH plugin that enables you to use OpenAI's Codex AI in the command line.
Stars: ✭ 787 (+4272.22%)
Mutual labels:  openai
ActiveRagdollControllers
Research into controllers for 2d and 3d Active Ragdolls (using MujocoUnity+ml_agents)
Stars: ✭ 30 (+66.67%)
Mutual labels:  openai
prompts-ai
Advanced playground for GPT-3
Stars: ✭ 156 (+766.67%)
Mutual labels:  gpt-3
gpt-neo-fine-tuning-example
Fine-Tune EleutherAI GPT-Neo And GPT-J-6B To Generate Netflix Movie Descriptions Using Hugging Face And DeepSpeed
Stars: ✭ 157 (+772.22%)
Mutual labels:  gpt-3
fix
Allows you to use OpenAI Codex to fix errors in the command line.
Stars: ✭ 72 (+300%)
Mutual labels:  openai
clifs
Contrastive Language-Image Forensic Search allows free text searching through videos using OpenAI's machine learning model CLIP
Stars: ✭ 271 (+1405.56%)
Mutual labels:  openai
gpt-j
A GPT-J API to use with python3 to generate text, blogs, code, and more
Stars: ✭ 101 (+461.11%)
Mutual labels:  gpt-3
pen.el
Pen.el stands for Prompt Engineering in emacs. It facilitates the creation, discovery and usage of prompts to language models. Pen supports OpenAI, EleutherAI, Aleph-Alpha, HuggingFace and others. It's the engine for the LookingGlass imaginary web browser.
Stars: ✭ 376 (+1988.89%)
Mutual labels:  openai
gpt-j-api
API for the GPT-J language model 🦜. Including a FastAPI backend and a streamlit frontend
Stars: ✭ 248 (+1277.78%)
Mutual labels:  gpt-3
Pytorch-RL-CPP
A Repository with C++ implementations of Reinforcement Learning Algorithms (Pytorch)
Stars: ✭ 73 (+305.56%)
Mutual labels:  openai
XENA
XENA is the managed remote administration platform for botnet creation & development powered by blockchain and machine learning. Aiming to provide an ecosystem which serves the bot herders. Favoring secrecy and resiliency over performance. It's micro-service oriented allowing for specialization and lower footprint. Join the community of the ulti…
Stars: ✭ 127 (+605.56%)
Mutual labels:  gpt-3
yourAI
GPT-2 Discord Bot and Steps to Train Something Like You
Stars: ✭ 71 (+294.44%)
Mutual labels:  openai
frozenlake
Value & Policy Iteration for the frozenlake environment of OpenAI
Stars: ✭ 16 (-11.11%)
Mutual labels:  openai
language-planner
Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents"
Stars: ✭ 84 (+366.67%)
Mutual labels:  gpt-3
awesome-codex
A list dedicated to products, demos and articles related to 🤖 OpenAI's Codex.
Stars: ✭ 115 (+538.89%)
Mutual labels:  openai
clip playground
An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities
Stars: ✭ 80 (+344.44%)
Mutual labels:  openai
ddrl
Deep Developmental Reinforcement Learning
Stars: ✭ 27 (+50%)
Mutual labels:  openai
ethics
Aligning AI With Shared Human Values (ICLR 2021)
Stars: ✭ 77 (+327.78%)
Mutual labels:  gpt-3
PDN
The official PyTorch implementation of "Pathfinder Discovery Networks for Neural Message Passing" (WebConf '21)
Stars: ✭ 44 (+144.44%)
Mutual labels:  gpt-3

New version of repository

This repository is deprecated. The new version is available at:

https://github.com/semiosis/prompts

Prompts

This is a free and open-source (FOSS) curation of prompts for OpenAI’s GPT-3.

License
GPL-3

The .prompt file format

This is the format I have used to organise these prompts. It is YAML with a schema, which has not yet been formally defined.

For now, the file below is as good an example of the schema as any.

./prompts/meeting-bullets-to-summary.prompt

title: "meeting bullet points to summary"
prompt: |+
    Convert my short hand into a first-hand account of the meeting:

    <1>

    Summary:
engine: "davinci-instruct-beta"
temperature: 0.7
max-tokens: 60
top-p: 1
frequency-penalty: 0.0
presence-penalty: 0.0
best-of: 1
stop-sequences:
- "\n\n"
inject-start-text: yes
inject-restart-text: yes
show-probabilities: off
# A convenience flag for finding prompts which are designed to be chatbots
conversation-mode: no
# Keep stitching together until reaching this limit
# This allows a full response for answers which may need n*max-tokens to reach the stop-sequence.
stitch-max: 0
vars:
- "notes"
examples:
- |+
    Tom: Profits up 50%
    Jane: New servers are online
    Kjel: Need more time to fix software
    Jane: Happy to help
    Parkman: Beta testing almost done
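
These files are designed to be consumed by pen.el, but any YAML parser can read them. The following is a rough sketch only (it assumes Python and PyYAML, neither of which this repository uses, and it is not how pen.el loads these files); it simply illustrates how the fields and the <1>, <2>, ... placeholders fit together:

# Minimal sketch: load a .prompt file and substitute <1>, <2>, ... with values.
# Python and PyYAML are assumptions for illustration; the API call itself is omitted.
import yaml

def render_prompt(path, *values):
    with open(path) as f:
        spec = yaml.safe_load(f)
    text = spec["prompt"]
    for i, value in enumerate(values, start=1):
        text = text.replace(f"<{i}>", value)
    return spec, text

spec, prompt = render_prompt("./prompts/meeting-bullets-to-summary.prompt",
                             "Tom: Profits up 50%\nJane: New servers are online")
print(prompt)
print(spec["engine"], spec["temperature"], spec["stop-sequences"])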

Prompt file snippet

This is an annotated example explaining the fields of the .prompt file format.

title: "${1:title}"
# future-titles: ""
aims: |+
    - More abstractive rewording
doc: "Given ... ${1:title}"
# aims: |+
# - More abstractive rewording
prompt-version: 1
# <:pp> defines a point where the following
# text is concatenated before the postprocessor
# is run.
# <1>, <2> etc. are where variables are substituted
prompt: |+
    ${2:contents}

    <1> are like <2> in that
# # Additional transformation of prompt after the template
prompt-filter: "sed -z 's/\\s\\+$//'"
# Trailing whitespace is always removed
# prompt-remove-trailing-whitespace: on
# myrc will select the completion engine using my config.
# This may be openai-complete or something else.
engine: "myrc"
# If nothing is selected in myrc and openai-complete is used
# by default, then openai should select this engine.
preferred-openai-engine: "davinci"
# 0.0 = /r/hadastroke
# 1.0 = /r/iamveryrandom
# Use 0.3-0.8
temperature: 0.8
max-tokens: 60
top-p: 1.0
# Not available yet: openai api completions.create --help
frequency-penalty: 0.5
# If I make presence-penalty 0 then it will get very terse
presence-penalty: 0.0
best-of: 1
# Only the first one will be used by the API,
# but the completer script will use the others.
# Currently the API can only accept one stop-sequence, but that may change.
stop-sequences:
- "###"
- "\n\n"
inject-start-text: yes
inject-restart-text: yes
show-probabilities: off
# Cache the function by default when running the prompt function
cache: on
vars:
- "former"
- "latter"
examples:
- "boysenberries"
- "strawberries"
# Completion is for generating a company-mode completion function
completion: on
# # default values for pen -- evaled
# # This is useful for completion commands.
pen-defaults:
- "(detect-language)"
- "(pen-preceding-text)"
# These are elisp String->String functions and are run from pen.el.
# They probably run earlier than the preprocessor shell scripts.
pen-preprocessors:
- "pen-pf-correct-grammar"
# # A preprocessor filters the var at that position
# The current implementation of preprocessors is kinda slow and will add ~100ms per variable
# # This may be useful to distinguish a block of text, for example
preprocessors:
- "sed 's/^/- /'"
-
chomp-start: on
chomp-end: off
prefer-external: on
# This is an optional external command which may be used to perform the same task as the API.
# This can be used to train the prompt.
external: "generate-text-from-input.sh"
# This script returns a 0-1 decimal value representing the quality of the generated output.
quality-script: "my-quality-checker-for-this-prompt.sh"
# This script can be used to validate the output.
# If the output is accurate, the validation script returns exit code 1.
# The quality-script is sent to this script as the first argument.
validation-script: "my-validator-for-this-prompt.sh"
# Enable running conversation
conversation-mode: no
# Replace selected text
filter: no
# Keep stitching together until reaching this limit
# This allows a full response for answers which may need n*max-tokens to reach the stop-sequence.
stitch-max: 0
needs-work: no
n-test-runs: 5
# Prompt function aliases
aliases:
- "asktutor"
postprocessor: "sed 's/- //' | uniqnosort"
# # Run it n times and combine the output
n-collate: 10
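
The stitch-max behaviour described above can be pictured as a loop that keeps requesting continuations and concatenating them until a stop-sequence appears or the limit is reached. The sketch below is illustrative only; complete() stands in for whichever engine the prompt selects, and treating stitch-max 0 as a single request is an assumption on the reader's part:

# Rough sketch of stitching; complete() is a stand-in, not part of this project.
def stitch(prompt, complete, stop_sequences, stitch_max):
    out = ""
    for _ in range(stitch_max + 1):      # assumption: 0 means a single request
        out += complete(prompt + out)    # each call may return up to max-tokens
        hits = [out.index(s) for s in stop_sequences if s in out]
        if hits:
            return out[:min(hits)]       # trim from the first stop-sequence
    return out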

Tooling

If you are looking for a tool that can load and make use of these .prompt files directly, you may use pen.el, the Emacs package that was used to generate them.

https://github.com/mullikine/pen.el

Testing

Use Cucumber for test cases.
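
Independently of the Cucumber test cases, the quality-script and validation-script fields described above suggest a simple harness. The sketch below merely assumes the contract documented in the annotated snippet (score printed on stdout, generated text fed on stdin, the quality script passed as the first argument to the validation script); both script names are placeholders:

# Hypothetical harness for the quality-script / validation-script contract.
import subprocess

def score_output(quality_script, generated_text):
    # Assumption: the script reads the text on stdin and prints a 0-1 score.
    r = subprocess.run([quality_script], input=generated_text,
                       capture_output=True, text=True)
    return float(r.stdout.strip())

def validate_output(validation_script, quality_script, generated_text):
    # As documented above, exit code 1 signals an accurate output.
    r = subprocess.run([validation_script, quality_script],
                       input=generated_text, text=True)
    return r.returncode == 1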
