
THUDM / P-tuning

License: MIT License
A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives to or similar to P-tuning

P-tuning-v2
Source code and data for ACL 2022 paper "P-tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks"
Stars: ✭ 373 (-37.1%)
Mutual labels:  prompt-tuning, p-tuning, parameter-efficient-learning
SCL
📄 Spatial Contrastive Learning for Few-Shot Classification (ECML/PKDD 2021).
Stars: ✭ 42 (-92.92%)
Mutual labels:  few-shot-learning
Prototypical-Networks
A novel method for few shot learning
Stars: ✭ 47 (-92.07%)
Mutual labels:  few-shot-learning
adapt
Awesome Domain Adaptation Python Toolbox
Stars: ✭ 46 (-92.24%)
Mutual labels:  few-shot-learning
CDFSL-ATA
[IJCAI 2021] Cross-Domain Few-Shot Classification via Adversarial Task Augmentation
Stars: ✭ 21 (-96.46%)
Mutual labels:  few-shot-learning
Awesome Domain Adaptation
A collection of AWESOME things about domain adaptation
Stars: ✭ 3,357 (+466.1%)
Mutual labels:  few-shot-learning
finetuner
Finetuning any DNN for better embedding on neural search tasks
Stars: ✭ 442 (-25.46%)
Mutual labels:  few-shot-learning
Awesome-Few-shot
Awesome Few-shot learning
Stars: ✭ 50 (-91.57%)
Mutual labels:  few-shot-learning
protonet-bert-text-classification
finetune bert for small dataset text classification in a few-shot learning manner using ProtoNet
Stars: ✭ 28 (-95.28%)
Mutual labels:  few-shot-learning
FewCLUE
FewCLUE: a Chinese few-shot learning evaluation benchmark
Stars: ✭ 251 (-57.67%)
Mutual labels:  few-shot-learning
few-shot-lm
The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021)
Stars: ✭ 32 (-94.6%)
Mutual labels:  few-shot-learning
mmfewshot
OpenMMLab FewShot Learning Toolbox and Benchmark
Stars: ✭ 336 (-43.34%)
Mutual labels:  few-shot-learning
Transferlearning
Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, tutorials.
Stars: ✭ 8,481 (+1330.19%)
Mutual labels:  few-shot-learning
Meta-Fine-Tuning
[CVPR 2020 VL3] The repository for meta fine-tuning in cross-domain few-shot learning.
Stars: ✭ 29 (-95.11%)
Mutual labels:  few-shot-learning
HiCE
Code for ACL'19 "Few-Shot Representation Learning for Out-Of-Vocabulary Words"
Stars: ✭ 56 (-90.56%)
Mutual labels:  few-shot-learning
MeTAL
Official PyTorch implementation of "Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning" (ICCV2021 Oral)
Stars: ✭ 24 (-95.95%)
Mutual labels:  few-shot-learning
Hands-On-Deep-Learning-Algorithms-with-Python
Hands-On Deep Learning Algorithms with Python, By Packt
Stars: ✭ 76 (-87.18%)
Mutual labels:  few-shot-learning
few-shot-gan-adaptation
[CVPR '21] Official repository for Few-shot Image Generation via Cross-domain Correspondence
Stars: ✭ 198 (-66.61%)
Mutual labels:  few-shot-learning
few-shot-segmentation
PyTorch implementation of 'Squeeze and Excite' Guided Few Shot Segmentation of Volumetric Scans
Stars: ✭ 78 (-86.85%)
Mutual labels:  few-shot-learning
LibFewShot
LibFewShot: A Comprehensive Library for Few-shot Learning.
Stars: ✭ 629 (+6.07%)
Mutual labels:  few-shot-learning

P-tuning

❗ News

🌟 [2022-10-06] Thrilled to present GLM-130B: An Open Bilingual Pre-trained Model. It is an open-source LLM that outperforms GPT-3 175B on various benchmarks. Get the model weights and run inference and P-Tuning with only 4 × RTX 3090 or 8 × RTX 2080 Ti, for free!

🌟 [2022-07-14] Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers is out! Check our code.

🌟 [2021-10-15] P-tuning v2 is out! Check our GitHub repo.

A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".

Xiao Liu*, Yanan Zheng*, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang

You may also be interested in our other work, GLM: All NLP Tasks Are Generation Tasks: A General Pretraining Framework.
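To make the mechanism concrete, below is a minimal sketch of the continuous-prompt idea behind P-tuning, not the released implementation: a few trainable pseudo-token embeddings are re-parameterized by a small LSTM+MLP prompt encoder and prepended to the word embeddings fed into the language model. PyTorch and the placeholder names (`PromptEncoder`, `lm`, `input_ids`) are assumptions for illustration only.

    # Illustrative sketch (PyTorch assumed), not the repository's exact code.
    import torch
    import torch.nn as nn

    class PromptEncoder(nn.Module):
        """Maps trainable pseudo-token embeddings to prompt vectors (LSTM + MLP)."""
        def __init__(self, num_pseudo_tokens: int, hidden_size: int):
            super().__init__()
            self.embedding = nn.Embedding(num_pseudo_tokens, hidden_size)
            self.lstm = nn.LSTM(hidden_size, hidden_size // 2, num_layers=2,
                                bidirectional=True, batch_first=True)
            self.mlp = nn.Sequential(nn.Linear(hidden_size, hidden_size),
                                     nn.ReLU(),
                                     nn.Linear(hidden_size, hidden_size))
            self.register_buffer("ids", torch.arange(num_pseudo_tokens))

        def forward(self, batch_size: int) -> torch.Tensor:
            pseudo = self.embedding(self.ids).unsqueeze(0)   # (1, P, H)
            prompts = self.mlp(self.lstm(pseudo)[0])         # (1, P, H)
            return prompts.expand(batch_size, -1, -1)        # (B, P, H)

    # Usage sketch (`lm` and `input_ids` are placeholders): prepend the prompts to
    # the model's word embeddings and pass them via `inputs_embeds`.
    # word_embeds = lm.get_input_embeddings()(input_ids)                          # (B, T, H)
    # inputs_embeds = torch.cat([prompt_encoder(input_ids.size(0)), word_embeds], dim=1)
    # outputs = lm(inputs_embeds=inputs_embeds)

Depending on the setting, the backbone language model can be kept frozen or tuned jointly with the prompts; in either case the prompt encoder itself adds only a small number of parameters.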

How to use our code

We have released the code and datasets for LAMA and few-shot SuperGLUE (32-dev) experiments. Please check README.md and requirement.txt in the corresponding subdirectories for details.

The LAMA and FewGLUE_32dev datasets are available. The LAMA dataset should be placed in the ./data directory, and the SuperGLUE dataset should be placed in the project root directory (./).
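For orientation, a plausible layout after downloading the data looks roughly like this (only ./data and FewGLUE_32dev are named in the instructions above; everything else is a placeholder, so follow the README.md in each subdirectory):

    P-tuning/             # project root
    ├── data/             # place the LAMA dataset here
    ├── FewGLUE_32dev/    # place the few-shot SuperGLUE (32-dev) dataset here
    └── ...               # experiment subdirectories, each with its own README.md and requirement.txt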

Citation

If you find our work useful, please cite the following paper:

    @article{liu2021gpt,
      title={GPT Understands, Too},
      author={Liu, Xiao and Zheng, Yanan and Du, Zhengxiao and Ding, Ming and Qian, Yujie and Yang, Zhilin and Tang, Jie},
      journal={arXiv:2103.10385},
      year={2021}
    }
Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].