few-shot-lm: The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021)
Stars: ✭ 32 (-67.68%)
dasher-web: Dasher text entry in HTML, CSS, JavaScript, and SVG
Stars: ✭ 34 (-65.66%)
CharLM: Character-aware neural language model implemented in PyTorch
Stars: ✭ 32 (-67.68%)
KB-ALBERT: A Korean ALBERT model specialized for the economics/finance domain, provided by KB Kookmin Bank
Stars: ✭ 215 (+117.17%)
simple-cnaps: Source code for "Improved Few-Shot Visual Classification" (CVPR 2020), "Enhancing Few-Shot Image Classification with Unlabelled Examples" (WACV 2022), and "Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning" (Neural Networks 2022, in submission)
Stars: ✭ 88 (-11.11%)
ganbert: Enhancing BERT training with semi-supervised generative adversarial networks
Stars: ✭ 205 (+107.07%)
multilingual kws: Few-Shot Keyword Spotting in Any Language and Multilingual Spoken Word Corpus
Stars: ✭ 122 (+23.23%)
minGPT-TF: A minimal TensorFlow 2 re-implementation of OpenAI GPT training
Stars: ✭ 36 (-63.64%)
FewShotDetection: PyTorch implementation of the ECCV 2020 paper "Few-Shot Object Detection and Viewpoint Estimation for Objects in the Wild"
Stars: ✭ 188 (+89.9%)
LibFewShot: A comprehensive library for few-shot learning
Stars: ✭ 629 (+535.35%)
TF-NNLM-TK: A toolkit for neural language modeling in TensorFlow, including basic models such as RNNs and LSTMs as well as more advanced models
Stars: ✭ 20 (-79.8%)
FUSION: PyTorch code for the NeurIPSW 2020 paper (4th Workshop on Meta-Learning) "Few-Shot Unsupervised Continual Learning through Meta-Examples"
Stars: ✭ 18 (-81.82%)
ml: Machine learning
Stars: ✭ 29 (-70.71%)
attMPTI: Few-shot 3D Point Cloud Semantic Segmentation (CVPR 2021)
Stars: ✭ 118 (+19.19%)
gdc: Code for the ICLR 2021 paper "A Distributional Approach to Controlled Text Generation"
Stars: ✭ 94 (-5.05%)
Few-NERD: Code and data of the ACL 2021 paper "Few-NERD: A Few-shot Named Entity Recognition Dataset"
Stars: ✭ 317 (+220.2%)
asr24: 24-hour Automatic Speech Recognition
Stars: ✭ 27 (-72.73%)
CoLAKE: Contextualized Language and Knowledge Embedding (COLING 2020)
Stars: ✭ 86 (-13.13%)
Vaaku2Vec: Language modeling and text classification in Malayalam using ULMFiT
Stars: ✭ 68 (-31.31%)
MLMAN: Code for the ACL 2019 paper "Multi-Level Matching and Aggregation Network for Few-Shot Relation Classification"
Stars: ✭ 59 (-40.4%)
HiCE: Code for the ACL'19 paper "Few-Shot Representation Learning for Out-Of-Vocabulary Words"
Stars: ✭ 56 (-43.43%)
PCPM: Presenting Collection of Pretrained Models. Links to pretrained models in NLP and voice.
Stars: ✭ 21 (-78.79%)
lm-scorer: 📃 Language-model-based sentence scoring library
Stars: ✭ 264 (+166.67%)
kurobako: A black-box optimization benchmark tool
Stars: ✭ 69 (-30.3%)
sinkhorn-label-allocation: Sinkhorn Label Allocation is a label assignment method for semi-supervised self-training algorithms. The SLA algorithm is described in full in the ICML 2021 paper: https://arxiv.org/abs/2102.08622.
Stars: ✭ 49 (-50.51%)
subword-lstm-lm: LSTM language model with subword-unit input representations
Stars: ✭ 45 (-54.55%)
gap-text2sql: GAP-text2SQL: Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training
Stars: ✭ 83 (-16.16%)
LM-CNLC: Chinese Natural Language Correction via Language Model
Stars: ✭ 15 (-84.85%)
backprop: Backprop makes it simple to use, fine-tune, and deploy state-of-the-art ML models.
Stars: ✭ 229 (+131.31%)
personality-prediction: Experiments in automated personality detection using language models and psycholinguistic features on various well-known personality datasets, including the Essays dataset (Big Five)
Stars: ✭ 109 (+10.1%)
cscg: Code Generation as a Dual Task of Code Summarization
Stars: ✭ 28 (-71.72%)
calm: Context-aware language models
Stars: ✭ 29 (-70.71%)
P-tuning: A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
Stars: ✭ 593 (+498.99%)
mlp-gpt-jax: A GPT made only of MLPs, in JAX
Stars: ✭ 53 (-46.46%)
few-shot-segmentation: PyTorch implementation of 'Squeeze and Excite' Guided Few Shot Segmentation of Volumetric Scans
Stars: ✭ 78 (-21.21%)
open clip: An open-source implementation of CLIP
Stars: ✭ 1,534 (+1449.49%)
rnn-theano: RNNs (LSTM, GRU) in Theano with mini-batch training; character-level language models in Theano
Stars: ✭ 68 (-31.31%)
wechsel: Code for "WECHSEL: Effective Initialization of Subword Embeddings for Cross-Lingual Transfer of Monolingual Language Models"
Stars: ✭ 39 (-60.61%)
pd3f: 🏭 PDF text extraction pipeline: self-hosted, local-first, Docker-based
Stars: ✭ 132 (+33.33%)
LaplacianShot: Laplacian Regularized Few Shot Learning
Stars: ✭ 72 (-27.27%)
COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining (NeurIPS 2021)
Stars: ✭ 109 (+10.1%)
gpt-j-api: API for the GPT-J language model 🦜, including a FastAPI backend and a Streamlit frontend
Stars: ✭ 248 (+150.51%)
SCL: 📄 Spatial Contrastive Learning for Few-Shot Classification (ECML/PKDD 2021)
Stars: ✭ 42 (-57.58%)
swig-srilm: SWIG wrapper for the SRILM toolkit
Stars: ✭ 33 (-66.67%)
PLBART: Official code for "Unified Pre-training for Program Understanding and Generation" (NAACL 2021)
Stars: ✭ 151 (+52.53%)
pytorch-meta-dataset: An unofficial, 100% PyTorch implementation of the META-DATASET benchmark for few-shot classification
Stars: ✭ 39 (-60.61%)
Zeroth: Kaldi-based open-source project for Korean automatic speech recognition
Stars: ✭ 248 (+150.51%)
sib meta learn: Code for "Empirical Bayes Transductive Meta-Learning with Synthetic Gradients"
Stars: ✭ 56 (-43.43%)
mongolian-nlp: Useful resources for Mongolian NLP
Stars: ✭ 119 (+20.2%)
bruno: A deep recurrent model for exchangeable data
Stars: ✭ 34 (-65.66%)