GPQ - Generalized Product Quantization Network for Semi-supervised Image Retrieval (CVPR 2020)
Stars: ✭ 60 (+160.87%)
sinkhorn-label-allocation - Sinkhorn Label Allocation (SLA) is a label assignment method for semi-supervised self-training algorithms. The SLA algorithm is described in full in this ICML 2021 paper: https://arxiv.org/abs/2102.08622
Stars: ✭ 49 (+113.04%)
semi-memory - TensorFlow implementation of the ECCV 2018 paper "Semi-Supervised Deep Learning with Memory"
Stars: ✭ 49 (+113.04%)
NanoFlow - PyTorch implementation of the paper "NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity" (NeurIPS 2020)
Stars: ✭ 63 (+173.91%)
SimPLE - Code for the paper "SimPLE: Similar Pseudo Label Exploitation for Semi-Supervised Classification"
Stars: ✭ 50 (+117.39%)
DeepAtlas - Joint Semi-supervised Learning of Image Registration and Segmentation
Stars: ✭ 38 (+65.22%)
deepOF - TensorFlow implementation for "Guided Optical Flow Learning"
Stars: ✭ 26 (+13.04%)
MongeAmpereFlow - Continuous-time gradient flow for generative modeling and variational inference
Stars: ✭ 29 (+26.09%)
ST-PlusPlus - [CVPR 2022] ST++: Make Self-training Work Better for Semi-supervised Semantic Segmentation
Stars: ✭ 168 (+630.43%)
DualStudent - Code for the paper "Dual Student: Breaking the Limits of the Teacher in Semi-Supervised Learning" (ICCV 2019)
Stars: ✭ 106 (+360.87%)
sesemi - Supervised and semi-supervised image classification with self-supervision (Keras)
Stars: ✭ 43 (+86.96%)
NeuroAI - NeuroAI-UW seminar, a regular weekly seminar for the UW community, organized by the NeuroAI Shlizerman Lab
Stars: ✭ 36 (+56.52%)
unicornn - Official code for UnICORNN (ICML 2021)
Stars: ✭ 21 (-8.7%)
EgoCNN - Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ✭ 16 (-30.43%)
seededlda - Semi-supervised LDA for theory-driven text analysis
Stars: ✭ 46 (+100%)
FedScale - A scalable and extensible open-source federated learning (FL) platform
Stars: ✭ 274 (+1091.3%)
cflow-ad - Official PyTorch code for the WACV 2022 paper "CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows"
Stars: ✭ 138 (+500%)
benchmark VAE - Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022)
Stars: ✭ 1,211 (+5165.22%)
continuous-time-flow-process - PyTorch code for "Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows" (NeurIPS 2020)
Stars: ✭ 34 (+47.83%)
Cross-Speaker-Emotion-Transfer - PyTorch implementation of ByteDance's "Cross-Speaker Emotion Transfer Based on Speaker Condition Layer Normalization and Semi-Supervised Training in Text-To-Speech"
Stars: ✭ 107 (+365.22%)
icml-nips-iclr-dataset - Papers, authors, and author affiliations from ICML, NeurIPS, and ICLR, 2006-2021
Stars: ✭ 21 (-8.7%)
ifl-tpp - Implementation of "Intensity-Free Learning of Temporal Point Processes" (Spotlight @ ICLR 2020)
Stars: ✭ 58 (+152.17%)
deeprob-kit - A Python Library for Deep Probabilistic Modeling
Stars: ✭ 32 (+39.13%)
semantic-parsing-dual - Source code and data for the ACL 2019 long paper "Semantic Parsing with Dual Learning"
Stars: ✭ 17 (-26.09%)
HybridNet - PyTorch implementation of "HybridNet: Classification and Reconstruction Cooperation for Semi-Supervised Learning" (https://arxiv.org/abs/1807.11407)
Stars: ✭ 16 (-30.43%)
pyprophet - PyProphet: Semi-supervised learning and scoring of OpenSWATH results
Stars: ✭ 23 (+0%)
pyroVED - Invariant representation learning from imaging and spectral data
Stars: ✭ 23 (+0%)
deviation-network - Source code for the KDD 2019 paper "Deep Anomaly Detection with Deviation Networks"; covers weakly/partially supervised and few-shot anomaly detection
Stars: ✭ 94 (+308.7%)
probnmn-clevr - Code for the ICML 2019 paper "Probabilistic Neural-symbolic Models for Interpretable Visual Question Answering" (long oral)
Stars: ✭ 63 (+173.91%)
JCLAL - A general-purpose framework for active learning, developed in Java
Stars: ✭ 22 (-4.35%)
NeuralPull - Implementation of the ICML 2021 paper "Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces"
Stars: ✭ 149 (+547.83%)
tape-neurips2019 - Tasks Assessing Protein Embeddings (TAPE), a set of five biologically relevant semi-supervised learning tasks spread across different domains of protein biology (DEPRECATED)
Stars: ✭ 117 (+408.7%)
SemiSeg-AEL - Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning, NeurIPS 2021 (Spotlight)
Stars: ✭ 79 (+243.48%)
rankpruning - 🧹 Formerly for binary classification with noisy labels. Replaced by cleanlab
Stars: ✭ 81 (+252.17%)
EC-GAN - EC-GAN: Low-Sample Classification using Semi-Supervised Algorithms and GANs (AAAI 2021)
Stars: ✭ 29 (+26.09%)
metric-transfer.pytorch - Deep Metric Transfer for Label Propagation with Limited Annotated Data
Stars: ✭ 49 (+113.04%)
generative models - PyTorch implementations of generative models: VQVAE2, AIR, DRAW, InfoGAN, DCGAN, SSVAE
Stars: ✭ 82 (+256.52%)
Pro-GNN - Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
Stars: ✭ 202 (+778.26%)
normalizing-flows - Implementations of normalizing flows using Python and TensorFlow
Stars: ✭ 15 (-34.78%)
pywsl - Python code for weakly-supervised learning
Stars: ✭ 118 (+413.04%)
Active-Passive-Losses - [ICML 2020] Normalized Loss Functions for Deep Learning with Noisy Labels
Stars: ✭ 92 (+300%)
ACE - Code for the paper "Neural Network Attributions: A Causal Perspective" (ICML 2019)
Stars: ✭ 47 (+104.35%)
ssdg-benchmark - Benchmarks for semi-supervised domain generalization
Stars: ✭ 46 (+100%)
ganbert - Enhancing BERT training with semi-supervised Generative Adversarial Networks
Stars: ✭ 205 (+791.3%)
Context-Aware-Consistency - Semi-supervised Semantic Segmentation with Directional Context-aware Consistency (CVPR 2021)
Stars: ✭ 121 (+426.09%)
ganbert-pytorch - Enhancing BERT training with semi-supervised Generative Adversarial Networks in PyTorch/HuggingFace
Stars: ✭ 60 (+160.87%)
Normalizing Flows - Implementation of normalizing flows on MNIST (https://arxiv.org/abs/1505.05770)
Stars: ✭ 14 (-39.13%)
emotion-recognition-GAN - A semi-supervised approach to detecting emotions on faces in the wild using GANs
Stars: ✭ 20 (-13.04%)