FUSION: PyTorch code for the NeurIPSW 2020 paper (4th Workshop on Meta-Learning) "Few-Shot Unsupervised Continual Learning through Meta-Examples"
Stars: ✭ 18 (-89.77%)
Tybalt: Training and evaluating a variational autoencoder for pan-cancer gene expression data
Stars: ✭ 126 (-28.41%)
concept-based-xai: Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-76.7%)
Recycle Gan: Unsupervised Video Retargeting (e.g. face to face, flower to flower, clouds and winds, sunrise and sunset)
Stars: ✭ 367 (+108.52%)
deepgtt: DeepGTT: Learning Travel Time Distributions with Deep Generative Model
Stars: ✭ 30 (-82.95%)
Openselfsup: Self-Supervised Learning Toolbox and Benchmark
Stars: ✭ 1,239 (+603.98%)
metric-transfer.pytorch: Deep Metric Transfer for Label Propagation with Limited Annotated Data
Stars: ✭ 49 (-72.16%)
VQ-APC: Vector Quantized Autoregressive Predictive Coding (VQ-APC)
Stars: ✭ 34 (-80.68%)
Dynamics: A Compositional Object-Based Approach to Learning Physical Dynamics
Stars: ✭ 159 (-9.66%)
Pase: Problem Agnostic Speech Encoder
Stars: ✭ 348 (+97.73%)
Deep-Unsupervised-Domain-Adaptation: Pytorch implementation of four neural-network-based domain adaptation techniques: DeepCORAL, DDC, CDAN and CDAN+E, evaluated on the Office-31 benchmark dataset.
Stars: ✭ 50 (-71.59%)
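The core idea behind CORAL-style domain adaptation is aligning the second-order statistics (covariances) of source and target features. A minimal NumPy sketch of that loss, using random stand-in feature batches (the repo's actual PyTorch implementation differs in details):

```python
import numpy as np

def coral_loss(source, target):
    """Squared Frobenius distance between source/target feature covariances,
    scaled by 1/(4*d^2) as in the CORAL formulation."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)
    ct = np.cov(target, rowvar=False)
    return np.sum((cs - ct) ** 2) / (4.0 * d * d)

# Hypothetical feature batches: 64 samples, 8-dimensional features.
rng = np.random.default_rng(1)
xs = rng.normal(0.0, 1.0, size=(64, 8))  # "source" features
xt = rng.normal(0.0, 2.0, size=(64, 8))  # "target" features with a shifted scale

print(coral_loss(xs, xs))  # identical batches give zero loss
print(coral_loss(xs, xt))  # mismatched covariances give a positive loss
```

Minimizing this term alongside the task loss nudges the network toward domain-invariant features.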
Image similarity: PyTorch blog post on image similarity search
Stars: ✭ 80 (-54.55%)
hmm market behavior: Example of unsupervised learning for market behavior forecasting
Stars: ✭ 36 (-79.55%)
Mmt: [ICLR 2020] Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification
Stars: ✭ 345 (+96.02%)
Gon: Gradient Origin Networks, a new type of generative model that quickly learns a latent representation without an encoder
Stars: ✭ 126 (-28.41%)
benchmark VAE: Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Stars: ✭ 1,211 (+588.07%)
Paragraph Vectors: 📄 A PyTorch implementation of Paragraph Vectors (doc2vec)
Stars: ✭ 337 (+91.48%)
temporal-ssl: Video Representation Learning by Recognizing Temporal Transformations (ECCV 2020)
Stars: ✭ 46 (-73.86%)
drama: Main component extraction for outlier detection
Stars: ✭ 17 (-90.34%)
Neural Ode: Jupyter notebook with a Pytorch implementation of Neural Ordinary Differential Equations
Stars: ✭ 335 (+90.34%)
Eigen-Portfolio: Unsupervised machine learning with Principal Component Analysis (PCA) on the Dow Jones Industrial Average index and its 30 constituent stocks to construct an optimized, diversified portfolio.
Stars: ✭ 54 (-69.32%)
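The eigen-portfolio idea reduces to running PCA on a matrix of stock returns and reading portfolio weights off the leading eigenvector. A minimal NumPy sketch with random stand-in return data (the repo works with actual Dow Jones constituents and a more involved weighting scheme):

```python
import numpy as np

# Stand-in data: hypothetical daily returns for 5 stocks over 250 trading days.
rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, size=(250, 5))

# PCA via eigendecomposition of the return covariance matrix.
cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

# The leading eigenvector (the "eigen-portfolio") captures the most variance;
# normalize its absolute loadings into long-only weights that sum to 1.
leading = eigvecs[:, -1]
weights = np.abs(leading) / np.abs(leading).sum()
print(weights)
```

Subsequent eigenvectors give further, mutually uncorrelated portfolios ordered by explained variance.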
prescription-outliers: DDC-Outlier: Preventing medication errors using unsupervised learning
Stars: ✭ 18 (-89.77%)
CGMM: Official repository of "Contextual Graph Markov Model" (ICML 2018, JMLR 2020)
Stars: ✭ 35 (-80.11%)
Dfc Vae: Variational autoencoder trained with a feature perceptual loss
Stars: ✭ 74 (-57.95%)
MIDI-VAE: No description or website provided.
Stars: ✭ 56 (-68.18%)
Selflow: SelFlow: Self-Supervised Learning of Optical Flow
Stars: ✭ 319 (+81.25%)
TA3N: [ICCV 2019 Oral] TA3N: https://github.com/cmhungsteve/TA3N (most up-to-date repo)
Stars: ✭ 45 (-74.43%)
3dpose gan: The authors' implementation of Unsupervised Adversarial Learning of 3D Human Pose from 2D Joint Locations
Stars: ✭ 124 (-29.55%)
Openunreid: PyTorch open-source toolbox for unsupervised or domain-adaptive object re-ID
Stars: ✭ 250 (+42.05%)
Celebamask Hq: A large-scale face dataset for face parsing, recognition, generation and editing
Stars: ✭ 1,156 (+556.82%)
Unflow: UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss
Stars: ✭ 239 (+35.8%)
Pytorch Mnist Celeba Cgan Cdcgan: Pytorch implementation of conditional Generative Adversarial Networks (cGAN) and conditional Deep Convolutional Generative Adversarial Networks (cDCGAN) for the MNIST dataset
Stars: ✭ 290 (+64.77%)
Unsupervised Video: [CVPR 2017] Unsupervised deep learning using unlabelled videos on the web
Stars: ✭ 233 (+32.39%)
Go Tsne: t-Distributed Stochastic Neighbor Embedding (t-SNE) in Go
Stars: ✭ 153 (-13.07%)
Transmomo.pytorch: The official PyTorch implementation of the CVPR 2020 paper "TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting"
Stars: ✭ 225 (+27.84%)
He4o: 和 (he, for Objective-C), an "information entropy reduction machine" system
Stars: ✭ 284 (+61.36%)
Pytorch Byol: PyTorch implementation of Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
Stars: ✭ 213 (+21.02%)
Insta Dm: Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency (AAAI 2021)
Stars: ✭ 67 (-61.93%)
Lkvolearner: Learning Depth from Monocular Videos using Direct Methods (CVPR 2018)
Stars: ✭ 210 (+19.32%)
Daisyrec: A recommender system under development in PyTorch. Algorithms: KNN, LFM, SLIM, NeuMF, FM, DeepFM, VAE and more, aiming at fair comparison across recommender system benchmarks.
Stars: ✭ 280 (+59.09%)
Iseebetter: iSeeBetter: Spatio-Temporal Video Super-Resolution using Recurrent-Generative Back-Projection Networks | Python 3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer's Journal of Computational Visual Media, September 2020, Tsinghua University Press
Stars: ✭ 202 (+14.77%)
Sfmlearner: An unsupervised learning framework for depth and ego-motion estimation from monocular videos
Stars: ✭ 1,661 (+843.75%)
Sealion: The first machine learning framework that encourages learning ML concepts instead of memorizing class functions
Stars: ✭ 278 (+57.95%)
Simclr: SimCLRv2: Big Self-Supervised Models Are Strong Semi-Supervised Learners
Stars: ✭ 2,720 (+1445.45%)
Danmf: A sparsity-aware implementation of "Deep Autoencoder-like Nonnegative Matrix Factorization for Community Detection" (CIKM 2018)
Stars: ✭ 161 (-8.52%)
Csm: Code release for "Canonical Surface Mapping via Geometric Cycle Consistency"
Stars: ✭ 156 (-11.36%)
Stylisticpoetry: Code for "Stylistic Chinese Poetry Generation via Unsupervised Style Disentanglement" (EMNLP 2018)
Stars: ✭ 148 (-15.91%)
Arflow: The official PyTorch implementation of the paper "Learning by Analogy: Reliable Supervision from Transformations for Unsupervised Optical Flow Estimation"
Stars: ✭ 134 (-23.86%)
Smrt: Handle class imbalance intelligently by using variational autoencoders to generate synthetic observations of your minority class
Stars: ✭ 102 (-42.05%)
rl singing voice: Unsupervised Representation Learning for Singing Voice Separation
Stars: ✭ 18 (-89.77%)