
corenel / Pytorch Adda

License: MIT
A PyTorch implementation for Adversarial Discriminative Domain Adaptation

Programming Languages

python

Projects that are alternatives of or similar to Pytorch Adda

Adaptsegnet
Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018 (spotlight)
Stars: ✭ 654 (+98.78%)
Mutual labels:  generative-adversarial-network, domain-adaptation
pytorch-dann
A PyTorch implementation for Unsupervised Domain Adaptation by Backpropagation
Stars: ✭ 110 (-66.57%)
Mutual labels:  generative-adversarial-network, domain-adaptation
Lsd Seg
Learning from Synthetic Data: Addressing Domain Shift for Semantic Segmentation
Stars: ✭ 99 (-69.91%)
Mutual labels:  generative-adversarial-network, domain-adaptation
pytorch-arda
A PyTorch implementation for Adversarial Representation Learning for Domain Adaptation
Stars: ✭ 49 (-85.11%)
Mutual labels:  generative-adversarial-network, domain-adaptation
pytorch-domain-adaptation
Unofficial pytorch implementation of algorithms for domain adaptation
Stars: ✭ 24 (-92.71%)
Mutual labels:  generative-adversarial-network, domain-adaptation
domain adapt
Domain adaptation networks for digit recognition
Stars: ✭ 14 (-95.74%)
Mutual labels:  generative-adversarial-network, domain-adaptation
Salad
A toolbox for domain adaptation and semi-supervised learning. Contributions welcome.
Stars: ✭ 257 (-21.88%)
Mutual labels:  domain-adaptation
Pytorch Mnist Celeba Cgan Cdcgan
Pytorch implementation of conditional Generative Adversarial Networks (cGAN) and conditional Deep Convolutional Generative Adversarial Networks (cDCGAN) for MNIST dataset
Stars: ✭ 290 (-11.85%)
Mutual labels:  generative-adversarial-network
robustness
Robustness and adaptation of ImageNet scale models. Pre-Release, stay tuned for updates.
Stars: ✭ 63 (-80.85%)
Mutual labels:  domain-adaptation
VQGAN-CLIP-Docker
Zero-Shot Text-to-Image Generation VQGAN+CLIP Dockerized
Stars: ✭ 58 (-82.37%)
Mutual labels:  generative-adversarial-network
Psgan
PyTorch code for "PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer" (CVPR 2020 Oral)
Stars: ✭ 318 (-3.34%)
Mutual labels:  generative-adversarial-network
Deep Generative Prior
Code for deep generative prior (ECCV2020 oral)
Stars: ✭ 308 (-6.38%)
Mutual labels:  generative-adversarial-network
Dcgan
The Simplest DCGAN Implementation
Stars: ✭ 286 (-13.07%)
Mutual labels:  generative-adversarial-network
Tf 3dgan
Tensorflow implementation of 3D Generative Adversarial Network.
Stars: ✭ 263 (-20.06%)
Mutual labels:  generative-adversarial-network
Pytorch Srgan
A modern PyTorch implementation of SRGAN
Stars: ✭ 289 (-12.16%)
Mutual labels:  generative-adversarial-network
Textbox
TextBox is an open-source library for building text generation systems.
Stars: ✭ 257 (-21.88%)
Mutual labels:  generative-adversarial-network
Few Shot Patch Based Training
The official implementation of our SIGGRAPH 2020 paper Interactive Video Stylization Using Few-Shot Patch-Based Training
Stars: ✭ 313 (-4.86%)
Mutual labels:  generative-adversarial-network
UEGAN
[TIP2020] Pytorch implementation of "Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network"
Stars: ✭ 68 (-79.33%)
Mutual labels:  generative-adversarial-network
Faceswap Gan
A denoising autoencoder + adversarial losses and attention mechanisms for face swapping.
Stars: ✭ 3,099 (+841.95%)
Mutual labels:  generative-adversarial-network
Real Time Self Adaptive Deep Stereo
Code for "Real-time self-adaptive deep stereo" - CVPR 2019 (ORAL)
Stars: ✭ 306 (-6.99%)
Mutual labels:  domain-adaptation

PyTorch-ADDA

A PyTorch implementation for Adversarial Discriminative Domain Adaptation.

Environment

  • Python 3.6
  • PyTorch 0.2.0

Usage

I have only tested MNIST -> USPS; you can reproduce it by running:

python3 main.py

Network

In this experiment, I use three simple networks (a re-implementation sketch in current PyTorch follows this list).

  • LeNet encoder

    LeNetEncoder (
      (encoder): Sequential (
        (0): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
        (1): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
        (2): ReLU ()
        (3): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1))
        (4): Dropout2d (p=0.5)
        (5): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
        (6): ReLU ()
      )
      (fc1): Linear (800 -> 500)
    )
    
  • LeNet classifier

    LeNetClassifier (
      (fc2): Linear (500 -> 10)
    )
    
  • Discriminator

    Discriminator (
      (layer): Sequential (
        (0): Linear (500 -> 500)
        (1): ReLU ()
        (2): Linear (500 -> 500)
        (3): ReLU ()
        (4): Linear (500 -> 2)
        (5): LogSoftmax ()
      )
    )
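
The summaries above are printed by PyTorch 0.2. As a rough guide only, here is a minimal sketch of how the same three networks could be written as nn.Module subclasses in current PyTorch; the layer sizes are taken from the summaries, while the class and method bodies are reconstructed for illustration, not copied from this repository.

    import torch.nn as nn

    class LeNetEncoder(nn.Module):
        """Maps a 1x28x28 digit image to a shared 500-d feature vector."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 20, kernel_size=5),   # 28x28 -> 24x24
                nn.MaxPool2d(2),                   # -> 12x12
                nn.ReLU(),
                nn.Conv2d(20, 50, kernel_size=5),  # -> 8x8
                nn.Dropout2d(p=0.5),
                nn.MaxPool2d(2),                   # -> 4x4
                nn.ReLU(),
            )
            self.fc1 = nn.Linear(50 * 4 * 4, 500)

        def forward(self, x):
            feat = self.encoder(x)
            return self.fc1(feat.view(feat.size(0), -1))

    class LeNetClassifier(nn.Module):
        """Predicts the 10 digit classes from the 500-d feature."""
        def __init__(self):
            super().__init__()
            self.fc2 = nn.Linear(500, 10)

        def forward(self, feat):
            return self.fc2(feat)

    class Discriminator(nn.Module):
        """Domain discriminator: source (1) vs. target (0) features."""
        def __init__(self):
            super().__init__()
            self.layer = nn.Sequential(
                nn.Linear(500, 500),
                nn.ReLU(),
                nn.Linear(500, 500),
                nn.ReLU(),
                nn.Linear(500, 2),
                nn.LogSoftmax(dim=1),
            )

        def forward(self, feat):
            return self.layer(feat)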
    

Result

                                     MNIST (Source)   USPS (Target)
Source Encoder + Source Classifier   99.140000%       83.978495%
Target Encoder + Source Classifier   -                97.634409%

Domain adaptation does work: on USPS, accuracy rises from 83.98% with the source-only encoder to 97.63% with the adapted target encoder.
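
This gain comes from ADDA's adversarial adaptation stage: the discriminator learns to separate source from target features, while the target encoder learns to fool it. Below is a minimal sketch of that stage in current PyTorch, assuming the networks above plus hypothetical names src_encoder (pre-trained on MNIST and kept frozen), tgt_encoder (initialised from src_encoder), discriminator, and data loaders src_loader / tgt_loader; it is illustrative, not the exact code in main.py.

    import torch
    import torch.nn as nn
    import torch.optim as optim

    criterion = nn.NLLLoss()   # the discriminator ends in LogSoftmax
    opt_d = optim.Adam(discriminator.parameters(), lr=1e-4)
    opt_t = optim.Adam(tgt_encoder.parameters(), lr=1e-4)

    for (src_imgs, _), (tgt_imgs, _) in zip(src_loader, tgt_loader):
        # 1) Train the discriminator to separate source (label 1)
        #    from target (label 0) features.
        feat_src = src_encoder(src_imgs).detach()
        feat_tgt = tgt_encoder(tgt_imgs).detach()
        feats = torch.cat([feat_src, feat_tgt], dim=0)
        labels = torch.cat([torch.ones(len(feat_src)),
                            torch.zeros(len(feat_tgt))]).long()
        opt_d.zero_grad()
        loss_d = criterion(discriminator(feats), labels)
        loss_d.backward()
        opt_d.step()

        # 2) Train the target encoder to fool the discriminator,
        #    i.e. make its features look like source features (label 1).
        opt_t.zero_grad()
        pred_tgt = discriminator(tgt_encoder(tgt_imgs))
        loss_t = criterion(pred_tgt, torch.ones(len(tgt_imgs)).long())
        loss_t.backward()
        opt_t.step()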
