
tensorlayer / Dagan

The implementation code for "DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction"

Programming Languages

python

Projects that are alternatives to or similar to Dagan

Hccg Cyclegan
Handwritten Chinese Characters Generation
Stars: ✭ 115 (-12.88%)
Mutual labels:  generative-adversarial-network
Rectorch
rectorch is a pytorch-based framework for state-of-the-art top-N recommendation
Stars: ✭ 121 (-8.33%)
Mutual labels:  generative-adversarial-network
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+8182.58%)
Mutual labels:  generative-adversarial-network
3d Recgan
🔥3D-RecGAN in Tensorflow (ICCV Workshops 2017)
Stars: ✭ 116 (-12.12%)
Mutual labels:  generative-adversarial-network
Capsule Gan
Code for my Master thesis on "Capsule Architecture as a Discriminator in Generative Adversarial Networks".
Stars: ✭ 120 (-9.09%)
Mutual labels:  generative-adversarial-network
Cramer Gan
Tensorflow implementation of "The Cramer Distance as a Solution to Biased Wasserstein Gradients" (https://arxiv.org/pdf/1705.10743.pdf)
Stars: ✭ 123 (-6.82%)
Mutual labels:  generative-adversarial-network
Gpnd
Generative Probabilistic Novelty Detection with Adversarial Autoencoders
Stars: ✭ 112 (-15.15%)
Mutual labels:  generative-adversarial-network
Chainer Pix2pix
chainer implementation of pix2pix
Stars: ✭ 130 (-1.52%)
Mutual labels:  generative-adversarial-network
Mgan
Masking GAN - Image attribute mask generation
Stars: ✭ 120 (-9.09%)
Mutual labels:  generative-adversarial-network
Pytorch Studiogan
StudioGAN is a Pytorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation.
Stars: ✭ 2,325 (+1661.36%)
Mutual labels:  generative-adversarial-network
The Gan Zoo
A list of all named GANs!
Stars: ✭ 11,454 (+8577.27%)
Mutual labels:  generative-adversarial-network
Spiral Tensorflow
in progress
Stars: ✭ 117 (-11.36%)
Mutual labels:  generative-adversarial-network
3dpose gan
The authors' implementation of Unsupervised Adversarial Learning of 3D Human Pose from 2D Joint Locations
Stars: ✭ 124 (-6.06%)
Mutual labels:  generative-adversarial-network
A Nice Mc
Code for "A-NICE-MC: Adversarial Training for MCMC"
Stars: ✭ 115 (-12.88%)
Mutual labels:  generative-adversarial-network
Awesome Gan For Medical Imaging
Awesome GAN for Medical Imaging
Stars: ✭ 1,814 (+1274.24%)
Mutual labels:  generative-adversarial-network
Sketchygan
Code for paper "SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis"
Stars: ✭ 113 (-14.39%)
Mutual labels:  generative-adversarial-network
Tensorflow Mnist Cgan Cdcgan
Tensorflow implementation of conditional Generative Adversarial Networks (cGAN) and conditional Deep Convolutional Generative Adversarial Networks (cDCGAN) for the MNIST dataset.
Stars: ✭ 122 (-7.58%)
Mutual labels:  generative-adversarial-network
Ganimation
GANimation: Anatomically-aware Facial Animation from a Single Image (ECCV'18 Oral) [PyTorch]
Stars: ✭ 1,730 (+1210.61%)
Mutual labels:  generative-adversarial-network
Ganspapercollection
Stars: ✭ 130 (-1.52%)
Mutual labels:  generative-adversarial-network
Mlds2018spring
Machine Learning and having it Deep and Structured (MLDS) in 2018 spring
Stars: ✭ 124 (-6.06%)
Mutual labels:  generative-adversarial-network

DAGAN

This is the official implementation code for "DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction", published in IEEE Transactions on Medical Imaging (2018).
Guang Yang*, Simiao Yu*, et al.
(* equal contributions)

If you use this code for your research, please cite our paper.

@article{yang2018_dagan,
	author = {Yang, Guang and Yu, Simiao and Dong, Hao and Slabaugh, Gregory G. and Dragotti, Pier Luigi and Ye, Xujiong and Liu, Fangde and Arridge, Simon R. and Keegan, Jennifer and Guo, Yike and Firmin, David N.},
	journal = {IEEE Trans. Med. Imaging},
	number = 6,
	pages = {1310--1321},
	title = {{DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction}},
	volume = 37,
	year = 2018
}

If you have any questions about this code, please feel free to contact Simiao Yu ([email protected]).

Prerequisites

The original code was written in Python 3.5 with the following dependencies:

  1. tensorflow (v1.1.0)
  2. tensorlayer (v1.7.2)
  3. easydict (v1.6)
  4. nibabel (v2.1.0)
  5. scikit-image (v0.12.3)

The code was tested on Ubuntu 16.04 with an Nvidia GPU and CUDA/cuDNN versions compatible with tensorflow v1.1.0.
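
As a quick sanity check of the environment, the pinned versions can be verified from Python (a minimal sketch; the expected version strings simply mirror the list above):

  import tensorflow as tf
  import tensorlayer as tl
  import nibabel
  import skimage

  # Expected to match the versions listed above.
  print('tensorflow  ', tf.__version__)        # 1.1.0
  print('tensorlayer ', tl.__version__)        # 1.7.2
  print('nibabel     ', nibabel.__version__)   # 2.1.0
  print('scikit-image', skimage.__version__)   # 0.12.3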

How to use

  1. Prepare data

    1. The data used in this work are publicly available from the MICCAI 2013 grand challenge (link). Users need to register with the grand challenge organisers in order to download the data.
    2. Download the training and test data into data/MICCAI13_SegChallenge/Training_100 and data/MICCAI13_SegChallenge/Testing_100 respectively (we randomly selected 100 T1-weighted MRI datasets for training and 50 datasets for testing).
    3. Run 'python data_loader.py'.
    4. After running the script, training/validation/testing data are saved to 'data/MICCAI13_SegChallenge/' in pickle format (a loading sketch is given after this list).
  2. Download pretrained VGG16 model

    1. Download 'vgg16_weights.npz' from this link.
    2. Save 'vgg16_weights.npz' into 'trained_model/VGG16' (a quick check of the file is sketched after this list).
  3. Train model

    1. Run 'CUDA_VISIBLE_DEVICES=0 python train.py --model MODEL --mask MASK --maskperc MASKPERC', where MODEL, MASK and MASKPERC are specified as follows:
    • MODEL: choose from 'unet' or 'unet_refine'
    • MASK: choose from 'gaussian1d', 'gaussian2d', 'poisson2d'
    • MASKPERC: choose from '10', '20', '30', '40', '50' (percentage of mask)
  4. Test trained model

    1. Run 'CUDA_VISIBLE_DEVICES=0 python test.py --model MODEL --mask MASK --maskperc MASKPERC', specifying MODEL, MASK and MASKPERC as above (a concrete example is given below).
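
The snippets below are minimal sketches for reference only; they are not part of the official code.

Loading the preprocessed data (step 1): the exact pickle file names are defined in data_loader.py, so the name used here is a placeholder:

  import pickle

  # Placeholder file name; check data_loader.py for the actual names it writes
  # under data/MICCAI13_SegChallenge/.
  with open('data/MICCAI13_SegChallenge/training.pickle', 'rb') as f:
      training_data = pickle.load(f)
  print(type(training_data))

Checking the VGG16 weights (step 2): 'vgg16_weights.npz' is a standard NumPy archive and can be inspected directly (the key names shown are typical for this file, but may differ):

  import numpy as np

  weights = np.load('trained_model/VGG16/vgg16_weights.npz')
  print(sorted(weights.files))  # e.g. ['conv1_1_W', 'conv1_1_b', ...]

Training and testing (steps 3 and 4): for example, to train and then test the refined U-Net model with a 1D Gaussian mask and a mask percentage of 30:

  CUDA_VISIBLE_DEVICES=0 python train.py --model unet_refine --mask gaussian1d --maskperc 30
  CUDA_VISIBLE_DEVICES=0 python test.py --model unet_refine --mask gaussian1d --maskperc 30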

Results

Please refer to the paper for the detailed results.
