autonomousvision / Graf

License: MIT
Official code release for "GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis"

Projects that are alternatives to or similar to Graf

Udacity Ml Capstone
Udacity 2018 Machine Learning Nanodegree Capstone project
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Algorithms Illuminated
My notes for Tim Roughgarden's awesome course on Algorithms and his 4 part books
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Machine learning beginner
Posts from the "Machine Learning for Beginners" WeChat official account
Stars: ✭ 1,770 (+1293.7%)
Mutual labels:  jupyter-notebook
Aihub
I use this repository for my Youtube channel where I share videos about Artificial Intelligence. The repository includes Machine Learning, Deep Learning, and Reinforcement learning's code.
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Visual Attribution
Pytorch Implementation of recent visual attribution methods for model interpretability
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Criteo 1tb Benchmark
Benchmark of different ML algorithms on Criteo 1TB dataset
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Finlib
A streamlined library for getting historical financial price data, fundamental data, and financial ratios.
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Jupyter notebooks
Collection of jupyter notebooks
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Pynq Computervision
Computer Vision Overlays on Pynq
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Ml hacks
Tricks and techniques in machine learning
Stars: ✭ 128 (+0.79%)
Mutual labels:  jupyter-notebook
Trustscore
To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier which is more effective than the classifier's own implied confidence (e.g. softmax probability for a neural network).
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Python notes
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Sandbox
Play time!
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Data Science For Marketing Analytics
Achieve your marketing goals with the data analytics power of Python
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Colour Demosaicing
CFA (Colour Filter Array) Demosaicing Algorithms for Python
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Jupyter Datatables
Jupyter Notebook extension leveraging pandas DataFrames by integrating DataTables and ChartJS.
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Introdatascience
Notes on Data Science: study notes on mathematical statistics, machine learning, and data programming.
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Celegansneuroml
NeuroML based C elegans model, contained in a neuroConstruct project, as well as c302
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Data science
daily curated links in DS, DL, NLP, ML
Stars: ✭ 127 (+0%)
Mutual labels:  jupyter-notebook
Cs231a
Computer Vision: From 3D Reconstruction to Recognition
Stars: ✭ 126 (-0.79%)
Mutual labels:  jupyter-notebook

GRAF


This repository contains official code for the paper GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis.

You can find detailed usage instructions for training your own models and using pre-trained models below.

If you find our code or paper useful, please consider citing

@inproceedings{Schwarz2020NEURIPS,
  title = {GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis},
  author = {Schwarz, Katja and Liao, Yiyi and Niemeyer, Michael and Geiger, Andreas},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2020}
}

Installation

First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an Anaconda environment called graf using

conda env create -f environment.yml
conda activate graf

Next, install torchsearchsorted for nerf-pytorch. Note that this requires torch>=1.4.0 and CUDA >= v10.1. You can install torchsearchsorted via

cd submodules/nerf_pytorch
pip install -r requirements.txt
cd torchsearchsorted
pip install .
cd ../../../
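
To verify the setup, you can run a quick sanity check from the repository root (a minimal sketch; it assumes the package installs under the module name torchsearchsorted):

python -c "import torch; print(torch.__version__, torch.version.cuda)"  # should report torch>=1.4.0 and CUDA>=10.1
python -c "from torchsearchsorted import searchsorted; print('torchsearchsorted OK')"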

Demo

You can now test our code via:

python eval.py configs/carla.yaml --pretrained --rotation_elevation

This script should create a folder results/carla_128_from_pretrained/eval/ where you can find generated videos with varying camera pose for the Cars dataset.
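
To inspect the outputs, you can simply list that folder; the exact video filenames depend on the evaluation options:

ls results/carla_128_from_pretrained/eval/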

Datasets

If you only want to generate images using our pretrained models, you do not need to download the datasets. The datasets are only needed if you want to train a model from scratch.

Cars

To download the Cars dataset from the paper, simply run

cd data
./download_carla.sh
cd ..

This creates a folder data/carla/, downloads the images as a zip file, and extracts them to data/carla/. While we do not use camera poses in this project, we provide them for completeness. You can download them by running

cd data
./download_carla_poses.sh
cd ..

This downloads the camera intrinsics (single file, equal for all images) and extrinsics corresponding to each image.

Faces

Download celebA. Then replace data/celebA in configs/celebA.yaml with *PATH/TO/CELEBA*/Img/img_align_celebA.

Download celebA_hq. Then replace data/celebA_hq in configs/celebAHQ.yaml with *PATH/TO/CELEBA_HQ*.
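
If you prefer not to edit the config files by hand, the replacement can also be done with a one-line sed command (shown here for celebA with GNU sed; adapt the target path to your machine and proceed analogously for celebA_hq):

sed -i 's|data/celebA|PATH/TO/CELEBA/Img/img_align_celebA|' configs/celebA.yaml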

Cats

Download the CatDataset. Run

cd data
python preprocess_cats.py PATH/TO/CATS/DATASET
cd ..

to preprocess the data and save it to data/cats. If successful this script should print: Preprocessed 9407 images.
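
A quick way to double-check the result is to count the files in the output folder (this assumes the script writes all preprocessed images directly into data/cats; the analogous check with data/cub and 8444 images applies to the Birds dataset below):

ls data/cats | wc -l  # should print 9407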

Birds

Download CUB-200-2011 and the corresponding Segmentation Masks. Run

cd data
python preprocess_cub.py PATH/TO/CUB-200-2011 PATH/TO/SEGMENTATION/MASKS
cd ..

to preprocess the data and save it to data/cub. If successful this script should print: Preprocessed 8444 images.

Usage

When you have installed all dependencies, you are ready to run our pre-trained models for 3D-aware image synthesis.

Generate images using a pretrained model

To evaluate a pretrained model, run

python eval.py CONFIG.yaml --pretrained --fid_kid --rotation_elevation --shape_appearance

where you replace CONFIG.yaml with one of the config files in ./configs.

This script should create a folder results/EXPNAME/eval with FID and KID scores in fid_kid.csv, videos for rotation and elevation in the respective folders, and an interpolation of shape and appearance in shape_appearance.png.
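
To inspect the quantitative results afterwards, you can load the CSV, e.g. with pandas (a sketch; it assumes pandas is available in your environment, and you need to replace EXPNAME with your experiment name):

python -c "import pandas as pd; print(pd.read_csv('results/EXPNAME/eval/fid_kid.csv'))"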

Note that some pretrained models are available for different image sizes, which you can choose by setting data:imsize in the config file to one of the following values:

configs/carla.yaml: 
    data:imsize 64 or 128 or 256 or 512
configs/celebA.yaml:
    data:imsize 64 or 128
configs/celebAHQ.yaml:
    data:imsize 256 or 512
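
To confirm which image size a config currently uses, you can read the nested data:imsize key directly (a sketch assuming the config files are plain YAML and PyYAML is installed):

python -c "import yaml; print(yaml.safe_load(open('configs/carla.yaml'))['data']['imsize'])"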

Train a model from scratch

To train a 3D-aware generative model from scratch, run

python train.py CONFIG.yaml

where you replace CONFIG.yaml with your config file. The easiest way is to use one of the existing config files in the ./configs directory, which correspond to the experiments presented in the paper. Note that this will train the model from scratch and will not resume training from a pretrained model.

You can monitor the training process at http://localhost:6006 using TensorBoard:

cd OUTPUT_DIR
tensorboard --logdir ./monitoring --port 6006

where you replace OUTPUT_DIR with the respective output directory.

For available training options, please take a look at configs/default.yaml.

Evaluation of a new model

For evaluation of the models run

python eval.py CONFIG.yaml --fid_kid --rotation_elevation --shape_appearance

where you replace CONFIG.yaml with your config file.

Multi-View Consistency Check

You can evaluate the multi-view consistency of the generated images by running a Multi-View Stereo (MVS) algorithm on them. This evaluation uses COLMAP, so make sure that you have COLMAP installed (a quick check is shown at the end of this section). Then run

python eval.py CONFIG.yaml --reconstruction

where you replace CONFIG.yaml with your config file. You can also evaluate our pretrained models via:

python eval.py configs/carla.yaml --pretrained --reconstruction

This script should create a folder results/EXPNAME/eval/reconstruction/ where you can find generated multi-view images in images/ and the corresponding 3D reconstructions in models/.
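
If the reconstruction step fails right away, first verify that COLMAP is available on your PATH, for example:

command -v colmap  # should print the path to the colmap binary
colmap help        # should list the available COLMAP commands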

Further Information

GAN training

This repository uses Lars Mescheder's awesome framework for GAN training.

NeRF

We base our code for the Generator on this great PyTorch reimplementation of Neural Radiance Fields.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].