tinghuiz / Appearance Flow

A deep learning framework for synthesizing novel views of objects and scenes


View Synthesis by Appearance Flow

Tinghui Zhou, Shubham Tulsiani, Weilun Sun, Jitendra Malik, and Alyosha Efros, ECCV 2016.

Overview

We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints. We approach this as a learning task but, critically, instead of learning to synthesize pixels from scratch, we learn to copy them from the input image. Our approach exploits the observation that the visual appearance of different views of the same instance is highly correlated, and that this correlation can be learned explicitly by training a convolutional neural network (CNN) to predict appearance flows: 2-D coordinate vectors specifying which pixels in the input view can be used to reconstruct the target view. Furthermore, the proposed framework generalizes naturally to multiple input views by learning how to optimally combine the single-view predictions.
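The core operation, copying input pixels according to a predicted flow field with bilinear interpolation, can be sketched in NumPy. This is a simplified stand-in for the Caffe remap layer included in this repo, not its exact implementation; the coordinate convention (flow stores absolute source coordinates) is an assumption for illustration.

```python
import numpy as np

def warp_with_appearance_flow(source, flow):
    """Reconstruct a target view by copying pixels from the source view.

    source: (H, W, C) input view.
    flow:   (H, W, 2) appearance flow; flow[y, x] = (sx, sy) gives the
            (possibly sub-pixel) source coordinates whose colour fills
            target pixel (x, y).
    Bilinear interpolation between the four neighbouring source pixels
    makes the sampling differentiable with respect to the flow.
    """
    H, W, _ = source.shape
    sx = np.clip(flow[..., 0], 0, W - 1)
    sy = np.clip(flow[..., 1], 0, H - 1)
    x0 = np.floor(sx).astype(int)
    y0 = np.floor(sy).astype(int)
    x1 = np.minimum(x0 + 1, W - 1)
    y1 = np.minimum(y0 + 1, H - 1)
    wx = (sx - x0)[..., None]          # horizontal blend weight
    wy = (sy - y0)[..., None]          # vertical blend weight
    top = source[y0, x0] * (1 - wx) + source[y0, x1] * wx
    bot = source[y1, x0] * (1 - wx) + source[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

With an identity flow (each target pixel sampling its own coordinates) the warp reproduces the input exactly; a CNN trained to minimize reconstruction error against the target view learns to output the flow instead.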

Single-view network architecture:

Multi-view network architecture:

Links to the [Paper] and [Poster]

Please contact Tinghui Zhou ([email protected]) if you have any questions.

Citing

If you find our paper/code useful, please consider citing:

@inproceedings{zhou2016view,
	title={View Synthesis by Appearance Flow},
	author={Zhou, Tinghui and Tulsiani, Shubham and Sun, Weilun and Malik, Jitendra and Efros, Alexei A},
	booktitle={European Conference on Computer Vision},
	year={2016}
}

Repo organization:

  • A copy of Caffe used for our experiments is included. Specifically, it includes our CPU implementation of the bilinear differentiable image sampler (see 'include/caffe/layers/remap_layer.hpp' and 'src/caffe/layers/remap_layer.cpp').
  • 'models/' contains sample prototxt files of our view synthesis models. The caffemodels can be downloaded via https://people.eecs.berkeley.edu/~tinghuiz/projects/appearanceFlow/caffemodels/[MODEL_NAME].caffemodel
  • 'data/' contains the lists of training and testing shapes in our ShapeNet experiments.
  • 'ObjRenderer/' contains the rendering code we used for generating ShapeNet rendered views.
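The caffemodel URL pattern above can be expanded as follows. The snippet only builds and prints the URL; 'car_single' is the model used by the demo below, while other [MODEL_NAME] values must match the prototxt directories under 'models/'.

```shell
# Substitute [MODEL_NAME] in the download URL; car_single is the demo model.
MODEL_NAME=car_single
URL="https://people.eecs.berkeley.edu/~tinghuiz/projects/appearanceFlow/caffemodels/${MODEL_NAME}.caffemodel"
echo "$URL"
# To fetch it into the location the prototxt files expect:
#   wget -N "$URL" -O "models/${MODEL_NAME}/${MODEL_NAME}.caffemodel"
```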

Running the demo

We provide demo code for synthesizing novel views of ShapeNet cars from a single input image. First, download the pre-trained model:

wget -N https://people.eecs.berkeley.edu/~tinghuiz/projects/appearanceFlow/caffemodels/car_single.caffemodel -O models/car_single/car_single.caffemodel

Then run the provided Jupyter notebook demo.ipynb to try the demo.

Sample ShapeNet Results on Single-View 3D Object Rotation

In each example, the input view is marked with a green bounding box; all other views are synthesized by our single-view object rotation network.

Sample KITTI Results on 3D Scene Fly-through

The task is to synthesize a fly-through effect for the 3D scene given only two input views (marked with green and red bounding boxes). All intermediate frames are synthesized.
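The per-view predictions mentioned in the overview are fused by per-pixel weighting. A minimal sketch, assuming softmax-normalized confidence maps (the released networks may parameterize the combination differently):

```python
import numpy as np

def combine_view_predictions(warped_views, confidences):
    """Blend per-view synthesized images with per-pixel softmax weights.

    warped_views: list of (H, W, C) images, each warped from one input view.
    confidences:  list of (H, W) score maps, one per view.
    """
    scores = np.stack(confidences).astype(float)   # (V, H, W)
    scores -= scores.max(axis=0, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=0, keepdims=True)  # softmax over views
    views = np.stack(warped_views).astype(float)   # (V, H, W, C)
    return (weights[..., None] * views).sum(axis=0)
```

Because the weights sum to one at every pixel, each output pixel is a convex combination of the candidate views, letting the network prefer whichever input view sees that part of the scene best.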

Acknowledgement

We thank Philipp Krähenbühl and Abhishek Kar for helpful discussions. This work was supported in part by NSF award IIS-1212798, Intel/NSF Visual and Experiential Computing award IIS-1539099, Berkeley Deep Drive, and a Berkeley Fellowship. We gratefully acknowledge NVIDIA corporation for the donation of GPUs used for this research.
