
Projects that are alternatives of or similar to RefRESH

blender-colab
Render Blender 3.x and 2.9x scenes with Google Colaboratory
Stars: ✭ 78 (+52.94%)
Mutual labels:  blender, rendering
Rd Blender Docker
A collection of Docker containers for running Blender headless or distributed ✨
Stars: ✭ 111 (+117.65%)
Mutual labels:  blender, rendering
export multi
Use the multi-exporter for Blender and check in (and tweak) various scenes step by step.
Stars: ✭ 31 (-39.22%)
Mutual labels:  blender, rendering
osci-render
〰📺🔊 Software for making music by drawing objects on an oscilloscope using audio.
Stars: ✭ 135 (+164.71%)
Mutual labels:  blender, rendering
Armory
3D Engine with Blender Integration
Stars: ✭ 2,350 (+4507.84%)
Mutual labels:  blender, rendering
Bitwrk
Bitcoin-fueled Peer-to-Peer Blender Rendering (and more)
Stars: ✭ 114 (+123.53%)
Mutual labels:  blender, rendering
DuBLF DuBlast
Quick Playblast tool for Blender
Stars: ✭ 18 (-64.71%)
Mutual labels:  blender, rendering
Sheepit Client
Client for the free and distributed render farm "SheepIt Render Farm"
Stars: ✭ 244 (+378.43%)
Mutual labels:  blender, rendering
Appleseed
A modern open source rendering engine for animation and visual effects
Stars: ✭ 1,824 (+3476.47%)
Mutual labels:  blender, rendering
Herebedragons
A basic 3D scene implemented with various engines, frameworks or APIs.
Stars: ✭ 1,616 (+3068.63%)
Mutual labels:  blender, rendering
Hdcycles
Cycles Hydra Delegate
Stars: ✭ 197 (+286.27%)
Mutual labels:  blender, rendering
Blender Cli Rendering
Python scripts for rendering images using Blender 2.83 from command-line interface
Stars: ✭ 241 (+372.55%)
Mutual labels:  blender, rendering
FunMirrors
This is a fun project I created to motivate computer vision enthusiasts and to highlight the importance of understanding fundamental concepts related to image formation in a camera.
Stars: ✭ 43 (-15.69%)
Mutual labels:  rendering
evplp
Implementation of Efficient Energy-Compensated VPLs using Photon Splatting (and various rendering techniques)
Stars: ✭ 26 (-49.02%)
Mutual labels:  rendering
glTF-Blender-IO-materials-variants
Blender3D addon for glTF KHR_materials_variants extension
Stars: ✭ 56 (+9.8%)
Mutual labels:  blender
PintarJS
Micro JS lib for direct WebGL and canvas rendering.
Stars: ✭ 15 (-70.59%)
Mutual labels:  rendering
ForkerRenderer
CPU-Based Software Forward / Deferred Rasterizer, A Tiny OpenGL (PBR, Soft Shadow, SSAO) 🐼
Stars: ✭ 17 (-66.67%)
Mutual labels:  rendering
biology
one key generate biology 3D modul by blender,such as animal,plant,micra
Stars: ✭ 33 (-35.29%)
Mutual labels:  blender
modifier list
Blender add-on with enhanced UI layout for modifiers with handy features. Replaces the regular modifier UI and adds a tab in the Sidebar and a popup.
Stars: ✭ 206 (+303.92%)
Mutual labels:  blender
awesome-point-cloud-deep-learning
Paper list of deep learning on point clouds.
Stars: ✭ 39 (-23.53%)
Mutual labels:  3d-vision

RefRESH Toolkit in Blender

Summary

The blender toolkit for REal 3D from REconstruction with Synthetic Humans (RefRESH).


Video Example of the Created Data | Project Page | Blog | Learning Rigidity Repository

If you use this code or our generated dataset, please cite the following paper:

Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation, Zhaoyang Lv, Kihwan Kim, Alejandro Troccoli, Deqing Sun, James M. Rehg, Jan Kautz, European Conference on Computer Vision 2018

@inproceedings{Lv18eccv,  
  title     = {Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation},  
  author    = {Lv, Zhaoyang and Kim, Kihwan and Troccoli, Alejandro and Rehg, James and Kautz, Jan},  
  booktitle = {ECCV},  
  year      = {2018}  
}

The inference algorithm and relevant networks are located in the Learning Rigidity repository:

git clone https://github.com/NVlabs/learningrigidity

Contact:

Zhaoyang Lv: [email protected]

Contents

Notes:

  • We currently only support loading BundleFusion RGB-D sequences and meshes. Please be aware that loading different sequences and meshes may require changes to the data loading protocol. Feel free to contribute.

  • All code has been tested by Zhaoyang on Linux machines and servers. There is no guarantee it will run on other systems, and I have no devices to reproduce system-specific issues.

  • Although the rendering code runs automatically in batches, it can still be slow on a single machine. Please consider using a server with multiple CPU cores.

  • There have been some small changes since the paper submission to make the rendering process easier to run: the rendering output is now in the multichannel OpenEXR format, and we generate two-pass flow rather than the single pass in the original data. Please let me know if it produces unexpected results.

  • There is currently no strict roadmap for extending this toolkit, although I may add support for different foreground objects or more background meshes if a research need arises.
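The two-pass (forward and backward) flow mentioned above is commonly used for occlusion and consistency checks. Below is a minimal NumPy sketch of such a check; the function names and the consistency threshold are our own illustration, not part of the toolkit:

```python
import numpy as np

def warp_backward_flow(flow_fwd, flow_bwd):
    """Sample the backward flow at positions displaced by the forward flow
    (nearest-neighbour sampling, for simplicity)."""
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    return flow_bwd[yt, xt]

def consistency_mask(flow_fwd, flow_bwd, thresh=1.0):
    """True where forward and (warped) backward flow roughly cancel out."""
    cycle = flow_fwd + warp_backward_flow(flow_fwd, flow_bwd)
    return np.linalg.norm(cycle, axis=-1) < thresh

# A constant rightward flow of 2 px and its exact inverse are consistent everywhere.
fwd = np.zeros((4, 4, 2)); fwd[..., 0] = 2.0
bwd = -fwd
mask = consistency_mask(fwd, bwd)
```

Pixels where the check fails are typically occluded in one of the two frames, which is exactly the information a second flow pass adds over a single one.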

Dependencies

Install Blender

We need a Blender build with Open Shading Language (OSL) support for rendering. Do not use the default Ubuntu Blender (2.76), which does not fully support OSL. Download Blender (we tested 2.79b) and set the Blender path:

BLENDER_PYTHON=~/develop/blender-2.79b/2.79/python
alias blender_python=$BLENDER_PYTHON/bin/python3.5m

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
$BLENDER_PYTHON/bin/python3.5m get-pip.py

alias blender_pip=$BLENDER_PYTHON/bin/pip3
# install the python dependencies for blender python  
blender_pip install -r setup/blender_requirements.txt

Install system dependencies

sudo apt-get install openexr
sudo apt-get install libopenexr-dev

Set the python environment

We provide a conda environment file with all the dependencies:

conda env create -f setup/conda_environment.yml

Prepare the 3D static scene datasets

Our dataset creation strategy is not limited to any particular 3D reconstruction dataset. In the paper, we use the scenes reconstructed by the BundleFusion project as an example.

Create a symbolic link for all the target files in the data folder named RefRESH:

mkdir data
ln -s $BUNDLE_FUSION data/RefRESH

Download BundleFusion

Download all the reconstructed scenes and source files from their website:

# Change the destination in the script file if you want to use a different location.
sh setup/download_BundleFusion.sh

Run the BundleFusion utility script to generate the BundleFusion pickle files:

python miscs/process_bundlefusion.py
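The resulting pickle files can then be loaded directly in Python. A minimal sketch is below; the keys inside the pickle are an illustrative assumption on our part, so inspect miscs/process_bundlefusion.py for the actual layout:

```python
import os
import pickle
import tempfile

def load_sequence(pkl_path):
    """Load a preprocessed sequence pickle. The dict layout used below is an
    illustrative assumption, not the toolkit's documented format."""
    with open(pkl_path, 'rb') as f:
        return pickle.load(f)

# Round-trip check with a dummy sequence in the assumed layout.
dummy = {'color': ['frame-000000.color.jpg'],
         'depth': ['frame-000000.depth.png'],
         'pose':  [[1.0, 0.0, 0.0, 0.0]]}
path = os.path.join(tempfile.gettempdir(), 'dummy_sequence.pkl')
with open(path, 'wb') as f:
    pickle.dump(dummy, f)

seq = load_sequence(path)
```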

Similarly, we generate pickle files for all other relevant data so that they can be accessed more easily across different projects.

Prepare your synthetic humans

SMPL data

After completing the steps below, you should see all the items listed in smpl_data/README.md.

Download SMPL for MAYA

You need to download SMPL for MAYA from the official SMPL website in order to run the synthetic data generation code. Once you agree to the SMPL license terms and have download access, you will have the following two files:

basicModel_f_lbs_10_207_0_v1.0.2.fbx
basicModel_m_lbs_10_207_0_v1.0.2.fbx

Place these two files in the smpl_data folder.

Download SMPL textures and other relevant data

With the same credentials as with the SURREAL dataset and within the smpl_data folder, you can download the remaining necessary SMPL data. All the downloaded files should be placed within the same directory:

cd smpl_data
./download_smpl_data.sh /path/to/smpl_data yourusername yourpassword

For more detailed instructions specific to this dataset, please refer to smpl_data/README.md.

Ready to run

To create your dataset with the Blender toolkit, please refer to ./blender/README.
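Since rendering is CPU-bound, one practical way to use a multi-core server (as suggested in the notes above) is to launch several headless Blender processes in parallel, one per scene. The sketch below is generic scheduling code, not part of the toolkit, and the Blender command line shown in the comment is an assumption:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def render_scene(cmd):
    """Run one headless render command and return its exit code."""
    return subprocess.run(cmd, shell=True).returncode

def render_all(commands, workers=4):
    """Run render commands with up to `workers` concurrent processes."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_scene, commands))

# With real data each command would look roughly like:
#   blender --background --python <render_script.py> -- <scene arguments>
# Here we use trivial placeholder commands to illustrate the scheduling only.
codes = render_all(['true', 'true', 'true'])
```

Because each Blender process is independent, thread-based dispatch is enough; the threads merely wait on the subprocesses, so the GIL is not a bottleneck.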

Download the created RefRESH dataset

The dataset created in the paper for pretraining is available. Please check the dataset readme for several notes about it.

License

MIT License

Copyright (c) 2018 Zhaoyang Lv

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Acknowledgement

Our work could not have been done without precedent research efforts. Part of the human rendering code is refactored from the SURREAL toolkit.
