
JustusThies / NeuralTexGen

Licence: other
Image-space texture optimization of 3D meshes using PyTorch

Programming Languages

Python, GLSL, Shell

Projects that are alternatives of or similar to NeuralTexGen

MoCoGAN-HD
[ICLR 2021 Spotlight] A Good Image Generator Is What You Need for High-Resolution Video Synthesis
Stars: ✭ 224 (+273.33%)
Mutual labels:  image-generation
Anime2Sketch
A sketch extractor for anime/illustration.
Stars: ✭ 1,623 (+2605%)
Mutual labels:  image-generation
sText2Image
Attribute-Guided Sketch-to-Image Generation
Stars: ✭ 67 (+11.67%)
Mutual labels:  image-generation
soft-intro-vae-pytorch
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders"
Stars: ✭ 170 (+183.33%)
Mutual labels:  image-generation
vqgan-clip-app
Local image generation using VQGAN-CLIP or CLIP guided diffusion
Stars: ✭ 94 (+56.67%)
Mutual labels:  image-generation
texturize
🤖🖌️ Generate photo-realistic textures based on source images. Remix, remake, mashup! Useful if you want to create variations on a theme or elaborate on an existing texture.
Stars: ✭ 495 (+725%)
Mutual labels:  image-generation
Graphite
Open source 2D node-based raster/vector graphics editor (Photoshop + Illustrator + Houdini = Graphite)
Stars: ✭ 223 (+271.67%)
Mutual labels:  image-generation
Rasterizer
CPU forward/deferred rasterizer with depth-buffering, texture mapping, normal mapping and blinn-phong shading implemented in C++
Stars: ✭ 77 (+28.33%)
Mutual labels:  texture-mapping
Scaffold-Map
Robust, efficient and low distortion bijective mapping in 2D and 3D
Stars: ✭ 51 (-15%)
Mutual labels:  texture-mapping
Texturize
A unified framework for example-based texture synthesis, developed alongside my master's thesis.
Stars: ✭ 15 (-75%)
Mutual labels:  texture-mapping
universum-contracts
text-to-image generation gems / libraries incl. moonbirds, cyberpunks, coolcats, shiba inu doge, nouns & more
Stars: ✭ 17 (-71.67%)
Mutual labels:  image-generation
CycleGAN-Models
Models generated by CycleGAN
Stars: ✭ 42 (-30%)
Mutual labels:  image-generation
color-aware-style-transfer
Reference code for the paper CAMS: Color-Aware Multi-Style Transfer.
Stars: ✭ 36 (-40%)
Mutual labels:  texture-mapping
CoCosNet-v2
CoCosNet v2: Full-Resolution Correspondence Learning for Image Translation
Stars: ✭ 312 (+420%)
Mutual labels:  image-generation
image generator 2
An image generator based on Progressive GAN
Stars: ✭ 31 (-48.33%)
Mutual labels:  image-generation
Finegan
FineGAN: Unsupervised Hierarchical Disentanglement for Fine-grained Object Generation and Discovery
Stars: ✭ 240 (+300%)
Mutual labels:  image-generation
multiple-objects-gan
Implementation for "Generating Multiple Objects at Spatially Distinct Locations" (ICLR 2019)
Stars: ✭ 111 (+85%)
Mutual labels:  image-generation
SuperStyleNet
SuperStyleNet: Deep Image Synthesis with Superpixel Based Style Encoder (BMVC 2021)
Stars: ✭ 28 (-53.33%)
Mutual labels:  image-generation
Continuous-Image-Autoencoder
Deep learning image autoencoder that does not depend on image resolution
Stars: ✭ 20 (-66.67%)
Mutual labels:  image-generation
netflix-show-generator
A tool for generating Netflix show images
Stars: ✭ 18 (-70%)
Mutual labels:  image-generation

NeuralTexGen

This code is meant for simple image-space texture optimization. As input, you have to provide UV renderings of the target scene along with the corresponding color observations. Based on these UV maps and the color images, PyTorch's differentiable bilinear sampling can be used to 'train' a color texture under a specific error metric. You can use arbitrary image losses such as L1, L2, or style losses (e.g., VGG content & style loss).
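
The core idea can be sketched in a few lines of PyTorch (a minimal illustration with assumed shapes and placeholder data, not the repository's actual training code):

import torch
import torch.nn.functional as F

# learnable RGB texture, laid out as (1, 3, R, R); R = 1024 is an assumption
texture = torch.zeros(1, 3, 1024, 1024, requires_grad=True)
optimizer = torch.optim.Adam([texture], lr=0.01)

# uv: (1, H, W, 2) rendered UV map in [0, 1]; target: (1, 3, H, W) color image
uv = torch.rand(1, 256, 320, 2)       # placeholder for a loaded UV rendering
target = torch.rand(1, 3, 256, 320)   # placeholder for the color observation
grid = uv * 2.0 - 1.0                 # grid_sample expects coordinates in [-1, 1]

for step in range(200):
    optimizer.zero_grad()
    # differentiable bilinear sampling of the texture at the rendered UVs
    pred = F.grid_sample(texture, grid, mode='bilinear', align_corners=False)
    loss = F.l1_loss(pred, target)    # swap in any image-space loss here
    loss.backward()
    optimizer.step()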

Training Data

3D Reconstruction

You need to have a reconstructed mesh of the target object/scene along with the camera parameters for each image. Some approaches that I used in the past:

Parametrization

As we want to optimize for a texture, we have to define a texture space, including the mapping from the vertices of the mesh to that space (the parametrization). To this end, you can use a trivial per-triangle parametrization (not recommended) or more advanced techniques. You can use MeshLab for trivial parametrizations, or Blender, or the UV parametrization tool from Microsoft (GitHub repo).

Tip: If you have a highly tessellated mesh, you might want to reduce it (e.g., using the edge collapse filter of MeshLab); otherwise, tools like the Microsoft UV Atlas generator will take forever.
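
If you prefer to script the mesh reduction, a sketch using the pymeshlab package could look as follows (an assumption, not part of this repository; the exact filter name depends on the pymeshlab version, and file names and the face budget are placeholders):

import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh('scene.ply')  # hypothetical input mesh
# quadric edge collapse decimation, as in the MeshLab GUI filter
ms.meshing_decimation_quadric_edge_collapse(targetfacenum=100000,
                                            preservenormal=True)
ms.save_current_mesh('scene_decimated.ply')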

Rendering of the UV maps

Use a renderer of your choice to render the per-frame UVs using the mesh and the camera parameters. For example, you can use a headless EGL-based OpenGL renderer on your server (see the 'preprocessing' folder). Caution: do not render with anti-aliasing; this will lead to wrong UVs! Also, never upsample UV renderings! It is recommended to render the UV maps at a higher resolution than the color images to circumvent sampling issues (undersampling of the texture). Have a look at the 'prepare_data.sh' script, which calls an OpenGL renderer. If you have set up the input accordingly, you can call:

bash prepare_data.sh

You might need to make the UV renderer executable first:

chmod +x preprocessing/uv_renderer
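
To make the anti-aliasing caution above concrete: with anti-aliasing enabled, boundary pixels contain averaged UV values that point to unrelated texture locations. A small sketch of deriving a validity mask from a UV rendering, assuming the renderer writes uv = (0, 0) for background pixels:

import torch

def valid_mask(uv):
    """uv: (1, H, W, 2) in [0, 1] -> (1, 1, H, W) float foreground mask."""
    mask = (uv.abs().sum(dim=-1, keepdim=True) > 0).float()
    return mask.permute(0, 3, 1, 2)

# usage: loss = (valid_mask(uv) * (pred - target).abs()).mean()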

Summary

In the end, you should have training pairs, each consisting of a UV map and an original color image that serves as the target image.
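
A possible way to pair the two (a sketch; the directory layout, file formats, and naming are assumptions, not the repository's actual dataloader):

import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class UVColorDataset(Dataset):
    """Pairs per-frame UV maps (assumed .npy, shape (H, W, 2)) with color images."""
    def __init__(self, uv_dir, color_dir):
        self.uv_paths = sorted(os.path.join(uv_dir, f) for f in os.listdir(uv_dir))
        self.color_paths = sorted(os.path.join(color_dir, f) for f in os.listdir(color_dir))

    def __len__(self):
        return len(self.uv_paths)

    def __getitem__(self, i):
        uv = torch.from_numpy(np.load(self.uv_paths[i])).float()
        color = np.asarray(Image.open(self.color_paths[i]), dtype=np.float32) / 255.0
        return uv, torch.from_numpy(color).permute(2, 0, 1)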

Optimization

Define a loss function / loss weights

First of all, you have to choose the loss weights. See 'texture_optimization.sh' and 'options/base_options.py'. Feel free to add new loss functions (see 'models/RGBTextures_model.py').
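
A weighted combination could look like this (a sketch; the weight names are assumptions, not the actual option names in 'options/base_options.py'):

import torch.nn.functional as F

def texture_loss(pred, target, w_l1=1.0, w_l2=0.0):
    """Weighted sum of image-space losses between rendered and target images."""
    loss = 0.0
    if w_l1 > 0:
        loss = loss + w_l1 * F.l1_loss(pred, target)
    if w_l2 > 0:
        loss = loss + w_l2 * F.mse_loss(pred, target)
    # a VGG content/style term could be added analogously
    return loss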

Optimize aka 'Train'

Start a visdom server that visualizes the training curves:

visdom

Start optimization over the entire training data corpus using:

bash texture_optimization.sh

Example

Below you see an example using an L1 loss. It is applied to a Matterport scene with 89 images at a resolution of 320×256 each (which is quite low). Nevertheless, we can reconstruct a reasonable texture. Note that in this example, we did not apply any correction for white balancing / color normalization.

L1 texture optimization

Misc

You can use image enhancement techniques or image super-resolution methods to improve the input images. An easy-to-use implementation has been published by Idealo (GitHub repo). See 'misc/super_res.py' for preprocessing your training data (note that the dataloader resizes the color images to match the UV images; thus, make sure you render the UVs at a higher resolution too).
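
For reference, upscaling a color image with Idealo's ISR package could look roughly like this (a sketch based on the package's documented usage, with a hypothetical file name; see 'misc/super_res.py' for the actual preprocessing):

import numpy as np
from PIL import Image
from ISR.models import RDN

model = RDN(weights='psnr-small')                # pretrained weights shipped with ISR
lr_img = np.array(Image.open('color_0000.png'))  # hypothetical input frame
sr_img = model.predict(lr_img)                   # upscaled image as a numpy array
Image.fromarray(sr_img).save('color_0000_sr.png')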

The 'tmux_smpl.sh' script has nothing to do with the texture optimization but simplifies the tmux session handling ;)

Notes

Note that this type of image-space texture optimization can have sampling issues when the texture that we optimize for is undersampled. We therefore have a regularizer of the texture based on a Laplacian pyramid (i.e., regularizing neighboring texture values to be similar). You can also circumvent the potential sampling issues by rendering the UV maps at higher resolutions (a kind of multi-sampling).
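
The idea of such a regularizer can be sketched as follows (an illustration of the concept, not the repository's exact implementation): penalize the high-frequency residual between pyramid levels so that rarely observed texels stay close to their neighbors.

import torch.nn.functional as F

def laplacian_reg(texture, levels=4):
    """texture: (1, 3, R, R) with R divisible by 2**levels; returns a scalar."""
    reg, current = 0.0, texture
    for _ in range(levels):
        down = F.avg_pool2d(current, kernel_size=2)
        up = F.interpolate(down, scale_factor=2, mode='bilinear', align_corners=False)
        reg = reg + (current - up).abs().mean()  # high-frequency residual
        current = down
    return reg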

Note that texture optimization typically operates in texture space to make sure that all texels of the texture are optimized: you would project the surface point corresponding to each texel into the input images to read out the target colors, which you can then accumulate (by averaging, or by a more advanced 'fusion' step). In this project, you go the other way, from image space to texels: you optimize the texel colors that are sampled by a forward renderer (via the UV maps) such that they match the colors of the images. This is trivial to implement and to extend (e.g., you can easily add different image-space losses).
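
The classic texture-space direction described above can be sketched as a simple splatting/averaging step (nearest-texel accumulation for brevity; shapes are assumptions):

import torch

def splat_colors(uv, color, tex_res):
    """uv: (H, W, 2) in [0, 1]; color: (H, W, 3) -> averaged (R, R, 3) texture."""
    accum = torch.zeros(tex_res * tex_res, 3)
    count = torch.zeros(tex_res * tex_res, 1)
    ij = (uv.clamp(0, 1) * (tex_res - 1)).round().long()   # nearest texel per pixel
    idx = (ij[..., 1] * tex_res + ij[..., 0]).reshape(-1)
    accum.index_add_(0, idx, color.reshape(-1, 3))
    count.index_add_(0, idx, torch.ones(idx.numel(), 1))
    return (accum / count.clamp(min=1)).reshape(tex_res, tex_res, 3)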

TODOs

If you have a nice UV parametrization, you can also think about adding mip-mapping to the framework; you would then provide the UV coordinates and the mip level as input. Given the mip level, you can either filter the texture on the fly or precompute the mip levels. This can also be done differentiably and solves the potential sampling issues.
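
One way the differentiable mip-map look-up could work (a rough sketch; for brevity the mip level is assumed constant per view rather than per pixel):

import torch.nn.functional as F

def sample_mip(pyramid, grid, mip):
    """pyramid: list of (1, 3, R_l, R_l) textures, level 0 = finest;
    grid: (1, H, W, 2) in [-1, 1]; mip: scalar fractional level."""
    lo = min(int(mip), len(pyramid) - 2)
    frac = mip - lo
    a = F.grid_sample(pyramid[lo], grid, mode='bilinear', align_corners=False)
    b = F.grid_sample(pyramid[lo + 1], grid, mode='bilinear', align_corners=False)
    return (1.0 - frac) * a + frac * b  # trilinear blend across adjacent levels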

Acknowledgements

This code is based on the Pix2Pix/CycleGAN framework (GitHub repo).
