
christopher-beckham / Gan Heightmaps

Procedural terrain generation for video games has traditionally been done with smartly designed but handcrafted algorithms that generate heightmaps. We propose a first step toward learning to synthesise these heightmaps, using recent advances in deep generative modelling together with openly available satellite imagery from NASA.

Projects that are alternatives to or similar to Gan Heightmaps

Src
Sources for some videos
Stars: ✭ 40 (+0%)
Mutual labels:  jupyter-notebook
Eci2019 Drl
Course on Deep Reinforcement Learning at ECI 2019
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook
Pythonforjournalists
Notebooks and files for the Python for Journalists course on Datajournalism.com
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook
Oreilly Pytorch
🔥 Introductory PyTorch tutorials with O'Reilly Media.
Stars: ✭ 40 (+0%)
Mutual labels:  jupyter-notebook
Sberbank Covid19 Forecast 2020
Covid19-forecast
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook
Dataset Election 62 Candidates
Thailand Election'62 Candidate Information Browsing Website
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook
Data utilities
Utilities for processing the xView 2018 dataset (i.e., xview1)
Stars: ✭ 40 (+0%)
Mutual labels:  jupyter-notebook
Intelligent Control Techniques For Robots
An analysis of different intelligent control techniques (evolutionary algorithms, reinforcement learning, neural networks) applied to robotic manipulators.
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook
Tensorflow2 tutorials
TensorFlow 2.0 study notes, based mainly on the official documentation, with many personal usage tips and insights added. All notes are also published on platforms such as Cnblogs (博客园). I hope they are helpful to you.
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook
Deep Learning Practical Introduction With Keras
DEEP-LEARNING-practical-introduction-with-Keras
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook
Torch Ddcnn
From Pixels to Torques: Policy Learning using Deep Dynamical Convolutional Neural Networks (DDCNN)
Stars: ✭ 40 (+0%)
Mutual labels:  jupyter-notebook
The Hello World Of Machine Learning
Learn to build a basic machine learning model from scratch with this repo and tutorial series.
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook
Ibmi Oss Examples
A set of examples for using open source tools on IBM i
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook
Deep homography estimation
Compute homographies with deep networks instead of feature matching and RANSAC.
Stars: ✭ 40 (+0%)
Mutual labels:  jupyter-notebook
Towardsdeepphenotyping
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook
Cs231n 2017
My own solutions for Stanford CS231n (2017) assignments
Stars: ✭ 40 (+0%)
Mutual labels:  jupyter-notebook
Image bbox slicer
This easy-to-use library splits images and their bounding box annotations into tiles, either of specific sizes or into any arbitrary number of equal parts. It can also resize them, either to specific sizes or by a resizing/scaling factor.
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook
Atarigrandchallenge
Code for 'The Grand Atari Challenge dataset' paper
Stars: ✭ 40 (+0%)
Mutual labels:  jupyter-notebook
Imageaugmentationtypes
Few of the essential image augmentation techniques.
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook
Rppg Pos
Remote photoplethysmography (rPPG) measures a person's heart rate without any contact, from video alone. This is an implementation of the rPPG method called Plane-Orthogonal-to-Skin (POS), as described in the IEEE paper "Algorithmic Principles of Remote PPG" by W. Wang, A. C. den Brinker, S. Stuijk and G. de Haan.
Stars: ✭ 41 (+2.5%)
Mutual labels:  jupyter-notebook

A step towards procedural terrain generation with GANs

Authors: Christopher Beckham, Christopher Pal

Procedural generation in video games is the algorithmic generation of content intended to increase replay value by interleaving the gameplay with elements of unpredictability. This is in contrast to the more traditional 'handcrafty' generation of content, which is generally of higher quality but comes at the added expense of labour. A prominent game whose premise is almost entirely based on procedural terrain generation is Minecraft, in which the player explores a vast open world whose terrain is built entirely from voxels ('volumetric pixels'), allowing the player to manipulate the terrain (e.g. dig tunnels, build walls) and explore interesting landscapes (e.g. beaches, jungles, caves).

So far, terrains have been procedurally generated through a host of algorithms designed to mimic real-life terrain. Prominent examples include Perlin noise and diamond-square, in which a greyscale image (a heightmap) is generated from a noise source and, when rendered in 3D as a mesh, produces a terrain. While these methods are quite fast, the terrains they generate are quite simple in nature. Software such as L3DT employs more sophisticated algorithms that let the user control the kind of terrain they desire (e.g. mountains, lakes, valleys), and while these can produce very impressive terrains, it still seems an exciting endeavour to leverage deep generative models such as the GAN to learn to generate terrain directly from raw data, without the need to write generation algorithms by hand.
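For intuition, here is a minimal numpy sketch of the diamond-square algorithm (written for this write-up, not taken from the repository). It grows a (2^n + 1)-sided greyscale heightmap from four random corner values, halving the step size and damping the noise at each pass.

import numpy as np

def diamond_square(n, roughness=0.7, seed=0):
    """Generate a (2**n + 1) x (2**n + 1) heightmap via diamond-square."""
    rng = np.random.RandomState(seed)
    size = 2 ** n + 1
    hm = np.zeros((size, size))
    # Seed the four corners with random heights.
    hm[0, 0], hm[0, -1], hm[-1, 0], hm[-1, -1] = rng.uniform(-1, 1, 4)
    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: centre of each square = mean of its 4 corners + noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                corners = (hm[y - half, x - half] + hm[y - half, x + half] +
                           hm[y + half, x - half] + hm[y + half, x + half])
                hm[y, x] = corners / 4.0 + rng.uniform(-scale, scale)
        # Square step: centre of each diamond = mean of its neighbours + noise.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                total, count = 0.0, 0
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < size and 0 <= xx < size:
                        total, count = total + hm[yy, xx], count + 1
                hm[y, x] = total / count + rng.uniform(-scale, scale)
        step, scale = half, scale * roughness
    # Normalise to [0, 1] so the result can be saved as a greyscale image.
    return (hm - hm.min()) / (hm.max() - hm.min())

heightmap = diamond_square(9)  # a 513 x 513 fractal heightmap

Rendering the returned array in 3D as a mesh gives exactly the kind of simple fractal terrain described above.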

Datasets

In this work, we leverage extremely high-resolution terrain and heightmap data provided by the NASA Visible Earth project, in conjunction with generative adversarial networks (GANs), to create a two-stage pipeline in which heightmaps can be randomly generated along with a texture map that is inferred from the heightmap. Concretely, we synthesise 512px height and texture maps using random 512px crops from the original NASA images (of size 21600px x 10800px), as seen in the images below. (Note: per-pixel resolution is 25km, so a 512px crop corresponds to ~13k square km.)
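The preprocessing pipeline itself is not shown here, but a rough sketch of how such paired crops could be extracted into an .h5 file follows; the filenames and dataset keys are hypothetical placeholders, not the repository's actual ones.

import numpy as np
import h5py
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # the source images are 21600px x 10800px

# Hypothetical filenames: substitute the actual NASA Visible Earth downloads.
heightmap = np.asarray(Image.open("nasa_heightmap.png").convert("L"))
texture = np.asarray(Image.open("nasa_texture.png").convert("RGB"))

rng = np.random.RandomState(0)
crop, n_crops = 512, 1000
hs, ts = [], []
for _ in range(n_crops):
    # Sample the same window from both images so the pair stays aligned.
    y = rng.randint(0, heightmap.shape[0] - crop)
    x = rng.randint(0, heightmap.shape[1] - crop)
    hs.append(heightmap[y:y + crop, x:x + crop])
    ts.append(texture[y:y + crop, x:x + crop])

# Dataset key names here are illustrative only.
with h5py.File("crops.h5", "w") as f:
    f.create_dataset("heightmaps", data=np.stack(hs))
    f.create_dataset("textures", data=np.stack(ts))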

Results

We show some heightmaps and texture maps generated by the model. Note that we refer to the heightmap generator as the 'DCGAN' (since it essentially follows the DCGAN paper) and the texture generator as 'pix2pix' (since it is based on the conditional image-to-image translation GAN paper).

DCGAN

Here are some heightmaps generated by the DCGAN part of the network at roughly 590 epochs.

Click here to see a video showing linear interpolations between 100 different randomly generated heightmaps (and their corresponding textures).
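Such a video is produced by walking between points in the generator's latent space. A generic numpy sketch of the linear interpolation (G here stands in for the trained DCGAN generator; the call signature in this codebase may differ):

import numpy as np

def interpolate_z(z_a, z_b, n_steps=8):
    """Linearly interpolate between two latent vectors z_a and z_b."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    return np.stack([(1 - a) * z_a + a * z_b for a in alphas])

rng = np.random.RandomState(0)
z_a, z_b = rng.normal(size=(2, 100))   # 100-d Gaussian prior, as in DCGAN
# frames = G(interpolate_z(z_a, z_b))  # decode each z and render to video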

pix2pix

Texture maps generated from ground-truth heightmaps at ~600 epochs (generating texture maps from generated heightmaps performed similarly):

(TODO: texture outputs can be patchy, probably because the discriminator is actually a PatchGAN. I will need to run some experiments using a 1x1 discriminator instead.)
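For context on that TODO: a PatchGAN discriminator outputs a grid of real/fake scores, one per local receptive field, so the generator only ever receives patch-local feedback. The following is an illustrative tf.keras sketch of the idea, not the project's own Theano/Keras code:

import tensorflow as tf
from tensorflow.keras import layers

def patchgan_discriminator(size=512):
    """Each output unit judges one local patch of the input texture."""
    x_in = layers.Input((size, size, 3))
    x = x_in
    for filters in (64, 128, 256):
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    # An N x N grid of per-patch real/fake logits: the patch-local feedback.
    out = layers.Conv2D(1, 4, padding="same")(x)
    return tf.keras.Model(x_in, out)

# One reading of a "1x1 discriminator" is a single global logit instead:
# out = layers.Dense(1)(layers.GlobalAveragePooling2D()(x))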

Here is one of the generated heightmaps + corresponding texture map rendered in Unity:

Running the code

First we need to download the data; the .h5 file can be found here. A quick notebook to visualise the data can be found in notebooks/visualise_data.ipynb.
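For a quick look at the data outside the notebook, something like the following also works; the dataset key is discovered at runtime, since the exact structure of the .h5 file is best checked in notebooks/visualise_data.ipynb.

import h5py
import matplotlib.pyplot as plt

with h5py.File("data.h5", "r") as f:  # substitute your local .h5 path
    keys = list(f.keys())
    print(keys)                       # inspect the available datasets
    sample = f[keys[0]][0]            # first sample of the first dataset

plt.imshow(sample.squeeze(), cmap="gray")
plt.title("Sample from the dataset")
plt.show()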

To run an experiment, you specify one of the experiments defined in experiments.py; I recommend the one called test1_nobn_bilin_both. The experiments.py file is also what you run to launch experiments, and it expects two command-line arguments of the form <experiment name> <mode>. For example, to train test1_nobn_bilin_both we simply do:

python experiments.py test1_nobn_bilin_both train

(NOTE: you will need to modify this file and change the URL to point to where your .h5 file is.)

For all the Theano bells and whistles, I create a bash script like so:

#!/bin/bash
env \
PYTHONUNBUFFERED=1 \
THEANO_FLAGS=mode=FAST_RUN,device=cuda,floatX=float32,nvcc.fastmath=True,dnn.conv.algo_fwd=time_once,dnn.conv.algo_bwd_filter=time_once,dnn.conv.algo_bwd_data=time_once \
  python experiments.py test1_nobn_bilin_both train

While training, various files will be dumped into the results folder (for the aforementioned experiment, this is output/test1_nobn_bilin_both). The files you can expect to see are:

  • results.txt: various metrics in .csv format that you can plot to examine training (see the plotting sketch after this list).
  • gen_dcgan.png, disc_dcgan.png: architecture diagrams for the DCGAN generator and discriminator.
  • gen_p2p.png, disc_p2p.png: architecture diagrams for the P2P generator and discriminator.
  • out_*.png: outputs from the P2P GAN, showing the predicted Y from the ground-truth X.
  • dump_a/*.png: outputs from the DCGAN, in which X's are synthesised from z's drawn from the prior p(z).
  • dump_train/*.a.png, dump_train/*.b.png: high-res ground-truth X's along with their predicted Y's for the training set.
  • dump_valid/*.a.png, dump_valid/*.b.png: high-res ground-truth X's along with their predicted Y's for the validation set.
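As mentioned in the first bullet above, results.txt can be plotted to monitor training. A minimal pandas sketch (this assumes the file has a header row; inspect it first, since the exact columns depend on the experiment):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("output/test1_nobn_bilin_both/results.txt")
print(df.columns)  # the exact metric names depend on the experiment

# Plot every metric against the first column (assumed to be epoch/iteration).
df.plot(x=df.columns[0], subplots=True,
        figsize=(8, 2 * (len(df.columns) - 1)))
plt.tight_layout()
plt.show()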

This code was inspired by, and uses code from, Costa et al.'s vess2ret, a pix2pix implementation used for retinal image synthesis. Some code was also used from the keras-adversarial library.
