thinkingmachines / christmAIs

License: GPL-2.0
Text to abstract art generation for the holidays!

Programming Languages

  • Python
  • Shell
  • Makefile
  • Dockerfile

Projects that are alternatives to or similar to christmAIs

continuous-fusion
(ROS) Sensor fusion algorithm for camera+lidar.
Stars: ✭ 26 (-71.11%)
Mutual labels:  perception
robosherlock
www.robosherlock.org
Stars: ✭ 23 (-74.44%)
Mutual labels:  perception
Mediapipe
Cross-platform, customizable ML solutions for live and streaming media.
Stars: ✭ 15,338 (+16942.22%)
Mutual labels:  perception
MotionNet
CVPR 2020, "MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird's Eye View Maps"
Stars: ✭ 141 (+56.67%)
Mutual labels:  perception
VPGNet for lane
Vanishing Point Guided Network for lane detection, with post processing
Stars: ✭ 33 (-63.33%)
Mutual labels:  perception
jpp
Joint Perception and Planning For Efficient Obstacle Avoidance Using Stereo Vision
Stars: ✭ 42 (-53.33%)
Mutual labels:  perception
panther
Perception-Aware Trajectory Planner in Dynamic Environments
Stars: ✭ 115 (+27.78%)
Mutual labels:  perception
pihut-xmas-asyncio
Demonstration driving The Pi Hut Raspberry Pi 3D Xmas tree using Python Asyncio
Stars: ✭ 15 (-83.33%)
Mutual labels:  xmas
GapFlyt
GapFlyt: Active Vision Based Minimalist Structure-less Gap Detection For Quadrotor Flight
Stars: ✭ 30 (-66.67%)
Mutual labels:  perception
LogGabor
A python implementation for a LogGabor filtering and pyramid representation
Stars: ✭ 32 (-64.44%)
Mutual labels:  perception
point-cloud-clusters
A catkin workspace in ROS which uses DBSCAN to identify which points in a point cloud belong to the same object.
Stars: ✭ 43 (-52.22%)
Mutual labels:  perception
Robotics-Object-Pose-Estimation
A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.
Stars: ✭ 153 (+70%)
Mutual labels:  perception
AIODrive
Official Python/PyTorch Implementation for "All-In-One Drive: A Large-Scale Comprehensive Perception Dataset with High-Density Long-Range Point Clouds"
Stars: ✭ 32 (-64.44%)
Mutual labels:  perception
the-Cooper-Mapper
An open source autonomous driving research platform for Active SLAM & Multisensor Data Fusion
Stars: ✭ 38 (-57.78%)
Mutual labels:  perception
OpenMaterial
3D model exchange format with physical material properties for virtual development, test and validation of automated driving.
Stars: ✭ 23 (-74.44%)
Mutual labels:  perception
Perception-of-Autonomous-mobile-robot
Perception for an autonomous mobile robot using ROS and an RS-LiDAR-16: SLAM plus object detection with a YOLOv5-based DNN
Stars: ✭ 40 (-55.56%)
Mutual labels:  perception
FARGonautica
No description or website provided.
Stars: ✭ 85 (-5.56%)
Mutual labels:  perception
Robotics-Resources
List of commonly used robotics libraries and packages
Stars: ✭ 71 (-21.11%)
Mutual labels:  perception
form2fit
[ICRA 2020] Train generalizable policies for kit assembly with self-supervised dense correspondence learning.
Stars: ✭ 78 (-13.33%)
Mutual labels:  perception
isaac_ros_visual_odometry
Visual odometry package based on hardware-accelerated NVIDIA Elbrus library with world class quality and performance.
Stars: ✭ 101 (+12.22%)
Mutual labels:  perception

christmAIs

(Badges: Cloud Build status · Documentation status · License: GPL v2 · Python 3.6+)

christmAIs ("krees-ma-ees") is text-to-abstract art generation for the holidays!

This work converts any input string into abstract art by:

  • finding the most similar Quick, Draw! category using GloVe word embeddings,
  • drawing the best-matching category using Sketch-RNN, and
  • applying neural style transfer to the resulting sketch.

The result is images like these:

(Four sample output images)
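
To make the first step concrete, here is a minimal, illustrative sketch of string-to-category matching with GloVe embeddings via gensim (one of the listed dependencies). This is not the package's actual implementation: the category list below is a stand-in for categories.txt, and the model name is just a small public GloVe release.

# Illustrative only: match an input string to its nearest Quick, Draw!
# category by cosine similarity of averaged GloVe vectors.
import numpy as np
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-50")  # small public GloVe model

# Stand-in for the real categories.txt contents
categories = ["fish", "whale", "tree", "snowman", "bicycle"]

def embed(text):
    """Average the GloVe vectors of all in-vocabulary tokens."""
    vecs = [glove[w] for w in text.lower().split() if w in glove]
    return np.mean(vecs, axis=0)  # assumes at least one token is in vocab

def nearest_category(text):
    query = embed(text)
    scores = [
        np.dot(query, embed(c)) / (np.linalg.norm(query) * np.linalg.norm(embed(c)))
        for c in categories
    ]
    return categories[int(np.argmax(scores))]

print(nearest_category("Thinking Machines"))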

Setup and Installation

Please see requirements.txt and requirements-dev.txt for all Python-related dependencies. Notable dependencies include:

  • numpy==1.14.2
  • scikit_learn==0.20.0
  • Pillow==5.3.0
  • matplotlib==2.1.0
  • tensorflow
  • gensim
  • magenta

The build steps (used for our automated cloud builds) can be found in the Dockerfile. For local development, we recommend setting up a virtual environment. To do so, run the following commands:

git clone git@github.com:thinkingmachines/christmAIs.git
cd christmAIs
make venv

Automated Install

We created an automated install script for a one-click setup of your workspace. To run it, execute the following commands:

source venv/bin/activate  # Highly recommended
./install-christmais.sh

This will first install magenta and its dependencies, download file dependencies (categories.txt, model.ckpt, and chromedriver), then clone and install this package.

Manual Install

For manual installation, please follow the instructions below:

Installing magenta

The style transfer capabilities depend on the magenta package. As of now, magenta is only supported on Linux and macOS. To install it, either use the automated install above or run the following steps:

# Install OS dependencies
apt-get update && \
apt-get install -y build-essential libasound2-dev libjack-dev

# Install magenta
venv/bin/pip install magenta
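
As an optional sanity check (assuming the virtual environment is active), the snippet below confirms that magenta and its TensorFlow backend import cleanly:

# Optional check that the heavy dependencies installed correctly.
import tensorflow as tf
import magenta

print("tensorflow version:", tf.__version__)
print("magenta imported from:", magenta.__file__)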

Installing everything else

You can then install the remaining dependencies in requirements.txt. Assuming that you have created a virtual environment via make venv, we recommend simply running the following command:

make build # or `make dev`

This will also download (via wget) the following files:

  • categories.txt (683 B): contains the list of Quick, Draw! categories that input strings are compared against (will be saved at ./categories/categories.txt).
  • arbitrary_style_transfer.tar.gz (606.20 MB): contains the model checkpoint for style transfer (will be saved at ./ckpt/model.ckpt).
  • chromedriver (5.09 MB): contains the web driver for accessing the HTML output for Sketch-RNN (will be saved at ./webdriver/chromedriver).
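
The snippet below is an illustrative check that the assets listed above landed in their default locations; note that TensorFlow may split model.ckpt into .index/.data-* files, so adjust the paths if needed.

# Illustrative check that downloaded assets are where the CLI expects them.
from pathlib import Path

expected = [
    Path("./categories/categories.txt"),
    Path("./ckpt/model.ckpt"),       # may be model.ckpt.index etc. in practice
    Path("./webdriver/chromedriver"),
]
for path in expected:
    print(path, "->", "ok" if path.exists() else "MISSING")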

Generating the documentation

Ensure that you have all dev dependencies installed:

git clone git@github.com:thinkingmachines/christmAIs.git
make venv
make dev

Then, to build the actual docs:

cd christmAIs/docs/
make html

This will generate an index.html file that you can view in your browser.

Usage

We provide a script, christmais_time.py, to easily generate your stylized Quick, Draw! images. To use it, run the following command:

python -m christmais.tasks.christmais_time     \
    --input=<Input string to draw from>        \
    --style=<Path to style image>              \
    --output=<Unique name of output file>      \
    --model-path=<Path to model.ckpt>          \
    --categories-path=<Path to categories.txt> \
    --webdriver-path=<Path to webdriver>

If you followed the setup instructions above, the default values for the paths should suffice; you only need to supply --input, --style, and --output.

As an example, say we want to use the string Thinking Machines as our basis, drawn in the style of Ang Kiukok's Fishermen (ang_kiukok.png). The command would look like this:

python -m christmais.tasks.christmais_time \
    --input="Thinking Machines"            \
    --style=./path/to/ang_kiukok.png       \
    --output=tmds-output

This will generate the output image in ./artifacts/.

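If you want several stylizations in one go, a small driver like the sketch below can loop the same CLI over multiple style images (illustrative only; the style paths are hypothetical):

# Illustrative batch driver around the CLI shown above.
import subprocess

styles = ["./styles/ang_kiukok.png", "./styles/starry_night.png"]  # hypothetical paths
for i, style in enumerate(styles):
    subprocess.run(
        [
            "python", "-m", "christmais.tasks.christmais_time",
            "--input=Thinking Machines",
            f"--style={style}",
            f"--output=tmds-output-{i}",
        ],
        check=True,  # raise if the command fails
    )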

References

  • Pennington, Jeffrey, Richard Socher, and Christopher D. Manning (2014). "GloVe: Global Vectors for Word Representation". In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543.
  • Ha, David and Douglas Eck (2017). "A Neural Representation of Sketch Drawings". In: arXiv:1704.03477.
  • Ghiasi, Golnaz et al. (2017). "Exploring the structure of a real-time, arbitrary neural artistic stylization network". In: arXiv:1705.06830.
  • Magenta demonstration (sketch-rnn.js): https://github.com/hardmaru/magenta-demos/tree/master/sketch-rnn-js