
reiinakano / Arbitrary Image Stylization Tfjs

License: Apache-2.0
Arbitrary style transfer using TensorFlow.js



Arbitrary style transfer in TensorFlow.js

This repository contains an implementation of arbitrary style transfer running fully inside the browser using TensorFlow.js.

Demo website: https://reiinakano.github.io/arbitrary-image-stylization-tfjs

Blog post with more details: https://magenta.tensorflow.org/blog/2018/12/20/style-transfer-js/

Stylize an image

[demo animation: stylizing an image]

Combine styles

[demo animation: combining styles]

FAQ

What is this?

This is an implementation of an arbitrary style transfer algorithm running purely in the browser using TensorFlow.js. As with all neural style transfer algorithms, a neural network attempts to "draw" one picture, the Content (usually a photograph), in the style of another, the Style (usually a painting).

Although other browser implementations of style transfer exist, they are normally limited to a pre-selected handful of styles, because a separate neural network must be trained for each style image.

Arbitrary style transfer works around this limitation by using a separate style network that learns to break down any image into a 100-dimensional vector representing its style. This style vector is then fed into another network, the transformer network, along with the content image, to produce the final stylized image.
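The two-network decomposition can be sketched in plain JavaScript. The `styleNetwork` and `transformerNetwork` stubs below are hypothetical stand-ins for the real TensorFlow.js models (which would be loaded with `tf.loadGraphModel` and run with `predict`); they only illustrate the data flow through the 100-dimensional style bottleneck.

```javascript
// Hypothetical stand-in: the style network maps any image to a
// 100-dimensional style vector. A real network computes this from pixels.
function styleNetwork(styleImage) {
  const STYLE_DIM = 100;
  return new Array(STYLE_DIM).fill(0); // placeholder values
}

// Hypothetical stand-in: the transformer network consumes the content
// image plus the style vector and produces a stylized image of the
// same shape (here, an identity placeholder).
function transformerNetwork(contentImage, styleVector) {
  return contentImage.map(row => row.slice());
}

// Data flow: content image + style vector -> stylized output.
const contentImage = [[0, 0], [0, 0]]; // tiny dummy "image"
const styleImage = [[1, 1], [1, 1]];
const styleVector = styleNetwork(styleImage);
const stylized = transformerNetwork(contentImage, styleVector);
console.log(styleVector.length); // 100
```

The key point the sketch captures is that the style image only influences the output through that fixed-size vector, which is what makes combining and interpolating styles possible later on.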

I have written a blog post explaining this project in more detail.

Is my data safe? Can you see my pictures?

Your data and pictures here never leave your computer! In fact, this is one of the main advantages of running neural networks in your browser. Instead of sending us your data, we send you both the model and the code to run the model. These are then run by your browser.

What are all these different models?

The original paper uses an Inception-v3 model as the style network, which takes up ~36.3MB when ported to the browser as a FrozenModel.

In order to make this model smaller, a MobileNet-v2 was used to distill the knowledge from the pretrained Inception-v3 style network. This resulted in a size reduction of just under 4x, from ~36.3MB to ~9.6MB, at the expense of some quality.

For the transformer network, the original paper uses a network built from plain convolution layers. When ported to the browser, this model takes up 7.9MB and is responsible for the majority of the calculations during stylization.

In order to make the transformer model more efficient, most of the plain convolution layers were replaced with depthwise separable convolutions. This reduced the model size to 2.4MB, while drastically improving the speed of stylization.
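The size saving is easy to see from the parameter counts. A minimal sketch in plain JavaScript, counting weights only and ignoring biases:

```javascript
// A standard convolution layer with a k x k kernel, cin input channels,
// and cout output channels has k*k*cin*cout weights.
function standardConvParams(k, cin, cout) {
  return k * k * cin * cout;
}

// A depthwise separable convolution splits this into a k x k depthwise
// filter per input channel plus a 1x1 pointwise convolution:
// k*k*cin + cin*cout weights.
function separableConvParams(k, cin, cout) {
  return k * k * cin + cin * cout;
}

// Example: a 3x3 convolution with 64 input and 64 output channels.
const standard = standardConvParams(3, 64, 64);   // 36864 weights
const separable = separableConvParams(3, 64, 64); // 4672 weights
console.log((standard / separable).toFixed(1));   // roughly 7.9x fewer
```

The exact layer shapes in the transformer network differ, but this ratio is why swapping in separable convolutions shrinks the model from 7.9MB to 2.4MB while also cutting the arithmetic per pixel.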

This demo lets you use any combination of the models, defaulting to the MobileNet-v2 style network and the separable convolution transformer network.

How big are the models I'm downloading?

The distilled style network is ~9.6MB, while the separable convolution transformer network is ~2.4MB, for a total of ~12MB. Since these models work for any style, you only have to download them once!

How does style combination work?

Since each style can be mapped to a 100-dimensional style vector by the style network, we simply take a weighted average of the two to get a new style vector for the transformer network.

This is also how we are able to control the strength of stylization. We take a weighted average of the style vectors of both content and style images and use it as input to the transformer network.
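Both style combination and stylization strength reduce to the same weighted-average operation, which can be sketched in a few lines of plain JavaScript (the function name is my own, not from the repository):

```javascript
// Blend two style vectors with weight w in [0, 1]:
// w = 0 returns vectorA unchanged, w = 1 returns vectorB.
function blendStyleVectors(vectorA, vectorB, w) {
  if (vectorA.length !== vectorB.length) {
    throw new Error('Style vectors must have the same length');
  }
  return vectorA.map((a, i) => (1 - w) * a + w * vectorB[i]);
}

// Combining two styles half-and-half
// (vectors are 100-dimensional in practice; 3-dimensional for brevity):
const styleA = [1, 0, 2];
const styleB = [0, 2, 2];
console.log(blendStyleVectors(styleA, styleB, 0.5)); // [0.5, 1, 2]
```

For stylization strength, the same function would blend the content image's own style vector toward the style image's vector, with a higher weight on the style side producing a stronger stylization.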

Running locally for development

This project uses Yarn for dependencies.

To run it locally, you must install Yarn and run the following command at the repository's root to get all the dependencies.

yarn run prep

Then, you can run

yarn run start

You can then browse to http://localhost:9966 to view the application.

Credits

This demo could not have been done without the following:

As a final note, I'd love to hear from people interested in making a suite of tools for artistically manipulating images, kind of like Magenta Studio but for images. Please reach out if you're planning to build/are building one out!
