memo / py-msa-kdenlive

License: GPL-3.0
Python script to load a Kdenlive (OSS NLE video editor) project file, and conform the edit on video or numpy arrays.

Programming Languages

python
shell

Projects that are alternatives of or similar to py-msa-kdenlive

Conditional Animegan
Conditional GAN for Anime face generation.
Stars: ✭ 70 (+180%)
Mutual labels:  generative-adversarial-network, generative-model, generative-art
Sgan
Stacked Generative Adversarial Networks
Stars: ✭ 240 (+860%)
Mutual labels:  generative-adversarial-network, generative-model
Wgan
Tensorflow Implementation of Wasserstein GAN (and Improved version in wgan_v2)
Stars: ✭ 228 (+812%)
Mutual labels:  generative-adversarial-network, generative-model
MMD-GAN
Improving MMD-GAN training with repulsive loss function
Stars: ✭ 82 (+228%)
Mutual labels:  generative-adversarial-network, generative-model
Dragan
A stable algorithm for GAN training
Stars: ✭ 189 (+656%)
Mutual labels:  generative-adversarial-network, generative-model
Neuralnetworks.thought Experiments
Observations and notes to understand the workings of neural network models and other thought experiments using Tensorflow
Stars: ✭ 199 (+696%)
Mutual labels:  generative-adversarial-network, generative-model
eccv16 attr2img
Torch Implementation of ECCV'16 paper: Attribute2Image
Stars: ✭ 93 (+272%)
Mutual labels:  generative-model, generative-art
Gesturegan
[ACM MM 2018 Oral] GestureGAN for Hand Gesture-to-Gesture Translation in the Wild
Stars: ✭ 136 (+444%)
Mutual labels:  generative-adversarial-network, generative-model
pytorch-GAN
My pytorch implementation for GAN
Stars: ✭ 12 (-52%)
Mutual labels:  generative-adversarial-network, generative-model
simplegan
Tensorflow-based framework to ease training of generative models
Stars: ✭ 19 (-24%)
Mutual labels:  generative-adversarial-network, generative-model
pytorch-CycleGAN
Pytorch implementation of CycleGAN.
Stars: ✭ 39 (+56%)
Mutual labels:  generative-adversarial-network, generative-model
Stylegan2 Pytorch
Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
Stars: ✭ 2,656 (+10524%)
Mutual labels:  generative-adversarial-network, generative-model
Conditional Gan
Anime Generation
Stars: ✭ 141 (+464%)
Mutual labels:  generative-adversarial-network, generative-model
Triple Gan
See Triple-GAN-V2 in PyTorch: https://github.com/taufikxu/Triple-GAN
Stars: ✭ 203 (+712%)
Mutual labels:  generative-adversarial-network, generative-model
Semantic image inpainting
Semantic Image Inpainting
Stars: ✭ 140 (+460%)
Mutual labels:  generative-adversarial-network, generative-model
worlds
Building Virtual Reality Worlds using Three.js
Stars: ✭ 23 (-8%)
Mutual labels:  generative-model, generative-art
cinelerra-cve
NLE Video editor
Stars: ✭ 17 (-32%)
Mutual labels:  video-editing, video-editor
Generative Evaluation Prdc
Code base for the precision, recall, density, and coverage metrics for generative models. ICML 2020.
Stars: ✭ 117 (+368%)
Mutual labels:  generative-adversarial-network, generative-model
Cramer Gan
Tensorflow Implementation on "The Cramer Distance as a Solution to Biased Wasserstein Gradients" (https://arxiv.org/pdf/1705.10743.pdf)
Stars: ✭ 123 (+392%)
Mutual labels:  generative-adversarial-network, generative-model
coursera-gan-specialization
Programming assignments and quizzes from all courses within the GANs specialization offered by deeplearning.ai
Stars: ✭ 277 (+1008%)
Mutual labels:  generative-adversarial-network, generative-model

py-msa-kdenlive

Python script to load a Kdenlive (OSS NLE video editor) project file, and conform the edit on video or numpy arrays.

I used this to create www.deepmeditations.ai (editing video snippets exported from a Generative Adversarial Network, and conforming that edit on numpy arrays of z-sequences).

More information and motivation at https://medium.com/@memoakten/deep-meditations-meaningful-exploration-of-ones-inner-self-576aab2f3894

Paper: https://nips2018creativity.github.io/doc/Deep_Meditations.pdf
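Conforming an edit means applying the same cuts made in the timeline to another sequence. For a z-sequence stored as a numpy array, this boils down to slicing and concatenating frame ranges. A minimal sketch of the idea (function and variable names here are illustrative, not this project's API):

```python
import numpy as np

def conform(z, cuts):
    # cuts: hypothetical edit decision list of (in_frame, out_frame)
    # ranges taken from the timeline, applied to the z-sequence.
    return np.concatenate([z[a:b] for a, b in cuts], axis=0)

z = np.random.randn(100, 512)              # e.g. 100 frames of 512-dim latents
edited = conform(z, [(10, 20), (50, 55)])  # two cuts from the timeline
# edited.shape == (15, 512)
```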

Installation

Clone or download the repo and install dependencies with pip install -r requirements.txt. If anything is missing (very possible; I extracted this from a much larger set of packages I've been developing and working with), please file an issue. I've only tested this with Python 2.7 on Ubuntu, but it should work on any OS, and with Python 3.x too.

Usage

You can run the Python script run.py with the following command-line arguments:

-k, --kdenlive_prj_path # path to kdenlive project
-n, --track_name # name of track in kdenlive project to use
-i, --input_path # path to input numpy array (e.g. containing z-sequence) or video file
-g, --groundtruth_path # [OPTIONAL] path to ground truth edited array or video file (for checking functionality)
-o, --output_path # path to desired output numpy array containing conformed sequence
-v, --verbose # if 1, dumps entire edit to console (comparing to ground truth if available)

e.g.

python run.py \
    --kdenlive_prj_path "./testdata/test.kdenlive" \
    --track_name "Video 1" \
    --input_path "./testdata/z_orig.npy" \
    --groundtruth_path "./testdata/z_edited.npy" \
    --output_path "z_out.npy" \
    --verbose 0
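For reference, the flag set above maps naturally onto argparse. A sketch of how such a parser could be declared (the actual run.py may differ):

```python
import argparse

def build_parser():
    # Mirrors the documented command-line flags of run.py.
    p = argparse.ArgumentParser(
        description="Conform a Kdenlive edit onto a numpy array or video")
    p.add_argument("-k", "--kdenlive_prj_path", required=True,
                   help="path to kdenlive project")
    p.add_argument("-n", "--track_name", required=True,
                   help="name of track in kdenlive project to use")
    p.add_argument("-i", "--input_path", required=True,
                   help="path to input numpy array or video file")
    p.add_argument("-g", "--groundtruth_path", default=None,
                   help="optional ground truth edited array or video file")
    p.add_argument("-o", "--output_path", required=True,
                   help="path to desired output numpy array")
    p.add_argument("-v", "--verbose", type=int, default=0,
                   help="if 1, dumps entire edit to console")
    return p
```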

You can look at the contents of:

  • test_npy.sh and test_video.sh for examples of how to use the script,
  • run.py to see how to use the Python API,
  • ./msa/kdenlive/kdenlive.py for the main source and the full API.
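Kdenlive project files are MLT XML, so the edit can be inspected with a standard XML parser. A minimal, hypothetical sketch of reading clip entries from a playlist (element names simplified; the real parsing lives in ./msa/kdenlive/kdenlive.py and handles much more):

```python
import xml.etree.ElementTree as ET

# Toy stand-in for a Kdenlive/MLT project file.
SAMPLE = """<mlt>
  <playlist id="playlist0">
    <entry producer="clip1" in="10" out="19"/>
    <entry producer="clip2" in="50" out="54"/>
  </playlist>
</mlt>"""

def read_cuts(xml_text, playlist_id):
    """Return (producer, in_frame, out_frame) for each clip on a playlist."""
    root = ET.fromstring(xml_text)
    for pl in root.iter("playlist"):
        if pl.get("id") == playlist_id:
            return [(e.get("producer"), int(e.get("in")), int(e.get("out")))
                    for e in pl.findall("entry")]
    return []
```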

Citation

Paper to be presented at the 2nd Workshop on Machine Learning for Creativity and Design at the 32nd Conference on Neural Information Processing Systems (NeurIPS) 2018. If you find this useful, please cite the paper:

@article{deepmeditations2018,
  title={Deep Meditations: Controlled navigation of latent space},
  author={Akten, Memo and Fiebrink, Rebecca and Grierson, Mick},
  journal={NeurIPS, Workshop on Machine Learning for Creativity and Design},
  year={2018}
}