Badges: GitHub last commit · GitHub issues · GPL-3.0 license · Code style: black

English | 简体中文 | Español

Welcome to MVIMP 👋

The name MVIMP (Mixed Video and Image Manipulation Program) was inspired by GIMP (GNU Image Manipulation Program), in the hope that it can help more people.

I realize that training a well-performing AI model is only one side of the story; making it easy for others to use is the other. This repository was therefore built to offer out-of-the-box AI abilities for manipulating multimedia. Last but not least, have fun!

Model       Input   Output  Parallel                Colab Link
AnimeGAN    Images  Images  True                    Open In Colab
AnimeGANv2  Images  Images  True                    Open In Colab
DAIN        Video   Video   False                   Open In Colab
DeOldify    Images  Images  True                    Open In Colab
Photo3D     Images  Videos  True (not recommended)  Open In Colab
Waifu2x     Images  Images  True                    Open In Colab

You are welcome to discuss future features in this issue.

AnimeGANv2

Original repository: TachibanaYoshino/AnimeGANv2

AnimeGANv2 is the improved version of AnimeGAN, which converts landscape photos (and, in the future, videos) to anime. The improvements mainly include the following four points:

  1. It solves the problem of high-frequency artifacts in the generated images.
  2. It is easy to train and directly achieves the effects shown in the paper.
  3. It further reduces the number of parameters of the generator network (generator size: 8.17 MB); the lite version has an even smaller generator model.
  4. It uses new high-quality style data, taken from BD movies as much as possible.
Dependency    Version
TensorFlow    1.15.2
CUDA Toolkit  10.0 (tested locally) / 10.1 (Colab)
Python        3.6.8 (3.6+)

Usage:

  1. Colab

    You can open our Jupyter notebook through the Colab link.

  2. Local

    # Step 1: Prepare
    git clone https://github.com/CyFeng16/MVIMP.git
    cd MVIMP
    python3 preparation.py
    # Step 2: Put your photos into ./Data/Input/
    # Step 3: Inference
    python3 inference_animeganv2.py -s {The_Style_You_Choose}
  3. Description of Parameters

    params   abbr.  Default  Description
    --style  -s     Hayao    The anime style you want to get.

    Style name  Anime style
    Hayao       Miyazaki Hayao
    Shinkai     Makoto Shinkai
    Paprika     Kon Satoshi
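
    For example, to render the photos in ./Data/Input/ in Makoto Shinkai's style (a usage sketch built from the parameters above):

    # Example: choose the Shinkai style
    python3 inference_animeganv2.py -s Shinkai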

AnimeGAN

Original repository: TachibanaYoshino/AnimeGAN

This is the open-source implementation of the paper "AnimeGAN: a novel lightweight GAN for photo animation", which uses the GAN framework to transform real-world photos into anime images.

Dependency    Version
TensorFlow    1.15.2
CUDA Toolkit  10.0 (tested locally) / 10.1 (Colab)
Python        3.6.8 (3.6+)

Usage:

  1. Colab

    You can open our Jupyter notebook through the Colab link.

  2. Local

    # Step 1: Prepare
    git clone https://github.com/CyFeng16/MVIMP.git
    cd MVIMP
    python3 preparation.py -f animegan 
    # Step 2: Put your photos into ./Data/Input/
    # Step 3: Inference
    python3 inference_animegan.py

DAIN

Original repository: baowenbo/DAIN

The Depth-Aware video frame INterpolation (DAIN) model explicitly detects occlusion by exploring the depth cue. It develops a depth-aware flow projection layer that synthesizes intermediate flows which preferentially sample closer objects over farther ones.

This method achieves SOTA performance on the Middlebury dataset. Videos are provided here.

The current version of DAIN (in this repo) can smoothly run 1080p video frame interpolation even on a GTX 1080 GPU, as long as you turn -hr on (see the Description of Parameters below).

Dependency    Version
PyTorch       1.0.0
CUDA Toolkit  9.0 (tested on Colab)
Python        3.6.8 (3.6+)
GCC           4.9 (to compile the PyTorch 1.0.0 extension files (.c/.cu))

P.S. Make sure your virtual environment has torch 1.0.0 and torchvision 0.2.1 built against CUDA 9.0. Dependency issues are discussed in #5 and #16.
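
A minimal way to pin those versions with pip (a sketch; it assumes wheels compatible with your platform's CUDA 9.0 setup are available):

    # Hypothetical dependency pinning for DAIN; adjust if your platform
    # needs a specific CUDA-9.0 build of these wheels.
    pip3 install torch==1.0.0 torchvision==0.2.1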

Usage:

  1. Colab

    You can open our Jupyter notebook through the Colab link.

  2. Local

    # Step 1: Prepare
    git clone https://github.com/CyFeng16/MVIMP.git
    cd MVIMP
    python3 preparation.py -f dain
    # Step 2: Put a single video file into ./Data/Input/
    # Step 3: Inference
    python3 inference_dain.py -input your_input.mp4 -ts 0.5 -hr
  3. Description of Parameters

    params             abbr.   Default     Description
    --input_video      -input  /           The input video name.
    --time_step        -ts     0.5         Set the frame multiplier:
                                           0.5 corresponds to 2X,
                                           0.25 corresponds to 4X,
                                           0.125 corresponds to 8X.
    --high_resolution  -hr     store_true  Default is False (action: store_true).
                                           Turn it on when you handle FHD videos;
                                           a frame-splitting process will reduce GPU memory usage.
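
    For example, to quadruple the frame rate of an FHD video (a 0.25 time step turns a 30 fps input into a 120 fps output) while splitting frames to save GPU memory (a usage sketch built from the parameters above):

    # Example: 4X interpolation with the frame-splitting path enabled
    python3 inference_dain.py -input your_input.mp4 -ts 0.25 -hr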

DeOldify

Original repository: jantic/DeOldify

DeOldify is a Deep Learning based project for colorizing and restoring old images and video!

We have integrated the inference capabilities of the DeOldify model (both Artistic and Stable, but not Video) into the MVIMP repository, keeping the input and output interfaces consistent.

Dependency    Version
PyTorch       1.5.0
CUDA Toolkit  10.1 (tested locally and on Colab)
Python        3.6.8 (3.6+)

Other Python dependencies are listed in colab_requirements.txt and will be installed automatically when running preparation.py.

Usage:

  1. Colab

    You can open our Jupyter notebook through the Colab link.

  2. Local

    # Step 1: Prepare
    git clone https://github.com/CyFeng16/MVIMP.git
    cd MVIMP
    python3 preparation.py -f deoldify
    # Step 2: Inference
    python3 -W ignore inference_deoldify.py -art
  3. Description of Parameters

    params           abbr.    Default     Description
    --artistic       -art     store_true  The artistic model achieves the highest-quality results in image coloration,
                                          in terms of interesting details and vibrance.
    --stable         -st      store_true  The stable model achieves the best results with landscapes and portraits.
    --render_factor  -factor  35          Between 7 and 40; try several values for the best result.
    --watermarked    -mark    store_true  I respect the original author's spirit of adding a watermark to distinguish AI works,
                                          but setting it to False may be more convenient in a production environment.
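
    For example, to colorize with the stable model at a lower render factor (a usage sketch built from the parameters above):

    # Example: stable model, render factor 20
    python3 -W ignore inference_deoldify.py -st -factor 20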

Photo3D

Original repository: vt-vl-lab/3d-photo-inpainting

This is a method for converting a single RGB-D input image into a 3D photo, i.e., a multi-layer representation for novel view synthesis that contains hallucinated color and depth structure in regions occluded in the original view.

Dependency    Version
PyTorch       1.5.0
CUDA Toolkit  10.1 (tested locally and on Colab)
Python        3.6.8 (3.6+)

Other Python dependencies are listed in requirements.txt and will be installed automatically when running preparation.py.

Usage:

  1. Colab

    You can open our Jupyter notebook through the Colab link.

    P.S. A massive amount of memory is occupied during operation (and it grows with -l). A higher-memory runtime helps if you are a Colab Pro user.

  2. Local

    # Step 1: Prepare
    git clone https://github.com/CyFeng16/MVIMP.git
    cd MVIMP
    python3 preparation.py -f photo3d
    # Step 2: Put your photos into ./Data/Input/
    # Step 3: Inference
    python3 inference_photo3d.py -f 40 -n 240 -l 960
  3. Description of Parameters

    params             abbr.  Default  Description
    --fps              -f     40       The FPS of the output video.
    --frames           -n     240      The number of frames of the output video.
    --longer_side_len  -l     960      The longer side of the output video (either height or width).
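
    For example, the defaults above yield a 6-second video (240 frames at 40 FPS); a 10-second, 720-pixel clip could be rendered as follows (a usage sketch built from the parameters above):

    # Example: 300 frames at 30 FPS = 10 seconds, longer side 720
    python3 inference_photo3d.py -f 30 -n 300 -l 720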

Waifu2x

Original repository: nihui/waifu2x-ncnn-vulkan

waifu2x-ncnn-vulkan is an ncnn implementation of waifu2x, which runs fast on Intel/AMD/NVIDIA GPUs via the Vulkan API.

We have integrated the inference capabilities of the waifu2x model ("cunet", "photo", and "animeart") into the MVIMP repository, keeping the input and output interfaces consistent.

Dependency    Version
CUDA Toolkit  10.1 (tested locally and on Colab)
Python        3.6.8 (3.6+)

Usage:

  1. Colab

    You can open our Jupyter notebook through the Colab link.

  2. Local

    # Step 1: Prepare
    git clone https://github.com/CyFeng16/MVIMP.git
    cd MVIMP
    python3 preparation.py -f waifu2x-vulkan
    # Step 2: Inference
    python3 inference_waifu2x-vulkan.py -s 2 -n 0
  3. Description of Parameters

    params      abbr.  Default     Description
    --scale     -s     2           Upscale ratio (1/2, default 2).
    --noise     -n     0           Denoise level (-1/0/1/2/3, default 0).
    --tilesize  -t     400         Tile size, between 32 and 19327352831; no appreciable effect.
    --model     -m     cunet       Model to use: "cunet", "photo", or "animeart".
    --tta       -x     store_true  (True if set) TTA mode reduces several types of artifacts,
                                   but it is 8x slower than the non-TTA mode.
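
    For example, to upscale 2x with the "photo" model, the strongest denoising, and TTA mode enabled (a usage sketch built from the parameters above):

    # Example: photo model, noise level 3, TTA enabled
    python3 inference_waifu2x-vulkan.py -s 2 -n 3 -m photo -x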

TODO

Acknowledgment

This code is based on TachibanaYoshino/AnimeGAN, TachibanaYoshino/AnimeGANv2, vt-vl-lab/3d-photo-inpainting, baowenbo/DAIN, jantic/DeOldify, and nihui/waifu2x-ncnn-vulkan. Thanks to the contributors of those projects.

@EtianAM provided the Spanish guide. @BrokenSilence improved DAIN's performance.

Stargazers over time
