
VideoFlint / Cabbage

License: MIT
A video composition framework built on top of AVFoundation. It's simple to use and easy to extend.


Projects that are alternatives to or similar to Cabbage

Avdemo
Demo projects for iOS Audio & Video development.
Stars: ✭ 136 (-86.8%)
Mutual labels:  audio, video-processing, avfoundation
Sbplayer ios
A lightweight player based on AVPlayer; it can play local and network videos and is easy to customize.
Stars: ✭ 134 (-86.99%)
Mutual labels:  audio, avfoundation
Soundable
Soundable allows you to play sounds, single and in sequence, in a very easy way
Stars: ✭ 78 (-92.43%)
Mutual labels:  audio, avfoundation
Avfoundationrecorder
Swift audio recorder using AVFoundation
Stars: ✭ 174 (-83.11%)
Mutual labels:  audio, avfoundation
Metalvideoprocess
MetalVideoProcess is a high-performance video effects processing framework. It's based on GPUImage3 Metal and supports asynchronous and multithreaded processing.
Stars: ✭ 52 (-94.95%)
Mutual labels:  video-processing, avfoundation
VideoLab
High-performance and flexible video editing and effects framework, based on AVFoundation and Metal.
Stars: ✭ 663 (-35.63%)
Mutual labels:  avfoundation, video-processing
Nextlevelsessionexporter
🔄 Export and transcode media in Swift
Stars: ✭ 170 (-83.5%)
Mutual labels:  audio, avfoundation
Online Video Editor
API based Online Video Editing using FFMPEG & NodeJs for Backend Editing
Stars: ✭ 176 (-82.91%)
Mutual labels:  audio, video-processing
Optivideoeditor For Ios
Native Video editor : Video trim, Audio, Video merge, Slow and fast motion, Video transition, Text and image, Filters, etc...
Stars: ✭ 234 (-77.28%)
Mutual labels:  audio, video-processing
Optivideoeditor For Android
Native Video editor : Video trim, Audio, Video merge, Slow and fast motion, Text and image, etc...
Stars: ✭ 209 (-79.71%)
Mutual labels:  audio, video-processing
Auto Editor
Auto-Editor: Effort free video editing!
Stars: ✭ 382 (-62.91%)
Mutual labels:  audio, video-processing
Aural Player
An audio player for macOS, inspired by Winamp for Windows.
Stars: ✭ 256 (-75.15%)
Mutual labels:  audio, avfoundation
Mlt
MLT Multimedia Framework
Stars: ✭ 836 (-18.83%)
Mutual labels:  audio, video-processing
Swiftysound
SwiftySound is a simple library that lets you play sounds with a single line of code.
Stars: ✭ 995 (-3.4%)
Mutual labels:  audio
Awesome Python Scientific Audio
Curated list of python software and packages related to scientific research in audio
Stars: ✭ 1,015 (-1.46%)
Mutual labels:  audio
Audioutils
🎶 Audioutils - audio recording and playback utilities
Stars: ✭ 38 (-96.31%)
Mutual labels:  audio
Vchsm
C++ 11 algorithm implementation for voice conversion using harmonic plus stochastic models
Stars: ✭ 38 (-96.31%)
Mutual labels:  audio
Aplay
A (maybe) better iOS audio streaming, caching, and playback framework
Stars: ✭ 44 (-95.73%)
Mutual labels:  audio
Cpal
Cross-platform audio I/O library in pure Rust
Stars: ✭ 1,001 (-2.82%)
Mutual labels:  audio
Audiovisualizer
iOS Audio Visualizer
Stars: ✭ 37 (-96.41%)
Mutual labels:  audio

Chinese documentation: 中文使用文档

A high-level video composition framework built on top of AVFoundation. It's simple to use and easy to extend. Use it to make your life easier when implementing video composition features.

This project is built around a Timeline concept. Any resource can be put into a Timeline. A resource can be an image, video, audio, GIF, and so on.

Features

  • Build the result content object in only a few steps:
  1. Create a resource
  2. Set its configuration
  3. Put it into a Timeline
  4. Use the Timeline to generate an AVPlayerItem/AVAssetImageGenerator/AVAssetExportSession
  • Resource: supports video, audio, and images. Resources are extendable; you can create your own resource type, e.g. a GIF image resource.
  • Video configuration: supports transform, opacity, and so on. The configuration is extendable.
  • Audio configuration: supports changing the volume or processing raw audio data in real time. The configuration is extendable.
  • Transition: clips can transition to and from the previous and next clips.

Usage

Below is the simplest example. Create a resource from an AVAsset, set the video frame's scale mode to aspect fill, insert the TrackItem into a Timeline, and finally use CompositionGenerator to build an AVAssetExportSession/AVAssetImageGenerator/AVPlayerItem.


// 1. Create a resource
let asset: AVAsset = ...     
let resource = AVAssetTrackResource(asset: asset)

// 2. Create a TrackItem instance. A TrackItem carries the video & audio configuration for its resource
let trackItem = TrackItem(resource: resource)
// Set the video scale mode on the canvas
trackItem.configuration.videoConfiguration.baseContentMode = .aspectFill

// 3. Add TrackItem to timeline
let timeline = Timeline()
timeline.videoChannel = [trackItem]
timeline.audioChannel = [trackItem]

// 4. Use CompositionGenerator to create AVAssetExportSession/AVAssetImageGenerator/AVPlayerItem
let compositionGenerator = CompositionGenerator(timeline: timeline)
// Set the video canvas's size
compositionGenerator.renderSize = CGSize(width: 1920, height: 1080)
let exportSession = compositionGenerator.buildExportSession(presetName: AVAssetExportPresetMediumQuality)
let playerItem = compositionGenerator.buildPlayerItem()
let imageGenerator = compositionGenerator.buildImageGenerator()
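
Once these objects are built, they behave like regular AVFoundation objects. Below is a minimal sketch, continuing from the snippet above, of driving the export and previewing the result. It assumes buildExportSession returns an optional AVAssetExportSession; the output URL is just a placeholder destination, and the generator may already pre-configure the session's output settings.

// Export to a file (a sketch; pick your own destination URL)
let outputURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("cabbage-output.mp4")
if let exportSession = exportSession {
    exportSession.outputURL = outputURL
    exportSession.outputFileType = .mp4
    exportSession.exportAsynchronously {
        if exportSession.status == .completed {
            // The finished movie is now at outputURL
        }
    }
}

// Or preview the same timeline directly with AVPlayer
let player = AVPlayer(playerItem: playerItem)
player.play()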

Basic Concept

Timeline

A Timeline is used to organize resources in time; the developer is responsible for placing each resource at the right time range.

CompositionGenerator

CompositionGenerator is used to create an AVAssetExportSession/AVAssetImageGenerator/AVPlayerItem.

It translates a Timeline instance into the underlying AVFoundation API objects.

Resource

A Resource provides image and/or audio data, along with timing information about that data.

Currently supported:

  • Image types:
    • ImageResource: provides a CIImage as the video frame (see the sketch after this list)
    • PHAssetImageResource: provides a PHAsset and loads a CIImage from it as the video frame
    • AVAssetReaderImageResource: provides an AVAsset and reads sample buffers as video frames using AVAssetReader
    • AVAssetReverseImageResource: provides an AVAsset and reads sample buffers as video frames using AVAssetReader, but in reverse order
  • Video & audio types:
    • AVAssetTrackResource: provides an AVAsset and uses its AVAssetTracks for video and audio frames
    • PHAssetTrackResource: provides a PHAsset and loads an AVAsset from it
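
For example, a still image can be placed on the timeline like any other clip. Below is a minimal sketch; it assumes ImageResource can be initialized with a CIImage plus a display duration (check the actual initializer), and "overlay" is a hypothetical asset name.

// Assumption: ImageResource takes a CIImage plus a display duration.
let uiImage = UIImage(named: "overlay")!    // hypothetical asset name
let imageResource = ImageResource(image: CIImage(image: uiImage)!,
                                  duration: CMTime(seconds: 3, preferredTimescale: 600))
let imageItem = TrackItem(resource: imageResource)
imageItem.configuration.videoConfiguration.baseContentMode = .aspectFill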

TrackItem

A TrackItem contains a Resource, a VideoConfiguration and an AudioConfiguration.

Currently supported:

  • Video configuration
    • baseContentMode: the video frame's scale mode, based on the canvas size
    • transform
    • opacity
    • configurations: custom filters can be added here (see the sketch after this list)
  • Audio configuration
    • volume
    • nodes: custom audio processing operations, e.g. VolumeAudioConfiguration
  • videoTransition, audioTransition
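
A short sketch of tweaking the properties listed above, continuing with the asset from the Usage example. The property names follow this list; the transform is assumed to be a CGAffineTransform, so treat this as a sketch rather than the exact API.

let trackItem = TrackItem(resource: AVAssetTrackResource(asset: asset))

// Video configuration: scale mode, opacity and a transform on the canvas
trackItem.configuration.videoConfiguration.baseContentMode = .aspectFill
trackItem.configuration.videoConfiguration.opacity = 0.7
trackItem.configuration.videoConfiguration.transform = CGAffineTransform(rotationAngle: .pi / 8)

// Audio configuration: lower the clip's volume
trackItem.configuration.audioConfiguration.volume = 0.5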

Advanced usage

Custom Resource

You can provide a custom resource type by subclassing Resource and implementing func tracks(for type: AVMediaType) -> [AVAssetTrack].

By subclassing ImageResource, you can use a CIImage as the video frame.
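
As a sketch, a solid-color clip could be built by subclassing ImageResource and overriding the method that supplies the frame image. The override shown here, image(at:renderSize:), is an assumption about ImageResource's API; match the actual declaration in the headers.

// A sketch: a resource that renders a solid color for every frame.
class ColorResource: ImageResource {
    // Assumption: ImageResource exposes an overridable image(at:renderSize:) hook.
    override func image(at time: CMTime, renderSize: CGSize) -> CIImage? {
        return CIImage(color: CIColor(red: 0.1, green: 0.6, blue: 0.9))
            .cropped(to: CGRect(origin: .zero, size: renderSize))
    }
}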

Custom Image Filter

An image filter needs to implement the VideoConfigurationProtocol protocol; it can then be added to TrackItem.configuration.videoConfiguration.configurations.

KeyframeVideoConfiguration is a concrete implementation.
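
A sketch of what such a filter could look like, continuing with the trackItem from the Usage example. The requirement shown here, applyEffect(to:info:), and the VideoConfigurationEffectInfo type are assumptions about the protocol's shape; adapt them to the actual declaration.

// A sketch: a sepia filter applied to every frame of a track item.
class SepiaVideoConfiguration: NSObject, VideoConfigurationProtocol {
    // Assumption: the protocol asks for something along these lines.
    func applyEffect(to sourceImage: CIImage, info: VideoConfigurationEffectInfo) -> CIImage {
        return sourceImage.applyingFilter("CISepiaTone", parameters: [kCIInputIntensityKey: 0.8])
    }
}

trackItem.configuration.videoConfiguration.configurations.append(SepiaVideoConfiguration())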

Custom Audio Mixer

An audio mixer needs to implement the AudioConfigurationProtocol protocol; it can then be added to TrackItem.configuration.audioConfiguration.nodes.

VolumeAudioConfiguration is a concrete implementation.
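
A sketch of a custom node, again continuing with the trackItem from the Usage example. The hook shown here, process(timeRange:bufferListInOut:), and the Float32 sample format are assumptions; check AudioConfigurationProtocol's actual requirements before relying on this.

// A sketch: halve the volume by scaling the raw samples of each buffer.
class HalfVolumeNode: NSObject, AudioConfigurationProtocol {
    // Assumption: the protocol hands over an AudioBufferList to process in place.
    func process(timeRange: CMTimeRange, bufferListInOut: UnsafeMutableAudioBufferListPointer) {
        for buffer in bufferListInOut {
            guard let data = buffer.mData else { continue }
            let sampleCount = Int(buffer.mDataByteSize) / MemoryLayout<Float32>.size
            let samples = data.bindMemory(to: Float32.self, capacity: sampleCount)
            for i in 0..<sampleCount { samples[i] *= 0.5 }
        }
    }
}

trackItem.configuration.audioConfiguration.nodes.append(HalfVolumeNode())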

Why I created this project

AVFoundation already provides powerful composition APIs for video and audio, but these APIs are far from easy to use.

1. AVComposition

We need to know how and when to connect the different tracks. Say we store the time range info for each track; we will soon realize that this info is very easy to break. Consider the scenarios below:

  • Change previous track's time range info
  • Change speed
  • Add new track
  • Add/remove transition

Each of these operations affects the timeline, and every track's time range info then needs to be updated.

Worse, AVComposition only supports video tracks and audio tracks. If we want to combine photos and video, it is very difficult to implement.

2. AVVideoComposition

AVVideoCompositionInstruction is used to construct the timeline, and AVVideoCompositionLayerInstruction is used to configure each track's transform. If we want to operate on raw video frame data, we need to implement the AVVideoCompositing protocol.

After writing this code, I realized that much of it was unrelated to business logic and should be encapsulated.

3. Difficult to extend features

AVFoundation only supports a few basic composition features. As far as I know, it can only change a video frame's transform and the audio volume. If a developer wants to implement other features, e.g. applying a filter to a video frame, they need to implement AVVideoComposition's AVVideoCompositing protocol, and the workload suddenly becomes very large.

Life is hard; why should writing code be hard too? So I created Cabbage: an easy-to-understand API with flexible extensibility.

Installation

Cocoapods

platform :ios, '9.0'
use_frameworks!

target 'MyApp' do
  # your other pod
  # ...
  pod 'VFCabbage'
end

Manually

Installing the framework manually is not recommended, but if you have to, you can either:

  • simply drag the Cabbage/Sources folder into your project, or
  • add Cabbage as a submodule:
$ git submodule add https://github.com/VideoFlint/Cabbage.git

Requirements

  • iOS 9.0+
  • Swift 4.x

Projects using Cabbage

  • VideoCat: a demo project that demonstrates how to use Cabbage.

LICENSE

Released under the MIT license.
