
teradepth / Iva

IVA: Independent Vector Analysis implementation

Programming Languages

matlab

Projects that are alternatives of or similar to Iva

Hidt
Official repository for the paper "High-Resolution Daytime Translation Without Domain Labels" (CVPR2020, Oral)
Stars: ✭ 513 (+1365.71%)
Mutual labels:  unsupervised-learning
Awesome Artificial Intelligence
A curated list of Artificial Intelligence (AI) courses, books, video lectures and papers.
Stars: ✭ 6,516 (+18517.14%)
Mutual labels:  unsupervised-learning
Summary loop
Codebase for the Summary Loop paper at ACL2020
Stars: ✭ 26 (-25.71%)
Mutual labels:  unsupervised-learning
Athena
an open-source implementation of sequence-to-sequence based speech processing engine
Stars: ✭ 542 (+1448.57%)
Mutual labels:  unsupervised-learning
All About The Gan
All About the GANs(Generative Adversarial Networks) - Summarized lists for GAN
Stars: ✭ 630 (+1700%)
Mutual labels:  unsupervised-learning
Minisom
🔴 MiniSom is a minimalistic implementation of the Self Organizing Maps
Stars: ✭ 801 (+2188.57%)
Mutual labels:  unsupervised-learning
Sc Sfmlearner Release
Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video (NeurIPS 2019)
Stars: ✭ 468 (+1237.14%)
Mutual labels:  unsupervised-learning
Discogan Pytorch
PyTorch implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks"
Stars: ✭ 961 (+2645.71%)
Mutual labels:  unsupervised-learning
Context Encoder
[CVPR 2016] Unsupervised Feature Learning by Image Inpainting using GANs
Stars: ✭ 731 (+1988.57%)
Mutual labels:  unsupervised-learning
Unsup3d
(CVPR'20 Oral) Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild
Stars: ✭ 905 (+2485.71%)
Mutual labels:  unsupervised-learning
Unsupervised Classification
SCAN: Learning to Classify Images without Labels (ECCV 2020), incl. SimCLR.
Stars: ✭ 605 (+1628.57%)
Mutual labels:  unsupervised-learning
Alibi Detect
Algorithms for outlier and adversarial instance detection, concept drift and metrics.
Stars: ✭ 604 (+1625.71%)
Mutual labels:  unsupervised-learning
Variational Autoencoder
Variational autoencoder implemented in tensorflow and pytorch (including inverse autoregressive flow)
Stars: ✭ 807 (+2205.71%)
Mutual labels:  unsupervised-learning
Lemniscate.pytorch
Unsupervised Feature Learning via Non-parametric Instance Discrimination
Stars: ✭ 532 (+1420%)
Mutual labels:  unsupervised-learning
Domain Transfer Network
TensorFlow Implementation of Unsupervised Cross-Domain Image Generation
Stars: ✭ 850 (+2328.57%)
Mutual labels:  unsupervised-learning
Autovc
AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss
Stars: ✭ 485 (+1285.71%)
Mutual labels:  unsupervised-learning
Simclr
PyTorch implementation of SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
Stars: ✭ 750 (+2042.86%)
Mutual labels:  unsupervised-learning
Uc Davis Cs Exams Analysis
📈 Regression and Classification with UC Davis student quiz data and exam data
Stars: ✭ 33 (-5.71%)
Mutual labels:  unsupervised-learning
Transferlearning
Transfer learning / domain adaptation / domain generalization / multi-task learning etc. Papers, codes, datasets, applications, tutorials.
Stars: ✭ 8,481 (+24131.43%)
Mutual labels:  unsupervised-learning
Bagofconcepts
Python implementation of bag-of-concepts
Stars: ✭ 18 (-48.57%)
Mutual labels:  unsupervised-learning

IVA: Independent Vector Analysis

MATLAB implementations

ivabss.m

Natural gradient algorithm for frequency-domain blind source separation based on Independent Vector Analysis [2]

[y, W] = ivabss(x, nfft, maxiter, tol, eta, nsou)
 y : separated signals (nsou x N)
 W : unmixing matrices (nsou x nmic x nfft/2+1)
 x : observation signals (nmic x N),
       where nsou is # of sources, nmic is # of mics, and N is # of time frames
 nfft : # of fft points (default =1024)
 eta : learning rate (default =0.1)
 maxiter : # of iterations (default =1000)
 tol : When the change in the objective is less than tol,
           the algorithm terminates (default =1e-6)
 nsou : # of sources (default =nmic)
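
A minimal usage sketch for ivabss, following the signature documented above (the file name 'mixture.wav', the two-microphone setup, and the time-domain layout of x are illustrative assumptions, not part of the repository):

 % Load a hypothetical two-microphone mixture; audioread returns N x nmic.
 [x, fs] = audioread('mixture.wav');
 x = x.';                              % ivabss expects nmic x N

 nfft    = 1024;                       % FFT length (default)
 maxiter = 1000;                       % maximum number of iterations (default)
 tol     = 1e-6;                       % stopping tolerance on the objective (default)
 eta     = 0.1;                        % learning rate (default)

 % nsou is omitted, so it defaults to the number of microphones.
 [y, W] = ivabss(x, nfft, maxiter, tol, eta);

 % Write out the separated signals.
 audiowrite('separated1.wav', y(1,:).', fs);
 audiowrite('separated2.wav', y(2,:).', fs);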

fiva.m

Fast algorithm for frequency-domain blind source separation based on Independent Vector Analysis [3]

[y, W] = fivabss(x, nfft, maxiter, tol, nsou)
 y : separated signals (nsou x N)
 W : unmixing matrices (nsou x nmic x nfft/2+1)
 x : observation signals (nmic x N),
       where nsou is # of sources, nmic is # of mics, and N is # of time frames
 nfft : # of fft points (default =1024)
 maxiter : # of iterations (default =1000)
 tol : When the increase in the likelihood is less than tol,
           the algorithm terminates (default =1e-6)
 nsou : # of sources (default =nmic)
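
A corresponding sketch for the fast variant, calling the fivabss signature documented above under the same illustrative assumptions:

 % Same hypothetical two-microphone mixture as in the ivabss example.
 [x, fs] = audioread('mixture.wav');
 x = x.';                              % fivabss expects nmic x N

 nfft    = 1024;                       % FFT length (default)
 maxiter = 1000;                       % maximum number of iterations (default)
 tol     = 1e-6;                       % stopping tolerance on the likelihood (default)

 % nsou is omitted, so it defaults to the number of microphones.
 [y, W] = fivabss(x, nfft, maxiter, tol);

 audiowrite('fast_separated1.wav', y(1,:).', fs);
 audiowrite('fast_separated2.wav', y(2,:).', fs);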

Python implementation

TO-DO

References

[1] Taesu Kim, "Independent Vector Analysis," Ph.D. dissertation, KAIST, 2007.

[2] Taesu Kim, Hagai Attias, Soo-Young Lee, and Te-Won Lee, "Blind source separation exploiting higher-order frequency dependencies," IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 1, 2007.

[3] Intae Lee, Taesu Kim, and Te-Won Lee, "Fast fixed-point independent vector analysis algorithms for convolutive blind source separation," Signal Processing, vol. 87, no. 8, 2007.

[4] Taesu Kim, Torbjørn Eltoft, and Te-Won Lee, "Independent vector analysis: An extension of ICA to multivariate components," International Conference on Independent Component Analysis and Signal Separation, 2006.

[5] Taesu Kim, Intae Lee, and Te-Won Lee, "Independent vector analysis: Definition and algorithms," Fortieth Asilomar Conference on Signals, Systems and Computers, 2006.
