Social-IQ: [CVPR 2019 Oral] A Question Answering Benchmark for Artificial Social Intelligence
Stars: ✭ 37 (+32.14%)
circDeep: End-to-end learning framework for classifying circular RNAs against other long non-coding RNAs using multimodal deep learning
Stars: ✭ 21 (-25%)
MMD-GAN: Improving MMD-GAN training with a repulsive loss function
Stars: ✭ 82 (+192.86%)
MultiGraphGAN: Predicting multiple target graphs from a source graph using geometric deep learning.
Stars: ✭ 16 (-42.86%)
MSAF: Official implementation of the paper "MSAF: Multimodal Split Attention Fusion"
Stars: ✭ 47 (+67.86%)
trVAE: Conditional out-of-distribution prediction
Stars: ✭ 47 (+67.86%)
BBFN: Implementation of the paper "Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis"
Stars: ✭ 42 (+50%)
vmd sizing: Regenerates VMD (MMD motion data) resized to fit a specified model.
Stars: ✭ 33 (+17.86%)
C4D MMD Tool: A Cinema 4D plugin, written in C++, for importing MikuMikuDance data into Cinema 4D.
Stars: ✭ 52 (+85.71%)
cats-blender-plugin: 😺 A tool designed to shorten the steps needed to import and optimize models for VRChat. Compatible model formats include MMD, XNALara, Mixamo, DAZ/Poser, Blender Rigify, Sims 2, Motion Builder, 3DS Max, and potentially more.
Stars: ✭ 1,674 (+5878.57%)
VMD-Lifting: A fork of 'Lifting from the Deep' that outputs estimated 3D pose data to a VMD file
Stars: ✭ 31 (+10.71%)
nanoem: An MMD (MikuMikuDance) compatible, cross-platform application built mainly for macOS.
Stars: ✭ 136 (+385.71%)
pmx: A pure JavaScript parser for the PMX format (MMD)
Stars: ✭ 34 (+21.43%)
dan-visdial: ✨ Official PyTorch implementation of the EMNLP '19 paper "Dual Attention Networks for Visual Reference Resolution in Visual Dialog"
Stars: ✭ 38 (+35.71%)
visdial-gnn: PyTorch code for "Reasoning Visual Dialogs with Structural and Partial Observations"
Stars: ✭ 39 (+39.29%)
visdial: Visual Dialog: Light-weight Transformer for Many Inputs (ECCV 2020)
Stars: ✭ 27 (-3.57%)
slp: Utilities and modules for speech, language, and multimodal processing using PyTorch and PyTorch Lightning
Stars: ✭ 17 (-39.29%)
scarches: Reference mapping for single-cell genomics
Stars: ✭ 175 (+525%)
muscaps: Source code for "MusCaps: Generating Captions for Music Audio" (IJCNN 2021)
Stars: ✭ 39 (+39.29%)
Robust-Deep-Learning-Pipeline: Deep convolutional bidirectional LSTM for complex activity recognition with missing data; Human Activity Recognition Challenge, Springer SIST (2020)
Stars: ✭ 20 (-28.57%)
Multimodal-Future-Prediction: The official repository for the CVPR 2019 paper "Overcoming Limitations of Mixture Density Networks: A Sampling and Fitting Framework for Multimodal Future Prediction"
Stars: ✭ 38 (+35.71%)
MISE: Multimodal Image Synthesis and Editing: A Survey
Stars: ✭ 214 (+664.29%)
hateful memes-hate detectron: Detecting hate speech in memes using multimodal deep learning approaches; prize-winning solution to the Hateful Memes Challenge. https://arxiv.org/abs/2012.12975
Stars: ✭ 35 (+25%)
referit3d: Code accompanying our ECCV 2020 paper on 3D neural listeners.
Stars: ✭ 59 (+110.71%)
iMIX: A framework for multimodal intelligence research from Inspur HSSLAB.
Stars: ✭ 21 (-25%)