cfml tools: My collection of causal inference algorithms built on top of accessible, simple, out-of-the-box ML methods, aimed at being explainable and useful in a business context
Stars: ✭ 24 (-75%)
Awesome-Neural-Logic: Awesome Neural Logic and Causality: MLN, NLRL, NLM, etc. Causal inference, neural logic, and frontier areas of logical reasoning for strong AI.
Stars: ✭ 106 (+10.42%)
causal-learn: Causal Discovery for Python. Translation and extension of the Tetrad Java code.
Stars: ✭ 428 (+345.83%)
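Constraint-based causal discovery of the kind causal-learn implements (e.g. the PC algorithm) starts from statistical independence tests between variable pairs. The toy sketch below illustrates only that first step on synthetic data; the threshold, variable names, and data are illustrative assumptions, not the library's API.

```python
# Toy sketch of the independence-testing step behind constraint-based
# causal discovery: keep a skeleton edge only when the variables are
# measurably correlated. Illustrative only, not the causal-learn API.
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# Ground truth: X -> Y, while Z is independent of both.
x = [random.gauss(0, 1) for _ in range(2000)]
y = [xi + random.gauss(0, 0.5) for xi in x]
z = [random.gauss(0, 1) for _ in range(2000)]

pairs = {"X-Y": (x, y), "X-Z": (x, z), "Y-Z": (y, z)}
# Crude decision rule: an edge survives if |corr| exceeds 0.1.
edges = {name: abs(corr(a, b)) > 0.1 for name, (a, b) in pairs.items()}
print(edges)  # only X-Y should survive
```

A real implementation uses proper conditional independence tests (e.g. Fisher's z) and then conditions on growing separating sets to prune edges; this sketch stops at the marginal tests.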
Scan2Cap: [CVPR 2021] Context-aware Dense Captioning in RGB-D Scans
Stars: ✭ 81 (-15.62%)
SGGpoint: [CVPR 2021] Exploiting Edge-Oriented Reasoning for 3D Point-based Scene Graph Analysis (official PyTorch implementation)
Stars: ✭ 41 (-57.29%)
LBYLNet: [CVPR 2021] Look before you leap: learning landmark features for one-stage visual grounding.
Stars: ✭ 46 (-52.08%)
causaldag: Python package for the creation, manipulation, and learning of causal DAGs
Stars: ✭ 82 (-14.58%)
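The basic object a package like causaldag manipulates is a DAG over variables, typically stored via parent sets, from which things like topological orderings follow. Below is a minimal hand-rolled stand-in for that idea (Kahn's algorithm over a parent-set dict); the node names are illustrative assumptions and none of this is causaldag's actual API.

```python
# Minimal causal-DAG sketch: nodes map to their parent sets, and a
# topological order is recovered with Kahn's algorithm. A concept
# illustration, not the causaldag API.
from collections import deque

def topological_order(parents):
    """parents: dict mapping node -> set of its parents. Returns a causal order."""
    indegree = {v: len(ps) for v, ps in parents.items()}
    children = {v: set() for v in parents}
    for v, ps in parents.items():
        for p in ps:
            children[p].add(v)
    queue = deque(v for v, d in indegree.items() if d == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for c in children[v]:
            indegree[c] -= 1
            if indegree[c] == 0:
                queue.append(c)
    if len(order) != len(parents):
        raise ValueError("graph has a cycle")
    return order

# Classic example: Smoking -> Tar -> Cancer, plus Smoking -> Cancer.
dag = {"Smoking": set(), "Tar": {"Smoking"}, "Cancer": {"Tar", "Smoking"}}
print(topological_order(dag))  # ['Smoking', 'Tar', 'Cancer']
```

The parent-set representation also makes interventions easy to express: cutting the edges into a node is just replacing its parent set with the empty set.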
AODA: Official implementation of "Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis" (WACV 2022 / CVPRW 2021)
Stars: ✭ 44 (-54.17%)
HistoGAN: Reference code for the paper "HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color Histograms" (CVPR 2021).
Stars: ✭ 158 (+64.58%)
causal-ml: Must-read papers and resources related to causal inference and machine (deep) learning
Stars: ✭ 387 (+303.13%)
causeinfer: Machine-learning-based causal inference and uplift modeling in Python
Stars: ✭ 45 (-53.12%)
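The simplest uplift estimator behind libraries like causeinfer is the "two-model" approach: fit one outcome model on the treated group, one on the control group, and take the difference of their predictions per segment. In the sketch below the "models" are just group means, and the segments, data, and function name are illustrative assumptions, not causeinfer's API.

```python
# Two-model uplift sketch: estimated uplift for a segment is the mean
# outcome among its treated units minus the mean outcome among its
# control units. Group means stand in for fitted models here.
def two_model_uplift(rows, segment):
    """rows: (segment, treated, outcome) triples; returns estimated uplift."""
    treated = [y for s, t, y in rows if s == segment and t == 1]
    control = [y for s, t, y in rows if s == segment and t == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

data = [
    ("young", 1, 1), ("young", 1, 1), ("young", 1, 0),
    ("young", 0, 0), ("young", 0, 0), ("young", 0, 1),
    ("old", 1, 0), ("old", 0, 0),
]
print(two_model_uplift(data, "young"))  # 2/3 - 1/3, about 0.333
```

Real uplift pipelines replace the group means with any classifier or regressor and then rank customers by predicted uplift to decide whom to target.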
DoWhy: A Python library for causal inference that supports explicit modeling and testing of causal assumptions, based on a unified language for causal inference that combines causal graphical models and the potential outcomes framework.
Stars: ✭ 3,480 (+3525%)
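One identification strategy DoWhy automates is backdoor adjustment: stratify on the confounders, contrast treated vs. control within each stratum, and average the contrasts over the confounder distribution. The sketch below does this by hand on synthetic data to show why the naive contrast is biased; the variable names, effect sizes, and data generation are illustrative assumptions, not DoWhy's API.

```python
# Hand-rolled backdoor adjustment on synthetic confounded data. Z drives
# both treatment T and outcome Y; the true effect of T on Y is +1.
import random

random.seed(1)

rows = []
for _ in range(20000):
    z = random.random() < 0.5                      # confounder
    t = random.random() < (0.8 if z else 0.2)      # Z raises treatment odds
    y = (1 if t else 0) + (2 if z else 0) + random.gauss(0, 0.1)
    rows.append((z, t, y))

def mean(xs):
    return sum(xs) / len(xs)

# Naive contrast mixes the treatment effect with the confounding by Z.
naive = (mean([y for z, t, y in rows if t])
         - mean([y for z, t, y in rows if not t]))

# Backdoor adjustment: average within-stratum contrasts weighted by P(Z).
adjusted = 0.0
for zval in (True, False):
    pz = mean([1 if z == zval else 0 for z, t, y in rows])
    effect = (mean([y for z, t, y in rows if z == zval and t])
              - mean([y for z, t, y in rows if z == zval and not t]))
    adjusted += pz * effect

print(round(naive, 2), round(adjusted, 2))  # naive well above 1, adjusted near 1
```

DoWhy's value is doing this identification step symbolically from a user-supplied causal graph, then estimating with pluggable estimators and stress-testing the result with refutation checks.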
Papers: Some CV papers I have read: image captioning, weakly supervised segmentation, etc.
Stars: ✭ 99 (+3.13%)
AMP-Regularizer: Code for our paper "Regularizing Neural Networks via Adversarial Model Perturbation" (CVPR 2021)
Stars: ✭ 26 (-72.92%)
MetaBIN: [CVPR 2021] Meta Batch-Instance Normalization for Generalizable Person Re-Identification
Stars: ✭ 58 (-39.58%)
Causal Reading Group: A continually updated paper list on machine learning + causal theory. Related papers are also discussed internally each week between NExT++ (NUS) and LDS (USTC).
Stars: ✭ 339 (+253.13%)
BCNet: Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers [CVPR 2021]
Stars: ✭ 434 (+352.08%)
cvpr-buzz: 🐝 Explore trending papers at CVPR
Stars: ✭ 37 (-61.46%)
CoMoGAN: Continuous model-guided image-to-image translation (CVPR 2021 oral).
Stars: ✭ 139 (+44.79%)
RECCON: Dataset and PyTorch implementations of the models from the paper "Recognizing Emotion Cause in Conversations".
Stars: ✭ 126 (+31.25%)
ClipBERT: [CVPR 2021 Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning on image-text and video-text tasks.
Stars: ✭ 168 (+75%)
Awesome Vqa: Visual Q&A reading list
Stars: ✭ 403 (+319.79%)
Oscar: Oscar and VinVL
Stars: ✭ 396 (+312.5%)
Tbd Nets: PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning"
Stars: ✭ 345 (+259.38%)
HandMesh: No description or website provided.
Stars: ✭ 258 (+168.75%)
Pytorch Vqa: Strong baseline for visual question answering
Stars: ✭ 158 (+64.58%)
Awesome Visual Question Answering: A curated list of Visual Question Answering (VQA) (image/video question answering), Visual Question Generation, Visual Dialog, Visual Commonsense Reasoning, and related areas.
Stars: ✭ 295 (+207.29%)
Nscl Pytorch Release: PyTorch implementation of the Neuro-Symbolic Concept Learner (NS-CL).
Stars: ✭ 276 (+187.5%)
MICCAI21 MMQ: Multiple Meta-model Quantifying for Medical Visual Question Answering
Stars: ✭ 16 (-83.33%)
bottom-up-features: Bottom-up feature extractor implemented in PyTorch.
Stars: ✭ 62 (-35.42%)
Vqa regat: Research code for the ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering"
Stars: ✭ 129 (+34.38%)
rosita: ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration
Stars: ✭ 36 (-62.5%)
vqa-soft: Accompanying code for the CVPR 2017 VQA workshop paper "A Simple Loss Function for Improving the Convergence and Accuracy of Visual Question Answering Models".
Stars: ✭ 14 (-85.42%)
FigureQA-baseline: TensorFlow implementation of the CNN-LSTM, Relation Network, and text-only baselines for the paper "FigureQA: An Annotated Figure Dataset for Visual Reasoning"
Stars: ✭ 28 (-70.83%)
DVQA dataset: A bar-chart question answering dataset presented at CVPR 2018
Stars: ✭ 20 (-79.17%)
VideoNavQA: An alternative EQA paradigm and informative benchmark + models (BMVC 2019, ViGIL 2019 spotlight)
Stars: ✭ 22 (-77.08%)
Vqa Tensorflow: TensorFlow implementation of "Deeper LSTM + normalized CNN for Visual Question Answering"
Stars: ✭ 98 (+2.08%)
just-ask: [TPAMI Special Issue on ICCV 2021 Best Papers, Oral] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
Stars: ✭ 57 (-40.62%)
AoA-pytorch: A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering
Stars: ✭ 33 (-65.62%)
Mullowbivqa: Hadamard Product for Low-rank Bilinear Pooling
Stars: ✭ 57 (-40.62%)
probnmn-clevr: Code for the ICML 2019 paper "Probabilistic Neural-symbolic Models for Interpretable Visual Question Answering" [long oral]
Stars: ✭ 63 (-34.37%)
iMIX: A framework for Multimodal Intelligence research from Inspur HSSLAB.
Stars: ✭ 21 (-78.12%)
ACE: Code for our paper "Neural Network Attributions: A Causal Perspective" (ICML 2019).
Stars: ✭ 47 (-51.04%)
neuro-symbolic-ai-soc: Neuro-Symbolic Visual Question Answering on Sort-of-CLEVR using PyTorch
Stars: ✭ 41 (-57.29%)
Vqa: CloudCV Visual Question Answering Demo
Stars: ✭ 57 (-40.62%)
Transformer-MM-Explainability: [ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network, including examples for DETR and VQA.
Stars: ✭ 484 (+404.17%)
mmgnn textvqa: A PyTorch implementation of the CVPR 2020 paper "Multi-Modal Graph Neural Network for Joint Reasoning on Vision and Scene Text"
Stars: ✭ 41 (-57.29%)
Conditional Batch Norm: PyTorch implementation of the NIPS 2017 paper "Modulating early visual processing by language"
Stars: ✭ 51 (-46.87%)
hcrn-videoqa: Implementation for the paper "Hierarchical Conditional Relation Networks for Video Question Answering" (Le et al., CVPR 2020, Oral)
Stars: ✭ 111 (+15.63%)
DeFMO: [CVPR 2021] Deblurring and Shape Recovery of Fast Moving Objects
Stars: ✭ 144 (+50%)
Bottom Up Attention: Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome
Stars: ✭ 989 (+930.21%)
Scribe-py: Regulatory network inference with Direct Information in Python
Stars: ✭ 28 (-70.83%)