
yfzhang114 / Generalization-Causality

License: MIT
Reading notes on a wide range of research in domain generalization, domain adaptation, causality, robustness, prompting, optimization, and generative models.


This is a repository for organizing articles related to domain generalization, OOD, optimization, data-centric learning, prompt learning, robustness, and causality. Most papers are linked to my reading notes. Feel free to visit my personal homepage and contact me for collaboration and discussion.

About Me 🔆

I'm a first-year Ph.D. student at the State Key Laboratory of Pattern Recognition, University of Chinese Academy of Sciences, advised by Prof. Tieniu Tan. I have also spent time at Microsoft, advised by Prof. Jingdong Wang.

🔥 Updated 2022-8-13

  • Our paper Towards Principled Disentanglement for Domain Generalization has been selected for an ORAL presentation. 😊 [Reading Notes] [Code] [paper]
  • Recent domain generalization and domain adaptation papers from ICML have been added.

Table of Contents (ongoing)

Generalization/OOD

2022

  1. CVPR Oral Towards Principled Disentanglement for Domain Generalization (applies disentanglement to DG, with new theory and a new method)
  2. Arxiv How robust are pre-trained models to distribution shift? (self-supervised models are more robust than supervised and unsupervised ones; retraining the classifier on a small amount of OOD data gives a large boost)
  3. ICML A Closer Look at Smoothness in Domain Adversarial Training (smoothing the classification loss improves the generalization of domain-adversarial training)
  4. CVPR Bayesian Invariant Risk Minimization (mitigates IRM's degeneration to ERM when the model overfits)
  5. CVPR Towards Unsupervised Domain Generalization (studies how pre-training affects DG and designs an unsupervised pre-training algorithm on DG datasets)
  6. CVPR PCL: Proxy-based Contrastive Learning for Domain Generalization (directly applying supervised contrastive learning to DG works poorly; this paper proposes a workable approach)
  7. CVPR Style Neophile: Constantly Seeking Novel Styles for Domain Generalization (a new method that keeps producing data in novel styles)
  8. Arxiv WOODS: Benchmarks for Out-of-Distribution Generalization in Time Series Tasks (a collection of OOD benchmarks for time-series data)
  9. Arxiv A Broad Study of Pre-training for Domain Generalization and Adaptation (a thorough study of what pre-training contributes to DA and DG; simply using the best current backbone is enough for SOTA results)
  10. Arxiv Domain Generalization by Mutual-Information Regularization with Pre-trained Models (uses pre-trained features to guide fine-tuning and improve generalization)
  11. ICLR Oral A Fine-Grained Analysis on Distribution Shift (how to define distribution shift precisely and measure model robustness systematically)
  12. ICLR Oral Fine-Tuning Distorts Pretrained Features and Underperforms Out-of-Distribution (fine-tuning and linear probing complement each other)
  13. ICLR Spotlight Towards a Unified View of Parameter-Efficient Transfer Learning (a unified framework for parameter-efficient fine-tuning)
  14. ICLR Spotlight How Do Vision Transformers Work? (the desirable properties of Vision Transformers (ViTs))
  15. ICLR Spotlight On Predicting Generalization using GANs (predicts test error with a GAN trained on source-domain data)
  16. ICLR Poster Uncertainty Modeling for Out-of-Distribution Generalization (models feature uncertainty for DG; a new data-augmentation method)
  17. ICLR Poster Gradient Matching for Domain Generalization (encourages larger inner products between gradients from different domains)
  18. ICML DNA: Domain Generalization with Diversified Neural Averaging (classifier ensembling; discusses the connection between ensembles and DG both theoretically and empirically)
  19. ICML Model Agnostic Sample Reweighting for Out-of-Distribution Learning (a bi-level approach to finding an effective training-sample reweighting)
  20. ICML Sparse Invariant Risk Minimization (a global sparsity constraint that keeps spurious features from being used during training)
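
Several entries above reward agreement between per-domain gradients (e.g. "Gradient Matching for Domain Generalization", which encourages large inner products between gradients from different domains). A minimal numpy sketch of that inner-product penalty, assuming a linear model with squared loss (illustrative only, not the paper's exact objective):

```python
import numpy as np

def domain_grad(w, X, y):
    """Gradient of mean squared error 0.5*mean((Xw - y)^2) w.r.t. w."""
    return X.T @ (X @ w - y) / len(y)

def grad_matching_penalty(w, domains):
    """Negative sum of pairwise inner products of per-domain gradients:
    minimizing it encourages gradients from different domains to align."""
    grads = [domain_grad(w, X, y) for X, y in domains]
    penalty = 0.0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            penalty -= grads[i] @ grads[j]
    return penalty

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
domains = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    domains.append((X, y))

w = np.zeros(2)
# Far from the shared solution, all domain gradients point the same way,
# so the pairwise inner products are positive and the penalty is negative.
print(grad_matching_penalty(w, domains) < 0)  # True
```

Minimizing the task loss plus a multiple of this penalty steers the model toward solutions whose per-domain gradients agree.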

2021

  1. ICML Improved OOD Generalization via Adversarial Training and Pre-training (shows theoretically that a pre-trained model that is more robust to input perturbations provides a better initialization for generalization on downstream OOD data)
  2. ICCV CrossNorm and SelfNorm for Generalization under Distribution Shifts (a conceptually simple normalization technique for DG)
  3. ICCV A Style and Semantic Memory Mechanism for Domain Generalization (exploits intra-domain style invariance to improve generalization)
  4. Arxiv: Towards a Theoretical Framework of Out-of-Distribution Generalization (new theory)
  5. Arxiv (Yoshua Bengio) Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization (OOD meets the information bottleneck)
  6. Arxiv Generalization of Reinforcement Learning with Policy-Aware Adversarial Data Augmentation
  7. Arxiv Embracing the Dark Knowledge: Domain Generalization Using Regularized Knowledge Distillation (knowledge distillation as a regularizer)
  8. Arxiv Delving Deep into the Generalization of Vision Transformers under Distribution Shifts (a study of vision transformers' generalization)
  9. Arxiv Training Data Subset Selection for Regression with Controlled Generalization Error (selects a subset of a large training set while keeping comparable generalization)
  10. Arxiv (MIT) Measuring Generalization with Optimal Transport (a theoretical study of network complexity and generalization)
  11. Arxiv (SJTU) OoD-Bench: Benchmarking and Understanding Out-of-Distribution Generalization Datasets and Algorithms (shows that current OOD evaluation protocols are incomplete and proposes a new one)
  12. Arxiv (Tsinghua) Domain-Irrelevant Representation Learning for Unsupervised Domain Generalization (a new task: unsupervised DG, where source-domain labels are unavailable)
  13. ICML Oral: Can Subnetwork Structure be the Key to Out-of-Distribution Generalization? (uses lottery tickets to find subnetworks that generalize better)
  14. ICML Oral: Domain Generalization using Causal Matching (contrastive-loss feature alignment plus a feature-invariance constraint)
  15. ICML Oral: Just Train Twice: Improving Group Robustness without Training Group Information
  16. ICML Spotlight: Environment Inference for Invariant Learning (how to learn domain-invariant features without domain labels)
  17. ICLR Poster: Understanding the failure modes of out-of-distribution generalization (two causes of OOD failure)
  18. ICLR Poster: An Empirical Study of Invariant Risk Minimization (an empirical study of IRM, e.g., how the diversity of the observed domains affects its performance)
  19. ICLR Poster In Search of Lost Domain Generalization (a method without model selection is not a good method; how should models be selected on a validation set?)
  20. ICLR Poster Modeling the Second Player in Distributionally Robust Optimization (models the DRO uncertainty set with adversarial learning)
  21. ICLR Poster Learning perturbation sets for robust machine learning (learns perturbation sets with a generative model)
  22. ICLR Spotlight (Yoshua Bengio) Systematic generalisation with group invariant predictions (splits each class into different domains via environment inference, then constrains per-domain features to be consistent, avoiding spurious dependence)
  23. CVPR Oral: Reducing Domain Gap by Reducing Style Bias (treats channel-wise means as image style and reduces CNNs' reliance on style)
  24. AISTATS Linear Regression Games: Convergence Guarantees to Approximate Out-of-Distribution Solutions
  25. AISTATS Oral Does Invariant Risk Minimization Capture Invariance (IRM truly captures invariant features only under specific conditions)
  26. NeurIPS Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests (uses causal tools to design a practical algorithm connecting counterfactual reasoning with OOD generalization for effective "stress tests", e.g., flipping the gender information in a sentence and checking whether the sentiment prediction changes)
  27. NeurIPS Adaptive Risk Minimization: Learning to Adapt to Domain Shift (uses unlabeled data to better handle the distribution shift caused by new domains)
  28. NeurIPS An Empirical Investigation of Domain Generalization with Empirical Risk Minimizers (measures based on domain-adaptation theory do not accurately capture OOD generalization behavior)
  29. NeurIPS Spotlight On Inductive Biases for Heterogeneous Treatment Effect Estimation (proposes FlexTENet, which directly estimates the conditional causal effect τ instead of estimating μ1 and μ2 separately)
  30. NeurIPS Spotlight Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization (keeps updating the linear classification head at test time)
  31. NeurIPS Why Do Better Loss Functions Lead to Less Transferable Features? (studies how the choice of training objective affects the transferability of CNNs trained on ImageNet)
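
Entry 30 (the Test-Time Classifier Adjustment Module, T3A) replaces the trained linear head at test time with class prototypes built from pseudo-labeled test features. A hedged numpy sketch of the prototype idea only, omitting T3A's entropy-based support-set filtering:

```python
import numpy as np

def adjust_prototypes(W, b, feats):
    """Pseudo-label test features with the trained head (W, b), then replace
    each class weight vector with the mean feature assigned to that class."""
    pseudo = (feats @ W + b).argmax(axis=1)
    W_new = W.copy()
    for c in range(W.shape[1]):
        mask = pseudo == c
        if mask.any():
            W_new[:, c] = feats[mask].mean(axis=0)
    return W_new, pseudo

rng = np.random.default_rng(1)
# Two well-separated clusters of 2-D test features.
feats = np.vstack([rng.normal(-2, 0.3, size=(30, 2)),
                   rng.normal(+2, 0.3, size=(30, 2))])
W = np.array([[-1.0, 1.0], [-1.0, 1.0]])  # crude source-trained head
b = np.zeros(2)

W_new, pseudo = adjust_prototypes(W, b, feats)
print(np.round(W_new[:, 0]))  # prototype of class 0, near (-2, -2)
```

After adjustment the head's columns are the test-time class centroids, which often classify shifted features better than the frozen source head.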

2020

  1. Arxiv I-SPEC: An End-to-End Framework for Learning Transportable, Shift-Stable Models (treats domain adaptation as a causal-graph inference problem)
  2. Arxiv (Stanford) Distributionally Robust Losses for Latent Covariate Mixtures.
  3. NeurIPS Energy-based Out-of-distribution Detection (detects OOD samples with an energy score)
  4. NeurIPS Fairness without demographics through adversarially reweighted learning (adversarially up-weights hard examples so that the reweighted loss is larger for the classifier)
  5. NeurIPS Self-training Avoids Using Spurious Features Under Domain Shift (training on unlabeled target-domain data helps avoid spurious features)
  6. NeurIPS What shapes feature representations? Exploring datasets, architectures, and training (simplicity bias: neural networks prefer to fit "easy" features)
  7. Arxiv Invariant Risk Minimization (the seminal work: beyond empirical risk minimization, toward invariant risk minimization)
  8. ICLR Poster The Risks of Invariant Risk Minimization (a flaw of IRM: it fails when there are too few domains)
  9. ICLR Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization (GroupDRO: DRO with strong regularization)
  10. ICML An investigation of why overparameterization exacerbates spurious correlations (overparameterization is a key reason networks exploit spurious correlations)
  11. ICML UDA workshop Learning Robust Representations with Score Invariant Learning (non-normalized statistical models: an energy-based approach to OOD)
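
The IRM entry above (item 7) adds a per-environment penalty ‖∇_{w|w=1} R_e(w·Φ)‖² to the risk. For a linear predictor f(x) = x·θ with squared loss, the gradient with respect to the dummy scalar w has a closed form; a minimal numpy sketch of the IRMv1 penalty (illustrative, not the paper's full bi-level objective):

```python
import numpy as np

def irmv1_penalty(theta, envs):
    """IRMv1 penalty for squared loss: for each environment e, the squared
    gradient of R_e(w * f) w.r.t. the dummy scalar w, evaluated at w = 1."""
    penalty = 0.0
    for X, y in envs:
        f = X @ theta
        grad_w = np.mean(2.0 * (f - y) * f)  # d/dw mean((w*f - y)^2) at w=1
        penalty += grad_w ** 2
    return penalty

rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.5])
envs = []
for _ in range(2):
    X = rng.normal(size=(200, 2))
    y = X @ theta_true            # noise-free so the invariant fit is exact
    envs.append((X, y))

# At the invariant solution the residual is zero, so the penalty vanishes.
print(irmv1_penalty(theta_true, envs))          # 0.0
print(irmv1_penalty(theta_true * 2, envs) > 0)  # True
```

Training minimizes the pooled empirical risk plus a large multiple of this penalty, pushing θ toward predictors that are simultaneously optimal in every environment.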

OLD but Important

  1. ICML 2018 Oral (Stanford) Fairness Without Demographics in Repeated Loss Minimization.
  2. ICCV 2017 CCSA: Unified Deep Supervised Domain Adaptation and Generalization (aligns source- and target-domain samples with a contrastive loss)
  3. JSTOR (Peters) Causal inference by using invariant prediction: identification and confidence intervals.
  4. ICML 2015 Towards a Learning Theory of Cause-Effect Inference (causal inference with kernel mean embeddings and a classifier)
  5. IJCAI 2020 (CMU) Causal Discovery from Heterogeneous/Nonstationary Data

Survey

  1. Causality: a summary of basic concepts
  2. Domain Adaptation: basic concepts and paper walk-throughs

Robustness/Adaptation/Fairness

2022

  1. Arxiv Are Vision Transformers Robust to Spurious Correlations? (a study of ViT robustness; larger models and more pre-training data markedly improve robustness to spurious correlations, while ViTs with little pre-training data are worse than CNNs)
  2. CVPR Exploring Domain-Invariant Parameters for Source-Free Domain Adaptation (instead of the domain-invariant features pursued by prior work, this work searches for domain-invariant parameters)
  3. CVPR CENet: Consolidation-and-Exploration Network for Continuous Domain Adaptation (claims to introduce the concept of continuous DA, although ICML 2018 already proposed it?)
  4. CVPR Slimmable Domain Adaptation (adaptation should target not only the data; this paper also adapts to downstream devices)

2021

  1. ICLR Poster Learning perturbation sets for robust machine learning (learns perturbation sets with a generative model)
  2. ICCV Generalized Source-free Domain Adaptation (how to adapt with only a source-pretrained model, without source data, while preserving source-domain performance)
  3. ICCV Adaptive Adversarial Network for Source-free Domain Adaptation (searches for a new target-specific classifier during optimization and adapts it to the target features)
  4. ICCV Gradient Distribution Alignment Certificates Better Adversarial Domain Adaptation (reduces the cross-domain discrepancy of feature-gradient distributions via adversarial learning between the feature extractor and a discriminator)
  5. FAccT Algorithmic recourse: from counterfactual explanations to interventions (introduces the concept of causal recourse)
  6. ICML Workshop On the Fairness of Causal Algorithmic Recourse (builds on group recourse and accounts for the causal interactions among variables)
  7. NeurIPS Domain Adaptation with Invariant Representation Learning: What Transformations to Learn? (why does DA need two encoders?)
  8. NeurIPS Gradual Domain Adaptation without Indexed Intermediate Domains (gradual domain adaptation (GDA) without domain index labels)
  9. NeurIPS Implicit Semantic Response Alignment for Partial Domain Adaptation (how partial DA can exploit the extra classes)
  10. NeurIPS The balancing principle for parameter choice in distance-regularized domain adaptation (how to choose the trade-off parameter between the classification loss and the regularizer)

Before 2021

  1. Available at Optimization Online Kullback-Leibler Divergence Constrained Distributionally Robust Optimization (the pioneering work: constructs the DRO uncertainty set with KL divergence)
  2. ICLR 2018 Oral Certifying Some Distributional Robustness with Principled Adversarial Training (constructs the uncertainty set with a Wasserstein ball, for adversarial robustness)
  3. ICML 2018 Oral Does Distributionally Robust Supervised Learning Give Robust Classifiers? (is DRO always better than ERM? Not necessarily — extra information must be introduced)
  4. NeurIPS 2019 Distributionally Robust Optimization and Generalization in Kernel Methods (models the uncertainty set with MMD (maximum mean discrepancy), yielding MMD DRO)
  5. EMNLP 2019 Distributionally Robust Language Modeling (a classic application of coarse-grained mixture models in NLP)
  6. Arxiv 2019 Equalizing recourse across groups (basic recourse is measured per sample; this paper gives a group-level recourse metric)
  7. ICML 2020 Oral Continuously indexed domain adaptation (continuously varying domains)
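
A recurring DRO recipe in this list (GroupDRO in the 2020 section above) keeps a weight per group and up-weights whichever group currently has the highest loss, via exponentiated-gradient updates q_g ∝ q_g·exp(η·L_g); the training loss is then Σ_g q_g·L_g. A minimal numpy sketch of the weight update:

```python
import numpy as np

def group_dro_weights(q, group_losses, eta=0.1):
    """One exponentiated-gradient step on the group weights:
    q_g <- q_g * exp(eta * L_g), renormalized to a distribution."""
    q = q * np.exp(eta * np.asarray(group_losses))
    return q / q.sum()

q = np.ones(3) / 3
losses = [0.2, 1.5, 0.4]   # group 1 is the hardest
for _ in range(50):
    q = group_dro_weights(q, losses)

print(np.argmax(q))  # 1: the worst group comes to dominate the weights
```

With fixed losses the weights concentrate on the worst group; in training, the losses change each step as the model improves on whichever group is currently up-weighted.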

Causality

Individual Treatment Estimation

  1. ICML 2017 Estimating individual treatment effect: generalization bounds and algorithms (first introduces ITE, bounds it with domain-adaptation theory, and designs an effective algorithm accordingly)
  2. NeurIPS 2019 Adapting Neural Networks for the Estimation of Treatment Effects (core idea: there is no need to use all covariates X for adjustment)
  3. PNAS 2019 Meta-learners for Estimating Heterogeneous Treatment Effects using Machine Learning (proposes the X-learner framework, which is very effective when the treatment groups are highly imbalanced)
  4. AAAI 2020 Learning Counterfactual Representations for Estimating Individual Dose-Response Curves (new metrics, a new dataset, and a training strategy that allow estimating outcomes for an arbitrary number of treatments)
  5. ICLR 2021 Oral: VCNet and Functional Targeted Regularization For Learning Causal Effects of Continuous Treatments (based on the varying-coefficient model; makes each treatment's branch a function of the treatment instead of designing separate branches, achieving true continuity)
  6. Arxiv 2021 Neural Counterfactual Representation Learning for Combinations of Treatments (considers the harder case of multiple treatments acting jointly)
  7. NeurIPS 2021 Spotlight On Inductive Biases for Heterogeneous Treatment Effect Estimation (proposes FlexTENet, which directly estimates the conditional causal effect τ instead of estimating μ1 and μ2 separately)
  8. NeurIPS 2021 Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms (analyzes the recent algorithmic paradigms for individual treatment effect estimation)
  9. Arxiv 2021 Cycle-Balanced Representation Learning For Counterfactual Inference
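
A common baseline behind the meta-learner entries above (the X-learner paper compares against it) is the T-learner: fit separate outcome models on treated and control units, then estimate the CATE as their difference, τ(x) = μ̂1(x) − μ̂0(x). A minimal numpy sketch with least-squares outcome models (illustrative; the listed papers use much richer learners):

```python
import numpy as np

def t_learner_cate(X, t, y, X_query):
    """Fit mu0 on control (t=0) and mu1 on treated (t=1) by least squares;
    the CATE estimate is tau(x) = mu1(x) - mu0(x)."""
    X1 = np.hstack([X, np.ones((len(X), 1))])            # add intercept
    Xq = np.hstack([X_query, np.ones((len(X_query), 1))])
    w0, *_ = np.linalg.lstsq(X1[t == 0], y[t == 0], rcond=None)
    w1, *_ = np.linalg.lstsq(X1[t == 1], y[t == 1], rcond=None)
    return Xq @ w1 - Xq @ w0

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 1))
t = rng.integers(0, 2, size=400)
# Ground-truth heterogeneous effect: tau(x) = 2 + x
y = X[:, 0] + t * (2 + X[:, 0]) + 0.05 * rng.normal(size=400)

tau_hat = t_learner_cate(X, t, y, np.array([[0.0], [1.0]]))
print(np.round(tau_hat, 1))  # approximately [2. 3.]
```

The X-learner's refinement is precisely to cross-impute counterfactual outcomes with these two fitted models when one treatment arm has far fewer samples.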

Data-Centric/Prompt

Data Centric

  1. AISTATS 2019 Towards Optimal Transport with Global Invariances (how to align two datasets)
  2. NeurIPS 2020 Geometric Dataset Distances via Optimal Transport (how to define a distance between two datasets)
  3. ICML 2021 Dataset Dynamics via Gradient Flows in Probability Space (how to optimize one dataset to be as similar as possible to another)
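
All three entries above reduce to computing an optimal-transport coupling between two empirical distributions, for which entropic regularization (Sinkhorn iterations) is the standard computational tool. A minimal numpy sketch of Sinkhorn between two point clouds (illustrative; the dataset-distance paper's ground cost additionally compares label distributions):

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.5, iters=2000):
    """Entropic OT: alternately scale K = exp(-C/eps) so the transport
    plan's row/column marginals match the weights a and b."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # transport plan
    return P, (P * C).sum()           # plan and transport cost

rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.5, size=(20, 2))   # "dataset" 1
Y = rng.normal(1.0, 0.5, size=(30, 2))   # "dataset" 2
C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
a = np.ones(20) / 20
b = np.ones(30) / 30

P, cost = sinkhorn(C, a, b)
print(np.allclose(P.sum(axis=1), a))  # True: row marginals match
```

The returned cost is the (entropy-regularized) transport distance between the two clouds; shrinking `eps` sharpens the plan at the price of slower convergence.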

Prompts

  1. ACL 2021 WARP: Word-level Adversarial ReProgramming (the pioneering work on continuous prompts)
  2. Arxiv 2021 (Stanford) Prefix-Tuning: Optimizing Continuous Prompts for Generation (continuous prompts applied to NLG tasks)
  3. Arxiv 2021 (Google) The Power of Scale for Parameter-Efficient Prompt Tuning (the simplest prefix training so far: only add a prefix to the input)
  4. Arxiv 2021 (DeepMind) Multimodal Few-Shot Learning with Frozen Language Models (an image encoder turns images into a dynamic prefix that is fed into the LM together with the text)
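
The mechanic shared by the prefix/prompt-tuning entries above: freeze the model, prepend k trainable embedding vectors to the input embeddings, and train only those vectors. A minimal numpy sketch of the input construction (shapes only; Prefix-Tuning actually inserts prefixes into every layer's keys and values, while prompt tuning operates on the embedded input as here):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d_model, k_prefix = 100, 16, 5

embed = rng.normal(size=(vocab, d_model))       # frozen embedding table
prefix = rng.normal(size=(k_prefix, d_model))   # the ONLY trainable params

def build_input(token_ids):
    """Prompt tuning a la 'The Power of Scale': concatenate the trainable
    prefix embeddings in front of the frozen token embeddings."""
    return np.concatenate([prefix, embed[token_ids]], axis=0)

x = build_input(np.array([3, 7, 7, 1]))
print(x.shape)  # (9, 16): 5 prefix vectors + 4 token embeddings
```

Gradients flow only into `prefix`, so a task is stored in k·d_model numbers instead of a full fine-tuned model.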

Optimization/GNN/Energy/Generative/Others

Optimization

  1. ICML 2021 An End-to-End Framework for Molecular Conformation Generation via Bilevel Programming
  2. NeurIPS 2021 Deep Structural Causal Models for Tractable Counterfactual Inference
  3. ICML 2018 Bilevel Programming for Hyperparameter Optimization and Meta-Learning (models hyperparameter search and meta-learning as bilevel programming)
  4. NeurIPS 2020 Energy-based Out-of-distribution Detection
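
The energy-based OOD entry above scores an input by the free energy of its logits, E(x) = −T·logsumexp(f(x)/T); in-distribution inputs typically receive lower energy than OOD inputs. A minimal numpy sketch of the score:

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Free energy of the logits: E(x) = -T * logsumexp(f(x)/T).
    Lower energy -> more in-distribution under the trained classifier."""
    z = logits / T
    m = z.max(axis=-1, keepdims=True)          # stable logsumexp
    return -T * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

confident = np.array([[10.0, 0.0, 0.0]])   # peaked logits (ID-like)
uniform = np.array([[0.0, 0.0, 0.0]])      # flat logits (OOD-like)
print(energy_score(confident) < energy_score(uniform))  # [ True]
```

In practice a threshold on this score (chosen on validation data) separates ID from OOD inputs; the paper also fine-tunes with an energy-margin loss, which this sketch omits.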

LTH (Lottery Ticket Hypothesis)

  1. NeurIPS 2020: The Lottery Ticket Hypothesis for Pre-trained BERT Networks (lottery tickets for BERT fine-tuning)
  2. ICML 2021 Oral: Can Subnetwork Structure be the Key to Out-of-Distribution Generalization? (lottery tickets for OOD generalization)
  3. CVPR 2021: The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models (lottery tickets for vision-model pre-training)
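
The procedure common to the lottery-ticket entries above: train, prune the smallest-magnitude fraction of remaining weights, rewind the survivors to their initial values, and repeat. A minimal numpy sketch of one prune-and-rewind round (mask logic only; the actual training between rounds is omitted):

```python
import numpy as np

def prune_and_rewind(w_trained, w_init, mask, p=0.2):
    """Zero out the fraction p of currently unmasked weights with the
    smallest trained magnitude, then rewind the survivors to w_init."""
    alive = np.flatnonzero(mask)
    k = int(p * alive.size)
    drop = alive[np.argsort(np.abs(w_trained[alive]))[:k]]
    mask = mask.copy()
    mask[drop] = False
    return w_init * mask, mask

rng = np.random.default_rng(0)
w_init = rng.normal(size=100)
w_trained = w_init + rng.normal(scale=0.5, size=100)  # stand-in for training
mask = np.ones(100, dtype=bool)

w, mask = prune_and_rewind(w_trained, w_init, mask)   # 80 weights survive
w, mask = prune_and_rewind(w_trained, w_init, mask)   # 64 weights survive
print(mask.sum())  # 64
```

Iterating this loop yields the sparse "winning ticket" subnetwork that, per the hypothesis, trains to full accuracy from its original initialization.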

Generative Model (mainly diffusion model)

  1. Estimation of Non-Normalized Statistical Models by Score Matching (uses score matching, via integration by parts, to estimate non-normalized distributions)
  2. UAI 2019 Sliced Score Matching: A Scalable Approach to Density and Score Estimation (projects the high-dimensional gradient field onto random directions and performs score matching on the resulting one-dimensional scalar fields)
  3. NeurIPS 2019 Oral Generative Modeling by Estimating Gradients of the Data Distribution (adds noise to strengthen Langevin MCMC's ability to model low-density regions)
  4. NeurIPS 2020 Improved Techniques for Training Score-Based Generative Models (analyzes and fixes failure cases of score-based generative models; generation quality starts to rival GANs)
  5. NeurIPS 2020 Denoising Diffusion Probabilistic Models (another generative paradigm besides VAEs, GANs, and flows)
  6. ICLR 2021 Outstanding Paper Award Score-Based Generative Modeling through Stochastic Differential Equations
  7. Arxiv 2021 Diffusion Models Beat GANs on Image Synthesis (diffusion models surpass GANs at image synthesis)
  8. Arxiv 2021 Variational Diffusion Models
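
The diffusion entries above share one training recipe: sample a noised input x_t = √ᾱ_t·x₀ + √(1−ᾱ_t)·ε with ᾱ_t = ∏(1−β_s), then regress the network's output onto the noise ε (equivalently, the score). A minimal numpy sketch of the forward noising step and its regression target, using the DDPM paper's linear β schedule:

```python
import numpy as np

def make_alpha_bar(T=1000, beta_min=1e-4, beta_max=0.02):
    """Linear beta schedule; alpha_bar_t = prod_{s<=t} (1 - beta_s)."""
    betas = np.linspace(beta_min, beta_max, T)
    return np.cumprod(1.0 - betas)

def forward_noise(x0, t, alpha_bar, rng):
    """q(x_t | x_0): x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps   # eps is the denoising network's regression target

rng = np.random.default_rng(0)
alpha_bar = make_alpha_bar()
x0 = rng.normal(size=(8, 2))
xt, eps = forward_noise(x0, t=999, alpha_bar=alpha_bar, rng=rng)

# At the final step alpha_bar is tiny, so x_t is almost pure noise.
print(alpha_bar[-1] < 1e-4)  # True
```

Training minimizes ‖ε − ε_θ(x_t, t)‖² over random t; sampling then runs the learned reverse process from pure noise, which this sketch does not cover.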

Implicit Neural Representation (INR)

  1. NeurIPS 2020 (Oral): Implicit Neural Representations with Periodic Activation Functions
  2. SIGGRAPH Asia 2020: X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation
  3. CVPR 2021 (Oral): Learning Continuous Image Representation with Local Implicit Image Function
  4. CVPR 2021 Adversarial Generation of Continuous Images
  5. NeurIPS 2021 Learning Signal-Agnostic Manifolds of Neural Fields
  6. Arxiv 2021 Generative Models as Distributions of Functions
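
The first entry above (SIREN) represents a signal as an MLP with sine activations, f(x) = W₂·sin(ω₀(W₁x + b₁)) + b₂, mapping coordinates to signal values. A minimal numpy forward pass with random weights (a real INR would fit these weights to a target image or shape; the initialization scale here is a rough stand-in for the paper's scheme):

```python
import numpy as np

def siren_forward(coords, params, w0=30.0):
    """Tiny SIREN: sine activations on hidden layers, linear output layer."""
    h = coords
    *hidden, (W_out, b_out) = params
    for W, b in hidden:
        h = np.sin(w0 * (h @ W + b))
    return h @ W_out + b_out

rng = np.random.default_rng(0)
params = [
    (rng.uniform(-1, 1, size=(2, 32)) / 2, np.zeros(32)),    # (x, y) -> hidden
    (rng.uniform(-1, 1, size=(32, 32)) / 32, np.zeros(32)),
    (rng.uniform(-1, 1, size=(32, 1)) / 32, np.zeros(1)),    # hidden -> value
]
# An 8x8 grid of (x, y) coordinates in [-1, 1]^2, e.g. pixel centers.
coords = np.stack(np.meshgrid(np.linspace(-1, 1, 8),
                              np.linspace(-1, 1, 8)), -1).reshape(-1, 2)
out = siren_forward(coords, params)
print(out.shape)  # (64, 1): one predicted value per queried coordinate
```

Because the signal lives in the weights, it can be queried at any continuous coordinate, which is what the interpolation and continuous-image entries above exploit.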

Survey

  1. Survey: energy-based models