olivesgatech / TA3N

License: MIT
[ICCV 2019 Oral] TA3N: https://github.com/cmhungsteve/TA3N (most up-to-date repo)

Programming Languages

python
shell

Projects that are alternatives to or similar to TA3N

KD3A
Here is the official implementation of the model KD3A in paper "KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowledge Distillation".
Stars: ✭ 63 (+40%)
Mutual labels:  transfer-learning, unsupervised-learning, domain-adaptation
temporal-ssl
Video Representation Learning by Recognizing Temporal Transformations. In ECCV, 2020.
Stars: ✭ 46 (+2.22%)
Mutual labels:  transfer-learning, unsupervised-learning, action-recognition
Transferlearning
Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, tutorials. (迁移学习: transfer learning)
Stars: ✭ 8,481 (+18746.67%)
Mutual labels:  transfer-learning, unsupervised-learning, domain-adaptation
Awesome Transfer Learning
Best transfer learning and domain adaptation resources (papers, tutorials, datasets, etc.)
Stars: ✭ 1,349 (+2897.78%)
Mutual labels:  transfer-learning, unsupervised-learning, domain-adaptation
Weakly Supervised 3d Object Detection
Weakly Supervised 3D Object Detection from Point Clouds (VS3D), ACM MM 2020
Stars: ✭ 61 (+35.56%)
Mutual labels:  transfer-learning, unsupervised-learning
Transfer Learning Library
Transfer-Learning-Library
Stars: ✭ 678 (+1406.67%)
Mutual labels:  transfer-learning, domain-adaptation
Cross Domain ner
Cross-domain NER using cross-domain language modeling, code for ACL 2019 paper
Stars: ✭ 67 (+48.89%)
Mutual labels:  transfer-learning, domain-adaptation
Ddc Transfer Learning
A simple implementation of Deep Domain Confusion: Maximizing for Domain Invariance
Stars: ✭ 83 (+84.44%)
Mutual labels:  transfer-learning, domain-adaptation
L2c
Learning to Cluster. A deep clustering strategy.
Stars: ✭ 262 (+482.22%)
Mutual labels:  transfer-learning, unsupervised-learning
Deep Transfer Learning
Deep Transfer Learning Papers
Stars: ✭ 68 (+51.11%)
Mutual labels:  transfer-learning, domain-adaptation
Awesome Domain Adaptation
A collection of AWESOME things about domain adaptation
Stars: ✭ 3,357 (+7360%)
Mutual labels:  transfer-learning, domain-adaptation
Video Classification
Tutorial for video classification / action recognition using 3D CNN / CNN+RNN on UCF101
Stars: ✭ 543 (+1106.67%)
Mutual labels:  transfer-learning, action-recognition
Multitask Learning
Awesome Multitask Learning Resources
Stars: ✭ 361 (+702.22%)
Mutual labels:  transfer-learning, domain-adaptation
He4o
He (和, "he" for Objective-C): an "information entropy reduction machine" system
Stars: ✭ 284 (+531.11%)
Mutual labels:  transfer-learning, unsupervised-learning
Seg Uncertainty
IJCAI2020 & IJCV 2020 🌇 Unsupervised Scene Adaptation with Memory Regularization in vivo
Stars: ✭ 202 (+348.89%)
Mutual labels:  transfer-learning, domain-adaptation
Libtlda
Library of transfer learners and domain-adaptive classifiers.
Stars: ✭ 71 (+57.78%)
Mutual labels:  transfer-learning, domain-adaptation
Convolutional Handwriting Gan
ScrabbleGAN: Semi-Supervised Varying Length Handwritten Text Generation (CVPR20)
Stars: ✭ 107 (+137.78%)
Mutual labels:  transfer-learning, domain-adaptation
Shot
code released for our ICML 2020 paper "Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation"
Stars: ✭ 134 (+197.78%)
Mutual labels:  transfer-learning, domain-adaptation
SHOT-plus
code for our TPAMI 2021 paper "Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer"
Stars: ✭ 46 (+2.22%)
Mutual labels:  transfer-learning, domain-adaptation
adapt
Awesome Domain Adaptation Python Toolbox
Stars: ✭ 46 (+2.22%)
Mutual labels:  transfer-learning, domain-adaptation

Temporal Attentive Alignment for Video Domain Adaptation


[Important] Please check here or https://github.com/cmhungsteve/TA3N for the most up-to-date repo!


This is the official PyTorch implementation of our papers:

Temporal Attentive Alignment for Large-Scale Video Domain Adaptation
Min-Hung Chen, Zsolt Kira, Ghassan AlRegib (Advisor), Jaekwon Yoo, Ruxin Chen, Jian Zheng
International Conference on Computer Vision (ICCV), 2019 [Oral (acceptance rate: 4.6%)]
[arXiv][Oral][Poster][Open Access][Blog]

Temporal Attentive Alignment for Video Domain Adaptation
Min-Hung Chen, Zsolt Kira, Ghassan AlRegib (Advisor)
CVPR Workshop (Learning from Unlabeled Videos), 2019
[arXiv]

Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos is still not well-explored. Most previous works only evaluate performance on small-scale datasets which are saturated. Therefore, we first propose two large-scale video DA datasets with much larger domain discrepancy: UCF-HMDB_full and Kinetics-Gameplay. Second, we investigate different DA integration methods for videos, and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets.


Contents


Requirements

  • Supports Python 3.6, PyTorch 0.4, CUDA 9.0, and cuDNN 7.1.4
  • Install all the required libraries with: pip install -r requirements.txt (a version-check sketch follows this list)
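
A quick way to confirm your environment matches these versions (a minimal sketch; the expected values in the comments follow the list above):

  import torch

  # Minimal environment check against the versions listed above.
  print(torch.__version__)               # expect 0.4.x
  print(torch.version.cuda)              # expect 9.0
  print(torch.backends.cudnn.version())  # expect 7104 for cuDNN 7.1.4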

Dataset Preparation

Data structure

You need to extract frame-level features for each video to run the code. To extract features, please check dataset_preparation/.

Folder Structure:

DATA_PATH/
  DATASET/
    list_DATASET_SUFFIX.txt
    RGB/
      CLASS_01/
        VIDEO_0001.mp4
        VIDEO_0002.mp4
        ...
      CLASS_02/
      ...

    RGB-Feature/
      VIDEO_0001/
        img_00001.t7
        img_00002.t7
        ...
      VIDEO_0002/
      ...

RGB-Feature/ contains all the feature vectors for training/testing. RGB/ contains all the raw videos.

There should be at least two DATASET folders: the source training set and the validation set. If you want to do domain adaptation, you also need another DATASET: the target training set.
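
As a sanity check, you can load one extracted feature file (a minimal sketch; it assumes the .t7 files were written with torch.save, i.e. they are PyTorch tensors despite the Torch7-style extension):

  import torch

  # Load one frame-level feature vector from the layout above
  # (DATA_PATH/DATASET are the same placeholders as in the folder structure).
  feat = torch.load('DATA_PATH/DATASET/RGB-Feature/VIDEO_0001/img_00001.t7')
  print(feat.shape)  # feature dimension depends on the backbone used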

Input data

File lists for training/validation

The file list list_DATASET_SUFFIX.txt is required for data feeding. Each line in the list contains the full path of the video's feature folder, the number of frames, and the class index. It looks like:

DATA_PATH/DATASET/RGB-Feature/VIDEO_0001/ 100 0
DATA_PATH/DATASET/RGB-Feature/VIDEO_0002/ 150 1
......

To generate the file list, please check dataset_preparation/.
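
The scripts in dataset_preparation/ are the authoritative way to do this; purely for illustration, here is a minimal sketch of producing such a list (write_file_list and class_index are hypothetical names, and the frame count is taken as the number of feature files in each video folder):

  import os

  # Hypothetical sketch: write one line per video in the required
  # "<feature folder>/ <frame count> <class index>" format.
  def write_file_list(feature_root, class_index, out_path):
      with open(out_path, 'w') as f:
          for video in sorted(os.listdir(feature_root)):
              video_dir = os.path.join(feature_root, video)
              num_frames = len(os.listdir(video_dir))
              label = class_index[video]  # class mapping supplied by the user
              f.write('%s/ %d %d\n' % (video_dir, num_frames, label))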


Usage

  • training/validation: Run ./script_train_val.sh

All the commonly used variables/parameters have comments at the end of the line. Please check Options.

Training

All the outputs will be under the directory exp_path.

  • Outputs:
    • model weights: checkpoint.pth.tar, model_best.pth.tar (see the inspection sketch after this list)
    • log files: train.log, train_short.log, val.log, val_short.log
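
A minimal sketch of inspecting a saved checkpoint (the exact keys depend on how the training code calls torch.save; the ones in the comment are common conventions, not guaranteed):

  import torch

  # Peek inside a checkpoint without instantiating the model.
  ckpt = torch.load('exp_path/model_best.pth.tar', map_location='cpu')
  print(list(ckpt.keys()))  # e.g. 'epoch', 'state_dict' (assumed key names)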

Testing

You can choose one of the saved model weights (e.g. model_best.pth.tar) for testing. All the outputs will be under the directory exp_path.

  • Outputs:
    • score_data: used to check the model output (scores_XXX.npz; see the inspection sketch after this list)
    • confusion matrix: confusion_matrix_XXX.png and confusion_matrix_XXX-topK.txt
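
A minimal sketch of checking the saved scores (the XXX suffix depends on your experiment settings, so substitute the actual filename):

  import numpy as np

  # List the arrays stored in a score archive produced by testing.
  data = np.load('exp_path/scores_XXX.npz')  # add allow_pickle=True if it stores Python objects
  print(data.files)  # names of the stored arrays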

Options

Domain Adaptation

In ./script_train_val.sh, there are several options related to our DA approaches.

  • use_target: switch the DA mode on/off
    • none: do not use target data (no DA)
    • uSv/Sv: use target data in an unsupervised/supervised way

More options

For more details of all the arguments, please check opts.py.

Notes

The options in the scripts have comments with the following types:

  • no comment: users can still change it, but doing so is NOT recommended (it may require code changes or produce different experimental results)
  • comments with choices (e.g. true | false): can only choose from the listed choices
  • comments saying "depend on users": entirely up to the user (mostly related to data paths)

Citation

If you find this repository useful, please cite our papers:

@inproceedings{chen2019temporal,
  title={Temporal attentive alignment for large-scale video domain adaptation},
  author={Chen, Min-Hung and Kira, Zsolt and AlRegib, Ghassan and Yoo, Jaekwon and Chen, Ruxin and Zheng, Jian},
  booktitle={IEEE International Conference on Computer Vision (ICCV)},
  year={2019},
  url={https://arxiv.org/abs/1907.12743}
}

@article{chen2019taaan,
  title={Temporal Attentive Alignment for Video Domain Adaptation},
  author={Chen, Min-Hung and Kira, Zsolt and AlRegib, Ghassan},
  journal={CVPR Workshop on Learning from Unlabeled Videos},
  year={2019},
  url={https://arxiv.org/abs/1905.10861}
}

Acknowledgments

Some code is borrowed from TSN, pytorch-tsn, TRN-pytorch, and Xlearn.


Contact

Min-Hung Chen
cmhungsteve AT gatech DOT edu
