
rehg-lab / lowshot-shapebias

Licence: other
Learning low-shot object classification with explicit shape bias learned from point clouds

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives to or similar to lowshot-shapebias

FSL-Mate
FSL-Mate: A collection of resources for few-shot learning (FSL).
Stars: ✭ 1,346 (+3537.84%)
Mutual labels:  few-shot, few-shot-learning, low-shot
few-shot-lm
The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021)
Stars: ✭ 32 (-13.51%)
Mutual labels:  few-shot, few-shot-learning
SimpleView
Official Code for ICML 2021 paper "Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline"
Stars: ✭ 95 (+156.76%)
Mutual labels:  point-cloud, 3d-vision
MinkLocMultimodal
MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition
Stars: ✭ 65 (+75.68%)
Mutual labels:  point-cloud, 3d-vision
RandLA-Net-pytorch
🍀 Pytorch Implementation of RandLA-Net (https://arxiv.org/abs/1911.11236)
Stars: ✭ 69 (+86.49%)
Mutual labels:  3d-vision, pytorch-implementation
awesome-point-cloud-deep-learning
Paper list of deep learning on point clouds.
Stars: ✭ 39 (+5.41%)
Mutual labels:  point-cloud, 3d-vision
Transferlearning
Transfer learning / domain adaptation / domain generalization / multi-task learning etc. Papers, codes, datasets, applications, tutorials. - transfer learning
Stars: ✭ 8,481 (+22821.62%)
Mutual labels:  few-shot, few-shot-learning
attMPTI
[CVPR 2021] Few-shot 3D Point Cloud Semantic Segmentation
Stars: ✭ 118 (+218.92%)
Mutual labels:  point-cloud, few-shot-learning
CurveNet
Official implementation of "Walk in the Cloud: Learning Curves for Point Clouds Shape Analysis", ICCV 2021
Stars: ✭ 94 (+154.05%)
Mutual labels:  point-cloud, 3d-vision
DeepI2P
DeepI2P: Image-to-Point Cloud Registration via Deep Classification. CVPR 2021
Stars: ✭ 130 (+251.35%)
Mutual labels:  point-cloud, 3d-vision
MinkLoc3D
MinkLoc3D: Point Cloud Based Large-Scale Place Recognition
Stars: ✭ 83 (+124.32%)
Mutual labels:  point-cloud, 3d-vision
simclr-pytorch
PyTorch implementation of SimCLR: supports multi-GPU training and closely reproduces results
Stars: ✭ 89 (+140.54%)
Mutual labels:  pytorch-implementation
torch-metrics
Metrics for model evaluation in pytorch
Stars: ✭ 99 (+167.57%)
Mutual labels:  pytorch-implementation
cloud to map
Algorithm that converts point cloud data into an occupancy grid
Stars: ✭ 26 (-29.73%)
Mutual labels:  point-cloud
awesome-lidar
😎 Awesome LIDAR list. The list includes LIDAR manufacturers, datasets, point cloud-processing algorithms, point cloud frameworks and simulators.
Stars: ✭ 217 (+486.49%)
Mutual labels:  point-cloud
CV Learning
Projects related to computer vision and image processing.
Stars: ✭ 20 (-45.95%)
Mutual labels:  3d-vision
attention-sampling-pytorch
This is a PyTorch implementation of the paper: "Processing Megapixel Images with Deep Attention-Sampling Models".
Stars: ✭ 25 (-32.43%)
Mutual labels:  pytorch-implementation
sp segmenter
Superpixel-based semantic segmentation, with object pose estimation and tracking. Provided as a ROS package.
Stars: ✭ 33 (-10.81%)
Mutual labels:  point-cloud
cpnet
Learning Video Representations from Correspondence Proposals (CVPR 2019 Oral)
Stars: ✭ 93 (+151.35%)
Mutual labels:  point-cloud
FedLab-benchmarks
Standard federated learning implementations in FedLab and FL benchmarks.
Stars: ✭ 49 (+32.43%)
Mutual labels:  pytorch-implementation

LSSB: Low-Shot Learning with Shape Bias - CVPR 2021


This repository contains PyTorch code for Using Shape to Categorize: Low-Shot Learning with an Explicit Shape Bias. Instructions for testing the pre-trained models from the paper and for training your own models are given below.

If you use our work in your research, please consider citing:

@inproceedings{stojanov21cvpr,
  title     = {Using Shape to Categorize: Low-Shot Learning with an Explicit Shape Bias},
  author    = {Stefan Stojanov and Anh Thai and James M. Rehg},
  booktitle = {CVPR},
  year      = {2021}
}

Contact

If you have any questions regarding the paper, data or code, please email Stefan Stojanov at [email protected]

Toys4K 3D Object Dataset


Details for downloading and rendering the Toys4K 3D object dataset are available in the Toys4K directory of this repository.

Results - Shape Bias Improves Low-Shot Generalization

The following results are averaged over three runs for FEAT and five runs for SimpleShot. (A brief sketch of the SimpleShot classification rule follows the tables.)

ModelNet

Method            1-shot 5-way   5-shot 5-way   1-shot 10-way   5-shot 10-way
SimpleShot               58.99          74.29           45.82           62.73
LSSB SimpleShot          57.57          74.39           49.84           64.21
FEAT                     58.30          71.54           45.41           60.44
LSSB FEAT                62.84          74.84           51.49           63.80

ShapeNet

Method            1-shot 5-way   5-shot 5-way   1-shot 10-way   5-shot 10-way   1-shot 20-way   5-shot 20-way
SimpleShot               66.73          80.93           53.37           70.32           41.09           59.09
LSSB SimpleShot          67.50          81.30           54.99           71.24           43.60           61.03
FEAT                     67.81          81.45           55.69           71.74           44.44           61.46
LSSB FEAT                70.24          80.95           58.45           70.95           47.03           60.43

Toys4K

Method            1-shot 5-way   5-shot 5-way   1-shot 10-way   5-shot 10-way   1-shot 20-way   5-shot 20-way
SimpleShot               68.78          83.69           55.22           73.58           43.05           62.64
LSSB SimpleShot          70.96          81.33           58.47           70.81           46.96           60.30
FEAT                     70.86          84.13           57.15           74.29           44.84           63.65
LSSB FEAT                71.58          81.45           59.09           71.00           47.45           59.98
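
For context, SimpleShot reduces low-shot classification to a nearest-centroid rule in feature space. Below is a minimal NumPy sketch of that rule, illustrative only; the actual SimpleShot pipeline additionally centers and L2-normalizes features, and this repository's implementation lives under lssb/lowshot.

import numpy as np

def simpleshot_predict(support_feats, support_labels, query_feats):
    # Label each query with the class whose mean support embedding
    # is closest in Euclidean distance.
    classes = np.unique(support_labels)
    centroids = np.stack([support_feats[support_labels == c].mean(axis=0)
                          for c in classes])
    dists = ((query_feats[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# 5-way 1-shot toy example with random 64-d features
support = np.random.randn(5, 64)
labels = np.arange(5)
queries = np.random.randn(10, 64)
print(simpleshot_predict(support, labels, queries))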

Installation

For this project we use miniconda to manage dependencies. After setting it up, you can create and activate the lssb environment:

conda env create -f environment.yml
conda activate lssb

pip install -e .

The most important dependencies to get right are PyTorch 1.6 and PyTorch Lightning 0.8.4. Since this project is built on PyTorch Lightning, it will be helpful to familiarize yourself with the overall PL workflow before making significant modifications to the code. A minimal sketch of the pattern follows.
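
The sketch below shows the PL 0.8.x training pattern in a self-contained form; the model, data, and hyperparameters are hypothetical stand-ins and are not taken from the lssb codebase.

import torch
import torch.nn.functional as F
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class ToyClassifier(pl.LightningModule):
    # Hypothetical stand-in model, not an lssb class
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(128, 10)

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        return {'loss': loss}  # PL 0.8.x expects a dict with a 'loss' key

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# Random data just to make the example runnable
dataset = TensorDataset(torch.randn(64, 128), torch.randint(0, 10, (64,)))
trainer = pl.Trainer(max_epochs=1)
trainer.fit(ToyClassifier(), DataLoader(dataset, batch_size=16))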

Make sure you install PyTorch with a compatible CUDA version. This repository has been tested with CUDA>10.1.
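
For example, the archived conda command for PyTorch 1.6 with CUDA 10.1 is shown below; check the PyTorch previous-versions page for the command matching your setup.

conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch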

Datasets

You can download the data we used for the ModelNet, ShapeNet, and Toys4K experiments using download_data.sh.

Attribution information about our new dataset is available here. Please do not distribute the data made available here in any way, and please make sure you follow the respective ModelNet and ShapeNet licenses.

The approximate dataset sizes are 7.4GB for ModelNet, 11GB for ShapeNet55, and 4.1GB for Toys4K.

For training and testing with shape bias, use download_features.sh to download the extracted DGCNN point cloud features.
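
For example, from the repository root (assuming the scripts take no arguments; check each script before running):

bash download_data.sh        # ModelNet, ShapeNet55 and Toys4K data
bash download_features.sh    # pre-extracted DGCNN point cloud features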

Code Organization

The following is an outline of the LSSB codebase:

lssb - main code directory
    - data - PyTorch dataset classes
        - modelnet.py - dataset class for ModelNet
        - sampler.py - low-shot batch sampler code (see the sketch after this outline)
        - shapenet.py - dataset class for ShapeNet
        - toys.py - dataset class for Toys4K
        - transform.py - code for image and point cloud transformations
    - feat_extract - code for feature extraction from point clouds
        - extract_simpleshot_pc_feature_modelnet.py - ModelNet feature extraction
        - extract_simpleshot_pc_feature_shapenet.py - ShapeNet feature extraction
        - extract_simpleshot_pc_feature_toys4.5k.py - Toys feature extraction
        - utils.py - util functions for feature extraction
    - lowshot
        - configs
            - feat - config files for training FEAT models on all 3 datasets
            - simpleshot - config files for training SimpleShot models on all 3 datasets
        - models
            - base_feat_classifier.py - base class for FEAT classifier
            - base_simpleshot_classifier.py - base class for SimpleShot classifier
            - image_feat_classifier.py - subclass for image-based FEAT classifier
            - image_simpleshot_classifier.py - subclass for image-based SimpleShot classifier
            - joint_feat_classifier.py - subclass for shape-biased FEAT classifier
            - joint_simpleshot_classifier.py - subclass for shape-biased SimpleShot classifier
            - ptcld_simpleshot_classifier.py - subclass for point-cloud based SimpleShot classifier
        - utils
            - feat_utils.py - util functions specific to FEAT
            - simpleshot_utils.py - util functions specific to SimpleShot
            - train_utils.py - just contains a function to return dataset objects; used across all methods
        - test.py - main testing script; used for all methods
        - train.py - main training script; used for all methods
    - nets
        - ResNet.py - definitions of ResNet models, borrowed from torchvision
        - dgcnn.py - definition of DGCNN point cloud classifier
        - feat.py - definition of FEAT transformer module
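
As background for data/sampler.py above: low-shot evaluation draws episodes of N classes with K labeled support examples and a number of query examples per class. The following is a generic sketch of such an episode sampler, illustrative only and not the repository's implementation.

import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=1, n_query=15):
    # Group example indices by class label
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    # Draw N classes, then K support and n_query query examples per class
    classes = random.sample(sorted(by_class), n_way)
    support, query = [], []
    for c in classes:
        picks = random.sample(by_class[c], k_shot + n_query)
        support += picks[:k_shot]
        query += picks[k_shot:]
    return support, query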

Evaluate Pre-Trained Models

First download the pretrained models (approx. 8.3GB):

bash download_models.sh

Once the models are downloaded, please use the scripts in testing_scripts to test the pretrained models, e.g.

bash testing_scripts/test_simpleshot_modelnet.sh

Training From Scratch

To train models from scratch, please use the scripts in training_scripts, e.g.

bash training_scripts/train_resnet18_simpleshot.sh 0 

Notes for training and testing with shape bias:

To train the shape-biased image encoder, point cloud feature extraction needs to be run once beforehand and cached, to save on compute during training. An example command is

python lssb/feat_extract/extract_simpleshot_pc_feature_modelnet.py --ckpt_path=pretrained_models/simpleshot/modelnet/ptcld-only/dgcnn-modelnet-simpleshot/version_0/checkpoints/epoch\=176_val_acc\=0.765.ckpt

Once this is completed, the extracted .npz file should be moved to the dataset directory, under a new subdirectory called features. Alternatively, you can download pre-extracted features using download_features.sh as in the instructions above.
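
For example (both the dataset root and the .npz filename below are placeholders; use the actual output path of the extraction script):

mkdir -p /path/to/modelnet/features                       # hypothetical dataset root
mv extracted_features.npz /path/to/modelnet/features/     # hypothetical filename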

To train the shape-biased FEAT models, a shape-biased SimpleShot checkpoint is needed. In the appropriate config file for the encoder, add, for example,

encoder_path_resnet: pretrained_models/simpleshot/modelnet/shape-biased/joint-modelnet-pairwise-simpleshot/version_0/checkpoints/epoch=360_val_acc=0.718.ckpt

if you have downloaded the pretrained models.

Code Credits

We are grateful to the authors of the following code repositories, from which we borrow components used in this repository.
