ywz978020607 / HESIC

License: Apache-2.0
Official code of "Deep Homography for Efficient Stereo Image Compression" (CVPR 2021 Oral)

Programming Languages

Python
C++

Projects that are alternatives of or similar to HESIC

Awesome-low-level-vision-resources
A curated list of resources for Low-level Vision Tasks
Stars: ✭ 35 (-16.67%)
Mutual labels:  cvpr2021
LabelRelaxation-CVPR21
Official PyTorch Implementation of Embedding Transfer with Label Relaxation for Improved Metric Learning, CVPR 2021
Stars: ✭ 37 (-11.9%)
Mutual labels:  cvpr2021
Im2Vec
[CVPR 2021 Oral] Im2Vec: Synthesizing Vector Graphics without Vector Supervision
Stars: ✭ 229 (+445.24%)
Mutual labels:  cvpr2021
SGGpoint
[CVPR 2021] Exploiting Edge-Oriented Reasoning for 3D Point-based Scene Graph Analysis (official pytorch implementation)
Stars: ✭ 41 (-2.38%)
Mutual labels:  cvpr2021
DeFLOCNet
The official pytorch code of DeFLOCNet: Deep Image Editing via Flexible Low-level Controls (CVPR2021)
Stars: ✭ 38 (-9.52%)
Mutual labels:  cvpr2021
EPCDepth
[ICCV 2021] Excavating the Potential Capacity of Self-Supervised Monocular Depth Estimation
Stars: ✭ 105 (+150%)
Mutual labels:  stereo
soft-intro-vae-pytorch
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders"
Stars: ✭ 170 (+304.76%)
Mutual labels:  cvpr2021
FixBi
FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation (CVPR 2021)
Stars: ✭ 48 (+14.29%)
Mutual labels:  cvpr2021
calicam
CaliCam: Calibrated Fisheye Stereo & Mono Camera
Stars: ✭ 98 (+133.33%)
Mutual labels:  stereo
CondenseNetV2
[CVPR 2021] CondenseNet V2: Sparse Feature Reactivation for Deep Networks
Stars: ✭ 73 (+73.81%)
Mutual labels:  cvpr2021
Domain-Consensus-Clustering
[CVPR2021] Domain Consensus Clustering for Universal Domain Adaptation
Stars: ✭ 85 (+102.38%)
Mutual labels:  cvpr2021
CCL
PyTorch Implementation on Paper [CVPR2021]Distilling Audio-Visual Knowledge by Compositional Contrastive Learning
Stars: ✭ 76 (+80.95%)
Mutual labels:  cvpr2021
RSCD
[CVPR2021] Towards Rolling Shutter Correction and Deblurring in Dynamic Scenes
Stars: ✭ 83 (+97.62%)
Mutual labels:  cvpr2021
cfvqa
[CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias
Stars: ✭ 96 (+128.57%)
Mutual labels:  cvpr2021
boombeastic
A Raspberry Pi based smart connected speaker with support for airplay, spotify, mpd and local playback
Stars: ✭ 206 (+390.48%)
Mutual labels:  stereo
WereSoCool
A language for composing microtonal music built in Rust. Make cool sounds. Impress your friends/pets/plants.
Stars: ✭ 41 (-2.38%)
Mutual labels:  stereo
BCNet
Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers [CVPR 2021]
Stars: ✭ 434 (+933.33%)
Mutual labels:  cvpr2021
cvpr-buzz
🐝 Explore Trending Papers at CVPR
Stars: ✭ 37 (-11.9%)
Mutual labels:  cvpr2021
single-positive-multi-label
Multi-Label Learning from Single Positive Labels - CVPR 2021
Stars: ✭ 63 (+50%)
Mutual labels:  cvpr2021
RainNet
[CVPR 2021] Region-aware Adaptive Instance Normalization for Image Harmonization
Stars: ✭ 125 (+197.62%)
Mutual labels:  cvpr2021

CompressAI

paper link

The HESIC project is built on CompressAI: https://github.com/InterDigitalInc/CompressAI

Installation:

pip install -e .
pip install opencv-contrib-python==3.4.2.17
pip install kornia

# optional: Tsinghua mirror channels for faster conda downloads in mainland China
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --set show_channel_urls yes
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/

conda install pytorch==1.6.0 torchvision cudatoolkit=10.1
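
A quick sanity check of the installed environment can look like the sketch below (not part of the original instructions; it assumes the editable install exposes the compressai package as in upstream CompressAI):

# minimal post-install sanity check
import torch
import torchvision
import cv2         # from opencv-contrib-python 3.4.2.17
import kornia
import compressai  # provided by "pip install -e ." in this repository

print("torch:", torch.__version__)                    # expect 1.6.0
print("cuda available:", torch.cuda.is_available())   # True if cudatoolkit 10.1 matches your driver
print("opencv:", cv2.__version__)                     # expect 3.4.2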

Test scripts:

cd /ywz/mywork/

python test3real.py -d "/home/ywz/database/aftercut512" --seed 0 --patch-size 512 512 --batch-size 1 --test-batch-size 1

or

python test3_savereal.py -d "/home/ywz/database/aftercut512" --seed 0 --patch-size 512 512 --batch-size 1 --test-batch-size 1

Errata notes

While pushing forward new work, we discovered an error in the code that led to incorrect results in the conference paper; we have since made the necessary improvements and fine-tuning and attached the final, correct results. The cause of the error: our training and test scripts are based on example/train.py from the CompressAI framework, and the July 2020 version of that script mistakenly used the 'val' attribute of its 'AverageMeter' helper class instead of the running average 'avg'. Because of the submission deadline and our own negligence, this was not caught in time.
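
For context, the meter in question follows the common PyTorch-example pattern, sketched below (attribute names as in CompressAI's example scripts): val holds only the value from the most recent batch, while avg is the running mean over all batches, so logging val at the end of a test epoch reports a single batch rather than the whole test set.

# Sketch of the AverageMeter helper used by CompressAI's example scripts.
class AverageMeter:
    def __init__(self):
        self.val = 0.0   # value from the most recent update (last batch only)
        self.sum = 0.0
        self.count = 0
        self.avg = 0.0   # running mean over all updates so far

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

# The bug described above: reporting meter.val (last batch) where meter.avg
# (the average over the whole test set) was intended.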

At the same time, we have completed the serialization part of the code, and the decoded left-view image is now fed back through the left-view encoder to guide the right-view entropy model.

(If you use the old models and change 'avg' back to 'val', you can reproduce the incorrect results reported in the paper.)
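
A rough sketch of the flow described above (all module and method names here are placeholders, not the actual identifiers in newnet1.py): the left view is compressed and decoded first, the decoded left view is re-encoded, and the resulting features condition the right-view entropy model, so the encoder and decoder share the same context.

# Hypothetical sketch of the serialized codec flow; names are placeholders.
def compress_stereo_pair(model, x_left, x_right):
    strings_left = model.left_codec.compress(x_left)              # left-view bitstream
    x_left_hat = model.left_codec.decompress(strings_left)        # decoded left view
    context = model.left_encoder(x_left_hat)                      # re-encode the *decoded* left view
    strings_right = model.right_codec.compress(x_right, context)  # context-conditioned entropy coding
    return strings_left, strings_right

def decompress_stereo_pair(model, strings_left, strings_right):
    x_left_hat = model.left_codec.decompress(strings_left)
    context = model.left_encoder(x_left_hat)                      # identical context to encode time
    x_right_hat = model.right_codec.decompress(strings_right, context)
    return x_left_hat, x_right_hat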

Results

InStereo2K

Datasets:

Baidu Pan:

link: https://pan.baidu.com/s/1sSbMCl-6LXPal_asBt5Giw (extraction code: k8rb)

Google Drive: https://drive.google.com/drive/folders/1tTMs8vf7Z4FAjwCg2aQVGA_pc9O_VpS1?usp=sharing

Pretrained models:

Old models:

Baidu Pan:

link: https://pan.baidu.com/s/1q0_2NZ46fYOCeDDg40nUaw (extraction code: qrfu)

Google Drive: https://drive.google.com/drive/folders/1tTMs8vf7Z4FAjwCg2aQVGA_pc9O_VpS1?usp=sharing

New models: the new models are now available at https://bhpan.buaa.edu.cn:443/link/2DFC695B03950A85EF137D8D0FEB62CD (link valid until 2023-04-01 23:59)

Serialize

cd ywz/mywork

newnet1.py: HESIC

newnet1_joint.py: HESIC+

test2_codec.py: test script for the codec (compress & decompress)

-- choose the model by switching between import newnet1 and import newnet1_joint (see the sketch below)

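The variant under test is selected by that import at the top of test2_codec.py, roughly as follows (a hypothetical illustration, not a verbatim excerpt):

# In test2_codec.py: pick the model variant by switching the import.
import newnet1            # HESIC
# import newnet1_joint    # HESIC+ (comment the line above and use this one instead)
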
cd ywz/DSIC

mynet6_plus.py: DSIC with codec

mytrain2_test_codec.py: test script for the codec in DSIC

Migration to MindSpore

https://github.com/ywz978020607/2021Summer-Image-Compression
