License: MIT

LFattNet: Attention-based View Selection Networks for Light-field Disparity Estimation

Yu-Ju Tsai¹, Yu-Lun Liu¹,², Ming Ouhyoung¹, Yung-Yu Chuang¹
¹National Taiwan University, ²MediaTek

AAAI Conference on Artificial Intelligence (AAAI), Feb 2020

Network Architecture

(Figure: overall network architecture)

SOTA on 4D Light Field Benchmark

  • We achieve top-ranked performance for most of the error metrics on the benchmark.

  • For a more detailed comparison, please use the link below.
  • Benchmark link

Environment

Ubuntu            16.04
Python            3.5.2
Tensorflow-gpu    1.10
CUDA              9.0.176
Cudnn             7.1.4
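The pinned versions above predate TensorFlow 2, so it is worth sanity-checking the runtime before training. A minimal sketch, assuming the version ranges in the table (`version_tuple` and `check_env` are illustrative helpers, not part of the repository):

```python
import sys

def version_tuple(v):
    """Parse a dotted version string like '1.10' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

def check_env(tf_version):
    """Return True if the given TensorFlow version falls in the 1.x range this code targets."""
    return version_tuple("1.10") <= version_tuple(tf_version) < version_tuple("2.0")

if __name__ == "__main__":
    # Python 3.5+ is assumed by the environment table above.
    assert sys.version_info >= (3, 5)
    print(check_env("1.10.0"))  # True: matches the pinned TF 1.x range
```

Running the real check against your installed TensorFlow would use `tensorflow.__version__` in place of the hard-coded string.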

Train LFattNet

  • Download HCI Light field dataset from http://hci-lightfield.iwr.uni-heidelberg.de/.
  • Unzip the LF dataset and move the 'additional/', 'training/', 'test/', and 'stratified/' folders into 'hci_dataset/'.
  • Check the code in 'LFattNet_func/func_model_81.py' and use the code at line 247.
  • Run python LFattNet_train.py
    • Checkpoint files will be saved in 'LFattNet_checkpoints/LFattNet_ckp/iterXXXX_valmseXXXX_bpXXX.hdf5'.
    • Training process will be saved in
      • 'LFattNet_output/LFattNet/train_iterXXXXX.jpg'
      • 'LFattNet_output/LFattNet/val_iterXXXXX.jpg'.
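The dataset step above amounts to moving four folders into 'hci_dataset/'; a small helper can do this reproducibly. A sketch under those assumptions (`arrange_hci_dataset` is a hypothetical helper, demonstrated on temporary directories rather than a real download):

```python
import os
import shutil
import tempfile

# The four folders the HCI download provides, which LFattNet expects
# to find under 'hci_dataset/'.
SUBSETS = ["additional", "training", "test", "stratified"]

def arrange_hci_dataset(download_dir, repo_dir):
    """Move the unzipped HCI subset folders into <repo_dir>/hci_dataset/."""
    target = os.path.join(repo_dir, "hci_dataset")
    os.makedirs(target, exist_ok=True)
    for name in SUBSETS:
        src = os.path.join(download_dir, name)
        if os.path.isdir(src):
            shutil.move(src, os.path.join(target, name))
    return sorted(os.listdir(target))

if __name__ == "__main__":
    # Demo on throwaway directories instead of the real dataset.
    with tempfile.TemporaryDirectory() as dl, tempfile.TemporaryDirectory() as repo:
        for name in SUBSETS:
            os.makedirs(os.path.join(dl, name))
        print(arrange_hci_dataset(dl, repo))  # ['additional', 'stratified', 'test', 'training']
```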

Evaluate LFattNet

  • Check the code in 'LFattNet_func/func_model_81.py' and use the code at line 250.
  • Run python LFattNet_evaluation.py
    • To use your own model, modify the weight path at line 78, e.g.:
      • path_weight = './pretrain_model_9x9.hdf5'
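Since each saved checkpoint encodes its validation MSE in the filename ('iterXXXX_valmseXXXX_bpXXX.hdf5'), you can pick the best one to plug into path_weight without reopening the files. A sketch assuming that naming pattern (`best_checkpoint` is an illustrative helper, not part of the repository):

```python
import re

# Checkpoints are saved as 'iterXXXX_valmseXXXX_bpXXX.hdf5'; the embedded
# validation MSE lets us rank them by name alone.
CKPT_RE = re.compile(r"iter(\d+)_valmse([\d.]+)_bp([\d.]+)\.hdf5$")

def best_checkpoint(filenames):
    """Return the checkpoint filename with the lowest validation MSE, or None."""
    parsed = []
    for name in filenames:
        m = CKPT_RE.search(name)
        if m:
            parsed.append((float(m.group(2)), name))
    return min(parsed)[1] if parsed else None

if __name__ == "__main__":
    names = [
        "iter0050_valmse2.310_bp5.12.hdf5",
        "iter0100_valmse1.870_bp4.33.hdf5",
        "iter0150_valmse1.905_bp4.40.hdf5",
    ]
    print(best_checkpoint(names))  # iter0100_valmse1.870_bp4.33.hdf5
```

In practice you would pass `os.listdir('LFattNet_checkpoints/LFattNet_ckp/')` and set path_weight to the result.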

Citation

@inproceedings{Tsai:2020:ABV,
        author = {Tsai, Yu-Ju and Liu, Yu-Lun and Ouhyoung, Ming and Chuang, Yung-Yu},
        title = {Attention-based View Selection Networks for Light-field Disparity Estimation},
        booktitle = {Proceedings of the 34th Conference on Artificial Intelligence (AAAI)},
        year = {2020}
}

Last modified date: 2020/09/14.

The code is modified from and heavily borrows from EPINET: https://github.com/chshin10/epinet
