
mrluin / ESFNet-Pytorch

License: Unlicense
ESFNet-Pytorch

Programming Languages

Python
139,335 projects; the #7 most used programming language

Projects that are alternatives to or similar to ESFNet-Pytorch

Bert In Production
A collection of resources on using BERT (https://arxiv.org/abs/1810.04805) and related Language Models in production environments.
Stars: ✭ 58 (+222.22%)
Mutual labels:  paper, implementation
Dnc Tensorflow
A TensorFlow implementation of DeepMind's Differential Neural Computers (DNC)
Stars: ✭ 587 (+3161.11%)
Mutual labels:  paper, implementation
Gpt 2 Pytorch
Simple Text-Generator with OpenAI gpt-2 Pytorch Implementation
Stars: ✭ 618 (+3333.33%)
Mutual labels:  implementation
Awesome Face
😎 Face-related algorithms, datasets and papers
Stars: ✭ 739 (+4005.56%)
Mutual labels:  paper
Awesome Economics
A curated collection of links for economists
Stars: ✭ 688 (+3722.22%)
Mutual labels:  paper
Awesome Interaction Aware Trajectory Prediction
A selection of state-of-the-art research materials on trajectory prediction
Stars: ✭ 625 (+3372.22%)
Mutual labels:  paper
Large Scale Curiosity
Code for the paper "Large-Scale Study of Curiosity-Driven Learning"
Stars: ✭ 703 (+3805.56%)
Mutual labels:  paper
Pl Compiler Resource
Materials on programming languages and compiler technology (continuously updated)
Stars: ✭ 578 (+3111.11%)
Mutual labels:  paper
Splatter Paper
Data and analysis for the Splatter paper
Stars: ✭ 17 (-5.56%)
Mutual labels:  paper
Multiagent Competition
Code for the paper "Emergent Complexity via Multi-agent Competition"
Stars: ✭ 663 (+3583.33%)
Mutual labels:  paper
Cv Arxiv Daily
Daily sharing of computer vision papers from arXiv
Stars: ✭ 714 (+3866.67%)
Mutual labels:  paper
Dl Nlp Readings
My Reading Lists of Deep Learning and Natural Language Processing
Stars: ✭ 656 (+3544.44%)
Mutual labels:  paper
All About The Gan
All About the GANs (Generative Adversarial Networks) - summarized lists for GANs
Stars: ✭ 630 (+3400%)
Mutual labels:  paper
Random Network Distillation
Code for the paper "Exploration by Random Network Distillation"
Stars: ✭ 708 (+3833.33%)
Mutual labels:  paper
Recommendersystem Paper
This repository includes some papers that I have read or which I think may be very interesting.
Stars: ✭ 619 (+3338.89%)
Mutual labels:  paper
Awesome Distributed Systems
A curated list to learn about distributed systems
Stars: ✭ 7,263 (+40250%)
Mutual labels:  paper
Awesome Relation Extraction
📖 A curated list of awesome resources dedicated to Relation Extraction, one of the most important tasks in Natural Language Processing (NLP).
Stars: ✭ 656 (+3544.44%)
Mutual labels:  paper
Densenet
DenseNet implementation in Keras
Stars: ✭ 693 (+3750%)
Mutual labels:  paper
Rticles
LaTeX Journal Article Templates for R Markdown
Stars: ✭ 895 (+4872.22%)
Mutual labels:  paper
Maddpg
Code for the MADDPG algorithm from the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"
Stars: ✭ 767 (+4161.11%)
Mutual labels:  paper

ESFNet: Efficient Networks for Building Extraction from High-Resolution Images

The implementation of the novel efficient neural network ESFNet.

Clone the Repository

git clone https://github.com/mrluin/ESFNet-Pytorch.git
cd ./ESFNet-Pytorch

Installation using Conda

conda env create -f environment.yml
conda activate esfnet
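
After activating the environment, a quick sanity check can confirm that the install worked. This is a minimal sketch assuming PyTorch is among the dependencies listed in environment.yml:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"  # should print the version and True on a machine with a usable GPU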

Sample Dataset

For training, you can use the WHU Building Dataset as an example.

You need to download the cropped aerial images (the 3rd download option); a sketch for arranging the files into the expected folders follows the directory tree below.

Directory Structure

Directory:
root
├── train
├── valid
├── test
└── save
    ├── {model.name}
    │   └── datetime
    │       ├── ckpt-epoch{epoch}.pth
    │       └── best_model.pth
    ├── log
    │   └── {model.name}
    │       └── datetime
    │           └── history.txt
    └── test
        ├── log
        │   └── {model.name}
        │       └── datetime
        │           └── history.txt
        └── predict
            └── {model.name}
                └── datetime
                    └── *.png
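
The snippet below is a hedged sketch, not part of the repository, showing one way to arrange the downloaded tiles into the train/valid/test folders expected above; the source folder name, file extension, and split ratios are illustrative assumptions:

import random
import shutil
from pathlib import Path

root = Path('./data')                # the root directory referenced by root_dir in the config
src = Path('./WHU_cropped')          # hypothetical folder containing the downloaded cropped tiles
tiles = sorted(src.glob('*.png'))    # hypothetical file extension

random.seed(0)
random.shuffle(tiles)

n = len(tiles)
splits = {
    'train': tiles[:int(0.8 * n)],
    'valid': tiles[int(0.8 * n):int(0.9 * n)],
    'test':  tiles[int(0.9 * n):],
}

for name, files in splits.items():
    out = root / name
    out.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy(f, out / f.name)  # copy each tile into root/{train,valid,test}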

Training

  1. Set root_dir in ./configs/config.cfg to the root path laid out in the directory structure above (a hedged example config is sketched at the end of this section).
  2. Set divice_id to choose which GPU will be used.
  3. Set epochs to control the length of the training phase.
  4. Start the Visdom server and launch training as follows:
python -m visdom.server -env_path='./visdom_log/' -port=8097 # start visdom server
python train.py

-env_path sets the directory where the Visdom log files are stored, and -port sets the port the Visdom server listens on. The port can also be changed in train.py.
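
For reference, here is a hedged sketch of what the relevant entries in ./configs/config.cfg might look like; the section name and anything beyond the root_dir, divice_id, and epochs keys mentioned above are assumptions, so check the file shipped with the repository for the exact format:

[train]
# root directory containing train/, valid/ and test/
root_dir = /path/to/data
# index of the GPU to use
divice_id = 0
# number of training epochs
epochs = 100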

If my work gives you some insights or hints, please star the repository! Thank you~
