adhiraiyan / radnet

License: MIT
U-Net for biomedical image segmentation

Programming Languages

python

Projects that are alternatives of or similar to radnet

covid19.MIScnn
Robust Chest CT Image Segmentation of COVID-19 Lung Infection based on limited data
Stars: ✭ 77 (+600%)
Mutual labels:  medical-image-processing, u-net
neurdicom
RESTful PACS server with plugins
Stars: ✭ 97 (+781.82%)
Mutual labels:  mri-images, medical-image-processing
Magic-VNet
VNet for 3d volume segmentation
Stars: ✭ 45 (+309.09%)
Mutual labels:  mri-images, medical-image-processing
2D-and-3D-Deep-Autoencoder
Convolutional AutoEncoder application on MRI images
Stars: ✭ 57 (+418.18%)
Mutual labels:  mri-images
tfsum
Enable TensorBoard for TensorFlow Go API
Stars: ✭ 32 (+190.91%)
Mutual labels:  tensorboard-visualizations
NumNum
Multi-digit prediction from Google Street's images using deep CNN with TensorFlow, OpenCV and Python.
Stars: ✭ 39 (+254.55%)
Mutual labels:  tensorboard-visualizations
U-Net-Satellite
Road Detection from satellite images using U-Net.
Stars: ✭ 38 (+245.45%)
Mutual labels:  u-net
Brain-Tumor-Segmentation
Attention-Guided Version of 2D UNet for Automatic Brain Tumor Segmentation
Stars: ✭ 125 (+1036.36%)
Mutual labels:  u-net
rid-covid
Image-based COVID-19 diagnosis. Links to software, data, and other resources.
Stars: ✭ 74 (+572.73%)
Mutual labels:  ct-scans
PyTorch-Deep-Image-Steganography
A PyTorch implementation of image steganography utilizing deep convolutional neural networks
Stars: ✭ 71 (+545.45%)
Mutual labels:  u-net
AMICI
Advanced Multilanguage Interface to CVODES and IDAS
Stars: ✭ 80 (+627.27%)
Mutual labels:  sensitivity-analysis
DeepWay.v2
Autonomous navigation for blind people
Stars: ✭ 65 (+490.91%)
Mutual labels:  u-net
modelhub
A collection of deep learning models with a unified API.
Stars: ✭ 59 (+436.36%)
Mutual labels:  medical-image-processing
Keras MedicalImgAI
No description or website provided.
Stars: ✭ 23 (+109.09%)
Mutual labels:  medical-image-processing
squeeze-unet
Squeeze-unet Semantic Segmentation for embedded devices
Stars: ✭ 21 (+90.91%)
Mutual labels:  u-net
Pix2Pix-Keras
基于pix2pix模型的动漫图片自动上色(keras实现) 2019-2-25
Stars: ✭ 95 (+763.64%)
Mutual labels:  u-net
3d-prostate-segmentation
Segmentation of prostate from MRI scans
Stars: ✭ 36 (+227.27%)
Mutual labels:  mri-images
grins
Multiphysics Finite Element package built on libMesh
Stars: ✭ 45 (+309.09%)
Mutual labels:  sensitivity-analysis
Brain-MRI-Segmentation
Smart India Hackathon 2019 project given by the Department of Atomic Energy
Stars: ✭ 29 (+163.64%)
Mutual labels:  mri-images
3d-nii-visualizer
A NIfTI (nii.gz) 3D Visualizer using VTK and Qt5
Stars: ✭ 86 (+681.82%)
Mutual labels:  mri-images

RadNet

Package for bio-medical image segmentation.

Not Maintained


Getting Started • Train • Test • Interpret • Performance • Release Notes • Upcoming Releases • Citation • FAQ • Blog

Made by Mukesh Mithrakumar • 🌌 https://mukeshmithrakumar.com

What is it

**RadNet** is an ensemble convolutional neural network package (using U-Net, VGG, and ResNet) for biomedical image detection, segmentation, and classification.

Currently the code works for the ISBI Neuronal Stack Segmentation dataset. See Release Notes for the current release features and see Upcoming Releases for the next release enhancements.
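The ISBI stacks ship as multi-page TIFF files. As a rough sketch of how such a stack can be loaded and split (the package's actual `pytorch_unet.processing.load` code is not reproduced here, so the helper below is an illustrative assumption; `tifffile` is among the listed prerequisites):

```python
import numpy as np
import tifffile  # listed under the software prerequisites


def load_isbi_stack(volume_path, labels_path, test_size=0.2, seed=0):
    """Load multi-page TIFF stacks and split the slices into train/validation.

    Hypothetical helper for illustration -- not the package's actual loader.
    """
    images = tifffile.imread(volume_path)  # shape: (n_slices, H, W)
    labels = tifffile.imread(labels_path)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    n_val = int(len(images) * test_size)
    val, train = idx[:n_val], idx[n_val:]
    return (images[train], labels[train]), (images[val], labels[val])
```

For the ISBI challenge data, the training stacks are distributed as `train-volume.tif` and `train-labels.tif`.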

If this repository helps you in any way, show your love ❤️ by putting a ⭐ on this project ✌️

Please note that since this is a developer release, content is constantly being developed, and until I test everything completely I won't be committing updates to the repo. If you run into any issues, please reach out. The best way to avoid problems is to develop against the released source.

📋 Getting Started

📀 Software Prerequisites:

To see the software prerequisites (click to expand...)
```
- pip install 'matplotlib'
- pip install 'graphviz'
- pip install 'tensorflow'
- pip install 'scikit-learn'
- pip install 'tifffile'
- pip install 'Pillow'
- pip install 'scipy'
- pip install 'numpy'
- pip install 'opencv-python>=3.3.0'
- pip install 'torch'
- pip install 'torchvision'
- pip install 'pytest'
- pip install 'flake8'
- pip install 'cython'
- pip install 'psutil'
```

💻 Hardware Prerequisites:

Runs on an NVIDIA GeForce GTX 1050 Ti with a 4 GB GDDR5 frame buffer and 768 NVIDIA CUDA® cores.

📘 Folder Structure

To see the folder structure (click to expand...)
```
main_dir
- data (The folder containing data files for training and testing)
- pytorch_unet (Package directory)
    - model (PyTorch u-net model)
        - u_net.py
    - optimize
        - c_extensions.pyx
        - config.py
        - hyperparameter.py
        - multi_process.py
        - performance.py
    - processing
        - augments.py
        - load.py
    - trainer
        - evaluate.py
        - interpret.py
        - train.py
    - utils
        - helpers.py
        - metrics.py
        - unit_test.py
    - visualize
        - logger.py
        - plot.py
- train_logs (will be created)
- visualize (will be created)
- weights (will be created)
```

🔧 Install

Currently you can clone the repo and start building. Meanwhile, I am working on the PyPI release, so this section will be updated.

Train

▴ Back to top

Train the model by running:

```
train.py root_dir(path/to/root directory)
```

Arguments that can be specified in the training mode:

```
usage: train.py [-h] [--main_dir MAIN_DIR] [--resume] [-v]
                [--weights_dir WEIGHTS_DIR] [--log_dir LOG_DIR]
                [--image_size IMAGE_SIZE] [--batch_size BATCH_SIZE]
                [-e EPOCHS] [-d DEPTH] [--n_classes N_CLASSES]
                [--up_mode {upconv, upsample}] [--augment]
                [--augment_type {geometric, image, both}]
                [--transform_prob TRANSFORM_PROB] [--test_size TEST_SIZE]
                [--log] [-bg]

Script for training the model

optional arguments:
  -h, --help            show this help message and exit
  --main_dir MAIN_DIR   main directory
  --resume              Choose to start training from checkpoint
  -v, --verbose         Choose to set verbose to False
  --weights_dir WEIGHTS_DIR
                        Choose directory to save weights model
  --log_dir LOG_DIR     Choose directory to save the logs
  --image_size IMAGE_SIZE
                        resize image size
  --batch_size BATCH_SIZE
                        batch size
  -e EPOCHS, --epochs EPOCHS
                        Number of training epochs
  -d DEPTH, --depth DEPTH
                        Number of downsampling/upsampling blocks
  --n_classes N_CLASSES
                        Number of classes in the dataset
  --up_mode {upconv, upsample}
                        Type of upsampling
  --augment             Whether to augment the train images or not
  --augment_type {geometric, image, both}
                        Which type of augmentation to choose from: geometric,
                        brightness or both
  --transform_prob TRANSFORM_PROB
                        Probability of images to augment when calling
                        augmentations
  --test_size TEST_SIZE
                        Validation size to split the data, should be in
                        between 0.0 to 1.0
  --log                 Log the Values
  -bg, --build_graph    Build the model graph
```
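The flags above map onto a standard `argparse` parser. A minimal reconstruction is sketched below; the defaults and exact wiring are assumptions based on the usage text, not the actual `train.py` source:

```python
import argparse


def build_parser():
    # Hypothetical reconstruction of the train.py CLI from its help output.
    p = argparse.ArgumentParser(description="Script for training the model")
    p.add_argument("--main_dir", help="main directory")
    p.add_argument("--resume", action="store_true", help="start training from checkpoint")
    p.add_argument("--weights_dir", help="directory to save model weights")
    p.add_argument("--log_dir", help="directory to save the logs")
    p.add_argument("--image_size", type=int, help="resize image size")
    p.add_argument("--batch_size", type=int, help="batch size")
    p.add_argument("-e", "--epochs", type=int, help="number of training epochs")
    p.add_argument("-d", "--depth", type=int, help="number of down/upsampling blocks")
    p.add_argument("--n_classes", type=int, help="number of classes in the dataset")
    p.add_argument("--up_mode", choices=["upconv", "upsample"], help="type of upsampling")
    p.add_argument("--augment", action="store_true", help="augment the training images")
    p.add_argument("--augment_type", choices=["geometric", "image", "both"],
                   help="which augmentation family to apply")
    p.add_argument("--transform_prob", type=float, help="probability of augmenting an image")
    p.add_argument("--test_size", type=float, help="validation split, between 0.0 and 1.0")
    p.add_argument("--log", action="store_true", help="log the values")
    p.add_argument("-bg", "--build_graph", action="store_true", help="build the model graph")
    return p


args = build_parser().parse_args(["--epochs", "10", "--up_mode", "upconv", "--augment"])
print(args.epochs, args.up_mode, args.augment)
```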

📋 Logging

To activate logging of the errors (default: off), run:

```
train.py root_dir(path/to/root directory) --log
```

To view the logs in TensorBoard, follow the instruction printed in the log statement after training.

📊 Network Graph

Since PyTorch graphs are dynamic, I couldn't yet integrate them with TensorFlow, but as a quick workaround you can run the following to build a PNG version of the model architecture (default: off):

```
train.py root_dir(path/to/root directory) -bg
```

To see the output of the graph (click to expand...)

Test

▴ Back to top

Evaluate the model on the test data by running:

```
evaluate.py root_dir(path/to/root directory)
```

Arguments that can be specified in the evaluation mode:

```
usage: evaluate.py [-h] [--main_dir MAIN_DIR] [--image_size IMAGE_SIZE]
                   [--weights_dir WEIGHTS_DIR]

Script for evaluating the trained model

optional arguments:
  -h, --help            show this help message and exit
  --main_dir MAIN_DIR   main directory
  --image_size IMAGE_SIZE
                        resize image size to match train image size
  --weights_dir WEIGHTS_DIR
                        Choose directory to save weights model
```

📉 Interpret

▴ Back to top

Visualize the intermediate layers by running:

```
interpret.py root_dir(path/to/root directory)
```

Arguments that can be specified in the interpret mode:

```
usage: interpret.py [-h] [--main_dir MAIN_DIR]
                    [--interpret_path INTERPRET_PATH]
                    [--weights_dir WEIGHTS_DIR] [--image_size IMAGE_SIZE]
                    [--depth DEPTH]
                    [--plot_interpret {sensitivity,block_filters}]
                    [--plot_size PLOT_SIZE]

Script for interpreting the trained model results

optional arguments:
  -h, --help            show this help message and exit
  --main_dir MAIN_DIR   main directory
  --interpret_path INTERPRET_PATH
                        Choose directory to save layer visualizations
  --weights_dir WEIGHTS_DIR
                        Choose directory to load weights from
  --image_size IMAGE_SIZE
                        resize image size
  --depth DEPTH         Number of downsampling/upsampling blocks
  --plot_interpret {sensitivity,block_filters}
                        Type of interpret to plot
  --plot_size PLOT_SIZE
                        Image size of sensitivity analysis
```

🔩 Sensitivity Analysis

To do sensitivity analysis, run:

```
interpret.py root_dir(path/to/root directory) --plot_interpret sensitivity
```
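The package's own sensitivity routine isn't shown here. A common approach, occlusion-based sensitivity, slides a masking patch over the input and records how much the model's score drops at each position. A minimal NumPy sketch with a stand-in scoring function (both the helper and the toy model are illustrative assumptions, not the package's implementation):

```python
import numpy as np


def occlusion_sensitivity(image, score_fn, patch=4, fill=0.0):
    """Slide an occluding patch over the image and record the score drop
    at each position. A larger drop means the region matters more."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat


# Stand-in "model": mean intensity of the top-left quadrant.
def toy_score(img):
    return img[:8, :8].mean()


img = np.ones((16, 16))
heat = occlusion_sensitivity(img, toy_score)
```

Only patches overlapping the top-left quadrant change the toy score, so the heat map highlights exactly the region the "model" depends on.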

🔩 Block Analysis

To visualize the weight output of each up/down-sampling block, run:

```
interpret.py root_dir(path/to/root directory) --plot_interpret block_filters
```
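One common way to capture per-block filter outputs in PyTorch is to register a forward hook on each layer of interest. A small sketch with a stand-in two-conv model (the real `pytorch_unet.model.u_net` architecture is not reproduced here, so the model and names below are assumptions):

```python
import torch
import torch.nn as nn

# Stand-in for one down-sampling block of a U-Net, for illustration only.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1),
)

activations = {}


def save_activation(name):
    # Hook that stashes the layer's output feature maps under `name`.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook


# Register a forward hook on each conv layer to capture its feature maps.
for i, layer in enumerate(model):
    if isinstance(layer, nn.Conv2d):
        layer.register_forward_hook(save_activation(f"conv{i}"))

with torch.no_grad():
    model(torch.randn(1, 1, 32, 32))

# Each captured tensor holds one feature map per filter, ready to plot.
for name, act in activations.items():
    print(name, tuple(act.shape))
```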

📈 Performance

▴ Back to top

(Work in Progress)

:octocat: Release Notes

▴ Back to top

💎 0.1.0 Developer Pre-Release (Jan 01 2019)

:octocat: Upcoming Releases

▴ Back to top

Keep an eye out 👀 for Upcoming Releases!

🔥 0.2.0 Developer Pre-Alpha

🔥 0.3.0 Developer Alpha

  • Biomedical image pre-processing script
  • Modifications for the U-Net to work on MRI data
  • Test on the CHAOS Segmentation challenge
  • Modifications for the U-Net to work on CT scans
  • Test on the PAVES Segmentation challenge
  • Complete unit_test.py for the above
  • Deploy alpha PyPI package

🔥 0.4.0 Developer Alpha

  • Neural architecture search script
  • Classifier to identify between the organs (One U-Net to segment different organs)
  • Separate classifier to identify different cells
  • Deploy alpha PyPI package

🔥 0.5.0 Science/Research Beta

  • Graphical user interface for RadNet
  • Developer and researcher mode for the GUI
  • Abstract away the deep learning details so the GUI is doctor-friendly rather than requiring Python/deep learning expertise
  • Build into a software package
  • Deploy beta PyPI package

©️ Citation

▴ Back to top

💬 FAQ

▴ Back to top

  • For any questions or collaborations, you can reach me via LinkedIn