
microsoft / InnerEye-DeepLearning

License: MIT
Medical Imaging Deep Learning library to train and deploy models on Azure Machine Learning and Azure Stack

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to InnerEye-DeepLearning

Dicom Server
OSS Implementation of DICOMweb standard
Stars: ✭ 101 (-58.94%)
Mutual labels:  healthcare, azure, medical-imaging
clara-dicom-adapter
DICOM Adapter is a component of the Clara Deploy SDK that facilitates integration with DICOM-compliant systems, enables ingestion of imaging data, triggers jobs with configurable rules, and pushes job output to PACS systems.
Stars: ✭ 31 (-87.4%)
Mutual labels:  healthcare, medical-imaging
monai-deploy
MONAI Deploy aims to become the de-facto standard for developing, packaging, testing, deploying and running medical AI applications in clinical production.
Stars: ✭ 56 (-77.24%)
Mutual labels:  healthcare, medical-imaging
fuse-med-ml
A Python framework accelerating ML-based discovery in the medical field by encouraging code reuse. Batteries included :)
Stars: ✭ 66 (-73.17%)
Mutual labels:  healthcare, medical-imaging
Mne Cpp
MNE-CPP: A Framework for Electrophysiology
Stars: ✭ 104 (-57.72%)
Mutual labels:  healthcare, medical-imaging
Api Management Developer Portal
Azure API Management developer portal.
Stars: ✭ 229 (-6.91%)
Mutual labels:  azure
Awesome Azure
A Curated List of Azure Resources. The list provides you with enough resources to get a full overview of the services in Azure and get started with cloud computing.
Stars: ✭ 241 (-2.03%)
Mutual labels:  azure
Kubestriker
A blazing-fast security auditing tool for Kubernetes
Stars: ✭ 213 (-13.41%)
Mutual labels:  azure
Opal
A web framework for building highly usable healthcare applications.
Stars: ✭ 227 (-7.72%)
Mutual labels:  healthcare
Engine
Deploy your apps on any Cloud provider in just a few seconds
Stars: ✭ 1,132 (+360.16%)
Mutual labels:  azure
Azurlshortener
A simple and easy URL shortener
Stars: ✭ 247 (+0.41%)
Mutual labels:  azure
Dicoogle
Dicoogle - Open Source PACS
Stars: ✭ 237 (-3.66%)
Mutual labels:  medical-imaging
Microsoft Authentication Library For Python
Microsoft Authentication Library (MSAL) for Python makes it easy to authenticate to Azure Active Directory. The documented APIs are stable: https://msal-python.readthedocs.io. If you have questions but do not have a GitHub account, ask on Stack Overflow with the tags "msal" and "python".
Stars: ✭ 232 (-5.69%)
Mutual labels:  azure
Gdcm
Grassroots DICOM read-only mirror, for pull requests only. Please report bugs at http://sf.net/p/gdcm
Stars: ✭ 240 (-2.44%)
Mutual labels:  medical-imaging
Azure Powershell
Microsoft Azure PowerShell
Stars: ✭ 2,873 (+1067.89%)
Mutual labels:  azure
Azure Spring Cloud Training
Guides and tutorials to make the most out of Azure Spring Cloud
Stars: ✭ 243 (-1.22%)
Mutual labels:  azure
Applicationinsights Node.js
Microsoft Application Insights SDK for Node.js
Stars: ✭ 229 (-6.91%)
Mutual labels:  azure
Chexnet With Localization
Weakly Supervised Learning for Findings Detection in Medical Images
Stars: ✭ 238 (-3.25%)
Mutual labels:  medical-imaging
Rdbox
RDBOX is an advanced IT platform for robotics and IoT developers that highly integrates cloud-native and edge computing technologies.
Stars: ✭ 246 (+0%)
Mutual labels:  azure
Azure Service Bus Dotnet
☁️ .NET Standard client library for Azure Service Bus
Stars: ✭ 237 (-3.66%)
Mutual labels:  azure

InnerEye-DeepLearning


Overview

This is a deep learning toolbox to train models on medical images (or more generally, 3D images). It integrates seamlessly with cloud computing in Azure.

On the modelling side, this toolbox supports

  • Segmentation models
  • Classification and regression models
  • Sequence models

Classification, regression, and sequence models can be built with only images as input, or with a combination of images and non-imaging data. This supports typical use cases on medical data, where measurements, biomarkers, or patient characteristics are often available in addition to images; a conceptual sketch of such a fused model is shown below.
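As a rough illustration of how imaging and non-imaging inputs can be combined, here is a minimal, generic PyTorch sketch; it is not the toolbox's own model API, and the class name, layer sizes, and feature dimensions are illustrative assumptions only.

import torch
import torch.nn as nn

class ImageAndTabularModel(nn.Module):
    """Hypothetical example: fuse a 3D image encoder with non-imaging features."""

    def __init__(self, num_tabular_features: int, num_classes: int) -> None:
        super().__init__()
        # Small 3D CNN encoder for the image volume (illustrative architecture).
        self.image_encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        # MLP for measurements, biomarkers, or patient characteristics.
        self.tabular_encoder = nn.Sequential(
            nn.Linear(num_tabular_features, 8),
            nn.ReLU(),
        )
        # Classification head on the concatenated image and tabular features.
        self.head = nn.Linear(8 + 8, num_classes)

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_encoder(image), self.tabular_encoder(tabular)], dim=1)
        return self.head(fused)

For example, ImageAndTabularModel(num_tabular_features=4, num_classes=2) would accept a (batch, 1, depth, height, width) image tensor together with a (batch, 4) tensor of tabular features.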

On the user side, this toolbox focusses on enabling machine learning teams to achieve more. It is cloud-first, and relies on Azure Machine Learning Services (AzureML) for execution, bookkeeping, and visualization. Taken together, this gives:

  • Traceability: AzureML keeps a full record of all experiments that were executed, including a snapshot of the code. Tags are automatically added to the experiments and can later help to filter and find old experiments.
  • Transparency: All team members have access to each other's experiments and results.
  • Reproducibility: Two model training runs using the same code and data will result in exactly the same metrics. All sources of randomness, such as multithreading, are controlled for (see the sketch after this list).
  • Cost reduction: Using AzureML, all compute (virtual machines, VMs) is requested at the time of starting the training job, and freed up at the end. Idle VMs will not incur costs. In addition, Azure low priority nodes can be used to further reduce costs (up to 80% cheaper).
  • Scale out: Large numbers of VMs can be requested easily to cope with a burst in jobs.
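To make the reproducibility point concrete, here is a minimal sketch of what controlling the common sources of randomness looks like in PyTorch. It is illustrative only; the toolbox handles this internally, and its exact settings may differ.

import random

import numpy as np
import torch

def seed_everything(seed: int = 42) -> None:
    """Fix the common sources of randomness so that repeated runs produce identical metrics."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Make cuDNN deterministic; disabling benchmark mode avoids non-deterministic kernel autotuning.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False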

Despite the cloud focus, all training and model testing works just as well on local compute, which is important for model prototyping, debugging, and in cases where the cloud can't be used. In particular, if you already have GPU machines available, you will be able to utilize them with the InnerEye toolbox.

In addition, our toolbox supports:

  • Cross-validation using AzureML's built-in support, where the models for individual folds are trained in parallel. This is particularly important for the long-running training jobs often seen with medical images.
  • Hyperparameter tuning using Hyperdrive.
  • Building ensemble models.
  • Easy creation of new models via a configuration-based approach, and inheritance from an existing architecture (see the sketch after this list).
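To illustrate the configuration-based approach, here is a hypothetical sketch of deriving a new model from an existing one. The field name used below is an assumption; check the configs shipped under InnerEye/ML/configs for the exact base classes and arguments.

from InnerEye.ML.configs.segmentation.HelloWorld import HelloWorld

class HelloWorldLongerTraining(HelloWorld):
    """Hypothetical new model: same data and architecture as HelloWorld, trained for longer."""

    def __init__(self) -> None:
        super().__init__()
        self.num_epochs = 20  # assumption: the base config exposes a num_epochs field

Such a config could then be selected via the runner's --model argument, analogous to the --model=HelloWorld invocation in the Getting started section, subject to the toolbox's rules for locating configuration classes.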

Once training in AzureML is done, the models can be deployed from within AzureML or via Azure Stack Hub.
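As a rough, generic illustration of deploying a trained model with the azureml-core SDK (this is not the toolbox's own deployment path; the checkpoint path, entry script, and resource sizes below are assumptions):

from azureml.core import Environment, Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()  # reads the workspace details from a local config.json
model = Model.register(workspace=ws,
                       model_path="outputs/model.ckpt",  # assumed checkpoint location
                       model_name="hello_world_segmentation")
env = Environment.from_conda_specification("InnerEye", "environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)  # score.py is hypothetical
deployment_config = AciWebservice.deploy_configuration(cpu_cores=2, memory_gb=8)
service = Model.deploy(ws, "innereye-demo-service", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)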

Getting started

We recommend using our toolbox with Linux or with the Windows Subsystem for Linux (WSL2). Much of the core functionality works fine on Windows, but PyTorch's full feature set is only available on Linux. Read more about WSL here.

Clone the repository into a subfolder of the current directory:

git clone https://github.com/microsoft/InnerEye-DeepLearning
cd InnerEye-DeepLearning
git lfs install
git lfs pull

After that, you need to set up your Python environment:

  • Install conda or miniconda for your operating system.
  • Create a Conda environment from the environment.yml file in the repository root, and activate it:
conda env create --file environment.yml
conda activate InnerEye
  • If environment creation fails with odd error messages on a Windows machine, please continue here.

Now try to run the HelloWorld segmentation model - that's a very simple model that will train for 2 epochs on any machine, no GPU required. You need to set the PYTHONPATH environment variable to point to the repository root first. Assuming that your current directory is the repository root folder, on Linux bash that is:

export PYTHONPATH=`pwd`
python InnerEye/ML/runner.py --model=HelloWorld

(Note the "backtick" around the pwd command, this is not a standard single quote!)

On Windows:

set PYTHONPATH=%cd%
python InnerEye/ML/runner.py --model=HelloWorld

If that works: Congratulations! You have successfully built your first model using the InnerEye toolbox.

If it fails, please check the troubleshooting page on the Wiki.

Further detailed instructions, including setup in Azure, are here:

  1. Setting up your environment
  2. Training a Hello World segmentation model
  3. Setting up Azure Machine Learning
  4. Creating a dataset
  5. Building models in Azure ML
  6. Sample Segmentation and Classification tasks
  7. Debugging and monitoring models
  8. Model diagnostics
  9. Deployment

More information

  1. Project InnerEye
  2. Releases
  3. Changelog
  4. Testing
  5. How to do pull requests
  6. Contributing

Licensing

MIT License

You are responsible for the performance, the necessary testing, and, if needed, any regulatory clearance for any of the models produced by this toolbox.

Contact

Please send an email to [email protected] if you would like further information about this project.

If you have any feature requests, or find issues in the code, please create an issue on GitHub.

If you are interested in using the InnerEye Deep Learning Toolkit to develop your own products and services, please email [email protected]. We can also provide input on using the toolbox with Azure Stack Hub, a hybrid cloud solution that allows for on-premise medical image analysis that complies with data handling regulations.

Publications

Oktay O., Nanavati J., Schwaighofer A., Carter D., Bristow M., Tanno R., Jena R., Barnett G., Noble D., Rimmer Y., Glocker B., O’Hara K., Bishop C., Alvarez-Valle J., Nori A.: Evaluation of Deep Learning to Augment Image-Guided Radiotherapy for Head and Neck and Prostate Cancers. JAMA Netw Open. 2020;3(11):e2027426. doi:10.1001/jamanetworkopen.2020.27426

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Credits

This toolbox is maintained by the Microsoft InnerEye team, and has received valuable contributions from a number of people outside our team. We would like to thank in particular our interns, Yao Quin, Zoe Landgraf, Padmaja Jonnalagedda, Mathias Perslev, as well as the AI Residents Patricia Gillespie and Guilherme Ilunga.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].