
uio-bmi / immuneML

License: AGPL-3.0
immuneML is a platform for machine learning analysis of adaptive immune receptor repertoire data.

Programming Languages

Python
139335 projects - #7 most used programming language
HTML
75241 projects

Projects that are alternatives of or similar to immuneML

immunarch
🧬 Immunarch by ImmunoMind: R Package for Fast and Painless Exploration of Single-cell and Bulk T-cell/Antibody Immune Repertoires
Stars: ✭ 204 (+397.56%)
Mutual labels:  tcr, bcr, immune-repertoire
Openml R
R package to interface with OpenML
Stars: ✭ 81 (+97.56%)
Mutual labels:  benchmarking, classification
Loan-Approval-Prediction
Loan Application Data Analysis
Stars: ✭ 61 (+48.78%)
Mutual labels:  classification
blockchain-load-testing
Code for load testing the Stellar network.
Stars: ✭ 36 (-12.2%)
Mutual labels:  benchmarking
ros tensorflow
This repo introduces how to integrate Tensorflow framework into ROS with object detection API.
Stars: ✭ 39 (-4.88%)
Mutual labels:  classification
load-testing-toolkit
Collection of open-source tools for debugging, benchmarking, load and stress testing your code or services.
Stars: ✭ 65 (+58.54%)
Mutual labels:  benchmarking
onelearn
Online machine learning methods
Stars: ✭ 14 (-65.85%)
Mutual labels:  classification
ugtm
ugtm: a Python package for Generative Topographic Mapping
Stars: ✭ 34 (-17.07%)
Mutual labels:  classification
Fraud-Detection-in-Online-Transactions
Detecting Frauds in Online Transactions using Anomaly Detection Techniques such as Over-Sampling and Under-Sampling; as the ratio of frauds is less than 0.00005, simply applying a classification algorithm may result in overfitting
Stars: ✭ 41 (+0%)
Mutual labels:  classification
ML4K-AI-Extension
Use machine learning in AppInventor, with easy training using text, images, or numbers through the Machine Learning for Kids website.
Stars: ✭ 18 (-56.1%)
Mutual labels:  classification
ml-workflow-automation
Python Machine Learning (ML) project that demonstrates the archetypal ML workflow within a Jupyter notebook, with automated model deployment as a RESTful service on Kubernetes.
Stars: ✭ 44 (+7.32%)
Mutual labels:  classification
auditor
Model verification, validation, and error analysis
Stars: ✭ 56 (+36.59%)
Mutual labels:  classification
tcr-workshop
Information and instructions for trying TCR workflow (test && commit || revert)
Stars: ✭ 33 (-19.51%)
Mutual labels:  tcr
grandma
👵 fully programmable stress testing framework
Stars: ✭ 20 (-51.22%)
Mutual labels:  benchmarking
machine learning from scratch matlab python
Vectorized Machine Learning in Python 🐍 From Scratch
Stars: ✭ 28 (-31.71%)
Mutual labels:  classification
Point2Sequence
Point2Sequence: Learning the Shape Representation of 3D Point Clouds with an Attention-based Sequence to Sequence Network
Stars: ✭ 34 (-17.07%)
Mutual labels:  classification
COVID-19-Tweet-Classification-using-Roberta-and-Bert-Simple-Transformers
Rank 1 / 216
Stars: ✭ 24 (-41.46%)
Mutual labels:  classification
catordog
A cat/dog classification algorithm based on TensorFlow and Python
Stars: ✭ 20 (-51.22%)
Mutual labels:  classification
reframe
A powerful Python framework for writing and running portable regression tests and benchmarks for HPC systems.
Stars: ✭ 154 (+275.61%)
Mutual labels:  benchmarking
YOLOv1 tensorflow
YOLOv1 tensorflow
Stars: ✭ 14 (-65.85%)
Mutual labels:  classification

immuneML


immuneML is a platform for machine learning-based analysis and classification of adaptive immune receptors and repertoires (AIRR).

It supports the analysis of experimental B- and T-cell receptor data, as well as synthetic data for benchmarking purposes.

In immuneML, users can define flexible workflows that support different machine learning libraries (such as scikit-learn or PyTorch), benchmarking of different approaches, numerous reports on data characteristics, ML algorithms and their predictions, and visualizations of results.

Additionally, users can extend the platform by defining their own data representations, ML models, reports and visualizations.

Useful links:

Installation

immuneML can be installed directly using pip. It requires Python 3.7 or 3.8; we recommend installing immuneML inside a virtual environment with one of these Python versions.
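
For example, a virtual environment with Python 3.8 could be created and activated as follows (the environment name immuneml_env is only an illustrative placeholder); immuneML can then be installed with pip as described below:

python3.8 -m venv immuneml_env
source immuneml_env/bin/activate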

For more detailed instructions (virtual environment, troubleshooting, Docker, developer installation), please see the installation documentation.

Installation using pip

To install the immuneML core package, run:

pip install immuneML

Alternatively, to use the TCRdistClassifier ML method and corresponding TCRdistMotifDiscovery report, install immuneML with the optional TCRdist extra:

pip install immuneML[TCRdist]
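
Note that some shells (zsh, for example) treat square brackets as glob patterns; in such shells, quote the package name:

pip install 'immuneML[TCRdist]'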

Optionally, if you want to use the DeepRC ML method and the corresponding DeepRCMotifDiscovery report, you also have to install the DeepRC dependencies using the requirements_DeepRC.txt file. Important note: DeepRC uses PyTorch functionality that requires a GPU, so it does not work on a CPU-only machine. To install the DeepRC dependencies, run:

pip install -r requirements_DeepRC.txt --no-dependencies
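
If immuneML was installed from PyPI rather than from a cloned repository, the requirements file can first be downloaded from the immuneML GitHub repository; the URL below assumes the file lives at the root of the uio-bmi/immuneML repository on the master branch:

curl -O https://raw.githubusercontent.com/uio-bmi/immuneML/master/requirements_DeepRC.txt
pip install -r requirements_DeepRC.txt --no-dependencies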

Validating the installation

To validate the installation, run:

immune-ml -h

This should display a help message explaining immuneML usage.

To quickly test out whether immuneML is able to run, try running the quickstart command:

immune-ml-quickstart ./quickstart_results/

This will generate a synthetic dataset and run a simple machine learning analysis on the generated data. The results folder will contain two sub-folders: one for the generated dataset (synthetic_dataset) and one for the results of the machine learning analysis (machine_learning_analysis). The files named specs.yaml are the input files for immuneML that describe how to generate the dataset and how to run the machine learning analysis. The index.html files can be used to navigate through all the results that were produced.
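
Based on the description above, the quickstart results folder is expected to look roughly like this (only the main entries are shown; the exact contents may differ):

quickstart_results/
├── synthetic_dataset/
│   ├── specs.yaml               # specification used to generate the dataset
│   ├── index.html
│   └── ...
└── machine_learning_analysis/
    ├── specs.yaml               # specification used for the ML analysis
    ├── index.html
    └── ...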

Usage

Quickstart

The quickest way to familiarize yourself with immuneML usage is to follow one of the Quickstart tutorials. These tutorials provide a step-by-step guide on how to use immuneML for a simple machine learning analysis on an adaptive immune receptor repertoire (AIRR) dataset, using either the command line tool or the Galaxy web interface.

Overview of input, analyses and results

The figure below shows an overview of immuneML usage. All parameters for an immuneML analysis are defined in a YAML specification file. In this file, the settings of the analysis components are defined (also known as definitions, shown in six different colors in the figure). Additionally, the YAML file describes one or more instructions, which are workflows that are applied to the defined analysis components. Each instruction uses at least a dataset component, and optionally additional components. AIRR datasets may either be imported from files or generated synthetically at runtime.

Each instruction produces different types of results, including trained ML models, ML model predictions on a given dataset, plots or other reports describing the dataset or trained models, and modified datasets. To help users navigate the results, immuneML generates a summary HTML file.

[Figure: overview of immuneML usage, showing the analysis components (definitions), instructions and results]

For a detailed explanation of the YAML specification file, see the tutorial How to specify an analysis with YAML.
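
As an illustration, a minimal (and deliberately incomplete) specification could be structured as sketched below. All component names (my_dataset, my_kmer_encoding, and so on) are placeholders, and the exact parameters for each component should be taken from the YAML specification documentation:

definitions:
  datasets:
    my_dataset:                      # dataset imported from files, here in AIRR format
      format: AIRR
      params:
        path: path/to/data/
        metadata_file: path/to/metadata.csv
  encodings:
    my_kmer_encoding:                # represent repertoires by k-mer frequencies
      KmerFrequency:
        k: 3
  ml_methods:
    my_logistic_regression: LogisticRegression
instructions:
  my_training_instruction:
    type: TrainMLModel               # a complete instruction needs further keys (labels, cross-validation settings, metrics, ...)
    dataset: my_dataset
    settings:
      - encoding: my_kmer_encoding
        ml_method: my_logistic_regression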

See also the following tutorials for specific instructions:

  • Training ML models for repertoire classification (e.g., disease prediction) or receptor sequence classification (e.g., antigen binding prediction). In immuneML, the performance of different machine learning (ML) settings can be compared through nested cross-validation; these ML settings consist of data preprocessing steps, encodings, ML models and their hyperparameters.
  • Exploratory analysis of datasets by applying preprocessing and encoding, and plotting descriptive statistics without training ML models.
  • Simulating immune events, such as disease states, into experimental or synthetic repertoire datasets. By implanting known immune signals into a given dataset, a ground truth benchmarking dataset is created. Such a dataset can be used to test the performance of ML settings under known conditions.
  • Applying trained ML models to new datasets with unknown class labels.
  • And other tutorials

Command line usage

The immune-ml command takes only two parameters: the YAML specification file and a result path. An example is given here:

immune-ml path/to/specification.yaml result/folder/path/

For each instruction specified in the YAML specification file, a subfolder is created under result/folder/path. Each subfolder will contain the following (a sketch of the resulting layout is shown after this list):

  • An index.html file which shows an overview of the results produced by that instruction. Inspecting the results of an immuneML analysis typically starts here.
  • A copy of the used YAML specification (full_specification.yaml) with all default parameters explicitly set.
  • A folder containing all raw results produced by the instruction.
  • A folder containing the imported dataset(s) in optimized binary (Pickle) format.
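
Based on the list above, the result folder is expected to look roughly like this (my_instruction stands for whatever instruction name is used in the YAML specification; the unnamed folders are placeholders):

result/folder/path/
└── my_instruction/                  # one subfolder per instruction, named after the YAML specification
    ├── index.html                   # overview of the results of this instruction
    ├── full_specification.yaml      # used specification with all defaults explicitly set
    ├── .../                         # raw results produced by the instruction
    └── .../                         # imported dataset(s) in binary format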

Support

We will prioritize fixing important bugs, and try to answer any questions as soon as possible. We may implement suggested features and enhancements as time permits.

If you run into problems when using immuneML, please check the documentation first.

If the documentation does not answer your question, feel free to contact the development team.

To report a potential bug or suggest new features, please submit an issue on GitHub.

If you would like to make contributions, for example by adding a new ML method, encoding, report or preprocessing, please see our developer documentation and submit a pull request.

Requirements

Citing immuneML

If you are using immuneML in any published work, please cite:

Pavlović, M., Scheffer, L., Motwani, K. et al. The immuneML ecosystem for machine learning analysis of adaptive immune receptor repertoires. Nat Mach Intell 3, 936–944 (2021). https://doi.org/10.1038/s42256-021-00413-z


© Copyright 2021, Milena Pavlovic, Lonneke Scheffer, Keshav Motwani, Victor Greiff, Geir Kjetil Sandve
