
Rostlab / SeqVec

License: MIT
Modelling the Language of Life - Deep Learning Protein Sequences

Projects that are alternatives to, or similar to, SeqVec

mmterm
View proteins and trajectories in the terminal
Stars: ✭ 87 (+17.57%)
Mutual labels:  protein-structure, protein
lightdock
Protein-protein, protein-peptide and protein-DNA docking framework based on the GSO algorithm
Stars: ✭ 110 (+48.65%)
Mutual labels:  protein-structure, protein
VSCoding-Sequence
VSCode Extension for interactively visualising protein structure data in the editor
Stars: ✭ 41 (-44.59%)
Mutual labels:  protein-structure, protein
r3dmol
🧬 An R package for visualizing molecular data in 3D
Stars: ✭ 45 (-39.19%)
Mutual labels:  protein-structure, protein
gcWGAN
Guided Conditional Wasserstein GAN for De Novo Protein Design
Stars: ✭ 38 (-48.65%)
Mutual labels:  protein-structure, protein
Biopython
Official git repository for Biopython (originally converted from CVS)
Stars: ✭ 2,936 (+3867.57%)
Mutual labels:  protein-structure, protein
cbh21-protein-solubility-challenge
Template with code & dataset for the "Structural basis for solubility in protein expression systems" challenge at the Copenhagen Bioinformatics Hackathon 2021.
Stars: ✭ 15 (-79.73%)
Mutual labels:  protein-structure, protein
Jupyter Dock
Jupyter Dock is a set of Jupyter Notebooks for performing molecular docking protocols interactively, as well as visualizing, converting file formats and analyzing the results.
Stars: ✭ 179 (+141.89%)
Mutual labels:  protein-structure, protein
deepblast
Neural Networks for Protein Sequence Alignment
Stars: ✭ 29 (-60.81%)
Mutual labels:  protein-structure, protein
RamaNet
Performs de novo protein design using machine learning and PyRosetta to generate novel protein structures
Stars: ✭ 41 (-44.59%)
Mutual labels:  protein-structure
DeepAccNet
PyTorch/Python 3 implementation of DeepAccNet, a protein model accuracy evaluator.
Stars: ✭ 57 (-22.97%)
Mutual labels:  protein
Bio3DView.jl
A Julia package to view macromolecular structures in the REPL, in a Jupyter notebook/JupyterLab or in Pluto
Stars: ✭ 30 (-59.46%)
Mutual labels:  protein-structure
cgdms
Differentiable molecular simulation of proteins with a coarse-grained potential
Stars: ✭ 44 (-40.54%)
Mutual labels:  protein
ProteinSecondaryStructure-CNN
Protein Secondary Structure predictor using Convolutional Neural Networks
Stars: ✭ 82 (+10.81%)
Mutual labels:  protein-structure
tape-neurips2019
Tasks Assessing Protein Embeddings (TAPE), a set of five biologically relevant semi-supervised learning tasks spread across different domains of protein biology. (DEPRECATED)
Stars: ✭ 117 (+58.11%)
Mutual labels:  protein-structure
biovec
ProtVec can be used in protein interaction predictions, structure prediction, and protein data visualization.
Stars: ✭ 23 (-68.92%)
Mutual labels:  protein
protein-transformer
Predicting protein structure through sequence modeling
Stars: ✭ 77 (+4.05%)
Mutual labels:  protein-structure
mmtf
The specification of the MMTF format for biological structures
Stars: ✭ 40 (-45.95%)
Mutual labels:  protein-structure
enspara
Modeling molecular ensembles with scalable data structures and parallel computing
Stars: ✭ 28 (-62.16%)
Mutual labels:  protein-structure
FLIP
A collection of tasks to probe the effectiveness of protein sequence representations in modeling aspects of protein design
Stars: ✭ 35 (-52.7%)
Mutual labels:  protein

SeqVec

Repository for the paper Modeling aspects of the language of life through transfer-learning protein sequences. It holds the pre-trained SeqVec model for creating embeddings of amino acid sequences, and also contains a checkpoint for fine-tuning.

Abstract

Background: One common task in computational biology is the prediction of aspects of protein function and structure from their amino acid sequence. For 26 years, most state-of-the-art approaches toward this end have combined machine learning and evolutionary information. However, the retrieval of related proteins from ever-growing sequence databases is becoming so time-consuming that the analysis of entire proteomes becomes challenging. On top of that, evolutionary information is less powerful for small families, e.g. for proteins from the Dark Proteome.

Results: We introduce a novel way to represent protein sequences as continuous vectors (embeddings) using the deep bi-directional language model ELMo, taken from natural language processing (NLP). The model effectively captured the biophysical properties of protein sequences from unlabeled big data (UniRef50). After training, this knowledge is transferred to single protein sequences by predicting relevant sequence features. We refer to these new embeddings as SeqVec (Sequence-to-Vector) and demonstrate their effectiveness by training simple convolutional neural networks on existing data sets for two completely different prediction tasks. At the per-residue level, we significantly improved secondary structure (for the NetSurfP-2.0 data set: Q3=79%±1, Q8=68%±1) and disorder predictions (MCC=0.59±0.03) over methods not using evolutionary information. At the per-protein level, we predicted subcellular localization in ten classes (for the DeepLoc data set: Q10=68%±1) and distinguished membrane-bound from water-soluble proteins (Q2=87%±1). All results were built upon embeddings from the new tool SeqVec, which uses evolutionary information neither explicitly nor implicitly. Nevertheless, SeqVec improved over some methods that use such information. Where the lightning-fast HHblits needed on average about two minutes to generate the evolutionary information for a target protein, SeqVec created the vector representation in 0.03 seconds on average.

Conclusion: We have shown that transfer learning can be used to capture biochemical or biophysical properties of protein sequences from large unlabeled sequence databases. The effectiveness of the proposed approach was showcased for different prediction tasks using only single protein sequences. SeqVec embeddings enable predictions that outperform even some methods using evolutionary information. Thus, they prove to condense the underlying principles of protein sequences. This might be the first step towards competitive predictions based only on single protein sequences.

t-SNE projections of SeqVec

2D t-SNE projections of unsupervised SeqVec embeddings highlight different realities of proteins and their constituent parts, amino acids. Panels (b) to (d) are based on the same data set (Structural Classification of Proteins – extended (SCOPe) 2.07, redundancy reduced at 40%). For these plots, only subsets of SCOPe containing proteins with the annotation of interest (enzymatic activity (c) and kingdom (d)) are displayed. Panel (a): the embedding space confirms that the 20 standard amino acids cluster according to their biochemical and biophysical properties, i.e. hydrophobicity, charge or size. The unique role of Cysteine (C, mostly hydrophobic and polar) is conserved. Panel (b): SeqVec embeddings capture structural information as annotated in the main classes of SCOPe, without ever having been explicitly trained on structural features. Panel (c): many small, local clusters share function as given by the main classes of the Enzyme Commission number (E.C.). Panel (d): similarly, small, local clusters represent different kingdoms of life.

Model availability

The ELMo model trained on UniRef50 (=SeqVec) is available at: SeqVec-model

The checkpoint for the pre-trained model is available at: SeqVec-checkpoint

Installation

If you are interested in running seqvec, you can use the convenience seqvec pip package:

pip install seqvec

Additionally, we provide a pipeline, bio_embeddings, that integrates SeqVec as well as other language models through a shared interface. The pipeline also includes secondary structure and subcellular localization prediction models, and further tools for embedding-space visualization and annotation transfer via embeddings.

Example

In the bio_embeddings github repo, you can find several examples in the examples folder.

For a general example on how to extract embeddings using ELMo, please check the official ELMo implementation: ELMo-Tutorial

Via the seqvec pip package, you can compute embeddings for a fasta file with the seqvec command. Add --protein True to get an embedding per sequence instead of per residue.

seqvec -i sequences.fasta -o embeddings.npz
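
For per-protein embeddings, use the same command with the flag described above:

seqvec -i sequences.fasta -o embeddings.npz --protein True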

Load the embeddings with numpy:

import numpy as np
data = np.load("embeddings.npz")  # type: Dict[str, np.ndarray]
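
The archive behaves like a dictionary; its keys should be the identifiers from the fasta headers. A minimal sketch for iterating over all entries (identifier names depend on your fasta file):

import numpy as np

data = np.load("embeddings.npz")
for identifier in data.files:      # the keys stored in the archive
    embedding = data[identifier]   # per-residue embedding, shape [L, 1024]
    print(identifier, embedding.shape)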

If you specify .npy as the output format (e.g. with -o embeddings.npy), the script will save the embeddings as a numpy array and the corresponding identifiers (as extracted from the header lines in the fasta file) in a json file next to it. The order of identifiers in the json file corresponds to the indexing of the npy file. The npy file can be loaded via:

import json
import numpy as np

data = np.load("embeddings.npy") # shape=(n_proteins,)
with open("embeddings.json") as fp:
    labels = json.load(fp)
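
Since both files share the same order, identifiers and embeddings can be paired directly; a one-line sketch, assuming labels is the list of identifiers loaded above:

id2emb = dict(zip(labels, data))  # identifier -> embedding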

How to integrate the embedder into an existing workflow:

Load pre-trained model:

from allennlp.commands.elmo import ElmoEmbedder
from pathlib import Path

model_dir = Path('path/to/pretrained/SeqVec_directory')
weights = model_dir / 'weights.hdf5'
options = model_dir / 'options.json'
embedder = ElmoEmbedder(options, weights, cuda_device=0)  # set cuda_device=-1 to run on CPU

Get embedding for amino acid sequence:

seq = 'SEQWENCE' # your amino acid sequence
embedding = embedder.embed_sentence(list(seq)) # numpy array with shape [3,L,1024]: one slice per ELMo layer (CharCNN + two biLSTMs)

Batch embed sequences:

seq1 = 'SEQWENCE' # your amino acid sequence
seq2 = 'PROTEIN'
seqs = [list(seq1), list(seq2)]
seqs.sort(key=len) # sorting is crucial for speed
embedding = embedder.embed_sentences(seqs) # yields one [3,L,1024] array per sequence
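
Note that sorting reorders the results relative to your input. If the original order matters, you can track the permutation yourself; a small sketch (not part of the seqvec API):

seqs = [list(seq1), list(seq2)]                               # original, unsorted order
order = sorted(range(len(seqs)), key=lambda i: len(seqs[i]))  # indices, shortest first
embeddings = list(embedder.embed_sentences([seqs[i] for i in order]))
by_input_order = [None] * len(seqs)
for idx, emb in zip(order, embeddings):
    by_input_order[idx] = emb  # embeddings restored to the original input order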

Get 1024-dimensional embedding for per-residue predictions:

import torch
residue_embd = torch.tensor(embedding).sum(dim=0) # sum over the 3 ELMo layers -> Tensor with shape [L,1024]

Get 1024-dimensional embedding for per-protein predictions:

protein_embd = torch.tensor(embedding).sum(dim=0).mean(dim=0) # sum over layers, average over residues -> Vector with shape [1024]
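
The same reductions work on batch output; a sketch (names illustrative) that turns every array yielded by embed_sentences into one per-protein vector:

# one 1024-dimensional vector per sequence, in the order the embeddings are yielded
protein_vectors = [e.sum(axis=0).mean(axis=0) for e in embedder.embed_sentences(seqs)]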

FAQ

Torch version conflict: If you encounter a version conflict while pip-installing seqvec (ERROR: No matching distribution found for torch<1.3,>=1.2 (from seqvec)), creating a new conda environment with Python 3.7 can resolve the issue.

Slow embedding of very long sequences: We've added an option that automatically falls back to CPU mode if even single-sequence processing fails on the GPU due to memory problems. While this allows you to embed even very long proteins, e.g. Q8WZ42 (Titin, ~35k residues), it slows down the embedding process.
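
Conceptually, the fallback behaves like the sketch below; the seqvec package handles this internally, so this is only an illustration, not its actual code:

from allennlp.commands.elmo import ElmoEmbedder

def embed_with_fallback(seq, options, weights):
    # Try the GPU first; retry on CPU if GPU memory runs out.
    try:
        return ElmoEmbedder(options, weights, cuda_device=0).embed_sentence(list(seq))
    except RuntimeError:  # e.g. CUDA out of memory on very long sequences
        return ElmoEmbedder(options, weights, cuda_device=-1).embed_sentence(list(seq))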

Web-service for Predictions based on SeqVec

SeqVec predictions - Chris' Protein properties

BibTeX reference

@article{heinzinger2019modeling,
  title={Modeling aspects of the language of life through transfer-learning protein sequences},
  author={Heinzinger, Michael and Elnaggar, Ahmed and Wang, Yu and Dallago, Christian and Nechaev, Dmitrii and Matthes, Florian and Rost, Burkhard},
  journal={BMC bioinformatics},
  volume={20},
  number={1},
  pages={723},
  year={2019},
  publisher={Springer}
}