
projectglow / Glow

License: Apache-2.0
An open-source toolkit for large-scale genomic analysis

Programming Languages

scala

Projects that are alternatives to or similar to Glow

Hail
Scalable genomic data analysis.
Stars: ✭ 706 (+344.03%)
Mutual labels:  spark, genomics
Tiledb Vcf
Efficient variant-call data storage and retrieval library using the TileDB storage library.
Stars: ✭ 26 (-83.65%)
Mutual labels:  spark, genomics
Gatk
Official code repository for GATK versions 4 and up
Stars: ✭ 1,002 (+530.19%)
Mutual labels:  spark, genomics
Datacompy
Pandas and Spark DataFrame comparison for humans
Stars: ✭ 147 (-7.55%)
Mutual labels:  spark
Pyspark Learning
Updated repository
Stars: ✭ 147 (-7.55%)
Mutual labels:  spark
Viral Ngs
Viral genomics analysis pipelines
Stars: ✭ 150 (-5.66%)
Mutual labels:  genomics
Handyspark
HandySpark - bringing pandas-like capabilities to Spark dataframes
Stars: ✭ 158 (-0.63%)
Mutual labels:  spark
Technology Talk
A compilation of knowledge about the Java ecosystem: commonly used technology frameworks, open-source middleware, system architecture, databases, architecture case studies from large companies, common third-party libraries, project management, production troubleshooting, personal growth, reflections, and more.
Stars: ✭ 12,136 (+7532.7%)
Mutual labels:  spark
Sparkmonitor
Monitor Apache Spark from Jupyter Notebook
Stars: ✭ 154 (-3.14%)
Mutual labels:  spark
Spark Ml Source Analysis
In-depth analysis of Spark ML algorithm principles and their concrete source-code implementations.
Stars: ✭ 1,873 (+1077.99%)
Mutual labels:  spark
Benchm Ml
A minimal benchmark for scalability, speed and accuracy of commonly used open source implementations (R packages, Python scikit-learn, H2O, xgboost, Spark MLlib etc.) of the top machine learning algorithms for binary classification (random forests, gradient boosted trees, deep neural networks etc.).
Stars: ✭ 1,835 (+1054.09%)
Mutual labels:  spark
Cc Pyspark
Process Common Crawl data with Python and Spark
Stars: ✭ 147 (-7.55%)
Mutual labels:  spark
Powderkeg
Live-coding the cluster!
Stars: ✭ 152 (-4.4%)
Mutual labels:  spark
Smoove
structural variant calling and genotyping with existing tools, but, smoothly.
Stars: ✭ 147 (-7.55%)
Mutual labels:  genomics
Learningapachespark
LearningApacheSpark
Stars: ✭ 155 (-2.52%)
Mutual labels:  spark
Spark Cassandra Connector
DataStax Spark Cassandra Connector
Stars: ✭ 1,816 (+1042.14%)
Mutual labels:  spark
Quill
Compile-time Language Integrated Queries for Scala
Stars: ✭ 1,998 (+1156.6%)
Mutual labels:  spark
Aztk
AZTK powered by Azure Batch: On-demand, Dockerized, Spark Jobs on Azure
Stars: ✭ 152 (-4.4%)
Mutual labels:  spark
Spark With Python
Fundamentals of Spark with Python (using PySpark), code examples
Stars: ✭ 150 (-5.66%)
Mutual labels:  spark
Spark Tsne
Distributed t-SNE via Apache Spark
Stars: ✭ 151 (-5.03%)
Mutual labels:  spark

An open-source toolkit for large-scale genomic analyses
Explore the docs »

Issues · Mailing list · Slack

Glow is an open-source toolkit to enable bioinformatics at biobank-scale and beyond.


Easy to get started

The toolkit includes the building blocks that you need to perform the most common analyses right away (a short usage sketch follows this list):

  • Load VCF, BGEN, and Plink files into distributed DataFrames
  • Perform quality control and data manipulation with built-in functions
  • Perform variant normalization and liftOver
  • Perform genome-wide association studies
  • Integrate with Spark ML libraries for population stratification
  • Parallelize command line tools to scale existing workflows
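As a sketch of what getting started looks like, the snippet below loads a VCF into a distributed DataFrame using Glow's Python API. It assumes glow.py and PySpark are installed in the active environment; the file path is a placeholder.

import glow
from pyspark.sql import SparkSession

# Attach Glow's functions and data sources to the Spark session.
spark = glow.register(SparkSession.builder.getOrCreate())

# Read a VCF into a distributed DataFrame (the path is illustrative).
df = spark.read.format("vcf").load("/data/sample.vcf.gz")
df.printSchema()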

Built to scale

Glow makes genomic data work with Spark, the leading engine for working with large structured datasets. It fits natively into the ecosystem of tools that have enabled thousands of organizations to scale their workflows to petabytes of data. Glow bridges the gap between bioinformatics and the Spark ecosystem.

Flexible

Glow works with datasets in common file formats like VCF, BGEN, and Plink as well as high-performance big data standards. You can write queries using the native Spark SQL APIs in Python, SQL, R, Java, and Scala. The same APIs allow you to bring your genomic data together with other datasets such as electronic health records, real world evidence, and medical images. Glow makes it easy to parallelize existing tools and libraries implemented as command line tools or Pandas functions.
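As an illustration, a VCF loaded as a DataFrame (see the sketch above) can be registered as a view and queried with plain Spark SQL. The column names below follow Glow's VCF schema but should be treated as an assumption, not a guarantee:

df.createOrReplaceTempView("variants")
spark.sql("""
    SELECT contigName, start, referenceAllele, alternateAlleles
    FROM variants
    WHERE contigName = 'chr22'
""").show()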

Building and Testing

This project is built using sbt and Java 8.

To build and run Glow, you must install conda and activate the environment in python/environment.yml.

conda env create -f python/environment.yml
conda activate glow

When the environment file changes, you must update the environment:

conda env update -f python/environment.yml

Start an sbt shell using the sbt command.

The SBT projects are built against Spark 3.0.0/Scala 2.12.8 by default. To change the Spark and Scala versions, set the environment variables SPARK_VERSION and SCALA_VERSION.
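For example, to compile against a different Spark release from a fresh shell (the version numbers here are purely illustrative):

export SPARK_VERSION=3.1.2
export SCALA_VERSION=2.12.15
sbt compile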

To compile the main code:

compile

To run all Scala tests:

core/test

To test a specific suite:

core/testOnly *VCFDataSourceSuite

To run all Python tests:

python/test

These tests will run with the same Spark classpath as the Scala tests.

To test a specific Python test file:

python/pytest python/test_render_template.py

When using the pytest key, all arguments are passed directly to the pytest runner.
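For example, standard pytest options such as -v can be appended after the file path:

python/pytest python/test_render_template.py -v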

To run documentation tests:

docs/test

To run the Scala, Python and documentation tests:

test

To run Scala tests against the staged Maven artifact with the current stable version:

stagedRelease/test

Testing code on a Databricks cluster

To test your changes on a Databricks cluster, you'll need to build and install the Python and Scala artifacts.

To build an uber jar (Glow + dependencies) with your changes:

sbt core/assembly

The uber jar will be at a path like glow/core/target/${scala_version}/${artifact-name}-assembly-${version}-SNAPSHOT.jar.

To build a wheel with the Python code:

  1. Activate the Glow dev conda environment (conda activate glow)
  2. cd into the python directory
  3. Run python setup.py bdist_wheel
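Taken together, and assuming you start from the repository root, the steps look like this:

conda activate glow
cd python
python setup.py bdist_wheel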

The wheel file will be at a path like python/dist/glow.py-${version}-py3-none-any.whl.

You can then install these libraries on a Databricks cluster.

IntelliJ Tips

To run Python unit tests from inside IntelliJ, you must:

  • Open the "Terminal" tab in IntelliJ
  • Activate the glow conda environment (conda activate glow)
  • Start an sbt shell from inside the terminal (sbt)

The "sbt shell" tab in IntelliJ will NOT work since it does not use the glow conda environment.

To run test or testOnly in remote debug mode with IntelliJ IDEA, set the remote debug configuration in IntelliJ to 'Attach to remote JVM' mode with a specific port number (the default port 5005 is used here), and then modify the definition of options in the groupByHash function in build.sbt to:

val options = ForkOptions().withRunJVMOptions(Vector("-Xmx1024m", "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"))

Both JVM options go in a single Vector because withRunJVMOptions replaces, rather than appends to, any previously set options; the address must match the port configured in IntelliJ.