cog-best-practices

Best practices with cloud-optimized-geotiffs (COGs)

The goal of this repository is to determine best practices for accessing the increasing amount of COG data with Pangeo tooling (GDAL, Rasterio, Xarray, Dask).

A Cloud Optimized GeoTIFF (COG) is a regular GeoTIFF file, aimed at being hosted on an HTTP file server (or cloud object storage like S3), with an internal organization that enables more efficient workflows on the cloud. It does this by leveraging the ability of clients to issue HTTP GET range requests for just the parts of a file they need. Read more at https://www.cogeo.org

One great use case of COGs is downloading small pieces of a big file to your laptop. Another is accessing COGs from within the same datacenter where they are stored, over very efficient network connections.
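As a minimal sketch of the first use case, here is a windowed read of a large remote COG with Rasterio; the URL is a hypothetical placeholder, and GDAL issues the range requests under the hood:

```python
import rasterio
from rasterio.windows import Window

url = "https://example.com/data/big_scene.tif"  # hypothetical COG URL

with rasterio.open(url) as src:
    # Only the bytes covering this 512x512 window are fetched from the
    # server, via HTTP GET range requests.
    chunk = src.read(1, window=Window(col_off=0, row_off=0, width=512, height=512))

print(chunk.shape)  # (512, 512)
```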

This repository focuses on distributed computing within the same datacenter, using this great new AWS public dataset in us-west-2: https://registry.opendata.aws/sentinel-1/ (Sentinel-1 Synthetic Aperture Radar images covering the United States).
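As a hedged sketch, one way to open such a COG as a lazy, Dask-backed array with rioxarray (the bucket and key below are hypothetical placeholders; browse the registry entry above for real keys):

```python
import os
import rioxarray

# Anonymous access works for publicly readable buckets; drop this if the
# bucket requires credentials.
os.environ["AWS_NO_SIGN_REQUEST"] = "YES"

url = "s3://sentinel-1-bucket/path/to/Gamma0_VV.tif"  # hypothetical key

# chunks= returns a lazy dask-backed DataArray; bytes are only fetched
# (via GET range requests) when a computation actually needs them.
da = rioxarray.open_rasterio(url, chunks={"x": 1024, "y": 1024})
print(da)
```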

Computing environment

We can use Pangeo Cloud and Pangeo Binder on AWS us-west-2 to iterate on examples in a common computing environment. Click the button below to run the notebooks in this repository interactively via Pangeo Binder on AWS:

[Pangeo Binder launch badge]

For notebooks that don't require Dask clusters, you can use mybinder.org (which runs in GCP and other data centers) with limited compute resources:

[mybinder.org launch badge]

Organization

For starters, there are four notebooks in this repository with the following focus (a sketch combining the second and third follows the list):

  1. Accessing a single COG
  2. Working with multiple COGs (concatenated in time)
  3. Dask LocalCluster
  4. Dask GatewayCluster
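As a hedged illustration of the middle two notebooks, here is a minimal sketch that stacks several COGs along a time dimension and reduces them on a Dask LocalCluster; the URLs and dates are hypothetical placeholders:

```python
import pandas as pd
import rioxarray
import xarray as xr
from dask.distributed import Client, LocalCluster

urls = [
    "https://example.com/cogs/scene-2020-01-01.tif",  # hypothetical URLs
    "https://example.com/cogs/scene-2020-01-13.tif",
]
times = xr.DataArray(pd.to_datetime(["2020-01-01", "2020-01-13"]),
                     dims="time", name="time")

cluster = LocalCluster()  # workers on the local machine
client = Client(cluster)

# Each COG opens lazily; concat adds a new "time" dimension.
arrays = [rioxarray.open_rasterio(u, chunks={"x": 2048, "y": 2048}) for u in urls]
stack = xr.concat(arrays, dim=times)

# The reduction runs in parallel on the cluster's workers.
mean_over_time = stack.mean(dim="time").compute()
```

Swapping LocalCluster for a dask_gateway GatewayCluster moves the same computation onto remote workers, which is the subject of the fourth notebook.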

Unit tests and examples are often simplified to an extreme and consequently fail to translate to 'real-world' problems. At the other extreme, full scientific analyses and large-scale computations are complex and difficult to follow. The goal of these examples is to explore the middle ground: simple operations that are commonplace on ~10-1000 GB datasets.

Goals

  1. Figure out ways to improve these notebooks for better efficiency and clarity (this might involve opening issues and pull requests in other projects)
  2. Add new notebooks for common workflows, e.g. creating COGs, rechunking COGs, applying custom functions, reprojection... (a COG-creation sketch follows below)
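As a starting point for the first of those workflows, here is a minimal sketch of COG creation with rio-cogeo (an assumption; an eventual notebook may use other tooling, and the paths are hypothetical placeholders):

```python
from rio_cogeo.cogeo import cog_translate
from rio_cogeo.profiles import cog_profiles

# "deflate" is one of rio-cogeo's built-in output profiles: internally
# tiled and compressed, suited to efficient partial reads.
profile = cog_profiles.get("deflate")

# Convert a plain GeoTIFF into a Cloud Optimized GeoTIFF.
cog_translate("input.tif", "output_cog.tif", profile)
```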