
rubisco-sfa / ILAMB

License: BSD-3-Clause
Python software used in the International Land Model Benchmarking (ILAMB) project

ILAMB 2.6 Release

It has been a while since our last release, but ILAMB continues to evolve. Many of the changes are 'under the hood' or bug fixes that are not readily seen. Below we present a few key changes, drawing particular attention to those that will change scores. We have also worked to make ILAMB ready to integrate with tools being developed as part of the Coordinated Model Evaluation Capabilities (CMEC).

Changes - May 2021

CMEC

  • Added CMEC-compliant JSON output to the standard outputs
  • Added an alternative landing page for ILAMB results which uses the LMT Unified Dashboard
  • Added support files for using cmec-driver as an alternative run environment
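The CMEC-compliant JSON mentioned above nests scores along declared dimensions. The sketch below shows what such a payload can look like; the key names follow the general CMEC metrics-bundle convention ("DIMENSIONS" declaring a json_structure, "RESULTS" nested in that order), and the model names and score values are placeholders, not ILAMB's actual output.

```python
import json

# Hypothetical sketch of a CMEC-style metrics JSON payload. Key names follow
# the CMEC metrics-bundle convention; models and values are placeholders.
cmec_output = {
    "DIMENSIONS": {
        # The order in which the RESULTS dictionary is nested
        "json_structure": ["model", "metric"],
        "model": {"CESM2": {}, "UKESM1-0-LL": {}},
        "metric": {"Bias Score": {}, "RMSE Score": {}},
    },
    "RESULTS": {
        "CESM2": {"Bias Score": 0.71, "RMSE Score": 0.64},
        "UKESM1-0-LL": {"Bias Score": 0.68, "RMSE Score": 0.61},
    },
}

# Serialize for writing alongside the standard ILAMB outputs
text = json.dumps(cmec_output, indent=2)
```

Because the structure is declared up front in `json_structure`, downstream tools such as the LMT Unified Dashboard can walk the results generically without knowing ILAMB's metric names in advance.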

Quality of Life

  • Overhauled the top page, moving to a single results panel with a colorblind-friendly palette
  • Shifted the score colormaps to be more qualitative and colorblind-friendly
  • ILAMB now has continuous integration testing via Azure Pipelines on each commit and pull request
  • ModelResult objects can be passed a list of paths to search for results; the objects are cached as pickle files
  • Plotting limits are now based on the middle 98% of values across all models, which reduces the chance that a single model with extreme values washes out all the map plots
  • The configure file used to generate a run is now copied into the output directory as ilamb.cfg
  • ILAMB logfiles now provide an estimate of peak memory usage for each confrontation, which is useful for debugging and when running on large clusters with limited memory

Scoring

  • When scoring coupled models, we find it more reasonable to base the RMSE score on the annual cycle. The default is still to score the full time series, but this may be changed at runtime with --rmse_score_basis {series|cycle}
  • We have found that when comparing a set of models which contains a multimodel mean, the mean model's interannual variability is typically lower, which coincidentally better matches that of our reference data products. This makes the multimodel mean look even better relative to individual models, but not for good reasons, so we have disabled interannual variability in our scoring.
  • We have updated a number of reference datasets to their most current versions and added many new datasets and comparisons; run ilamb-fetch to update
  • Added support for using observational uncertainty in scoring (currently disabled)
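The series-versus-cycle distinction above can be illustrated with a toy computation. This is a minimal sketch of the idea behind --rmse_score_basis, not ILAMB's implementation, and the function names are hypothetical: on the cycle basis, monthly time series are first collapsed to a 12-month mean annual cycle before the RMSE is computed.

```python
import numpy as np

def annual_cycle(monthly):
    """Mean annual cycle: average each calendar month across all years.
    Assumes the series starts in January and spans whole years."""
    monthly = np.asarray(monthly, dtype=float)
    return monthly.reshape(-1, 12).mean(axis=0)

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def rmse_score_basis(model, obs, basis="series"):
    """Sketch of the series|cycle choice: 'cycle' compares mean annual
    cycles, 'series' compares the full monthly time series."""
    if basis == "cycle":
        return rmse(annual_cycle(model), annual_cycle(obs))
    return rmse(model, obs)
```

The cycle basis is more forgiving for coupled models, whose free-running internal variability means month-by-month agreement with observations is not expected; comparing climatological cycles avoids penalizing such phase differences.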

Useful Information

  • Documentation - installation and basic usage tutorials
  • Sample Output
    • ILAMB - land comparison against a collection of CMIP5 and CMIP6 models
    • IOMB - ocean comparison against a collection of CMIP5 and CMIP6 models
  • A paper published in JAMES details the design and methodology employed in the ILAMB package. If you find the package or its output helpful in your research or development efforts, we kindly ask that you cite this work.

Description

The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration project designed to improve the performance of land models and, in parallel, improve the design of new measurement campaigns to reduce uncertainties associated with key land surface processes. Building upon past model evaluation studies, the goals of ILAMB are to:

  • develop internationally accepted benchmarks for land model performance,
  • promote the use of these benchmarks by the international community for model intercomparison,
  • strengthen linkages between experimental, remote sensing, and climate modeling communities in the design of new model tests and new measurement programs, and
  • support the design and development of a new, open source, benchmarking software system for use by the international community.

It is the last of these goals with which this repository is concerned. We have developed a Python-based, generic benchmarking system, initially focused on assessing land model performance.

Funding

This research was performed for the Reducing Uncertainties in Biogeochemical Interactions through Synthesis and Computation (RUBISCO) Scientific Focus Area, which is sponsored by the Regional and Global Climate Modeling (RGCM) Program in the Climate and Environmental Sciences Division (CESD) of the Biological and Environmental Research (BER) Program in the U.S. Department of Energy Office of Science.
