baggepinnen / MonteCarloMeasurements.jl

License: MIT
Propagation of distributions by Monte-Carlo sampling: Real number types with uncertainty represented by samples.



Imagine you had a type that behaved like your standard Float64 but it really represented a probability distribution like Gamma(0.5) or MvNormal(m, S). Then you could call y=f(x) and have y be the probability distribution y=p(f(x)). This package gives you such a type.

This package facilitates working with probability distributions by means of Monte-Carlo methods, in a way that allows for propagation of probability distributions through functions. This is useful for, e.g., nonlinear uncertainty propagation. A variable or parameter might be associated with uncertainty if it is measured or otherwise estimated from data. We provide two core types to represent probability distributions: Particles and StaticParticles, both <: Real. (The name "Particles" comes from the particle-filtering literature.) Both types form a Monte-Carlo approximation of the distribution of a floating-point number, i.e., the distribution is represented by samples/particles. Correlated quantities are handled as well; see multivariate particles below.
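The correlation handling follows directly from the representation: each uncertain quantity stores an aligned vector of samples, and elementwise operations keep sample i of one quantity paired with sample i of another. A minimal stdlib-only sketch of that mechanism (illustrative, not the package API):

```julia
# Correlations are preserved automatically when every quantity carries
# aligned samples and operations are applied elementwise.
using Statistics, Random

Random.seed!(0)
x = randn(1000)      # samples representing an uncertain x
y = 2 .* x           # y is perfectly correlated with x
z = y .- 2 .* x      # elementwise: sample i of y minus 2 * sample i of x

std(z)               # exactly 0 -- the correlation cancels the uncertainty
```

Propagation that treated x and y as independent error terms would instead report a nonzero uncertainty for z.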

Although several interesting use cases for doing calculations with probability distributions have popped up (see Examples), the original goal of the package is similar to that of Measurements.jl: to propagate the uncertainty from the input of a function to the output. The difference compared to a Measurement is that Particles represent the distribution using a vector of unweighted particles, and can thus represent arbitrary distributions and handle nonlinear uncertainty propagation well. Functions like f(x) = x² or f(x) = sign(x) at x = 0, as well as long-time integration, are examples that are not handled well using linear uncertainty propagation à la Measurements.jl. MonteCarloMeasurements also supports correlations between quantities.
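To see why sampling handles such cases, consider f(x) = x² at x = 0 ± 1: linearization yields derivative zero and hence zero output uncertainty, while Monte-Carlo sampling recovers the correct distribution. A stdlib-only sketch of the sampling approach (illustrative, not the package API):

```julia
# Monte-Carlo propagation through f(x) = x^2, where linearization fails.
using Statistics, Random

Random.seed!(0)
N = 10_000
x = randn(N)            # samples of x ~ Normal(0, 1)

f(x) = x^2
y = f.(x)               # propagate every sample through f

mean(y), std(y)         # E[x^2] = 1, Std[x^2] = sqrt(2) for x ~ N(0, 1)
# Linearization around x = 0 has f'(0) = 0 and would report y = 0 ± 0.
```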

A number of type Particles behaves just as any other Number while partaking in calculations. After a calculation, an approximation to the complete distribution of the output is captured and represented by the output particles. mean, std etc. can be extracted from the particles using the corresponding functions. Particles also interact with Distributions.jl, so that you can call, e.g., Normal(p) and get back a Normal type from Distributions, or fit(Gamma, p) to get a Gamma distribution. Particles can also be iterated and asked for maximum/minimum, quantile etc. If particles are plotted with plot(p), a histogram is displayed. This requires Plots.jl. A kernel-density estimate can be obtained by density(p) if StatsPlots.jl is loaded.
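Since a Particles object is at heart a vector of samples, these operations reduce to ordinary sample statistics. A stdlib-only sketch of what they compute (the vector below is a stand-in for the particles, not the package API):

```julia
# Sample statistics on a plain vector, standing in for the particles of p.
using Statistics, Random

Random.seed!(0)
samples = 3.0 .+ 0.1 .* randn(2_000)  # stand-in for p = 3.0 ± 0.1

m, s = mean(samples), std(samples)    # what fitting a Normal would use
q90 = quantile(samples, 0.9)          # quantile of the distribution
lo, hi = extrema(samples)             # minimum and maximum particle
```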

Below, we show an example where an input uncertainty is propagated through the nonlinear function σ(x).

[Figure: transformed densities]

In the figure above, we see the probability-density function of the input p(x) depicted on the x-axis. The density of the output y = f(x), p(y), is shown on the y-axis. Linear uncertainty propagation does this by linearizing f(x) and using the equations for an affine transformation of a Gaussian distribution, and hence produces a Gaussian approximation to the output density. The particles form a sampled approximation of the input density p(x). After propagating them through f(x), they form a sampled approximation to p(y) which corresponds very well to the true output density, even though only 20 particles were used in this example. The figure can be reproduced by running examples/transformed_densities.jl.
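The linearization step can be made concrete: for y = f(x) with x ~ Normal(μ, s), linear propagation reports y ≈ Normal(f(μ), |f′(μ)|·s). The sketch below compares this with sampling for the logistic function; the input parameters μ = 1, s = 0.5 are illustrative, not necessarily those used in the figure (stdlib only, not the package API):

```julia
# Linear (first-order) propagation vs. Monte-Carlo sampling through
# the logistic function. Stdlib only; parameters are illustrative.
using Statistics, Random

logistic(x) = 1 / (1 + exp(-x))
dlogistic(x) = logistic(x) * (1 - logistic(x))  # derivative of the logistic

mu, s = 1.0, 0.5                 # input: x ~ Normal(1, 0.5)

# Linear propagation: Gaussian approximation of the output.
lin_mean, lin_std = logistic(mu), dlogistic(mu) * s

# Monte-Carlo propagation: push samples through the nonlinearity.
Random.seed!(0)
x = mu .+ s .* randn(100_000)
y = logistic.(x)
mc_mean, mc_std = mean(y), std(y)
# For this mildly nonlinear case the two agree closely; for stronger
# nonlinearities or skewed outputs they diverge.
```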

Quick start

using MonteCarloMeasurements, Plots
a = π ± 0.1 # Construct Gaussian uncertain parameters using ± (\pm)
# Particles{Float64,2000}
#  3.14159 ± 0.1
b = 2 ∓ 0.1 # ∓ (\mp) creates StaticParticles (with StaticArrays)
# StaticParticles{Float64,100}
#  2.0 ± 0.0999
std(a)      # Ask about statistical properties
# 0.09999231528930486
sin(a)      # Use them like any real number
# Particles{Float64,2000}
#  1.2168e-16 ± 0.0995
plot(a)     # Plot them
b = sin.(1:0.1:5) .± 0.1; # Create multivariate uncertain numbers
plot(b)                   # Vectors of particles can be plotted
using Distributions
c = Particles(500, Poisson(3.)) # Create uncertain numbers distributed according to a given distribution
# Particles{Int64,500}
#  2.882 ± 1.7

For further help, see the documentation, the examples folder or the arXiv paper.
