

MathematicaStan v2.1


Introduction

MathematicaStan is a package to interact with CmdStan from Mathematica.

It is developed under Linux and is compatible with Mathematica v11+.

It should also work under macOS and Windows.

Author & contact: picaud.vincent at gmail.com

News

2020-12-21

New MathematicaStan version 2.1!

This version has been fixed and should now run under Windows.

I would like to thank Ali Ghaderi, who had the patience to help me debug the Windows version (I do not have access to this OS). Nothing would have been possible without him. Any remaining bugs are mine.

As a reminder, also note that you should not use paths or filenames containing spaces (Make really does not like them). This advice also applies under Linux and macOS; see SO:can-gnu-make-handle-filenames-with-spaces for an example.
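If in doubt, a one-line check suffices (plain Mathematica; the path shown is a made-up example):

StringContainsQ["C:\\Users\\USER NAME\\Documents\\cmdstan", " "]
(* True means the path contains a space: move or rename the directory *)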

2019-06-28

New MathematicaStan version 2.0!

This version uses Mathematica v11 and has been completely refactored.

Caveat: breaking changes!

Note: the “old” MathematicaStan version based on Mathematica v8.0 is now archived in the v1 git branch.

Installation

The Stan CmdStan shell interface

First you must install CmdStan. Once this is done you get a directory containing stuff like:

bin  doc  examples  Jenkinsfile  LICENSE  make  makefile  README.md  runCmdStanTests.py  src  stan  test-all.sh

With my configuration CmdStan is installed in:

~/ExternalSoftware/cmdstan-2.19.1

For Windows users it is possibly something like:

C:\\Users\\USER_NAME\\Documents\\R\\cmdstan-?.??.?

The Mathematica CmdStan package

To install the Mathematica CmdStan package:

  • open the CmdStan.m file with Mathematica.
  • install it using the Mathematica Notebook File->Install menu.

First run

The first time the package is imported

<<CmdStan`

you will get an error message:

CmdStan::cmdStanDirectoryNotDefined: CmdStan directory does not exist, use SetCmdStanDirectory[dir] to define it (with something like SetCmdStanDirectory["~/ExternalSoftware/cmdstan-2.19.1"])

This is normal, as we must define the CmdStan shell interface root directory.

With my configuration this is:

SetCmdStanDirectory["~/ExternalSoftware/cmdstan-2.19.1"]

For Windows users this is probably something like:

SetCmdStanDirectory["C:\\Users\\USER_NAME\\Documents\\R\\cmdstan-?.??.?"]

Note: this location is recorded in the $CmdStanConfigurationFile file and you will not have to redefine it every time you import the CmdStan package.
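You can check the recorded location at any time with GetCmdStanDirectory (listed among the package symbols); a minimal sketch, assuming it simply returns the configured path as a string:

GetCmdStanDirectory[]
(* -> "~/ExternalSoftware/cmdstan-2.19.1" *)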

Tutorial 1, linear regression

Introduction

You can use the file tutorial.wls or manually follow the instructions below.

Import the package as usual

<<CmdStan`

This package defines these functions (and symbols):

?CmdStan`*
CmdStan                GetStanOption            RemoveStanOption       StanOptionExistsQ     StanResultReducedKeys
CompileStanCode        GetStanResult            RunStan                StanOptions           StanResultReducedMetaKeys
ExportStanCode         GetStanResultMeta        SampleDefaultOptions   StanResult            StanVerbose
ExportStanData         ImportStanResult         SetCmdStanDirectory    StanResultKeys        VariationalDefaultOptions
GetCmdStanDirectory    OptimizeDefaultOptions   SetStanOption          StanResultMetaKeys    $CmdStanConfigurationFile

For this tutorial we use a simple linear regression example and we will work in a temporary location:

SetDirectory[$TemporaryDirectory]
/tmp

Stan code

Define the Stan code

stanCode = "data
  {
    int<lower = 0> N;
    vector[N] x;
    vector[N] y;
  }
  parameters
  {
    real alpha;
    real beta;
    real<lower = 0> sigma;
  }
  model {
    y ~ normal(alpha + beta * x, sigma);
  }";

and export it

stanCodeFile = ExportStanCode["linear_regression.stan", stanCode]
/tmp/linear_regression.stan

Code compilation

Stan code compilation is performed by

stanExeFile = CompileStanCode[stanCodeFile] (* Attention: this takes some time *)

With my configuration I get

make: Entering directory '/home/picaud/ExternalSoftware/cmdstan-2.19.1'

--- Translating Stan model to C++ code ---
bin/stanc  --o=/tmp/linear_regression.hpp /tmp/linear_regression.stan
Model name=linear_regression_model
Input file=/tmp/linear_regression.stan
Output file=/tmp/linear_regression.hpp
g++ -std=c++1y -pthread -Wno-sign-compare     -O3 -I src -I stan/src -I stan/lib/stan_math/ -I stan/lib/stan_math/lib/eigen_3.3.3 -I stan/lib/stan_math/lib/boost_1.69.0 -I stan/lib/stan_math/lib/sundials_4.1.0/include    -DBOOST_RESULT_OF_USE_TR1 -DBOOST_NO_DECLTYPE -DBOOST_DISABLE_ASSERTS -DBOOST_PHOENIX_NO_VARIADIC_EXPRESSION     -c -MT /tmp/linear_regression.o -MT /tmp/linear_regression -include /tmp/linear_regression.hpp -include src/cmdstan/main.cpp -MM -E -MG -MP -MF /tmp/linear_regression.d /tmp/linear_regression.hpp

--- Linking C++ model ---
g++ -std=c++1y -pthread -Wno-sign-compare     -O3 -I src -I stan/src -I stan/lib/stan_math/ -I stan/lib/stan_math/lib/eigen_3.3.3 -I stan/lib/stan_math/lib/boost_1.69.0 -I stan/lib/stan_math/lib/sundials_4.1.0/include    -DBOOST_RESULT_OF_USE_TR1 -DBOOST_NO_DECLTYPE -DBOOST_DISABLE_ASSERTS -DBOOST_PHOENIX_NO_VARIADIC_EXPRESSION             -include /tmp/linear_regression.hpp src/cmdstan/main.cpp        stan/lib/stan_math/lib/sundials_4.1.0/lib/libsundials_nvecserial.a stan/lib/stan_math/lib/sundials_4.1.0/lib/libsundials_cvodes.a stan/lib/stan_math/lib/sundials_4.1.0/lib/libsundials_idas.a  -o /tmp/linear_regression
make: Leaving directory '/home/picaud/ExternalSoftware/cmdstan-2.19.1'

Note: if you do not want this information printed, you can use the StanVerbose option:

stanExeFile = CompileStanCode[stanCodeFile, StanVerbose -> False]

Simulated data

Let’s simulate some data:

σ = 3; α = 1; β = 2;
n = 20;
X = Range[n];
Y = α + β*X + RandomVariate[NormalDistribution[0, σ], n];
Show[Plot[α + β*x, {x, Min[X], Max[X]}], 
     ListPlot[Transpose@{X, Y}, PlotStyle -> Red]]

./figures/linRegData.png

Create the data.R data file

The data are stored in an Association and then exported with the ExportStanData function.

stanData = <|"N" -> n, "x" -> X, "y" -> Y|>;
stanDataFile = ExportStanData[stanExeFile, stanData]
/tmp/linear_regression.data.R

Note: this function returns the created file name /tmp/linear_regression.data.R. Its first argument, stanExeFile, is simply the Stan executable file name with its path. The ExportStanData[] function replaces the file name extension with ".data.R", but you can also call it with any explicit file name:

ExportStanData["~/tmp/my_custom_filename.data.R",stanData]

Run Stan, likelihood maximization

We are now able to run the stanExeFile executable.

Let’s start by maximizing the likelihood

stanResultFile = RunStan[stanExeFile, OptimizeDefaultOptions]
Running: /tmp/linear_regression method=optimize data file=/tmp/linear_regression.data.R output file=/tmp/linear_regression.csv

method = optimize
  optimize
    algorithm = lbfgs (Default)
      lbfgs
        init_alpha = 0.001 (Default)
        tol_obj = 9.9999999999999998e-13 (Default)
        tol_rel_obj = 10000 (Default)
        tol_grad = 1e-08 (Default)
        tol_rel_grad = 10000000 (Default)
        tol_param = 1e-08 (Default)
        history_size = 5 (Default)
    iter = 2000 (Default)
    save_iterations = 0 (Default)
id = 0 (Default)
data
  file = /tmp/linear_regression.data.R
init = 2 (Default)
random
  seed = 2775739062
output
  file = /tmp/linear_regression.csv
  diagnostic_file =  (Default)
  refresh = 100 (Default)

Initial log joint probability = -8459.75
    Iter      log prob        ||dx||      ||grad||       alpha      alpha0  # evals  Notes 
      19      -32.5116    0.00318011    0.00121546      0.9563      0.9563       52   
Optimization terminated normally: 
  Convergence detected: relative gradient magnitude is below tolerance

The stanResultFile variable now contains the CSV result file name:

/tmp/linear_regression.csv

Note: again, if you do not want printed output, use the StanVerbose -> False option.

stanResultFile = RunStan[stanExeFile, OptimizeDefaultOptions, StanVerbose -> False]

Note: the method we use is defined by the second argument, OptimizeDefaultOptions. If you want to use Variational Bayes or HMC sampling, you must use

RunStan[stanExeFile, VariationalDefaultOptions]

or

RunStan[stanExeFile, SampleDefaultOptions]

Note: option management will be detailed later in this tutorial.

Load the CSV result file

To load the CSV result file, do

stanResult = ImportStanResult[stanResultFile]

which prints

     file: /tmp/linear_regression.csv
     meta: lp__ 
parameter: alpha , beta , sigma 

To access the estimated variables α, β and σ, simply do:

GetStanResultMeta[stanResult, "lp__"]
αe=GetStanResult[stanResult, "alpha"]
βe=GetStanResult[stanResult, "beta"]
σe=GetStanResult[stanResult, "sigma"]

which prints:

{-32.5116}
{2.51749}
{1.83654}
{3.08191}

Note: as likelihood maximization only provides a point estimate, the returned values are lists containing a single number.
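If you prefer plain numbers, you can unwrap these one-element lists with First (ordinary Mathematica; the names αs, βs, σs are hypothetical):

{αs, βs, σs} = First /@ {αe, βe, σe}
(* -> {2.51749, 1.83654, 3.08191} *)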

You can plot the estimated line:

Show[Plot[{αe + βe*x, α + β*x}, {x, Min[X],Max[X]}, PlotLegends -> "Expressions"], 
     ListPlot[Transpose@{X, Y}, PlotStyle -> Red]]

./figures/linRegEstimate.png

Run Stan, Variational Bayes

We want to solve the same problem but using variational inference.

As explained before we must use

stanResultFile = RunStan[stanExeFile, VariationalDefaultOptions]

instead of

stanResultFile = RunStan[stanExeFile, OptimizeDefaultOptions]

However, please note that running this command would overwrite stanResultFile, the file where results are exported. To avoid this we can change the output file name by modifying option values.

The default option values are stored in the write-protected VariationalDefaultOptions variable.

To modify them we must first copy this protected symbol:

myOpt=VariationalDefaultOptions

which prints

method=variational

The option values are printed when you run the RunStan command:

method = variational
  variational
    algorithm = meanfield (Default)
      meanfield
    iter = 10000 (Default)
    grad_samples = 1 (Default)
    elbo_samples = 100 (Default)
    eta = 1 (Default)
    adapt
      engaged = 1 (Default)
      iter = 50 (Default)
    tol_rel_obj = 0.01 (Default)
    eval_elbo = 100 (Default)
    output_samples = 1000 (Default)
id = 0 (Default)
data
  file =  (Default)
init = 2 (Default)
random
  seed = 2784129612
output
  file = output.csv (Default)
  diagnostic_file =  (Default)
  refresh = 100 (Default)

We have to modify the output file option value. This can be done by:

myOpt = SetStanOption[myOpt, "output.file", FileNameJoin[{Directory[], "myOutputFile.csv"}]]

which prints:

method=variational output file=/tmp/myOutputFile.csv

Now we can run Stan:

myOutputFile=RunStan[stanExeFile, myOpt, StanVerbose -> False]

which should print:

/tmp/myOutputFile.csv

Now import this CSV file:

myResult = ImportStanResult[myOutputFile]

which prints:

     file: /tmp/myOutputFile.csv
     meta: lp__ , log_p__ , log_g__ 
parameter: alpha , beta , sigma 

As before you can use:

GetStanResult[myResult,"alpha"]

to get the alpha parameter value, but now you will get a list of 1000 samples:

{2.03816, 0.90637, ..., ..., 1.22068, 1.66392}
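The draws are plain Mathematica lists, so you can inspect them directly; for instance, a quick trace-style plot of the sample sequence:

ListPlot[GetStanResult[myResult, "alpha"], PlotLabel -> "alpha samples"]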

Instead of the full sample list we are often interested in the sample mean, variance, etc. You can get these quantities as follows:

GetStanResult[Mean, myResult, "alpha"]
GetStanResult[Variance, myResult, "alpha"]

which prints:

2.0353
0.317084
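The Mean and Variance calls above suggest that the first argument of GetStanResult can be any function applied to the sample list; assuming this holds for arbitrary functions, you can, for example, estimate a 90% credible interval with Quantile:

GetStanResult[Quantile[#, {0.05, 0.95}] &, myResult, "alpha"]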

You can also get the sample histogram as simply as:

GetStanResult[Histogram, myResult, "alpha"]

./figures/linRegHisto.png

More about Option management

Overwriting default values

We provide further details concerning the option-related functions.

To recap, the first step is to make a copy of the write-protected default option values. For example, to modify the default MCMC option values, start with:

myOpt = SampleDefaultOptions

The available options are:

method = sample (Default)
  sample
    num_samples = 1000 (Default)
    num_warmup = 1000 (Default)
    save_warmup = 0 (Default)
    thin = 1 (Default)
    adapt
      engaged = 1 (Default)
      gamma = 0.050000000000000003 (Default)
      delta = 0.80000000000000004 (Default)
      kappa = 0.75 (Default)
      t0 = 10 (Default)
      init_buffer = 75 (Default)
      term_buffer = 50 (Default)
      window = 25 (Default)
    algorithm = hmc (Default)
      hmc
        engine = nuts (Default)
          nuts
            max_depth = 10 (Default)
        metric = diag_e (Default)
        metric_file =  (Default)
        stepsize = 1 (Default)
        stepsize_jitter = 0 (Default)
id = 0 (Default)
data
  file = /tmp/linear_regression.data.R
init = 2 (Default)
random
  seed = 3714706817 (Default)
output
  file = /tmp/linear_regression.csv
  diagnostic_file =  (Default)
  refresh = 100 (Default)
  sig_figs = -1 (Default)

If we want to modify:

method = sample (Default)
  sample
    num_samples = 1000 (Default)
    num_warmup = 1000 (Default)

and

method = sample (Default)
  sample
    algorithm = hmc (Default)
      hmc
        engine = nuts (Default)
          nuts
            max_depth = 10 (Default)

you must proceed as follows: use a "." as separator for each hierarchy level, and do not forget to keep the "=value" part where an option selects an algorithm (as in "algorithm=hmc"). With our example this gives:

myOpt = SetStanOption[myOpt, "adapt.num_samples", 2000]
myOpt = SetStanOption[myOpt, "adapt.num_warmup", 1500]
myOpt = SetStanOption[myOpt, "algorithm=hmc.engine=nuts.max_depth", 5]

Now you can run the sampler with these new option values:

stanResultFile = RunStan[stanExeFile, myOpt]

which should print:

method = sample (Default)
  sample
    num_samples = 2000
    num_warmup = 1500
    save_warmup = 0 (Default)
    thin = 1 (Default)
    adapt
      engaged = 1 (Default)
      gamma = 0.050000000000000003 (Default)
      delta = 0.80000000000000004 (Default)
      kappa = 0.75 (Default)
      t0 = 10 (Default)
      init_buffer = 75 (Default)
      term_buffer = 50 (Default)
      window = 25 (Default)
    algorithm = hmc (Default)
      hmc
        engine = nuts (Default)
          nuts
            max_depth = 5
        metric = diag_e (Default)
        metric_file =  (Default)
        stepsize = 1 (Default)
        stepsize_jitter = 0 (Default)
id = 0 (Default)
data
  file = /tmp/linear_regression.data.R
init = 2 (Default)
random
  seed = 3720771451 (Default)
output
  file = /tmp/linear_regression.csv
  diagnostic_file =  (Default)
  refresh = 100 (Default)
  sig_figs = -1 (Default)
stanc_version = stanc3 b25c0b64
stancflags = 


Gradient evaluation took 1.3e-05 seconds
1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
Adjust your expectations accordingly!


Iteration:    1 / 3500 [  0%]  (Warmup)
Iteration:  100 / 3500 [  2%]  (Warmup)
Iteration:  200 / 3500 [  5%]  (Warmup)
Iteration:  300 / 3500 [  8%]  (Warmup)
Iteration:  400 / 3500 [ 11%]  (Warmup)
Iteration:  500 / 3500 [ 14%]  (Warmup)
Iteration:  600 / 3500 [ 17%]  (Warmup)
Iteration:  700 / 3500 [ 20%]  (Warmup)
Iteration:  800 / 3500 [ 22%]  (Warmup)
Iteration:  900 / 3500 [ 25%]  (Warmup)
Iteration: 1000 / 3500 [ 28%]  (Warmup)
Iteration: 1100 / 3500 [ 31%]  (Warmup)
Iteration: 1200 / 3500 [ 34%]  (Warmup)
Iteration: 1300 / 3500 [ 37%]  (Warmup)
Iteration: 1400 / 3500 [ 40%]  (Warmup)
Iteration: 1500 / 3500 [ 42%]  (Warmup)
Iteration: 1501 / 3500 [ 42%]  (Sampling)
Iteration: 1600 / 3500 [ 45%]  (Sampling)
Iteration: 1700 / 3500 [ 48%]  (Sampling)
Iteration: 1800 / 3500 [ 51%]  (Sampling)
Iteration: 1900 / 3500 [ 54%]  (Sampling)
Iteration: 2000 / 3500 [ 57%]  (Sampling)
Iteration: 2100 / 3500 [ 60%]  (Sampling)
Iteration: 2200 / 3500 [ 62%]  (Sampling)
Iteration: 2300 / 3500 [ 65%]  (Sampling)
Iteration: 2400 / 3500 [ 68%]  (Sampling)
Iteration: 2500 / 3500 [ 71%]  (Sampling)
Iteration: 2600 / 3500 [ 74%]  (Sampling)
Iteration: 2700 / 3500 [ 77%]  (Sampling)
Iteration: 2800 / 3500 [ 80%]  (Sampling)
Iteration: 2900 / 3500 [ 82%]  (Sampling)
Iteration: 3000 / 3500 [ 85%]  (Sampling)
Iteration: 3100 / 3500 [ 88%]  (Sampling)
Iteration: 3200 / 3500 [ 91%]  (Sampling)
Iteration: 3300 / 3500 [ 94%]  (Sampling)
Iteration: 3400 / 3500 [ 97%]  (Sampling)
Iteration: 3500 / 3500 [100%]  (Sampling)

 Elapsed Time: 0.053 seconds (Warm-up)
               0.094 seconds (Sampling)
               0.147 seconds (Total)

You can check that the new option values have been taken into account:

num_samples = 2000
num_warmup = 1500

algorithm = hmc (Default)
  hmc
    engine = nuts (Default)
      nuts
        max_depth = 5

Reading customized values

You can get back the modified values as follows:

GetStanOption[myOpt, "adapt.num_warmup"]
GetStanOption[myOpt, "algorithm=hmc.engine=nuts.max_depth"]

which prints

1500
5

Caveat: if the option was not defined (by SetStanOption) the function returns $Failed.
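If you want to avoid handling $Failed, you can first test whether the option has been customized. A minimal sketch, assuming StanOptionExistsQ (listed among the package symbols) returns True or False for a given option path:

If[StanOptionExistsQ[myOpt, "adapt.num_warmup"],
   GetStanOption[myOpt, "adapt.num_warmup"],
   1000 (* fall back to the documented default *)]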

Erasing customized option values

To erase an option value (and use its default value) use:

myOpt = RemoveStanOption[myOpt, "algorithm=hmc.engine=nuts.max_depth"]

which prints

method=sample adapt num_samples=2000 num_warmup=1500 

Tutorial 2, linear regression with more than one predictor

Parameter arrays

Until now the parameters alpha, beta, and sigma were scalars. We will now see how to handle parameters that are vectors or matrices.

We use the second section of the linear regression example, entitled "Matrix notation and Vectorization".

The β parameter is now a vector of size K.

stanCode = "data {
    int<lower=0> N;   // number of data items
    int<lower=0> K;   // number of predictors
    matrix[N, K] x;   // predictor matrix
    vector[N] y;      // outcome vector
  }
  parameters {
    real alpha;           // intercept
    vector[K] beta;       // coefficients for predictors
    real<lower=0> sigma;  // error scale
  }
  model {
    y ~ normal(x * beta + alpha, sigma);  // likelihood
  }";

stanCodeFile = ExportStanCode["linear_regression_vect.stan", stanCode];
stanExeFile = CompileStanCode[stanCodeFile];

Simulated data

Here we use {x, x², x³} as predictors, with coefficients β = {2, 0.1, 0.01}, so that the model is

y = α + β1 x + β2 x² + β3 x³ + ε

where ε follows a normal distribution.

σ = 3; α = 1; β1 = 2; β2 = 0.1; β3 = 0.01;
n = 20;
X = Range[n];
Y = α + β1*X + β2*X^2 + β3*X^3 + RandomVariate[NormalDistribution[0, σ], n];
Show[Plot[α + β1*x + β2*x^2 + β3*x^3, {x, Min[X], Max[X]}],
     ListPlot[Transpose@{X, Y}, PlotStyle -> Red]]

./figures/linReg2Data.png

Exporting data

The expression

y = α + β1 x + β2 x² + β3 x³ + ε

is convenient for random variable manipulations. However in practical computations where we have to evaluate:

y[i] = α + β1 x[i] + β2 (x[i])² + β3 (x[i])³ + ε[i], for i = 1..N

it is more convenient to rewrite this in a “vectorized form”:

y = α + X.β + ε

where X is an N×K matrix whose j-th column X[:,j] is the j-th predictor (x[:])^j, and α is a vector of size N with constant components equal to α.

Thus the data are exported as follows:

stanData = <|"N" -> n, "K" -> 3, "x" -> Transpose[{X,X^2,X^3}], "y" -> Y|>;
stanDataFile = ExportStanData[stanExeFile, stanData]

Note: as Mathematica stores its matrices row by row (the C language convention), we have to transpose {X, X^2, X^3} to get the right matrix X, as illustrated below.
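A small illustration of this transposition for n = 3: each row of the resulting matrix is one observation, each column one predictor.

Transpose[{Range[3], Range[3]^2, Range[3]^3}]
(* -> {{1, 1, 1}, {2, 4, 8}, {3, 9, 27}} *)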

Run Stan, HMC sampling

We can now run Stan using the Hamiltonian Monte Carlo (HMC) method:

stanResultFile = RunStan[stanExeFile, SampleDefaultOptions]

which prints:

Running: /tmp/linear_regression_vect method=sample data file=/tmp/linear_regression_vect.data.R output file=/tmp/linear_regression_vect.csv

method = sample (Default)
  sample
    num_samples = 1000 (Default)
    num_warmup = 1000 (Default)
    save_warmup = 0 (Default)
    thin = 1 (Default)
    adapt
      engaged = 1 (Default)
      gamma = 0.050000000000000003 (Default)
      delta = 0.80000000000000004 (Default)
      kappa = 0.75 (Default)
      t0 = 10 (Default)
      init_buffer = 75 (Default)
      term_buffer = 50 (Default)
      window = 25 (Default)
    algorithm = hmc (Default)
      hmc
        engine = nuts (Default)
          nuts
            max_depth = 10 (Default)
        metric = diag_e (Default)
        metric_file =  (Default)
        stepsize = 1 (Default)
        stepsize_jitter = 0 (Default)
id = 0 (Default)
data
  file = /tmp/linear_regression_vect.data.R
init = 2 (Default)
random
  seed = 3043713420
output
  file = /tmp/linear_regression_vect.csv
  diagnostic_file =  (Default)
  refresh = 100 (Default)


Gradient evaluation took 4e-05 seconds
1000 transitions using 10 leapfrog steps per transition would take 0.4 seconds.
Adjust your expectations accordingly!


Iteration:    1 / 2000 [  0%]  (Warmup)
Iteration:  100 / 2000 [  5%]  (Warmup)
Iteration:  200 / 2000 [ 10%]  (Warmup)
Iteration:  300 / 2000 [ 15%]  (Warmup)
Iteration:  400 / 2000 [ 20%]  (Warmup)
Iteration:  500 / 2000 [ 25%]  (Warmup)
Iteration:  600 / 2000 [ 30%]  (Warmup)
Iteration:  700 / 2000 [ 35%]  (Warmup)
Iteration:  800 / 2000 [ 40%]  (Warmup)
Iteration:  900 / 2000 [ 45%]  (Warmup)
Iteration: 1000 / 2000 [ 50%]  (Warmup)
Iteration: 1001 / 2000 [ 50%]  (Sampling)
Iteration: 1100 / 2000 [ 55%]  (Sampling)
Iteration: 1200 / 2000 [ 60%]  (Sampling)
Iteration: 1300 / 2000 [ 65%]  (Sampling)
Iteration: 1400 / 2000 [ 70%]  (Sampling)
Iteration: 1500 / 2000 [ 75%]  (Sampling)
Iteration: 1600 / 2000 [ 80%]  (Sampling)
Iteration: 1700 / 2000 [ 85%]  (Sampling)
Iteration: 1800 / 2000 [ 90%]  (Sampling)
Iteration: 1900 / 2000 [ 95%]  (Sampling)
Iteration: 2000 / 2000 [100%]  (Sampling)

 Elapsed Time: 0.740037 seconds (Warm-up)
               0.60785 seconds (Sampling)
               1.34789 seconds (Total)

Load the CSV result file

As before,

stanResult = ImportStanResult[stanResultFile]

loads the generated CSV file and prints:

     file: /tmp/linear_regression_vect.csv
     meta: lp__ , accept_stat__ , stepsize__ , treedepth__ , n_leapfrog__ , divergent__ , energy__ 
parameter: alpha , beta 3, sigma 

Compared to the scalar case, the important thing to notice is "beta 3": β is no longer a scalar but a vector of size 3.

Note: here β is a vector, but if it had been a 3x5 matrix we would have had β 3x5 printed instead.

A call to

GetStanResult[stanResult, "beta"]

returns a vector of size 3 where each component is a list of 1000 samples (for β1, β2 and β3).

As before, it is generally useful to summarize these samples with functions like Mean or Histogram:

GetStanResult[Mean, stanResult, "beta"]
GetStanResult[Histogram, stanResult, "beta"]

prints:

{3.30321, -0.010088, 0.0126913}

and plots:

./figures/linReg2Histo.png

This is the moment to digress about Keys. If you try:

StanResultKeys[stanResult]
StanResultMetaKeys[stanResult]

this will print:

{"alpha", "beta.1", "beta.2", "beta.3", "sigma"}
{"lp__", "accept_stat__", "stepsize__", "treedepth__", "n_leapfrog__", "divergent__", "energy__"}

These functions are useful to get the complete list of keys. Note that, as β is a 1D array of size 3, we have beta.1, beta.2, beta.3. If β were an NxM matrix, the list of keys would have been beta.1.1, beta.1.2, ..., beta.N.M.

There is also reduced keys functions:

StanResultReducedKeys[stanResult]
StanResultReducedMetaKeys[stanResult]

which print

{"alpha", "beta", "sigma"}
{"lp__", "accept_stat__", "stepsize__", "treedepth__", "n_leapfrog__", "divergent__", "energy__"}

As you can see, the reduced-key functions collapse the keys associated with arrays by discarding their indices.

When accessing a parameter you can work at the component level or globally:

GetStanResult[Mean, stanResult, "beta.2"]
GetStanResult[Mean, stanResult, "beta"]

which prints

-0.010088
{3.30321, -0.010088, 0.0126913}
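As a final sanity check, you can compare the posterior means with the true coefficients used in the simulation (plain Mathematica; with only n = 20 noisy observations the estimates may differ noticeably from the truth):

βtrue = {β1, β2, β3};  (* {2, 0.1, 0.01} from the simulation *)
βest = GetStanResult[Mean, stanResult, "beta"];
TableForm[Transpose[{βtrue, βest}], TableHeadings -> {None, {"true", "estimated"}}]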