Title: | Bayesian Calibration of Complex Computer Codes |
Description: | Performs Bayesian calibration of computer models as per Kennedy and O'Hagan 2001. The package includes routines to find the hyperparameters and parameters; see the help page for stage1() for a worked example using the toy dataset. A tutorial is provided in the calex.Rnw vignette; and a suite of especially simple one dimensional examples appears in inst/doc/one.dim/. |
Authors: | Robin K. S. Hankin [aut, cre] |
Maintainer: | Robin K. S. Hankin <[email protected]> |
License: | GPL-2 |
Version: | 1.2-9 |
Built: | 2024-11-20 02:48:34 UTC |
Source: | https://github.com/robinhankin/calibrator |
The DESCRIPTION file:
Package: | calibrator |
Type: | Package |
Title: | Bayesian Calibration of Complex Computer Codes |
Version: | 1.2-9 |
Authors@R: | person(given=c("Robin", "K. S."), family="Hankin", role = c("aut","cre"), email="[email protected]", comment = c(ORCID = "0000-0001-5982-0415")) |
Depends: | R (>= 3.5.0), emulator (>= 1.2-21), mvtnorm |
Imports: | cubature |
Maintainer: | Robin K. S. Hankin <[email protected]> |
Description: | Performs Bayesian calibration of computer models as per Kennedy and O'Hagan 2001. The package includes routines to find the hyperparameters and parameters; see the help page for stage1() for a worked example using the toy dataset. A tutorial is provided in the calex.Rnw vignette; and a suite of especially simple one dimensional examples appears in inst/doc/one.dim/. |
License: | GPL-2 |
URL: | https://github.com/RobinHankin/calibrator |
BugReports: | https://github.com/RobinHankin/calibrator/issues |
Config/pak/sysreqs: | make |
Repository: | https://robinhankin.r-universe.dev |
RemoteUrl: | https://github.com/robinhankin/calibrator |
RemoteRef: | HEAD |
RemoteSha: | 8637464ef2b57c603c2e0122b078560852d4999c |
Author: | Robin K. S. Hankin [aut, cre] (<https://orcid.org/0000-0001-5982-0415>) |
Index of help topics:
C1: Matrix of distances from D1 to D2
D1.fun: Function to join x.star to t.vec to give matrix D1
D2.fun: Augments observation points with parameters
E.theta.toy: Expectation and variance with respect to theta
EK.eqn10.supp: Posterior mean of K
Ez.eqn7.supp: Expectation of z given y, beta2, phi
Ez.eqn9.supp: Expectation as per equation 10 of KOH2001
H.fun: H function
H1.toy: Basis functions for D1 and D2
MH: Very basic implementation of the Metropolis-Hastings algorithm
V.fun: Variance matrix for observations
V1: Distance matrix
V2: Distance between observation points
Vd: Variance matrix for d
W: Covariance matrix for beta
W1: Variance matrix for beta1hat
W2: Variance matrix for beta2
beta1hat.fun: beta1 estimator
beta2hat.fun: Estimator for beta2
betahat.fun.koh: Expectation of beta, given theta, phi and d
blockdiag: Assembles matrices blockwise into a block diagonal matrix
calibrator-package: Bayesian Calibration of Complex Computer Codes
cov.p5.supp: Covariance function for posterior distribution of z
create.new.toy.datasets: Create new toy datasets
dists.2frames: Distance between two points
etahat: Expectation of computer output
extractor.toy: Extracts lat/long matrix and theta matrix from D1
h1.toy: Basis functions
hbar.fun.toy: Toy example of hbar (section 4.2)
is.positive.definite: Is a matrix positive definite?
p.eqn4.supp: A posteriori probability of psi1
p.eqn8.supp: A posteriori probability of hyperparameters
p.page4: A posteriori probability of hyperparameters
phi.fun.toy: Functions to create or change hyperparameters
prob.psi1: A priori probability of psi1, psi2, and theta
reality: Reality
stage1: Stage 1, 2 and 3 optimization on toy dataset
symmetrize: Symmetrize an upper triangular matrix
tee: Auxiliary functions for equation 9 of the supplement
toys: Toy datasets
tt.fun: Integrals needed in KOH2001
Further information is available in the following vignettes:
calex: Calex, a cookbook for the emulator package
Robin K. S. Hankin [aut, cre] (<https://orcid.org/0000-0001-5982-0415>)
Maintainer: Robin K. S. Hankin <[email protected]>
M. C. Kennedy and A. O'Hagan 2001. “Bayesian calibration of computer models”. Journal of the Royal Statistical Society, Series B, 63(3): 425–464
R. K. S. Hankin 2005. “Introducing BACCO, an R bundle for Bayesian analysis of computer code output”, Journal of Statistical Software, 14(16)
Least squares estimator for beta1
beta1hat.fun(D1, H1, y, phi)
D1 | code run points
H1 | regressor basis functions
y | code outputs
phi | hyperparameters
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
y.toy <- create.new.toy.datasets(D1=D1.toy, D2=D2.toy)$y.toy
beta1hat.fun(D1=D1.toy, H1=H1.toy, y=y.toy, phi=phi.toy)

# now cheat: force the hyperparameters to have the correct psi1:
phi.fix <- phi.change(old.phi=phi.toy, psi1=c(1, 0.5, 1.0, 1.0, 0.5, 0.4),
                      phi.fun=phi.fun.toy)
# The value for psi1 is obtained by cheating and examining the source
# code for computer.model(); see ?phi.change

# Create a new toy dataset with 40 observations:
D1.big <- latin.hypercube(40,5)
jj <- create.new.toy.datasets(D1=D1.big, D2=D2.toy)

# We know that the real coefficients are 4:9 because we can cheat and
# look at the source code for computer.model().
# Now estimate the coefficients without cheating:
beta1hat.fun(D1=D1.big, H1=H1.toy, jj$y, phi=phi.toy)
# Not bad!

# We can do slightly better by cheating and using the correct value
# for the hyperparameters:
beta1hat.fun(D1=D1.big, H1=H1.toy, jj$y, phi=phi.true.toy(phi=phi.toy))
# marginally worse.
Estimates beta2 as per the equation of page 4 of the supplement. Used by p.page4().
beta2hat.fun(D1, D2, H1, H2, V, z, etahat.d2, extractor, E.theta, Edash.theta, phi)
D1 | Matrix of code run points
D2 | Matrix of observation points
H1 | Regression basis functions
H2 | Regression basis functions
V | Overall covariance matrix
z | Vector of observations
etahat.d2 | Expectation, as returned by function etahat()
extractor | Extractor function
E.theta | Expectation with respect to theta
Edash.theta | Expectation with respect to thetadash
phi | Hyperparameters
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
etahat.d2 <- etahat(D1=D1.toy, D2=D2.toy, H1=H1.toy, y=y.toy,
                    E.theta=E.theta.toy, extractor=extractor.toy, phi=phi.toy)
beta2hat.fun(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy, V=NULL, z=z.toy,
             etahat.d2=etahat.d2, extractor=extractor.toy,
             E.theta=E.theta.toy, Edash.theta=Edash.theta.toy, phi=phi.toy)

jj <- create.new.toy.datasets(D1.toy, D2.toy)
phi.true <- phi.true.toy(phi=phi.toy)
y.toy <- jj$y.toy
z.toy <- jj$z.toy
d.toy <- jj$d.toy
etahat.d2 <- etahat(D1=D1.toy, D2=D2.toy, H1=H1.toy, y=y.toy,
                    E.theta=E.theta.toy, extractor=extractor.toy, phi=phi.toy)
beta2hat <- beta2hat.fun(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy, V=NULL,
                         z=z.toy, etahat.d2=etahat.d2, extractor=extractor.toy,
                         E.theta=E.theta.toy, Edash.theta=Edash.theta.toy,
                         phi=phi.toy)
print(beta2hat)
plot(z.toy, H2.toy(D2.toy) %*% beta2hat)
Determines the mean of beta, given parameters theta, hyperparameters phi, and the vector of code outputs and observations d. It is named so as to avoid conflict with function betahat.fun of package emulator.
betahat.fun.koh(D1, D2, H1, H2, theta, d, phi)
betahat.fun.koh.vector(D1, D2, H1, H2, theta, d, phi)
D1 | Matrix whose rows are observation points and parameter values at which the code has been run
D2 | Matrix whose rows are the observation points
H1 | Regression function for D1
H2 | Regression function for D2
theta | Parameters
d | Vector of code outputs and observations
phi | Hyperparameters
This function is defined between equations 2 and 3 of the supplement. It is used in functions Ez.eqn9.supp() and p.eqn8.supp().

The user should always use betahat.fun.koh(), which is a wrapper for betahat.fun.koh.vector(). The forms differ in their treatment of theta. In the former, theta must be a vector; in the latter, theta may be a matrix, in which case betahat.fun.koh.vector() is applied to the rows.

In betahat.fun.koh(), the rownames are assigned by a kludgy call to H.fun(), which itself uses a kludge to determine colnames.
The function returns the estimate of beta: the expectation of beta given theta, phi and d.
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
betahat.fun.koh(theta=theta.toy, d=d.toy, D1=D1.toy, D2=D2.toy,
                H1=H1.toy, H2=H2.toy, phi=phi.toy)
betahat.fun.koh.vector(theta=theta.toy, d=d.toy, D1=D1.toy, D2=D2.toy,
                       H1=H1.toy, H2=H2.toy, phi=phi.toy)
## should be identical

jj.theta <- rbind(theta.toy, theta.toy+1, theta.toy+2, theta.toy*0)
betahat.fun.koh(theta=jj.theta, d=d.toy, D1=D1.toy, D2=D2.toy,
                H1=H1.toy, H2=H2.toy, phi=phi.toy)

## Now try with true hyperparameters:
phi.true <- phi.true.toy(phi=phi.toy)

## And magically create the REAL parameters:
theta.REAL <- create.new.toy.datasets(export=TRUE)$REAL.PARAMS
jj.theta <- rbind(jj.theta, theta.REAL)

## Generate some data:
jj <- create.new.toy.datasets(D1.toy, D2.toy)
d.toy <- jj$d.toy

## And finally, observe that the estimated values for beta are pretty
## close to the real values (which omniscient beings can extract using
## reality() and computer.model()):
betahat.fun.koh(theta=jj.theta, d=d.toy, D1=D1.toy, D2=D2.toy,
                H1=H1.toy, H2=H2.toy, phi=phi.true)
## [
## that is, compare the last column of the above with
## c(computer.model(ex=T)$REAL.COEFFS, reality(ex=T)$REAL.BETA2)
## ]
Assembles matrices blockwise into a block diagonal matrix with optional padding value
blockdiag(m1, m2, p.tr = 0, p.ll = 0)
m1 | Upper left matrix
m2 | Lower right matrix
p.tr | Padding value for top right quadrant. Defaults to zero.
p.ll | Padding value for lower left quadrant. Defaults to zero.
The function documented here is a subset of adiag() of package magic.
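The padding values fill the off-diagonal quadrants, as in this small sketch (the matrices here are illustrative):

blockdiag(m1=diag(2), m2=matrix(1,3,3), p.tr=9, p.ll=-9)
# top-right quadrant filled with 9, lower-left quadrant with -9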
Robin K. S. Hankin
data(toys)
blockdiag(D1.toy, D2.toy)
Returns a matrix of distances from the code run points to the augmented observation points. A wrapper for c1.fun().
C1(D1, D2, theta, phi)
D1 | Matrix of code run points
D2 | Matrix of observation points
theta | Parameters
phi | Hyperparameters
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
C1(D1=D1.toy, D2=D2.toy, theta=theta.toy, phi=phi.toy)
Covariance function for the posterior distribution of z, conditional on estimated hyperparameters and calibration parameters theta.
Cov.eqn9.supp(x, xdash=NULL, theta, d, D1, D2, H1, H2, phi)
cov.p5.supp(x, xdash=NULL, theta, d, D1, D2, H1, H2, phi)
x | First point: a matrix whose rows are points (for Cov.eqn9.supp()), or a single vector (for cov.p5.supp())
xdash | The second point, with the same conventions as x
theta | Parameters. For Cov.eqn9.supp(), a single vector; for cov.p5.supp(), a matrix whose rows are points in parameter space
d | Observed values
D1 | Code run design matrix
D2 | Observation points of real process
H1 | Basis function for D1
H2 | Basis function for D2
phi | Hyperparameters
Evaluates the covariance function: the last formula on page 5 of the supplement. The two functions documented here are vectorized differently.
Function Cov.eqn9.supp() takes matrices for arguments x and xdash, and a single vector for theta. Evaluation is thus taken at a single, fixed value of theta. The function returns a matrix whose rows correspond to rows of x and whose columns correspond to rows of xdash.

Function cov.p5.supp() takes a vector for arguments x and xdash, and a matrix for argument theta whose rows are the points in parameter space. A vector V, with elements corresponding to the rows of argument theta, is returned.
Returns a matrix of covariances
May return the transpose of the desired object
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
x <- rbind(x.toy, x.toy+1, x.toy, x.toy, x.toy)
rownames(x) <- letters[1:5]
xdash <- rbind(x*2, x.toy)
rownames(xdash) <- LETTERS[1:6]
Cov.eqn9.supp(x=x, xdash=xdash, theta=theta.toy, d=d.toy, D1=D1.toy,
              D2=D2.toy, H1=H1.toy, H2=H2.toy, phi=phi.toy)

phi.true <- phi.true.toy(phi=phi.toy)
Cov.eqn9.supp(x=x, xdash=xdash, theta=theta.toy, d=d.toy, D1=D1.toy,
              D2=D2.toy, H1=H1.toy, H2=H2.toy, phi=phi.true)

# Now try a sequence of thetas:
cov.p5.supp(x=x.toy, theta=t.vec.toy, d=d.toy, D1=D1.toy, D2=D2.toy,
            H1=H1.toy, H2=H2.toy, phi=phi.toy)
Creates new toy datasets, by sampling from an explicitly specified multivariate Gaussian distribution whose covariance matrix is that required for a Gaussian process.
create.new.toy.datasets(D1,D2,export=FALSE)
export | Boolean, with default FALSE; if TRUE, also return the true (REAL) parameter values
D1 | D1; set of code run points
D2 | D2; set of field observation points
Returns a list of three elements:
y.toy | Code outputs at the points of D1
z.toy | Field observations at the points of D2
d.toy | Vector of code outputs and observations, d
Because function create.new.toy.datasets() calls computer.model() and model.inadequacy(), the datasets returned are drawn from a multivariate Gaussian distribution which is a Gaussian process.
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
toys, reality, latin.hypercube
data(toys)
create.new.toy.datasets(D1=D1.toy, D2=D2.toy)
Function to join x.star to t.vec to give matrix D1, with correct row and column names.
D1.fun(x.star, t.vec)
x.star | Matrix of code run points
t.vec | Matrix of parameter theta values
Note that the matrix returned is a D1 matrix: it is a design matrix for code observations as it contains both x and theta
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
jj <- extractor.toy(D1.toy)
x.star.toy <- jj$x.star
t.vec.toy <- jj$t.vec
D1.fun(x.star.toy, t.vec.toy)   # both dataframes
D1.fun(x.star.toy, theta.toy)   # one dataframe, one vector
D1.fun(x.toy, t.vec.toy)        # one vector, one dataframe
D1.fun(x.toy, theta.toy)        # two vectors
Augments observation points with parameters; will recycle if necessary
D2.fun(D2, theta)
D2 | Observation points
theta | Parameters
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. “Bayesian calibration of computer models”. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. “Supplementary details on Bayesian calibration of computer models”, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. “Introducing BACCO, an R bundle for Bayesian analysis of computer code output”, Journal of Statistical Software, 14(16)
data(toys)
D2.fun(D2=D2.toy, theta=theta.toy)
D2.fun(D2=t(x.toy), theta=theta.toy)
D2.fun(D2=D2.toy[1,,drop=FALSE], theta=theta.toy)
Distance between points specified by rows of two matrices, according to a positive definite matrix. If the second matrix is not supplied, the first is used in its place.
dists.2frames(a, b=NULL, A=NULL, A.lower=NULL, test.for.symmetry=TRUE)
a | First dataframe whose rows are the points
b | Second dataframe whose rows are the points; if NULL (the default), a is used
A | Positive definite matrix defining the metric
A.lower | The lower triangular Cholesky decomposition of A. If a value is supplied, it is used directly, saving recomputation
test.for.symmetry | Boolean, with default TRUE
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
dists.2frames(a=D2.toy, A=diag(2))
A <- diag(2) + matrix(0.2, 2, 2)
A.lower <- t(chol(A))
jj.1 <- dists.2frames(a=D2.toy, A=A, test.for.symmetry=TRUE)
jj.2 <- dists.2frames(a=D2.toy, A=A, test.for.symmetry=FALSE)
jj.3 <- dists.2frames(a=D2.toy, A.lower=A.lower, test.for.symmetry=FALSE)
jj.4 <- dists.2frames(a=D2.toy, A.lower=A.lower, test.for.symmetry=TRUE)
Function E.theta.toy() returns the expectation of H_1(D) with respect to theta; Edash.theta.toy() returns the expectation with respect to thetadash. Function E.theta.toy() also returns information about the nonlinear behaviour of h1(x,theta).
E.theta.toy(D2=NULL, H1=NULL, x1=NULL, x2=NULL, phi, give.mean=TRUE)
Edash.theta.toy(x, t.vec, k, H1, fast.but.opaque=FALSE, a=NULL, b=NULL, phi=NULL)
D2 | Observation points
H1 | Regression function for D1
phi | Hyperparameters (default NULL in Edash.theta.toy())
x | lat/long point (for Edash.theta.toy())
t.vec | Matrix whose rows are parameter values (for Edash.theta.toy())
k | Integer specifying column (for Edash.theta.toy())
give.mean | In E.theta.toy(), Boolean with default TRUE meaning to return the expectation; if FALSE, return the correction term discussed in the Details
fast.but.opaque | In Edash.theta.toy(), Boolean with default FALSE; if TRUE, use the precomputed quantities a and b
a | Constant term, needed if fast.but.opaque is TRUE
b | Linear term, needed if fast.but.opaque is TRUE
x1 | In E.theta.toy(), the first x argument; see Details
x2 | In E.theta.toy(), the second x argument; see Details
A terse discussion follows; see the calex.pdf vignette and the 1D case study in directory inst/doc/one.dim/ for more details and examples.
Function E.theta.toy(give.mean=FALSE, ...) does not return the variance! The matrix returned is a different size from the variance matrix! It returns the thing that must be added to crossprod(E_theta(h1(x,theta)), t(E_theta(h1(x,theta)))) to give E_theta(h1(x,theta) %*% t(h1(x,theta))). In other words, it returns E_theta(h1(x,theta) %*% t(h1(x,theta))) - crossprod(E_theta(h1(x,theta)), t(E_theta(h1(x,theta)))).
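This identity can be checked by simulation. A minimal sketch, in which the basis function h1 and the prior on theta are illustrative stand-ins rather than the package's internals:

set.seed(1)
m <- c(0,0)                                # assumed prior mean of theta
S <- diag(2)                               # assumed prior variance of theta
h1 <- function(x,theta){ c(1, x, theta) }  # hypothetical basis function
theta.samp <- mvtnorm::rmvnorm(1e5, mean=m, sigma=S)
x <- 0.3
H <- t(apply(theta.samp, 1, function(th){ h1(x,th) }))
E.hht <- crossprod(H)/nrow(H)              # E_theta(h1 %*% t(h1))
E.h <- colMeans(H)                         # E_theta(h1)
round(E.hht - outer(E.h, E.h), 2)          # approximately S in the lower right
                                           # corner, zeroes elsewhere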
If the terms of h1() are of the form c(o, theta) (where o is a vector that is a function of x alone, and independent of theta), then the function will include the variance matrix in the lower right corner (zeroes elsewhere).
The give.mean=FALSE form of E.theta.toy() must be updated if h1.toy() changes: unlike the mean-returning forms of E.theta.toy() and Edash.theta.toy(), it does not “know” where the elements that vary with theta are, nor their (possibly x-dependent) coefficients.
This form of the function requires x1 and x2 arguments, for good form's sake, even though the returned value is independent of x in the toy example. To see why it is necessary to include x, consider a simple case in which h1(x,theta) includes a term in x*theta. Then E_theta(h1(x,theta)) is simply h1 evaluated at the mean of theta, but E_theta(h1(x,theta) %*% t(h1(x,theta))) is a 2-by-2 matrix (M, say) whose entries involve x^2 multiplied by the second moment of theta, so the x-dependence does not cancel.
All three functions here are intimately connected to the form of h1.toy(), and changing it (or indeed H1.toy()) will usually require rewriting all three functions documented here. Look at the definition of E.theta.toy(give=F), and you will see that even changing the meat of h1.toy() from c(1,x) to c(x,1) would require a redefinition of E.theta.toy(g=F).

The only place that E.theta.toy(g=F) is used is internally in hh.fun().
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
E.theta.toy(D2=D2.toy, H1=H1.toy, phi=phi.toy)
E.theta.toy(D2=D2.toy[1,], H1=H1.toy, phi=phi.toy)
E.theta.toy(D2=x.toy, H1=H1.toy, phi=phi.toy)
Edash.theta.toy(x=x.toy, t.vec=t.vec.toy, k=1, H1=H1.toy, phi=phi.toy)
Estimates the posterior mean of K as per equation 10 of KOH2001S, section 4.2
EK.eqn10.supp(X.dist, D1, D2, H1, H2, d, hbar.fun, lower.theta, upper.theta, extractor, give.info=FALSE, include.prior=FALSE, phi, ...)
X.dist | Probability distribution of X, the variable inputs
D1 | Matrix whose rows are the code run points
D2 | Matrix whose rows are field observation points
H1 | Regression function for D1
H2 | Regression function for D2
d | Vector of code outputs and field observations
include.prior | Boolean; passed to function p.eqn8.supp()
hbar.fun | Function that gives the expectation (with respect to X) of the basis functions; see hbar.fun.toy()
lower.theta | Lower integration limit for theta
upper.theta | Upper integration limit for theta
extractor | Extractor function; see extractor.toy()
give.info | Boolean, with default FALSE
phi | Hyperparameters
... | Extra arguments passed to the integration function: adaptIntegrate() if theta is multidimensional, or integrate() if theta is a scalar
This function evaluates a numerical approximation to equation 10 of section 4.2 of the supplement.

Equation 10 integrates over the prior distribution of theta. If theta is a vector, multidimensional integration is necessary. In the case of multidimensional integration, function adaptIntegrate() is used. In the case of one-dimensional integration (theta being a scalar), function integrate() of the stats package is used.

Note that equation 10 is conditional on the observed data and the hyperparameters.
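A minimal sketch of this dispatch (the integrand f here is illustrative; the real integrand is internal to EK.eqn10.supp()):

f <- function(theta){ exp(-sum(theta^2)) }   # hypothetical integrand
lower.theta <- c(-3,-3,-3)
upper.theta <- c(3,3,3)
if(length(lower.theta) > 1){                 # multidimensional theta
  cubature::adaptIntegrate(f, lowerLimit=lower.theta,
                           upperLimit=upper.theta)$integral
} else {                                     # scalar theta
  integrate(Vectorize(f), lower=lower.theta, upper=upper.theta)$value
}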
Returns a scalar
The function was not reviewed by the Journal of Statistical Software.
The package formerly used the adapt package, but this is no longer available on CRAN. The package now uses the cubature package.
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
1+1   # trivial placeholder: the real example below is not run
## Not run: # Not run because it takes R CMD check too long
data(toys)
EK.eqn10.supp(X.dist=X.dist.toy, D1=D1.toy, D2=D2.toy, H1=H1.toy,
              H2=H2.toy, d=d.toy, hbar.fun=hbar.fun.toy,
              lower.theta=c(-3,-3,-3), upper.theta=c(3,3,3),
              extractor=extractor.toy, phi=phi.toy)
## End(Not run)
Returns the a posteriori expectation of the computer program at a particular point with a particular set of parameters, given the code output.
etahat(D1, D2, H1, y, E.theta, extractor, phi)
D1 | Matrix of code observation points and parameters
D2 | Matrix of field observation points
H1 | Basis functions
y | Code observations corresponding to rows of D1
E.theta | Expectation with respect to theta; see Details
extractor | Extractor function
phi | Hyperparameters
Argument E.theta is officially a function that returns the expectation with respect to theta (such as E.theta.toy()). However, if supplied a non-function (this is tested by is.function() in the code), E.theta is interpreted as values of theta to use. Recycling is carried out by function D1.fun().
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
etahat(D1=D1.toy, D2=D2.toy, H1=H1.toy, y=y.toy,
       E.theta=E.theta.toy, extractor=extractor.toy, phi=phi.toy)

# Now try giving E.theta=1:3, which will be interpreted as a value for theta:
etahat(D1=D1.toy, D2=D2.toy, H1=H1.toy, y=y.toy,
       E.theta=1:3, extractor=extractor.toy, phi=phi.toy)
Extracts x.star.toy and t.vec.toy from D1; a toy example is needed because the extraction differs from case to case.
extractor.toy(D1)
D1 | Matrix of code run points
The first two columns give the elements of x.star and columns 3 through 5 give the elements of t.vec.
Function extractor.toy() is the inverse of function D1.fun(), in the sense that extractor.toy() splits up D1 into x.star and t.vec, while D1.fun() joins them up again.
Returns a list with two elements:
x.star | A matrix containing the lat/longs of the code run points
t.vec | A matrix containing the parameters used for the code runs
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
extractor.toy(D1.toy)
extractor.toy(D1.toy[1,,drop=FALSE])
(jj <- extractor.toy(D1.fun(x.star=x.toy, t.vec=theta.toy)))
D1.fun(jj$x.star, jj$t.vec)
Expectation as per equation 7 of the supplement
Ez.eqn7.supp(z, D1, H1, D2, H2, extractor, beta2, y, E.theta, phi)
z | Vector of observations
D1 | Matrix whose rows are code run points
H1 | Regressor basis functions
D2 | Matrix whose rows are observation points
H2 | Regressor basis functions
extractor | Function to split D1
beta2 | Coefficients
y | Code outputs at points corresponding to rows of D1
E.theta | Expectation function to use
phi | Hyperparameters
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
etahat.d2 <- etahat(D1=D1.toy, D2=D2.toy, H1=H1.toy, y=y.toy,
                    E.theta=E.theta.toy, extractor=extractor.toy, phi=phi.toy)
beta2 <- beta2hat.fun(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy,
                      V=V.toy, z=z.toy, etahat.d2=etahat.d2,
                      extractor=extractor.toy, E.theta=E.theta.toy,
                      Edash.theta=Edash.theta.toy, phi=phi.toy)
Ez.eqn7.supp(z=z.toy, D1=D1.toy, H1=H1.toy, D2=D2.toy, H2=H2.toy,
             extractor=extractor.toy, beta2=beta2, y=y.toy,
             E.theta=E.theta.toy, phi=phi.toy)
Expectation as per equation 10 of KOH2001 (not the supplement)
Ez.eqn9.supp(x, theta, d, D1, D2, H1, H2, phi)
Ez.eqn9.supp.vector(x, theta, d, D1, D2, H1, H2, phi)
x | Point at which the expectation is needed
theta | Parameters
d | Observations and code outputs
D1 | Code run points
D2 | Observation points
H1 | Regression function for D1
H2 | Regression function for D2
phi | Hyperparameters
The user should always use Ez.eqn9.supp(), which is a wrapper for Ez.eqn9.supp.vector(). The forms differ in their treatment of theta. In the former, theta must be a vector; in the latter, theta may be a matrix, in which case Ez.eqn9.supp.vector() is applied to the rows.

Note that Ez.eqn9.supp.vector() is vectorized in x but not theta (if given a multi-row object, apply(theta,1,...) is used to evaluate the function for each row supplied).

Function Ez.eqn9.supp() will take multiple-row arguments for x and theta. The output will be a matrix, with rows corresponding to the rows of x and columns corresponding to the rows of theta. See the third example below.

Note that function Ez.eqn9.supp() determines whether there are multiple values of theta by is.vector(theta). If this returns TRUE, it is assumed that theta is a single point in multidimensional parameter space; if FALSE, it is assumed to be a matrix whose rows correspond to points in parameter space.

So if theta is one dimensional, calling Ez.eqn9.supp() with a vector-valued theta will fail, because the function will assume that theta is a single, multidimensional, point. To get round this, use as.matrix(theta), which is not a vector; the rows are the (1D) parameter values.
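A sketch of that workaround (illustrative only, and commented out because the toy example itself has three-dimensional theta):

## With one-dimensional theta, wrap the candidate values in as.matrix()
## so that each row is treated as a separate (1D) parameter value:
## theta.1d <- as.matrix(c(0.1, 0.2, 0.3))
## Ez.eqn9.supp(x=x, theta=theta.1d, d=d, D1=D1, D2=D2, H1=H1, H2=H2, phi=phi)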
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
Ez.eqn9.supp(x=x.toy, theta=theta.toy, d=d.toy, D1=D1.toy, D2=D2.toy,
             H1=H1.toy, H2=H2.toy, phi=phi.toy)
Ez.eqn9.supp(x=D2.toy, theta=t.vec.toy, d=d.toy, D1=D1.toy, D2=D2.toy,
             H1=H1.toy, H2=H2.toy, phi=phi.toy)
Ez.eqn9.supp(x=x.vec, theta=t.vec.toy, d=d.toy, D1=D1.toy, D2=D2.toy,
             H1=H1.toy, H2=H2.toy, phi=phi.toy)
Returns the H matrix; see the front page of the KOH2001 supplement.
H.fun(theta, D1, D2, H1, H2, phi)
theta | Parameters
D1 | Matrix of code run points
D2 | Matrix of observation points
H1 | Regressor function for D1
H2 | Regressor function for D2
phi | Hyperparameters
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
H.fun(theta=theta.toy, D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy, phi=phi.toy)
H.fun(theta=theta.toy, D1=D1.toy[1,,drop=FALSE], D2=D2.toy,
      H1=H1.toy, H2=H2.toy, phi=phi.toy)
H.fun(theta=theta.toy, D1=D1.toy[1,,drop=FALSE], D2=D2.toy[1,,drop=FALSE],
      H1=H1.toy, H2=H2.toy, phi=phi.toy)
Basis functions for D1 and D2 respectively.
h1.toy(x)
h2.toy(x)
x | Vector of lat/long, or lat/long and theta
Note that h1() operates on a vector: for dataframes, use H1.toy(), which is a wrapper for apply(D1, 1, h1).

NB: if the definition of h1.toy() or h2.toy() is changed, then function hbar.toy() must be changed to match.
This cannot be done automatically, as the form of hbar.toy() depends on the distribution of X. The shibboleth is whether E_X() commutes with h_1(); it does in this case but does not in general. For example, consider h1(x) = c(1, x^2) with X having mean m and variance V: then E_X(h1(X)) will be c(1, m^2 + V); note the V.
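A quick Monte Carlo check of this point (the values of m and V are illustrative):

m <- 2
V <- 3
X <- rnorm(1e6, mean=m, sd=sqrt(V))
colMeans(cbind(1, X^2))   # approximately c(1, m^2 + V) = c(1, 7)
c(1, m^2)                 # h1 applied to E(X): the V term is missing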
Returns basis functions of a vector; in the toy case, just prepend a 1.
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
h1.toy(D1.toy[1,])
Applies basis functions to the rows of D1 and D2.
H1.toy(D1)
H2.toy(D2)
D1 | Matrix of code run points
D2 | Matrix of observation points
Returns a matrix whose rows are the basis functions of the code run points or observation points. Function H1.toy() operates on datasets like D1.toy (latlong and parameters) and function H2.toy() operates on datasets like D2.toy (latlong only).

See package goldstein for a less trivial example of h().
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
jj <- extractor.toy(D1.toy)
x.star.toy <- jj$x.star
t.vec.toy <- jj$t.vec
H1.toy(D1=D1.toy)
H1.toy(D1.toy[1,,drop=FALSE])
H1.toy(D1.fun(x.star.toy, theta.toy)[1,,drop=FALSE])
H1.toy(D1.fun(x.star=x.toy, t.vec=theta.toy))
H1.toy(D1.fun(x.star=x.star.toy[1,], t.vec=t.vec.toy[1,]))
H1.toy(D1.fun(x.star=x.star.toy[1,], t.vec=t.vec.toy[1:2,]))
H2.toy(D2.toy)
H2.toy(t(x.toy))
A toy example of the expectation of h as per section 4.2
hbar.fun.toy(theta, X.dist, phi)
theta | Parameter set
X.dist | Distribution of variable inputs
phi | Hyperparameters
Note that if h1.toy() or h2.toy() change, then hbar.fun.toy() will have to change too; see ?h1.toy for an example in which nonlinearity changes the form of E.theta.toy().
Returns a vector as per section 4.2 of KOH2001S
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
hbar.fun.toy(theta=theta.toy, X.dist=X.dist.toy, phi=phi.toy)
Returns TRUE if and only if a matrix is positive definite.
is.positive.definite(a, ...)
a | Matrix to be tested
... | Extra arguments passed to eigen()
A wrapper for eigen() (a matrix is positive definite if all its eigenvalues are positive). This function is included for convenience only.
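A minimal sketch of the equivalent test for a symmetric matrix (not the package's exact code):

is.pd.sketch <- function(a, ...){
  all(eigen(a, only.values=TRUE, ...)$values > 0)   # all eigenvalues positive
}
is.pd.sketch(diag(3))         # TRUE
is.pd.sketch(diag(6) - 0.1)   # TRUE: eigenvalues are 1 (five times) and 0.4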
Robin K. S. Hankin
is.positive.definite(diag(3), sym=TRUE)
is.positive.definite(diag(6) - 0.1)
Very basic implementation of the Metropolis-Hastings algorithm using a multivariate Gaussian proposal distribution. Useful for sampling from p.eqn8.supp().
MH(n, start, sigma, pi)
n | Number of samples to take
start | Start value
sigma | Variance matrix for the kernel
pi | Function proportional to the desired sampling pdf
This is a basic implementation; the proposal distribution is a multivariate Gaussian centred on the current point, with variance matrix sigma.

Returns a matrix whose rows are samples from pi. Note that the first few rows will be “burn-in”, and so should be ignored.
This function is a little slow because it is not vectorized.
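A minimal sketch of the algorithm as described, with Gaussian proposals centred on the current point (illustrative, not the package's exact implementation; it assumes pi(start) > 0):

MH.sketch <- function(n, start, sigma, pi){
  out <- matrix(NA, n, length(start))
  x <- start
  for(i in seq_len(n)){
    prop <- drop(mvtnorm::rmvnorm(1, mean=x, sigma=sigma))   # Gaussian proposal
    if(runif(1) < pi(prop)/pi(x)){ x <- prop }   # accept with prob min(1, ratio)
    out[i,] <- x
  }
  out
}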
Robin K. S. Hankin
W. R. Gilks et al 1996. Markov Chain Monte Carlo in practice. Chapman and Hall, 1996. ISBN 0-412-05551-1
N. Metropolis and others 1953. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, volume 21, number 6, pages 1087-1092
# First, a multivariate Gaussian (three-dimensional):
A <- diag(3) + 0.7
quad.form <- function(M, x){ drop(crossprod(crossprod(M, x), x)) }
pi.gaussian <- function(x){ exp(-quad.form(A/2, x)) }
x.gauss <- MH(n=1000, start=c(0,0,0), sigma=diag(3), pi=pi.gaussian)
cov(x.gauss)/solve(A)   # Should be a matrix of 1s.

# Now something a bit weirder:
pi.triangle <- function(x){
  1*as.numeric( (abs(x[1])<1.0) & (abs(x[2])<1.0) ) +
  5*as.numeric( (abs(x[1])<0.5) & (abs(x[2])<0.5) ) * as.numeric(x[1]>x[2])
}
x.tri <- MH(n=100, start=c(0,0), sigma=diag(2), pi=pi.triangle)
plot(x.tri, main="Try with a higher n")

# Now a Gaussian mixture model:
pi.2gauss <- function(x){
  exp(-quad.form(A/2, x)) + exp(-quad.form(A/2, x+c(2,2,2)))
}
x.2 <- MH(n=100, start=c(0,0,0), sigma=diag(3), pi=pi.2gauss)
## Not run: p3d(x.2, theta=44, d=1e4, d0=1, main="Try with more points")
Gives the a posteriori probability of psi1, given observations. Equation 4 of the supplement.
p.eqn4.supp(D1, y, H1, include.prior=TRUE, lognormally.distributed, return.log, phi)
D1 | Matrix of code run points
y | Vector of code outputs
H1 | Regression function
include.prior | Boolean, with default TRUE
lognormally.distributed | Boolean; see prob.psi1()
return.log | Boolean; if TRUE, return the logarithm of the probability
phi | Hyperparameters
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
p.eqn4.supp(D1=D1.toy, y=y.toy, H1=H1.toy,
            lognormally.distributed=TRUE, phi=phi.toy)
Function to determine the a posteriori probability of hyperparameters rho and lambda, and of parameters theta, given the observations d.
p.eqn8.supp(theta, D1, D2, H1, H2, d, include.prior=FALSE,
            lognormally.distributed=FALSE, return.log=FALSE, phi)
p.eqn8.supp.vector(theta, D1, D2, H1, H2, d, include.prior=FALSE,
                   lognormally.distributed=FALSE, return.log=FALSE, phi)
theta | Parameters
D1 | Matrix of code run points
D2 | Matrix of observation points
H1 | Regression function for D1
H2 | Regression function for D2
d | Vector of code output values and observations
include.prior | Boolean, with default FALSE
lognormally.distributed | Boolean, with default FALSE; see prob.theta()
return.log | Boolean, with default FALSE; if TRUE, return the logarithm of the probability
phi | Hyperparameters
The user should always use p.eqn8.supp(), which is a wrapper for p.eqn8.supp.vector(). The forms differ in their treatment of theta. In the former, theta must be a vector; in the latter, theta may be a matrix, in which case p.eqn8.supp.vector() is applied to the rows.
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
p.eqn8.supp(theta=theta.toy, D1=D1.toy, D2=D2.toy, H1=H1.toy,
            H2=H2.toy, d=d.toy, phi=phi.toy)

## Now try using the true hyperparameters, and data directly drawn from
## the appropriate multivariate distribution:
phi.true <- phi.true.toy(phi=phi.toy)
jj <- create.new.toy.datasets(D1.toy, D2.toy)
d.toy <- jj$d.toy
p.eqn8.supp(theta=theta.toy, D1=D1.toy, D2=D2.toy, H1=H1.toy,
            H2=H2.toy, d=d.toy, phi=phi.true)

## Now try p.eqn8.supp() with a vector of possible thetas:
p.eqn8.supp(theta=sample.theta(n=11, phi=phi.true), D1=D1.toy, D2=D2.toy,
            H1=H1.toy, H2=H2.toy, d=d.toy, phi=phi.true)
Function to determine the a posteriori probability of hyperparameters psi2, rho and lambda, given observations y and z.
p.page4(D1, D2, H1, H2, V, y, z, E.theta, Edash.theta, extractor, include.prior=FALSE, lognormally.distributed=FALSE, return.log=FALSE, phi)
D1 | Matrix of code run points
D2 | Matrix of observation points
H1 | Basis function (vectorized)
H2 | Regression function for D2
V | Covariance matrix; the default value of NULL means it is computed internally
y | Vector of code outputs
z | Vector of observation values
E.theta | Expectation over theta
Edash.theta | Expectation over theta with respect to the dashed distribution; see Edash.theta.toy()
extractor | Function to extract independent variables and parameters from D1
include.prior | Boolean, with default FALSE
lognormally.distributed | Boolean, with default FALSE
return.log | Boolean, with default FALSE
phi | Hyperparameters
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
p.page4(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy, V=NULL,
        y=y.toy, z=z.toy, E.theta=E.theta.toy,
        Edash.theta=Edash.theta.toy, extractor=extractor.toy, phi=phi.toy)

## Now compare the above value with p.page4() calculated with phi
## differing only in psi2:
phi.toy.new <- phi.change(phi.fun=phi.fun.toy, old.phi=phi.toy, psi2=c(8,8,8))
p.page4(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy, V=V.toy,
        y=y.toy, z=z.toy, E.theta=E.theta.toy,
        Edash.theta=Edash.theta.toy, extractor=extractor.toy,
        phi=phi.toy.new)   ## different!
Function to create (phi.fun.toy()) or modify (phi.change()) toy hyperparameters phi, in a form suitable for passing to the other functions in the library. The user should never make phi by hand; always use one of these functions.
phi.fun.toy(rho, lambda, psi1, psi1.apriori, psi2, psi2.apriori, theta.apriori) phi.change(phi.fun, old.phi = NULL, rho = NULL, lambda = NULL, psi1 = NULL, psi1.apriori=NULL, psi1.apriori.mean=NULL, psi1.apriori.sigma=NULL, psi2 = NULL, psi2.apriori=NULL, psi2.apriori.mean=NULL, psi2.apriori.sigma=NULL, theta.apriori=NULL, theta.apriori.mean=NULL, theta.apriori.sigma=NULL)
phi.fun | In phi.change(), the function used to create the hyperparameters
old.phi | In function phi.change(), the hyperparameter object to be modified
rho | Correlation hyperparameter appearing in main equation
lambda | Noise hyperparameter
psi1 | Roughness lengths hyperparameter for design matrix D1
psi1.apriori | A priori PDF for psi1 (mean and variance matrix)
psi1.apriori.mean | In function phi.change(), the new a priori mean for psi1
psi1.apriori.sigma | In function phi.change(), the new a priori variance matrix for psi1
psi2 | Roughness lengths hyperparameter for D2
psi2.apriori | A priori PDF for psi2 (mean and variance matrix)
psi2.apriori.mean | In phi.change(), the new a priori mean for psi2
psi2.apriori.sigma | In phi.change(), the new a priori variance matrix for psi2
theta.apriori | A priori PDF for theta (mean and variance matrix)
theta.apriori.mean | In phi.change(), the new a priori mean for theta
theta.apriori.sigma | In phi.change(), the new a priori variance matrix for theta
Note that this toy function contains within itself pdm.maker.toy(), which extracts omega_x and omega_t and sigma1squared from psi1. This will need to be changed for real-world applications. Earlier versions of the package had pdm.maker.toy() defined separately.
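The derived elements can be inspected directly; a short sketch using the toy hyperparameters:

data(toys)
names(phi.toy)          # includes the derived elements listed below
phi.toy$omega_x         # positive definite matrix extracted from psi1
phi.toy$sigma1squared   # variance hyperparameter extracted from psi1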
Returns a list of several elements:
rho | Correlation hyperparameter
lambda | Noise hyperparameter
psi1 | Roughness lengths hyperparameter for D1
psi1.apriori | A priori mean and variance matrix for psi1
psi2 | Roughness lengths hyperparameter for D2
psi2.apriori | A priori mean and variance matrix for psi2
theta.apriori | A priori mean and variance matrix for the parameters
omega_x | Positive definite matrix for the lat/long part, extracted from psi1
omega_t | Positive definite matrix for the code parameters theta, whose diagonal is extracted from psi1
omegastar_x | Positive definite matrix for use in equation 13 of the supplement; represents distances between rows of D2
sigma1squared | Variance hyperparameter, extracted from psi1
sigma2squared | Variance hyperparameter, extracted from psi2
omega_x.upper | Upper triangular Cholesky decomposition of omega_x
omega_x.lower | Lower triangular Cholesky decomposition of omega_x
omega_t.upper | Upper triangular Cholesky decomposition of omega_t
omega_t.lower | Lower triangular Cholesky decomposition of omega_t
a | Precalculated matrix for use in Edash.theta.toy()
b | Precalculated matrix for use in Edash.theta.toy()
c | Precalculated scalar for use in Edash.theta.toy()
A | Precalculated matrix for use in Edash.theta.toy()
A.upper | Upper triangular Cholesky decomposition of A
A.lower | Lower triangular Cholesky decomposition of A
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
phi.fun.toy(100, 101, 1:6, list(mean=rep(1,6), sigma=1+diag(6)), 50:55,
            list(mean=rep(0,4), sigma=0.1+diag(4)),
            list(mean=0.1+(1:3), sigma=2.1+diag(3)))

phi.fun.toy(rho=1, lambda=1,
            psi1 = structure(c(1.1, 1.2, 1.3, 1.4, 1.5, 0.7),
                             .Names = c("x","y","A","B","C","s1sq")),
            psi1.apriori = list(mean=rep(0,6), sigma=0.4+diag(6)),
            psi2 = structure(c(2.1, 2.2), .Names = c("x","y")),
            psi2.apriori = list(mean=rep(0,5), sigma=0.2+diag(5)),
            theta.apriori = list(mean=0.1+(1:3), sigma=2.1+diag(3)))

data(toys)
phi.change(phi.fun=phi.fun.toy, old.phi=phi.toy, rho=100)
phi.change(phi.fun=phi.fun.toy, old.phi=phi.toy, theta.apriori.sigma=4*diag(3))
identical(phi.toy, phi.change(phi.fun=phi.fun.toy, old.phi=phi.toy))
phi.fun.toy(100,101,1:6,list(mean=rep(1,6),sigma=1+diag(6)),50:55, list(mean=rep(0,4),sigma=0.1+diag(4)), list(mean=0.1+(1:3),sigma=2.1+diag(3))) phi.fun.toy(rho=1, lambda=1, psi1 = structure(c(1.1, 1.2, 1.3, 1.4, 1.5, 0.7), .Names = c("x", "y", "A","B", "C","s1sq")), psi1.apriori = list( mean=rep(0,6), sigma=0.4+diag(6)), psi2=structure(c(2.1, 2.2), .Names = c("x","y")), psi2.apriori = list(mean=rep(0,5),sigma=0.2+diag(5)), theta.apriori = list(mean=0.1+(1:3),sigma=2.1+diag(3)) ) data(toys) phi.change(phi.fun=phi.fun.toy, old.phi = phi.toy, rho = 100) phi.change(phi.fun=phi.fun.toy, old.phi = phi.toy, theta.apriori.sigma = 4*diag(3)) identical(phi.toy, phi.change(phi.fun=phi.fun.toy, old.phi=phi.toy))
Functions to determine the a priori probability of psi1 and psi2 (the hyperparameters) and of theta (the parameters), given the a priori means and standard deviations. Function sample.theta() samples theta from its prior distribution.
prob.psi1(phi, lognormally.distributed=TRUE)
prob.psi2(phi, lognormally.distributed=TRUE)
prob.theta(theta, phi, lognormally.distributed=FALSE)
sample.theta(n=1, phi)
phi | Hyperparameters
theta | Parameters
lognormally.distributed | Boolean, with TRUE meaning that the relevant quantity has a lognormal prior distribution; see Details
n | In function sample.theta(), the number of samples to draw
These functions use package mvtnorm to calculate the probability density under the assumption that the PDF is lognormal. One implication would be that phi$psi2.apriori$mean and phi$psi1.apriori$mean are the means of the logarithms of the elements of psi1 and psi2 (which are thus assumed to be positive); the sigma matrix is the covariance matrix of the logarithms as well.
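To make the lognormal convention concrete, here is a minimal sketch of the density computation described above; the helper dmvlnorm() is hypothetical and not part of the package:

library(mvtnorm)
## Hypothetical helper: log-density of a multivariate lognormal variable x,
## whose logarithm is normal with the given mean and covariance matrix.
## The final term is the Jacobian of the log transformation.
dmvlnorm <- function(x, mean, sigma){
  dmvnorm(log(x), mean = mean, sigma = sigma, log = TRUE) - sum(log(x))
}
dmvlnorm(c(1.1, 2.2), mean = rep(0, 2), sigma = diag(2))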
In these functions, interpretation of argument phi depends on the value of Boolean argument lognormally.distributed. Take prob.theta() as an example. If lognormally.distributed is TRUE, then log(theta) is normally distributed with mean phi$theta.apriori$mean and variance phi$theta.apriori$sigma. If FALSE, theta is normally distributed with mean phi$theta.apriori$mean and variance phi$theta.apriori$sigma.

Interpretation of phi$theta.apriori$mean thus depends on the value of lognormally.distributed: if TRUE, it is the expected value of log(theta); if FALSE, it is the expectation of theta.
The reason that prob.theta() has a different default value for lognormally.distributed is that some elements of theta might be negative, contraindicating a lognormal distribution.
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
p.eqn4.supp, stage1, p.eqn8.supp
data(toys)
prob.psi1(phi=phi.toy)
prob.psi2(phi=phi.toy)
prob.theta(theta=theta.toy, phi=phi.toy)
sample.theta(n=4, phi=phi.toy)
Function to compute reality, gratis deus ex machina. Includes a simple computer model that substitutes for a complex climate model, and a simple function that substitutes for the base system, in this case the climate.
model.inadequacy(X, set.seed.to.zero=TRUE, draw.from.prior=FALSE,
                 export.true.hyperparameters=FALSE, phi=NULL)
computer.model(X, params=NULL, set.seed.to.zero=TRUE, draw.from.prior=FALSE,
               export.true.hyperparameters=FALSE, phi=NULL)
phi.true.toy(phi)
X | Observation point(s): a vector, or a matrix whose rows are points
params | Parameters needed by computer.model()
set.seed.to.zero | Boolean, with the default value of TRUE meaning to set the random seed to zero first, so that repeated calls return the same values
draw.from.prior | Boolean, with default FALSE meaning to use the built-in ("true") hyperparameters, and TRUE meaning to draw them from the prior specified by argument phi
export.true.hyperparameters | Boolean, with default value of FALSE; TRUE means to return the true hyperparameters used internally
phi | In functions computer.model() and model.inadequacy(), the hyperparameters to draw from when draw.from.prior is TRUE; in phi.true.toy(), a hyperparameter object whose values are replaced with the true ones
Function reality() provides the scalar value observed at a point x. Evaluation expense is zero; there is no overhead. (However, it does not compute “reality”: the function returns a value subject to observational error, as per equation 5. It might be better to call this function observation().)

Function computer.model() returns the output of a simple, nonlinear computer model.

Both functions documented here return a random variable drawn from an appropriate (correlated) multivariate Gaussian distribution, and are thus Gaussian processes.

The approach is more explicit in the help pages of the emulator package. There, Gaussian processes are generated by directly invoking rmvnorm() with a suitable correlation matrix.
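A minimal sketch of that approach (illustrative only, not the internal code of computer.model()): draw one realization of a zero-mean Gaussian process over the rows of the toy design matrix.

library(mvtnorm)
library(emulator)
data(toys)
## correlation matrix between the rows of the toy design matrix
A <- corr.matrix(D1.toy, scales = rep(1, 5))
## one draw from a zero-mean Gaussian process evaluated at those rows
rmvnorm(n = 1, mean = rep(0, nrow(D1.toy)), sigma = A)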
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
computer.model(X=D2.toy, params=theta.toy)
computer.model(D1.toy)
computer.model(X=x.toy, params=extractor.toy(D1.toy)$t.vec)
phi.fix <- phi.change(old.phi=phi.toy, psi1=c(1, 0.5, 1, 1, 0.5, 0.4),
                      phi.fun=phi.fun.toy)
# The values come from c(REAL.SCALES,REAL.SIGMA1SQUARED) as
# seen in the source code for computer.model().
computer.model(D1.toy)  # use debug(computer.model) and examine
                        # var.matrix directly.  It should match the
                        # output from V1():
# first fix phi so that it has the correct values for psi1 (see the
# section on psi1 in ?phi.fun.toy for how to get this):
phi.fix <- phi.change(old.phi=phi.toy, psi1=c(1, 0.5, 1.0, 1.0, 0.5, 0.4),
                      phi.fun=phi.fun.toy)
V1(D1.toy, phi=phi.fix)
# What are the hyperparameters that were used to create reality?
phi.true.toy(phi=phi.toy)
# computer.model(X=D2.toy, params=theta.toy, draw.from.prior=TRUE, phi=phi.toy)
Perform O'Hagan's three-stage optimization on the toy dataset. Functions stage1() and stage2() find the optimal values for the hyperparameters, and stage3() finds the optimal values for the three parameters.
stage1(D1, y, H1, maxit, trace=100, method="Nelder-Mead", directory=".",
       do.filewrite=FALSE, do.print=TRUE, phi.fun,
       lognormally.distributed=FALSE, include.prior=TRUE, phi)
stage2(D1, D2, H1, H2, y, z, maxit, trace=100, method="Nelder-Mead",
       directory=".", do.filewrite=FALSE, do.print=TRUE, extractor,
       phi.fun, E.theta, Edash.theta, isotropic=FALSE,
       lognormally.distributed=FALSE, include.prior=TRUE,
       use.standin=FALSE, rho.eq.1=TRUE, phi)
stage3(D1, D2, H1, H2, d, maxit, trace=100, method="Nelder-Mead",
       directory=".", do.filewrite=FALSE, do.print=TRUE,
       include.prior=TRUE, lognormally.distributed=FALSE,
       theta.start=NULL, phi)
maxit | Maximum number of iterations, as passed to optim()
trace | Amount of information displayed, as passed to optim()
D1 | Matrix whose rows are points at which code output is known
D2 | Matrix whose rows are points at which observations were made
H1, H2 | Regressor basis functions for D1 and D2 respectively (eg H1.toy() and H2.toy())
y | Code outputs; the toy example is y.toy
z | Observations; the toy example is z.toy
d | Data vector consisting of the code runs and observations
extractor | Extractor function for D1; the toy example is extractor.toy()
E.theta, Edash.theta | Expectation with respect to theta, and to dashed theta; the toy examples are E.theta.toy() and Edash.theta.toy()
phi.fun | Function to create hyperparameters, such as phi.fun.toy()
method | Method argument passed to optim()
include.prior | Boolean, with default TRUE meaning to include the appropriate prior PDF in the quantity being maximized
lognormally.distributed | Boolean, with TRUE meaning to use a lognormal prior; see prob.theta() for details
do.filewrite | Boolean, with TRUE meaning to write intermediate results to files in directory as the optimization progresses
directory | The directory to write files to; only matters if do.filewrite is TRUE
isotropic | In function stage2(), a Boolean; TRUE forces phi$omegastar_x to depend on a single scalar (see Details)
do.print | Boolean, with default TRUE meaning to print intermediate values
use.standin | In stage2(), a Boolean; TRUE means to use a stand-in matrix of the correct size but arbitrary contents in place of the real variance matrix (a speed/debugging aid)
rho.eq.1 | Boolean, with default TRUE meaning to hold rho constant at 1 during the optimization
theta.start | In stage3(), the starting value of theta for the optimization; the default NULL means to use the a priori mean
phi | Hyperparameters; used as initial values for the hyperparameters in the optimization routines
The three functions documented here carry out the multi-stage optimization detailed in KOH2001 (strictly, KOH2001 defined only stage 1 and stage 2, which estimate the hyperparameters; what is here called “stage3()” estimates the true value of theta given the hyperparameters).

stage1() carries out stage 1 of KOH2001, which estimates psi1 using optimization.

In function stage2(), setting argument isotropic to TRUE will force phi$omegastar_x to be a function of a length-one scalar. The value of phi$omegastar_x used will depend on pdm.maker.psi2() (an internal function appearing in hpa.fun.toy()). In stage2(), several kludges are made. The initial conditions are provided by argument phi; the relevant part of this is phi$psi2.

Function stage2() estimates psi2, rho, and lambda, using optimization. Note that psi2 includes sigma2squared in addition to omegastar_x (in the toy case, psi2 has three elements: the first two are the diagonal of omegastar_x and the third is sigma2squared, but this information is encoded in phi.fun.toy(), which changes from application to application).

Function stage3() attempts to find the maximum likelihood estimate of theta, given hyperparameters and observations, using optimization.
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
stage1(D1=D1.toy, y=y.toy, H1=H1.toy, maxit=5,
       phi.fun=phi.fun.toy, phi=phi.toy)

## now try with a slightly bigger dataset:
## Examples below take a few minutes to run:
set.seed(0)
data(toys)
jj <- create.new.toy.datasets(D1.toy, D2.toy)
y.toy <- jj$y.toy
z.toy <- jj$z.toy
d.toy <- jj$d.toy
phi.toy.stage1 <- stage1(D1=D1.toy, y=y.toy, H1=H1.toy, maxit=10,
                         phi.fun=phi.fun.toy, phi=phi.toy)
phi.toy.stage2 <- stage2(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy,
                         y=y.toy, z=z.toy, extractor=extractor.toy,
                         phi.fun=phi.fun.toy, E.theta=E.theta.toy,
                         Edash.theta=Edash.theta.toy, maxit=3,
                         phi=phi.toy.stage1)
stage3(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy, d=d.toy, maxit=3,
       phi=phi.toy.stage2)

# Now try with the true values of the hyperparameters:
phi.true <- phi.true.toy(phi=phi.toy)
stage3(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy, d=d.toy, maxit=3,
       phi=phi.true)
Symmetrize an upper triangular matrix by copying the upper triangular elements into the lower triangular places
symmetrize(a)
a | Upper triangular matrix to be symmetrized
Also works for lower triangular matrices
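One way to picture the operation (a sketch, not necessarily the package's implementation): adding the transpose double-counts the diagonal, so subtract one copy of it.

## hypothetical sketch of symmetrization for a triangular matrix a:
symmetrize_sketch <- function(a) a + t(a) - diag(diag(a))
jj <- matrix(rnorm(50), 10, 5)
symmetrize_sketch(chol(crossprod(jj)))  # expected to agree with symmetrize()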
Robin K. S. Hankin
jj <- matrix(rnorm(50), 10, 5)
X <- crossprod(jj, jj)   # X has a Wishart distribution (and in
                         # particular is positive definite)
chol(X)
symmetrize(chol(X))
Returns a vector whose elements are the “distances” from a point to the observations and code run points (tee()); and basis functions for use in Ez.eqn9.supp() (h.fun()).
tee(x, theta, D1, D2, phi)
h.fun(x, theta, H1, H2, phi)
x | Point from which distances are calculated
theta | Value of the parameters
D1, D2 | Design matrices of code run points and field observation points respectively (eg D1.toy and D2.toy)
H1, H2 | Basis functions for eta and the model inadequacy term respectively (eg H1.toy() and H2.toy())
phi | Hyperparameters
Equation 9 of the supplement is identical to equation 10 of KOH2001. Function h.fun() returns the first of the subsidiary equations in equation 9 of the supplement, and function tee() returns the second (NB: do not confuse this with functions t1bar() and t2bar(), which are internal to EK.eqn10.supp()).
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
tee(x=x.toy, theta=theta.toy, D1=D1.toy, D2=D2.toy, phi=phi.toy)

# Now some vectorized examples:
jj <- rbind(x.toy, x.toy, x.toy+0.01, x.toy+1, x.toy*10)
tee(x=jj, theta=theta.toy, D1=D1.toy, D2=D2.toy, phi=phi.toy)
h.fun(x=jj, theta=theta.toy, H1=H1.toy, H2=H2.toy, phi=phi.toy)
Toy datasets that illustrate the package.
data(toys)
The D1.toy matrix is 8 rows of code run points, with five columns. The first two columns are the lat and long and the next three are parameter values.

The D2.toy matrix is five rows of observations on two variables, x and y, which are styled “latitude and longitude”.

d.toy is the “data” vector, of length 13: elements 1-8 are code runs and elements 9-13 are observations.

theta.toy is a vector of length three that is a working example of theta. The parameters are designed to work with computer.model().

t.vec.toy is a matrix of eight rows and three columns. Each row specifies a value for theta; the eight rows correspond to eight code runs.

x.toy and x.toy2 are vectors of length two that give a sample point at which observations may be made (or at which the code may be run). The gloss of the two elements is latitude and longitude.

x.vec is a matrix whose rows are reasonable x values, but not those in D2.toy.

y.toy is a vector of length eight. Each element corresponds to the output from a code run at each of the rows of D1.toy.

z.toy is a vector of length five. Each element corresponds to a measurement at each of the rows of D2.toy.

V.toy is a five by five variance-covariance matrix for the toy datasets.

X.dist.toy is a toy example of a distribution of X for use in calibrated uncertainty analysis, section 4.2.
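These shapes can be confirmed directly:

data(toys)
dim(D1.toy)    # 8 code run points, 5 columns (2 lat/long + 3 parameters)
dim(D2.toy)    # 5 observation points, 2 columns
length(d.toy)  # 13 = 8 code outputs followed by 5 observations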
Brief descriptions of the toy functions, each fully documented under its own manpage:
Function create.new.toy.datasets() creates new toy datasets with any number of observations and code runs.

Function E.theta.toy() returns the expectation of H(D) with respect to theta; Edash.theta.toy() returns the expectation with respect to the dashed distribution of theta.

Function extractor.toy() extracts x.star.toy and t.vec.toy from D2; a toy example is needed because the extraction differs from case to case.

Function H1.toy() applies basis functions to rows of D1 and D2.

Function phi.fun.toy() creates a hyperparameter object such as phi.toy in a form suitable for passing to the other functions in the library.

Function phi.change() modifies an existing hyperparameter object.
See the helpfiles listed in the “See Also” section below.
All toy datasets are documented here; they are created by running
inst/toys.R
. There are also several toy
functions that are needed for a toy problem; these are documented
separately (they are too diverse to document fully in a single
manpage). Nevertheless a terse summary for each toy function
is provided on this page. All toy functions in the package are listed
under “See Also”.
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
create.new.toy.datasets, E.theta.toy, extractor.toy, H1.toy, phi.fun.toy, stage1
data(toys)
D1.toy
extractor.toy(D1.toy)
D2.fun(theta=theta.toy, D2=D2.toy)
D2.fun(theta=theta.toy, D2=D2.toy[1,,drop=FALSE])
library("emulator")
corr.matrix(D1.toy, scales=rep(1,5))
corr.matrix(D1.toy, pos.def.matrix=diag(5))
Calculates the three integrals needed for V, under the restrictions specified in the KOH2001 supplement.
tt.fun(D1, extractor, x.i, x.j, test.for.symmetry=FALSE, method=1, phi)
ht.fun(x.i, x.j, D1, extractor, Edash.theta, H1, fast.but.opaque=TRUE,
       x.star=NULL, t.vec=NULL, phi)
hh.fun(x.i, x.j, H1, E.theta, phi)
t.fun(x, D1, extractor, phi)
D1 | Matrix of code run points
H1 | Regression basis functions for D1
extractor | Function to extract x.star and t.vec from D1 (eg extractor.toy())
x | Lat and long of a point (eg x.toy)
x.i | Lat and long of the first point (eg D2.toy[1,])
x.j | Lat and long of the second point (eg D2.toy[2,])
theta | Parameters
Edash.theta | Function to return expectations with respect to the dashed distribution of theta (eg Edash.theta.toy())
E.theta | Function to return expectations with respect to theta (eg E.theta.toy())
test.for.symmetry | In tt.fun(), a Boolean; the default FALSE exploits the symmetry of the answer for speed. Set this argument to TRUE only for debugging purposes
fast.but.opaque | In ht.fun(), a Boolean; the default TRUE uses precomputed x.star and t.vec for speed
x.star | In ht.fun(), the x.star component of extractor(D1); needed if fast.but.opaque is TRUE
t.vec | In ht.fun(), the t.vec component of extractor(D1); needed if fast.but.opaque is TRUE
method | In tt.fun(), an integer selecting one of several equivalent methods of computing the answer
phi | Hyperparameters
The four functions return integrals representing means taken over theta. To wit:

Function tt.fun() evaluates the mean over theta of the product of t(x.i) and t(x.j), and is used in V.fun(). Note that this function is symmetric in x.i and x.j.

Function ht.fun() evaluates the mean over theta of the product of h(x.i) and t(x.j), and is used in V.fun(). Note that this function is not symmetric in x.i and x.j.

Function hh.fun() evaluates the mean over theta of the product of h(x.i) and h(x.j), and is used in V.fun(). Note that this function is symmetric in x.i and x.j.

Function t.fun() evaluates the mean over theta of t(x), using the formula in the supplement. It is used in Ez.eqn7.supp(). NB: do not confuse this function with tee(), which is different.

These functions are not generally of much interest to the end user; they are called by V.fun(). They are defined separately as a debugging aid, and to simplify the structure of V.fun().

Each function returns a matrix as described in KOH2001.
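For instance, the symmetry of tt.fun() noted above can be probed on the toy data:

data(toys)
## by the symmetry noted above, these two calls should agree
## (possibly up to transposition of the returned matrix):
tt.fun(D1=D1.toy, extractor=extractor.toy,
       x.i=D2.toy[1,], x.j=D2.toy[2,], phi=phi.toy)
tt.fun(D1=D1.toy, extractor=extractor.toy,
       x.i=D2.toy[2,], x.j=D2.toy[1,], phi=phi.toy)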
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
tt.fun(D1=D1.toy, extractor=extractor.toy, x.i=D2.toy[1,],
       x.j=D2.toy[2,], phi=phi.toy)
ht.fun(x.i=D2.toy[1,], x.j=D2.toy[2,], D1=D1.toy, extractor=extractor.toy,
       Edash.theta=Edash.theta.toy, H1=H1.toy, fast.but.opaque=FALSE,
       phi=phi.toy)
ht.fun(x.i=D2.toy[1,], x.j=D2.toy[2,], D1=D1.toy, extractor=extractor.toy,
       Edash.theta=Edash.theta.toy, H1=H1.toy, fast.but.opaque=TRUE,
       x.star=extractor.toy(D1.toy)$x.star,
       t.vec=extractor.toy(D1.toy)$t.vec, phi=phi.toy)
hh.fun(x.i=D2.toy[1,], x.j=D2.toy[2,], H1=H1.toy, E.theta=E.theta.toy,
       phi=phi.toy)
t.fun(x=x.toy, D1=D1.toy, extractor=extractor.toy, phi=phi.toy)
Determines the variance/covariance matrix for the observations and code run points.
V.fun(D1, D2, H1, H2, extractor, E.theta, Edash.theta,
      give.answers=FALSE, test.for.symmetry=FALSE, phi)
D1 | Matrix of code run points
D2 | Matrix of observation points
H1 | Regression function for D1
H2 | Regression function for D2
extractor | Function to extract x.star and t.vec from D1 (eg extractor.toy())
Edash.theta | Function to return expectations with respect to the dashed distribution of theta
E.theta | Expectation with respect to theta
give.answers | Boolean (defaulting to FALSE) meaning to return only the covariance matrix; TRUE returns a list that also includes intermediate results (see Value)
test.for.symmetry | Boolean, with default FALSE meaning to exploit the symmetry of the answer for speed. Set this argument to TRUE only for debugging purposes
phi | Hyperparameters
See page 3 of the KOH2001 supplement for full details.

If give.answers takes its default value of FALSE, the function returns a matrix of covariances for use in p.page4(). If give.answers is TRUE, it returns a named list of (currently) 17 elements. Elements one to six are lines one to six respectively from page 3 of the supplement; subsequent elements give intermediate steps in the calculation. The final element is the matrix of covariances as returned when give.answers is FALSE.
This function takes a long time to run
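For example, the extra detail can be requested on the toy dataset (slow, as noted):

data(toys)
jj <- V.fun(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy,
            extractor=extractor.toy, Edash.theta=Edash.theta.toy,
            E.theta=E.theta.toy, give.answers=TRUE, phi=phi.toy)
names(jj)   # 17 named elements; the final one is the covariance matrix itself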
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
(jj <- V.fun(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy,
             extractor=extractor.toy, Edash.theta=Edash.theta.toy,
             E.theta=E.theta.toy, phi=phi.toy))

## Now note that V.fun() changes with the PRIOR used for theta:
phi.different.theta <- phi.change(old.phi=phi.toy,
                                  theta.apriori.mean=c(100,100,100),
                                  phi.fun=phi.fun.toy)
V.fun(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy,
      extractor=extractor.toy, Edash.theta=Edash.theta.toy,
      E.theta=E.theta.toy, phi=phi.different.theta)  ## different!

## Now compare jj above with V.fun() calculated with different psi2:
phi.toy.new <- phi.change(phi.fun=phi.fun.toy, old.phi=phi.toy,
                          psi2=c(8,8,8))
V.fun(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy,
      extractor=extractor.toy, Edash.theta=Edash.theta.toy,
      E.theta=E.theta.toy, phi=phi.toy.new)  ## different!

## Not run:
data(toys)
set.seed(0)
jj <- create.new.toy.datasets(D1=D1.toy, D2=D2.toy)
y.toy <- jj$y.toy
z.toy <- jj$z.toy
d.toy <- jj$d.toy
v.fun <- function(...){
  V.fun(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy,
        extractor=extractor.toy, Edash.theta=Edash.theta.toy,
        E.theta=E.theta.toy, phi=phi.toy, give=TRUE)
}
Rprof(file="~/f.txt"); ignore <- v.fun(); Rprof(file=NULL)
system("cd ; R CMD Rprof ~/f.txt > ~/ff.txt")
## End(Not run)
Gives the matrix of distances between the rows of D1 and itself (or, if supplied, another matrix).
V1(D1, other = NULL, phi)
D1 | Matrix of code run points
other | Second matrix to compute distances to; if NULL (the default), use D1 itself
phi | Hyperparameters
Returns a matrix
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
V1(D1=D1.toy, other=NULL, phi=phi.toy)
V1(D1=D1.toy[1,,drop=FALSE], other=NULL, phi=phi.toy)
V1(D1=D1.toy, other=D1.toy[1:3,], phi=phi.toy)
V1(D1=D1.toy, other=D1.fun(x.star=x.vec, t.vec=theta.toy), phi=phi.toy)
Distance between observation points.
V2(x, other = NULL, phi)
x | Matrix whose rows are observation points
other | Second matrix; if NULL (the default), use x itself
phi | Hyperparameters
This function returns the variance matrix of observations of the real process at points x. It appears in the lower right corner of the variance matrix at the bottom of page 1 of the supplement, calculated by function Vd(). It is also used in functions Cov.eqn9.supp() and V.fun().
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
V2(D2.toy, other=NULL, phi=phi.toy)
V2(D2.toy, x.vec, phi=phi.toy)
Variance matrix for d, as per the bottom of page 1 of the supplement
Vd(D1, D2, theta, phi)
D1 | Matrix of code run points
D2 | Matrix of observation points
theta | Parameters
phi | Hyperparameters
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
Vd(D1=D1.toy, D2=D2.toy, theta=theta.toy, phi=phi.toy)
Covariance matrix of beta given theta, phi, d
W(D1, D2, H1, H2, theta, det=FALSE, phi)
D1 | Matrix whose rows are code run points
D2 | Matrix whose rows are observation points
H1 | Regression function for D1
H2 | Regression function for D2
theta | Parameters
det | Boolean, with default FALSE meaning to return the matrix, and TRUE meaning to return only its determinant
phi | Hyperparameters
This function is defined between equations 2 and 3 of the supplement. It is used in functions betahat.fun.koh(), p.eqn8.supp(), and p.joint().

Returns the covariance matrix (or, if det is TRUE, its determinant).

If only the determinant is required, setting argument det to TRUE is faster than using det(W(..., det=FALSE)), as the former avoids an unnecessary use of solve().
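On the toy data the shortcut can be checked against the long way round; the two values should agree to within floating point error:

data(toys)
W(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy, theta=theta.toy,
  det=TRUE, phi=phi.toy)
det(W(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy, theta=theta.toy,
      det=FALSE, phi=phi.toy))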
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
W(D1=D1.toy, D2=D2.toy, H1=H1.toy, H2=H2.toy, theta=theta.toy,
  phi=phi.toy)
Returns the variance-covariance matrix of the estimator beta1hat.
W1(D1, H1, det=FALSE, phi)
D1 | Matrix of code points
H1 | Basis function generator
phi | Hyperparameters
det | Boolean, with default FALSE meaning to return the matrix, and TRUE meaning to return only its determinant
If only the determinant is required, setting argument det to TRUE is faster than using det(W1(..., det=FALSE)), as the former avoids an unnecessary use of solve().
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
W1(D1=D1.toy, H1=H1.toy, phi=phi.toy)
Variance matrix for beta2 as per page 4 of the supplement
W2(D2, H2, V, det=FALSE)
D2 | Matrix of observation points
H2 | Regression function
V | Overall covariance matrix
det | Boolean, with default FALSE meaning to return the matrix, and TRUE meaning to return only its determinant
If only the determinant is required, setting argument det to TRUE is faster than using det(W2(..., det=FALSE)), as the former avoids an unnecessary use of solve().
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464
M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps
R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)
data(toys)
W2(D2=D2.toy, H2=H2.toy, V=V.toy)