Title: Bayesian Prediction of Complex Computer Codes
Description: Performs Bayesian prediction of complex computer codes when fast approximations are available. It uses a hierarchical version of the Gaussian process, originally proposed by Kennedy and O'Hagan (2000), Biometrika 87(1):1.
Authors: Robin K. S. Hankin
Maintainer: Robin K. S. Hankin <[email protected]>
License: GPL-2
Version: 1.2-8
Built: 2025-01-15 02:37:24 UTC
Source: https://github.com/robinhankin/approximator
Implements the ideas of Kennedy and O'Hagan 2000 (see references).
Package: approximator
Type: Package
Version: 1.0
Date: 2006-01-10
License: GPL
This package implements the Bayesian approximation techniques discussed in Kennedy and O'Hagan 2000.
In its simplest form, it takes input from a “slow” but accurate code and a “fast” but inaccurate code, each run at different points in parameter space. The approximator package then uses both sets of model runs to infer what the slow code would produce at a given, untried point in parameter space.
The package includes functionality to work with a hierarchy of codes with increasing accuracy.
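Using the toy datasets supplied with the package, the basic workflow can be sketched as follows. This is a minimal illustration using only the functions documented below; the query point x = 1:3 is arbitrary:

```r
library(approximator)
data(toyapps)   # toy design matrix, subsets, hyperparameters, and observations

# Posterior mean of the top-level ("slow") code at an untried point:
mdash.fun(x = 1:3, D1 = D1.toy, subsets = subsets.toy,
          hpa = hpa.toy, z = z.toy, basis = basis.toy)

# Posterior covariance at the same point:
cdash.fun(x = 1:3, D1 = D1.toy, subsets = subsets.toy,
          basis = basis.toy, hpa = hpa.toy)
```

In practice one would first estimate the hyperparameters from the data using opt.1() and opt.gt.1(), as shown on the genie help page.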
Robin K. S. Hankin
Maintainer: <[email protected]>
R. K. S. Hankin 2005. “Introducing BACCO, an R bundle for Bayesian analysis of computer code output”, Journal of Statistical Software, 14(16)
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
mdash.fun(x=1:3, D1=D1.toy, subsets=subsets.toy, hpa=hpa.toy,
          z=z.toy, basis=basis.toy)
Returns the matrix of correlations of code output at each level evaluated at points on the design matrix.
Afun(level, Di, Dj, hpa)
level: The level; this enters via the correlation scales
Di: First set of points
Dj: Second set of points
hpa: Hyperparameter object
This is essentially a convenient wrapper for function corr.matrix(). It is not really intended for the end user.
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
D2 <- D1.toy[subsets.toy[[2]],]
D3 <- D1.toy[subsets.toy[[3]],]
Afun(1, D2, D3, hpa.toy)
Afun(2, D2, D3, hpa.toy)
Given a level one design matrix and a subsets object, convert it into a list of design matrices, each of which is the design matrix for its own level.
as.sublist(D1, subsets)
D1: Design matrix for the level one code
subsets: Subsets object
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
as.sublist(D1=D1.toy, subsets=subsets.toy)
A working example of a basis function
basis.toy(x)
x: Point in parameter space
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
basis.toy(D1.toy)
Returns the estimator for beta (equation 5). Function betahat.app() returns the estimate in terms of fundamental variables; betahat.app.H() requires the H matrix.
betahat.app.H(H, V = NULL, Vinv = NULL, z)
betahat.app(D1, subsets, basis, hpa, z, use.Vinv=TRUE)
H: In function betahat.app.H(), the matrix of basis functions
V: Variance matrix
Vinv: Inverse of the variance matrix. If not supplied, it is calculated
use.Vinv: In function betahat.app(), Boolean indicating whether to compute and use the inverse of the variance matrix
z: Vector of observations
D1: Design matrix for level 1 code
subsets: Subsets object
basis: Basis function
hpa: Hyperparameter object
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
betahat.app(D1=D1.toy, subsets=subsets.toy, basis=basis.toy,
            hpa=hpa.toy, z=z.toy, use.Vinv=TRUE)
H <- H.fun.app(D1=D1.toy, subsets=subsets.toy, basis=basis.toy, hpa=hpa.toy)
V <- V.fun.app(D1=D1.toy, subsets=subsets.toy, hpa=hpa.toy)
betahat.app.H(H=H, V=V, z=z.toy)
Correlation matrices between (sets of) points in parameter space, both prior (c_fun()) and posterior (cdash.fun()).
c_fun(x, xdash=x, subsets, hpa)
cdash.fun(x, xdash=x, V=NULL, Vinv=NULL, D1, subsets, basis, hpa, method=2)
x, xdash: Points in parameter space; or, if a matrix, the rows are interpreted as points in parameter space. Note that the default value of xdash is x
D1: Design matrix
subsets: Subsets object
hpa: Hyperparameter object
basis: Basis function
V, Vinv: In function cdash.fun(), the variance matrix and its inverse; if NULL, they are calculated as needed
method: Integer specifying which of several algebraically identical methods to use. See the source code for details; the default, 2, seems to be the best. Bear in mind that option 3 does not require inversion of a matrix, but is not faster in practice
Returns a matrix of covariances
Do not confuse function c_fun(), which computes the covariance function defined just below equation 7 on page 4, with the covariance defined in equation 3 on page 3. Consider the example given for two levels on page 4, just after equation 7: the quantity computed by c_fun() is a kind of prior covariance matrix, while the quantity computed by cdash.fun() is a posterior covariance matrix, conditional on the code observations. Function Afun() evaluates the required correlation matrices in a nicely vectorized way.
Equation 7 of KOH2000 contains a typo.
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13 (KOH2000)
data(toyapps)
x <- latin.hypercube(4,3)
rownames(x) <- c("ash", "elm", "oak", "pine")
xdash <- latin.hypercube(7,3)
rownames(xdash) <- c("cod","bream","skate","sole","eel","crab","squid")
cdash.fun(x=x, xdash=xdash, D1=D1.toy, basis=basis.toy,
          subsets=subsets.toy, hpa=hpa.toy)
# Now add a point whose top-level value is known:
x <- rbind(x, D1.toy[subsets.toy[[4]][1],])
cdash.fun(x=x, xdash=xdash, D1=D1.toy, basis=basis.toy,
          subsets=subsets.toy, hpa=hpa.toy)
# Observe how the bottom row is zero (up to rounding error)
Generates toy observations on four levels using either internal (unknown) parameters and hyperparameters, or user-supplied versions.
generate.toy.observations(D1, subsets, basis.fun, hpa = NULL, betas = NULL,
                          export.truth = FALSE)
D1: Design matrix for level 1 code
subsets: Subsets object
basis.fun: Basis function
hpa: Hyperparameter object. If NULL, internal (unknown) values are used
betas: Regression coefficients. If NULL, internal (unknown) values are used
export.truth: Boolean, with default FALSE meaning to return the simulated observations; TRUE means to return the internal hyperparameters and coefficients instead
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
generate.toy.observations(D1=D1.toy, subsets=subsets.toy, basis.fun=basis.toy)
Genie datasets that illustrate the package.
The genie example is a case with three levels.
The D1.genie matrix has 36 rows of code run points, corresponding to the observations of the level 1 code. It has four columns, one per parameter.
hpa.genie is a hyperparameter object.
subsets.genie is a list of three elements. Element i corresponds to the rows of D1.genie at which level i has been observed.
z.genie is a three element list. Each element is a vector; element i corresponds to observations of level i. The lengths will match those of subsets.genie.
Function basis.genie() is a suitable basis function. Function hpa.fun.genie() creates a hyperparameter object in a form suitable for passing to the other functions in the library.
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(genie)
z.genie
jj <- list(trace=100, maxit=10)
hpa.genie.level1 <- opt.1(D=D1.genie, z=z.genie, basis=basis.genie,
                          subsets=subsets.genie, hpa.start=hpa.genie.start,
                          control=jj)
hpa.genie.level2 <- opt.gt.1(level=2, D=D1.genie, z=z.genie, basis=basis.genie,
                             subsets=subsets.genie, hpa.start=hpa.genie.level1,
                             control=jj)
hpa.genie.level3 <- opt.gt.1(level=3, D=D1.genie, z=z.genie, basis=basis.genie,
                             subsets=subsets.genie, hpa.start=hpa.genie.level2,
                             control=jj)
Returns the matrix of bases, H. The “app” in the function name stands for “approximator”, distinguishing it from function H.fun() of the calibrator package.
H.fun.app(D1, subsets, basis, hpa)
D1: Design matrix for level 1 code
subsets: Subsets object
basis: Basis function
hpa: Hyperparameter object
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
H.fun.app(D1.toy, subsets=subsets.toy, basis=basis.toy, hpa=hpa.toy)
Returns the vector of basis functions defined at the top of page 6.
hdash.fun(x, hpa, basis)
x: Point in question
hpa: Hyperparameter object
basis: Basis functions
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
hdash.fun(x=1:3, hpa=hpa.toy, basis=basis.toy)
uu <- rbind(1:3, 1, 1:3, 3:1)
rownames(uu) <- paste("uu", 1:4, sep="_")
hdash.fun(x=uu, hpa=hpa.toy, basis=basis.toy)
Creates a hyperparameter object from a vector of length 19. Intended as a toy example to be modified for real-world cases.
hpa.fun.toy(x)
x: Vector of length 19 that specifies the correlation scales
Elements 1-4 of x specify the sigmas for each of the four levels in the toy example. Elements 5-7 specify the correlation scales for level 1, elements 8-10 the scales for level 2, and so on.
Internal function pdm.maker() shows how the B matrices are obtained from the various elements of the input argument x. Note how, in this simple example, the B matrices are diagonal; generalizing to non-diagonal matrices should be straightforward (provided you can guarantee that they remain positive definite).
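The layout described above can be sketched in R as follows. This is only an illustration of the documented element positions (the placement of the rhos in elements 17-19 is inferred from the counts of four sigmas, four sets of scales, and three rhos), not the package's own code:

```r
x <- 1:19
sigmas <- x[1:4]              # elements 1-4:   one sigma per level
scales <- list(x[5:7],        # elements 5-7:   correlation scales, level 1
               x[8:10],       # elements 8-10:  level 2
               x[11:13],      # elements 11-13: level 3
               x[14:16])      # elements 14-16: level 4
rhos <- x[17:19]              # remaining three elements: rhos for levels 1-3
B <- lapply(scales, diag)     # diagonal B matrices, as built by pdm.maker()
```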
sigmas: The four sigmas corresponding to the four levels
B: The four B matrices corresponding to the four levels
rhos: The three (sic) matrices corresponding to levels 1-3
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
hpa.fun.toy(1:19)
Checks observational data for consistency with a subsets object: the lengths of the corresponding vectors should match.
is.consistent(subsets, z)
subsets: A subsets object
z: Data
Returns TRUE or FALSE according to whether z is consistent with subsets.
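The check amounts to comparing lengths. A hand-rolled sketch of the same idea, as an illustration of what consistency means rather than the package's actual implementation:

```r
# z and subsets are parallel lists: z is consistent with subsets when each
# observation vector has the same length as the corresponding subset
consistent_sketch <- function(subsets, z) {
  length(subsets) == length(z) &&
    all(sapply(subsets, length) == sapply(z, length))
}
consistent_sketch(list(1:5, 1:3), list(rnorm(5), rnorm(3)))  # TRUE
```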
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
stopifnot(is.consistent(subsets.toy, z.toy))
z.toy[[4]] <- 1:6
is.consistent(subsets.toy, z.toy)
Returns the mean of the Gaussian process conditional on the observations and the hyperparameters
mdash.fun(x, D1, subsets, hpa, Vinv = NULL, use.Vinv = TRUE, z, basis)
x: Point at which the mean is desired
D1: Design matrix for the level 1 code
subsets: Subsets object
hpa: Hyperparameter object
Vinv: Inverse of the variance matrix; if NULL, it is calculated
use.Vinv: Boolean, with default TRUE meaning to use the inverse of the variance matrix
z: Observations
basis: Basis functions
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
mdash.fun(x=1:3, D1=D1.toy, subsets=subsets.toy, hpa=hpa.toy,
          z=z.toy, basis=basis.toy)
uu <- rbind(1:3, 1, 3:1, 1:3)
rownames(uu) <- c("first","second","third","fourth")
mdash.fun(x=uu, D1=D1.toy, subsets=subsets.toy, hpa=hpa.toy,
          z=z.toy, basis=basis.toy)
Returns the likelihood of a set of hyperparameters given the data. Functions opt.1() and opt.gt.1() find hyperparameters that maximize the relevant likelihood for level 1 and for higher levels respectively. Function object() returns the expression given by equation 9 of KOH2000, which is minimized by opt.1() and opt.gt.1().
object(level, D, z, basis, subsets, hpa)
opt.1(D, z, basis, subsets, hpa.start, give.answers=FALSE, ...)
opt.gt.1(level, D, z, basis, subsets, hpa.start, give.answers=FALSE, ...)
level: The level
D: Design matrix for the top-level code
z: Data
basis: Basis function
subsets: Subsets object
hpa: Hyperparameter object
hpa.start: Starting value for the hyperparameter object
give.answers: Boolean, with default FALSE meaning to return just the optimized hyperparameter object; TRUE means to return further details of the optimization
...: Extra arguments passed to the optimization routine
This is the objective function used in optimizers such as optimal.hpa().
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
object(level=4, D=D1.toy, z=z.toy, basis=basis.toy,
       subsets=subsets.toy, hpa=hpa.fun.toy(1:19))
object(level=4, D=D1.toy, z=z.toy, basis=basis.toy,
       subsets=subsets.toy, hpa=hpa.fun.toy(3+(1:19)))
# Now a little example of finding optimal hyperparameters in the toy case
# (a bigger example is given on the genie help page)
jj <- list(trace=100, maxit=10)
hpa.toy.level1 <- opt.1(D=D1.toy, z=z.toy, basis=basis.toy,
                        subsets=subsets.toy, hpa.start=hpa.toy, control=jj)
hpa.toy.level2 <- opt.gt.1(level=2, D=D1.toy, z=z.toy, basis=basis.toy,
                           subsets=subsets.toy, hpa.start=hpa.toy.level1,
                           control=jj)
hpa.toy.level3 <- opt.gt.1(level=3, D=D1.toy, z=z.toy, basis=basis.toy,
                           subsets=subsets.toy, hpa.start=hpa.toy.level2,
                           control=jj)
hpa.toy.level4 <- opt.gt.1(level=4, D=D1.toy, z=z.toy, basis=basis.toy,
                           subsets=subsets.toy, hpa.start=hpa.toy.level3,
                           control=jj)
Evaluates Kennedy's product
Pi(hpa, i, j)
hpa: Hyperparameter object
i: Subscript
j: Superscript
This function evaluates Kennedy's product, with the additional feature that the product is taken to be 1 when it is empty. This seems to work in practice.
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
Pi(hpa.toy, 1, 2)
Pi(hpa.toy, 2, 2)
Pi(hpa.toy, 3, 2)
Pi(hpa.toy, 4, 2)
Given an integer vector whose i-th element is the number of runs at level i, return a subsets object in echelon form.
subset_maker(x)
x: A vector of integers
In this context, the returned object being in “echelon form” means that it is consistent in the sense of passing is.consistent(): for each i, element i is 1:n for some n.
A list object suitable for use as a subsets object.
Robin K. S. Hankin
is.consistent(), is.nested(), is.strict()
subset_maker(c(10,4,3))
is.nested(subset_maker(c(4,9,6))) # should be FALSE
is.nested(subset_maker(c(9,6,4))) # should be TRUE
Create a list of subsets (subsets.fun()); or, given a list of subsets, test for correct inclusion (is.nested()) or strict inclusion (is.strict()).
is.nested(subsets)
is.strict(subsets)
subsets.fun(n, levels = 4, prob = 0.7)
subsets: In is.nested() and is.strict(), a subsets object to be tested
n: Number of observations in the lowest level (that is, level 1, the fastest code)
levels: Number of levels
prob: Probability of an observation at one level being retained at the next level up
Robin K. S. Hankin (subsets.fun()); Peter Dalgaard (via R-help)
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
is.nested(subsets.fun(20)) # should be TRUE
data(toyapps)
stopifnot(is.nested(subsets.toy))
Returns generalized distances from a point to the design matrix as per equation 10
tee.fun(x, D1, subsets, hpa)
x: Point in parameter space
D1: Design matrix for level 1 code
subsets: Subsets object
hpa: Hyperparameter object
See equation 10
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
tee.fun(x=1:3, D1=D1.toy, subsets=subsets.toy, hpa=hpa.toy)
Toy datasets that illustrate the package.
data(toyapps)
The toy example is a case with four levels.
The D1.toy matrix has 20 rows of code run points, corresponding to the observations of the level 1 code. It has three columns, one per parameter.
hpa.toy is a hyperparameter object. It is a list of three elements: sigmas, B, and rhos.
subsets.toy is a list of four elements. Element i corresponds to the rows of D1.toy at which level i has been observed.
z.toy is a four element list. Each element is a vector; element i corresponds to observations of level i. The lengths will match those of subsets.toy.
betas.toy is a matrix of coefficients.
Brief description of toy functions fully documented under their own manpage
Function generate.toy.observations() creates new toy datasets with any number of observations and code runs. Function basis.toy() is an example of a basis function. Function hpa.fun.toy() creates a hyperparameter object such as hpa.toy in a form suitable for passing to the other functions in the library.
See the helpfiles listed in the “see also” section below
All toy datasets are documented here. There are also several toy functions that are needed for a toy problem; these are documented separately (they are too diverse to document fully in a single manpage). Nevertheless a terse summary for each toy function is provided on this page. All toy functions in the package are listed under “See Also”.
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
is.consistent(subsets.toy, z.toy)
generate.toy.observations(D1.toy, subsets.toy, basis.toy, hpa.toy, betas.toy)
Given a design matrix, a subsets object, and a hyperparameter object, return the variance matrix. The “app” in the function name stands for “approximator”, distinguishing it from function V.fun() of the calibrator package.
V.fun.app(D1, subsets, hpa)
D1: Design matrix for level 1 code
subsets: Subsets object
hpa: Hyperparameter object
Robin K. S. Hankin
M. C. Kennedy and A. O'Hagan 2000. “Predicting the output from a complex computer code when fast approximations are available” Biometrika, 87(1): pp1-13
data(toyapps)
V.fun.app(D1.toy, subsets.toy, hpa.toy)