ForK Library

# krigingfuncpredictGEK.f90 File Reference

Subroutine to predict mean function values from a Gradient-Enhanced Kriging surface (requires buildkrigingGEK to be called first) More...


## Functions/Subroutines

subroutine krigingfuncpredictGEK (ndim, ntot, X, gtot, pts, dims, stot, H, beta, V, hyper, mtot, Xm, Hm, Ym, covarflagi)
Predicts function values from a Gradient-Enhanced Kriging model (built using buildkrigingGEK). More...

## Detailed Description

Subroutine to predict mean function values from a Gradient-Enhanced Kriging surface (requires buildkrigingGEK to be called first)

Definition in file krigingfuncpredictGEK.f90.

## Function Documentation

 subroutine krigingfuncpredictGEK (
     integer, intent(in) ndim,
     integer, intent(in) ntot,
     real(8), dimension(ndim,ntot), intent(in) X,
     integer, intent(in) gtot,
     integer, dimension(gtot), intent(in) pts,
     integer, dimension(gtot), intent(in) dims,
     integer, intent(in) stot,
     real(8), dimension(stot,ntot+gtot), intent(in) H,
     real(8), dimension(stot), intent(in) beta,
     real(8), dimension(ntot+gtot), intent(in) V,
     real(8), dimension(ndim+3), intent(in) hyper,
     integer, intent(in) mtot,
     real(8), dimension(ndim,mtot), intent(in) Xm,
     real(8), dimension(stot,mtot), intent(in) Hm,
     real(8), dimension(mtot), intent(out) Ym,
     integer, intent(in) covarflagi )

This subroutine predicts function values from a Gradient-Enhanced Kriging model (built using buildkrigingGEK). The details of predicting function values from a Kriging model are given in the krigingfuncpredict documentation. To make predictions from a gradient-enhanced model, the covariance matrix and training data must be redefined. Using a regression mean function, the mean function prediction at a test point is given as:

$y(\vec{x}_{*}) | \vec{X},Y,m(x) = h^{T}(\vec{x}_{*}) \beta + \underline{k}_*^T \underline{K}^{-1} (\underline{Y}-\underline{H}^{T} \beta)$

where the covariance matrix between test and training points has been redefined to include the correlation between derivative values. Additionally, the collocation matrix and training data are extended to include derivative values; the details of these extensions are found in buildkrigingGEK. The covariance matrix for training data including derivative values is given as:

$\underline{K} = \left[ \begin{array}{cc} cov(Y,Y) & cov(Y,\delta Y) \\ cov(\delta Y,Y) & cov(\delta Y,\delta Y) \end{array} \right]$

where $$cov(Y,Y)$$ is the covariance matrix between function values (the same covariance matrix used in a function-only model), $$cov(\delta Y, Y)$$ is a rectangular matrix whose elements represent the covariance between the function values and the derivative values at the training points, and $$cov(\delta Y, \delta Y)$$ represents the covariance between derivative observations. The details of this matrix are also found in buildkrigingGEK. Finally, the covariance between the test points and the training points must be extended to include derivatives (denoted as $$\underline{k}_*^T$$). To include derivatives, this covariance is a rectangular block matrix defined as:

$\underline{k}_*^T = \left[ \begin{array}{cc} cov(y^{*},Y) & cov(y^{*},\delta Y) \end{array} \right]$

where $$cov(y^{*}, Y)$$ is a matrix (size=[mtot x ntot], where mtot is the number of test points and ntot is the number of training points) whose elements represent the covariance between test function values and training function values. The elements of the matrix $$cov(y^{*}, \delta Y)$$ (size=[mtot x gtot], where gtot is the total number of derivative values in the training set) represent the covariance between test function values and training derivative values. The subroutine buildkrigingGEK produces a vector of processed data defined as:
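As an illustration of the block structure above (a minimal sketch, not ForK's internal routines), the matrix $$\underline{K}$$ can be assembled in one dimension with a Matern $$\nu=3/2$$ covariance, assuming a derivative value at every training point so that gtot = ntot; the hyperparameter names sigma2 and theta are assumptions for the sketch:

```python
import numpy as np

def k_ff(x1, x2, sigma2=1.0, theta=1.0):
    """cov(y(x1), y(x2)) for the Matern nu=3/2 covariance."""
    a = np.sqrt(3.0) / theta
    r = np.abs(x1 - x2)
    return sigma2 * (1.0 + a * r) * np.exp(-a * r)

def k_fd(x1, x2, sigma2=1.0, theta=1.0):
    """cov(y(x1), dy/dx(x2)): derivative w.r.t. the second argument."""
    d = x1 - x2
    a = np.sqrt(3.0) / theta
    return sigma2 * a**2 * d * np.exp(-a * np.abs(d))

def k_dd(x1, x2, sigma2=1.0, theta=1.0):
    """cov(dy/dx(x1), dy/dx(x2)): derivative w.r.t. both arguments."""
    d = x1 - x2
    a = np.sqrt(3.0) / theta
    return sigma2 * a**2 * (1.0 - a * np.abs(d)) * np.exp(-a * np.abs(d))

def build_K(x):
    """Assemble the (ntot+gtot) x (ntot+gtot) GEK covariance matrix
    K = [[cov(Y,Y), cov(Y,dY)], [cov(dY,Y), cov(dY,dY)]]."""
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    Kff = k_ff(X1, X2)
    Kfd = k_fd(X1, X2)          # cov(Y, dY); cov(dY, Y) is its transpose
    Kdd = k_dd(X1, X2)
    return np.block([[Kff, Kfd], [Kfd.T, Kdd]])

x = np.array([0.0, 0.5, 1.2])
K = build_K(x)                  # 6 x 6: symmetric, positive definite
```

Because the joint covariance of function and derivative values of a once-differentiable Gaussian process is a valid covariance, the assembled matrix is symmetric positive definite for distinct training points.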

$\underline{V} = \underline{K}^{-1} \left( \underline{Y} - \underline{H}^{T} \beta \right)$

Using this definition, function predictions can be calculated inexpensively using the formula:

$y(\vec{x}_{*}) | \vec{X},Y,m(x) = h^{T}(\vec{x}_{*}) \beta + \underline{k}_*^T \underline{V}$
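The prediction step above can be sketched end to end in one dimension (a hedged illustration, not ForK's actual interface): a Matern $$\nu=3/2$$ covariance, a constant regression mean (stot = 1), and a derivative value at every training point. All names and the hyperparameters sigma2 = theta = 1 are assumptions for the sketch.

```python
import numpy as np

def matern32(d):        # cov(y, y')
    a = np.sqrt(3.0)
    return (1.0 + a * np.abs(d)) * np.exp(-a * np.abs(d))

def matern32_d2(d):     # derivative w.r.t. the second argument
    a = np.sqrt(3.0)
    return a**2 * d * np.exp(-a * np.abs(d))

def matern32_dd(d):     # derivative w.r.t. both arguments
    a = np.sqrt(3.0)
    return a**2 * (1.0 - a * np.abs(d)) * np.exp(-a * np.abs(d))

# training data: f(x) = sin(x) with exact derivatives at every point
x = np.array([0.0, 0.7, 1.5, 2.3])
y, dy = np.sin(x), np.cos(x)

D = x[:, None] - x[None, :]
K = np.block([[matern32(D),      matern32_d2(D)],
              [matern32_d2(D).T, matern32_dd(D)]])

# extended data and collocation matrix for a constant mean h(x) = 1
# (the derivative of a constant basis is zero)
Y_ul = np.concatenate([y, dy])
H_ul = np.concatenate([np.ones_like(x), np.zeros_like(x)])[None, :]

# generalized least-squares estimate of beta, then the processed data V
beta = np.linalg.solve(H_ul @ np.linalg.solve(K, H_ul.T),
                       H_ul @ np.linalg.solve(K, Y_ul))
V = np.linalg.solve(K, Y_ul - H_ul.T @ beta)

def predict(xm):
    """y* = h(x*)^T beta + k_*^T V, with k_* = [cov(y*,Y) cov(y*,dY)]."""
    Dm = xm[:, None] - x[None, :]
    k_star = np.hstack([matern32(Dm), matern32_d2(Dm)])
    return beta[0] + k_star @ V

ym = predict(np.array([0.35, 1.0]))
```

Because V is precomputed, each additional test point costs only one dot product with $$\underline{k}_*$$, which is why mtot can equally well be one point per call or a whole batch. With no noise term, the predictor interpolates the training function values exactly.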

Because the covariance matrix for the training data requires the second derivative of the covariance function, a Matern function with $$\nu \geq 3/2$$ must be used when constructing and making predictions from the Gradient-Enhanced Kriging model.
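For example, in one dimension the Matern $$\nu=3/2$$ covariance and the derivatives needed for the gradient-enhanced blocks are (writing $$d = x - x'$$, with process variance $$\sigma^{2}$$ and length scale $$\theta$$; these expressions follow from differentiating the covariance and are not quoted from the ForK source):

$k(d) = \sigma^{2} \left( 1 + \frac{\sqrt{3}|d|}{\theta} \right) e^{-\sqrt{3}|d|/\theta}$

$\frac{\partial k}{\partial x'} = \frac{3\sigma^{2}}{\theta^{2}} \, d \, e^{-\sqrt{3}|d|/\theta}, \qquad \frac{\partial^{2} k}{\partial x \, \partial x'} = \frac{3\sigma^{2}}{\theta^{2}} \left( 1 - \frac{\sqrt{3}|d|}{\theta} \right) e^{-\sqrt{3}|d|/\theta}$

The first derivative populates $$cov(Y,\delta Y)$$ and $$cov(y^{*},\delta Y)$$, and the second cross derivative populates $$cov(\delta Y,\delta Y)$$; for $$\nu = 1/2$$ this second derivative does not exist at $$d = 0$$, which is why the smoother Matern functions are required.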

Date:
May 2, 2012
Parameters:
- in ndim : The dimension of the problem
- in ntot : The number of training points
- in X : The location of the training points (size=[ndim x ntot])
- in gtot : Number of derivative values included in the training data (ndim*ntot if all derivatives are included at every training point)
- in pts : List identifying the point at which each derivative value is enforced (size=[gtot], with values ranging from 1 to ntot)
- in dims : List identifying the dimension each derivative is taken with respect to (size=[gtot], with values ranging from 1 to ndim)
- in stot : Number of terms in the regression
- in H : The collocation matrix for the regression, including derivative values (size=[stot x ntot+gtot]). Columns 1:ntot are the basis evaluated at the training points; columns ntot+1:ntot+gtot are the derivative of the basis evaluated at the training points.
- in beta : Regression coefficients based on the optimal estimate for the Kriging model (size=[stot]); supplied by the buildkrigingGEK subroutine
- in V : Processed training data (size=[ntot+gtot]); supplied by the buildkrigingGEK subroutine
- in hyper : Hyperparameters for the Kriging model (size=[ndim+3]); supplied by the buildkrigingGEK subroutine
- in mtot : The number of test points, where function predictions are desired
- in Xm : The location of the test points (size=[ndim x mtot])
- in Hm : The collocation matrix evaluated at the test points; the derivative of the basis is NOT required at the test points to predict function values (size=[stot x mtot])
- out Ym : The predicted function values (size=[mtot]). Using the processed data V, prediction is essentially linear in the number of test points, so mtot can be set to one and this subroutine called multiple times; often the function values at a whole set of test points are required, hence the ability to make the predictions in a single call.
- in covarflagi : Flag governing which covariance function is used: covarflag==1 uses the Matern function with $$\nu=3/2$$; covarflag==2 uses the Matern function with $$\nu=5/2$$. The parameter $$\nu$$ governs the smoothness and differentiability of the covariance function; when using gradient values, $$\nu=1/2$$ is not differentiable enough, so $$\nu \geq 3/2$$ must be used. Must supply the same covariance flag as used in buildkrigingGEK.

Definition at line 78 of file krigingfuncpredictGEK.f90.

References covars::covarflag.