ForK Library

Subroutine to Build Kriging Surface based on Function and Gradient Values. More...
Go to the source code of this file.
Functions/Subroutines  
subroutine  buildkrigingGEK (ndim, ntot, X, Y, gtot, pts, dims, dY, stot, H, beta, V, hyper, hyperflag, optflagi, covarflagi) 
This subroutine creates a gradient-enhanced Kriging model based on Function and Gradient values supplied at the Training points. More...
Subroutine to Build Kriging Surface based on Function and Gradient Values.
Definition in file buildkrigingGEK.f90.
subroutine buildkrigingGEK  (  integer,intent(in)  ndim, 
integer,intent(in)  ntot,  
real(8),dimension(ndim,ntot),intent(in)  X,  
real(8),dimension(ntot),intent(in)  Y,  
integer,intent(in)  gtot,  
integer,dimension(gtot),intent(in)  pts,  
integer,dimension(gtot),intent(in)  dims,  
real(8),dimension(gtot),intent(in)  dY,  
integer,intent(in)  stot,  
real(8),dimension(stot,ntot+gtot),intent(in)  H,  
real(8),dimension(stot),intent(out)  Beta,  
real(8),dimension(ntot+gtot),intent(out)  V,  
real(8),dimension(ndim+3),intent(out)  hyper,  
integer,intent(in)  hyperflag,  
integer,intent(in)  optflagi,  
integer,intent(in)  covarflagi  
) 
This subroutine creates a gradient-enhanced Kriging model based on Function and Gradient values supplied at the Training points. The process of creating a gradient-enhanced Kriging model is identical to that of creating a function-only model (described in buildkriging), so the details of creating the Kriging model are not repeated here. To create a gradient-enhanced model, the definitions of the training data, covariance matrix, and collocation matrix must be extended to include the derivative values. Using these modified definitions, the procedure in buildkriging can be used to create the Kriging model. First, the vector of training data is redefined as a block vector containing function and derivative observations, given as:
\[ \underline{Y} = \left[\begin{array}{c} Y \\ \delta Y \end{array} \right] \]
where the vector \(Y\) contains the function observations at the training points and the vector \(\delta Y\) contains the derivative observations for the training points. Based on training data in this form, the covariance matrix must also be extended to include the correlation between derivative values. The block definition of this covariance matrix is given as:
\[ \underline{K} = \left[ \begin{array}{cc} cov(Y,Y) & cov(Y,\delta Y) \\ cov(\delta Y,Y) & cov(\delta Y,\delta Y) \end{array} \right] \]
where \(cov(Y,Y)\) is the covariance matrix between function values (the same as the covariance matrix used in a function-only model), \(cov(\delta Y, Y) \) is a rectangular matrix whose elements represent the covariance between the function values and derivative values at the training points, and \(cov(\delta Y, \delta Y) \) represents the covariance between derivative observations.
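The block structure of \( \underline{K} \) can be sketched in a few lines. The snippet below is an illustration only, not ForK code: it uses a 1-D squared-exponential kernel with unit hyperparameters purely for brevity (ForK itself computes its covariances in module covars, and restricts gradient-enhanced models to Matérn kernels, as noted below).

```python
import math

def k(x, xp):       # cov(y, y'): squared-exponential kernel (assumed here)
    return math.exp(-0.5 * (x - xp) ** 2)

def dk(x, xp):      # cov(dy/dx, y') = d/dx k(x, x')
    return -(x - xp) * k(x, xp)

def d2k(x, xp):     # cov(dy/dx, dy'/dx') = d^2/(dx dx') k(x, x')
    return (1.0 - (x - xp) ** 2) * k(x, xp)

xs = [0.0, 1.0]     # two 1-D training points, with a derivative at each
n = len(xs)

# Assemble the (n+n) x (n+n) block covariance matrix.
K = [[0.0] * (2 * n) for _ in range(2 * n)]
for a in range(n):
    for b in range(n):
        K[a][b]         = k(xs[a], xs[b])      # cov(Y, Y) block
        K[a][n + b]     = dk(xs[b], xs[a])     # cov(Y, dY) block
        K[n + a][b]     = dk(xs[a], xs[b])     # cov(dY, Y) block
        K[n + a][n + b] = d2k(xs[a], xs[b])    # cov(dY, dY) block

# The assembled matrix is symmetric, as a covariance matrix must be.
assert all(abs(K[i][j] - K[j][i]) < 1e-12
           for i in range(2 * n) for j in range(2 * n))
```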
For a function value at point \( \vec{x}' \) and a derivative value with respect to \(x_{k}\) evaluated at point \(\vec{x}\), the covariance is found by differentiation of the function describing the covariance between function values. The expression for this covariance between function and derivative values is given as:
\[ cov(\frac{\partial y}{\partial x_{k}},y')=\frac{\partial}{\partial x_{k}} k(\vec{x},\vec{x}') \]
For the covariance between a derivative value with respect to \(x_{k}\) evaluated at point \(\vec{x}\) and a derivative value with respect to \(x_{l}\) evaluated at point \(\vec{x}'\), the covariance is found by differentiation of the above expression:
\[ cov(\frac{\partial y}{\partial x_{k}},\frac{\partial y'}{\partial x'_{l}})=\frac{\partial^2}{\partial x_{k}\partial x'_{l}} k(\vec{x},\vec{x}') \]
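These derivative covariances can be verified numerically. The sketch below is Python rather than Fortran, with a 1-D Matérn 3/2 kernel and unit variance and length scale assumed for illustration (ForK's own implementations live in module covars); it compares the analytic derivatives against central finite differences.

```python
import math

A = math.sqrt(3.0)  # sqrt(3)/theta, with length scale theta = 1 (assumed)

def k(x, xp):
    """Matern 3/2 covariance between two function values (unit variance)."""
    r = abs(x - xp)
    return (1.0 + A * r) * math.exp(-A * r)

def dk_dx(x, xp):
    """cov(dy/dx, y') = d/dx k(x, x')."""
    d = x - xp
    return -A * A * d * math.exp(-A * abs(d))

def d2k_dxdxp(x, xp):
    """cov(dy/dx, dy'/dx') = d^2/(dx dx') k(x, x')."""
    r = abs(x - xp)
    return A * A * (1.0 - A * r) * math.exp(-A * r)

# Finite-difference checks at a generic pair of points.
x, xp, h = 0.7, 0.2, 1e-6
fd1 = (k(x + h, xp) - k(x - h, xp)) / (2 * h)
fd2 = (dk_dx(x, xp + h) - dk_dx(x, xp - h)) / (2 * h)
assert abs(dk_dx(x, xp) - fd1) < 1e-6
assert abs(d2k_dxdxp(x, xp) - fd2) < 1e-6
```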
Hence, to construct a gradient-enhanced Kriging model, the covariance function must be twice differentiable. This limits the choice of covariance function in this work to the Matern function with \(\nu \geq 3/2\). These covariance functions and their derivatives are computed using the functions in module covars. Finally, the collocation matrix must be extended to include the derivatives of the basis functions. The block collocation matrix can be written as:
\[ \underline{H} = \left[\begin{array}{c} H \\ G \end{array} \right] \]
where \(H\) represents the function collocation matrix and \(G\) represents the derivative collocation matrix. The elements of the function collocation matrix are defined as the basis functions for the regression evaluated at the training points. The elements of the derivative collocation matrix are defined as the derivatives of the basis functions for the regression evaluated at the training points. Using these definitions, the mean function values at the training points are defined as:
\[ \bar{Y} = H \beta\]
while the mean derivative values are given as:
\[ \delta \bar{Y} = G \beta\]
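For concreteness, a hypothetical 1-D example with the linear basis \( h(x) = [1, x] \) (the basis and coefficient values here are assumptions for illustration, not ForK defaults) shows the two collocation blocks and the resulting mean function and derivative values:

```python
# Hypothetical 1-D example: basis h(x) = [1, x], so dh/dx = [0, 1].
xs = [0.0, 0.5, 1.0]          # three training points
beta = [2.0, -1.0]            # regression coefficients (illustrative)

H = [[1.0, x] for x in xs]    # function collocation rows: h(x_i)
G = [[0.0, 1.0] for _ in xs]  # derivative collocation rows: dh/dx at x_i

def matvec(M, v):
    return [sum(m * w for m, w in zip(row, v)) for row in M]

Ybar  = matvec(H, beta)       # mean function values  H beta
dYbar = matvec(G, beta)       # mean derivative values G beta
```

Note that the H argument of buildkrigingGEK stores these blocks transposed, as a single array of size=[stot x ntot+gtot].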
For a gradient-enhanced model, the size of the covariance matrix grows as the dimension of the space increases (size=[(ndim+1)*ntot x (ndim+1)*ntot] if all components of the gradient are included at every point); hence, the expense of constructing the gradient-enhanced Kriging model grows quickly with dimension. LAPACK has been utilized extensively to help alleviate this cost in high dimensions, but it is still likely too slow to construct a gradient-enhanced model in more than \( \approx 20 \) dimensions. The derivative values themselves are input to the subroutine as a type of sparse matrix: three vectors (pts, dims, and dY) supply the derivative information. The elements of pts give the index of the training point at which each derivative value is evaluated, the elements of dims give the component of the gradient that each derivative value represents, and dY contains the derivative values themselves. This data structure allows only certain components of the gradient to be enforced at different points; it also allows only function values to be supplied at certain points. The final processed data produced by this subroutine is defined as:
\[ \underline{V} = \underline{K}^{-1} \left( \underline{Y} - \underline{H}^{T} \beta \right) \]
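The sparse storage of derivative values described above can be made concrete with a small hypothetical case (ndim = 2, ntot = 3; all values below are illustrative): the full gradient is supplied at point 1, only the \(x_2\)-derivative at point 3, and no derivatives at point 2.

```python
ndim, ntot = 2, 3

pts  = [1, 1, 3]          # training point each derivative is enforced at (1..ntot)
dims = [1, 2, 2]          # gradient component each value represents (1..ndim)
dY   = [0.4, -1.2, 0.7]   # the derivative values themselves (illustrative)
gtot = len(dY)            # = 3, not ndim*ntot = 6: partial gradients are allowed

# Which (point, dimension) pairs carry derivative information:
supplied = {(p, d) for p, d in zip(pts, dims)}
assert (1, 1) in supplied and (1, 2) in supplied      # full gradient at point 1
assert (3, 2) in supplied and (3, 1) not in supplied  # only d/dx2 at point 3
assert all(p != 2 for p in pts)                       # point 2: function value only
```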
[in] ndim: The dimension of the problem
[in] ntot: The number of training points (and the number of function values)
[in] X: The location of the training points (size=[ndim x ntot])
[in] Y: Function values for the training points (size=[ntot])
[in] gtot: Number of derivative values included in the training data (ndim*ntot if all derivatives are included at all training points)
[in] pts: List identifying the point at which each derivative value is enforced (size=[gtot], with values ranging from 1 to ntot)
[in] dims: List identifying the dimension each derivative is taken with respect to (size=[gtot], with values ranging from 1 to ndim)
[in] dY: Derivative values (size=[gtot])
[in] stot: Number of terms in the regression
[in] H: The collocation matrix for the regression, including derivative values (size=[stot x ntot+gtot]). Columns 1:ntot are the basis functions evaluated at the training points; columns ntot+1:ntot+gtot are the derivatives of the basis functions evaluated at the training points
[out] beta: Regression coefficients based on the optimal estimate for the Kriging model (size=[stot])
[out] hyper: Hyperparameters for the Kriging model (size=[ndim+3])
[out] V: Processed training data (size=[ntot+gtot]). To make predictions from the Kriging model, only the product of the inverse covariance matrix with the residual of the training data from the regression is needed; to avoid recomputing and inverting the covariance matrix, this product is stored explicitly
[in] hyperflag: Flag governing how the hyperparameters are selected
[in] optflagi: Flag governing the optimization algorithm used to determine the hyperparameters
[in] covarflagi: Flag governing which covariance function is used
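As described for the V argument above, predictions at a new point \( \vec{x}_{*} \) then reduce to inner products with the stored data. This is the standard universal-Kriging posterior mean; the notation here is an illustration, not taken verbatim from ForK:

\[ \hat{y}(\vec{x}_{*}) = h(\vec{x}_{*})^{T} \beta + \underline{k}(\vec{x}_{*})^{T} \underline{V} \]

where \( h(\vec{x}_{*}) \) contains the regression basis functions evaluated at \( \vec{x}_{*} \) and \( \underline{k}(\vec{x}_{*}) \) contains the covariances between \( \vec{x}_{*} \) and the function and derivative observations at the training points.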

Definition at line 95 of file buildkrigingGEK.f90.
References covars::covarflag, choleskymod::invertsymmetric(), choleskymod::matrixmulttrans(), choleskymod::matvec(), choleskymod::matvectrans(), opt::optflag, choleskymod::symmatrixmulttrans(), choleskymod::symmatvec(), and choleskymod::symmetricsolve().
Referenced by krigingwrapper().