ForK Library

krigingmaxeipredictGEK.f90 File Reference

Subroutine to predict the maximum expected improvement from a Gradient-Enhanced Kriging surface based on a current minimum value (requires buildkrigingGEK to be called first).
Only works for Ordinary Kriging (constant mean function).


Functions/Subroutines

subroutine krigingmaxeipredictGEK (ndim, ntot, X, gtot, pts, dims, stot, H, beta, V, hyper, Xm, Ym, covarflagi, optflag, Lb, Ub, Ymin)
 See documentation for krigingmaxeipredict for details. This subroutine is the gradient-enhanced version and is functionally the same as the function-only subroutine.

Detailed Description

Subroutine to predict the maximum expected improvement from a Gradient-Enhanced Kriging surface based on a current minimum value (requires buildkrigingGEK to be called first).
Only works for Ordinary Kriging (constant mean function).

Definition in file krigingmaxeipredictGEK.f90.
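
For orientation, a minimal usage sketch follows. The sizes, data, and flag values are hypothetical, and H, beta, V, and hyper are assumed to come from a prior buildkrigingGEK call (see that subroutine's documentation for its argument list); only the call to krigingmaxeipredictGEK reflects the interface documented on this page.

    ! Hypothetical example: 2-D problem, 10 training points, gradients in every
    ! dimension at every point, ordinary Kriging (constant mean, stot = 1).
    program ei_usage_sketch
      implicit none
      integer, parameter :: ndim = 2, ntot = 10, gtot = ndim*ntot, stot = 1
      real(8) :: X(ndim, ntot)
      real(8) :: H(stot, ntot + gtot)       ! sized per the parameter documentation below
      real(8) :: beta(stot), V(ntot + gtot)
      real(8) :: hyper(ndim + 3)            ! sized per the parameter documentation below
      integer :: pts(gtot), dims(gtot)
      real(8) :: Lb(ndim), Ub(ndim), Xm(ndim), Ym, Ymin
      integer :: i, d

      ! Derivative bookkeeping: one derivative per dimension at each training point
      do i = 1, ntot
         do d = 1, ndim
            pts((i-1)*ndim + d)  = i    ! derivative enforced at point i
            dims((i-1)*ndim + d) = d    ! ... taken with respect to dimension d
         end do
      end do

      ! ... fill X with training locations and call buildkrigingGEK here to
      !     obtain H, beta, V, and hyper ...

      Lb = 0.d0; Ub = 1.d0      ! search box for the EI maximization
      Ymin = 1.234d0            ! current best (minimum) observed value

      ! covarflagi=1 (Matern 3/2), optflag=1 (pattern search)
      call krigingmaxeipredictGEK(ndim, ntot, X, gtot, pts, dims, stot, H, &
           beta, V, hyper, Xm, Ym, 1, 1, Lb, Ub, Ymin)

      print *, 'Max EI location:', Xm, '  Max EI value:', Ym
    end program ei_usage_sketch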


Function Documentation

subroutine krigingmaxeipredictGEK ( integer,intent(in)  ndim,
integer,intent(in)  ntot,
real(8),dimension(ndim,ntot),intent(in)  X,
integer,intent(in)  gtot,
integer,dimension(gtot),intent(in)  pts,
integer,dimension(gtot),intent(in)  dims,
integer,intent(in)  stot,
real(8),dimension(stot,ntot),intent(in)  H,
real(8),dimension(stot),intent(in)  Beta,
real(8),dimension(ntot),intent(in)  V,
real(8),dimension(ndim+2),intent(in)  hyper,
real(8),dimension(ndim),intent(out)  Xm,
real(8),intent(out)  Ym,
integer,intent(in)  covarflagi,
integer,intent(in)  optflag,
real(8),dimension(ndim),intent(in)  Lb,
real(8),dimension(ndim),intent(in)  Ub,
real(8),intent(in)  Ymin 
)

See documentation for krigingmaxeipredict for details. This subroutine is the gradient-enhanced version and is functionally the same as the function-only subroutine.

Author:
Brian Lockwood Department of Mechanical Engineering University of Wyoming
Date:
May 2, 2012
Parameters:
in) ndim: The dimension of the problem
in) ntot: The number of training points
in) X: The location of the training points (size=[ndim x ntot])
in) gtot: Number of derivative values included in the training data (ndim*ntot if all derivatives are included at the training points)
in) pts: List identifying the point at which each derivative value is enforced (size=[gtot], with values ranging from 1 to ntot)
in) dims: List identifying the dimension each derivative is taken with respect to (size=[gtot], with values ranging from 1 to ndim)
in) stot: Number of terms in the regression
in) H: The collocation matrix for the regression, including derivative values (size=[stot x (ntot+gtot)])
Columns 1:ntot are the basis evaluated at the training points
Columns ntot+1:ntot+gtot are the derivatives of the basis evaluated at the training points
in) beta: Regression coefficients based on the optimal estimate for the Kriging model (size=[stot])
Supplied by the buildkrigingGEK subroutine
in) V: Processed training data (size=[ntot+gtot])
Supplied by the buildkrigingGEK subroutine
in) hyper: Hyperparameters for the Kriging model (size=[ndim+3])
Supplied by the buildkrigingGEK subroutine
out) Xm: Location of the maximum expected improvement (size=[ndim])
out) Ym: Maximum expected improvement value
in) covarflagi: Flag to govern which covariance function is used

  • covarflag==1 Uses Matern function with \(\nu=3/2\)
  • covarflag==2 Uses Matern function with \(\nu=5/2\)


The parameter \(\nu\) governs the smoothness and differentiability of the covariance function.
When using gradient values, \(\nu=1/2\) is not sufficiently differentiable, so \(\nu \geq 3/2\) must be used.
The same covariance flag used in buildkrigingGEK must be supplied.

in) optflag: Flag to govern the optimization algorithm used to determine the maximum expected improvement

  • optflag=0 Simplex search (Nelder-Mead method) used to determine the maximum expected improvement
    Local optimization technique using simplex elements to determine the search direction and a line search to determine step sizes
    Fastest method, but inherently sequential and may stall at a local optimum
  • optflag=1 Pattern search used to determine the maximum expected improvement
    Global optimization method with fixed search directions (forward and backward in each dimension)
    Each iteration requires \(2*ndim\) function evaluations; however, these may be performed in parallel using OpenMP (each thread performs its own expected improvement evaluation, so OMP_STACK_SIZE must be set large enough); a minimal sketch of this parallel evaluation pattern appears at the end of this page
in) Lb: Lower bound for each variable (size=[ndim])
Should be the minimum possible value of Xm in each dimension
in) Ub: Upper bound for each variable (size=[ndim])
Should be the maximum possible value of Xm in each dimension
in) Ymin: The current minimum value that the new candidate improves on.
In the context of optimization, Ymin is the minimum value for the current optimization iteration.
The expected improvement at a point uses the mean and variance from the Kriging model to predict the improvement in the minimum value if a new simulation is performed at that location. Hence, the point of maximum expected improvement is the location in the design space most likely to reduce the minimum value by the largest amount.
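
For reference, a standard form of the expected improvement criterion (shown for orientation only; see the source for the exact expression the library evaluates) uses the Kriging mean \(\hat{y}(x)\) and standard deviation \(s(x)\):

\[
EI(x) = \left(Y_{min} - \hat{y}(x)\right)\Phi(z) + s(x)\,\phi(z), \qquad z = \frac{Y_{min} - \hat{y}(x)}{s(x)},
\]

where \(\Phi\) and \(\phi\) are the standard normal CDF and PDF. The subroutine searches the box defined by Lb and Ub for the location Xm that maximizes this quantity and returns that maximum in Ym.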

Definition at line 62 of file krigingmaxeipredictGEK.f90.

References patternsearch(), and simplexsearch().
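
The parallelism mentioned under optflag=1 comes from the fact that the \(2*ndim\) candidate evaluations in a pattern-search iteration are independent. The sketch below is a hypothetical illustration of that structure only, with a toy objective standing in for the Kriging expected improvement; it is not the library's patternsearch() implementation.

    ! Hypothetical sketch: one pattern-search step with candidates evaluated in
    ! parallel via OpenMP. toy_ei is a placeholder objective, not the library EI.
    program pattern_step_sketch
      implicit none
      integer, parameter :: ndim = 3
      real(8) :: x(ndim), step(ndim), ei(2*ndim), cand(ndim, 2*ndim)
      integer :: d, i, best

      x = 0.5d0; step = 0.1d0
      ! Forward and backward step in every dimension -> 2*ndim candidates
      do d = 1, ndim
         cand(:, 2*d-1) = x;  cand(d, 2*d-1) = x(d) + step(d)
         cand(:, 2*d)   = x;  cand(d, 2*d)   = x(d) - step(d)
      end do

      ! Each candidate's evaluation is independent, so the loop parallelizes
      !$omp parallel do
      do i = 1, 2*ndim
         ei(i) = toy_ei(cand(:, i))
      end do
      !$omp end parallel do

      best = maxloc(ei, dim=1)
      print *, 'best candidate:', best, '  value:', ei(best)
    contains
      pure function toy_ei(p) result(v)
        real(8), intent(in) :: p(:)
        real(8) :: v
        v = -sum((p - 0.3d0)**2)   ! placeholder objective
      end function toy_ei
    end program pattern_step_sketch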
