         (C) Copyright Universal Technical Systems, Inc. 1987
                    TK Solver Plus(R)  Version 1.0

                     Notes on Optimization Models


This file contains updated information about the \OPTIM models supplied
with Version 1.0 of TK Solver Plus.

(For updated information about the program itself, see the README.DOC file on
the program disk.)

C h a p t e r   6:   O p t i m i z a t i o n
--------------------------------------------

Introduction:

TK's strength goes beyond "what if" analysis (what results do you get if you
change this or that ingredient) to "how can" analysis, in which you input the
desired results and let the computer come up with the necessary ingredients.
Often, however, you need to know the best results (and ingredients)
achievable under the constraints of your model and data.

There are many kinds and nuances of optimization problems and hence a variety
of methods.  The TK Library includes the most important ones, in addition to
giving you the ultimate flexibility of examining the solution space and
modifying the problem conditions directly.


OPTIM1A.TK

This model shows the conceptually most straightforward method of finding an
extremum (i.e.  minimum or maximum) of a function of a single variable --
equating its derivative to zero.  At the first stage, TK is used as a scratch
pad for taking a derivative by symbolic differentiation.  The nonlinear
equation with the unknown appearing more than once is then solved by the
Iterative Solver.
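The two stages can be sketched in ordinary code as follows.  This is a minimal
Python illustration of the idea, not the TK model itself; the objective
f(x) = x*exp(-x) and its hand-taken derivative are assumptions chosen for the
example, and Newton iteration stands in for TK's Iterative Solver.

```python
import math

# Hypothetical objective: f(x) = x * exp(-x).  Its derivative, taken
# symbolically (by hand here, inside TK in the model), is
# f'(x) = (1 - x) * exp(-x).  Equating f'(x) to zero and solving
# iteratively locates the extremum.
def fprime(x):
    return (1.0 - x) * math.exp(-x)

def fsecond(x):
    # Second derivative, needed for the Newton step below.
    return (x - 2.0) * math.exp(-x)

def newton(x, tol=1e-10, max_iter=50):
    """Newton iteration on f'(x) = 0, a stand-in for TK's Iterative Solver."""
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:
            break
    return x

x_star = newton(0.5)   # started from a reasonable guess value
```

For this objective the iteration converges to x = 1, the maximum of x*exp(-x).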


OPTIM1B.TK

This model is a variation of OPTIM1A in which the derivative is approximated
numerically.  This approach is the easiest to formulate in TK: you create a
rule or procedure function that takes the value of an independent variable and
returns the value of the objective function.  In the Rule Sheet, you then
equate the values of the objective function at the unknown extremum point x
and at that point incremented by a small value dx, say 1e-6.  When started
with a reasonable guess value, the Iterative Solver returns a value of x
corresponding to the minimum or maximum value of the objective function.
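The rule "f(x) = f(x + dx)" can be sketched as a residual equation solved by a
root-finder.  A minimal Python sketch, assuming a hypothetical quadratic
objective (not the function in the model) and using the secant method as a
stand-in for TK's Iterative Solver:

```python
def f(x):
    # Hypothetical objective with its minimum at x = 2 (an assumption
    # for illustration only).
    return (x - 2.0) ** 2 + 1.0

DX = 1e-6   # small increment, as suggested in the model

def g(x):
    # The rule "f(x) = f(x + dx)" rewritten as a residual: at an extremum
    # the two values coincide, so g(x) ~ f'(x) * dx vanishes.
    return f(x + DX) - f(x)

def secant(x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration on g(x) = 0."""
    for _ in range(max_iter):
        g0, g1 = g(x0), g(x1)
        if g1 == g0:
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        if abs(x1 - x0) < tol:
            break
    return x1

x_min = secant(0.0, 1.0)   # two starting guesses bracketing nothing special
```

The answer lands within about dx/2 of the true minimum, which is the price of
the one-sided difference.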


OPTIM1C.TK

This model presents the procedure function Golden for golden section search,
the optimization analog of the bisection method for finding a root of an
equation.  The initial interval of the unknown variable x is gradually
subdivided into golden section fractions .618/.382 while keeping the minimum
of the objective function inside the reduced interval -- until the minimum is
found to the desired accuracy.
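The interval-shrinking scheme can be sketched as follows.  This is a generic
Python rendering of golden section search, not the library's Golden procedure;
the test objective is a hypothetical quadratic.

```python
INV_PHI = (5 ** 0.5 - 1) / 2   # 0.618..., the golden section fraction

def golden(f, a, b, tol=1e-8):
    """Golden section search for a minimum of f on [a, b]."""
    c = b - INV_PHI * (b - a)
    d = a + INV_PHI * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - INV_PHI * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + INV_PHI * (b - a)
    return (a + b) / 2

x_min = golden(lambda x: (x - 1.5) ** 2, 0.0, 4.0)
```

Each step discards a .382 fraction of the interval, so the bracket shrinks by
a constant factor per function evaluation, just as bisection does for roots.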


OPTIM1D.TK

The same problem as in the previous three models is solved by Brent's method,
which combines the golden section search with inverse parabolic interpolation
(i.e. interpreting the objective function as a parabola passing through three
given points, and using the analytical solution for the parabola's minimum).
Besides the procedure function Brent there is a procedure function bracket
which, prior to calling Brent, searches for an interval containing the
minimum point when it lies outside the originally given interval.
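The inverse parabolic interpolation step at the heart of Brent's method has a
closed form: the abscissa of the vertex of the parabola through three points.
A Python sketch of just that step (not the Brent or bracket procedures
themselves; the test function is an assumed parabola):

```python
def parabolic_min(a, fa, b, fb, c, fc):
    """Abscissa of the vertex of the parabola through (a,fa), (b,fb), (c,fc),
    i.e. the trial point produced by inverse parabolic interpolation."""
    num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
    den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
    return b - 0.5 * num / den

# For an exactly parabolic objective, one step recovers the minimum.
f = lambda x: (x - 2.0) ** 2 + 3.0
x_v = parabolic_min(0.0, f(0.0), 1.0, f(1.0), 4.0, f(4.0))
```

When the objective is only approximately parabolic, the trial point merely
improves the bracket; Brent's method falls back to golden section steps
whenever the interpolation misbehaves.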


OPTIM2A.TK

This model shows the application of the symbolic differentiation approach from
model OPTIM1A to a two-dimensional optimization problem.


OPTIM2B.TK

The implied numeric differentiation used in model OPTIM1B is applied to a
two-dimensional optimization problem (and may be easily generalized to
multidimensional problems).
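The generalization to n dimensions can be sketched as a forward-difference
gradient driving a simple descent loop.  This is a Python illustration of the
idea, not the TK model (which instead states the equalities
f(..., xi, ...) = f(..., xi + dx, ...) as rules); the two-dimensional
objective is an assumption for the example.

```python
def num_grad(f, x, dx=1e-6):
    """Forward-difference gradient of f at the point x (a list)."""
    fx = f(x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += dx
        g.append((f(xp) - fx) / dx)
    return g

def descend(f, x, step=0.1, tol=1e-5, max_iter=10000):
    # Plain gradient descent driven by the numeric gradient.
    for _ in range(max_iter):
        g = num_grad(f, x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
        if max(abs(gi) for gi in g) < tol:
            break
    return x

# Hypothetical 2-D objective with its minimum at (1, -2).
f2 = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
x_min = descend(f2, [0.0, 0.0])
```

Note that the one-sided difference biases the computed gradient by roughly dx,
which limits the attainable accuracy to about dx/2 per coordinate.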


OPTIM2C.TK

This model presents the procedure function simplex and several associated
functions representing the robust Nelder-Mead algorithm for n-dimensional
minimization.  The procedure involves building a set of n+1 points (a simplex)
in n-dimensional space, examining the values of the objective function at
these points, and replacing the points (i.e. expanding or contracting the
simplex) until the point with the minimum value is reached.
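The build-examine-replace cycle can be sketched compactly.  The following is a
simplified Python rendering of the Nelder-Mead algorithm (reflection,
expansion, contraction, shrink), not the library's simplex procedure; the
objective used to exercise it is an assumed quadratic.

```python
def nelder_mead(f, x0, step=1.0, tol=1e-10, max_iter=500):
    """Simplified Nelder-Mead simplex minimization of f from the point x0."""
    n = len(x0)
    # Build the initial simplex: n+1 points in n-dimensional space.
    pts = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        pts.append(p)
    for _ in range(max_iter):
        pts.sort(key=f)
        best, worst = pts[0], pts[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        # Centroid of every point except the worst.
        cen = [sum(p[i] for p in pts[:-1]) / n for i in range(n)]
        refl = [2 * cen[i] - worst[i] for i in range(n)]         # reflect
        if f(refl) < f(best):
            exp = [3 * cen[i] - 2 * worst[i] for i in range(n)]  # expand
            pts[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(pts[-2]):
            pts[-1] = refl
        else:
            con = [(cen[i] + worst[i]) / 2 for i in range(n)]    # contract
            if f(con) < f(worst):
                pts[-1] = con
            else:   # shrink the whole simplex toward the best point
                pts = [best] + [[(p[i] + best[i]) / 2 for i in range(n)]
                                for p in pts[1:]]
    return min(pts, key=f)

f2 = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2
x_min = nelder_mead(f2, [0.0, 0.0])
```

The method needs only function values -- no derivatives -- which is what makes
it robust on noisy or non-smooth objectives.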


OPTIM2D.TK

This model presents the Polak-Ribiere version of the conjugate gradient method
for multidimensional optimization.  It evaluates a gradient of the objective
function at a given point, performs a one-dimensional optimization in the
direction of steepest descent, repeats the cycle from this new point, and so
on, until the minimum is reached.
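The cycle just described can be sketched in Python.  Everything here is an
illustrative assumption rather than the library's procedure: the gradient is
taken by central differences, the one-dimensional optimization is a bracketing
plus golden section search, and the test objective is a made-up quadratic.
Only the Polak-Ribiere direction update is taken directly from the method's
definition.

```python
def grad(f, x, dx=1e-6):
    """Central-difference gradient of f at the point x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += dx
        xm[i] -= dx
        g.append((f(xp) - f(xm)) / (2 * dx))
    return g

def line_min(f, x, d):
    """One-dimensional minimization of f along the direction d from x:
    bracket the minimum by doubling, then refine by golden section."""
    g = lambda t: f([xi + t * di for xi, di in zip(x, d)])
    hi = 1e-4
    while g(2 * hi) < g(hi) and hi < 1e6:
        hi *= 2
    a, b = 0.0, 2 * hi
    r = (5 ** 0.5 - 1) / 2
    c, e = b - r * (b - a), a + r * (b - a)
    for _ in range(100):
        if g(c) < g(e):
            b, e = e, c
            c = b - r * (b - a)
        else:
            a, c = c, e
            e = a + r * (b - a)
    t = (a + b) / 2
    return [xi + t * di for xi, di in zip(x, d)]

def polak_ribiere(f, x, tol=1e-8, max_iter=200):
    """Conjugate gradient cycle: evaluate the gradient, minimize along the
    search direction, update the direction by the Polak-Ribiere rule, repeat."""
    g = grad(f, x)
    d = [-gi for gi in g]
    for _ in range(max_iter):
        x = line_min(f, x, d)
        g_new = grad(f, x)
        if sum(gi * gi for gi in g_new) < tol:
            break
        # Polak-Ribiere coefficient: beta = g_new.(g_new - g) / g.g
        beta = (sum(gn * (gn - go) for gn, go in zip(g_new, g))
                / sum(go * go for go in g))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

f2 = lambda v: (v[0] - 1.0) ** 2 + 4.0 * (v[1] - 2.0) ** 2
x_min = polak_ribiere(f2, [0.0, 0.0])
```

On a quadratic objective with exact line searches, conjugate gradient reaches
the minimum in at most n cycles; the Polak-Ribiere update is what keeps the
successive search directions from collapsing back into plain steepest descent.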
