wnnlp(3)
NAME
     wn_nlp_conj_method, wn_make_linear_constraint,
     wn_make_nonlinear_constraint - constrained non-linear
     optimization package
SYNOPSIS
     #include <wn/wnnlp.h>

     #define WN_EQ_COMPARISON 1
     #define WN_GT_COMPARISON 2
     #define WN_LT_COMPARISON 3

     typedef struct wn_linear_constraint_type_struct
     {
       ...
       int size;
       int *vars;
       int comparison_type;
       double *weights;
       double rhs;
     } *wn_linear_constraint_type;

     typedef struct wn_nonlinear_constraint_type_struct
     {
       ...
       int size;
       int *vars;
       int comparison_type;
       double offset;
       ptr client_data;
       double (*pfunction)(int size,double *values,ptr client_data);
       void (*pgradient)(double *grad,int size,double *values,
                         ptr client_data);
     } *wn_nonlinear_constraint_type;

     void wn_nlp_conj_method
     (
       int *code,
       double *val_min,
       double solution_vect[],
       double delta_vect[],
       wn_nonlinear_constraint_type objective,
       wn_sll constraint_list,
       int num_vars,
       int conj_iterations,
       int offset_iterations,
       double offset_adjust_rate
     )

     void wn_make_linear_constraint
     (
       wn_linear_constraint_type *constraint,
       int size,double rhs,int comparison_type
     )

     void wn_make_nonlinear_constraint
     (
       wn_nonlinear_constraint_type *constraint,
       int size,int comparison_type
     )

     extern int wn_nlp_verbose;
DESCRIPTION
     This package is useful for minimizing general functions of many
variables subject to general constraints.  The function to minimize
and the constraints can be linear or non-linear.  If they are
non-linear, they must be continuous and differentiable.  The
constraints can be equality or inequality constraints.
     The optimization algorithm works roughly as follows: the
constraints and the objective function are combined to create a
master unconstrained objective function.  The constraints are
handled by adding a quadratic penalty function to the master
objective function for each constraint that is violated.  The
master objective function is optimized using the conjugate-gradient
hill-climbing algorithm (see wn_conj_direction_method(3)).  The
resulting solution probably violates some of the constraints.
Offsets are added to the violated constraints in order to push the
solution toward compliance with the violated constraints.  The
conjugate-gradient procedure is repeated.  This entire cycle is
repeated until convergence.
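     The cycle described above can be sketched in miniature.  The
following stand-alone program is illustrative only: it minimizes
f(x) = x*x subject to x >= 1 in one variable, uses a plain
gradient-descent inner loop in place of the package's
conjugate-gradient step, and adjusts a constraint offset by the
remaining violation after each inner solve.  All names and
parameter values in it are hypothetical, not part of WNLIB.

     #include <math.h>
     #include <stdio.h>

     #define PENALTY_WEIGHT      10.0
     #define CONJ_ITERATIONS     200   /* inner descent iterations */
     #define OFFSET_ITERATIONS   20    /* outer offset-adjust cycles */
     #define OFFSET_ADJUST_RATE  1.0   /* in (0,1] */

     /* gradient of the master objective x*x plus, when the shifted
        constraint x >= 1 + offset is violated, a quadratic penalty */
     static double master_gradient(double x, double offset)
     {
       double g = 2.0*x;                   /* d/dx of x*x */
       double viol = (1.0 + offset) - x;   /* shifted slack */
       if(viol > 0.0)
         g += -2.0*PENALTY_WEIGHT*viol;    /* penalty gradient */
       return g;
     }

     double solve(void)
     {
       double x = 0.0, offset = 0.0;
       for(int outer = 0; outer < OFFSET_ITERATIONS; ++outer)
       {
         for(int inner = 0; inner < CONJ_ITERATIONS; ++inner)
           x -= 0.02*master_gradient(x, offset);  /* inner solve */
         if(1.0 - x > 0.0)         /* still violated: push offset */
           offset += OFFSET_ADJUST_RATE*(1.0 - x);
       }
       return x;
     }

     int main(void)
     {
       printf("x = %f\n", solve());  /* converges to x = 1 */
       return 0;
     }

Each outer cycle leaves the inner minimum a little short of the
constraint boundary; growing the offset shifts the penalty so the
next inner minimum lands closer to it, which is the mechanism the
paragraph above describes.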
     If all goes well, the algorithm converges to a local minimum;
this local minimum is not necessarily a global minimum or even a
good local minimum.  If either the objective function or the
constraint functions are not differentiable, the algorithm often
gets stuck in a very sub-optimal solution, and no warning is given
that this has happened.
     Proper scaling of the objective function and of the constraint
functions is important to ensure fast and accurate convergence of
the algorithm.  The functions and variables should be scaled so
that the partial derivatives of all functions with respect to all
variables in the region of the solution are near +1 or -1.
Partials which are always very large or always very small should be
scaled so as to be in this region.
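     As a hypothetical numeric illustration of this scaling advice
(the function and units below are invented, not from WNLIB): a
variable measured in the "wrong" units makes the partial derivative
huge, and rescaling the variable brings it near +/-1.

     #include <stdio.h>

     /* f(u) = (1000*u - 1)^2: near the solution u = 0.001 the
        partial df/du = 2000*(1000*u - 1) is of order 1000. */
     static double dfdu(double u) { return 2000.0*(1000.0*u - 1.0); }

     /* Substituting x = 1000*u gives g(x) = (x - 1)^2, whose
        partial dg/dx = 2*(x - 1) is of order 1 near x = 1. */
     static double dgdx(double x) { return 2.0*(x - 1.0); }

     int main(void)
     {
       printf("unscaled partial near solution: %f\n", dfdu(0.0015));
       printf("scaled   partial near solution: %f\n", dgdx(1.5));
       return 0;
     }

The two calls evaluate the same point in the two parameterizations;
the unscaled partial is about 1000 while the scaled one is about 1.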
     wn_nlp_conj_method solves a constrained non-linear minimization
problem.  objective specifies the linear or non-linear objective
function to minimize.  constraint_list is a singly linked list of
linear and non-linear constraints.  objective and constraint_list
together comprise the problem to be solved.  They must be composed
of objects of types wn_linear_constraint_type and
wn_nonlinear_constraint_type.  num_vars is the number of variables
in the problem.  solution_vect is a vector which the caller should
initialize to contain a vector which is as close as possible to the
solution; after termination solution_vect contains the solution.
val_min is set to the value of the objective function at
solution_vect.  code indicates the status of the run.
conj_iterations specifies the number of iterations in the
conjugate-gradient algorithm.  Making it too small results in
erratic behavior and non-convergence; making it too big wastes CPU
time.  Often setting conj_iterations to the same value as num_vars
works well.  offset_iterations specifies the number of iterations
used to adjust constraint offsets; often a number like 10 or 20
works well.  offset_adjust_rate specifies how aggressively
constraint offsets are adjusted: 1 is the most aggressive, 0 means
no adjustment at all.  offset_adjust_rate should always be in the
range (0,1].  For linear problems, or problems which are only
"mildly" nonlinear, 1 usually works well.  Using a value that is
too large results in erratic non-convergence; using a value that is
too small wastes CPU time.  delta_vect is a vector of delta values
for computing numerical gradients if the caller does not provide
symbolic gradients for all of the non-linear constraints.  If all
non-linear constraints have symbolic gradients, set delta_vect to
NULL.
     wn_make_linear_constraint makes a linear constraint.  size
specifies the number of variables involved in the constraint; rhs
is the right-hand side of the constraint.  rhs is ignored if the
constraint is used as the objective function.  comparison_type must
be one of WN_EQ_COMPARISON, WN_GT_COMPARISON, or WN_LT_COMPARISON.
The caller must set the vectors weights and vars of the linear
constraint.  vars is an array of integers of size size.  Each entry
specifies the index of a variable which is included in the
constraint.  weights is an array of doubles of size size.  Each
entry specifies a coefficient for the corresponding variable of
vars.
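     The following stand-alone sketch shows how a filled-in linear
constraint is interpreted.  The struct mirrors the fields shown in
the SYNOPSIS (the real object comes from wn_make_linear_constraint);
the helper function, variable names, and sample numbers are
illustrative assumptions, not part of WNLIB.

     #include <stdio.h>

     #define WN_LT_COMPARISON 3

     /* local mirror of the fields from the SYNOPSIS */
     struct linear_constraint
     {
       int size;
       int *vars;
       int comparison_type;
       double *weights;
       double rhs;
     };

     /* left-hand side: sum over i of weights[i]*values[vars[i]] */
     static double lhs(struct linear_constraint *c, double *values)
     {
       double sum = 0.0;
       for(int i = 0; i < c->size; ++i)
         sum += c->weights[i]*values[c->vars[i]];
       return sum;
     }

     int main(void)
     {
       /* constraint: 2*x0 + 3*x2 <= 10, over variables x0..x2 */
       int vars[2] = {0, 2};
       double weights[2] = {2.0, 3.0};
       struct linear_constraint c =
         {2, vars, WN_LT_COMPARISON, weights, 10.0};

       double values[3] = {2.0, 99.0, 1.0};  /* x1 not constrained */
       printf("lhs = %f, satisfied = %d\n", lhs(&c, values),
              lhs(&c, values) <= c.rhs);     /* lhs = 7, ok */
       return 0;
     }

Note that x1 can take any value without affecting this constraint,
since only the indices listed in vars participate.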
     wn_make_nonlinear_constraint makes a non-linear constraint.
size specifies the number of variables involved in the constraint;
comparison_type must be one of WN_EQ_COMPARISON, WN_GT_COMPARISON,
or WN_LT_COMPARISON.  As with a linear constraint, the caller must
set vars; the caller must also set pfunction, and possibly
pgradient and client_data.  vars is an array of integers of size
size.  Each entry specifies the index of a variable which is
included in the constraint.  pfunction is a pointer to a non-linear
function of the variables in vars.  The constraint implemented is
one of

          (*pfunction)(...) >= 0
          (*pfunction)(...) <= 0
          (*pfunction)(...) == 0
depending on the value of comparison_type.  pgradient is a pointer
to a function which places the gradient of pfunction in grad.  If
you do not want to bother figuring out the gradient of pfunction,
do not set pgradient.  The gradient will be computed numerically by
calling pfunction with domain values differing by amounts from
delta_vect.  This can be slow.  client_data is a pointer to
user-defined data to be associated with the constraint.
     wn_nlp_verbose controls the amount of status information output
by wn_nlp_conj_method.  0 produces no output; 1 produces some
output; 2 produces more; 3 produces the most.
RESOURCES
     wn_nlp_conj_method runs with

          time = conj_iterations*offset_iterations*
                 <num terms in constraints>
          stack memory = 1
          dynamic memory = num_vars

     Of course, this figure tells one nothing about the time
required to achieve good convergence.  That depends heavily on the
problem being solved, on the user's skill in properly adjusting
conj_iterations, offset_iterations, and offset_adjust_rate, and on
the problem scaling.
     Typically, if the problem is reasonably behaved and well
scaled, one can set conj_iterations to something on the order of
the number of variables num_vars, and set offset_iterations to some
number like 10 or 20.
     The author has successfully used this routine to solve
non-linear problems with 10,000 variables and 50,000 constraints in
a few hours of workstation time.  This routine can also solve large
linear programming problems, but the numerical accuracy is not as
good as that of a more specialized LP algorithm.
DIAGNOSTICS
     code == WN_SUCCESS     means successful completion, optimum found
     code == WN_SUBOPTIMAL  means termination before optimum found
     code == WN_UNBOUNDED   means function is unbounded
SEE ALSO
wnconj(3), wnsplx(3), wnsll(3)
AUTHOR
     Will Naylor

WNLIB                                                  August 23, 1998