NAME

       nlopt - Nonlinear optimization library

SYNOPSIS

       #include <nlopt.h>

       nlopt_opt opt = nlopt_create(algorithm, n);
       nlopt_set_min_objective(opt, f, f_data);
       nlopt_set_ftol_rel(opt, tol);
       ...
       nlopt_optimize(opt, x, &opt_f);
       nlopt_destroy(opt);

       The "..." indicates any number of calls to NLopt functions, below, to
       set parameters of the optimization, constraints, and stopping
       criteria.  Here, nlopt_set_ftol_rel is merely an example of a
       possible stopping criterion.  You should link the resulting program
       with the linker flags -lnlopt -lm on Unix.

DESCRIPTION

       NLopt  is  a  library for nonlinear optimization.  It attempts to minimize (or maximize) a
       given nonlinear objective function f of n design variables, using the specified algorithm,
       possibly  subject to linear or nonlinear constraints.  The optimum function value found is
       returned in opt_f (type double) with the corresponding design variable values returned  in
       the  (double)  array  x of length n.  The input values in x should be a starting guess for
       the optimum.

       The parameters of the optimization are controlled via the object opt  of  type  nlopt_opt,
       which  is  created  by  the  function  nlopt_create  and disposed of by nlopt_destroy.  By
       calling various functions in the NLopt library, one can specify stopping criteria (e.g., a
       relative  tolerance  on  the objective function value is specified by nlopt_set_ftol_rel),
       upper and/or lower bounds on  the  design  parameters  x,  and  even  arbitrary  nonlinear
       inequality and equality constraints.

       By  changing  the  parameter algorithm among several predefined constants described below,
       one can switch easily between  a  variety  of  minimization  algorithms.   Some  of  these
       algorithms  require  the  gradient (derivatives) of the function to be supplied via f, and
       other algorithms do not require derivatives.  Some of the algorithms  attempt  to  find  a
       global optimum within the given bounds, and others find only a local optimum.  Most of the
       algorithms only handle the case where there  are  no  nonlinear  constraints.   The  NLopt
       library  is  a  wrapper  around several free/open-source minimization packages, as well as
       some new implementations of published optimization  algorithms.   You  could,  of  course,
       compile  and  call  these packages separately, and in some cases this will provide greater
       flexibility than is available via NLopt.  However, depending upon  the  specific  function
       being optimized, the different algorithms will vary in effectiveness.  The intent of NLopt
       is to allow you to quickly switch between algorithms in order to experiment with them  for
       your problem, by providing a simple unified interface to these subroutines.
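
       For concreteness, the following is a minimal complete sketch (the quadratic
       objective is purely illustrative) that minimizes f(x) = (x[0]-1)^2 + (x[1]-2)^2
       using the gradient-based L-BFGS algorithm; compile and link it with -lnlopt -lm:

         #include <stdio.h>
         #include <nlopt.h>

         /* illustrative objective: f(x) = (x[0]-1)^2 + (x[1]-2)^2 */
         double myfunc(unsigned n, const double *x, double *grad, void *f_data)
         {
             if (grad) {   /* supply the gradient when the algorithm requests it */
                 grad[0] = 2.0 * (x[0] - 1.0);
                 grad[1] = 2.0 * (x[1] - 2.0);
             }
             return (x[0] - 1.0) * (x[0] - 1.0) + (x[1] - 2.0) * (x[1] - 2.0);
         }

         int main(void)
         {
             nlopt_opt opt = nlopt_create(NLOPT_LD_LBFGS, 2); /* 2 design variables */
             double x[2] = { 0.0, 0.0 };   /* starting guess */
             double opt_f;                 /* minimum value found */
             nlopt_set_min_objective(opt, myfunc, NULL);
             nlopt_set_xtol_rel(opt, 1e-6);
             if (nlopt_optimize(opt, x, &opt_f) < 0)
                 printf("nlopt failed!\n");
             else
                 printf("found minimum %g at (%g, %g)\n", opt_f, x[0], x[1]);
             nlopt_destroy(opt);
             return 0;
         }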

OBJECTIVE FUNCTION

       The objective function is specified by calling one of:

         nlopt_result nlopt_set_min_objective(nlopt_opt opt,
                                              nlopt_func f,
                                              void* f_data);
         nlopt_result nlopt_set_max_objective(nlopt_opt opt,
                                              nlopt_func f,
                                              void* f_data);

       depending  on  whether  one  wishes  to  minimize  or  maximize  the objective function f,
       respectively.  The function f should be of the form:

         double f(unsigned n,
                  const double* x,
                  double* grad,
                  void* f_data);

       The return value should be the value of the function at the point x, where x points to  an
       array of length n of the design variables.  The dimension n is identical to the one passed
       to nlopt_create.

       In addition, if the argument grad is not NULL, then grad points to an array  of  length  n
       which  should  (upon  return)  be  set to the gradient of the function with respect to the
       design variables at x.  That is, grad[i] should upon return contain the partial derivative
       df/dx[i],  for  0  <=  i < n, if grad is non-NULL.  Not all of the optimization algorithms
       (below) use the gradient information: for algorithms listed as "derivative-free," the grad
       argument  will  always  be  NULL  and need never be computed.  (For algorithms that do use
       gradient information, however, grad may still be NULL for some calls.)

       The f_data  argument  is  the  same  as  the  one  passed  to  nlopt_set_min_objective  or
       nlopt_set_max_objective,  and  may  be  used  to  pass  any additional data through to the
       function.  (That is, it may be  a  pointer  to  some  caller-defined  data  structure/type
       containing information your function needs, which you convert from void* by a typecast.)
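
       As a sketch of this pattern (the my_params structure and its coefficients are
       hypothetical), an objective parameterized by user data might look like:

         #include <nlopt.h>

         typedef struct { double a, b; } my_params;   /* hypothetical user data */

         double myfunc(unsigned n, const double *x, double *grad, void *f_data)
         {
             const my_params *p = (const my_params *) f_data;  /* typecast from void* */
             double val = 0.0;
             unsigned i;
             for (i = 0; i < n; ++i)
                 val += p->a * (x[i] - p->b) * (x[i] - p->b);
             if (grad)    /* grad may be NULL: check before writing derivatives */
                 for (i = 0; i < n; ++i)
                     grad[i] = 2.0 * p->a * (x[i] - p->b);
             return val;  /* f(x) = a * sum_i (x[i] - b)^2 */
         }

       which would be registered by declaring my_params p = { 1.0, 3.0 }; and calling
       nlopt_set_min_objective(opt, myfunc, &p).  Note that p must remain valid until
       nlopt_optimize returns.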

BOUND CONSTRAINTS

       Most  of  the  algorithms  in NLopt are designed for minimization of functions with simple
       bound constraints on the inputs.  That is, each component x[i] of the input vector is
       constrained to lie in a hyperrectangle lb[i] <= x[i] <= ub[i] for 0 <= i < n.  These
       bounds are specified by passing arrays lb and ub of length n to one or both of the
       functions:

         nlopt_result nlopt_set_lower_bounds(nlopt_opt opt,
                                             const double* lb);
         nlopt_result nlopt_set_upper_bounds(nlopt_opt opt,
                                             const double* ub);

       If a lower/upper bound is not set, the default is no bound (unconstrained, i.e. a bound of
       infinity);  it  is  possible  to  have  lower  bounds  but not upper bounds or vice versa.
       Alternatively, the user can call one of the above functions and explicitly  pass  a  lower
       bound  of  -HUGE_VAL  and/or an upper bound of +HUGE_VAL for some design variables to make
       them have no lower/upper bound, respectively.  (HUGE_VAL is the standard C constant for  a
       floating-point infinity, found in the math.h header file.)

       Note,  however,  that  some  of the algorithms in NLopt, in particular most of the global-
       optimization algorithms, do not support unconstrained  optimization  and  will  return  an
       error if you do not supply finite lower and upper bounds.

       For  convenience, the following two functions are supplied in order to set the lower/upper
       bounds for all design variables to a single constant (so that you don't have  to  fill  an
       array with a constant value):

         nlopt_result nlopt_set_lower_bounds1(nlopt_opt opt,
                                              double lb);
         nlopt_result nlopt_set_upper_bounds1(nlopt_opt opt,
                                              double ub);
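
       For example, a two-variable problem might bound x[0] below by zero, leave x[1]
       unbounded below, and cap both variables above (set_example_bounds is a
       hypothetical helper):

         #include <math.h>    /* for HUGE_VAL */
         #include <nlopt.h>

         void set_example_bounds(nlopt_opt opt)   /* hypothetical helper, n = 2 */
         {
             double lb[2] = { 0.0, -HUGE_VAL };   /* x[1] has no lower bound */
             nlopt_set_lower_bounds(opt, lb);
             nlopt_set_upper_bounds1(opt, 10.0);  /* all x[i] <= 10 */
         }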

NONLINEAR CONSTRAINTS

       Several of the algorithms in NLopt (e.g. MMA, CCSAQ, COBYLA, SLSQP, ORIG_DIRECT,
       ISRES, and AUGLAG) also support arbitrary nonlinear inequality constraints, and some
       of them (COBYLA, SLSQP, ISRES, and AUGLAG) also allow nonlinear equality constraints.
       For these algorithms, you can specify as many nonlinear constraints as you wish by
       calling the following functions multiple times.

       In particular, a nonlinear inequality constraint  of  the  form  fc(x)  <=  0,  where  the
       function  fc  is  of  the  same  form  as  the  objective function described above, can be
       specified by calling:

         nlopt_result nlopt_add_inequality_constraint(nlopt_opt opt,
                                                      nlopt_func fc,
                                                      void* fc_data,
                                                      double tol);

       Just as for the objective function, fc_data is a pointer to arbitrary user data that  will
       be  passed  through  to  the  fc  function  whenever it is called.  The parameter tol is a
       tolerance that is used for the purpose of stopping criteria only: a point x is  considered
       feasible  for  judging  whether  to stop the optimization if fc(x) <= tol.  A tolerance of
       zero means that NLopt will not consider a point x feasible unless fc(x) <= 0 holds
       exactly; generally, at least a small positive tolerance is advisable to reduce
       sensitivity to rounding errors.

       A nonlinear equality constraint of the form h(x) = 0, where the function h is of the  same
       form as the objective function described above, can be specified by calling:

         nlopt_result nlopt_add_equality_constraint(nlopt_opt opt,
                                                    nlopt_func h,
                                                    void* h_data,
                                                    double tol);

       Just  as  for the objective function, h_data is a pointer to arbitrary user data that will
       be passed through to the h function whenever  it  is  called.   The  parameter  tol  is  a
       tolerance  that is used for the purpose of stopping criteria only: a point x is considered
       feasible for judging whether to stop the optimization if  |h(x)|  <=  tol.   For  equality
       constraints,  a  small  positive  tolerance is strongly advised in order to allow NLopt to
       converge even if the equality constraint is slightly nonzero.

       (For any algorithm listed as "derivative-free" below, the grad argument to fc  or  h  will
       always be NULL and need never be computed.)

       To  remove all of the inequality and/or equality constraints from a given problem opt, you
       can call the following functions:

         nlopt_result nlopt_remove_inequality_constraints(nlopt_opt opt);
         nlopt_result nlopt_remove_equality_constraints(nlopt_opt opt);
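
       To illustrate (both constraint functions below are hypothetical), an inequality
       constraint x[0] + x[1] <= 1 and an equality constraint x[0] = x[1]^2 could be
       specified as follows:

         #include <nlopt.h>

         /* hypothetical inequality constraint: fc(x) = x[0] + x[1] - 1 <= 0 */
         double my_fc(unsigned n, const double *x, double *grad, void *data)
         {
             if (grad) { grad[0] = 1.0; grad[1] = 1.0; }
             return x[0] + x[1] - 1.0;
         }

         /* hypothetical equality constraint: h(x) = x[0] - x[1]^2 = 0 */
         double my_h(unsigned n, const double *x, double *grad, void *data)
         {
             if (grad) { grad[0] = 1.0; grad[1] = -2.0 * x[1]; }
             return x[0] - x[1] * x[1];
         }

         void add_example_constraints(nlopt_opt opt)  /* hypothetical helper, n = 2 */
         {
             nlopt_add_inequality_constraint(opt, my_fc, NULL, 1e-8);
             nlopt_add_equality_constraint(opt, my_h, NULL, 1e-8);
         }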

ALGORITHMS

       The algorithm parameter specifies the optimization algorithm (for more  detail  on  these,
       see  the  README  files  in  the  source-code  subdirectories), and can take on any of the
       following constant values.

       Constants with _G{N,D}_ in their names  refer  to  global  optimization  methods,  whereas
       _L{N,D}_  refers  to local optimization methods (that try to find a local optimum starting
       from the starting guess x).  Constants with _{G,L}N_ refer  to  non-gradient  (derivative-
       free)  algorithms that do not require the objective function to supply a gradient, whereas
       _{G,L}D_ refers to derivative-based algorithms that  require  the  objective  function  to
       supply  a  gradient.   (Especially for local optimization, derivative-based algorithms are
       generally superior to derivative-free ones: the gradient  is  good  to  have  if  you  can
       compute it cheaply, e.g. via an adjoint method.)

       The algorithm specified for a given problem opt is returned by the function:

         nlopt_algorithm nlopt_get_algorithm(nlopt_opt opt);

       The available algorithms are:

       NLOPT_GN_DIRECT_L
              Perform  a  global  (G)  derivative-free (N) optimization using the DIRECT-L search
              algorithm by Jones et al. as modified by Gablonsky  et  al.  to  be  more  weighted
               towards local search.  Does not support unconstrained optimization.  There are
               also several other variants of the DIRECT algorithm that are supported:
               NLOPT_GN_DIRECT, which is the original DIRECT algorithm;
               NLOPT_GN_DIRECT_L_RAND, a slightly randomized version of DIRECT-L that may be
               better in high-dimensional search spaces; and NLOPT_GN_DIRECT_NOSCAL,
               NLOPT_GN_DIRECT_L_NOSCAL, and NLOPT_GN_DIRECT_L_RAND_NOSCAL, which are
              versions of DIRECT where the dimensions are not rescaled to a unit hypercube (which
              means that dimensions with larger bounds are given more weight).

       NLOPT_GN_ORIG_DIRECT_L
               A global (G) derivative-free (N) optimization using the DIRECT-L algorithm as
               above, along with NLOPT_GN_ORIG_DIRECT, which is the original DIRECT algorithm.
               Unlike NLOPT_GN_DIRECT_L above, these two algorithms refer to code based on the
               original Fortran code of Gablonsky et al., which has some hard-coded limitations
               on the number of subdivisions, etc., and does not support all of the NLopt
               stopping criteria, but on the other hand it supports arbitrary nonlinear
               inequality constraints.

       NLOPT_GD_STOGO
              Global (G) optimization using the StoGO algorithm by Madsen et al.  StoGO  exploits
              gradient  information  (D)  (which must be supplied by the objective) for its local
              searches, and performs the global search by  a  branch-and-bound  technique.   Only
              bound-constrained optimization is supported.  There is also another variant of this
              algorithm, NLOPT_GD_STOGO_RAND, which is a randomized version of the  StoGO  search
              scheme.  The StoGO algorithms are only available if NLopt is compiled with C++ code
              enabled, and should be linked  via  -lnlopt_cxx  instead  of  -lnlopt  (via  a  C++
              compiler, in order to link the C++ standard libraries).

       NLOPT_LN_NELDERMEAD
              Perform  a  local  (L)  derivative-free  (N) optimization, starting at x, using the
              Nelder-Mead simplex algorithm, modified to support bound constraints.  Nelder-Mead,
              while  popular,  is  known  to  occasionally  fail  to  converge for some objective
              functions, so it should be used with caution.  Anecdotal  evidence,  on  the  other
              hand,  suggests  that  it  works fairly well for some cases that are hard to handle
              otherwise, e.g. noisy/discontinuous objectives.  See also NLOPT_LN_SBPLX below.

       NLOPT_LN_SBPLX
              Perform a local (L) derivative-free (N)  optimization,  starting  at  x,  using  an
              algorithm  based  on  the  Subplex  algorithm of Rowan et al., which is an improved
              variant of Nelder-Mead (above).  Our implementation does not use  Rowan's  original
              code,  and  has  some  minor  modifications  such  as  explicit  support  for bound
              constraints.  (Like Nelder-Mead, Subplex often works well  in  practice,  even  for
              noisy/discontinuous  objectives,  but  there  is no rigorous guarantee that it will
              converge.)

       NLOPT_LN_PRAXIS
              Local (L) derivative-free (N) optimization using the principal-axis  method,  based
              on  code by Richard Brent.  Designed for unconstrained optimization, although bound
              constraints are supported too (via the inefficient method of  returning  +Inf  when
              the constraints are violated).

       NLOPT_LD_LBFGS
              Local  (L)  gradient-based  (D) optimization using the limited-memory BFGS (L-BFGS)
              algorithm.  (The objective  function  must  supply  the  gradient.)   Unconstrained
              optimization  is  supported  in  addition  to simple bound constraints (see above).
              Based on an implementation by Luksan et al.

       NLOPT_LD_VAR2
              Local (L) gradient-based (D) optimization using a shifted limited-memory  variable-
              metric  method  based  on  code by Luksan et al., supporting both unconstrained and
               bound-constrained optimization.  NLOPT_LD_VAR2 uses a rank-2 method, while
               NLOPT_LD_VAR1 is another variant using a rank-1 method.

       NLOPT_LD_TNEWTON_PRECOND_RESTART
              Local  (L)  gradient-based (D) optimization using an LBFGS-preconditioned truncated
              Newton method with steepest-descent restarting, based on code  by  Luksan  et  al.,
              supporting  both  unconstrained  and  bound-constrained  optimization.   There  are
              several other variants of this algorithm:  NLOPT_LD_TNEWTON_PRECOND  (same  without
              restarting),   NLOPT_LD_TNEWTON_RESTART   (same   without   preconditioning),   and
              NLOPT_LD_TNEWTON (same without restarting or preconditioning).

       NLOPT_GN_CRS2_LM
              Global (G) derivative-free (N) optimization  using  the  controlled  random  search
              (CRS2) algorithm of Price, with the "local mutation" (LM) modification suggested by
              Kaelo and Ali.

       NLOPT_GN_ISRES
               Global (G) derivative-free (N) optimization using a genetic algorithm (with
               mutation and differential-evolution heuristics) and a stochastic ranking to
               handle nonlinear inequality and equality constraints, as suggested by
               Runarsson and Yao.

       NLOPT_G_MLSL_LDS, NLOPT_G_MLSL
              Global (G) optimization using the multi-level single-linkage (MLSL) algorithm  with
              a  low-discrepancy  sequence  (LDS)  or  pseudorandom  numbers, respectively.  This
              algorithm executes a low-discrepancy or pseudorandom sequence  of  local  searches,
              with  a  clustering  heuristic  to avoid multiple local searches for the same local
              optimum.  The local search algorithm must  be  specified,  along  with  termination
              criteria/tolerances  for  the  local searches, by nlopt_set_local_optimizer.  (This
              subsidiary algorithm can be with or without derivatives, and determines whether the
              objective function needs gradients.)

       NLOPT_LD_MMA, NLOPT_LD_CCSAQ
              Local  (L)  gradient-based  (D)  optimization using the method of moving asymptotes
              (MMA), or rather a refined version  of  the  algorithm  as  published  by  Svanberg
              (2002).   (NLopt  uses  an  independent free-software/open-source implementation of
              Svanberg's algorithm.) CCSAQ is a related algorithm  from  Svanberg's  paper  which
              uses  a  local  quadratic approximation rather than the more-complicated MMA model;
              the two  usually  have  similar  convergence  rates.   The  NLOPT_LD_MMA  algorithm
              supports  both  bound-constrained and unconstrained optimization, and also supports
              an arbitrary number (m) of  nonlinear  inequality  (not  equality)  constraints  as
              described above.

       NLOPT_LD_SLSQP
              Local  (L)  gradient-based  (D) optimization using sequential quadratic programming
              and  BFGS  updates,  supporting  arbitrary  nonlinear   inequality   and   equality
              constraints,  based on the code by Dieter Kraft (1988) adapted for use by the SciPy
              project.  Note that this  algorithm  uses  dense-matrix  methods  requiring  O(n^2)
              storage  and O(n^3) time, making it less practical for problems involving more than
              a few thousand parameters.

       NLOPT_LN_COBYLA
              Local (L) derivative-free (N) optimization using the  COBYLA  algorithm  of  Powell
              (Constrained Optimization BY Linear Approximations).  The NLOPT_LN_COBYLA algorithm
              supports both bound-constrained and unconstrained optimization, and  also  supports
              an  arbitrary  number (m) of nonlinear inequality/equality constraints as described
              above.

       NLOPT_LN_NEWUOA_BOUND
              Local (L) derivative-free (N) optimization using a variant of the NEWUOA  algorithm
              of  Powell, based on successive quadratic approximations of the objective function.
              We have modified the algorithm to support bound constraints.  The  original  NEWUOA
              algorithm  is  also  available,  as NLOPT_LN_NEWUOA, but this algorithm ignores the
              bound constraints lb and ub, and so  it  should  only  be  used  for  unconstrained
              problems.  Mostly superseded by BOBYQA.

       NLOPT_LN_BOBYQA
              Local  (L)  derivative-free  (N) optimization using the BOBYQA algorithm of Powell,
              based on successive quadratic approximations of the objective function,  supporting
              bound constraints.

       NLOPT_AUGLAG
              Optimize  an  objective  with  nonlinear  inequality/equality  constraints  via  an
              unconstrained (or bound-constrained)  optimization  algorithm,  using  a  gradually
              increasing  "augmented  Lagrangian" penalty for violated constraints.  Requires you
              to specify another optimization  algorithm  for  optimizing  the  objective+penalty
              function,  using  nlopt_set_local_optimizer.   (This  subsidiary  algorithm  can be
              global or local and with or without derivatives,  but  you  must  specify  its  own
              termination  criteria.)  A variant, NLOPT_AUGLAG_EQ, only uses the penalty approach
              for equality constraints, while inequality constraints are handled directly by  the
              subsidiary algorithm (restricting the choice of subsidiary algorithms to those that
              can handle inequality constraints).
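
       As a sketch of the AUGLAG pattern (the dimension and tolerance are illustrative),
       the subsidiary algorithm is configured through a second nlopt_opt object:

         #include <nlopt.h>

         nlopt_opt make_auglag_opt(void)   /* hypothetical helper, n = 2 */
         {
             nlopt_opt opt   = nlopt_create(NLOPT_AUGLAG, 2);
             nlopt_opt local = nlopt_create(NLOPT_LN_BOBYQA, 2);
             nlopt_set_xtol_rel(local, 1e-6);      /* subsidiary termination criterion */
             nlopt_set_local_optimizer(opt, local);
             nlopt_destroy(local);   /* opt keeps a copy; see LOCAL OPTIMIZER below */
             return opt;
         }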

STOPPING CRITERIA

       Multiple stopping criteria for  the  optimization  are  supported,  as  specified  by  the
       functions to modify a given optimization problem opt.  The optimization halts whenever any
       one of these criteria is satisfied.  In some cases,  the  precise  interpretation  of  the
       stopping  criterion depends on the optimization algorithm above (although we have tried to
       make them as consistent as reasonably possible), and some algorithms do not support all of
       the stopping criteria.

       Important:  you  do not need to use all of the stopping criteria!  In most cases, you only
       need one or two, and can omit the remainder (all criteria are disabled by default).

       nlopt_result nlopt_set_stopval(nlopt_opt opt,
                                      double stopval);

              Stop when an objective value of at least stopval is found: stop minimizing  when  a
              value  <=  stopval  is  found, or stop maximizing when a value >= stopval is found.
              (Setting stopval to -HUGE_VAL for minimizing or +HUGE_VAL for  maximizing  disables
              this stopping criterion.)

       nlopt_result nlopt_set_ftol_rel(nlopt_opt opt,
                                       double tol);

              Set  relative  tolerance  on  function value: stop when an optimization step (or an
              estimate of the optimum) changes the function value by less than tol multiplied  by
              the  absolute  value  of  the  function  value.   (If there is any chance that your
              optimum function value is close  to  zero,  you  might  want  to  set  an  absolute
              tolerance  with  nlopt_set_ftol_abs as well.)  Criterion is disabled if tol is non-
              positive.

       nlopt_result nlopt_set_ftol_abs(nlopt_opt opt,
                                       double tol);

              Set absolute tolerance on function value: stop when an  optimization  step  (or  an
              estimate of the optimum) changes the function value by less than tol.  Criterion is
              disabled if tol is non-positive.

       nlopt_result nlopt_set_xtol_rel(nlopt_opt opt,
                                       double tol);

              Set relative tolerance on design variables: stop when an optimization step  (or  an
              estimate  of the optimum) changes every design variable by less than tol multiplied
              by the absolute value of the design variable.  (If there  is  any  chance  that  an
              optimal  design  variable  is  close  to  zero,  you  might want to set an absolute
              tolerance with nlopt_set_xtol_abs as well.)  Criterion is disabled if tol  is  non-
              positive.

       nlopt_result nlopt_set_xtol_abs(nlopt_opt opt,
                                       const double* tol);

              Set  absolute  tolerances  on  design  variables.   tol is a pointer to an array of
              length n giving the tolerances: stop when an optimization step (or an  estimate  of
              the optimum) changes every design variable x[i] by less than tol[i].

              For  convenience, the following function may be used to set the absolute tolerances
              in all n design variables to the same value:

                nlopt_result nlopt_set_xtol_abs1(nlopt_opt opt,
                                                 double tol);

              Criterion is disabled if tol is non-positive.

       nlopt_result nlopt_set_maxeval(nlopt_opt opt,
                                      int maxeval);

              Stop when the number of function evaluations  exceeds  maxeval.   (This  is  not  a
              strict  maximum:  the  number  of function evaluations may exceed maxeval slightly,
              depending upon the algorithm.)  Criterion is disabled if maxeval is non-positive.

       nlopt_result nlopt_set_maxtime(nlopt_opt opt,
                                      double maxtime);

              Stop when the optimization time (in seconds)  exceeds  maxtime.   (This  is  not  a
              strict  maximum: the time may exceed maxtime slightly, depending upon the algorithm
              and on how slow your function evaluation is.)  Criterion is disabled if maxtime  is
              non-positive.
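
       For instance, a typical setup might combine a few of these criteria (the specific
       values below are illustrative only):

         #include <nlopt.h>

         void set_example_stopping(nlopt_opt opt)   /* hypothetical helper */
         {
             nlopt_set_ftol_rel(opt, 1e-8);   /* relative function-value tolerance */
             nlopt_set_xtol_rel(opt, 1e-6);   /* relative design-variable tolerance */
             nlopt_set_maxeval(opt, 10000);   /* cap on function evaluations */
             nlopt_set_maxtime(opt, 60.0);    /* cap on wall-clock time, in seconds */
         }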

RETURN VALUE

       Most  of  the  NLopt  functions  return an enumerated constant of type nlopt_result, which
       takes on one of the following values:

   Successful termination (positive return values):
       NLOPT_SUCCESS
              Generic success return value.

       NLOPT_STOPVAL_REACHED
              Optimization stopped because stopval (above) was reached.

       NLOPT_FTOL_REACHED
              Optimization stopped because ftol_rel or ftol_abs (above) was reached.

       NLOPT_XTOL_REACHED
              Optimization stopped because xtol_rel or xtol_abs (above) was reached.

       NLOPT_MAXEVAL_REACHED
              Optimization stopped because maxeval (above) was reached.

       NLOPT_MAXTIME_REACHED
              Optimization stopped because maxtime (above) was reached.

   Error codes (negative return values):
       NLOPT_FAILURE
              Generic failure code.

       NLOPT_INVALID_ARGS
              Invalid arguments (e.g. lower bounds are  bigger  than  upper  bounds,  an  unknown
              algorithm was specified, etcetera).

       NLOPT_OUT_OF_MEMORY
              Ran out of memory.

       NLOPT_ROUNDOFF_LIMITED
              Halted because roundoff errors limited progress.

       NLOPT_FORCED_STOP
              Halted   because  the  user  called  nlopt_force_stop(opt)  on  the  optimization's
              nlopt_opt object opt from the user's objective function.
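
       A sketch of checking the result of nlopt_optimize (run_and_check is a hypothetical
       helper):

         #include <stdio.h>
         #include <nlopt.h>

         void run_and_check(nlopt_opt opt, double *x)   /* hypothetical helper */
         {
             double opt_f;
             nlopt_result res = nlopt_optimize(opt, x, &opt_f);
             if (res < 0)   /* negative codes indicate failure */
                 fprintf(stderr, "NLopt failed with code %d\n", (int) res);
             else           /* positive codes say which stopping criterion was reached */
                 printf("stopped with code %d, optimum %g\n", (int) res, opt_f);
         }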

LOCAL OPTIMIZER

       Some of the algorithms, especially MLSL and AUGLAG, use a different optimization algorithm
       as  a  subroutine,  typically  for  local  optimization.   You can change the local search
       algorithm and its tolerances by calling:

         nlopt_result nlopt_set_local_optimizer(nlopt_opt opt,
                                                const nlopt_opt local_opt);

       Here, local_opt is another nlopt_opt object whose parameters are  used  to  determine  the
       local  search  algorithm  and  stopping  criteria.   (The  objective function, bounds, and
       nonlinear-constraint parameters of local_opt are ignored.)  The dimension n  of  local_opt
       must match that of opt.

       This  function  makes  a  copy  of  the  local_opt  object, so you can freely destroy your
       original local_opt afterwards.
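
       For example, a sketch of MLSL with an L-BFGS local search (the tolerance is
       illustrative):

         #include <nlopt.h>

         nlopt_opt make_mlsl_opt(unsigned n)   /* hypothetical helper */
         {
             nlopt_opt opt   = nlopt_create(NLOPT_G_MLSL_LDS, n);
             nlopt_opt local = nlopt_create(NLOPT_LD_LBFGS, n);
             nlopt_set_ftol_rel(local, 1e-8);       /* local-search termination */
             nlopt_set_local_optimizer(opt, local); /* copies local into opt */
             nlopt_destroy(local);                  /* safe: opt retains its own copy */
             return opt;
         }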

INITIAL STEP SIZE

       For derivative-free local-optimization algorithms, the optimizer must  somehow  decide  on
       some  initial  step  size to perturb x by when it begins the optimization.  This step size
       should be big enough that the value of the objective changes significantly,  but  not  too
       big  if  you  want to find the local optimum nearest to x.  By default, NLopt chooses this
       initial step size heuristically from the bounds, tolerances, and  other  information,  but
       this may not always be the best choice.

       You can modify the initial step size by calling:

         nlopt_result nlopt_set_initial_step(nlopt_opt opt,
                                             const double* dx);

       Here,  dx  is  an  array  of  length n containing the (nonzero) initial step size for each
       component of the design parameters x.  For convenience, if you want to set the step  sizes
       in every direction to be the same value, you can instead call:

         nlopt_result nlopt_set_initial_step1(nlopt_opt opt,
                                              double dx);
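
       For instance (the step sizes below are illustrative):

         #include <nlopt.h>

         void set_example_steps(nlopt_opt opt)   /* hypothetical helper, n = 2 */
         {
             double dx[2] = { 0.5, 0.01 };        /* per-component initial steps */
             nlopt_set_initial_step(opt, dx);
             /* or, for the same step in every direction:
                nlopt_set_initial_step1(opt, 0.1); */
         }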

STOCHASTIC POPULATION

       Several  of  the  stochastic  search  algorithms  (e.g.,  CRS,  MLSL,  and ISRES) start by
       generating some initial "population"  of  random  points  x.   By  default,  this  initial
       population  size  is  chosen heuristically in some algorithm-specific way, but the initial
       population can be changed by calling:

         nlopt_result nlopt_set_population(nlopt_opt opt,
                                           unsigned pop);

       (A pop of zero implies that the heuristic default will be used.)

PSEUDORANDOM NUMBERS

       For stochastic optimization algorithms, we  use  pseudorandom  numbers  generated  by  the
       Mersenne Twister algorithm, based on code from Makoto Matsumoto.  By default, the seed for
       the random numbers is generated from the system time, so that they will be different  each
       time  you  run  the program.  If you want to use deterministic random numbers, you can set
       the seed by calling:

         void nlopt_srand(unsigned long seed);

       Some of the algorithms also support using low-discrepancy sequences (LDS), sometimes known
       as  quasi-random  numbers.   NLopt uses the Sobol LDS, which is implemented for up to 1111
       dimensions.

AUTHORS

       Written by Steven G. Johnson.

       Copyright (c) 2007-2014 Massachusetts Institute of Technology.

SEE ALSO

       nlopt_minimize(3)