# Source code

Module: pySOT.adaptive_sampling

Authors: David Eriksson, David Bindel
class pySOT.adaptive_sampling.CandidateDDS(data, numcand=None, weights=None)

An implementation of the DDS candidate points method

Only a few candidate points are generated and the candidate point with the lowest value predicted by the surrogate model is selected. The DDS method only perturbs a subset of the dimensions when perturbing the best solution. The probability for a dimension to be perturbed decreases after each evaluation and is capped in order to guarantee global convergence.
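The subset-selection rule described above can be sketched in a few lines. This is an illustrative sketch, not pySOT's implementation: the logarithmic decay schedule and the `min_prob` cap are assumptions chosen to mirror the description.

```python
import math
import random

def dds_subset(dim, num_evals, budget, min_prob=0.05):
    """Choose which dimensions to perturb at evaluation number num_evals.

    The perturbation probability decays logarithmically with the number of
    evaluations and is capped below by min_prob; the exact schedule and cap
    here are illustrative, not pySOT's.
    """
    prob = max(min_prob, 1.0 - math.log(num_evals) / math.log(budget))
    subset = [i for i in range(dim) if random.random() < prob]
    if not subset:  # guarantee at least one perturbed dimension
        subset.append(random.randrange(dim))
    return subset
```

Early on the probability is close to 1, so most dimensions are perturbed; late in the run only a few are, which concentrates the search around the best solution.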

Parameters:

- data (Object) – Optimization problem object
- numcand (int) – Number of candidate points to be used. Default is min([5000, 100*data.dim])
- weights (list of numpy.array) – Weights used for the merit function, to balance exploration vs. exploitation

Raises:

- ValueError – If the number of candidate points is invalid or if the weights are not a list of values in [0, 1]

Variables:

- data – Optimization problem object
- fhat – Response surface object
- xrange – Variable ranges, xup - xlow
- dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
- weights – Weights used for the merit function
- proposed_points – List of points proposed to the optimization algorithm
- dmerit – Minimum distance between the candidate points and the proposed points
- xcand – Candidate points
- fhvals – Values predicted by the surrogate model
- next_weight – Index of the next weight to be used
- numcand – Number of candidate points
- budget – Remaining evaluation budget
- probfun – Function that computes the perturbation probability for a given iteration

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:

- start_sample (numpy.array) – Points in the experimental design
- fhat (Object) – Surrogate model
- budget (int) – Evaluation budget

make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:

- npts (int) – Number of points to select
- xbest (numpy.array) – Best solution found so far
- sigma (float) – Current sampling radius w.r.t. the unit box
- subset (numpy.array) – Coordinates to perturb; the others are fixed
- proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
- merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim

Return type: numpy.array
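The default merit, candidate_merit_weighted_distance, trades off the surrogate's predicted value against the distance to previously proposed points. A self-contained sketch of such a weighted-distance merit follows; the min-max normalization is an assumption, and pySOT's exact formula may differ.

```python
import numpy as np

def weighted_distance_merit(xcand, fhvals, proposed, weight):
    """Score candidates as weight * scaled surrogate value
    + (1 - weight) * (1 - scaled distance to the nearest proposed point).

    Lower scores are better: weight near 1 favors exploitation (low
    predicted value), weight near 0 favors exploration (large distance).
    """
    fh = np.asarray(fhvals, dtype=float)
    fscale = (fh - fh.min()) / max(fh.max() - fh.min(), 1e-12)
    # distance from each candidate to its nearest proposed point
    dists = np.min(np.linalg.norm(
        xcand[:, None, :] - proposed[None, :, :], axis=2), axis=1)
    dscale = (dists - dists.min()) / max(dists.max() - dists.min(), 1e-12)
    return weight * fscale + (1.0 - weight) * (1.0 - dscale)
```

Cycling through a list of weights (the `weights` constructor argument) alternates between exploratory and exploitative proposals.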

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise

Return type: bool

class pySOT.adaptive_sampling.CandidateDDS_CONT(data, numcand=None, weights=None)

CandidateDDS where only the continuous variables are perturbed

Parameters:

- data (Object) – Optimization problem object
- numcand (int) – Number of candidate points to be used. Default is min([5000, 100*data.dim])
- weights (list of numpy.array) – Weights used for the merit function, to balance exploration vs. exploitation

Raises:

- ValueError – If the number of candidate points is invalid or if the weights are not a list of values in [0, 1]

Variables:

- data – Optimization problem object
- fhat – Response surface object
- xrange – Variable ranges, xup - xlow
- dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
- weights – Weights used for the merit function
- proposed_points – List of points proposed to the optimization algorithm
- dmerit – Minimum distance between the candidate points and the proposed points
- xcand – Candidate points
- fhvals – Values predicted by the surrogate model
- next_weight – Index of the next weight to be used
- numcand – Number of candidate points
- budget – Remaining evaluation budget
- probfun – Function that computes the perturbation probability for a given iteration

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:

- start_sample (numpy.array) – Points in the experimental design
- fhat (Object) – Surrogate model
- budget (int) – Evaluation budget

make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:

- npts (int) – Number of points to select
- xbest (numpy.array) – Best solution found so far
- sigma (float) – Current sampling radius w.r.t. the unit box
- subset (numpy.array) – Coordinates to perturb; the others are fixed
- proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
- merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim

Return type: numpy.array

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise

Return type: bool

class pySOT.adaptive_sampling.CandidateDDS_INT(data, numcand=None, weights=None)

CandidateDDS where only the integer variables are perturbed

Parameters:

- data (Object) – Optimization problem object
- numcand (int) – Number of candidate points to be used. Default is min([5000, 100*data.dim])
- weights (list of numpy.array) – Weights used for the merit function, to balance exploration vs. exploitation

Raises:

- ValueError – If the number of candidate points is invalid or if the weights are not a list of values in [0, 1]

Variables:

- data – Optimization problem object
- fhat – Response surface object
- xrange – Variable ranges, xup - xlow
- dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
- weights – Weights used for the merit function
- proposed_points – List of points proposed to the optimization algorithm
- dmerit – Minimum distance between the candidate points and the proposed points
- xcand – Candidate points
- fhvals – Values predicted by the surrogate model
- next_weight – Index of the next weight to be used
- numcand – Number of candidate points
- budget – Remaining evaluation budget
- probfun – Function that computes the perturbation probability for a given iteration

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:

- start_sample (numpy.array) – Points in the experimental design
- fhat (Object) – Surrogate model
- budget (int) – Evaluation budget

make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:

- npts (int) – Number of points to select
- xbest (numpy.array) – Best solution found so far
- sigma (float) – Current sampling radius w.r.t. the unit box
- subset (numpy.array) – Coordinates to perturb; the others are fixed
- proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
- merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim

Return type: numpy.array

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise

Return type: bool

class pySOT.adaptive_sampling.CandidateDYCORS(data, numcand=None, weights=None)

An implementation of the DYCORS method

The DYCORS method only perturbs a subset of the dimensions when perturbing the best solution. The probability for a dimension to be perturbed decreases after each evaluation and is capped in order to guarantee global convergence.
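The decreasing, capped perturbation probability can be sketched as follows. The initial value min(20/dim, 1) follows the DYCORS paper, but treat the exact decay schedule and the `min_prob` floor as illustrative assumptions rather than pySOT's `probfun`.

```python
import math

def dycors_prob(i, n0, max_evals, dim, min_prob=0.05):
    """Probability of perturbing one coordinate at evaluation i (i >= n0).

    Starts at min(20/dim, 1) when the initial phase ends (i == n0) and
    decays logarithmically as the remaining budget is spent, floored at
    min_prob so every coordinate keeps a chance of being perturbed.
    """
    p0 = min(20.0 / dim, 1.0)
    decay = 1.0 - math.log(i - n0 + 1) / math.log(max_evals - n0)
    return max(min_prob, p0 * decay)
```

In high dimensions the initial probability 20/dim is already small, so each candidate perturbs only a handful of coordinates even at the start.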

Parameters:

- data (Object) – Optimization problem object
- numcand (int) – Number of candidate points to be used. Default is min([5000, 100*data.dim])
- weights (list of numpy.array) – Weights used for the merit function, to balance exploration vs. exploitation

Raises:

- ValueError – If the number of candidate points is invalid or if the weights are not a list of values in [0, 1]

Variables:

- data – Optimization problem object
- fhat – Response surface object
- xrange – Variable ranges, xup - xlow
- dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
- weights – Weights used for the merit function
- proposed_points – List of points proposed to the optimization algorithm
- dmerit – Minimum distance between the candidate points and the proposed points
- xcand – Candidate points
- fhvals – Values predicted by the surrogate model
- next_weight – Index of the next weight to be used
- numcand – Number of candidate points
- budget – Remaining evaluation budget
- minprob – Smallest allowed perturbation probability
- n0 – Number of evaluations spent when the initial phase ended
- probfun – Function that computes the perturbation probability for a given iteration

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:

- start_sample (numpy.array) – Points in the experimental design
- fhat (Object) – Surrogate model
- budget (int) – Evaluation budget

make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:

- npts (int) – Number of points to select
- xbest (numpy.array) – Best solution found so far
- sigma (float) – Current sampling radius w.r.t. the unit box
- subset (numpy.array) – Coordinates to perturb; the others are fixed
- proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
- merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim

Return type: numpy.array

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise

Return type: bool

class pySOT.adaptive_sampling.CandidateDYCORS_CONT(data, numcand=None, weights=None)

CandidateDYCORS where only the continuous variables are perturbed

Parameters:

- data (Object) – Optimization problem object
- numcand (int) – Number of candidate points to be used. Default is min([5000, 100*data.dim])
- weights (list of numpy.array) – Weights used for the merit function, to balance exploration vs. exploitation

Raises:

- ValueError – If the number of candidate points is invalid or if the weights are not a list of values in [0, 1]

Variables:

- data – Optimization problem object
- fhat – Response surface object
- xrange – Variable ranges, xup - xlow
- dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
- weights – Weights used for the merit function
- proposed_points – List of points proposed to the optimization algorithm
- dmerit – Minimum distance between the candidate points and the proposed points
- xcand – Candidate points
- fhvals – Values predicted by the surrogate model
- next_weight – Index of the next weight to be used
- numcand – Number of candidate points
- budget – Remaining evaluation budget
- probfun – Function that computes the perturbation probability for a given iteration

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:

- start_sample (numpy.array) – Points in the experimental design
- fhat (Object) – Surrogate model
- budget (int) – Evaluation budget

make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:

- npts (int) – Number of points to select
- xbest (numpy.array) – Best solution found so far
- sigma (float) – Current sampling radius w.r.t. the unit box
- subset (numpy.array) – Coordinates to perturb; the others are fixed
- proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
- merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim

Return type: numpy.array

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise

Return type: bool

class pySOT.adaptive_sampling.CandidateDYCORS_INT(data, numcand=None, weights=None)

CandidateDYCORS where only the integer variables are perturbed

Parameters:

- data (Object) – Optimization problem object
- numcand (int) – Number of candidate points to be used. Default is min([5000, 100*data.dim])
- weights (list of numpy.array) – Weights used for the merit function, to balance exploration vs. exploitation

Raises:

- ValueError – If the number of candidate points is invalid or if the weights are not a list of values in [0, 1]

Variables:

- data – Optimization problem object
- fhat – Response surface object
- xrange – Variable ranges, xup - xlow
- dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
- weights – Weights used for the merit function
- proposed_points – List of points proposed to the optimization algorithm
- dmerit – Minimum distance between the candidate points and the proposed points
- xcand – Candidate points
- fhvals – Values predicted by the surrogate model
- next_weight – Index of the next weight to be used
- numcand – Number of candidate points
- budget – Remaining evaluation budget
- probfun – Function that computes the perturbation probability for a given iteration

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:

- start_sample (numpy.array) – Points in the experimental design
- fhat (Object) – Surrogate model
- budget (int) – Evaluation budget

make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:

- npts (int) – Number of points to select
- xbest (numpy.array) – Best solution found so far
- sigma (float) – Current sampling radius w.r.t. the unit box
- subset (numpy.array) – Coordinates to perturb; the others are fixed
- proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
- merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim

Return type: numpy.array

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise

Return type: bool

class pySOT.adaptive_sampling.CandidateSRBF(data, numcand=None, weights=None)

An implementation of Stochastic RBF

This is an implementation of the candidate points method that is proposed in the first SRBF paper. Candidate points are generated by making normally distributed perturbations with standard deviation sigma around the best solution. The candidate point that minimizes a specified merit function is selected as the next point to evaluate.
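The generate-and-select loop described above can be sketched directly. This assumes the domain is scaled to the unit box and uses a toy surrogate; it is an illustration of the SRBF candidate-point idea, not pySOT's code.

```python
import numpy as np

def srbf_candidates(xbest, sigma, numcand, rng):
    """Normally distributed perturbations around the best point,
    clipped to the unit box (bounds assumed scaled to [0, 1])."""
    cand = xbest + sigma * rng.standard_normal((numcand, xbest.size))
    return np.clip(cand, 0.0, 1.0)

def select_next(cand, surrogate):
    """Return the candidate with the lowest predicted value."""
    return cand[np.argmin(surrogate(cand))]
```

In the full method the plain argmin over predicted values is replaced by the merit function, which also rewards distance from already-proposed points.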

Parameters:

- data (Object) – Optimization problem object
- numcand (int) – Number of candidate points to be used. Default is min([5000, 100*data.dim])
- weights (list of numpy.array) – Weights used for the merit function, to balance exploration vs. exploitation

Raises:

- ValueError – If the number of candidate points is invalid or if the weights are not a list of values in [0, 1]

Variables:

- data – Optimization problem object
- fhat – Response surface object
- xrange – Variable ranges, xup - xlow
- dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
- weights – Weights used for the merit function
- proposed_points – List of points proposed to the optimization algorithm
- dmerit – Minimum distance between the candidate points and the proposed points
- xcand – Candidate points
- fhvals – Values predicted by the surrogate model
- next_weight – Index of the next weight to be used
- numcand – Number of candidate points
- budget – Remaining evaluation budget

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:

- start_sample (numpy.array) – Points in the experimental design
- fhat (Object) – Surrogate model
- budget (int) – Evaluation budget

make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:

- npts (int) – Number of points to select
- xbest (numpy.array) – Best solution found so far
- sigma (float) – Current sampling radius w.r.t. the unit box
- subset (numpy.array) – Coordinates to perturb; the others are fixed
- proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
- merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim

Return type: numpy.array

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise

Return type: bool

class pySOT.adaptive_sampling.CandidateSRBF_CONT(data, numcand=None, weights=None)

CandidateSRBF where only the continuous variables are perturbed

Parameters:

- data (Object) – Optimization problem object
- numcand (int) – Number of candidate points to be used. Default is min([5000, 100*data.dim])
- weights (list of numpy.array) – Weights used for the merit function, to balance exploration vs. exploitation

Raises:

- ValueError – If the number of candidate points is invalid or if the weights are not a list of values in [0, 1]

Variables:

- data – Optimization problem object
- fhat – Response surface object
- xrange – Variable ranges, xup - xlow
- dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
- weights – Weights used for the merit function
- proposed_points – List of points proposed to the optimization algorithm
- dmerit – Minimum distance between the candidate points and the proposed points
- xcand – Candidate points
- fhvals – Values predicted by the surrogate model
- next_weight – Index of the next weight to be used
- numcand – Number of candidate points
- budget – Remaining evaluation budget
- probfun – Function that computes the perturbation probability for a given iteration

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:

- start_sample (numpy.array) – Points in the experimental design
- fhat (Object) – Surrogate model
- budget (int) – Evaluation budget

make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:

- npts (int) – Number of points to select
- xbest (numpy.array) – Best solution found so far
- sigma (float) – Current sampling radius w.r.t. the unit box
- subset (numpy.array) – Coordinates to perturb; the others are fixed
- proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
- merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim

Return type: numpy.array

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise

Return type: bool

class pySOT.adaptive_sampling.CandidateSRBF_INT(data, numcand=None, weights=None)

CandidateSRBF where only the integer variables are perturbed

Parameters:

- data (Object) – Optimization problem object
- numcand (int) – Number of candidate points to be used. Default is min([5000, 100*data.dim])
- weights (list of numpy.array) – Weights used for the merit function, to balance exploration vs. exploitation

Raises:

- ValueError – If the number of candidate points is invalid or if the weights are not a list of values in [0, 1]

Variables:

- data – Optimization problem object
- fhat – Response surface object
- xrange – Variable ranges, xup - xlow
- dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
- weights – Weights used for the merit function
- proposed_points – List of points proposed to the optimization algorithm
- dmerit – Minimum distance between the candidate points and the proposed points
- xcand – Candidate points
- fhvals – Values predicted by the surrogate model
- next_weight – Index of the next weight to be used
- numcand – Number of candidate points
- budget – Remaining evaluation budget
- probfun – Function that computes the perturbation probability for a given iteration

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:

- start_sample (numpy.array) – Points in the experimental design
- fhat (Object) – Surrogate model
- budget (int) – Evaluation budget

make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:

- npts (int) – Number of points to select
- xbest (numpy.array) – Best solution found so far
- sigma (float) – Current sampling radius w.r.t. the unit box
- subset (numpy.array) – Coordinates to perturb; the others are fixed
- proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
- merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim

Return type: numpy.array

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise

Return type: bool

class pySOT.adaptive_sampling.CandidateUniform(data, numcand=None, weights=None)

Create Candidate points by sampling uniformly in the domain

Parameters:

- data (Object) – Optimization problem object
- numcand (int) – Number of candidate points to be used. Default is min([5000, 100*data.dim])
- weights (list of numpy.array) – Weights used for the merit function, to balance exploration vs. exploitation

Raises:

- ValueError – If the number of candidate points is invalid or if the weights are not a list of values in [0, 1]

Variables:

- data – Optimization problem object
- fhat – Response surface object
- xrange – Variable ranges, xup - xlow
- dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
- weights – Weights used for the merit function
- proposed_points – List of points proposed to the optimization algorithm
- dmerit – Minimum distance between the candidate points and the proposed points
- xcand – Candidate points
- fhvals – Values predicted by the surrogate model
- next_weight – Index of the next weight to be used
- numcand – Number of candidate points
- budget – Remaining evaluation budget
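The uniform strategy combined with the dtol distance filter can be sketched as follows; this is an illustration of the idea, not pySOT's implementation.

```python
import numpy as np

def uniform_candidates(lb, ub, numcand, evaluated, dtol, rng):
    """Sample candidates uniformly in [lb, ub] and drop any candidate
    closer than dtol to an already evaluated point."""
    cand = lb + (ub - lb) * rng.random((numcand, lb.size))
    dists = np.min(np.linalg.norm(
        cand[:, None, :] - evaluated[None, :, :], axis=2), axis=1)
    return cand[dists >= dtol]
```

The surviving candidates are then scored with the merit function, exactly as in the perturbation-based strategies.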

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:

- start_sample (numpy.array) – Points in the experimental design
- fhat (Object) – Surrogate model
- budget (int) – Evaluation budget

make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:

- npts (int) – Number of points to select
- xbest (numpy.array) – Best solution found so far
- sigma (float) – Current sampling radius w.r.t. the unit box
- subset (numpy.array) – Coordinates to perturb; the others are fixed
- proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
- merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim

Return type: numpy.array

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise

Return type: bool

class pySOT.adaptive_sampling.CandidateUniform_CONT(data, numcand=None, weights=None)

CandidateUniform where only the continuous variables are perturbed

Parameters:

- data (Object) – Optimization problem object
- numcand (int) – Number of candidate points to be used. Default is min([5000, 100*data.dim])
- weights (list of numpy.array) – Weights used for the merit function, to balance exploration vs. exploitation

Raises:

- ValueError – If the number of candidate points is invalid or if the weights are not a list of values in [0, 1]

Variables:

- data – Optimization problem object
- fhat – Response surface object
- xrange – Variable ranges, xup - xlow
- dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
- weights – Weights used for the merit function
- proposed_points – List of points proposed to the optimization algorithm
- dmerit – Minimum distance between the candidate points and the proposed points
- xcand – Candidate points
- fhvals – Values predicted by the surrogate model
- next_weight – Index of the next weight to be used
- numcand – Number of candidate points
- budget – Remaining evaluation budget
- probfun – Function that computes the perturbation probability for a given iteration

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:

- start_sample (numpy.array) – Points in the experimental design
- fhat (Object) – Surrogate model
- budget (int) – Evaluation budget

make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:

- npts (int) – Number of points to select
- xbest (numpy.array) – Best solution found so far
- sigma (float) – Current sampling radius w.r.t. the unit box
- subset (numpy.array) – Coordinates to perturb; the others are fixed
- proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
- merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim

Return type: numpy.array

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise

Return type: bool

class pySOT.adaptive_sampling.CandidateUniform_INT(data, numcand=None, weights=None)

CandidateUniform where only the integer variables are perturbed

Parameters:

- data (Object) – Optimization problem object
- numcand (int) – Number of candidate points to be used. Default is min([5000, 100*data.dim])
- weights (list of numpy.array) – Weights used for the merit function, to balance exploration vs. exploitation

Raises:

- ValueError – If the number of candidate points is invalid or if the weights are not a list of values in [0, 1]

Variables:

- data – Optimization problem object
- fhat – Response surface object
- xrange – Variable ranges, xup - xlow
- dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
- weights – Weights used for the merit function
- proposed_points – List of points proposed to the optimization algorithm
- dmerit – Minimum distance between the candidate points and the proposed points
- xcand – Candidate points
- fhvals – Values predicted by the surrogate model
- next_weight – Index of the next weight to be used
- numcand – Number of candidate points
- budget – Remaining evaluation budget
- probfun – Function that computes the perturbation probability for a given iteration

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:
• start_sample (numpy.array) – Points in the experimental design
• fhat (Object) – Surrogate model
• budget (int) – Evaluation budget
make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:
• npts (int) – Number of points to select
• xbest (numpy.array) – Best solution found so far
• sigma (float) – Current sampling radius w.r.t. the unit box
• subset (numpy.array) – Coordinates to perturb, the others are fixed
• proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
• merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim (numpy.array)

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise (bool)
class pySOT.adaptive_sampling.GeneticAlgorithm(data)

Genetic algorithm for minimizing the surrogate model

Parameters: data (Object) – Optimization problem object

Attributes:
• data – Optimization problem object
• fhat – Response surface object
• xrange – Variable ranges, xup - xlow
• dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
• proposed_points – List of points proposed to the optimization algorithm
• budget – Remaining evaluation budget

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:
• start_sample (numpy.array) – Points in the experimental design
• fhat (Object) – Surrogate model
• budget (int) – Evaluation budget
make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=None)

Proposes npts new points to evaluate

Parameters:
• npts (int) – Number of points to select
• xbest (numpy.array) – Best solution found so far (Ignored)
• sigma (float) – Current sampling radius w.r.t. the unit box (Ignored)
• subset (numpy.array) – Coordinates to perturb (Ignored)
• proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
• merit (Object) – Merit function for selecting candidate points (Ignored)

Returns: Points selected for evaluation, of size npts x dim (numpy.array)
remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise (bool)
class pySOT.adaptive_sampling.MultiSampling(strategy_list, cycle)

Maintains a list of adaptive sampling methods

A collection of adaptive sampling methods and weights so that the user can use multiple adaptive sampling methods for the same optimization problem. This object keeps an internal list of proposed points in order to be able to compute the minimum distance from a point to all proposed evaluations. This list has to be reset each time the optimization algorithm restarts

Parameters:
• strategy_list (list) – List of adaptive sampling methods to use
• cycle (list) – List of integers that specifies the sampling order, e.g., [0, 0, 1] uses method1, method1, method2, method1, method1, method2, ...

Raises: ValueError – If cycle is incorrect

Attributes:
• sampling_strategies – List of adaptive sampling methods to use
• cycle – List that specifies the sampling order
• nstrats – Number of adaptive sampling strategies
• current_strat – The next adaptive sampling strategy to be used
• proposed_points – List of points proposed to the optimization algorithm
• data – Optimization problem object
• fhat – Response surface object
• budget – Remaining evaluation budget

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Todo

Get rid of the proposed_points object and replace it by something that is controlled by the strategy.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:
• start_sample (numpy.array) – Points in the experimental design
• fhat (Object) – Surrogate model
• budget (int) – Evaluation budget
make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=<function candidate_merit_weighted_distance>)

Proposes npts new points to evaluate

Parameters:
• npts (int) – Number of points to select
• xbest (numpy.array) – Best solution found so far
• sigma (float) – Current sampling radius w.r.t. the unit box
• subset (numpy.array) – Coordinates to perturb
• proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
• merit (Object) – Merit function for selecting candidate points

Returns: Points selected for evaluation, of size npts x dim (numpy.array)

Todo

Change the merit function from being hard-coded

remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise (bool)
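
The cycle list fully determines which strategy proposes each point. A minimal sketch of that round-robin dispatch (plain Python, independent of pySOT):

```python
from itertools import cycle as make_cycle

def sampling_order(cycle_list, n_calls):
    """Return which strategy index handles each of n_calls proposals,
    repeating the user-supplied cycle, e.g. [0, 0, 1]."""
    it = make_cycle(cycle_list)
    return [next(it) for _ in range(n_calls)]

# [0, 0, 1] means: method 0 twice, then method 1, repeated.
print(sampling_order([0, 0, 1], 6))  # -> [0, 0, 1, 0, 0, 1]
```
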
class pySOT.adaptive_sampling.MultiStartGradient(data, method='L-BFGS-B', num_restarts=30)

A Multi-Start Gradient method for minimizing the surrogate model

A wrapper around the scipy.optimize implementation of box-constrained gradient-based minimization.

Parameters:
• data (Object) – Optimization problem object
• method (string) – Optimization method to use. The options are L-BFGS-B (quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno) and TNC (truncated Newton algorithm)
• num_restarts (int) – Number of restarts for the multi-start gradient method

Raises: ValueError – If the number of candidate points is incorrect or if the weights aren't a list in [0, 1]

Attributes:
• data – Optimization problem object
• fhat – Response surface object
• xrange – Variable ranges, xup - xlow
• dtol – Smallest allowed distance between evaluated points, 1e-3 * sqrt(dim)
• bounds – n x 2 matrix with lower and upper bound constraints
• proposed_points – List of points proposed to the optimization algorithm
• budget – Remaining evaluation budget

Note

This object needs to be initialized with the init method. This is done when the initial phase has finished.

Note

SLSQP is supposed to work with bound constraints but for some reason it sometimes violates the constraints anyway.

init(start_sample, fhat, budget)

Initialize the sampling method after the initial phase

This initializes the list of sampling methods after the initial phase has finished and the experimental design has been evaluated. The user provides the points in the experimental design, the surrogate model, and the remaining evaluation budget.

Parameters:
• start_sample (numpy.array) – Points in the experimental design
• fhat (Object) – Surrogate model
• budget (int) – Evaluation budget
make_points(npts, xbest, sigma, subset=None, proj_fun=None, merit=None)

Proposes npts new points to evaluate

Parameters:
• npts (int) – Number of points to select
• xbest (numpy.array) – Best solution found so far (Ignored)
• sigma (float) – Current sampling radius w.r.t. the unit box (Ignored)
• subset (numpy.array) – Coordinates to perturb (Ignored)
• proj_fun (Object) – Routine for projecting infeasible points onto the feasible region
• merit (Object) – Merit function for selecting candidate points (Ignored)

Returns: Points selected for evaluation, of size npts x dim (numpy.array)
remove_point(x)

Remove x from proposed_points

This removes x from the list of proposed points in the case where the optimization strategy decides to not evaluate x.

Parameters: x (numpy.array) – Point to be removed

Returns: True if the point was removed, False otherwise (bool)

## pySOT.ensemble_surrogate module¶

Module: ensemble_surrogate David Eriksson
class pySOT.ensemble_surrogate.EnsembleSurrogate(model_list, maxp=100)

Compute and evaluate an ensemble of interpolants.

Maintains a list of surrogates and decides how to weight them by using Dempster-Shafer theory to assign pignistic probabilities based on statistics computed using LOOCV.

Parameters:
• model_list (list) – List of surrogate models
• maxp (int) – Maximum number of points

Attributes:
• nump – Current number of points
• maxp – Initial maximum number of points (can grow)
• rhs – Right hand side for interpolation system
• x – Interpolation points
• fx – Values at interpolation points
• dim – Number of dimensions
• model_list – List of surrogate models
• weights – Weight for each surrogate model
• surrogate_list – List of internal surrogate models for LOOCV
add_point(xx, fx)

This function also updates the list of LOOCV surrogate models efficiently, by adding a single point to n of the existing models rather than rebuilding them from scratch. The scheme in which new models are built is illustrated below:

2 1 1,2

2,3 1,3 1,2 1,2,3

2,3,4 1,3,4 1,2,4 1,2,3 1,2,3,4

2,3,4,5 1,3,4,5 1,2,4,5 1,2,3,5 1,2,3,4 1,2,3,4,5

Parameters: xx (numpy.array) – Point to add fx (float) – The function value of the point to add
compute_weights()

Compute model weights

Given n observations we use n surrogates built with n-1 of the points in order to predict the value at the removed point. Based on these n predictions we calculate three different statistics:

• Correlation coefficient with true function values
• Root mean square deviation
• Mean absolute error

Based on these three statistics we compute the model weights by applying Dempster-Shafer theory to first compute the pignistic probabilities, which are taken as model weights.

Returns: Model weights numpy.array
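
The real compute_weights combines the three statistics via Dempster-Shafer theory; as a rough illustration only, here is a simplified scoring scheme (not pySOT's actual rule) that also maps LOOCV statistics to normalized weights:

```python
import numpy as np

def simple_model_weights(corr, rmse, mae):
    """Toy stand-in for compute_weights: turn per-model LOOCV statistics
    (correlation, RMSE, MAE) into normalized weights. Higher correlation
    is better; lower RMSE and MAE are better. The actual method combines
    these via Dempster-Shafer pignistic probabilities."""
    corr = np.asarray(corr, dtype=float)
    rmse = np.asarray(rmse, dtype=float)
    mae = np.asarray(mae, dtype=float)
    score = np.maximum(corr, 0.0) / (1.0 + rmse) / (1.0 + mae)
    if score.sum() == 0.0:
        return np.full(len(score), 1.0 / len(score))
    return score / score.sum()

# Model 0 predicts better on all three statistics, so it gets more weight.
w = simple_model_weights([0.9, 0.5], [0.1, 0.5], [0.1, 0.4])
```
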
deriv(x, d=None)

Evaluate the derivative of the ensemble surrogate at the point x

Parameters: x (numpy.array) – Point for which we want to compute the gradient d (None) – Not used Derivative of the ensemble surrogate at x numpy.array
eval(x, ds=None)

Evaluate the ensemble surrogate at the point x

Parameters: x (numpy.array) – Point where to evaluate ds (None) – Not used Value of the ensemble surrogate at x float
evals(x, ds=None)

Evaluate the ensemble surrogate at the points x

Parameters: x (numpy.array) – Points where to evaluate, of size npts x dim ds (numpy.array) – Distances between the centers and the points x, of size npts x ncenters Values of the ensemble surrogate at x, of length npts numpy.array
get_fx()

Get the list of function values for the data points.

Returns: List of function values numpy.array
get_x()

Get the list of data points

Returns: List of data points numpy.array
reset()

Reset the ensemble surrogate.

## pySOT.experimental_design module¶

Module: experimental_design David Eriksson Yi Shen
class pySOT.experimental_design.BoxBehnken(dim)

Box-Behnken experimental design

The Box-Behnken experimental design consists of the midpoints of the edges plus the center point of the unit hypercube

Parameters: dim (int) – Number of dimensions

Attributes:
• dim – Number of dimensions
• npts – Number of sampling points
generate_points()

Generate a matrix with the initial sample points, scaled to the unit hypercube

Returns: Box-Behnken design in the unit cube of size npts x dim numpy.array
class pySOT.experimental_design.LatinHypercube(dim, npts, criterion='c')

Latin Hypercube experimental design

Parameters:
• dim (int) – Number of dimensions
• npts (int) – Number of desired sampling points
• criterion (string) – Sampling criterion:
  "center" or "c" – center the points within the sampling intervals;
  "maximin" or "m" – maximize the minimum distance between points, but place each point in a randomized location within its interval;
  "centermaximin" or "cm" – same as "maximin", but centered within the intervals;
  "correlation" or "corr" – minimize the maximum correlation coefficient

Attributes:
• dim – Number of dimensions
• npts – Number of desired sampling points
• criterion – A string that specifies how to sample
generate_points()

Generate a matrix with the initial sample points, scaled to the unit hypercube

Returns: Latin hypercube design in the unit cube of size npts x dim numpy.array
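
As an illustration of the "center" criterion, a centered Latin hypercube can be sketched in plain numpy (this is not pySOT's implementation):

```python
import numpy as np

def centered_lhd(dim, npts, seed=None):
    """Centered Latin hypercube in the unit cube: split each dimension
    into npts equal intervals, place one point at each interval center,
    and shuffle each column independently so each row is a sample."""
    rng = np.random.default_rng(seed)
    centers = (np.arange(npts) + 0.5) / npts
    X = np.empty((npts, dim))
    for j in range(dim):
        X[:, j] = rng.permutation(centers)
    return X

X = centered_lhd(dim=3, npts=5, seed=0)
# Every column contains exactly one point per interval.
```
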
class pySOT.experimental_design.SymmetricLatinHypercube(dim, npts)

Symmetric Latin Hypercube experimental design

Parameters: dim (int) – Number of dimensions npts (int) – Number of desired sampling points dim – Number of dimensions npts – Number of desired sampling points
generate_points()

Generate a matrix with the initial sample points, scaled to the unit hypercube

Returns: Symmetric Latin hypercube design in the unit cube of size npts x dim that is of full rank numpy.array ValueError – Unable to find an SLHD of rank at least dim + 1
class pySOT.experimental_design.TwoFactorial(dim)

Two-factorial experimental design

The two-factorial experimental design consists of the corners of the unit hypercube, and hence $$2^{dim}$$ points.

Parameters: dim (int) – Number of dimensions

Raises: ValueError – If dim >= 15

Attributes:
• dim – Number of dimensions
• npts – Number of desired sampling points (2^dim)
generate_points()

Generate a matrix with the initial sample points, scaled to the unit hypercube

Returns: Full-factorial design in the unit cube of size (2^dim) x dim numpy.array
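A two-factorial design is easy to construct directly; a sketch in plain numpy:

```python
import itertools
import numpy as np

def two_factorial(dim):
    """Corners of the unit hypercube: all 2**dim points with
    coordinates in {0, 1}."""
    return np.array(list(itertools.product([0.0, 1.0], repeat=dim)))

pts = two_factorial(3)
print(pts.shape)  # -> (8, 3)
```
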

## pySOT.heuristic_methods module¶

Module: heuristic_methods David Eriksson
class pySOT.heuristic_methods.GeneticAlgorithm(function, dim, xlow, xup, intvar=None, popsize=100, ngen=100, start='SLHD', projfun=None)

Genetic algorithm

This is an implementation of a real-valued genetic algorithm that is useful for optimizing a surrogate model, but it can also be used on its own. The mutations are normally distributed perturbations, the selection mechanism is tournament selection, and the crossover operation is the standard linear combination taken at a randomly generated cutting point.

The number of evaluations is popsize x ngen

Parameters:
• function (Object) – Function that can be used to evaluate the entire population. It needs to take an input of size nindividuals x nvariables and return a numpy.array of length nindividuals
• dim (int) – Number of dimensions
• xlow (numpy.array) – Lower variable bounds, of length dim
• xup (numpy.array) – Upper variable bounds, of length dim
• intvar (list) – List of indices of the integer valued variables (e.g., [0, 1, 5])
• popsize (int) – Population size
• ngen (int) – Number of generations
• start (string) – Method for generating the initial population
• projfun (Object) – Function that can project ONE infeasible individual onto the feasible region

Attributes:
• nvariables – Number of variables (dimensions) of the objective function
• nindividuals – Population size
• lower_boundary – Lower bounds for the optimization problem
• upper_boundary – Upper bounds for the optimization problem
• integer_variables – List of variables that are integer valued
• start – Method for generating the initial population
• sigma – Perturbation radius. Each perturbation is N(0, sigma)
• p_mutation – Mutation probability (1/dim)
• tournament_size – Size of the tournament (5)
• p_cross – Cross-over probability (0.9)
• ngenerations – Number of generations
• function – Object that can be used to evaluate the objective function
• projfun – Function that can be used to project an individual onto the feasible region
optimize()

Method used to run the Genetic algorithm

Returns: Returns the best individual and its function value numpy.array, float
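As a sketch of one of the operators described above, here is one plausible reading of the linear-combination crossover at a random cut point (the exact scheme in pySOT may differ):

```python
import numpy as np

def crossover(parent1, parent2, rng):
    """Linear-combination crossover at a random cut point: coordinates
    before the cut become a convex combination of the two parents, the
    rest are copied from parent1. Illustrative only."""
    dim = len(parent1)
    cut = rng.integers(1, dim)   # random cut point in [1, dim)
    alpha = rng.random()         # random mixing weight
    child = parent1.copy()
    child[:cut] = alpha * parent1[:cut] + (1 - alpha) * parent2[:cut]
    return child

rng = np.random.default_rng(0)
p1, p2 = np.zeros(4), np.ones(4)
child = crossover(p1, p2, rng)
# The child stays inside the box spanned by its parents.
```
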

## pySOT.gp_regression module¶

Module: gp_regression David Eriksson
class pySOT.gp_regression.GPRegression(maxp=100, gp=None)

Compute and evaluate a GP

Gaussian Process Regression object.

Depends on scikit-learn==0.18.1.

More details:
http://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.GaussianProcessRegressor.html
Parameters:
• maxp (int) – Initial capacity
• gp (GaussianProcessRegressor) – GP object (can be None)

Attributes:
• nump – Current number of points
• maxp – Initial maximum number of points (can grow)
• x – Interpolation points
• fx – Function evaluations of interpolation points
• gp – Object of type GaussianProcessRegressor
• dim – Number of dimensions
• model – Gaussian process regression model
add_point(xx, fx)

Parameters: xx (numpy.array) – Point to add fx (float) – The function value of the point to add
deriv(x, ds=None)

Evaluate the GP regression object at a point x

Parameters: x (numpy.array) – Point for which we want to compute the GP regression gradient ds (None) – Not used Derivative of the GP regression object at x numpy.array
eval(x, ds=None)

Evaluate the GP regression object at the point x

Parameters: x (numpy.array) – Point where to evaluate ds (None) – Not used Value of the GP regression object at x float
evals(x, ds=None)

Evaluate the GP regression object at the points x

Parameters: x (numpy.array) – Points where to evaluate, of size npts x dim ds (None) – Not used Values of the GP regression object at x, of length npts numpy.array
get_fx()

Get the list of function values for the data points.

Returns: List of function values numpy.array
get_x()

Get the list of data points

Returns: List of data points numpy.array
reset()

Reset the interpolation.

## pySOT.mars_interpolant module¶

Module: mars_interpolant Yi Shen
class pySOT.mars_interpolant.MARSInterpolant(maxp=100)

Compute and evaluate a MARS interpolant

MARS builds a model of the form

$\hat{f}(x) = \sum_{i=1}^{k} c_i B_i(x).$

The model is a weighted sum of basis functions $$B_i(x)$$. Each basis function $$B_i(x)$$ takes one of the following three forms:

1. a constant 1.
2. a hinge function of the form $$\max(0, x - const)$$ or $$\max(0, const - x)$$. MARS automatically selects variables and values of those variables for knots of the hinge functions.
3. a product of two or more hinge functions. These basis functions can model interaction between two or more variables.
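
A quick numpy illustration of the hinge basis functions (the knots 0.5 and 0.25 are arbitrary examples, not MARS-selected):

```python
import numpy as np

def hinge(x, knot, direction=+1):
    """MARS-style hinge basis: max(0, x - knot) or max(0, knot - x)."""
    return np.maximum(0.0, direction * (x - knot))

x = np.linspace(0, 1, 5)
b1 = hinge(x, 0.5, +1)        # max(0, x - 0.5)
b2 = hinge(x, 0.5, -1)        # max(0, 0.5 - x)
b3 = b1 * hinge(x, 0.25, +1)  # product of hinges models an interaction
```
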
Parameters: maxp (int) – Initial capacity

Attributes:
• nump – Current number of points
• maxp – Initial maximum number of points (can grow)
• x – Interpolation points
• fx – Function evaluations of interpolation points
• dim – Number of dimensions
• model – MARS interpolation model
add_point(xx, fx)

Parameters: xx (numpy.array) – Point to add fx (float) – The function value of the point to add
deriv(x, ds=None)

Evaluate the derivative of the MARS interpolant at a point x

Parameters: x (numpy.array) – Point for which we want to compute the MARS gradient ds (None) – Not used Derivative of the MARS interpolant at x numpy.array
eval(x, ds=None)

Evaluate the MARS interpolant at the point x

Parameters: x (numpy.array) – Point where to evaluate ds (None) – Not used Value of the MARS interpolant at x float
evals(x, ds=None)

Evaluate the MARS interpolant at the points x

Parameters: x (numpy.array) – Points where to evaluate, of size npts x dim ds (None) – Not used Values of the MARS interpolant at x, of length npts numpy.array
get_fx()

Get the list of function values for the data points.

Returns: List of function values numpy.array
get_x()

Get the list of data points

Returns: List of data points numpy.array
reset()

Reset the interpolation.

## pySOT.merit_functions module¶

Module: merit_functions David Eriksson , David Bindel
pySOT.merit_functions.candidate_merit_weighted_distance(cand, npts=1)

Weighted distance merit function for the candidate points based methods

Parameters:
• cand (Object) – Candidate point object
• npts (int) – Number of points selected for evaluation

Returns: Points selected for evaluation, of size npts x dim (numpy.array)
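A simplified stand-in for the weighted-distance merit (not the exact pySOT formula): scale predicted values and minimum distances to [0, 1] and score each candidate so that low predicted values and large distances are preferred:

```python
import numpy as np

def weighted_distance_merit(fhvals, dmerit, weight):
    """Toy weighted-distance merit. fhvals are surrogate predictions at
    the candidates, dmerit are minimum distances to already proposed
    points, weight in [0, 1] balances exploitation vs exploration.
    Candidates with lower scores are preferred."""
    fh = np.asarray(fhvals, dtype=float)
    d = np.asarray(dmerit, dtype=float)
    fscale = (fh - fh.min()) / max(np.ptp(fh), 1e-12)
    dscale = (d - d.min()) / max(np.ptp(d), 1e-12)
    return weight * fscale + (1.0 - weight) * (1.0 - dscale)

scores = weighted_distance_merit([1.0, 0.2, 0.5], [0.4, 0.1, 0.6], 0.5)
best = int(np.argmin(scores))
```

With weight close to 1 the merit picks the candidate with the best predicted value; with weight close to 0 it picks the candidate farthest from the proposed points.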

## pySOT.poly_regression module¶

Module: poly_regression David Bindel
class pySOT.poly_regression.PolyRegression(bounds, basisp, maxp=100)

Compute and evaluate a polynomial regression surface.

Parameters:
• bounds (numpy.array) – a (dims, 2) array of lower and upper bounds in each coordinate
• basisp (numpy.array) – a (nbasis, dims) array, where the ith basis function is prod_j L_basisp(i,j)(x_j), L_k = the degree k Legendre polynomial
• maxp (int) – Initial point capacity

Attributes:
• nump – Current number of points
• maxp – Initial maximum number of points (can grow)
• x – Interpolation points
• fx – Function evaluations of interpolation points
• bounds – Upper and lower bounds, one row per dimension
• dim – Number of dimensions
• basisp – Multi-indices representing terms in a tensor poly basis. Each row is a list of dim indices indicating a polynomial degree in the associated dimension
• updated – True if the polynomial coefficients are up to date
add_point(xx, fx)

Parameters: xx – Point to add fx – The function value of the point to add
deriv(x, ds=None)

Evaluate the derivative of the regression surface at a point x

Parameters: x (numpy.array) – Point where to evaluate ds (None) – Not used Derivative of the polynomial at x numpy.array
eval(x, ds=None)

Evaluate the regression surface at the point x

Parameters: x (numpy.array) – Point where to evaluate ds (None) – Not used Prediction at the point x float
evals(x, ds=None)

Evaluate the regression surface at points x

Parameters: x (numpy.array) – Points where to evaluate, of size npts x dim ds (None) – Not used Predictions at the points x, of length npts numpy.array
get_fx()

Get the list of function values for the data points.

Returns: List of function values numpy.array
get_x()

Get the list of data points

Returns: List of data points numpy.array
reset()

Reset the object.

pySOT.poly_regression.basis_HC(n, d)

Generate list of shape functions for HC poly space.

Parameters: n (int) – Dimension of the space d (int) – Degree bound An N-by-n matrix with S(i,j) = degree of variable j in shape i numpy.array
pySOT.poly_regression.basis_SM(n, d)

Generate list of shape functions for SM poly space.

Parameters: n (int) – Dimension of the space d (int) – Degree bound An N-by-n matrix with S(i,j) = degree of variable j in shape i numpy.array
pySOT.poly_regression.basis_TD(n, d)

Generate list of shape functions for TD poly space.

Parameters: n (int) – Dimension of the space d (int) – Degree bound An N-by-n matrix with S(i,j) = degree of variable j in shape i numpy.array
pySOT.poly_regression.basis_TP(n, d)

Generate list of shape functions for TP poly space.

Parameters: n (int) – Dimension of the space d (int) – Degree bound An N-by-n matrix with S(i,j) = degree of variable j in shape i There are N = n^d shapes. numpy.array
pySOT.poly_regression.basis_base(n, testf)

Generate list of shape functions for a subset of a TP poly space.

Parameters: n (int) – Dimension of the space testf (Object) – Return True if a given multi-index is in range An N-by-n matrix with S(i,j) = degree of variable j in shape i numpy.array
pySOT.poly_regression.dlegendre(x, d)

Evaluate Legendre polynomial derivatives at all coordinates in x.

Parameters: x (numpy.array) – Array of coordinates d (int) – Max degree of polynomials x.shape-by-d arrays of Legendre polynomial values and derivatives numpy.array
pySOT.poly_regression.legendre(x, d)

Evaluate Legendre polynomials at all coordinates in x.

Parameters: x (numpy.array) – Array of coordinates d (int) – Max degree of polynomials A x.shape-by-d array of Legendre polynomial values numpy.array
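
Legendre polynomial values can be generated with the standard three-term recurrence; a sketch (returning degrees 0 through d, a slightly different shape than the function above):

```python
import numpy as np

def legendre_values(x, d):
    """Evaluate Legendre polynomials P_0..P_d at the coordinates in x
    via the recurrence (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    x = np.asarray(x, dtype=float)
    P = np.zeros(x.shape + (d + 1,))
    P[..., 0] = 1.0
    if d >= 1:
        P[..., 1] = x
    for k in range(1, d):
        P[..., k + 1] = ((2 * k + 1) * x * P[..., k] - k * P[..., k - 1]) / (k + 1)
    return P

vals = legendre_values(np.array([0.0, 0.5, 1.0]), 3)
# P_2(0.5) = (3 * 0.25 - 1) / 2 = -0.125
```
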
pySOT.poly_regression.test_legendre1()
pySOT.poly_regression.test_legendre2()
pySOT.poly_regression.test_poly()

## pySOT.kernels module¶

Module: kernels David Eriksson ,
class pySOT.kernels.CubicKernel

Cubic RBF kernel

This is a basic class for the Cubic RBF kernel: $$\varphi(r) = r^3$$ which is conditionally positive definite of order 2.

deriv(dists)

evaluates the derivative of the Cubic kernel for a distance matrix

Parameters: dists (numpy.array) – Distance input matrix a matrix where element $$(i,j)$$ is $$3 \| x_i - x_j \|^2$$ numpy.array
eval(dists)

evaluates the Cubic kernel for a distance matrix

Parameters: dists (numpy.array) – Distance input matrix a matrix where element $$(i,j)$$ is $$\|x_i - x_j \|^3$$ numpy.array
order()

returns the order of the Cubic RBF kernel

Returns: 2 int
phi_zero()

returns the value of $$\varphi(0)$$ for Cubic RBF kernel

Returns: 0 float
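
The cubic kernel and its derivative are simple elementwise maps on the distance matrix:

```python
import numpy as np

def cubic_eval(dists):
    """phi(r) = r^3 applied elementwise to a distance matrix."""
    return dists ** 3

def cubic_deriv(dists):
    """phi'(r) = 3 r^2 applied elementwise."""
    return 3 * dists ** 2

D = np.array([[0.0, 2.0], [2.0, 0.0]])
print(cubic_eval(D))   # [[0, 8], [8, 0]]
print(cubic_deriv(D))  # [[0, 12], [12, 0]]
```
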
class pySOT.kernels.LinearKernel

Linear RBF kernel

This is a basic class for the Linear RBF kernel: $$\varphi(r) = r$$ which is conditionally positive definite of order 1.

deriv(dists)

evaluates the derivative of the Linear kernel for a distance matrix

Parameters: dists (numpy.array) – Distance input matrix a matrix where element $$(i,j)$$ is 1 numpy.array
eval(dists)

evaluates the Linear kernel for a distance matrix

Parameters: dists (numpy.array) – Distance input matrix a matrix where element $$(i,j)$$ is $$\|x_i - x_j \|$$ numpy.array
order()

returns the order of the Linear RBF kernel

Returns: 1 int
phi_zero()

returns the value of $$\varphi(0)$$ for Linear RBF kernel

Returns: 0 float
class pySOT.kernels.TPSKernel

Thin-plate spline RBF kernel

This is a basic class for the TPS RBF kernel: $$\varphi(r) = r^2 \log(r)$$ which is conditionally positive definite of order 2.

deriv(dists)

evaluates the derivative of the TPS kernel for a distance matrix

Parameters: dists (numpy.array) – Distance input matrix a matrix where element $$(i,j)$$ is $$\|x_i - x_j \|(1 + 2 \log (\|x_i - x_j \|) )$$ numpy.array
eval(dists)

evaluates the TPS kernel for a distance matrix

Parameters: dists (numpy.array) – Distance input matrix a matrix where element $$(i,j)$$ is $$\|x_i - x_j \|^2 \log (\|x_i - x_j \|)$$ numpy.array
order()

returns the order of the TPS RBF kernel

Returns: 2 int
phi_zero()

returns the value of $$\varphi(0)$$ for TPS RBF kernel

Returns: 0 float

## pySOT.tails module¶

Module: tails David Eriksson ,
class pySOT.tails.ConstantTail

Constant polynomial tail

This is the constant polynomial in d dimensions, built from the basis $$\{1\}$$.

degree()

returns the degree of the constant polynomial tail

Returns: 0 int
deriv(x)

evaluates the gradient of the constant polynomial tail for one point

Parameters: x (numpy.array) – Point to evaluate, of length dim A numpy.array of size dim x dim_tail(dim) numpy.array
dim_tail(dim)

returns the dimensionality of the constant polynomial space for a given dimension

Parameters: dim (int) – Number of dimensions of the Cartesian space 1 int
eval(X)

evaluates the constant polynomial tail for a set of points

Parameters: X (numpy.array) – Points to evaluate, of size npts x dim A numpy.array of size npts x dim_tail(dim) numpy.array
class pySOT.tails.LinearTail

Linear polynomial tail

This is a standard linear polynomial in d dimensions, built from the basis $$\{1,x_1,x_2,\ldots,x_d\}$$.

degree()

returns the degree of the linear polynomial tail

Returns: 1 int
deriv(x)

evaluates the gradient of the linear polynomial tail for one point

Parameters: x (numpy.array) – Point to evaluate, of length dim A numpy.array of size dim x dim_tail(dim) numpy.array
dim_tail(dim)

returns the dimensionality of the linear polynomial space for a given dimension

Parameters: dim (int) – Number of dimensions of the Cartesian space 1 + dim int
eval(X)

evaluates the linear polynomial tail for a set of points

Parameters: X (numpy.array) – Points to evaluate, of size npts x dim A numpy.array of size npts x dim_tail(dim) numpy.array

## pySOT.rbf module¶

Module: rbf David Eriksson , David Bindel
class pySOT.rbf.RBFInterpolant(kernel=<class 'pySOT.kernels.CubicKernel'>, tail=<class 'pySOT.tails.LinearTail'>, maxp=500, eta=1e-08)

Compute and evaluate RBF interpolant.

Manages an expansion of the form

$f(x) = \sum_j c_j \phi(\|x-x_j\|) + \sum_j \lambda_j p_j(x)$

where the functions $$p_j(x)$$ are low-degree polynomials. The fitting equations are

$\begin{split}\begin{bmatrix} \eta I & P^T \\ P & \Phi+\eta I \end{bmatrix} \begin{bmatrix} \lambda \\ c \end{bmatrix} = \begin{bmatrix} 0 \\ f \end{bmatrix}\end{split}$

where $$P_{ij} = p_j(x_i)$$ and $$\Phi_{ij}=\phi(\|x_i-x_j\|)$$. The regularization parameter $$\eta$$ guards against poor conditioning of the system; it can either be fixed or estimated via LOOCV. Specify eta='adapt' for estimation.
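
The fitting system above can be assembled directly in numpy; a self-contained sketch for a cubic kernel with a linear tail (illustrative, not pySOT's implementation):

```python
import numpy as np

# Sketch of the fitting system for a cubic kernel and a linear tail.
rng = np.random.default_rng(0)
X = rng.random((6, 2))               # interpolation points
f = np.sin(X[:, 0]) + X[:, 1] ** 2   # values at the points
eta = 1e-8                           # fixed regularization parameter

R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Phi = R ** 3                         # Phi_ij = phi(||x_i - x_j||)
P = np.hstack([np.ones((6, 1)), X])  # linear tail basis [1, x_1, x_2]
m = P.shape[1]

A = np.block([[eta * np.eye(m), P.T],
              [P, Phi + eta * np.eye(6)]])
rhs = np.concatenate([np.zeros(m), f])
sol = np.linalg.solve(A, rhs)
lam, c = sol[:m], sol[m:]

# The expansion should (nearly) interpolate the data.
pred = Phi @ c + P @ lam
```
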

Parameters:
• kernel (Kernel) – RBF kernel object
• tail (Tail) – RBF polynomial tail object
• maxp (int) – Initial point capacity
• eta (float or 'adapt') – Regularization parameter

Attributes:
• kernel – RBF kernel
• tail – RBF tail
• eta – Regularization parameter
• ntail – Number of tail functions
• nump – Current number of points
• maxp – Initial maximum number of points (can grow)
• A – Interpolation system matrix
• LU – LU-factorization of the RBF system
• piv – Pivot vector for the LU-factorization
• rhs – Right hand side for interpolation system
• x – Interpolation points
• fx – Values at interpolation points
• c – Expansion coefficients
• dim – Number of dimensions
• updated – True if the RBF coefficients are up to date
add_point(xx, fx)

Parameters: xx (numpy.array) – Point to add fx (float) – The function value of the point to add
coeffs()

Compute the expansion coefficients

Returns: Expansion coefficients numpy.array
deriv(x, ds=None)

Evaluate the derivative of the RBF interpolant at a point x

Parameters: x (numpy.array) – Point for which we want to compute the RBF gradient ds (numpy.array) – Distances between the centers and the point x Derivative of the RBF interpolant at x numpy.array
eval(x, ds=None)

Evaluate the RBF interpolant at the point x

Parameters: x (numpy.array) – Point where to evaluate Value of the RBF interpolant at x float
evals(x, ds=None)

Evaluate the RBF interpolant at the points x

Parameters: x (numpy.array) – Points where to evaluate, of size npts x dim ds (numpy.array) – Distances between the centers and the points x, of size npts x ncenters Values of the RBF interpolant at x, of length npts numpy.array
get_fx()

Get the list of function values for the data points.

Returns: List of function values numpy.array
get_x()

Get the list of data points

Returns: List of data points numpy.array
reset()

Reset the RBF interpolant

transform_fx(fx)

Replace f with transformed function values for the fitting

Parameters: fx (numpy.array) – Transformed function values

## pySOT.rs_wrappers module¶

Module: rs_wrappers David Bindel
class pySOT.rs_wrappers.RSCapped(model, transformation=None)

This adapter takes an existing response surface and replaces it with a modified version in which the function values are replaced according to some transformation. A very common transformation is to replace all values above the median by the median in order to reduce the influence of large function values.

Parameters:
• model (Object) – Original response surface object
• transformation (Object) – Function value transformation object. Median capping is used if no object (or None) is provided

Attributes:
• transformation – Object used to transform the function values
• model – Original response surface object
• fvalues – Function values
• nump – Current number of points
• maxp – Initial maximum number of points (can grow)
• updated – True if the surface is updated
add_point(xx, fx)

Parameters: xx (numpy.array) – Point to add fx (float) – The function value of the point to add
deriv(x, ds=None)

Evaluate the derivative of the capped interpolant at a point x

Parameters: x (numpy.array) – Point for which we want to compute the gradient ds (numpy.array) – Distances between the centers and the point x Derivative of the capped interpolant at x numpy.array
eval(x, ds=None)

Evaluate the capped interpolant at the point x

Parameters: x (numpy.array) – Point where to evaluate
Returns: Value of the capped interpolant at x
Return type: float
evals(x, ds=None)

Evaluate the capped interpolant at the points x

Parameters: x (numpy.array) – Points where to evaluate, of size npts x dim
ds (numpy.array) – Distances between the centers and the points x, of size npts x ncenters
Returns: Values of the capped interpolant at x, of length npts
Return type: numpy.array
get_fx()

Get the list of function values for the data points.

Returns: List of function values
Return type: numpy.array
get_x()

Get the list of data points

Returns: List of data points
Return type: numpy.array
reset()

Reset the capped response surface

class pySOT.rs_wrappers.RSPenalty(model, evals, derivs)

This adapter can be used for approximating an objective function plus a penalty function. The response surface is fitted only to the objective function and the penalty is added on after.

Parameters: model (Object) – Original response surface object
evals (Object) – Object that takes the response surface and the points and adds up the response surface value and the penalty function value
derivs (Object) – Object that takes the response surface and the points and adds up the response surface derivative and the penalty function derivative
Attributes: eval_method – Object that adds up the response surface value and the penalty function value
deval_method – Object that adds up the response surface derivative and the penalty function derivative
model – Original response surface object
fvalues – Function values
nump – Current number of points
maxp – Initial maximum number of points (can grow)
updated – True if the surface is updated
add_point(xx, fx)

Parameters: xx (numpy.array) – Point to add
fx (float) – The function value of the point to add
deriv(x, ds=None)

Evaluate the derivative of the penalty adapter at x

Parameters: x (numpy.array) – Point for which we want to compute the gradient
ds (None) – Not used
Returns: Derivative of the interpolant at x
Return type: numpy.array
eval(x, ds=None)

Evaluate the penalty adapter at the point x

Parameters: x (numpy.array) – Point where to evaluate
ds (None) – Not used
Returns: Value of the interpolant at x
Return type: float
evals(x, ds=None)

Evaluate the penalty adapter at the points x

Parameters: x (numpy.array) – Points where to evaluate, of size npts x dim
ds (None) – Not used
Returns: Values of the interpolant at x, of length npts
Return type: numpy.array
get_fx()

Get the list of function values for the data points.

Returns: List of function values
Return type: numpy.array
get_x()

Get the list of data points

Returns: List of data points
Return type: numpy.array
reset()

Reset the response surface

class pySOT.rs_wrappers.RSUnitbox(model, data)

Unit box adapter for response surfaces

This adapter takes an existing response surface and replaces it with a modified version where the domain is rescaled to the unit box. This is useful for response surfaces that are sensitive to scaling, such as radial basis functions.

Parameters: model (Object) – Original response surface object
data (Object) – Optimization problem object
Attributes: data – Optimization problem object
model – Original response surface object
fvalues – Function values
nump – Current number of points
maxp – Initial maximum number of points (can grow)
updated – True if the surface is updated
add_point(xx, fx)

Parameters: xx (numpy.array) – Point to add
fx (float) – The function value of the point to add
deriv(x, ds=None)

Evaluate the derivative of the underlying response surface at x

Parameters: x (numpy.array) – Point for which we want to compute the gradient
ds (None) – Not used
Returns: Derivative of the interpolant at x
Return type: numpy.array
eval(x, ds=None)

Evaluate the response surface at the point x

Parameters: x (numpy.array) – Point where to evaluate
ds (None) – Not used
Returns: Value of the interpolant at x
Return type: float
evals(x, ds=None)

Evaluate the response surface at the points x

Parameters: x (numpy.array) – Points where to evaluate, of size npts x dim
ds (None) – Not used
Returns: Values of the interpolant at x, of length npts
Return type: numpy.array
get_fx()

Get the list of function values for the data points.

Returns: List of function values
Return type: numpy.array
get_x()

Get the list of data points

Returns: List of data points
Return type: numpy.array
reset()

Reset the response surface

## pySOT.sot_sync_strategies module¶

Module: sot_sync_strategies David Bindel , David Eriksson
class pySOT.sot_sync_strategies.SyncStrategyNoConstraints(worker_id, data, response_surface, maxeval, nsamples, exp_design=None, sampling_method=None, extra=None, extra_vals=None)

Parallel synchronous optimization strategy without non-bound constraints.

This class implements the parallel synchronous SRBF strategy described by Regis and Shoemaker. After the initial experimental design (which is embarrassingly parallel), the optimization proceeds in phases. During each phase, we allow nsamples simultaneous function evaluations. We insist that these evaluations run to completion – if one fails for whatever reason, we will resubmit it. Samples are drawn randomly from around the current best point, and are sorted according to a merit function based on distance to other sample points and predicted function values according to the response surface. After several successive significant improvements, we increase the sampling radius; after several failures to improve the function value, we decrease the sampling radius. We restart once the sampling radius decreases below a threshold.

Parameters: worker_id (int) – Start ID in a multi-start setting
data (Object) – Problem parameter data structure
response_surface (Object) – Surrogate model object
maxeval (int) – Stopping criterion. If positive, this is an evaluation budget. If negative, this is a time budget in seconds.
nsamples (int) – Number of simultaneous fevals allowed
exp_design (Object) – Experimental design
sampling_method (Object) – Sampling method for finding points to evaluate
extra (numpy.array) – Points to be added to the experimental design
extra_vals (numpy.array) – Values of the points in extra (if known). Use nan for values that are not known.
adjust_step()

After succtol successful steps, we double the sampling radius; after failtol failed steps, we halve the sampling radius.
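The radius adjustment logic can be sketched as below. The succtol and failtol defaults shown are illustrative assumptions, not necessarily pySOT's values:

```python
def adjust_radius(sigma, nsuccess, nfail, succtol=3, failtol=4):
    # Double the radius after succtol consecutive successes, halve it after
    # failtol consecutive failures, and reset both counters whenever the
    # radius changes (sketch of the strategy described above).
    if nsuccess >= succtol:
        return 2.0 * sigma, 0, 0
    if nfail >= failtol:
        return 0.5 * sigma, 0, 0
    return sigma, nsuccess, nfail
```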

check_common()

Checks that the inputs are correct

check_input()

Checks that the inputs are correct

log_completion(record)

Record a completed evaluation to the log.

Parameters: record (Object) – Record of the function evaluation
on_complete(record)

Handle completed function evaluation.

When a function evaluation is completed we need to ask the constraint handler whether the function value should be modified, as is the case for, say, a penalty method. We also need to print the information to the logfile, update the best value found so far, and notify the GUI that an evaluation has completed.

Parameters: record (Object) – Evaluation record
on_reply_accept(proposal)
proj_fun(x)

Projects a set of points onto the feasible region

Parameters: x (numpy.array) – Points, of size npts x dim
Returns: Projected points
Return type: numpy.array
propose_action()

Propose an action

sample_adapt()

Generate and queue samples from the search strategy

sample_initial()

Generate and queue an initial experimental design.

start_batch()

Generate and queue a new batch of points

class pySOT.sot_sync_strategies.SyncStrategyPenalty(worker_id, data, response_surface, maxeval, nsamples, exp_design=None, sampling_method=None, extra=None, penalty=1000000.0)

Parallel synchronous optimization strategy with non-bound constraints.

This is an extension of SyncStrategyNoConstraints that also works with non-bound constraints. We currently only allow inequality constraints, since the candidate based methods don’t work well with equality constraints. We also assume that the constraints are cheap to evaluate, i.e., that it is easy to check whether a given point is feasible. More strategies that can handle expensive constraints will be added.

We use a penalty method in the sense that we try to minimize:

$f(x) + \mu \sum_j \max(0, g_j(x))^2$

where $$g_j(x) \leq 0$$ are cheap inequality constraints. As a measure of promising function values we let all infeasible points have the value of the feasible candidate point with the worst function value, since large penalties make it impossible to distinguish between feasible points.

When it comes to the value of $$\mu$$, just choose a very large value.
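The penalized merit function can be sketched directly from the formula above. The objective and constraint signatures here are illustrative, not pySOT's API:

```python
import numpy as np

def penalty_merit(f, constraints, x, mu=1e6):
    # Evaluate f(x) + mu * sum_j max(0, g_j(x))^2 for cheap inequality
    # constraints g_j(x) <= 0. Feasible points incur no penalty, so the
    # merit value equals f(x) exactly; any violation adds mu * violation^2.
    gx = np.array([g(x) for g in constraints])
    return f(x) + mu * np.sum(np.maximum(0.0, gx) ** 2)
```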

Parameters: worker_id (int) – Start ID in a multi-start setting
data (Object) – Problem parameter data structure
response_surface (Object) – Surrogate model object
maxeval (int) – Function evaluation budget
nsamples (int) – Number of simultaneous fevals allowed
exp_design (Object) – Experimental design
sampling_method (Object) – Sampling method for finding points to evaluate
extra (numpy.array) – Points to be added to the experimental design
penalty (float) – Penalty for violating constraints
check_input()

Checks that the inputs are correct

on_complete(record)

Handle completed function evaluation.

When a function evaluation is completed we need to ask the constraint handler whether the function value should be modified, as is the case for, say, a penalty method. We also need to print the information to the logfile, update the best value found so far, and notify the GUI that an evaluation has completed.

Parameters: record (Object) – Evaluation record
penalty_fun(xx)

Computes the penalty for constraint violation

Parameters: xx (numpy.array) – Points to compute the penalty for
Returns: Penalty for constraint violations
Return type: numpy.array
class pySOT.sot_sync_strategies.SyncStrategyProjection(worker_id, data, response_surface, maxeval, nsamples, exp_design=None, sampling_method=None, extra=None, proj_fun=None)

Parallel synchronous optimization strategy with non-bound constraints. It uses a supplied method to project proposed points onto the feasible region, so that only feasible points are evaluated. This is useful in situations where it is easy to project onto the feasible region and where the objective function is nonsensical for infeasible points.

This is an extension of SyncStrategyNoConstraints that also works with non-bound constraints.

Parameters: worker_id (int) – Start ID in a multi-start setting
data (Object) – Problem parameter data structure
response_surface (Object) – Surrogate model object
maxeval (int) – Function evaluation budget
nsamples (int) – Number of simultaneous fevals allowed
exp_design (Object) – Experimental design
sampling_method (Object) – Sampling method for finding points to evaluate
extra (numpy.array) – Points to be added to the experimental design
proj_fun (Object) – Function that projects one point onto the feasible region
check_input()

Checks that the inputs are correct

proj_fun(x)

Projects a set of points onto the feasible region

Parameters: x (numpy.array) – Points, of size npts x dim
Returns: Projected points
Return type: numpy.array

## pySOT.test_problems module¶

Module: test_problems David Eriksson , David Bindel
class pySOT.test_problems.Ackley(dim=10)

Ackley function

$f(x_1,\ldots,x_n) = -20\exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{j=1}^n x_j^2} \right) -\exp \left( \frac{1}{n} \sum_{j=1}^n \cos(2 \pi x_j) \right) + 20 + e$

subject to

$-15 \leq x_i \leq 20$

Global optimum: $$f(0,0,...,0)=0$$

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Ackley function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
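The formula can be transcribed directly; at the origin the two exponential terms cancel the constants, giving the stated optimum of 0. An illustrative sketch (not the pySOT class itself):

```python
import numpy as np

def ackley(x):
    # Direct transcription of the Ackley formula: two exponential terms
    # plus the constants 20 + e, which cancel exactly at x = 0.
    x = np.asarray(x, dtype=float)
    n = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)
```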
class pySOT.test_problems.Exponential(dim=10)

Exponential function

$f(x_1,\ldots,x_n) = \sum_{j=1}^n e^{jx_j} - \sum_{j=1}^n e^{-5.12 j}$

subject to

$-5.12 \leq x_i \leq 5.12$

Global optimum: $$f(-5.12,-5.12,...,-5.12)=0$$

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Exponential function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
class pySOT.test_problems.Griewank(dim=10)

Griewank function

$f(x_1,\ldots,x_n) = 1 + \frac{1}{4000} \sum_{j=1}^n x_j^2 - \prod_{j=1}^n \cos \left( \frac{x_j}{\sqrt{j}} \right)$

subject to

$-512 \leq x_i \leq 512$

Global optimum: $$f(0,0,...,0)=0$$

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Griewank function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
class pySOT.test_problems.Hartman3(dim=3)

Hartman 3 function

Global optimum: $$f(0.114614,0.555649,0.852547)=-3.86278$$

Parameters: dim (int) – Number of dimensions (has to be = 3)
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Hartman 3 function at x

Parameters: x – Data point
Returns: Value at x
class pySOT.test_problems.Hartman6(dim=6)

Hartman 6 function

Global optimum: $$f(0.20169,0.150011,0.476874,0.275332,0.311652,0.6573)=-3.32237$$

Parameters: dim (int) – Number of dimensions (has to be = 6)
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Hartman 6 function at x

Parameters: x – Data point
Returns: Value at x
class pySOT.test_problems.Keane(dim=10)

Keane’s “bump” function

$f(x_1,\ldots,x_n) = -\left| \frac{\sum_{j=1}^n \cos^4(x_j) - 2 \prod_{j=1}^n \cos^2(x_j)}{\sqrt{\sum_{j=1}^n jx_j^2}} \right|$

subject to

$0 \leq x_i \leq 5$
$0.75 - \prod_{j=1}^n x_j < 0$
$\sum_{j=1}^n x_j - 7.5n < 0$

Global optimum: -0.835 for large n

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
deriv_ineq_constraints(x)

Evaluate the derivative of the Keane inequality constraints at x

Parameters: x (numpy.array) – Data points, of size npts x dim
Returns: Derivative of the constraints, of size npts x nconstraints x ndims
Return type: numpy.array
eval_ineq_constraints(x)

Evaluate the Keane inequality constraints at x

Parameters: x (numpy.array) – Data points, of size npts x dim
Returns: Values of the constraints, of size npts x nconstraints
Return type: numpy.array
objfunction(x)

Evaluate the Keane function at a point x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
class pySOT.test_problems.Levy(dim=10)

Levy function

Global optimum: $$f(1,1,...,1)=0$$

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Levy function at x

Parameters: x – Data point
Returns: Value at x
class pySOT.test_problems.LinearMI(dim=5)

This is a linear mixed integer problem with non-bound constraints

There are 5 variables, the first 3 are discrete and the last 2 are continuous.

Global optimum: $$f(1,0,0,0,0) = -1$$

Parameters: dim (int) – Number of dimensions (has to be 5)
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
eval_ineq_constraints(x)

Evaluate the LinearMI inequality constraints at x

Parameters: x (numpy.array) – Data points, of size npts x dim
Returns: Values of the constraints, of size npts x nconstraints
Return type: numpy.array
objfunction(x)

Evaluate the LinearMI function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
class pySOT.test_problems.Michalewicz(dim=10)

Michalewicz function

$f(x_1,\ldots,x_n) = -\sum_{i=1}^n \sin(x_i) \sin^{20} \left( \frac{ix_i^2}{\pi} \right)$

subject to

$0 \leq x_i \leq \pi$
Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Michalewicz function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
class pySOT.test_problems.Quartic(dim=10)

Quartic function

$f(x_1,\ldots,x_n) = \sum_{j=1}^n j x_j^4 + random[0,1)$

subject to

$-1.28 \leq x_i \leq 1.28$

Global optimum: $$f(0,0,...,0)=0+noise$$

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Quartic function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
class pySOT.test_problems.Rastrigin(dim=10)

Rastrigin function

$f(x_1,\ldots,x_n)=10n+\sum_{i=1}^n (x_i^2 - 10 \cos(2 \pi x_i))$

subject to

$-5.12 \leq x_i \leq 5.12$

Global optimum: $$f(0,0,...,0)=0$$

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Rastrigin function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
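The stated optimum can be checked with a small sketch of the standard Rastrigin function (illustrative, not the pySOT class):

```python
import numpy as np

def rastrigin(x):
    # Standard Rastrigin: 10n + sum(x_i^2 - 10 cos(2 pi x_i)). At the
    # origin every cosine equals 1, so the sum is -10n and f(0,...,0) = 0.
    x = np.asarray(x, dtype=float)
    return 10.0 * len(x) + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))
```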
class pySOT.test_problems.Rosenbrock(dim=10)

Rosenbrock function

$f(x_1,\ldots,x_n) = \sum_{j=1}^{n-1} \left( 100(x_j^2-x_{j+1})^2 + (1-x_j)^2 \right)$

subject to

$-2.048 \leq x_i \leq 2.048$

Global optimum: $$f(1,1,...,1)=0$$

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Rosenbrock function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
class pySOT.test_problems.SchafferF7(dim=10)

SchafferF7 function

$f(x_1,\ldots,x_n) = \left[\frac{1}{n-1}\sum_{i=1}^{n-1}\sqrt{s_i} \cdot (\sin(50.0s_i^{\frac{1}{5}})+1)\right]^2$

where

$s_i = \sqrt{x_i^2 + x_{i+1}^2}$

subject to

$-100 \leq x_i \leq 100$

Global optimum: $$f(0,0,...,0)=0$$

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the SchafferF7 function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
class pySOT.test_problems.Schwefel(dim=10)

Schwefel function

$f(x_1,\ldots,x_n) = \sum_{j=1}^{n} \left( -x_j \sin(\sqrt{|x_j|}) \right) + 418.982997 n$

subject to

$-512 \leq x_i \leq 512$

Global optimum: $$f(420.968746,420.968746,...,420.968746)=0$$

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Schwefel function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
class pySOT.test_problems.Sphere(dim=10)

Sphere function

$f(x_1,\ldots,x_n) = \sum_{j=1}^n x_j^2$

subject to

$-5.12 \leq x_i \leq 5.12$

Global optimum: $$f(0,0,...,0)=0$$

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Sphere function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
class pySOT.test_problems.StyblinskiTang(dim=10)

StyblinskiTang function

$f(x_1,\ldots,x_n) = \frac{1}{2} \sum_{j=1}^n \left(x_j^4 -16x_j^2 +5x_j \right)$

subject to

$-5 \leq x_i \leq 5$

Global optimum: $$f(-2.903534,-2.903534,...,-2.903534)= -39.16599 \cdot n$$

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the StyblinskiTang function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float
class pySOT.test_problems.Whitley(dim=10)

Whitley function

$f(x_1,\ldots,x_n) = \sum_{i=1}^n \sum_{j=1}^n \left( \frac{(100(x_i^2-x_j)^2+(1-x_j)^2)^2}{4000} - \cos(100(x_i^2-x_j)^2 + (1-x_j)^2 ) + 1 \right)$

subject to

$-10.24 \leq x_i \leq 10.24$

Global optimum: $$f(1,1,...,1)=0$$

Parameters: dim (int) – Number of dimensions
Attributes: dim – Number of dimensions
xlow – Lower bound constraints
xup – Upper bound constraints
info – Problem information
min – Global optimum
integer – Integer variables
continuous – Continuous variables
objfunction(x)

Evaluate the Whitley function at x

Parameters: x (numpy.array) – Data point
Returns: Value at x
Return type: float

## pySOT.utils module¶

Module: utils David Eriksson
pySOT.utils.check_opt_prob(obj)

Routine for checking that an implementation of the optimization problem follows the standard. This method checks everything it can, but it can’t verify that the objective function and constraint methods return values of the correct type, since this would involve actually evaluating the objective function, which isn’t feasible when the evaluations are expensive. If some test fails, an exception is raised through assert.

Parameters: obj (Object) – Optimization problem
Raises: AttributeError – If the object doesn’t follow the pySOT standard
pySOT.utils.from_unit_box(x, data)

Maps a set of points from the unit box to the original domain

Parameters: x (numpy.array) – Points to be mapped from the unit box, of size npts x dim
data (Object) – Optimization problem, needs to have attributes xlow and xup
Returns: Points mapped to the original domain
Return type: numpy.array
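Both unit-box maps are affine rescalings of the domain. A self-contained sketch (the pySOT routines read the bounds from the problem object's xlow/xup attributes instead of taking them as arguments):

```python
import numpy as np

def to_unit_box(x, xlow, xup):
    # Map points from [xlow, xup] to the unit box [0, 1]^dim
    return (np.asarray(x) - xlow) / (xup - xlow)

def from_unit_box(x, xlow, xup):
    # Inverse map from the unit box back to the original domain
    return xlow + np.asarray(x) * (xup - xlow)
```

The two functions are exact inverses of each other, so round-tripping a point returns it unchanged (up to floating-point rounding).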
pySOT.utils.progress_plot(controller, title='', interactive=False)

Makes a progress plot from a POAP controller

This method depends on matplotlib and will terminate if matplotlib.pyplot is unavailable.

Parameters: controller (Object) – POAP controller object title (string) – Title of the plot interactive (bool) – True if the plot should be interactive
pySOT.utils.round_vars(data, x)

Round integer variables to closest integer that is still in the domain

Parameters: data (Object) – Optimization problem object
x (numpy.array) – Set of points, of size npts x dim
Returns: The set of points with the integer variables rounded to the closest integer in the domain
Return type: numpy.array
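The rounding step can be sketched as below. This version takes the integer indices and bounds explicitly, whereas pySOT's routine reads them from the problem object; clipping after rounding keeps the result inside the domain:

```python
import numpy as np

def round_vars(x, int_idx, xlow, xup):
    # Round the integer variables to the nearest integer, then clip all
    # coordinates so the points stay inside [xlow, xup].
    x = np.array(x, dtype=float)
    x[:, int_idx] = np.round(x[:, int_idx])
    return np.clip(x, xlow, xup)
```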
pySOT.utils.to_unit_box(x, data)

Maps a set of points to the unit box

Parameters: x (numpy.array) – Points to be mapped to the unit box, of size npts x dim
data (Object) – Optimization problem, needs to have attributes xlow and xup
Returns: Points mapped to the unit box
Return type: numpy.array
pySOT.utils.unit_rescale(xx)

Shift and rescale elements of a vector to the unit interval

Parameters: xx (numpy.array) – Vector that should be rescaled to the unit interval
Returns: Vector scaled to the unit interval
Return type: numpy.array
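A minimal sketch of the rescaling (a constant vector would need special handling to avoid division by zero, which this sketch omits):

```python
import numpy as np

def unit_rescale(xx):
    # Shift and scale so the smallest element maps to 0 and the largest
    # maps to 1.
    xx = np.asarray(xx, dtype=float)
    lo, hi = xx.min(), xx.max()
    return (xx - lo) / (hi - lo)
```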