botorch.optim

botorch.optim.fit

Tools for model fitting.
class botorch.optim.fit.OptimizationIteration(itr, fun, time)[source]
    Create a new instance of OptimizationIteration(itr, fun, time).

    itr
        Alias for field number 0

    fun
        Alias for field number 1

    time
        Alias for field number 2
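    As a NamedTuple, OptimizationIteration supports both positional and named field access; a trivial illustration (the values are made up):
        >>> it = OptimizationIteration(itr=0, fun=12.3, time=0.05)
        >>> it.fun == it[1]
        True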
botorch.optim.fit.fit_gpytorch_scipy(mll, bounds=None, method='L-BFGS-B', options=None, track_iterations=True)[source]
    Fit a gpytorch model by maximizing MLL with a scipy optimizer.

    The model and likelihood in mll must already be in train mode. Note: this method requires that the model has train_inputs and train_targets.

    Parameters:
        - mll (MarginalLogLikelihood) – MarginalLogLikelihood to be maximized.
        - bounds (Optional[Dict[str, Tuple[Optional[float], Optional[float]]]]) – A dictionary mapping parameter names to tuples of lower and upper bounds.
        - method (str) – Solver type, passed along to scipy.minimize.
        - options (Optional[Dict[str, Any]]) – Dictionary of solver options, passed along to scipy.minimize.
        - track_iterations (bool) – Track the function values and wall time for each iteration.

    Return type: Tuple[MarginalLogLikelihood, List[OptimizationIteration]]

    Returns: 2-element tuple containing
        - MarginalLogLikelihood with parameters optimized in-place.
        - List of OptimizationIteration objects with information on each iteration. If track_iterations is False, this will be an empty list.

    Example
        >>> gp = SingleTaskGP(train_X, train_Y)
        >>> mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
        >>> mll.train()
        >>> fit_gpytorch_scipy(mll)
        >>> mll.eval()
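    When track_iterations=True (the default), the second element of the returned tuple records the optimization trace. A short sketch continuing the example above (assuming mll is back in train mode before re-fitting):
        >>> mll, iterations = fit_gpytorch_scipy(mll)
        >>> last = iterations[-1]
        >>> last.itr, last.fun, last.time  # iteration index, objective value, wall time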
botorch.optim.fit.fit_gpytorch_torch(mll, bounds=None, optimizer_cls=<class 'torch.optim.adam.Adam'>, options=None, track_iterations=True)[source]
    Fit a gpytorch model by maximizing MLL with a torch optimizer.

    The model and likelihood in mll must already be in train mode. Note: this method requires that the model has train_inputs and train_targets.

    Parameters:
        - mll (MarginalLogLikelihood) – MarginalLogLikelihood to be maximized.
        - bounds (Optional[Dict[str, Tuple[Optional[float], Optional[float]]]]) – A ParameterBounds dictionary mapping parameter names to tuples of lower and upper bounds. Bounds specified here take precedence over bounds on the same parameters specified in the constraints registered with the module.
        - optimizer_cls (Optimizer) – Torch optimizer to use. Must not require a closure.
        - options (Optional[Dict[str, Any]]) – Options for model fitting. Relevant options will be passed to the optimizer_cls. Additionally, options can include "disp" to specify whether to display model fitting diagnostics and "maxiter" to specify the maximum number of iterations.
        - track_iterations (bool) – Track the function values and wall time for each iteration.

    Return type: Tuple[MarginalLogLikelihood, List[OptimizationIteration]]

    Returns: 2-element tuple containing
        - mll with parameters optimized in-place.
        - List of OptimizationIteration objects with information on each iteration. If track_iterations is False, this will be an empty list.

    Example
        >>> gp = SingleTaskGP(train_X, train_Y)
        >>> mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
        >>> mll.train()
        >>> fit_gpytorch_torch(mll)
        >>> mll.eval()
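    Since relevant options are forwarded to optimizer_cls, optimizer hyperparameters such as Adam's lr can be supplied alongside the fitting options. A sketch (the specific values are illustrative, and forwarding of lr assumes it is accepted by the chosen optimizer):
        >>> mll, iterations = fit_gpytorch_torch(
        >>>     mll, options={"maxiter": 300, "disp": False, "lr": 0.05}
        >>> )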
botorch.optim.initializers
botorch.optim.initializers.initialize_q_batch(X, Y, n, eta=1.0)[source]
    Heuristic for selecting initial conditions for candidate generation.

    This heuristic selects points from X (without replacement) with probability proportional to exp(eta * Z), where Z = (Y - mean(Y)) / std(Y) and eta is a temperature parameter.

    When using an acquisition function that is non-negative and possibly zero over large areas of the feature space (e.g. qEI), you should use initialize_q_batch_nonneg instead.

    Parameters:
        - X (Tensor) – A b x q x d tensor of b samples of q-batches from a d-dim. feature space. Typically, these are generated using qMC sampling.
        - Y (Tensor) – A tensor of b outcomes associated with the samples. Typically, this is the value of the batch acquisition function to be maximized.
        - n (int) – The number of initial conditions to be generated. Must be less than b.
        - eta (float) – Temperature parameter for weighting samples.

    Return type: Tensor

    Returns: An n x q x d tensor of n q-batch initial conditions.

    Example
        # To get n=10 starting points of q-batch size q=3 for model with d=6:
        >>> qUCB = qUpperConfidenceBound(model, beta=0.1)
        >>> Xrnd = torch.rand(500, 3, 6)
        >>> Xinit = initialize_q_batch(Xrnd, qUCB(Xrnd), 10)
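    The selection step amounts to standardizing the outcomes and sampling indices with softmax-style weights. A minimal sketch of the underlying idea (not the library's exact implementation), reusing Xrnd and qUCB from the example above:
        >>> Y = qUCB(Xrnd)                        # b acquisition values
        >>> Z = (Y - Y.mean()) / Y.std()          # standardize outcomes
        >>> weights = torch.exp(1.0 * Z)          # eta = 1.0
        >>> idx = torch.multinomial(weights, 10)  # n = 10 indices, without replacement
        >>> Xinit = Xrnd[idx]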
botorch.optim.initializers.initialize_q_batch_nonneg(X, Y, n, eta=1.0, alpha=0.0001)[source]
    Heuristic for selecting initial conditions for non-negative acquisition functions.

    This function is similar to initialize_q_batch, but designed specifically for acquisition functions that are non-negative and possibly zero over large areas of the feature space (e.g. qEI). All samples for which Y < alpha * max(Y) will be ignored (assuming that Y contains at least one positive value).

    Parameters:
        - X (Tensor) – A b x q x d tensor of b samples of q-batches from a d-dim. feature space. Typically, these are generated using qMC.
        - Y (Tensor) – A tensor of b outcomes associated with the samples. Typically, this is the value of the batch acquisition function to be maximized.
        - n (int) – The number of initial conditions to be generated. Must be less than b.
        - eta (float) – Temperature parameter for weighting samples.
        - alpha (float) – The threshold (as a fraction of the maximum observed value) under which to ignore samples. All input samples for which Y < alpha * max(Y) will be ignored.

    Return type: Tensor

    Returns: An n x q x d tensor of n q-batch initial conditions.

    Example
        # To get n=10 starting points of q-batch size q=3 for model with d=6:
        >>> qEI = qExpectedImprovement(model, best_f=0.2)
        >>> Xrnd = torch.rand(500, 3, 6)
        >>> Xinit = initialize_q_batch_nonneg(Xrnd, qEI(Xrnd), 10)
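    Both eta and alpha can be tuned; raising alpha discards a larger fraction of near-zero acquisition values before sampling (the values below are illustrative only):
        >>> Xinit = initialize_q_batch_nonneg(Xrnd, qEI(Xrnd), 10, eta=2.0, alpha=0.01)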
botorch.optim.numpy_converter

A converter that simplifies using numpy-based optimizers with generic torch nn.Module classes. This enables using a scipy.optimize.minimize optimizer for optimizing module parameters.
class botorch.optim.numpy_converter.TorchAttr(shape, dtype, device)[source]
    Create a new instance of TorchAttr(shape, dtype, device).

    shape
        Alias for field number 0

    dtype
        Alias for field number 1

    device
        Alias for field number 2
botorch.optim.numpy_converter.module_to_array(module, bounds=None, exclude=None)[source]
    Extract named parameters from a module into a numpy array.

    Only extracts parameters with requires_grad, since it is meant for optimizing.

    Parameters:
        - module (Module) – A module with parameters. May specify parameter constraints in a named_parameters_and_constraints method.
        - bounds (Optional[Dict[str, Tuple[Optional[float], Optional[float]]]]) – A ParameterBounds dictionary mapping parameter names to tuples of lower and upper bounds. Bounds specified here take precedence over bounds on the same parameters specified in the constraints registered with the module.
        - exclude (Optional[Set[str]]) – A set of parameter names that are to be excluded from extraction.

    Return type: Tuple[ndarray, Dict[str, TorchAttr], Optional[ndarray]]

    Returns: 3-element tuple containing
        - The parameter values as a numpy array.
        - An ordered dictionary with the name and tensor attributes of each parameter.
        - A 2 x n_params numpy array with lower and upper bounds if at least one constraint is finite, and None otherwise.

    Example
        >>> mll = ExactMarginalLogLikelihood(model.likelihood, model)
        >>> parameter_array, property_dict, bounds_out = module_to_array(mll)
botorch.optim.numpy_converter.set_params_with_array(module, x, property_dict)[source]
    Set module parameters with values from a numpy array.

    Parameters:
        - module (Module) – Module with parameters to be set.
        - x (ndarray) – Numpy array with parameter values.
        - property_dict (Dict[str, TorchAttr]) – Dictionary of parameter names and torch attributes as returned by module_to_array.

    Return type: Module

    Returns: module with parameters updated in-place.

    Example
        >>> mll = ExactMarginalLogLikelihood(model.likelihood, model)
        >>> parameter_array, property_dict, bounds_out = module_to_array(mll)
        >>> parameter_array += 0.1  # perturb parameters (for example only)
        >>> mll = set_params_with_array(mll, parameter_array, property_dict)
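    Together with module_to_array, this enables the round trip that drives a torch module with a scipy optimizer, which is the pattern fit_gpytorch_scipy builds on. A minimal sketch (assuming mll is in train mode and its model exposes train_inputs and train_targets; a gradient-free scipy method is used to keep the sketch short):
        >>> from scipy.optimize import minimize
        >>> x0, property_dict, bounds_out = module_to_array(mll)
        >>> def neg_mll(x):
        ...     m = set_params_with_array(mll, x, property_dict)
        ...     output = m.model(*m.model.train_inputs)
        ...     return -m(output, m.model.train_targets).item()
        >>> res = minimize(neg_mll, x0, method="Nelder-Mead")
        >>> mll = set_params_with_array(mll, res.x, property_dict)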
botorch.optim.optimize

Methods for optimizing acquisition functions.
botorch.optim.optimize.gen_batch_initial_conditions(acq_function, bounds, q, num_restarts, raw_samples, options=None)[source]
    Generate a batch of initial conditions for random-restart optimization.

    Parameters:
        - acq_function (AcquisitionFunction) – The acquisition function to be optimized.
        - bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X.
        - q (int) – The number of candidates to consider.
        - num_restarts (int) – The number of starting points for multistart acquisition function optimization.
        - raw_samples (int) – The number of raw samples to consider in the initialization heuristic.
        - options (Optional[Dict[str, Union[bool, float, int]]]) – Options for initial condition generation. For valid options see initialize_q_batch and initialize_q_batch_nonneg. If options contains a nonnegative=True entry, then acq_function is assumed to be non-negative (useful when using custom acquisition functions).

    Return type: Tensor

    Returns: A num_restarts x q x d tensor of initial conditions.

    Example
        >>> qEI = qExpectedImprovement(model, best_f=0.2)
        >>> bounds = torch.tensor([[0.], [1.]])
        >>> Xinit = gen_batch_initial_conditions(
        >>>     qEI, bounds, q=3, num_restarts=25, raw_samples=500
        >>> )
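    For a custom acquisition function that is known to be non-negative, the initialization can be routed to initialize_q_batch_nonneg via options. A sketch reusing the names from the example above (passing eta through options assumes initializer options are forwarded, per the note on valid options):
        >>> Xinit = gen_batch_initial_conditions(
        >>>     qEI, bounds, q=3, num_restarts=25, raw_samples=500,
        >>>     options={"nonnegative": True, "eta": 2.0},
        >>> )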
botorch.optim.optimize.joint_optimize(acq_function, bounds, q, num_restarts, raw_samples, options=None, inequality_constraints=None, equality_constraints=None, fixed_features=None, post_processing_func=None)[source]
    Generate a set of candidates via joint multi-start optimization.

    Parameters:
        - acq_function (AcquisitionFunction) – The acquisition function.
        - bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X.
        - q (int) – The number of candidates.
        - num_restarts (int) – Number of starting points for multistart acquisition function optimization.
        - raw_samples (int) – Number of samples for initialization.
        - options (Optional[Dict[str, Union[bool, float, int]]]) – Options for candidate generation.
        - inequality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
        - equality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) = rhs.
        - fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
        - post_processing_func (Optional[Callable[[Tensor], Tensor]]) – A function that post-processes an optimization result appropriately (i.e., according to round-trip transformations). Note: post_processing_func is not used by _joint_optimize and is only included to match _sequential_optimize.

    Return type: Tensor

    Returns: A q x d tensor of generated candidates.

    Example
        >>> # generate q=2 candidates jointly using 20 random restarts and 500 raw samples
        >>> qEI = qExpectedImprovement(model, best_f=0.2)
        >>> bounds = torch.tensor([[0.], [1.]])
        >>> candidates = joint_optimize(qEI, bounds, 2, 20, 500)
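    The constraint tuples use the same encoding as make_scipy_linear_constraints below. For example, to require x[0] + x[1] >= 1 (a sketch; a 2-dimensional model and matching bounds are assumed here, unlike the 1-dimensional example above):
        >>> bounds2 = torch.tensor([[0., 0.], [1., 1.]])
        >>> candidates = joint_optimize(
        >>>     qEI, bounds2, 2, 20, 500,
        >>>     inequality_constraints=[(torch.tensor([0, 1]), torch.tensor([1., 1.]), 1.0)],
        >>> )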
botorch.optim.optimize.sequential_optimize(acq_function, bounds, q, num_restarts, raw_samples, options=None, inequality_constraints=None, equality_constraints=None, fixed_features=None, post_processing_func=None)[source]
    Generate a set of candidates via sequential multi-start optimization.

    Parameters:
        - acq_function (AcquisitionFunction) – The acquisition function to be optimized (e.g. qNoisyExpectedImprovement).
        - bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X.
        - q (int) – The number of candidates.
        - num_restarts (int) – Number of starting points for multistart acquisition function optimization.
        - raw_samples (int) – Number of samples for initialization.
        - options (Optional[Dict[str, Union[bool, float, int]]]) – Options for candidate generation.
        - inequality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
        - equality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) = rhs.
        - fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
        - post_processing_func (Optional[Callable[[Tensor], Tensor]]) – A function that post-processes an optimization result appropriately (i.e., according to round-trip transformations).

    Return type: Tensor

    Returns: The set of generated candidates.

    Example
        >>> # generate q=2 candidates sequentially using 20 random restarts
        >>> # and 500 raw samples
        >>> qEI = qExpectedImprovement(model, best_f=0.2)
        >>> bounds = torch.tensor([[0.], [1.]])
        >>> candidates = sequential_optimize(qEI, bounds, 2, 20, 500)
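    fixed_features can be used with either optimizer to pin selected columns during generation. A sketch holding feature 0 at 0.5 (again assuming a 2-dimensional model and matching bounds):
        >>> bounds2 = torch.tensor([[0., 0.], [1., 1.]])
        >>> candidates = sequential_optimize(
        >>>     qEI, bounds2, 2, 20, 500, fixed_features={0: 0.5},
        >>> )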
botorch.optim.parameter_constraints

Utility functions for constrained optimization.
botorch.optim.parameter_constraints.eval_lin_constraint(x, flat_idxr, coeffs, rhs)[source]
    Evaluate a single linear constraint.

    Parameters:
        - x (ndarray) – The input array.
        - flat_idxr (List[int]) – The indices in x to consider.
        - coeffs (ndarray) – The coefficients corresponding to the indices.
        - rhs (float) – The right-hand side of the constraint.

    Return type: float

    Returns: The evaluated constraint value, sum_i (coeffs[i] * x[flat_idxr[i]]) - rhs.
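    For instance, for a hypothetical constraint x[0] + 0.5 * x[2] >= 1, the evaluation returns the slack:
        >>> import numpy as np
        >>> x = np.array([0.5, 1.0, 2.0])
        >>> eval_lin_constraint(x, flat_idxr=[0, 2], coeffs=np.array([1.0, 0.5]), rhs=1.0)
        0.5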
botorch.optim.parameter_constraints.lin_constraint_jac(x, flat_idxr, coeffs, n)[source]
    Return the Jacobian associated with a linear constraint.

    Parameters:
        - x (ndarray) – The input array.
        - flat_idxr (List[int]) – The indices for the elements of x that appear in the constraint.
        - coeffs (ndarray) – The coefficients corresponding to the indices.
        - n (int) – The total number of elements of x (i.e., the length of the returned Jacobian).

    Return type: ndarray

    Returns: The Jacobian.
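    Continuing the hypothetical constraint from eval_lin_constraint above, the Jacobian is the coefficient vector scattered into a length-n array of zeros:
        >>> import numpy as np
        >>> x = np.array([0.5, 1.0, 2.0])
        >>> lin_constraint_jac(x, flat_idxr=[0, 2], coeffs=np.array([1.0, 0.5]), n=3)
        array([1. , 0. , 0.5])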
botorch.optim.parameter_constraints.make_scipy_bounds(X, lower_bounds=None, upper_bounds=None)[source]
    Creates a scipy Bounds object for optimization.

    Parameters:
        - X (Tensor) – A ... x d tensor.
        - lower_bounds (Union[float, Tensor, None]) – Lower bounds on each column (last dimension) of X. If this is a single float, then all columns have the same bound.
        - upper_bounds (Union[float, Tensor, None]) – Upper bounds on each column (last dimension) of X. If this is a single float, then all columns have the same bound.

    Return type: Optional[Bounds]

    Returns: A scipy Bounds object if either lower_bounds or upper_bounds is not None, and None otherwise.

    Example
        >>> X = torch.rand(5, 2)
        >>> scipy_bounds = make_scipy_bounds(X, 0.1, 0.8)
botorch.optim.parameter_constraints.make_scipy_linear_constraints(shapeX, inequality_constraints=None, equality_constraints=None)[source]
    Generate scipy constraints from torch representation.

    Parameters:
        - shapeX (Size) – The shape of the torch.Tensor to optimize over (i.e. b x q x d).
        - inequality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs, where indices is a single-dimensional index tensor (long dtype) containing indices into the last dimension of X, coefficients is a single-dimensional tensor of coefficients of the same length, and rhs is a scalar.
        - equality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) == rhs (with indices and coefficients of the same form as in inequality_constraints).

    Return type: List[Dict[str, Union[str, Callable[[ndarray], float], Callable[[ndarray], ndarray]]]]

    Returns: A list of dictionaries containing callables for constraint function values and Jacobians and a string indicating the associated constraint type ("eq", "ineq"), as expected by scipy.minimize.

    This function assumes that constraints are the same for each input batch, and broadcasts the constraints accordingly to the input batch shape. This function does not currently support constraints across elements of a q-batch.

    Example
        The following will enforce that x[1] + 0.5 x[3] >= -0.1 for each x in both elements of the q-batch, and each of the 3 t-batches:
        >>> constraints = make_scipy_linear_constraints(
        >>>     torch.Size([3, 2, 4]),
        >>>     [(torch.tensor([1, 3]), torch.tensor([1.0, 0.5]), -0.1)],
        >>> )
botorch.optim.utils

Utilities for optimization.
botorch.optim.utils.check_convergence(loss_trajectory, param_trajectory, options)[source]
    Check convergence of optimization for pytorch optimizers.

    Right now this is just a dummy function and only checks for maxiter.

    Parameters:
        - loss_trajectory (List[float]) – A list containing the loss value at each iteration.
        - param_trajectory (Dict[str, List[Tensor]]) – A dictionary mapping each parameter name to a list of Tensors where the i-th Tensor is the parameter value at iteration i.
        - options (Dict[str, Union[float, str]]) – Dictionary of options. Currently only "maxiter" is supported.

    Return type: bool

    Returns: A boolean indicating whether optimization has converged.
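    A minimal sketch (the False result assumes the maxiter check simply compares the trajectory length against the limit, per the note above):
        >>> import torch
        >>> losses = [10.0, 8.2, 7.9]
        >>> params = {"raw_lengthscale": [torch.tensor(v) for v in (0.1, 0.3, 0.4)]}
        >>> check_convergence(losses, params, options={"maxiter": 100})
        False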
botorch.optim.utils.columnwise_clamp(X, lower=None, upper=None)[source]
    Clamp values of a Tensor in column-wise fashion (with support for t-batches).

    This function is useful in conjunction with optimizers from the torch.optim package, which don't natively handle constraints. If you apply this after a gradient step you can be fancy and call it "projected gradient descent".

    Parameters:
        - X (Tensor) – The b x n x d input tensor. If 2-dimensional, b is assumed to be 1.
        - lower (Union[float, Tensor, None]) – The column-wise lower bounds. If scalar, apply bound to all columns.
        - upper (Union[float, Tensor, None]) – The column-wise upper bounds. If scalar, apply bound to all columns.

    Return type: Tensor

    Returns: The clamped tensor.
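    A short illustration with scalar bounds (illustrative values):
        >>> import torch
        >>> X = torch.tensor([[-0.5, 1.2], [0.3, 2.0]])
        >>> columnwise_clamp(X, lower=0.0, upper=1.0)
        tensor([[0.0000, 1.0000],
                [0.3000, 1.0000]])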
botorch.optim.utils.fix_features(X, fixed_features=None)[source]
    Fix feature values in a Tensor.

    The fixed features will have zero gradient in downstream calculations.

    Parameters:
        - X (Tensor) – Input Tensor with shape ... x p, where p is the number of features.
        - fixed_features (Optional[Dict[int, Optional[float]]]) – A dictionary with keys as column indices and values equal to what the feature should be set to in X. If the value is None, that column is just considered fixed. Keys should be in the range [0, p - 1].

    Return type: Tensor

    Returns: The tensor X with fixed features.
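    A short sketch fixing column 1 to 0.25 while leaving the other columns free (illustrative values):
        >>> import torch
        >>> X = torch.rand(4, 3, requires_grad=True)
        >>> X_fixed = fix_features(X, fixed_features={1: 0.25})
        >>> X_fixed[..., 1]  # constant column with zero gradient downstream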