botorch.generation

Candidate Generation Utilities for Acquisition Functions

Candidate generation utilities.

botorch.generation.gen.gen_candidates_scipy(initial_conditions, acquisition_function, lower_bounds=None, upper_bounds=None, inequality_constraints=None, equality_constraints=None, nonlinear_inequality_constraints=None, options=None, fixed_features=None, timeout_sec=None)[source]

Generate a set of candidates using scipy.optimize.minimize.

Optimizes an acquisition function starting from a set of initial candidates using scipy.optimize.minimize via a numpy converter.

Parameters:
  • initial_conditions (Tensor) – Starting points for optimization, with shape (b) x q x d.

  • acquisition_function (AcquisitionFunction) – Acquisition function to be used.

  • lower_bounds (Tensor | float | None) – Minimum values for each column of initial_conditions.

  • upper_bounds (Tensor | float | None) – Maximum values for each column of initial_conditions.

  • inequality_constraints (List[Tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.

  • equality_constraints (List[Tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) = rhs.

  • nonlinear_inequality_constraints (List[Tuple[Callable, bool]] | None) – A list of tuples representing the nonlinear inequality constraints. The first element of each tuple is a callable representing a constraint of the form callable(x) >= 0. For an intra-point constraint, callable() takes a one-dimensional tensor of shape d and returns a scalar. For an inter-point constraint, callable() takes a two-dimensional tensor of shape q x d and again returns a scalar. The second element is a boolean indicating whether the constraint is intra-point or inter-point (True for intra-point, False for inter-point). For more information on intra-point vs. inter-point constraints, see the docstring of the inequality_constraints argument to optimize_acqf(). The constraints are later passed to the scipy solver; see the sketch after the example below.

  • options (Dict[str, Any] | None) – Options used to control the optimization, including “method” and “maxiter”. Select the method for scipy.optimize.minimize via the “method” key; by default, L-BFGS-B is used for box-constrained problems and SLSQP when inequality or equality constraints are present. If with_grad=False, a two-point finite difference estimate of the gradient is used.

  • fixed_features (Dict[int, float | None] | None) – A dictionary mapping feature indices to values; all generated candidates will have these features fixed to the given values. If a dictionary value is None, that feature is fixed to its clamped value and not optimized. Values are assumed to be compatible with lower_bounds and upper_bounds!

  • timeout_sec (float | None) – Timeout (in seconds) for scipy.optimize.minimize routine - if provided, optimization will stop after this many seconds and return the best solution found so far.

Returns:

2-element tuple containing

  • The set of generated candidates.

  • The acquisition value for each t-batch.

Return type:

Tuple[Tensor, Tensor]

Example

>>> qEI = qExpectedImprovement(model, best_f=0.2)
>>> bounds = torch.tensor([[0., 0.], [1., 2.]])
>>> Xinit = gen_batch_initial_conditions(
>>>     qEI, bounds, q=3, num_restarts=25, raw_samples=500
>>> )
>>> batch_candidates, batch_acq_values = gen_candidates_scipy(
>>>     initial_conditions=Xinit,
>>>     acquisition_function=qEI,
>>>     lower_bounds=bounds[0],
>>>     upper_bounds=bounds[1],
>>> )
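
A minimal sketch of adding a nonlinear intra-point constraint to the call above. The unit-ball constraint is a hypothetical illustration, not part of the library; the initial conditions should be feasible under the constraint for the scipy solver to behave well.

>>> # Hypothetical constraint: each candidate must lie in the unit ball,
>>> # encoded as callable(x) >= 0 for a d-dim tensor x (intra-point).
>>> unit_ball = lambda x: 1.0 - (x ** 2).sum()
>>> batch_candidates, batch_acq_values = gen_candidates_scipy(
>>>     initial_conditions=Xinit,
>>>     acquisition_function=qEI,
>>>     lower_bounds=bounds[0],
>>>     upper_bounds=bounds[1],
>>>     nonlinear_inequality_constraints=[(unit_ball, True)],
>>> )
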
botorch.generation.gen.gen_candidates_torch(initial_conditions, acquisition_function, lower_bounds=None, upper_bounds=None, optimizer=<class 'torch.optim.adam.Adam'>, options=None, callback=None, fixed_features=None, timeout_sec=None)[source]

Generate a set of candidates using a torch.optim optimizer.

Optimizes an acquisition function starting from a set of initial candidates using an optimizer from torch.optim.

Parameters:
  • initial_conditions (Tensor) – Starting points for optimization.

  • acquisition_function (AcquisitionFunction) – Acquisition function to be used.

  • lower_bounds (Tensor | float | None) – Minimum values for each column of initial_conditions.

  • upper_bounds (Tensor | float | None) – Maximum values for each column of initial_conditions.

  • optimizer (Optimizer) – The pytorch optimizer to use to perform candidate search.

  • options (Dict[str, float | str] | None) – Options used to control the optimization, e.g. maxiter (the maximum number of iterations).

  • callback (Callable[[int, Tensor, Tensor], NoReturn] | None) – A callback function accepting the current iteration, loss, and gradients as arguments. This function is executed after computing the loss and gradients, but before calling the optimizer.

  • fixed_features (Dict[int, float | None] | None) – A dictionary mapping feature indices to values; all generated candidates will have these features fixed to the given values. If a dictionary value is None, that feature is fixed to its clamped value and not optimized. Values are assumed to be compatible with lower_bounds and upper_bounds!

  • timeout_sec (float | None) – Timeout (in seconds) for optimization. If provided, gen_candidates_torch will stop after this many seconds and return the best solution found so far.

Returns:

2-element tuple containing

  • The set of generated candidates.

  • The acquisition value for each t-batch.

Return type:

Tuple[Tensor, Tensor]

Example

>>> qEI = qExpectedImprovement(model, best_f=0.2)
>>> bounds = torch.tensor([[0., 0.], [1., 2.]])
>>> Xinit = gen_batch_initial_conditions(
>>>     qEI, bounds, q=3, num_restarts=25, raw_samples=500
>>> )
>>> batch_candidates, batch_acq_values = gen_candidates_torch(
>>>     initial_conditions=Xinit,
>>>     acquisition_function=qEI,
>>>     lower_bounds=bounds[0],
>>>     upper_bounds=bounds[1],
>>> )
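
A sketch of supplying a logging callback to the call above; log_progress is a hypothetical helper whose signature follows the callback parameter documented above.

>>> def log_progress(i, loss, grads):
>>>     # Executed after loss/gradient computation, before the optimizer step.
>>>     print(f"iteration {i}: loss = {loss.item():.4f}")
>>> batch_candidates, batch_acq_values = gen_candidates_torch(
>>>     initial_conditions=Xinit,
>>>     acquisition_function=qEI,
>>>     lower_bounds=bounds[0],
>>>     upper_bounds=bounds[1],
>>>     options={"maxiter": 100},
>>>     callback=log_progress,
>>> )
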
botorch.generation.gen.get_best_candidates(batch_candidates, batch_values)[source]

Extract the best (q-batch) candidate from a batch of candidates.

Parameters:
  • batch_candidates (Tensor) – A b x q x d tensor of b q-batch candidates, or a b x d tensor of b single-point candidates.

  • batch_values (Tensor) – A tensor with b elements containing the value of the respective candidate (higher is better).

Returns:

A tensor of size q x d (in q-batch mode) or of size d, corresponding to the entry of batch_candidates with the highest associated value.

Return type:

Tensor

Example

>>> qEI = qExpectedImprovement(model, best_f=0.2)
>>> bounds = torch.tensor([[0., 0.], [1., 2.]])
>>> Xinit = gen_batch_initial_conditions(
>>>     qEI, bounds, q=3, num_restarts=25, raw_samples=500
>>> )
>>> batch_candidates, batch_acq_values = gen_candidates_scipy(
>>>     initial_conditions=Xinit,
>>>     acquisition_function=qEI,
>>>     lower_bounds=bounds[0],
>>>     upper_bounds=bounds[1],
>>> )
>>> best_candidates = get_best_candidates(batch_candidates, batch_acq_values)

Sampling Strategies

Sampling-based generation strategies.

A SamplingStrategy returns samples from the input points (i.e. Tensors in feature space), rather than a value for a set of tensors, as acquisition functions do. The q-batch dimension has similar semantics as for acquisition functions in that the points across the q-batch are considered jointly for sampling (whereas for q-acquisition functions we evaluate the joint value of the q-batch).

class botorch.generation.sampling.SamplingStrategy(*args, **kwargs)[source]

Bases: Module, ABC

Abstract base class for sampling-based generation strategies.

abstract forward(X, num_samples=1)[source]

Sample according to the SamplingStrategy.

Parameters:
  • X (Tensor) – A batch_shape x N x d-dim Tensor from which to sample (in the N dimension).

  • num_samples (int) – The number of samples to draw.

Returns:

A batch_shape x num_samples x d-dim Tensor of samples from X, where X[…, i, :] is the i-th sample.

Return type:

Tensor
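
As an illustration of this contract, here is a minimal hypothetical subclass (not part of the library) that draws num_samples points uniformly at random along the N dimension; it assumes torch and SamplingStrategy are imported.

>>> class UniformSampling(SamplingStrategy):
>>>     def forward(self, X, num_samples=1):
>>>         # Sample indices uniformly over the N dimension of X.
>>>         idx = torch.randint(X.shape[-2], X.shape[:-2] + (num_samples, 1))
>>>         # Gather the selected rows: output is batch_shape x num_samples x d.
>>>         return torch.gather(X, -2, idx.expand(*idx.shape[:-1], X.shape[-1]))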

class botorch.generation.sampling.MaxPosteriorSampling(model, objective=None, posterior_transform=None, replacement=True)[source]

Bases: SamplingStrategy

Sample from a set of points according to their max posterior value.

Example

>>> MPS = MaxPosteriorSampling(model)  # model w/ feature dim d=3
>>> X = torch.rand(2, 100, 3)
>>> sampled_X = MPS(X, num_samples=5)

Constructor for the MaxPosteriorSampling strategy.

Parameters:
  • model (Model) – A fitted model.

  • objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().

  • posterior_transform (Optional[PosteriorTransform]) – An optional PosteriorTransform.

  • replacement (bool) – If True, sample with replacement.

forward(X, num_samples=1, observation_noise=False)[source]

Sample from the model posterior.

Parameters:
  • X (Tensor) – A batch_shape x N x d-dim Tensor from which to sample (in the N dimension) according to the maximum posterior value under the objective.

  • num_samples (int) – The number of samples to draw.

  • observation_noise (bool) – If True, sample with observation noise.

Returns:

A batch_shape x num_samples x d-dim Tensor of samples from X, where X[…, i, :] is the i-th sample.

Return type:

Tensor

maximize_samples(X, samples, num_samples=1)[source]
Parameters:
  • X (Tensor)

  • samples (Tensor)

  • num_samples (int)

class botorch.generation.sampling.BoltzmannSampling(acq_func, eta=1.0, replacement=True)[source]

Bases: SamplingStrategy

Sample from a set of points according to a tempered acquisition value.

Given an acquisition function acq_func, this sampling strategy draws samples from a batch_shape x N x d-dim tensor X according to a multinomial distribution over its indices given by

weight(X[…, i, :]) ~ exp(eta * standardize(acq_func(X[…, i, :])))

where standardize(Y) standardizes Y to zero mean and unit variance. As the temperature parameter eta -> 0, this approaches uniform sampling, while as eta -> infty, this approaches selecting the maximizer(s) of the acquisition function acq_func.

Example

>>> UCB = UpperConfidenceBound(model, beta=0.1)
>>> BMUCB = BoltzmannSampling(UCB, eta=0.5)
>>> X = torch.rand(2, 100, 3)
>>> sampled_X = BMUCB(X, num_samples=5)
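
The weighting above can be illustrated directly; this is a sketch of the math using the example's UCB, not the library's internal implementation.

>>> acq_vals = UCB(X.unsqueeze(-2))  # evaluate points individually: 2 x 100
>>> # Standardize to zero mean and unit variance, then temper with eta = 0.5.
>>> z = (acq_vals - acq_vals.mean(-1, keepdim=True)) / acq_vals.std(-1, keepdim=True)
>>> weights = torch.exp(0.5 * z)
>>> idx = torch.multinomial(weights, num_samples=5, replacement=True)  # indices into N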

Boltzmann Acquisition Value Sampling.

Parameters:
  • acq_func (AcquisitionFunction) – The acquisition function, to be evaluated in batch at the individual points of a q-batch (not jointly, as is otherwise the case for acquisition functions). Can be analytic or Monte-Carlo.

  • eta (float) – The temperature parameter in the softmax.

  • replacement (bool) – If True, sample with replacement.

forward(X, num_samples=1)[source]

Sample according to a tempered value of the acquisition function.

Parameters:
  • X (Tensor) – A batch_shape x N x d-dim Tensor from which to sample (in the N dimension) according to the tempered acquisition function values. Note that if a batched model is used in the underlying acquisition function, then its batch shape must be broadcastable to batch_shape.

  • num_samples (int) – The number of samples to draw.

Returns:

A batch_shape x num_samples x d-dim Tensor of samples from X, where X[…, i, :] is the i-th sample.

Return type:

Tensor

class botorch.generation.sampling.ConstrainedMaxPosteriorSampling(model, constraint_model, objective=None, posterior_transform=None, replacement=True)[source]

Bases: MaxPosteriorSampling

Constrained max posterior sampling.

Posterior sampling where we try to maximize an objective function while simultaneously satisfying a set of constraints c1(x) <= 0, c2(x) <= 0, …, cm(x) <= 0, where c1, c2, …, cm are black-box constraint functions. Each constraint function is modeled by a separate GP model. We follow the procedure described in https://doi.org/10.48550/arxiv.2002.08526.

Example

>>> CMPS = ConstrainedMaxPosteriorSampling(
>>>     model,
>>>     constraint_model=ModelListGP(cmodel1, cmodel2),
>>> )
>>> X = torch.rand(2, 100, 3)
>>> sampled_X = CMPS(X, num_samples=5)
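
A sketch of how the models in the example above might be constructed; train_X, train_Y, train_C1, and train_C2 are assumed training data, and each constraint model predicts c(x), with feasibility meaning c(x) <= 0.

>>> from botorch.models import ModelListGP, SingleTaskGP
>>> model = SingleTaskGP(train_X, train_Y)     # objective model
>>> cmodel1 = SingleTaskGP(train_X, train_C1)  # models constraint c1(x)
>>> cmodel2 = SingleTaskGP(train_X, train_C2)  # models constraint c2(x)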

Constructor for the ConstrainedMaxPosteriorSampling strategy.

Parameters:
  • model (Model) – A fitted model.

  • objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().

  • posterior_transform (Optional[PosteriorTransform]) – An optional PosteriorTransform for the objective function (corresponding to model).

  • replacement (bool) – If True, sample with replacement.

  • constraint_model (Union[ModelListGP, MultiTaskGP]) – Either a ModelListGP where each submodel is a GP model for one constraint function, or a MultiTaskGP model where each task is one constraint function. All constraints are of the form c(x) <= 0. If the constraint model predicts that all candidates violate the constraints, we pick the candidates with the minimum violation.

forward(X, num_samples=1, observation_noise=False)[source]

Sample from the model posterior.

Parameters:
  • X (Tensor) – A batch_shape x N x d-dim Tensor from which to sample (in the N dimension) according to the maximum posterior value under the objective.

  • num_samples (int) – The number of samples to draw.

  • observation_noise (bool) – If True, sample with observation noise.

Returns:

A batch_shape x num_samples x d-dim Tensor of samples from X, where X[…, i, :] is the i-th sample.

Return type:

Tensor

Utilities