botorch.acquisition¶
botorch.acquisition.acquisition¶
Abstract base module for all botorch acquisition functions.
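As used throughout this page, the base class contract is a constructor taking a fitted model and a forward method mapping candidate points X to acquisition values. A minimal sketch of a custom subclass (hypothetical code, shown only to illustrate the pattern; it scores candidates by their posterior mean):
>>> from botorch.acquisition.acquisition import AcquisitionFunction
>>> class MyPosteriorMean(AcquisitionFunction):
...     def forward(self, X):
...         # mean has shape batch_shape x q x o; drop the q=1 and output dims
...         return self.model.posterior(X).mean.squeeze(-1).squeeze(-1)
>>> acqf = MyPosteriorMean(model)
>>> vals = acqf(test_X)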
botorch.acquisition.analytic¶
Analytic acquisition functions (not using (q-)MC sampling).
Analytic Acquisition Functions that evaluate the posterior without performing Monte-Carlo sampling.
AnalyticAcquisitionFunction¶
ExpectedImprovement¶
class botorch.acquisition.analytic.ExpectedImprovement(model, best_f, maximize=True)[source]¶
Single-outcome Expected Improvement (analytic).
Computes classic Expected Improvement over the current best observed value, using the analytic formula for a Normal posterior distribution. Unlike the MC-based acquisition functions, this relies on the posterior at a single test point being Gaussian (and requires the posterior to implement mean and variance properties). Only supports the case of q=1. The model must be single-outcome.
EI(x) = E(max(y - best_f, 0)), y ~ f(x)
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> EI = ExpectedImprovement(model, best_f=0.2)
>>> ei = EI(test_X)
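For reference, the standard closed form being evaluated (a textbook result for a Gaussian posterior, stated here for maximization; not quoted from this page) is
EI(x) = \sigma(x) [ z \Phi(z) + \varphi(z) ], where z = (\mu(x) - best_f) / \sigma(x),
with \Phi and \varphi the standard normal CDF and PDF, and \mu(x), \sigma(x) the posterior mean and standard deviation.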
Parameters:
- model (Model) – A fitted single-outcome model.
- best_f (Union[float, Tensor]) – Either a scalar or a b-dim Tensor (batch mode) representing the best function value observed so far (assumed noiseless).
- maximize (bool) – If True, consider the problem a maximization problem.
forward(X)[source]¶
Evaluate Expected Improvement on the candidate set X.
Parameters: X (Tensor) – A b1 x … bk x 1 x d-dim batched tensor of d-dim design points. Expected Improvement is computed for each point individually, i.e., what is considered are the marginal posteriors, not the joint.
Return type: Tensor
Returns: A b1 x … bk-dim tensor of Expected Improvement values at the given design points X.
PosteriorMean¶
class botorch.acquisition.analytic.PosteriorMean(model)[source]¶
Single-outcome Posterior Mean.
Only supports the case of q=1. Requires the model’s posterior to have a mean property. The model must be single-outcome.
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> PM = PosteriorMean(model)
>>> pm = PM(test_X)
Constructor for the AcquisitionFunction base class.
Parameters: model (Model) – A fitted model.
ProbabilityOfImprovement¶
class botorch.acquisition.analytic.ProbabilityOfImprovement(model, best_f, maximize=True)[source]¶
Single-outcome Probability of Improvement.
Probability of improvement over the current best observed value, computed using the analytic formula under a Normal posterior distribution. Only supports the case of q=1. Requires the posterior to be Gaussian. The model must be single-outcome.
PI(x) = P(y >= best_f), y ~ f(x)
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> PI = ProbabilityOfImprovement(model, best_f=0.2)
>>> pi = PI(test_X)
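For reference, the standard closed form (a textbook result for a Gaussian posterior, stated for maximization; not quoted from this page) is
PI(x) = \Phi( (\mu(x) - best_f) / \sigma(x) ),
with \Phi the standard normal CDF and \mu(x), \sigma(x) the posterior mean and standard deviation.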
Parameters:
- model (Model) – A fitted single-outcome model.
- best_f (Union[float, Tensor]) – Either a scalar or a b-dim Tensor (batch mode) representing the best function value observed so far (assumed noiseless).
- maximize (bool) – If True, consider the problem a maximization problem.
UpperConfidenceBound¶
class botorch.acquisition.analytic.UpperConfidenceBound(model, beta, maximize=True)[source]¶
Single-outcome Upper Confidence Bound (UCB).
Analytic upper confidence bound that comprises the posterior mean plus an additional term: the posterior standard deviation weighted by a trade-off parameter, beta. Only supports the case of q=1 (i.e. greedy, non-batch selection of design points). The model must be single-outcome.
UCB(x) = mu(x) + sqrt(beta) * sigma(x), where mu and sigma are the posterior mean and standard deviation, respectively.
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> UCB = UpperConfidenceBound(model, beta=0.2)
>>> ucb = UCB(test_X)
Parameters:
- model (Model) – A fitted single-outcome GP model (must be in batch mode if candidate sets X will be).
- beta (Union[float, Tensor]) – Either a scalar or a one-dim tensor with b elements (batch mode) representing the trade-off parameter between mean and covariance.
- maximize (bool) – If True, consider the problem a maximization problem.
ConstrainedExpectedImprovement¶
class botorch.acquisition.analytic.ConstrainedExpectedImprovement(model, best_f, objective_index, constraints, maximize=True)[source]¶
Constrained Expected Improvement (feasibility-weighted).
Computes the analytic expected improvement for a Normal posterior distribution, weighted by a probability of feasibility. The objective and constraints are assumed to be independent and have Gaussian posterior distributions. Only supports the case q=1. The model should be multi-outcome, with the index of the objective and constraints passed to the constructor.
Constrained_EI(x) = EI(x) * Product_i P(y_i in [lower_i, upper_i]), where y_i ~ constraint_i(x) and lower_i, upper_i are the lower and upper bounds for the i-th constraint, respectively.
Example
>>> # example where 0th output has a non-negativity constraint and
>>> # 1st output is the objective
>>> model = SingleTaskGP(train_X, train_Y)
>>> constraints = {0: (0.0, None)}
>>> cEI = ConstrainedExpectedImprovement(model, 0.2, 1, constraints)
>>> cei = cEI(test_X)
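Since each constraint output is assumed to have a Gaussian posterior, each feasibility factor has the standard closed form (a textbook identity, not quoted from this page)
P(lower_i <= y_i <= upper_i) = \Phi( (upper_i - \mu_i(x)) / \sigma_i(x) ) - \Phi( (lower_i - \mu_i(x)) / \sigma_i(x) ),
where a missing upper (resp. lower) bound makes the corresponding term 1 (resp. 0).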
Parameters:
- model (Model) – A fitted multi-output model.
- best_f (Union[float, Tensor]) – Either a scalar or a b-dim Tensor (batch mode) representing the best function value observed so far (assumed noiseless).
- objective_index (int) – The index of the objective.
- constraints (Dict[int, Tuple[Optional[float], Optional[float]]]) – A dictionary of the form {i: (lower, upper)}, where i is the output index, and lower and upper are lower and upper bounds on that output (resp. interpreted as -Inf / Inf if None).
- maximize (bool) – If True, consider the problem a maximization problem.
NoisyExpectedImprovement¶
class botorch.acquisition.analytic.NoisyExpectedImprovement(model, X_observed, num_fantasies=20, maximize=True)[source]¶
Single-outcome Noisy Expected Improvement (via fantasies).
This computes Noisy Expected Improvement by averaging over the Expected Improvement values of a number of fantasy models. Only supports the case q=1. Assumes that the posterior distribution of the model is Gaussian. The model must be single-outcome.
NEI(x) = E(max(y - max Y_baseline, 0)), (y, Y_baseline) ~ f((x, X_baseline)), where X_baseline are previously observed points.
Note: This acquisition function currently relies on using a FixedNoiseGP (required for noiseless fantasies).
Example
>>> model = FixedNoiseGP(train_X, train_Y, train_Yvar=train_Yvar)
>>> NEI = NoisyExpectedImprovement(model, train_X)
>>> nei = NEI(test_X)
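In other words, with fantasy models f_1, …, f_K for K = num_fantasies, the estimate is the average of the per-fantasy analytic EI values (this restates the description above in symbols):
NEI(x) ≈ (1/K) \sum_{k=1}^{K} EI_k(x),
where EI_k is Expected Improvement computed under fantasy model f_k with that fantasy's best observed baseline value.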
Parameters:
- model (GPyTorchModel) – A fitted single-outcome model.
- X_observed (Tensor) – A m x d Tensor of observed points that are likely to be the best observed points so far.
- num_fantasies (int) – The number of fantasies to generate. The higher this number the more accurate the model (at the expense of model complexity and performance).
- maximize (bool) – If True, consider the problem a maximization problem.
botorch.acquisition.monte_carlo¶
Batch acquisition functions using the reparameterization trick in combination with (quasi) Monte-Carlo sampling. See [Rezende2014reparam] and [Wilson2017reparam].
[Rezende2014reparam] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML 2014.
[Wilson2017reparam] J. T. Wilson, R. Moriconi, F. Hutter, and M. P. Deisenroth. The reparameterization trick for acquisition functions. ArXiv 2017.
MCAcquisitionFunction¶
class botorch.acquisition.monte_carlo.MCAcquisitionFunction(model, sampler=None, objective=None)[source]¶
Abstract base class for Monte-Carlo based batch acquisition functions.
Constructor for the MCAcquisitionFunction base class.
Parameters:
- model (Model) – A fitted model.
- sampler (Optional[MCSampler]) – The sampler used to draw base samples. Defaults to SobolQMCNormalSampler(num_samples=500, collapse_batch_dims=True).
- objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().
qExpectedImprovement¶
class botorch.acquisition.monte_carlo.qExpectedImprovement(model, best_f, sampler=None, objective=None)[source]¶
MC-based batch Expected Improvement.
This computes qEI by (1) sampling the joint posterior over q points, (2) evaluating the improvement over the current best for each sample, (3) maximizing over q, and (4) averaging over the samples.
qEI(X) = E(max(max Y - best_f, 0)), Y ~ f(X), where X = (x_1,…,x_q)
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> best_f = train_Y.max()
>>> sampler = SobolQMCNormalSampler(1000)
>>> qEI = qExpectedImprovement(model, best_f, sampler)
>>> qei = qEI(test_X)
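The four steps above can be sketched directly in torch (illustrative only, with hypothetical shapes; the class's forward additionally handles objectives and batch dimensions):
>>> import torch
>>> samples = torch.randn(1000, 4)  # stand-in for step (1): num_mc x q posterior samples
>>> improvement = (samples - best_f).clamp_min(0.0)  # step (2)
>>> qei_estimate = improvement.max(dim=-1).values.mean(dim=0)  # steps (3) and (4)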
Parameters:
- model (Model) – A fitted model.
- best_f (Union[float, Tensor]) – The best (feasible) function value observed so far (assumed noiseless).
- sampler (Optional[MCSampler]) – The sampler used to draw base samples. Defaults to SobolQMCNormalSampler(num_samples=500, collapse_batch_dims=True).
- objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().
qNoisyExpectedImprovement¶
class botorch.acquisition.monte_carlo.qNoisyExpectedImprovement(model, X_baseline, sampler=None, objective=None)[source]¶
MC-based batch Noisy Expected Improvement.
This function does not assume a best_f is known (which would require noiseless observations). Instead, it uses samples from the joint posterior over the q test points and previously observed points. The improvement over previously observed points is computed for each sample and averaged.
qNEI(X) = E(max(max Y - max Y_baseline, 0)), where (Y, Y_baseline) ~ f((X, X_baseline)), X = (x_1,…,x_q)
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> sampler = SobolQMCNormalSampler(1000)
>>> qNEI = qNoisyExpectedImprovement(model, train_X, sampler)
>>> qnei = qNEI(test_X)
Parameters:
- model (Model) – A fitted model.
- X_baseline (Tensor) – A m x d-dim Tensor of m design points that have either already been observed or whose evaluation is pending. These points are considered as potential candidates for the best design point.
- sampler (Optional[MCSampler]) – The sampler used to draw base samples. Defaults to SobolQMCNormalSampler(num_samples=500, collapse_batch_dims=True).
- objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().
qProbabilityOfImprovement¶
class botorch.acquisition.monte_carlo.qProbabilityOfImprovement(model, best_f, sampler=None, objective=None, tau=0.001)[source]¶
MC-based batch Probability of Improvement.
Estimates the probability of improvement over the current best observed value by sampling from the joint posterior distribution of the q-batch. MC-based estimates of a probability involve taking the expectation of an indicator function; to support auto-differentiation, the indicator is replaced with a sigmoid function with temperature parameter tau.
qPI(X) = P(max Y >= best_f), Y ~ f(X), X = (x_1,…,x_q)
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> best_f = train_Y.max()
>>> sampler = SobolQMCNormalSampler(1000)
>>> qPI = qProbabilityOfImprovement(model, best_f, sampler)
>>> qpi = qPI(test_X)
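The sigmoid relaxation can be sketched as follows (illustrative only, with hypothetical shapes):
>>> import torch
>>> samples = torch.randn(1000, 4)  # stand-in: num_mc x q posterior samples
>>> max_obj = samples.max(dim=-1).values  # max over the q-batch
>>> indicator = torch.sigmoid((max_obj - best_f) / 0.001)  # smooth 1{max Y >= best_f}, tau=0.001
>>> qpi_estimate = indicator.mean(dim=0)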
Parameters:
- model (Model) – A fitted model.
- best_f (Union[float, Tensor]) – The best (feasible) function value observed so far (assumed noiseless).
- sampler (Optional[MCSampler]) – The sampler used to draw base samples. Defaults to SobolQMCNormalSampler(num_samples=500, collapse_batch_dims=True).
- objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().
- tau (float) – The temperature parameter used in the sigmoid approximation of the step function. Smaller values yield more accurate approximations of the function, but result in gradient estimates with higher variance.
qSimpleRegret¶
class botorch.acquisition.monte_carlo.qSimpleRegret(model, sampler=None, objective=None)[source]¶
MC-based batch Simple Regret.
Samples from the joint posterior over the q-batch and computes the simple regret.
qSR(X) = E(max Y), Y ~ f(X), X = (x_1,…,x_q)
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> sampler = SobolQMCNormalSampler(1000)
>>> qSR = qSimpleRegret(model, sampler)
>>> qsr = qSR(test_X)
Constructor for the MCAcquisitionFunction base class.
Parameters:
- model (Model) – A fitted model.
- sampler (Optional[MCSampler]) – The sampler used to draw base samples. Defaults to SobolQMCNormalSampler(num_samples=500, collapse_batch_dims=True).
- objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().
qUpperConfidenceBound¶
class botorch.acquisition.monte_carlo.qUpperConfidenceBound(model, beta, sampler=None, objective=None)[source]¶
MC-based batch Upper Confidence Bound.
Uses a reparameterization to extend UCB to qUCB for q > 1 (see Appendix A of [Wilson2017reparam]).
qUCB = E(max(mu + |Y_tilde - mu|)), where Y_tilde ~ N(mu, beta pi/2 Sigma) and f(X) has distribution N(mu, Sigma).
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> sampler = SobolQMCNormalSampler(1000)
>>> qUCB = qUpperConfidenceBound(model, 0.1, sampler)
>>> qucb = qUCB(test_X)
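The reparameterized estimator can be sketched as follows (illustrative only; the empirical mean below stands in for the posterior mean mu, since if Y ~ N(mu, Sigma) then mu + sqrt(beta * pi / 2) * (Y - mu) has the distribution of Y_tilde above):
>>> import math, torch
>>> samples = torch.randn(1000, 4)  # stand-in: num_mc x q posterior samples
>>> mu = samples.mean(dim=0)
>>> ucb_samples = mu + math.sqrt(0.1 * math.pi / 2) * (samples - mu).abs()  # beta=0.1
>>> qucb_estimate = ucb_samples.max(dim=-1).values.mean(dim=0)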
Parameters:
- model (Model) – A fitted model.
- beta (float) – Controls tradeoff between mean and standard deviation in UCB.
- sampler (Optional[MCSampler]) – The sampler used to draw base samples. Defaults to SobolQMCNormalSampler(num_samples=500, collapse_batch_dims=True).
- objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().
botorch.acquisition.objective¶
Objective Modules to be used with acquisition functions.
MCAcquisitionObjective¶
IdentityMCObjective¶
LinearMCObjective¶
class botorch.acquisition.objective.LinearMCObjective(weights)[source]¶
Linear objective constructed from a weight tensor.
For input samples and mc_obj = LinearMCObjective(weights), this produces mc_obj(samples) = sum_{i} weights[i] * samples[…, i].
Example
>>> # example for two outcomes
>>> weights = torch.tensor([0.75, 0.25])
>>> linear_objective = LinearMCObjective(weights)
>>> samples = sampler(posterior)
>>> objective = linear_objective(samples)
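Equivalently, the weighted sum the objective computes can be written out directly (illustrative, with hypothetical shapes):
>>> import torch
>>> samples = torch.rand(16, 1, 2)  # hypothetical sample_shape x q x o samples
>>> weights = torch.tensor([0.75, 0.25])
>>> obj = (samples * weights).sum(dim=-1)  # sum_i weights[i] * samples[..., i]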
Parameters: weights (Tensor) – A one-dimensional tensor with o elements representing the linear weights on the outputs.
GenericMCObjective¶
class botorch.acquisition.objective.GenericMCObjective(objective)[source]¶
Objective generated from a generic callable.
Allows construction of arbitrary MC objective functions from a generic callable. In order to use gradient-based acquisition function optimization, it must be possible to backpropagate through the callable.
Example
>>> generic_objective = GenericMCObjective(lambda Y: torch.sqrt(Y).sum(dim=-1))
>>> samples = sampler(posterior)
>>> objective = generic_objective(samples)
Parameters: objective (Callable[[Tensor], Tensor]) – A callable mapping a sample_shape x batch-shape x q x o-dim Tensor to a sample_shape x batch-shape x q-dim Tensor of objective values.
forward(samples)[source]¶
Evaluate the objective on the samples.
Parameters: samples (Tensor) – A sample_shape x batch_shape x q x o-dim Tensor of samples from a model posterior.
Return type: Tensor
Returns: A sample_shape x batch_shape x q-dim Tensor of objective values.
ConstrainedMCObjective¶
class botorch.acquisition.objective.ConstrainedMCObjective(objective, constraints, infeasible_cost=0.0, eta=0.001)[source]¶
Feasibility-weighted objective.
An Objective that allows maximizing some scalable objective on the model outputs subject to a number of constraints. Constraint feasibility is approximated by a sigmoid function.
mc_acq(X) = objective(X) * prod_i (1 - sigmoid(constraint_i(X))) TODO: Document functional form exactly.
See botorch.utils.objective.apply_constraints for details on the constraint handling.
Example
>>> bound = 0.0
>>> objective = lambda Y: Y[..., 0]
>>> # apply non-negativity constraint on f(x)[1]
>>> constraint = lambda Y: bound - Y[..., 1]
>>> constrained_objective = ConstrainedMCObjective(objective, [constraint])
>>> samples = sampler(posterior)
>>> objective = constrained_objective(samples)
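The weighting itself can be sketched as follows for the example above (a simplified illustration; the apply_constraints utility additionally handles infeasible_cost):
>>> import torch
>>> samples = torch.randn(1000, 1, 2)  # hypothetical sample_shape x q x o samples
>>> obj = samples[..., 0]
>>> con = 0.0 - samples[..., 1]  # negative values imply feasibility
>>> weighted_obj = obj * torch.sigmoid(-con / 0.001)  # eta=0.001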
Parameters:
- objective (Callable[[Tensor], Tensor]) – A callable mapping a sample_shape x batch-shape x q x o-dim Tensor to a sample_shape x batch-shape x q-dim Tensor of objective values.
- constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of dimension sample_shape x batch-shape x q x o to a Tensor of dimension sample_shape x batch-shape x q, where negative values imply feasibility.
- infeasible_cost (float) – The cost of a design if all associated samples are infeasible.
- eta (float) – The temperature parameter of the sigmoid function approximating the constraint.
forward(samples)[source]¶
Evaluate the feasibility-weighted objective on the samples.
Parameters: samples (Tensor) – A sample_shape x batch_shape x q x o-dim Tensor of samples from a model posterior.
Return type: Tensor
Returns: A sample_shape x batch_shape x q-dim Tensor of objective values weighted by feasibility (assuming maximization).
botorch.acquisition.sampler¶
Sampler modules to be used with MC-evaluated acquisition functions.
MCSampler¶
class botorch.acquisition.sampler.MCSampler[source]¶
Abstract base class for Samplers.
Subclasses must implement the _construct_base_samples method.
- sample_shape¶ The shape of each sample.
- resample¶ If True, re-draw samples in each forward evaluation - this results in stochastic acquisition functions (and thus should not be used with deterministic optimization algorithms).
- collapse_batch_dims¶ If True, collapse the t-batch dimensions of the produced samples to size 1. This is useful for preventing sampling variance across t-batches.
forward(posterior)[source]¶
Draws MC samples from the posterior. This method is usually not called directly, but via the sampler's __call__ method:
>>> posterior = model.posterior(test_X)
>>> samples = sampler(posterior)
Parameters: posterior (Posterior) – The Posterior to sample from.
Return type: Tensor
Returns: The samples drawn from the posterior.
sample_shape¶ The shape of a single sample.
Return type: Size
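Since subclasses must implement _construct_base_samples, a custom sampler might look roughly like this (a hypothetical sketch; the exact method signature and attribute handling below are assumptions, not quoted from this page):
>>> import torch
>>> from botorch.acquisition.sampler import MCSampler
>>> class NaiveIIDSampler(MCSampler):
...     def __init__(self, num_samples):
...         super().__init__()
...         self._sample_shape = torch.Size([num_samples])
...         self.resample = True  # re-draw base samples on every evaluation
...         self.collapse_batch_dims = False
...     def _construct_base_samples(self, posterior, shape):
...         # assumption: the base class consumes base samples via this attribute
...         self.base_samples = torch.randn(shape)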
IIDNormalSampler¶
class botorch.acquisition.sampler.IIDNormalSampler(num_samples, resample=False, seed=None, collapse_batch_dims=True)[source]¶
Sampler for MC base samples using iid N(0,1) samples.
Example
>>> sampler = IIDNormalSampler(1000, seed=1234)
>>> posterior = model.posterior(test_X)
>>> samples = sampler(posterior)
Parameters:
- num_samples (int) – The number of samples to use.
- resample (bool) – If True, re-draw samples in each forward evaluation - this results in stochastic acquisition functions (and thus should not be used with deterministic optimization algorithms).
- seed (Optional[int]) – The seed for the RNG. If omitted, use a random seed.
- collapse_batch_dims (bool) – If True, collapse the t-batch dimensions to size 1. This is useful for preventing sampling variance across t-batches.
SobolQMCNormalSampler¶
class botorch.acquisition.sampler.SobolQMCNormalSampler(num_samples, resample=False, seed=None, collapse_batch_dims=True)[source]¶
Sampler for quasi-MC base samples using Sobol sequences.
Example
>>> sampler = SobolQMCNormalSampler(1000, seed=1234)
>>> posterior = model.posterior(test_X)
>>> samples = sampler(posterior)
Parameters:
- num_samples (int) – The number of samples to use.
- resample (bool) – If True, re-draw samples in each forward evaluation - this results in stochastic acquisition functions (and thus should not be used with deterministic optimization algorithms).
- seed (Optional[int]) – The seed for the RNG. If omitted, use a random seed.
- collapse_batch_dims (bool) – If True, collapse the t-batch dimensions to size 1. This is useful for preventing sampling variance across t-batches.
botorch.acquisition.utils¶
Utilities for acquisition functions.
botorch.acquisition.utils.get_acquisition_function(acquisition_function_name, model, objective, X_observed, X_pending=None, mc_samples=500, qmc=True, seed=None, **kwargs)[source]¶
Convenience function for initializing botorch acquisition functions.
Parameters:
- acquisition_function_name (str) – Name of the acquisition function.
- model (Model) – A fitted model.
- objective (MCAcquisitionObjective) – A MCAcquisitionObjective.
- X_observed (Tensor) – A m1 x d-dim Tensor of m1 design points that have already been observed.
- X_pending (Optional[Tensor]) – A m2 x d-dim Tensor of m2 design points whose evaluation is pending.
- mc_samples (int) – The number of samples to use for (q)MC evaluation of the acquisition function.
- qmc (bool) – If True, use quasi-Monte-Carlo sampling (instead of iid).
- seed (Optional[int]) – If provided, perform deterministic optimization (i.e. the function to optimize is fixed and not stochastic).
Returns: The requested acquisition function.
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> obj = LinearMCObjective(weights=torch.tensor([1.0, 2.0]))
>>> acqf = get_acquisition_function("qEI", model, obj, train_X)
botorch.acquisition.utils.get_infeasible_cost(X, model, objective=<function squeeze_last_dim>)[source]¶
Get infeasible cost for a model and objective.
Computes an infeasible cost M such that -M < min_x f(x) almost always, so that feasible points are preferred.
Parameters:
- X (Tensor) – A m x d Tensor of m design points to use in evaluating the minimum. These points should cover the design space well. The more points the better the estimate, at the expense of added computation.
- model (Model) – A fitted botorch model.
- objective (Callable[[Tensor], Tensor]) – The objective with which to evaluate the model output.
Return type: float
Returns: The infeasible cost M value.
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> objective = lambda Y: Y[..., -1] ** 2
>>> M = get_infeasible_cost(train_X, model, objective)
botorch.acquisition.utils.is_nonnegative(acq_function)[source]¶
Determine whether a given acquisition function is non-negative.
Parameters: acq_function (AcquisitionFunction) – The AcquisitionFunction instance.
Return type: bool
Returns: True if acq_function is non-negative, False if not, or if the behavior is unknown (for custom acquisition functions).
Example
>>> qEI = qExpectedImprovement(model, best_f=0.1)
>>> is_nonnegative(qEI)  # returns True