botorch.utils¶
botorch.utils.constraints¶
Helpers for handling outcome constraints.
-
botorch.utils.constraints.get_outcome_constraint_transforms(outcome_constraints)[source]¶
Create outcome constraint callables from outcome constraint tensors.
- Parameters
outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is k x m and b is k x 1 such that A f(x) <= b.
- Return type
Optional[List[Callable[[Tensor], Tensor]]]
- Returns
A list of callables, each mapping a Tensor of size b x q x m to a tensor of size b x q, where m is the number of outputs of the model. Negative values imply feasibility. The callables support broadcasting (e.g. for calling on a tensor of shape mc_samples x b x q x m).
Example
>>> # constrain `f(x)[0] <= 0`
>>> A = torch.tensor([[1., 0.]])
>>> b = torch.tensor([[0.]])
>>> outcome_constraints = get_outcome_constraint_transforms((A, b))
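The returned callables can then be evaluated directly on posterior samples; a minimal sketch (sample values here chosen purely for illustration, shape b x q x m = 1 x 1 x 2):
>>> samples = torch.tensor([[[-0.5, 1.0]]])
>>> outcome_constraints[0](samples)  # negative values imply feasibility
tensor([[-0.5000]])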
botorch.utils.objective¶
Helpers for handling objectives.
-
botorch.utils.objective.apply_constraints(obj, constraints, samples, infeasible_cost, eta=0.001)[source]¶
Apply constraints using an infeasible_cost M for negative objectives.
This allows feasibility-weighting an objective for the case where the objective can be negative, using the following strategy: (1) add M to make obj nonnegative, (2) apply the constraints using the sigmoid approximation, (3) shift by -M.
- Parameters
obj (Tensor) – A n_samples x b x q Tensor of objective values.
constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of size b x q x m to a Tensor of size b x q, where negative values imply feasibility. The callables must support broadcasting. Only relevant for multi-output models (m > 1).
samples (Tensor) – A b x q x m Tensor of samples drawn from the posterior.
infeasible_cost (float) – The infeasible value.
eta (float) – The temperature parameter of the sigmoid function.
- Return type
Tensor
- Returns
A n_samples x b x q-dim tensor of feasibility-weighted objectives.
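Example
A minimal sketch of the three-step weighting (tensor values and the constraint callable below are hypothetical, chosen only to satisfy the documented shapes):
>>> obj = torch.tensor([[[-1.0]]])  # n_samples x b x q
>>> samples = torch.tensor([[[0.5]]])  # b x q x m
>>> constraints = [lambda Z: Z[..., 0]]  # feasible iff sample <= 0
>>> weighted = apply_constraints(obj, constraints, samples, infeasible_cost=2.0)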
-
botorch.utils.objective.apply_constraints_nonnegative_soft(obj, constraints, samples, eta)[source]¶
Applies constraints to a non-negative objective.
This function uses a sigmoid approximation to an indicator function for each constraint.
- Parameters
obj (Tensor) – A n_samples x b x q Tensor of objective values.
constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of size b x q x m to a Tensor of size b x q, where negative values imply feasibility. The callables must support broadcasting. Only relevant for multi-output models (m > 1).
samples (Tensor) – A b x q x m Tensor of samples drawn from the posterior.
eta (float) – The temperature parameter for the sigmoid function.
- Return type
Tensor
- Returns
A n_samples x b x q-dim tensor of feasibility-weighted objectives.
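Example
A hedged sketch mirroring the one for apply_constraints, assuming the objective is already non-negative (values and the constraint callable are hypothetical):
>>> obj = torch.tensor([[[1.0]]])  # n_samples x b x q, non-negative
>>> samples = torch.tensor([[[-0.5]]])  # b x q x m
>>> constraints = [lambda Z: Z[..., 0]]  # feasible iff sample <= 0
>>> weighted = apply_constraints_nonnegative_soft(obj, constraints, samples, eta=0.001)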
-
botorch.utils.objective.get_objective_weights_transform(weights)[source]¶
Create a linear objective callable from a set of weights.
Create a callable mapping a Tensor of size b x q x m to a Tensor of size b x q, where m is the number of outputs of the model, using scalarization via the objective weights. This callable supports broadcasting (e.g. for calling on a tensor of shape mc_samples x b x q x m). For m = 1, the objective weight is used to determine the optimization direction.
- Parameters
weights (Optional[Tensor]) – A 1-dimensional Tensor containing a weight for each task. If not provided, the identity mapping is used.
- Return type
Callable[[Tensor], Tensor]
- Returns
Transform function using the objective weights.
Example
>>> weights = torch.tensor([0.75, 0.25])
>>> transform = get_objective_weights_transform(weights)
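The returned transform scalarizes the output dimension; a minimal illustration (sample values chosen arbitrarily):
>>> samples = torch.tensor([[[1.0, 2.0]]])  # b x q x m
>>> transform(samples)  # 0.75 * 1.0 + 0.25 * 2.0
tensor([[1.2500]])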
-
botorch.utils.objective.soft_eval_constraint(lhs, eta=0.001)[source]¶
Element-wise evaluation of a constraint in a ‘soft’ fashion.
value(x) = 1 / (1 + exp(x / eta))
- Parameters
lhs (Tensor) – The left hand side of the constraint lhs <= 0.
eta (float) – The temperature parameter of the sigmoid function. As eta decreases toward zero, the approximation becomes sharper, approaching a (reversed) Heaviside step function.
- Return type
Tensor
- Returns
Element-wise ‘soft’ feasibility indicator of the same shape as lhs. For each element x, value(x) -> 0 as x becomes positive, and value(x) -> 1 as x becomes negative.
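Example
A small numerical sketch of the sigmoid above (eta = 1.0 used here only for readability):
>>> lhs = torch.tensor([-2.0, 0.0, 2.0])
>>> soft_eval_constraint(lhs, eta=1.0)
tensor([0.8808, 0.5000, 0.1192])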
botorch.utils.sampling¶
Utilities for MC and qMC sampling.
-
botorch.utils.sampling.construct_base_samples(batch_shape, output_shape, sample_shape, qmc=True, seed=None, device=None, dtype=None)[source]¶
Construct base samples from a multi-variate standard normal N(0, I_qm).
- Parameters
batch_shape (Size) – The batch shape of the base samples to generate. Typically, this is used with each dimension of size 1, so as to eliminate sampling variance across batches.
output_shape (Size) – The output shape (q x m) of the base samples to generate.
sample_shape (Size) – The sample shape of the samples to draw.
qmc (bool) – If True, use quasi-MC sampling (instead of iid draws).
seed (Optional[int]) – If provided, use as a seed for the RNG.
- Return type
Tensor
- Returns
A sample_shape x batch_shape x output_shape dimensional tensor of base samples, drawn from a N(0, I_qm) distribution (using QMC if qmc=True). Here output_shape = q x m.
Example
>>> batch_shape = torch.Size([2])
>>> output_shape = torch.Size([3])
>>> sample_shape = torch.Size([10])
>>> samples = construct_base_samples(batch_shape, output_shape, sample_shape)
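Per the Returns description, the result has shape sample_shape x batch_shape x output_shape:
>>> samples.shape
torch.Size([10, 2, 3])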
-
botorch.utils.sampling.construct_base_samples_from_posterior(posterior, sample_shape, qmc=True, collapse_batch_dims=True, seed=None)[source]¶
Construct a tensor of normally distributed base samples.
- Parameters
posterior (Posterior) – A Posterior object.
sample_shape (Size) – The sample shape of the samples to draw.
qmc (bool) – If True, use quasi-MC sampling (instead of iid draws).
seed (Optional[int]) – If provided, use as a seed for the RNG.
- Return type
Tensor
- Returns
A num_samples x 1 x q x m dimensional Tensor of base samples, drawn from a N(0, I_qm) distribution (using QMC if qmc=True). Here q and m are the same as in the posterior’s event_shape b x q x m. Importantly, this only obtains a single t-batch of samples, so as to not introduce any sampling variance across t-batches.
Example
>>> sample_shape = torch.Size([10])
>>> samples = construct_base_samples_from_posterior(posterior, sample_shape)
-
botorch.utils.sampling.draw_sobol_normal_samples(d, n, device=None, dtype=None, seed=None)[source]¶
Draw qMC samples from a multi-variate standard normal N(0, I_d).
A primary use-case for this functionality is to compute a QMC average of f(X) over X, where each element of X is drawn from N(0, 1).
- Parameters
d (int) – The dimension of the normal distribution.
n (int) – The number of samples to return.
device (Optional[device]) – The torch device.
dtype (Optional[dtype]) – The torch dtype.
seed (Optional[int]) – The seed used for initializing Owen scrambling. If None (default), use a random seed.
- Return type
Tensor
- Returns
A tensor of qMC standard normal samples with dimension n x d with device and dtype specified by the input.
Example
>>> samples = draw_sobol_normal_samples(2, 10)
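Following the use-case above, a QMC average of f(X) can then be computed directly from the samples (the f here is an arbitrary stand-in):
>>> X = draw_sobol_normal_samples(d=2, n=256)
>>> f_avg = torch.sin(X).pow(2).sum(dim=-1).mean()  # QMC estimate of E[f(X)]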
-
botorch.utils.sampling.draw_sobol_samples(bounds, n, q, seed=None)[source]¶
Draw qMC samples from the box defined by bounds.
- Parameters
bounds (Tensor) – A 2 x d dimensional tensor specifying box constraints on a d-dimensional space, where bounds[0, :] and bounds[1, :] correspond to lower and upper bounds, respectively.
n (int) – The number of (q-batch) samples.
q (int) – The size of each q-batch.
seed (Optional[int]) – The seed used for initializing Owen scrambling. If None (default), use a random seed.
- Return type
Tensor
- Returns
A n x q x d-dim tensor of qMC samples from the box defined by bounds.
Example
>>> bounds = torch.stack([torch.zeros(3), torch.ones(3)])
>>> samples = draw_sobol_samples(bounds, 10, 2)
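Per the Returns description, this yields an n x q x d tensor whose values lie within the box:
>>> samples.shape
torch.Size([10, 2, 3])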
-
botorch.utils.sampling.manual_seed(seed=None)[source]¶
Context manager for manually setting the torch.random seed.
- Parameters
seed (Optional[int]) – The seed to set the random number generator to.
- Return type
Generator[None, None, None]
- Returns
Generator
Example
>>> with manual_seed(1234):
>>>     X = torch.rand(3)
botorch.utils.transforms¶
Some basic data transformation helpers.
-
botorch.utils.transforms.concatenate_pending_points(method)[source]¶
Decorator concatenating X_pending into an acquisition function’s argument.
This decorator works on the forward method of acquisition functions taking a tensor X as the argument. If the acquisition function has an X_pending attribute (that is not None), this is concatenated into the input X, appropriately expanding the pending points to match the batch shape of X.
Example
>>> class ExampleAcquisitionFunction:
>>>     @concatenate_pending_points
>>>     @t_batch_mode_transform()
>>>     def forward(self, X):
>>>         ...
- Return type
Callable[[Any, Tensor], Any]
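As a minimal sketch of the decorator’s effect (the class and tensor values here are hypothetical), pending points are appended along the q-batch dimension:
>>> class ShapeReporter:
>>>     X_pending = torch.ones(2, 3)  # 2 pending points, d = 3
>>>     @concatenate_pending_points
>>>     def forward(self, X):
>>>         return X.shape
>>> ShapeReporter().forward(torch.zeros(5, 1, 3))  # q grows from 1 to 3
torch.Size([5, 3, 3])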
-
botorch.utils.transforms.convert_to_target_pre_hook(module, *args)[source]¶
Pre-hook for automatically calling .to(X) on module prior to forward.
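Example
A hedged usage sketch: registering the pre-hook on a standard torch.nn.Module via PyTorch’s register_forward_pre_hook, so the module is converted to the device/dtype of its input before each forward call:
>>> module = torch.nn.Linear(3, 1)
>>> handle = module.register_forward_pre_hook(convert_to_target_pre_hook)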
-
botorch.utils.transforms.match_batch_shape(X, Y)[source]¶
Matches the batch dimension of a tensor to that of another tensor.
- Parameters
X (Tensor) – A batch_shape_X x q x d tensor, whose batch dimensions corresponding to batch dimensions of Y are to be matched to those of Y (if compatible).
Y (Tensor) – A batch_shape_Y x q’ x d tensor.
- Return type
Tensor
- Returns
A batch_shape_Y x q x d tensor containing the data of X expanded to the batch dimensions of Y (if compatible). For instance, if X is b’’ x b’ x q x d and Y is b x q x d, then the returned tensor is b’’ x b x q x d.
Example
>>> X = torch.rand(2, 1, 5, 3)
>>> Y = torch.rand(2, 6, 4, 3)
>>> X_matched = match_batch_shape(X, Y)
>>> X_matched.shape
torch.Size([2, 6, 5, 3])
-
botorch.utils.transforms.normalize(X, bounds)[source]¶
Min-max normalize X w.r.t. the provided bounds.
- Parameters
X (Tensor) – A … x d tensor of data.
bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each of X’s d columns.
- Return type
Tensor
- Returns
A … x d-dim tensor of normalized data, given by (X - bounds[0]) / (bounds[1] - bounds[0]). If all elements of X are contained within bounds, the normalized values will be contained within [0, 1]^d.
Example
>>> X = torch.rand(4, 3)
>>> bounds = torch.stack([torch.zeros(3), 0.5 * torch.ones(3)])
>>> X_normalized = normalize(X, bounds)
-
botorch.utils.transforms.squeeze_last_dim(Y)[source]¶
Squeeze the last dimension of a Tensor.
- Parameters
Y (Tensor) – A … x d-dim Tensor.
- Return type
Tensor
- Returns
The input tensor with last dimension squeezed.
Example
>>> Y = torch.rand(4, 3)
>>> Y_squeezed = squeeze_last_dim(Y)
-
botorch.utils.transforms.standardize(Y)[source]¶
Standardizes (zero mean, unit variance) a tensor by dim=-2.
If the tensor is single-dimensional, simply standardizes the tensor. If for some batch index all elements are equal (or if there is only a single data point), this function will return 0 for that batch index.
- Parameters
Y (Tensor) – A batch_shape x n x m-dim tensor.
- Return type
Tensor
- Returns
The standardized Y.
Example
>>> Y = torch.rand(4, 3)
>>> Y_standardized = standardize(Y)
-
botorch.utils.transforms.t_batch_mode_transform(expected_q=None)[source]¶
Factory for decorators taking a t-batched X tensor.
This method creates decorators for instance methods to transform an input tensor X to t-batch mode (i.e. with at least 3 dimensions). This assumes the tensor has a q-batch dimension. The decorator also checks the q-batch size if expected_q is provided.
- Parameters
expected_q (Optional[int]) – The expected q-batch size of X. If specified, this will raise an AssertionError if X’s q-batch size does not equal expected_q.
- Return type
Callable[[Callable[[Any, Tensor], Any]], Callable[[Any, Tensor], Any]]
- Returns
A decorator for the instance method.
Example
>>> class ExampleClass:
>>>     @t_batch_mode_transform(expected_q=1)
>>>     def single_q_method(self, X):
>>>         ...
>>>
>>>     @t_batch_mode_transform()
>>>     def arbitrary_q_method(self, X):
>>>         ...
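A minimal sketch of the shape handling (the class here is hypothetical): a q x d input gains a t-batch dimension before reaching the method body:
>>> class ShapeEcho:
>>>     @t_batch_mode_transform()
>>>     def forward(self, X):
>>>         return X.shape
>>> ShapeEcho().forward(torch.rand(4, 2))
torch.Size([1, 4, 2])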
-
botorch.utils.transforms.unnormalize(X, bounds)[source]¶
Un-normalizes X w.r.t. the provided bounds.
- Parameters
X (Tensor) – A … x d tensor of data.
bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each of X’s d columns.
- Return type
Tensor
- Returns
A … x d-dim tensor of unnormalized data, given by X * (bounds[1] - bounds[0]) + bounds[0]. If all elements of X are contained in [0, 1]^d, the un-normalized values will be contained within bounds.
Example
>>> X_normalized = torch.rand(4, 3)
>>> bounds = torch.stack([torch.zeros(3), 0.5 * torch.ones(3)])
>>> X = unnormalize(X_normalized, bounds)
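Since unnormalize inverts normalize for the same bounds, a quick round-trip check:
>>> torch.allclose(normalize(X, bounds), X_normalized)
True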