botorch.utils¶
Constraints¶
Helpers for handling outcome constraints.
- botorch.utils.constraints.get_outcome_constraint_transforms(outcome_constraints)[source]¶
Create outcome constraint callables from outcome constraint tensors.
- Parameters:
outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is k x m and b is k x 1 such that A f(x) <= b.
- Returns:
A list of callables, each mapping a Tensor of size b x q x m to a tensor of size b x q, where m is the number of outputs of the model. Negative values imply feasibility. The callables support broadcasting (e.g. for calling on a tensor of shape mc_samples x b x q x m).
- Return type:
Optional[List[Callable[[Tensor], Tensor]]]
Example
>>> # constrain `f(x)[0] <= 0`
>>> A = torch.tensor([[1., 0.]])
>>> b = torch.tensor([[0.]])
>>> outcome_constraints = get_outcome_constraint_transforms((A, b))
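The returned callables can then be applied to posterior samples; a minimal sketch continuing the example above (tensor shapes follow the description, values are arbitrary):
>>> samples = torch.randn(4, 2, 3, 2)  # mc_samples x b x q x m
>>> feas = outcome_constraints[0](samples)  # mc_samples x b x q; negative values imply feasibility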
Containers¶
Representations for different kinds of data.
- class botorch.utils.containers.DenseContainer(values, event_shape)[source]¶
Bases:
BotorchContainer
Basic representation of data stored as a dense Tensor.
- Parameters:
values (Tensor) –
event_shape (Size) –
- values: Tensor¶
- event_shape: Size¶
- property shape: Size¶
- property device: device¶
- property dtype: dtype¶
- class botorch.utils.containers.SliceContainer(values, indices, event_shape)[source]¶
Bases:
BotorchContainer
Represent data points formed by concatenating (n-1)-dimensional slices taken from the leading dimension of an n-dimensional source tensor.
- Parameters:
values (Tensor) –
indices (LongTensor) –
event_shape (Size) –
- values: Tensor¶
- indices: LongTensor¶
- event_shape: Size¶
- property shape: Size¶
- property device: device¶
- property dtype: dtype¶
Datasets¶
Representations for different kinds of datasets.
- class botorch.utils.datasets.SupervisedDataset(*args, **kwargs)[source]¶
Bases:
BotorchDataset
Base class for datasets consisting of labelled pairs (x, y).
This class object’s __call__ method converts Tensors src to DenseContainers under the assumption that event_shape=src.shape[-1:].
Example:
X = torch.rand(16, 2)
Y = torch.rand(16, 1)
A = SupervisedDataset(X, Y)
B = SupervisedDataset(
    DenseContainer(X, event_shape=X.shape[-1:]),
    DenseContainer(Y, event_shape=Y.shape[-1:]),
)
assert A == B
- Parameters:
args (Any) –
kwargs (Any) –
- X: BotorchContainer¶
- Y: BotorchContainer¶
- classmethod dict_from_iter(X, Y, *, keys=None)[source]¶
Returns a dictionary of SupervisedDataset from iterables.
- Parameters:
X (Union[BotorchContainer, Tensor, Iterable[Union[BotorchContainer, Tensor]]]) –
Y (Union[BotorchContainer, Tensor, Iterable[Union[BotorchContainer, Tensor]]]) –
keys (Optional[Iterable[Hashable]]) –
- Return type:
Dict[Hashable, SupervisedDataset]
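A hedged sketch of typical usage (the key names here are arbitrary illustrations, not part of the API):
>>> Xs = [torch.rand(8, 2), torch.rand(8, 2)]
>>> Ys = [torch.rand(8, 1), torch.rand(8, 1)]
>>> datasets = SupervisedDataset.dict_from_iter(Xs, Ys, keys=["objective", "cost"])
>>> # datasets["objective"] and datasets["cost"] are SupervisedDatasets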
- class botorch.utils.datasets.FixedNoiseDataset(*args, **kwargs)[source]¶
Bases:
SupervisedDataset
A SupervisedDataset with an additional field Yvar that stipulates observation variances so that Y[i] ~ N(f(X[i]), Yvar[i]).
- Parameters:
args (Any) –
kwargs (Any) –
- X: BotorchContainer¶
- Y: BotorchContainer¶
- Yvar: BotorchContainer¶
- classmethod dict_from_iter(X, Y, Yvar=None, *, keys=None)[source]¶
Returns a dictionary of FixedNoiseDataset from iterables.
- Parameters:
X (Union[BotorchContainer, Tensor, Iterable[Union[BotorchContainer, Tensor]]]) –
Y (Union[BotorchContainer, Tensor, Iterable[Union[BotorchContainer, Tensor]]]) –
Yvar (Optional[Union[BotorchContainer, Tensor, Iterable[Union[BotorchContainer, Tensor]]]]) –
keys (Optional[Iterable[Hashable]]) –
- Return type:
Dict[Hashable, SupervisedDataset]
- class botorch.utils.datasets.RankingDataset(*args, **kwargs)[source]¶
Bases:
SupervisedDataset
A SupervisedDataset whose labelled pairs (x, y) consist of m-ary combinations x ∈ Z^{m} of elements from a ground set Z = (z_1, …) and ranking vectors y ∈ {0, …, m - 1}^{m} with properties:
Ranks start at zero, i.e. min(y) = 0.
Sorted ranks are contiguous unless one or more ties are present.
k ranks are skipped after a k-way tie.
Example:
X = SliceContainer(
    values=torch.rand(16, 2),
    indices=torch.stack([torch.randperm(16)[:3] for _ in range(8)]),
    event_shape=torch.Size([3 * 2]),
)
Y = DenseContainer(
    torch.stack([torch.randperm(3) for _ in range(8)]),
    event_shape=torch.Size([3]),
)
dataset = RankingDataset(X, Y)
- Parameters:
args (Any) –
kwargs (Any) –
- Y: BotorchContainer¶
Dispatcher¶
- class botorch.utils.dispatcher.Dispatcher(name, doc=None, encoder=<class 'type'>)[source]¶
Bases:
Dispatcher
Clearing house for multiple dispatch functionality. This class extends multipledispatch.Dispatcher by: (i) generalizing the argument encoding convention during method lookup, (ii) implementing __getitem__ as a dedicated method lookup function.
- Parameters:
name (str) – A string identifier for the Dispatcher instance.
doc (Optional[str]) – A docstring for the multiply dispatched method(s).
encoder (Callable[Any, Type]) – A callable that individually transforms the arguments passed at runtime in order to construct the key used for method lookup as tuple(map(encoder, args)). Defaults to type.
- dispatch(*types)[source]¶
Method lookup strategy. Checks for an exact match before traversing the set of registered methods according to the current ordering.
- Parameters:
types (Type) – A tuple of types that gets compared with the signatures of registered methods to determine compatibility.
- Returns:
The first method encountered with a matching signature.
- Return type:
Callable
- encode_args(args)[source]¶
Converts arguments into a tuple of types used during method lookup.
- Parameters:
args (Any) –
- Return type:
Tuple[Type]
- help(*args, **kwargs)[source]¶
Prints the retrieved method’s docstring.
- Parameters:
args (Any) –
kwargs (Any) –
- Return type:
None
- property encoder: Callable[Any, Type]¶
- name¶
- funcs¶
- doc¶
Low-Rank Cholesky Update Utils¶
- botorch.utils.low_rank.extract_batch_covar(mt_mvn)[source]¶
Extract a batched independent covariance matrix from an MTMVN.
- Parameters:
mt_mvn (MultitaskMultivariateNormal) – A multi-task multivariate normal with a block diagonal covariance matrix.
- Returns:
A lazy covariance matrix consisting of a batch of the blocks of the diagonal of the MultitaskMultivariateNormal.
- Return type:
LinearOperator
- botorch.utils.low_rank.sample_cached_cholesky(posterior, baseline_L, q, base_samples, sample_shape, max_tries=6)[source]¶
Get posterior samples at the q new points from the joint multi-output posterior.
- Parameters:
posterior (GPyTorchPosterior) – The joint posterior is over (X_baseline, X).
baseline_L (Tensor) – The baseline lower triangular cholesky factor.
q (int) – The number of new points in X.
base_samples (Tensor) – The base samples.
sample_shape (Size) – The sample shape.
max_tries (int) – The number of tries for computing the Cholesky decomposition with increasing jitter.
- Returns:
A sample_shape x batch_shape x q x m-dim tensor of posterior samples at the new points.
- Return type:
Tensor
Objective¶
Helpers for handling objectives.
- botorch.utils.objective.get_objective_weights_transform(weights)[source]¶
Create a linear objective callable from a set of weights.
Create a callable mapping a Tensor of size b x q x m and an (optional) Tensor of size b x q x d to a Tensor of size b x q, where m is the number of outputs of the model using scalarization via the objective weights. This callable supports broadcasting (e.g. for calling on a tensor of shape mc_samples x b x q x m). For m = 1, the objective weight is used to determine the optimization direction.
- Parameters:
weights (Optional[Tensor]) – a 1-dimensional Tensor containing a weight for each task. If not provided, the identity mapping is used.
- Returns:
Transform function using the objective weights.
- Return type:
Callable[[Tensor, Optional[Tensor]], Tensor]
Example
>>> weights = torch.tensor([0.75, 0.25])
>>> transform = get_objective_weights_transform(weights)
- botorch.utils.objective.apply_constraints_nonnegative_soft(obj, constraints, samples, eta)[source]¶
Applies constraints to a non-negative objective.
This function uses a sigmoid approximation to an indicator function for each constraint.
- Parameters:
obj (Tensor) – A n_samples x b x q (x m’)-dim Tensor of objective values.
constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of size b x q x m to a Tensor of size b x q, where negative values imply feasibility. This callable must support broadcasting. Only relevant for multi-output models (m > 1).
samples (Tensor) – A n_samples x b x q x m Tensor of samples drawn from the posterior.
eta (float) – The temperature parameter for the sigmoid function.
- Returns:
A n_samples x b x q (x m’)-dim tensor of feasibility-weighted objectives.
- Return type:
Tensor
- botorch.utils.objective.soft_eval_constraint(lhs, eta=0.001)[source]¶
Element-wise evaluation of a constraint in a ‘soft’ fashion
value(x) = 1 / (1 + exp(x / eta))
- Parameters:
lhs (Tensor) – The left hand side of the constraint lhs <= 0.
eta (float) – The temperature parameter of the sigmoid function. As eta approaches zero, this approaches the Heaviside step function.
- Returns:
Element-wise ‘soft’ feasibility indicator of the same shape as lhs. For each element x, value(x) -> 0 as x becomes positive, and value(x) -> 1 as x becomes negative.
- Return type:
Tensor
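For intuition, a small sketch of the sigmoid approximation (output values are approximate):
>>> lhs = torch.tensor([-1.0, 0.0, 1.0])
>>> soft_eval_constraint(lhs, eta=1e-3)  # ≈ tensor([1.0000, 0.5000, 0.0000])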
- botorch.utils.objective.apply_constraints(obj, constraints, samples, infeasible_cost, eta=0.001)[source]¶
Apply constraints using an infeasible_cost M for negative objectives.
This allows feasibility-weighting an objective for the case where the objective can be negative by using the following strategy: (1) Add M to make obj non-negative; (2) Apply constraints using the sigmoid approximation; (3) Shift by -M.
- Parameters:
obj (Tensor) – A n_samples x b x q (x m’)-dim Tensor of objective values.
constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of size b x q x m to a Tensor of size b x q, where negative values imply feasibility. This callable must support broadcasting. Only relevant for multi-output models (m > 1).
samples (Tensor) – A n_samples x b x q x m Tensor of samples drawn from the posterior.
infeasible_cost (float) – The infeasible value.
eta (float) – The temperature parameter of the sigmoid function.
- Returns:
A n_samples x b x q (x m’)-dim tensor of feasibility-weighted objectives.
- Return type:
Tensor
Rounding¶
- botorch.utils.rounding.approximate_round(X, tau=0.001)[source]¶
Differentiable approximate rounding function.
This method is a piecewise approximation of a rounding function where each piece is a hyperbolic tangent function.
- Parameters:
X (Tensor) – The tensor to round to the nearest integer (element-wise).
tau (float) – A temperature hyperparameter.
- Returns:
The approximately rounded input tensor.
- Return type:
Tensor
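A minimal sketch illustrating that the approximation preserves gradients (unlike torch.round, whose gradient is zero almost everywhere):
>>> X = torch.tensor([0.4, 1.6], requires_grad=True)
>>> X_rounded = approximate_round(X, tau=1e-3)  # close to tensor([0., 2.])
>>> X_rounded.sum().backward()  # gradients flow through the rounding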
Sampling¶
Utilities for MC and qMC sampling.
References
T. A. Trikalinos and G. van Valkenhoef. Efficient sampling from uniform density n-polytopes. Technical report, Brown University, 2014.
- botorch.utils.sampling.manual_seed(seed=None)[source]¶
Context manager for manually setting the torch.random seed.
- Parameters:
seed (Optional[int]) – The seed to set the random number generator to.
- Returns:
Generator
- Return type:
Generator[None, None, None]
Example
>>> with manual_seed(1234):
>>>     X = torch.rand(3)
- botorch.utils.sampling.construct_base_samples(batch_shape, output_shape, sample_shape, qmc=True, seed=None, device=None, dtype=None)[source]¶
Construct base samples from a multi-variate standard normal N(0, I_qo).
- Parameters:
batch_shape (Size) – The batch shape of the base samples to generate. Typically, this is used with each dimension of size 1, so as to eliminate sampling variance across batches.
output_shape (Size) – The output shape (q x m) of the base samples to generate.
sample_shape (Size) – The sample shape of the samples to draw.
qmc (bool) – If True, use quasi-MC sampling (instead of iid draws).
seed (Optional[int]) – If provided, use as a seed for the RNG.
device (Optional[device]) –
dtype (Optional[dtype]) –
- Returns:
A sample_shape x batch_shape x output_shape dimensional tensor of base samples, drawn from a N(0, I_qm) distribution (using QMC if qmc=True). Here output_shape = q x m.
- Return type:
Tensor
Example
>>> batch_shape = torch.Size([2])
>>> output_shape = torch.Size([3])
>>> sample_shape = torch.Size([10])
>>> samples = construct_base_samples(batch_shape, output_shape, sample_shape)
- botorch.utils.sampling.construct_base_samples_from_posterior(posterior, sample_shape, qmc=True, collapse_batch_dims=True, seed=None)[source]¶
Construct a tensor of normally distributed base samples.
- Parameters:
posterior (Posterior) – A Posterior object.
sample_shape (Size) – The sample shape of the samples to draw.
qmc (bool) – If True, use quasi-MC sampling (instead of iid draws).
seed (Optional[int]) – If provided, use as a seed for the RNG.
collapse_batch_dims (bool) –
- Returns:
A num_samples x 1 x q x m dimensional Tensor of base samples, drawn from a N(0, I_qm) distribution (using QMC if qmc=True). Here q and m are the same as in the posterior’s event_shape b x q x m. Importantly, this only obtains a single t-batch of samples, so as to not introduce any sampling variance across t-batches.
- Return type:
Tensor
Example
>>> sample_shape = torch.Size([10])
>>> samples = construct_base_samples_from_posterior(posterior, sample_shape)
- botorch.utils.sampling.draw_sobol_samples(bounds, n, q, batch_shape=None, seed=None)[source]¶
Draw qMC samples from the box defined by bounds.
- Parameters:
bounds (Tensor) – A 2 x d dimensional tensor specifying box constraints on a d-dimensional space, where bounds[0, :] and bounds[1, :] correspond to lower and upper bounds, respectively.
n (int) – The number of (q-batch) samples. As a best practice, use powers of 2.
q (int) – The size of each q-batch.
batch_shape (Optional[Union[Iterable[int], torch.Size]]) – The batch shape of the samples. If given, returns samples of shape n x batch_shape x q x d, where each batch is an n x q x d-dim tensor of qMC samples.
seed (Optional[int]) – The seed used for initializing Owen scrambling. If None (default), use a random seed.
- Returns:
A n x batch_shape x q x d-dim tensor of qMC samples from the box defined by bounds.
- Return type:
Tensor
Example
>>> bounds = torch.stack([torch.zeros(3), torch.ones(3)])
>>> samples = draw_sobol_samples(bounds, 16, 2)
- botorch.utils.sampling.draw_sobol_normal_samples(d, n, device=None, dtype=None, seed=None)[source]¶
Draw qMC samples from a multi-variate standard normal N(0, I_d).
A primary use case for this functionality is to compute a QMC average of f(X) over X, where each element of X is drawn from N(0, 1).
- Parameters:
d (int) – The dimension of the normal distribution.
n (int) – The number of samples to return. As a best practice, use powers of 2.
device (Optional[device]) – The torch device.
dtype (Optional[dtype]) – The torch dtype.
seed (Optional[int]) – The seed used for initializing Owen scrambling. If None (default), use a random seed.
- Returns:
A tensor of qMC standard normal samples with dimension n x d with device and dtype specified by the input.
- Return type:
Tensor
Example
>>> samples = draw_sobol_normal_samples(2, 16)
- botorch.utils.sampling.sample_hypersphere(d, n=1, qmc=False, seed=None, device=None, dtype=None)[source]¶
Sample uniformly from a unit d-sphere.
- Parameters:
d (int) – The dimension of the hypersphere.
n (int) – The number of samples to return.
qmc (bool) – If True, use QMC Sobol sampling (instead of i.i.d. uniform).
seed (Optional[int]) – If provided, use as a seed for the RNG.
device (Optional[device]) – The torch device.
dtype (Optional[dtype]) – The torch dtype.
- Returns:
An n x d tensor of uniform samples from the d-hypersphere.
- Return type:
Tensor
Example
>>> sample_hypersphere(d=5, n=10)
- botorch.utils.sampling.sample_simplex(d, n=1, qmc=False, seed=None, device=None, dtype=None)[source]¶
Sample uniformly from a d-simplex.
- Parameters:
d (int) – The dimension of the simplex.
n (int) – The number of samples to return.
qmc (bool) – If True, use QMC Sobol sampling (instead of i.i.d. uniform).
seed (Optional[int]) – If provided, use as a seed for the RNG.
device (Optional[device]) – The torch device.
dtype (Optional[dtype]) – The torch dtype.
- Returns:
An n x d tensor of uniform samples from the d-simplex.
- Return type:
Tensor
Example
>>> sample_simplex(d=3, n=10)
- botorch.utils.sampling.sample_polytope(A, b, x0, n=10000, n0=100, seed=None)[source]¶
Hit-and-run sampler for sampling uniformly from a polytope described via inequality constraints A*x<=b.
- Parameters:
A (Tensor) – A Tensor describing inequality constraints so that all samples satisfy Ax<=b.
b (Tensor) – A Tensor describing the inequality constraints so that all samples satisfy Ax<=b.
x0 (Tensor) – A d-dim Tensor representing a starting point of the chain satisfying the constraints.
n (int) – The number of resulting samples kept in the output.
n0 (int) – The number of burn-in samples. The chain will produce n+n0 samples but the first n0 samples are not saved.
seed (Optional[int]) – The seed for the sampler. If omitted, use a random seed.
- Returns:
(n, d) dim Tensor containing the resulting samples.
- Return type:
Tensor
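A hedged sketch of sampling from the unit box written as A @ x <= b (the starting point is written as a d x 1 column here, consistent with the interior_point convention in the samplers below):
>>> A = torch.cat([torch.eye(2), -torch.eye(2)])  # x <= 1 and -x <= 0
>>> b = torch.cat([torch.ones(2, 1), torch.zeros(2, 1)])
>>> x0 = torch.full((2, 1), 0.5)  # interior starting point
>>> samples = sample_polytope(A, b, x0, n=256, n0=32)  # 256 x 2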
- botorch.utils.sampling.batched_multinomial(weights, num_samples, replacement=False, generator=None, out=None)[source]¶
Sample from multinomial with an arbitrary number of batch dimensions.
- Parameters:
weights (Tensor) – A batch_shape x num_categories tensor of weights. For each batch index i, j, …, this functions samples from a multinomial with input weights[i, j, …, :]. Note that the weights need not sum to one, but must be non-negative, finite and have a non-zero sum.
num_samples (int) – The number of samples to draw for each batch index. Must be smaller than num_categories if replacement=False.
replacement (bool) – If True, samples are drawn with replacement.
generator (Optional[Generator]) – A pseudorandom number generator for sampling.
out (Optional[Tensor]) – The output tensor (optional). If provided, must be of size batch_shape x num_samples.
- Returns:
A batch_shape x num_samples tensor of samples.
- Return type:
LongTensor
This is a thin wrapper around torch.multinomial that allows weight (input) tensors with an arbitrary number of batch dimensions (torch.multinomial only allows a single batch dimension). The calling signature is the same as for torch.multinomial.
Example
>>> weights = torch.rand(2, 3, 10)
>>> samples = batched_multinomial(weights, 4)  # shape is 2 x 3 x 4
- botorch.utils.sampling.find_interior_point(A, b, A_eq=None, b_eq=None)[source]¶
Find an interior point of a polytope via linear programming.
- Parameters:
A (ndarray) – A n_ineq x d-dim numpy array containing the coefficients of the constraint inequalities.
b (ndarray) – A n_ineq x 1-dim numpy array containing the right hand sides of the constraint inequalities.
A_eq (Optional[ndarray]) – A n_eq x d-dim numpy array containing the coefficients of the constraint equalities.
b_eq (Optional[ndarray]) – A n_eq x 1-dim numpy array containing the right hand sides of the constraint equalities.
- Returns:
A d-dim numpy array containing an interior point of the polytope. This function will raise a ValueError if there is no such point.
- Return type:
ndarray
This method solves the following Linear Program:
min -s
subject to: A @ x <= b - 2 * s, s >= 0, A_eq @ x = b_eq
In case the polytope is unbounded, then it will also constrain the slack variable s to s<=1.
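A small sketch on the unit box (numpy inputs, per the documented signature):
>>> import numpy as np
>>> A = np.concatenate([np.eye(2), -np.eye(2)])  # x <= 1 and -x <= 0
>>> b = np.concatenate([np.ones((2, 1)), np.zeros((2, 1))])
>>> x = find_interior_point(A, b)  # a point strictly inside the box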
- class botorch.utils.sampling.HitAndRunPolytopeSampler(inequality_constraints=None, equality_constraints=None, bounds=None, interior_point=None, n_burnin=0)[source]¶
Bases:
PolytopeSampler
A sampler for sampling from a polytope using a hit-and-run algorithm.
- Parameters:
inequality_constraints (Optional[Tuple[Tensor, Tensor]]) – Tensors (A, b) describing inequality constraints A @ x <= b, where A is a n_ineq_con x d-dim Tensor and b is a n_ineq_con x 1-dim Tensor, with n_ineq_con the number of inequalities and d the dimension of the sample space.
equality_constraints (Optional[Tuple[Tensor, Tensor]]) – Tensors (C, d) describing the equality constraints C @ x = d, where C is a n_eq_con x d-dim Tensor and d is a n_eq_con x 1-dim Tensor with n_eq_con the number of equalities.
bounds (Optional[Tensor]) – A 2 x d-dim tensor of box bounds, where inf (-inf) means that the respective dimension is unbounded from above (below).
interior_point (Optional[Tensor]) – A d x 1-dim Tensor representing a point in the (relative) interior of the polytope. If omitted, determined automatically by solving a Linear Program.
n_burnin (int) – The number of burn in samples.
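A hedged usage sketch (the draw method is inherited from PolytopeSampler):
>>> A = torch.ones(1, 2)  # x_0 + x_1 <= 1
>>> b = torch.ones(1, 1)
>>> sampler = HitAndRunPolytopeSampler(
...     inequality_constraints=(A, b),
...     bounds=torch.stack([torch.zeros(2), torch.ones(2)]),
...     n_burnin=100,
... )
>>> samples = sampler.draw(n=64)  # 64 x 2 samples from {x in [0, 1]^2 : x_0 + x_1 <= 1}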
- class botorch.utils.sampling.DelaunayPolytopeSampler(inequality_constraints=None, equality_constraints=None, bounds=None, interior_point=None)[source]¶
Bases:
PolytopeSampler
A polytope sampler using Delaunay triangulation.
This sampler first enumerates the vertices of the constraint polytope and then uses a Delaunay triangulation to tessellate its convex hull.
The sampling happens in two stages:
1. First, we sample from the set of hypertriangles generated by the Delaunay triangulation (i.e. which hyper-triangle to draw the sample from) with probabilities proportional to the triangle volumes.
2. Then, we sample uniformly from the chosen hypertriangle by sampling uniformly from the unit simplex of the appropriate dimension, and then computing the convex combination of the vertices of the hypertriangle according to that draw from the simplex.
The best reference (not exactly the same, but functionally equivalent) is [Trikalinos2014polytope]. A simple R implementation is available at https://github.com/gertvv/tesselample.
Initialize DelaunayPolytopeSampler.
- Parameters:
inequality_constraints (Optional[Tuple[Tensor, Tensor]]) – Tensors (A, b) describing inequality constraints A @ x <= b, where A is a n_ineq_con x d-dim Tensor and b is a n_ineq_con x 1-dim Tensor, with n_ineq_con the number of inequalities and d the dimension of the sample space.
equality_constraints (Optional[Tuple[Tensor, Tensor]]) – Tensors (C, d) describing the equality constraints C @ x = d, where C is a n_eq_con x d-dim Tensor and d is a n_eq_con x 1-dim Tensor with n_eq_con the number of equalities.
bounds (Optional[Tensor]) – A 2 x d-dim tensor of box bounds, where inf (-inf) means that the respective dimension is unbounded from above (below).
interior_point (Optional[Tensor]) – A d x 1-dim Tensor representing a point in the (relative) interior of the polytope. If omitted, determined automatically by solving a Linear Program.
Warning: The vertex enumeration performed in this algorithm can become extremely costly if there are a large number of inequalities. Similarly, the triangulation can get very expensive in high dimensions. Only use this algorithm for moderate dimensions / moderately complex constraint sets. An alternative is the HitAndRunPolytopeSampler.
- botorch.utils.sampling.normalize_linear_constraints(bounds, constraints)[source]¶
Normalize linear constraints to the unit cube.
- Parameters:
bounds (Tensor) – A 2 x d-dim tensor containing the box bounds.
constraints (List[Tuple[Tensor, Tensor, float]]) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs or sum_i (X[indices[i]] * coefficients[i]) = rhs.
- Return type:
List[Tuple[Tensor, Tensor, float]]
- botorch.utils.sampling.get_polytope_samples(n, bounds, inequality_constraints=None, equality_constraints=None, seed=None, thinning=32, n_burnin=10000)[source]¶
Sample from polytope defined by box bounds and (in)equality constraints.
This uses a hit-and-run Markov chain sampler.
TODO: make this method return the sampler object, to avoid doing burn-in every time we draw samples.
- Parameters:
n (int) – The number of samples.
bounds (Tensor) – A 2 x d-dim tensor containing the box bounds.
inequality_constraints (Optional[List[Tuple[Tensor, Tensor, float]]]) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
equality_constraints (Optional[List[Tuple[Tensor, Tensor, float]]]) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) = rhs.
seed (Optional[int]) – The random seed.
thinning (int) – The amount of thinning.
n_burnin (int) – The number of burn-in samples for the Markov chain sampler.
- Returns:
A n x d-dim tensor of samples.
- Return type:
Tensor
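A minimal sketch using the sparse constraint format described above:
>>> bounds = torch.stack([torch.zeros(3), torch.ones(3)])
>>> # x_0 + x_1 >= 0.5, encoded as (indices, coefficients, rhs)
>>> inequality_constraints = [(torch.tensor([0, 1]), torch.ones(2), 0.5)]
>>> samples = get_polytope_samples(
...     n=16, bounds=bounds, inequality_constraints=inequality_constraints
... )  # 16 x 3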
- botorch.utils.sampling.sparse_to_dense_constraints(d, constraints)[source]¶
Convert parameter constraints from a sparse format into a dense format.
This method converts sparse triples of the form (indices, coefficients, rhs) to constraints of the form Ax >= b or Ax = b.
- Parameters:
d (int) – The input dimension.
constraints (List[Tuple[Tensor, Tensor, float]]) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an (in)equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs or sum_i (X[indices[i]] * coefficients[i]) = rhs.
- Returns:
A: A n_constraints x d-dim tensor of coefficients.
b: A n_constraints x 1-dim tensor of right hand sides.
- Return type:
A two-element tuple containing
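For illustration, a small sketch (values chosen arbitrarily):
>>> # x_0 + x_2 >= 1.0 in a 3-dimensional space
>>> constraints = [(torch.tensor([0, 2]), torch.ones(2), 1.0)]
>>> A, b = sparse_to_dense_constraints(d=3, constraints=constraints)
>>> # A = tensor([[1., 0., 1.]]), b = tensor([[1.]])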
Sampling from GP priors¶
- class botorch.utils.gp_sampling.GPDraw(model, seed=None)[source]¶
Bases:
Module
Convenience wrapper for sampling a function from a GP prior.
This wrapper implicitly defines the GP sample as a self-updating function by keeping track of the evaluated points and respective base samples used during the evaluation.
This does not yet support multi-output models.
Construct a GP function sampler.
- Parameters:
model (Model) – The Model defining the GP prior.
seed (Optional[int]) –
- property Xs: Tensor¶
A (batch_shape) x n_eval x d-dim tensor of locations at which the GP was evaluated (or None if the sample has never been evaluated).
- property Ys: Tensor¶
A (batch_shape) x n_eval x d-dim tensor of associated function values (or None if the sample has never been evaluated).
- forward(X)[source]¶
Evaluate the GP sample function at a set of points X.
- Parameters:
X (Tensor) – A batch_shape x n x d-dim tensor of points
- Returns:
The value of the GP sample at the n points.
- Return type:
Tensor
- training: bool¶
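A hedged usage sketch (SingleTaskGP is used here purely for illustration; any single-output Model should work):
>>> from botorch.models import SingleTaskGP
>>> train_X = torch.rand(10, 2)
>>> train_Y = torch.sin(train_X).sum(dim=-1, keepdim=True)
>>> model = SingleTaskGP(train_X, train_Y)
>>> gp_sample = GPDraw(model, seed=0)
>>> Y1 = gp_sample(torch.rand(5, 2))  # evaluations are remembered, so repeated
>>> Y2 = gp_sample(torch.rand(5, 2))  # calls come from one consistent sample path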
- class botorch.utils.gp_sampling.RandomFourierFeatures(kernel, input_dim, num_rff_features, sample_shape=None)[source]¶
Bases:
Module
A class that represents Random Fourier Features.
Initialize RandomFourierFeatures.
- Parameters:
kernel (Kernel) – The GP kernel.
input_dim (int) – The input dimension to the GP kernel.
num_rff_features (int) – The number of Fourier features.
sample_shape (Optional[torch.Size]) – The shape of a single sample. For a single-element torch.Size object, this is simply the number of RFF draws.
- forward(X)[source]¶
Get Fourier basis features for the provided inputs.
Note that the right-most subset of the batch shape of X should be (sample_shape) x (kernel_batch_shape) if using either the sample_shape argument or a batched kernel. In other words, X should be of shape (added_batch_shape) x (sample_shape) x (kernel_batch_shape) x n x input_dim, where parentheses denote that the given batch shape can be empty. X can always be a tensor of shape n x input_dim, in which case broadcasting will take care of the batch shape. This will raise a ValueError if the batch shapes are not compatible.
- Parameters:
X (Tensor) – Input tensor of shape (batch_shape) x n x input_dim.
- Returns:
A Tensor of shape (batch_shape) x n x rff. If X does not have a batch_shape, the output batch_shape will be (sample_shape) x (kernel_batch_shape).
- Return type:
Tensor
- training: bool¶
- botorch.utils.gp_sampling.get_deterministic_model_multi_samples(weights, bases)[source]¶
Get a batched deterministic model that batch evaluates n_samples function samples. This supports multi-output models as well.
- Parameters:
weights (List[Tensor]) – A list of weights with num_outputs elements. Each weight is of shape (batch_shape_input) x n_samples x num_rff_features, where (batch_shape_input) is the batch shape of the inputs used to obtain the posterior weights.
bases (List[RandomFourierFeatures]) – A list of RandomFourierFeatures with num_outputs elements. Each basis has a sample shape of n_samples.
- Returns:
A batched GenericDeterministicModel that batch evaluates n_samples function samples.
- Return type:
GenericDeterministicModel
- botorch.utils.gp_sampling.get_eval_gp_sample_callable(w, basis)[source]¶
- Parameters:
w (Tensor) –
basis (RandomFourierFeatures) –
- Return type:
Tensor
- botorch.utils.gp_sampling.get_deterministic_model(weights, bases)[source]¶
Get a deterministic model using the provided weights and bases for each output.
- Parameters:
weights (List[Tensor]) – A list of weights with m elements.
bases (List[RandomFourierFeatures]) – A list of RandomFourierFeatures with m elements.
- Returns:
A deterministic model.
- Return type:
GenericDeterministicModel
- botorch.utils.gp_sampling.get_deterministic_model_list(weights, bases)[source]¶
Get a deterministic model list using the provided weights and bases for each output.
- Parameters:
weights (List[Tensor]) – A list of weights with m elements.
bases (List[RandomFourierFeatures]) – A list of RandomFourierFeatures with m elements.
- Returns:
A deterministic model list.
- Return type:
ModelList
- botorch.utils.gp_sampling.get_weights_posterior(X, y, sigma_sq)[source]¶
Sample Bayesian linear regression weights.
- Parameters:
X (Tensor) – A tensor of inputs with shape (*batch_shape, n, num_rff_features).
y (Tensor) – A tensor of outcomes with shape (*batch_shape, n).
sigma_sq (Tensor) – The likelihood noise variance. This should be a tensor with shape (kernel_batch_shape, 1, 1) if using a batched kernel. Otherwise, it should be a scalar tensor.
- Returns:
The posterior distribution over the weights.
- Return type:
MultivariateNormal
- botorch.utils.gp_sampling.get_gp_samples(model, num_outputs, n_samples, num_rff_features=512)[source]¶
Sample functions from the GP posterior using RFFs. The returned GenericDeterministicModel effectively wraps num_outputs models, each of which has a batch shape of n_samples. Refer to get_deterministic_model_multi_samples for more details.
NOTE: If using input / outcome transforms, the gp samples must be accessed via the gp_sample.posterior(X) call. Otherwise, gp_sample(X) will produce bogus values that do not agree with the underlying model. It is also highly recommended to use outcome transforms to standardize the input data, since the gp samples do not work well when training outcomes are not zero-mean.
- Parameters:
model (Model) – The model.
num_outputs (int) – The number of outputs.
n_samples (int) – The number of functions to be sampled IID.
num_rff_features (int) – The number of random Fourier features.
- Returns:
A GenericDeterministicModel that evaluates n_samples sampled functions. If n_samples > 1, this will be a batched model.
- Return type:
GenericDeterministicModel
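A hedged usage sketch (SingleTaskGP is an assumption for illustration; note the posterior(X) access recommended above when transforms are in use):
>>> from botorch.models import SingleTaskGP
>>> train_X = torch.rand(10, 2)
>>> train_Y = torch.sin(train_X).sum(dim=-1, keepdim=True)
>>> model = SingleTaskGP(train_X, train_Y)
>>> gp_samples = get_gp_samples(model=model, num_outputs=1, n_samples=4)
>>> Y = gp_samples.posterior(torch.rand(5, 2)).mean  # one draw per sampled function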
Testing¶
- class botorch.utils.testing.BotorchTestCase(methodName='runTest')[source]¶
Bases:
TestCase
Basic test case for Botorch.
This test case:
- sets the default device to be torch.device("cpu"), and
- ensures that no warnings are suppressed by default.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- device = device(type='cpu')¶
- class botorch.utils.testing.BaseTestProblemBaseTestCase[source]¶
Bases:
object
- functions: List[BaseTestProblem]¶
- class botorch.utils.testing.SyntheticTestFunctionBaseTestCase[source]¶
Bases:
BaseTestProblemBaseTestCase
- functions: List[BaseTestProblem]¶
- class botorch.utils.testing.MockPosterior(mean=None, variance=None, samples=None)[source]¶
Bases:
Posterior
Mock object that implements dummy methods and feeds through specified outputs.
- Parameters:
mean – The mean of the posterior.
variance – The variance of the posterior.
samples – Samples to return from rsample, unless base_samples is provided.
- property device: device¶
The torch device of the posterior.
- property dtype: dtype¶
The torch dtype of the posterior.
- property event_shape: Size¶
The event shape (i.e. the shape of a single sample).
- property mean¶
The mean of the posterior as a (b) x n x m-dim Tensor.
- property variance¶
The variance of the posterior as a (b) x n x m-dim Tensor.
- class botorch.utils.testing.MockModel(posterior)[source]¶
Bases:
Model, FantasizeMixin
Mock object that implements dummy methods and feeds through specified outputs.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- Parameters:
posterior (MockPosterior) –
- posterior(X, output_indices=None, posterior_transform=None, observation_noise=False)[source]¶
Computes the posterior over model outputs at the provided points.
Note: The input transforms should be applied here using self.transform_inputs(X) after the self.eval() call and before any model.forward or model.likelihood calls.
- Parameters:
X (Tensor) – A b x q x d-dim Tensor, where d is the dimension of the feature space, q is the number of points considered jointly, and b is the batch dimension.
output_indices (Optional[List[int]]) – A list of indices, corresponding to the outputs over which to compute the posterior (if the model is multi-output). Can be used to speed up computation if only a subset of the model’s outputs are required for optimization. If omitted, computes the posterior over all model outputs.
observation_noise (bool) – If True, add observation noise to the posterior.
posterior_transform (Optional[PosteriorTransform]) – An optional PosteriorTransform.
- Returns:
A Posterior object, representing a batch of b joint distributions over q points and m outputs each.
- Return type:
Posterior
- property num_outputs: int¶
The number of outputs of the model.
- property batch_shape: Size¶
The batch shape of the model.
This is a batch shape from an I/O perspective, independent of the internal representation of the model (as e.g. in BatchedMultiOutputGPyTorchModel). For a model with m outputs, a test_batch_shape x q x d-shaped input X to the posterior method returns a Posterior object over an output of shape broadcast(test_batch_shape, model.batch_shape) x q x m.
- state_dict()[source]¶
Returns a dictionary containing references to the whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.
Note: The returned object is a shallow copy. It contains references to the module’s parameters and buffers.
Warning: Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.
Warning: Please avoid the use of argument destination as it is not designed for end-users.
- Parameters:
destination (dict, optional) – If provided, the state of module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.
prefix (str, optional) – A prefix added to parameter and buffer names to compose the keys in state_dict. Default: ''.
keep_vars (bool, optional) – By default the Tensors returned in the state dict are detached from autograd. If set to True, detaching will not be performed. Default: False.
- Returns:
a dictionary containing a whole state of the module
- Return type:
dict
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']
- load_state_dict(state_dict=None, strict=False)[source]¶
Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function.
- Parameters:
state_dict (dict) – A dict containing parameters and persistent buffers.
strict (bool, optional) – Whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True.
- Returns:
missing_keys is a list of str containing the missing keys
unexpected_keys is a list of str containing the unexpected keys
- Return type:
NamedTuple with missing_keys and unexpected_keys fields
Note: If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
- class botorch.utils.testing.MockAcquisitionFunction[source]¶
Bases:
object
Mock acquisition function object that implements dummy methods.
- class botorch.utils.testing.MultiObjectiveTestProblemBaseTestCase[source]¶
Bases:
BaseTestProblemBaseTestCase
- functions: List[BaseTestProblem]¶
- class botorch.utils.testing.ConstrainedMultiObjectiveTestProblemBaseTestCase[source]¶
Bases:
MultiObjectiveTestProblemBaseTestCase
- functions: List[BaseTestProblem]¶
Torch¶
- class botorch.utils.torch.BufferDict(buffers=None)[source]¶
Bases:
Module
Holds buffers in a dictionary.
BufferDict can be indexed like a regular Python dictionary, but buffers it contains are properly registered, and will be visible by all Module methods.
BufferDict is an ordered dictionary that respects
- the order of insertion, and
- in update(), the order of the merged OrderedDict or another BufferDict (the argument to update()).
Note that update() with other unordered mapping types (e.g., Python’s plain dict) does not preserve the order of the merged mapping.
- Parameters:
buffers (iterable, optional) – A mapping (dictionary) of (string : Tensor) or an iterable of key-value pairs of type (string, Tensor).
Example:
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.buffers = nn.BufferDict({
            'left': torch.randn(5, 10),
            'right': torch.randn(5, 10),
        })

    def forward(self, x, choice):
        x = self.buffers[choice].mm(x)
        return x
- Parameters:
buffers – A mapping (dictionary) from string to Tensor, or an iterable of key-value pairs of type (string, Tensor).
- pop(key)[source]¶
Remove key from the BufferDict and return its buffer.
- Parameters:
key (string) – key to pop from the BufferDict
- update(buffers)[source]¶
Update the BufferDict with the key-value pairs from a mapping or an iterable, overwriting existing keys.
Note: If buffers is an OrderedDict, a BufferDict, or an iterable of key-value pairs, the order of new elements in it is preserved.
- Parameters:
buffers (iterable) – A mapping (dictionary) from string to Tensor, or an iterable of key-value pairs of type (string, Tensor).
- extra_repr()[source]¶
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- training: bool¶
Transformations¶
Some basic data transformation helpers.
- botorch.utils.transforms.squeeze_last_dim(Y)[source]¶
Squeeze the last dimension of a Tensor.
- Parameters:
Y (Tensor) – A … x d-dim Tensor.
- Returns:
The input tensor with last dimension squeezed.
- Return type:
Tensor
Example
>>> Y = torch.rand(4, 3)
>>> Y_squeezed = squeeze_last_dim(Y)
- botorch.utils.transforms.standardize(Y)[source]¶
Standardizes (zero mean, unit variance) a tensor by dim=-2.
If the tensor is single-dimensional, simply standardizes the tensor. If for some batch index all elements are equal (or if there is only a single data point), this function will return 0 for that batch index.
- Parameters:
Y (Tensor) – A batch_shape x n x m-dim tensor.
- Returns:
The standardized Y.
- Return type:
Tensor
Example
>>> Y = torch.rand(4, 3)
>>> Y_standardized = standardize(Y)
- botorch.utils.transforms.normalize(X, bounds)[source]¶
Min-max normalize X w.r.t. the provided bounds.
- Parameters:
X (Tensor) – … x d tensor of data
bounds (Tensor) – 2 x d tensor of lower and upper bounds for each of the X’s d columns.
- Returns:
A … x d-dim tensor of normalized data, given by (X - bounds[0]) / (bounds[1] - bounds[0]). If all elements of X are contained within bounds, the normalized values will be contained within [0, 1]^d.
- Return type:
Tensor
Example
>>> X = torch.rand(4, 3)
>>> bounds = torch.stack([torch.zeros(3), 0.5 * torch.ones(3)])
>>> X_normalized = normalize(X, bounds)
- botorch.utils.transforms.unnormalize(X, bounds)[source]¶
Un-normalizes X w.r.t. the provided bounds.
- Parameters:
X (Tensor) – … x d tensor of data
bounds (Tensor) – 2 x d tensor of lower and upper bounds for each of the X’s d columns.
- Returns:
A … x d-dim tensor of unnormalized data, given by X * (bounds[1] - bounds[0]) + bounds[0]. If all elements of X are contained in [0, 1]^d, the un-normalized values will be contained within bounds.
- Return type:
Tensor
Example
>>> X_normalized = torch.rand(4, 3)
>>> bounds = torch.stack([torch.zeros(3), 0.5 * torch.ones(3)])
>>> X = unnormalize(X_normalized, bounds)
- botorch.utils.transforms.normalize_indices(indices, d)[source]¶
Normalize a list of indices to ensure that they are positive.
- Parameters:
indices (Optional[List[int]]) – A list of indices (may contain negative indices for indexing “from the back”).
d (int) – The dimension of the tensor to index.
- Returns:
A normalized list of indices such that each index is between 0 and d-1, or None if indices is None.
- Return type:
Optional[List[int]]
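For example:
>>> normalize_indices([0, -1], d=3)
[0, 2]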
- botorch.utils.transforms.is_fully_bayesian(model)[source]¶
Check if at least one model is a SaasFullyBayesianSingleTaskGP.
- Parameters:
model (Model) – A BoTorch model (may be a ModelList or ModelListGP)
- Returns:
True if at least one model is a SaasFullyBayesianSingleTaskGP
- Return type:
bool
- botorch.utils.transforms.t_batch_mode_transform(expected_q=None, assert_output_shape=True)[source]¶
Factory for decorators enabling consistent t-batch behavior.
This method creates decorators for instance methods to transform an input tensor X to t-batch mode (i.e. with at least 3 dimensions). This assumes the tensor has a q-batch dimension. The decorator also checks the q-batch size if expected_q is provided, and the output shape if assert_output_shape is True.
- Parameters:
expected_q (Optional[int]) – The expected q-batch size of X. If specified, this will raise an AssertionError if X’s q-batch size does not equal expected_q.
assert_output_shape (bool) – If True, this will raise an AssertionError if the output shape does not match either the t-batch shape of X, or the acqf.model.batch_shape for acquisition functions using batched models.
- Returns:
The decorated instance method.
- Return type:
Callable[[Callable[[AcquisitionFunction, Any], Any]], Callable[[AcquisitionFunction, Any], Any]]
Example
>>> class ExampleClass:
>>>     @t_batch_mode_transform(expected_q=1)
>>>     def single_q_method(self, X):
>>>         ...
>>>
>>>     @t_batch_mode_transform()
>>>     def arbitrary_q_method(self, X):
>>>         ...
- botorch.utils.transforms.concatenate_pending_points(method)[source]¶
Decorator concatenating X_pending into an acquisition function’s argument.
This decorator works on the forward method of acquisition functions taking a tensor X as the argument. If the acquisition function has an X_pending attribute (that is not None), this is concatenated into the input X, appropriately expanding the pending points to match the batch shape of X.
Example
>>> class ExampleAcquisitionFunction:
>>>     @concatenate_pending_points
>>>     @t_batch_mode_transform()
>>>     def forward(self, X):
>>>         ...
- Parameters:
method (Callable[[Any, Tensor], Any]) –
- Return type:
Callable[[Any, Tensor], Any]
- botorch.utils.transforms.match_batch_shape(X, Y)[source]¶
Matches the batch dimension of a tensor to that of another tensor.
- Parameters:
X (Tensor) – A batch_shape_X x q x d tensor, whose batch dimensions that correspond to batch dimensions of Y are to be matched to those (if compatible).
Y (Tensor) – A batch_shape_Y x q’ x d tensor.
- Returns:
A batch_shape_Y x q x d tensor containing the data of X expanded to the batch dimensions of Y (if compatible). For instance, if X is b’’ x b’ x q x d and Y is b x q x d, then the returned tensor is b’’ x b x q x d.
- Return type:
Tensor
Example
>>> X = torch.rand(2, 1, 5, 3)
>>> Y = torch.rand(2, 6, 4, 3)
>>> X_matched = match_batch_shape(X, Y)
>>> X_matched.shape
torch.Size([2, 6, 5, 3])
Feasible Volume¶
- botorch.utils.feasible_volume.get_feasible_samples(samples, inequality_constraints=None)[source]¶
Checks which of the samples satisfy all of the inequality constraints.
- Parameters:
samples (Tensor) – A sample_size x d-dim tensor of feature samples, where d is the feature dimension.
inequality_constraints (Optional[List[Tuple[Tensor, Tensor, float]]]) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
- Returns:
2-element tuple containing
Samples satisfying the linear constraints.
Estimated proportion of samples satisfying the linear constraints.
- Return type:
Tuple[Tensor, float]
- botorch.utils.feasible_volume.get_outcome_feasibility_probability(model, X, outcome_constraints, threshold=0.1, nsample_outcome=1000, seed=None)[source]¶
Monte Carlo estimate of the feasible volume with respect to the outcome constraints.
- Parameters:
model (Model) – The model used for sampling the posterior.
X (Tensor) – A tensor of dimension batch-shape x 1 x d, where d is feature dimension.
outcome_constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of dimension sample_shape x batch-shape x q x m to a Tensor of dimension sample_shape x batch-shape x q, where negative values imply feasibility.
threshold (float) – A lower limit for the probability of posterior samples feasibility.
nsample_outcome (int) – The number of samples from the model posterior.
seed (Optional[int]) – The seed for the posterior sampler. If omitted, use a random seed.
- Returns:
Estimated proportion of features for which posterior samples satisfy given outcome constraints with probability above or equal to the given threshold.
- Return type:
float
- botorch.utils.feasible_volume.estimate_feasible_volume(bounds, model, outcome_constraints, inequality_constraints=None, nsample_feature=1000, nsample_outcome=1000, threshold=0.1, verbose=False, seed=None, device=None, dtype=None)[source]¶
Monte Carlo estimate of the feasible volume with respect to feature constraints and outcome constraints.
- Parameters:
bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X.
model (Model) – The model used for sampling the outcomes.
outcome_constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of dimension sample_shape x batch-shape x q x m to a Tensor of dimension sample_shape x batch-shape x q, where negative values imply feasibility.
inequality_constraints (Optional[List[Tuple[Tensor, Tensor, float]]]) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
nsample_feature (int) – The number of feature samples satisfying the bounds.
nsample_outcome (int) – The number of outcome samples from the model posterior.
threshold (float) – A lower limit for the probability of outcome feasibility
seed (Optional[int]) – The seed for both feature and outcome samplers. If omitted, use a random seed.
verbose (bool) – An indicator for whether to log the results.
device (Optional[device]) –
dtype (Optional[dtype]) –
- Returns:
Estimated proportion of volume in feature space that is feasible wrt the bounds and the inequality constraints (linear).
Estimated proportion of feasible features for which posterior samples (outcome) satisfy the outcome constraints with probability above the given threshold.
- Return type:
2-element tuple containing
Constants¶
- botorch.utils.constants.get_constants(values, device=None, dtype=None)[source]¶
Returns scalar-valued Tensors containing each of the given constants. Used to expedite tensor operations involving scalar arithmetic. Note that the returned Tensors should not be modified in-place.
- Parameters:
values (Union[Number, Iterator[Number]]) –
device (Optional[device]) –
dtype (Optional[dtype]) –
- Return type:
Union[Tensor, Tuple[Tensor, …]]
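A minimal sketch (shapes and dtypes per the signature above):
>>> zero, one = get_constants((0.0, 1.0), dtype=torch.double)
>>> zero.item(), one.item()
(0.0, 1.0)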
Safe Math¶
- botorch.utils.safe_math.add(a, b, **kwargs)[source]¶
- Parameters:
a (Tensor) –
b (Tensor) –
- Return type:
Tensor
- botorch.utils.safe_math.sub(a, b)[source]¶
- Parameters:
a (Tensor) –
b (Tensor) –
- Return type:
Tensor
- botorch.utils.safe_math.div(a, b)[source]¶
- Parameters:
a (Tensor) –
b (Tensor) –
- Return type:
Tensor
- botorch.utils.safe_math.mul(a, b)[source]¶
- Parameters:
a (Tensor) –
b (Tensor) –
- Return type:
Tensor
Multi-Objective Utilities¶
Abstract Box Decompositions¶
Box decomposition algorithms.
Box Decomposition List¶
Box decomposition container.
- class botorch.utils.multi_objective.box_decompositions.box_decomposition_list.BoxDecompositionList(*box_decompositions)[source]¶
Bases:
Module
A list of box decompositions.
Initialize the box decomposition list.
- Parameters:
*box_decompositions (BoxDecomposition) – A variable number of box decompositions.
Example
>>> bd1 = FastNondominatedPartitioning(ref_point, Y=Y1)
>>> bd2 = FastNondominatedPartitioning(ref_point, Y=Y2)
>>> bd = BoxDecompositionList(bd1, bd2)
- property pareto_Y: List[Tensor]¶
This returns the non-dominated set.
Note: Internally, we store the negative pareto set (minimization).
- Returns:
A list where the ith element is the n_pareto_i x m-dim tensor of Pareto optimal outcomes for each box_decomposition i.
- property ref_point: Tensor¶
Get the reference point.
Note: Internally, we store the negative reference point (minimization).
- Returns:
A n_box_decompositions x m-dim tensor of outcomes.
- get_hypercell_bounds()[source]¶
Get the bounds of each hypercell in the decomposition.
- Returns:
A 2 x n_box_decompositions x num_cells x num_outcomes-dim tensor containing the lower and upper vertices bounding each hypercell.
- Return type:
Tensor
- update(Y)[source]¶
Update the partitioning.
- Parameters:
Y (Union[List[Tensor], Tensor]) – A n_box_decompositions x n x num_outcomes-dim tensor or a list where the ith element contains the new points for box_decomposition i.
- Return type:
None
- compute_hypervolume()[source]¶
Compute hypervolume that is dominated by the Pareto Frontier.
- Returns:
A (batch_shape)-dim tensor containing the hypervolume dominated by each Pareto frontier.
- Return type:
Tensor
- training: bool¶
Box Decomposition Utilities¶
Utilities for box decomposition algorithms.
- botorch.utils.multi_objective.box_decompositions.utils.compute_local_upper_bounds(U, Z, z)[source]¶
Compute local upper bounds.
Note: this assumes minimization.
This uses the incremental algorithm (Alg. 1) from [Lacour17].
- Parameters:
U (Tensor) – A n x m-dim tensor containing the local upper bounds.
Z (Tensor) – A n x m x m-dim tensor containing the defining points.
z (Tensor) – A m-dim tensor containing the new point.
- Returns:
A new n’ x m-dim tensor of local upper bounds.
A n’ x m x m-dim tensor containing the defining points.
- Return type:
2-element tuple containing
- botorch.utils.multi_objective.box_decompositions.utils.get_partition_bounds(Z, U, ref_point)[source]¶
Get the cell bounds given the local upper bounds and the defining points.
This implements Equation 2 in [Lacour17].
- Parameters:
Z (Tensor) – A n x m x m-dim tensor containing the defining points. The first dimension corresponds to u_idx, the second dimension corresponds to j, and Z[u_idx, j] is the set of defining points Z^j(u) where u = U[u_idx].
U (Tensor) – A n x m-dim tensor containing the local upper bounds.
ref_point (Tensor) – A m-dim tensor containing the reference point.
- Returns:
A 2 x num_cells x m-dim tensor containing the lower and upper vertices bounding each hypercell.
- Return type:
Tensor
- botorch.utils.multi_objective.box_decompositions.utils.update_local_upper_bounds_incremental(new_pareto_Y, U, Z)[source]¶
Update the current local upper bounds with the new Pareto points.
This assumes minimization.
- Parameters:
new_pareto_Y (Tensor) – A n x m-dim tensor containing the new Pareto points.
U (Tensor) – A n’ x m-dim tensor containing the local upper bounds.
Z (Tensor) – A n x m x m-dim tensor containing the defining points.
- Returns:
A new n’ x m-dim tensor of local upper bounds.
A n’ x m x m-dim tensor containing the defining points.
- Return type:
2-element tuple containing
- botorch.utils.multi_objective.box_decompositions.utils.compute_non_dominated_hypercell_bounds_2d(pareto_Y_sorted, ref_point)[source]¶
Compute an axis-aligned partitioning of the non-dominated space for 2 objectives.
- Parameters:
pareto_Y_sorted (Tensor) – A (batch_shape) x n_pareto x 2-dim tensor of pareto outcomes that are sorted by the 0th dimension in increasing order. All points must be better than the reference point.
ref_point (Tensor) – A (batch_shape) x 2-dim reference point.
- Returns:
A 2 x (batch_shape) x (n_pareto + 1) x m-dim tensor of cell bounds.
- Return type:
Tensor
- botorch.utils.multi_objective.box_decompositions.utils.compute_dominated_hypercell_bounds_2d(pareto_Y_sorted, ref_point)[source]¶
Compute an axis-aligned partitioning of the dominated space for 2-objectives.
- Parameters:
pareto_Y_sorted (Tensor) – A (batch_shape) x n_pareto x 2-dim tensor of pareto outcomes that are sorted by the 0th dimension in increasing order.
ref_point (Tensor) – A 2-dim reference point.
- Returns:
A 2 x (batch_shape) x n_pareto x m-dim tensor of cell bounds.
- Return type:
Tensor
Dominated Partitionings¶
Algorithms for partitioning the dominated space into hyperrectangles.
- class botorch.utils.multi_objective.box_decompositions.dominated.DominatedPartitioning(ref_point, Y=None)[source]¶
Bases:
FastPartitioning
Partition dominated space into axis-aligned hyperrectangles.
This uses Algorithm 1 from [Lacour17].
Example
>>> bd = DominatedPartitioning(ref_point, Y)
- Parameters:
ref_point (Tensor) – A m-dim tensor containing the reference point.
Y (Optional[Tensor]) – A (batch_shape) x n x m-dim tensor of outcomes.
- compute_hypervolume()[source]¶
Compute hypervolume that is dominated by the Pareto Frontier.
- Returns:
A (batch_shape)-dim tensor containing the hypervolume dominated by each Pareto frontier.
- Return type:
Tensor
- training: bool¶
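A hedged sketch computing the dominated hypervolume of a two-point front (maximization assumed, as above):
>>> ref_point = torch.zeros(2)
>>> Y = torch.tensor([[1.0, 2.0], [2.0, 1.0]])
>>> bd = DominatedPartitioning(ref_point=ref_point, Y=Y)
>>> bd.compute_hypervolume()  # tensor(3.)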
Hypervolume¶
Hypervolume Utilities.
References
C. M. Fonseca, L. Paquete, and M. Lopez-Ibanez. An improved dimension-sweep algorithm for the hypervolume indicator. In IEEE Congress on Evolutionary Computation, pages 1157-1163, Vancouver, Canada, July 2006.
H. Ishibuchi, N. Akedo, and Y. Nojima. A many-objective test problem for visually examining diversity maintenance behavior in a decision space. Proc. 13th Annual Conf. Genetic Evol. Comput., 2011.
- botorch.utils.multi_objective.hypervolume.infer_reference_point(pareto_Y, max_ref_point=None, scale=0.1, scale_max_ref_point=False)[source]¶
Get reference point for hypervolume computations.
This sets the reference point to be ref_point = nadir - 0.1 * range when there is no pareto_Y that is better than the reference point.
[Ishibuchi2011] find 0.1 to be a robust multiplier for scaling the nadir point.
Note: this assumes maximization of all objectives.
- Parameters:
pareto_Y (Tensor) – A n x m-dim tensor of Pareto-optimal points.
max_ref_point (Optional[Tensor]) – A m dim tensor indicating the maximum reference point.
scale (float) – A multiplier used to scale back the reference point based on the range of each objective.
scale_max_ref_point (bool) – A boolean indicating whether to apply scaling to the max_ref_point based on the range of each objective.
- Returns:
A m-dim tensor containing the reference point.
- Return type:
Tensor
- class botorch.utils.multi_objective.hypervolume.Hypervolume(ref_point)[source]¶
Bases:
object
Hypervolume computation dimension sweep algorithm from [Fonseca2006].
Adapted from Simon Wessing’s implementation of the algorithm (Variant 3, Version 1.2) from [Fonseca2006], as found in PyMOO: https://github.com/msu-coinlab/pymoo/blob/master/pymoo/vendor/hv.py
Maximization is assumed.
TODO: write this in C++ for faster looping.
Initialize hypervolume object.
- Parameters:
ref_point (Tensor) – m-dim Tensor containing the reference point.
- property ref_point: Tensor¶
Get reference point (for maximization).
- Returns:
A m-dim tensor containing the reference point.
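A minimal usage sketch (compute takes the Pareto front; maximization is assumed):
>>> ref_point = torch.zeros(2)
>>> pareto_Y = torch.tensor([[1.0, 2.0], [2.0, 1.0]])
>>> hv = Hypervolume(ref_point=ref_point)
>>> hv.compute(pareto_Y)  # 3.0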
- botorch.utils.multi_objective.hypervolume.sort_by_dimension(nodes, i)[source]¶
Sorts the list of nodes in-place by the specified objective.
- Parameters:
nodes (List[Node]) – A list of Nodes
i (int) – The index of the objective to sort by
- Return type:
None
- class botorch.utils.multi_objective.hypervolume.Node(m, dtype, device, data=None)[source]¶
Bases:
object
Node in the MultiList data structure.
Initialize a Node.
- Parameters:
m (int) – The number of objectives
dtype (torch.dtype) – The dtype
device (torch.device) – The device
data (Optional[Tensor]) – The tensor data to be stored in this Node.
- class botorch.utils.multi_objective.hypervolume.MultiList(m, dtype, device)[source]¶
Bases:
object
A special data structure used in hypervolume computation.
It consists of several doubly linked lists that share common nodes. Every node has multiple predecessors and successors, one in every list.
Initialize m doubly linked lists.
- Parameters:
m (int) – number of doubly linked lists
dtype (torch.dtype) – the dtype
device (torch.device) – the device
- append(node, index)[source]¶
Appends a node to the end of the list at the given index.
- Parameters:
node (Node) – the new node
index (int) – the index where the node should be appended.
- Return type:
None
- extend(nodes, index)[source]¶
Extends the list at the given index with the nodes.
- Parameters:
nodes (List[Node]) – list of nodes to append at the given index.
index (int) – the index where the nodes should be appended.
- Return type:
None
- reinsert(node, index, bounds)[source]¶
Re-inserts the node at its original position.
Re-inserts the node at its original position in all lists in [0, index] from which it was removed. This method assumes that the next and previous nodes of the node being reinserted are still present in the lists.
- Parameters:
node (Node) – The node
index (int) – The upper bound on the range of indices
bounds (Tensor) – A 2 x m-dim tensor of bounds on the objectives
- Return type:
None
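These structures are normally constructed internally by the hypervolume computation rather than by users; a rough sketch of direct use, for orientation only:
>>> import torch
>>> from botorch.utils.multi_objective.hypervolume import MultiList, Node
>>> ml = MultiList(m=2, dtype=torch.double, device=torch.device("cpu"))
>>> node = Node(m=2, dtype=torch.double, device=torch.device("cpu"), data=torch.tensor([1.0, 2.0]))
>>> ml.append(node, index=0)  # node joins the doubly linked list for objective 0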
Non-dominated Partitionings¶
Algorithms for partitioning the non-dominated space into rectangles.
References
I. Couckuyt, D. Deschrijver, and T. Dhaene. Towards efficient multiobjective optimization: multiobjective statistical criterions. IEEE Congress on Evolutionary Computation, 2012.
R. Lacour, K. Klamroth, and C. M. Fonseca. A box decomposition algorithm to compute the hypervolume indicator. Computers & Operations Research, 2017.
K. Yang, M. Emmerich, A. Deutz, and T. Bäck. Efficient computation of expected hypervolume improvement using box decomposition algorithms. Journal of Global Optimization, 2019.
- class botorch.utils.multi_objective.box_decompositions.non_dominated.NondominatedPartitioning(ref_point, Y=None, alpha=0.0)[source]¶
Bases:
BoxDecomposition
A class for partitioning the non-dominated space into hyper-cells.
Note: this assumes maximization. Internally, it multiplies outcomes by -1 and performs the decomposition under minimization. TODO: use maximization internally as well.
Note: it is only feasible to use this algorithm to compute an exact decomposition of the non-dominated space for m<5 objectives (alpha=0.0).
The alpha parameter can be increased to obtain an approximate partitioning faster. alpha is a fraction of the total hypervolume encapsulating the entire Pareto set. When a hypercell’s volume divided by the total hypervolume is less than alpha, we discard the hypercell. See Figure 2 in [Couckuyt2012] for a visual representation.
This PyTorch implementation of the binary partitioning algorithm ([Couckuyt2012]) is adapted from numpy/tensorflow implementation at: https://github.com/GPflow/GPflowOpt/blob/master/gpflowopt/pareto.py.
TODO: replace this with a more efficient decomposition. E.g. https://link.springer.com/content/pdf/10.1007/s10898-019-00798-7.pdf
Initialize NondominatedPartitioning.
- Parameters:
ref_point (Tensor) – A m-dim tensor containing the reference point.
Y (Optional[Tensor]) – A (batch_shape) x n x m-dim tensor.
alpha (float) – A threshold fraction of total volume used in an approximate decomposition.
Example
>>> bd = NondominatedPartitioning(ref_point, Y=Y1)
- get_hypercell_bounds()[source]¶
Get the bounds of each hypercell in the decomposition. The bounds are computed with respect to the (batch_shape) x m-dim reference point supplied at construction.
- Returns:
A 2 x num_cells x m-dim tensor containing the lower and upper vertices bounding each hypercell.
- Return type:
Tensor
- compute_hypervolume()[source]¶
Compute the hypervolume for the given reference point.
This method computes the hypervolume of the non-dominated space and returns the difference between the total hypervolume (that of the hyperrectangle spanned by the reference point and the ideal point) and the hypervolume of the non-dominated space.
- Returns:
(batch_shape)-dim tensor containing the dominated hypervolume.
- Return type:
Tensor
- training: bool¶
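A sketch of retrieving the cell bounds (values illustrative):
>>> import torch
>>> from botorch.utils.multi_objective.box_decompositions.non_dominated import (
...     NondominatedPartitioning,
... )
>>> ref_point = torch.tensor([0.0, 0.0])
>>> Y = torch.tensor([[1.0, 2.0], [2.0, 1.0]])
>>> bd = NondominatedPartitioning(ref_point, Y=Y)
>>> cell_bounds = bd.get_hypercell_bounds()  # 2 x num_cells x 2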
- class botorch.utils.multi_objective.box_decompositions.non_dominated.FastNondominatedPartitioning(ref_point, Y=None)[source]¶
Bases:
FastPartitioning
A class for partitioning the non-dominated space into hyper-cells.
Note: this assumes maximization. Internally, it multiplies by -1 and performs the decomposition under minimization.
This class is far more efficient than NondominatedPartitioning for exact box partitionings.
This class uses a two-step approach similar to that in [Yang2019]:
- first, Algorithm 1 from [Lacour17] is used to find the local lower bounds for the maximization problem;
- second, the local lower bounds are used as the Pareto frontier for the minimization problem, and Algorithm 1 from [Lacour17] is applied again to partition the space dominated by that Pareto frontier.
Initialize FastNondominatedPartitioning.
- Parameters:
ref_point (Tensor) – A m-dim tensor containing the reference point.
Y (Optional[Tensor]) – A (batch_shape) x n x m-dim tensor.
Example
>>> bd = FastNondominatedPartitioning(ref_point, Y=Y1)
- compute_hypervolume()[source]¶
Compute hypervolume that is dominated by the Pareto Frontier.
- Returns:
A (batch_shape)-dim tensor containing the hypervolume dominated by each Pareto frontier.
- Return type:
Tensor
- training: bool¶
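For an exact partitioning, both decompositions should agree on the dominated hypervolume; a quick consistency sketch (values illustrative):
>>> import torch
>>> from botorch.utils.multi_objective.box_decompositions.non_dominated import (
...     FastNondominatedPartitioning,
...     NondominatedPartitioning,
... )
>>> ref_point = torch.tensor([0.0, 0.0])
>>> Y = torch.tensor([[1.0, 2.0], [2.0, 1.0]])
>>> fast_bd = FastNondominatedPartitioning(ref_point, Y=Y)
>>> bd = NondominatedPartitioning(ref_point, Y=Y)
>>> torch.allclose(fast_bd.compute_hypervolume(), bd.compute_hypervolume())  # True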
Pareto¶
- botorch.utils.multi_objective.pareto.is_non_dominated(Y, deduplicate=True)[source]¶
Computes the non-dominated front.
Note: this assumes maximization.
For small n, this method uses a highly parallel methodology that compares all pairs of points in Y. However, this is memory intensive and slow for large n. For large n (or if Y is larger than 5MB), this method will dispatch to a loop-based approach that is faster and has a lower memory footprint.
- Parameters:
Y (Tensor) – A (batch_shape) x n x m-dim tensor of outcomes.
deduplicate (bool) – A boolean indicating whether to only return unique points on the Pareto frontier.
- Returns:
A (batch_shape) x n-dim boolean tensor indicating whether each point is non-dominated.
- Return type:
Tensor
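A minimal sketch of filtering a set of outcomes down to its Pareto frontier:
>>> import torch
>>> from botorch.utils.multi_objective.pareto import is_non_dominated
>>> Y = torch.tensor([[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]])
>>> mask = is_non_dominated(Y)  # tensor([True, True, False]); (0.5, 0.5) is dominated
>>> pareto_Y = Y[mask]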
Scalarization¶
Helper utilities for constructing scalarizations.
References
J. Knowles. ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Transactions on Evolutionary Computation, 2006.
S. Daulton, M. Balandat, and E. Bakshy. Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization. Advances in Neural Information Processing Systems 33, 2020.
- botorch.utils.multi_objective.scalarization.get_chebyshev_scalarization(weights, Y, alpha=0.05)[source]¶
Construct an augmented Chebyshev scalarization.
- Augmented Chebyshev scalarization:
objective(y) = min(w * y) + alpha * sum(w * y)
Outcomes are first normalized to [0,1] for maximization (or [-1,0] for minimization) and then an augmented Chebyshev scalarization is applied.
Note: this assumes maximization of the augmented Chebyshev scalarization. Minimizing/Maximizing an objective is supported by passing a negative/positive weight for that objective. To make all w * y’s have positive sign such that they are comparable when computing min(w * y), outcomes of minimization objectives are shifted from [0,1] to [-1,0].
See [Knowles2005] for details.
This scalarization can be used with qExpectedImprovement to implement q-ParEGO as proposed in [Daulton2020qehvi].
- Parameters:
weights (Tensor) – A m-dim tensor of weights. Positive for maximization and negative for minimization.
Y (Tensor) – A n x m-dim tensor of observed outcomes, which are used for scaling the outcomes to [0,1] or [-1,0].
alpha (float) – Parameter governing the influence of the weighted sum term. The default value comes from [Knowles2005].
- Returns:
Transform function using the objective weights.
- Return type:
Callable[[Tensor, Optional[Tensor]], Tensor]
Example
>>> weights = torch.tensor([0.75, -0.25])
>>> transform = get_chebyshev_scalarization(weights, Y)
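The returned callable can then be applied to (new) outcomes; continuing the example above, a sketch:
>>> Y_new = torch.rand(4, 2)
>>> scalarized_Y = transform(Y_new)  # a 4-dim tensor of scalarized objectives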
Probability Utilities¶
Multivariate Gaussian Probabilities via Bivariate Conditioning¶
Bivariate conditioning algorithm for approximating Gaussian probabilities, see [Genz2016numerical] and [Trinh2015bivariate].
G. Trinh and A. Genz. Bivariate conditioning approximations for multivariate normal probabilities. Statistics and Computing, 2015.
A. Genz and G. Trinh. Numerical Computation of Multivariate Normal Probabilities using Bivariate Conditioning. Monte Carlo and Quasi-Monte Carlo Methods, 2016.
G. J. Gibson, C. A. Glasbey, and D. A. Elston. Monte Carlo evaluation of multivariate normal integrals and sensitivity to variate ordering. Advances in Numerical Methods and Applications, 1994.
- class botorch.utils.probability.mvnxpb.mvnxpbState(*args, **kwargs)[source]¶
Bases:
dict
- step: int¶
- perm: LongTensor¶
- bounds: Tensor¶
- piv_chol: PivotedCholesky¶
- plug_ins: Tensor¶
- log_prob: Tensor¶
- log_prob_extra: Optional[Tensor]¶
- class botorch.utils.probability.mvnxpb.MVNXPB(covariance_matrix, bounds)[source]¶
Bases:
object
An algorithm for approximating Gaussian probabilities P(X in bounds), where X ~ N(0, covariance_matrix).
Initializes an MVNXPB instance.
- Parameters:
covariance_matrix (Tensor) – Covariance matrices of shape batch_shape x [n, n].
bounds (Tensor) – Tensor of lower and upper bounds, batch_shape x [n, 2]. These bounds are standardized internally and clipped to STANDARDIZED_RANGE.
- classmethod build(step, perm, bounds, piv_chol, plug_ins, log_prob, log_prob_extra=None)[source]¶
Creates an MVNXPB instance from raw arguments. Unlike MVNXPB.__init__, this method does not preprocess or copy terms.
- Parameters:
step (int) – Integer used to track the solver’s progress.
bounds (Tensor) – Tensor of lower and upper bounds, batch_shape x [n, 2].
piv_chol (PivotedCholesky) – A PivotedCholesky instance for the system.
plug_ins (Tensor) – Tensor of plug-in estimators used to update lower and upper bounds on random variables that have yet to be integrated out.
log_prob (Tensor) – Tensor of log probabilities.
log_prob_extra (Optional[Tensor]) – Tensor of conditional log probabilities for the next random variable. Used when integrating over an odd number of random variables.
perm (Tensor) –
- Return type:
MVNXPB
- solve(num_steps=None, eps=1e-10)[source]¶
Runs the MVNXPB solver instance for a fixed number of steps.
Calculates a bivariate conditional approximation to P(X in bounds), where X ~ N(0, Σ). For details, see [Genz2016numerical] or [Trinh2015bivariate].
- Parameters:
num_steps (Optional[int]) –
eps (float) –
- Return type:
Tensor
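A rough end-to-end sketch (values illustrative): for an identity covariance the coordinates are independent, so the probability factorizes.
>>> import torch
>>> from botorch.utils.probability.mvnxpb import MVNXPB
>>> covariance_matrix = torch.eye(3)
>>> bounds = torch.tensor([[-1.0, 1.0], [-1.0, 1.0], [-1.0, 1.0]])
>>> solver = MVNXPB(covariance_matrix, bounds)
>>> log_prob = solver.solve()  # approx log(0.6827 ** 3) = -1.145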
- select_pivot()[source]¶
GGE variable prioritization strategy from [Gibson1994monte].
Returns the index of the random variable least likely to satisfy its bounds when conditioning on the previously integrated random variables X[:t-1] attaining the values of plug-in estimators y[:t-1]. Equivalently,
argmin_{i = t, ..., n} P(X[i] in bounds[i] | X[:t-1] = y[:t-1]),
where t denotes the current step.
- Return type:
Optional[LongTensor]
- pivot_(pivot)[source]¶
Swap random variables at pivot and step positions.
- Parameters:
pivot (LongTensor) –
- Return type:
None
- augment(covariance_matrix, bounds, cross_covariance_matrix, disable_pivoting=False, jitter=None, max_tries=None)[source]¶
Augment an n-dimensional MVNXPB instance to include m additional random variables.
- Parameters:
covariance_matrix (Tensor) –
bounds (Tensor) –
cross_covariance_matrix (Tensor) –
disable_pivoting (bool) –
jitter (Optional[float]) –
max_tries (Optional[int]) –
- Return type:
MVNXPB
Truncated Multivariate Normal Distribution¶
- class botorch.utils.probability.truncated_multivariate_normal.TruncatedMultivariateNormal(loc, covariance_matrix=None, precision_matrix=None, scale_tril=None, bounds=None, solver=None, sampler=None, validate_args=None)[source]¶
Bases:
MultivariateNormal
Initializes an instance of a TruncatedMultivariateNormal distribution.
Let x ~ N(0, K) be an n-dimensional Gaussian random vector. This class represents the distribution of the truncated Multivariate normal random vector x | a <= x <= b.
- Parameters:
loc (Tensor) – A mean vector for the distribution, batch_shape x event_shape.
covariance_matrix (Optional[Tensor]) – Covariance matrix distribution parameter.
precision_matrix (Optional[Tensor]) – Inverse covariance matrix distribution parameter.
scale_tril (Optional[Tensor]) – Lower triangular, square-root covariance matrix distribution parameter.
bounds (Tensor) – A batch_shape x event_shape x 2 tensor of strictly increasing bounds for x so that bounds[…, 0] < bounds[…, 1] everywhere.
solver (Optional[MVNXPB]) – A pre-solved MVNXPB instance used to approximate the log partition.
sampler (Optional[LinearEllipticalSliceSampler]) – A LinearEllipticalSliceSampler instance used for sample generation.
validate_args (Optional[bool]) – Optional argument to super().__init__.
- log_prob(value)[source]¶
Approximates the true log probability.
- Parameters:
value (Tensor) –
- Return type:
Tensor
- rsample(sample_shape=torch.Size([]))[source]¶
Draw samples from the Truncated Multivariate Normal.
- Parameters:
sample_shape (Size) – The shape of the samples.
- Returns:
The (sample_shape x batch_shape x event_shape) tensor of samples.
- Return type:
Tensor
- property log_partition: Tensor¶
- property sampler: LinearEllipticalSliceSampler¶
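A brief usage sketch (values illustrative):
>>> import torch
>>> from botorch.utils.probability.truncated_multivariate_normal import (
...     TruncatedMultivariateNormal,
... )
>>> tmvn = TruncatedMultivariateNormal(
...     loc=torch.zeros(2),
...     covariance_matrix=torch.eye(2),
...     bounds=torch.tensor([[-1.0, 1.0], [-1.0, 1.0]]),
... )
>>> samples = tmvn.rsample(torch.Size([4]))  # 4 x 2; all draws lie inside the bounds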
- expand(batch_shape, _instance=None)[source]¶
Returns a new distribution instance (or populates an existing instance provided by a derived class) with batch dimensions expanded to batch_shape. This method calls expand on the distribution’s parameters. As such, this does not allocate new memory for the expanded distribution instance. Additionally, this does not repeat any args checking or parameter broadcasting in __init__, when an instance is first created.
- Parameters:
batch_shape (torch.Size) – the desired expanded size.
_instance (Optional[TruncatedMultivariateNormal]) – new instance provided by subclasses that need to override .expand.
- Returns:
New distribution instance with batch dimensions expanded to batch_shape.
- Return type:
TruncatedMultivariateNormal
Unified Skew Normal Distribution¶
- class botorch.utils.probability.unified_skew_normal.UnifiedSkewNormal(trunc, gauss, cross_covariance_matrix, validate_args=None)[source]¶
Bases:
Distribution
Unified Skew Normal distribution of Y | a < X < b for jointly Gaussian random vectors X ∈ R^m and Y ∈ R^n.
Batch shapes trunc.batch_shape and gauss.batch_shape must be broadcastable. Care should be taken when choosing trunc.batch_shape. When trunc is of lower batch dimensionality than gauss, the user should consider expanding trunc to hasten UnifiedSkewNormal.log_prob. In these cases, it is suggested that the user invoke trunc.solver before calling trunc.expand to avoid paying for multiple, identical solves.
- Parameters:
trunc (TruncatedMultivariateNormal) – Distribution of Z = (X | a < X < b) ∈ R^m.
gauss (MultivariateNormal) – Distribution of Y ∈ R^n.
cross_covariance_matrix (Union[Tensor, LinearOperator]) – Cross-covariance Cov(X, Y) ∈ R^{m x n}.
validate_args (Optional[bool]) – Optional argument to super().__init__.
- arg_constraints = {}¶
- log_prob(value)[source]¶
Computes the log probability ln p(Y = value | a < X < b).
- Parameters:
value (Tensor) –
- Return type:
Tensor
- rsample(sample_shape=torch.Size([]))[source]¶
Draw samples from the Unified Skew Normal.
- Parameters:
sample_shape (Size) – The shape of the samples.
- Returns:
The (sample_shape x batch_shape x event_shape) tensor of samples.
- Return type:
Tensor
- expand(batch_shape, _instance=None)[source]¶
Returns a new distribution instance (or populates an existing instance provided by a derived class) with batch dimensions expanded to batch_shape. This method calls expand on the distribution’s parameters. As such, this does not allocate new memory for the expanded distribution instance. Additionally, this does not repeat any args checking or parameter broadcasting in __init__, when an instance is first created.
- Parameters:
batch_shape (torch.Size) – the desired expanded size.
_instance (Optional[UnifiedSkewNormal]) – new instance provided by subclasses that need to override .expand.
- Returns:
New distribution instance with batch dimensions expanded to batch_shape.
- Return type:
UnifiedSkewNormal
- property covariance_matrix: Tensor¶
- property scale_tril: Tensor¶
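A minimal construction sketch (values illustrative; the implied joint covariance [[1.0, 0.5], [0.5, 1.0]] is positive definite):
>>> import torch
>>> from torch.distributions import MultivariateNormal
>>> from botorch.utils.probability.truncated_multivariate_normal import (
...     TruncatedMultivariateNormal,
... )
>>> from botorch.utils.probability.unified_skew_normal import UnifiedSkewNormal
>>> trunc = TruncatedMultivariateNormal(
...     loc=torch.zeros(1),
...     covariance_matrix=torch.eye(1),
...     bounds=torch.tensor([[-1.0, 1.0]]),
... )
>>> gauss = MultivariateNormal(torch.zeros(1), torch.eye(1))
>>> usn = UnifiedSkewNormal(trunc, gauss, cross_covariance_matrix=0.5 * torch.eye(1))
>>> usn.rsample(torch.Size([4]))  # 4 x 1 samples of Y | -1 < X < 1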
Bivariate Normal Probabilities and Statistics¶
Methods for computing bivariate normal probabilities and statistics.
A. Genz. Numerical computation of rectangular bivariate and trivariate normal and t probabilities. Statistics and Computing, 2004.
B. Muthen. Moments of the censored and truncated bivariate normal distribution. British Journal of Mathematical and Statistical Psychology, 1990.
- botorch.utils.probability.bvn.bvn(r, xl, yl, xu, yu)[source]¶
A function for computing bivariate normal probabilities.
Calculates P(xl < x < xu, yl < y < yu) where x and y are bivariate normal with unit variance and correlation coefficient r. See Section 2.4 of [Genz2004bvnt].
This method uses a sign flip trick to improve numerical performance. Many of bvnu’s internal branches rely on evaluations of Phi(-bound). For a < b < 0, the term Phi(-a) - Phi(-b) goes to zero faster than Phi(b) - Phi(a), because finfo(dtype).epsneg is typically much larger than finfo(dtype).tiny. In these cases, flipping the sign can prevent situations where bvnu(…) - bvnu(…) would otherwise be zero due to round-off error.
- Parameters:
r (Tensor) – Tensor of correlation coefficients.
xl (Tensor) – Tensor of lower bounds for x, same shape as r.
yl (Tensor) – Tensor of lower bounds for y, same shape as r.
xu (Tensor) – Tensor of upper bounds for x, same shape as r.
yu (Tensor) – Tensor of upper bounds for y, same shape as r.
- Returns:
Tensor of probabilities P(xl < x < xu, yl < y < yu).
- Return type:
Tensor
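As a quick numerical check (values illustrative): with r = 0, x and y are independent, so the rectangle probability factorizes.
>>> import torch
>>> from botorch.utils.probability.bvn import bvn
>>> r = torch.tensor(0.0)
>>> lo, hi = torch.tensor(-1.0), torch.tensor(1.0)
>>> bvn(r, lo, lo, hi, hi)  # approx 0.4661 = P(-1 < x < 1) ** 2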
- botorch.utils.probability.bvn.bvnu(r, h, k)[source]¶
Solves for P(x > h, y > k), where x and y are standard bivariate normal random variables with correlation coefficient r. In [Genz2004bvnt], this is Equation (1):
L(h, k, r) = P(x > h, y > k) = 1/(2 * pi * a) * int_{h}^{infty} int_{k}^{infty} f(x, y, r) dy dx,
where f(x, y, r) = exp(-(x^2 - 2rxy + y^2) / (2a^2)) and a = (1 - r^2)^{1/2}.
[Genz2004bvnt] report that the following integration scheme incurs a maximum error of 5e-16 when run in double precision: if |r| >= 0.925, use a 20-point quadrature rule on a 5th-order Taylor expansion; else, numerically integrate in polar coordinates using no more than 20 quadrature points.
- Parameters:
r (Tensor) – Tensor of correlation coefficients.
h (Tensor) – Tensor of negative upper bounds for x, same shape as r.
k (Tensor) – Tensor of negative upper bounds for y, same shape as r.
- Returns:
A tensor of probabilities P(x > h, y > k).
- Return type:
Tensor
- botorch.utils.probability.bvn.bvnmom(r, xl, yl, xu, yu, p=None)[source]¶
Computes the expected values of truncated, bivariate normal random variables.
Let x and y be a pair of standard bivariate normal random variables having correlation r. This function computes E([x,y] | [xl,yl] < [x,y] < [xu,yu]).
Following [Muthen1990moments] equations (4) and (5), we have
E(x | [xl, yl] < [x, y] < [xu, yu]) = Z^{-1} * (phi(xl) * P(yl < y < yu | x = xl) - phi(xu) * P(yl < y < yu | x = xu)),
where Z = P([xl, yl] < [x, y] < [xu, yu]) and phi is the standard normal PDF.
- Parameters:
r (Tensor) – Tensor of correlation coefficients.
xl (Tensor) – Tensor of lower bounds for x, same shape as r.
xu (Tensor) – Tensor of upper bounds for x, same shape as r.
yl (Tensor) – Tensor of lower bounds for y, same shape as r.
yu (Tensor) – Tensor of upper bounds for y, same shape as r.
p (Optional[Tensor]) – Tensor of probabilities P(xl < x < xu, yl < y < yu), same shape as r.
- Returns:
E(x | [xl, yl] < [x, y] < [xu, yu]) and E(y | [xl, yl] < [x, y] < [xu, yu]).
- Return type:
Tuple[Tensor, Tensor]
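A quick sanity-check sketch: for a box that is symmetric about the origin, both conditional means should vanish by symmetry.
>>> import torch
>>> from botorch.utils.probability.bvn import bvnmom
>>> r = torch.tensor(0.5)
>>> lo, hi = torch.tensor(-1.0), torch.tensor(1.0)
>>> Ex, Ey = bvnmom(r, lo, lo, hi, hi)  # both approx 0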
Elliptic Slice Sampler with Linear Constraints¶
Linear Elliptical Slice Sampler.
References
A. Gessner, O. Kanjilal, and P. Hennig. Integrals over Gaussians under linear domain constraints. AISTATS 2020.
This implementation is based (with multiple changes/optimizations) on the following implementations of the algorithm in [Gessner2020]: https://github.com/alpiges/LinConGauss and https://github.com/wjmaddox/pytorch_ess.
- class botorch.utils.probability.lin_ess.LinearEllipticalSliceSampler(inequality_constraints=None, bounds=None, interior_point=None, mean=None, covariance_matrix=None, covariance_root=None)[source]¶
Bases:
PolytopeSampler
Linear Elliptical Slice Sampler.
TODOs: clean up docstrings; optimize computations (if possible).
Maybe TODOs: support degenerate domains (with zero volume); add batch support.
Initialize LinearEllipticalSliceSampler.
- Parameters:
inequality_constraints (Optional[Tuple[Tensor, Tensor]]) – Tensors (A, b) describing inequality constraints A @ x <= b, where A is an n_ineq_con x d-dim Tensor and b is an n_ineq_con x 1-dim Tensor, with n_ineq_con the number of inequalities and d the dimension of the sample space. If omitted, must provide bounds instead.
bounds (Optional[Tensor]) – A 2 x d-dim tensor of box bounds. If omitted, must provide inequality_constraints instead.
interior_point (Optional[Tensor]) – A d x 1-dim Tensor representing a point in the (relative) interior of the polytope. If omitted, an interior point is determined automatically by solving a Linear Program. Note: It is crucial that the point lie in the interior of the feasible set (rather than on the boundary), otherwise the sampler will produce invalid samples.
mean (Optional[Tensor]) – The d x 1-dim mean of the MVN distribution (if omitted, use zero).
covariance_matrix (Optional[Tensor]) – The d x d-dim covariance matrix of the MVN distribution (if omitted, use the identity).
covariance_root (Optional[Tensor]) – A d x k-dim root of the covariance matrix such that covariance_root @ covariance_root.T = covariance_matrix.
This sampler samples from a multivariate normal N(mean, covariance_matrix) subject to linear domain constraints A x <= b (intersected with box bounds, if provided).
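A minimal sketch, assuming the draw(n) method inherited from PolytopeSampler:
>>> import torch
>>> from botorch.utils.probability.lin_ess import LinearEllipticalSliceSampler
>>> A = torch.tensor([[1.0, 1.0]])
>>> b = torch.tensor([[1.0]])
>>> sampler = LinearEllipticalSliceSampler(inequality_constraints=(A, b))
>>> samples = sampler.draw(n=8)  # draws satisfying x1 + x2 <= 1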
Linear Algebra Helpers¶
- botorch.utils.probability.linalg.block_matrix_concat(blocks)[source]¶
- Parameters:
blocks (Sequence[Sequence[Tensor]]) –
- Return type:
Tensor
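This helper is undocumented here; a rough sketch of the block layout suggested by its signature (treating the outer sequence as rows of blocks):
>>> import torch
>>> from botorch.utils.probability.linalg import block_matrix_concat
>>> A = torch.eye(2)
>>> Z = torch.zeros(2, 2)
>>> K = block_matrix_concat(blocks=((A, Z), (Z, A)))  # expected: a 4 x 4 block-diagonal matrix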
- botorch.utils.probability.linalg.augment_cholesky(Laa, Kbb, Kba=None, Lba=None, jitter=None)[source]¶
Computes the Cholesky factor of a block matrix K = [[Kaa, Kab], [Kba, Kbb]] based on a precomputed Cholesky factor Kaa = Laa Laa^T.
- Parameters:
Laa (Tensor) – Cholesky factor of K’s upper left block.
Kbb (Tensor) – Lower-right block of K.
Kba (Optional[Tensor]) – Lower-left block of K.
Lba (Optional[Tensor]) – Precomputed solve Kba Laa^{-T}.
jitter (Optional[float]) – Optional nugget to be added to the diagonal of Kbb.
- Return type:
Tensor
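A consistency sketch (values illustrative): augmenting the Cholesky factor of the upper-left block should reproduce the Cholesky factor of the full matrix.
>>> import torch
>>> from botorch.utils.probability.linalg import augment_cholesky
>>> K = torch.tensor([[2.0, 0.5], [0.5, 1.0]])
>>> Laa = torch.linalg.cholesky(K[:1, :1])
>>> L = augment_cholesky(Laa, Kbb=K[1:, 1:], Kba=K[1:, :1])
>>> torch.allclose(L, torch.linalg.cholesky(K))  # True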
- class botorch.utils.probability.linalg.PivotedCholesky(step: 'int', tril: 'Tensor', perm: 'LongTensor', diag: 'Optional[Tensor]' = None, validate_init: 'InitVar[bool]' = True)[source]¶
Bases:
object
- Parameters:
step (int) –
tril (Tensor) –
perm (LongTensor) –
diag (Optional[Tensor]) –
validate_init (InitVar[bool]) –
- step: int¶
- tril: Tensor¶
- perm: LongTensor¶
- diag: Optional[Tensor] = None¶
- validate_init: InitVar[bool] = True¶
- update_(eps=1e-10)[source]¶
Performs a single matrix decomposition step.
- Parameters:
eps (float) –
- Return type:
None
- concat(other, dim=0)[source]¶
- Parameters:
other (PivotedCholesky) –
dim (int) –
- Return type:
PivotedCholesky
Probability Helpers¶
- botorch.utils.probability.utils.case_dispatcher(out, cases=(), default=None)[source]¶
Basic implementation of a tensorized switching case statement.
- Parameters:
out (Tensor) – Tensor to which case outcomes are written.
cases (Iterable[Tuple[Callable[[], BoolTensor], Callable[[BoolTensor], Tensor]]]) – Iterable of function pairs (pred, func), where mask=pred() specifies whether func is applicable for each entry in out. Note that cases are resolved first-come, first-serve.
default (Optional[Callable[[BoolTensor], Tensor]]) – Optional func to which all unclaimed entries of out are dispatched.
- Return type:
Tensor
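A rough sketch based on the signature above (the predicate and case functions here are illustrative): compute |x| by dispatching negative entries to one branch and the rest to the default.
>>> import torch
>>> from botorch.utils.probability.utils import case_dispatcher
>>> x = torch.linspace(-1.0, 1.0, 5)
>>> out = torch.empty_like(x)
>>> result = case_dispatcher(
...     out,
...     cases=[(lambda: x < 0, lambda mask: -x[mask])],
...     default=lambda mask: x[mask],
... )  # expected to equal x.abs()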
- botorch.utils.probability.utils.get_constants(values, device=None, dtype=None)[source]¶
Returns scalar-valued Tensors containing each of the given constants. Used to expedite tensor operations involving scalar arithmetic. Note that the returned Tensors should not be modified in-place.
- Parameters:
values (Union[Number, Iterator[Number]]) –
device (Optional[device]) –
dtype (Optional[dtype]) –
- Return type:
Union[Tensor, Tuple[Tensor, …]]
- botorch.utils.probability.utils.get_constants_like(values, ref)[source]¶
- Parameters:
values (Union[Number, Iterator[Number]]) –
ref (Tensor) –
- Return type:
Union[Tensor, Iterator[Tensor]]
- botorch.utils.probability.utils.gen_positional_indices(shape, dim, device=None)[source]¶
- Parameters:
shape (Size) –
dim (int) –
device (Optional[device]) –
- Return type:
Iterator[LongTensor]
- botorch.utils.probability.utils.build_positional_indices(shape, dim, device=None)[source]¶
- Parameters:
shape (Size) –
dim (int) –
device (Optional[device]) –
- Return type:
LongTensor
- botorch.utils.probability.utils.leggauss(deg, **tkwargs)[source]¶
Computes the sample points and weights for Gauss-Legendre quadrature, returned as a pair of Tensors.
- Parameters:
deg (int) –
tkwargs (Any) –
- Return type:
Tuple[Tensor, Tensor]
- botorch.utils.probability.utils.ndtr(x)[source]¶
Standard normal CDF.
- Parameters:
x (Tensor) –
- Return type:
Tensor
- botorch.utils.probability.utils.phi(x)[source]¶
Standard normal PDF.
- Parameters:
x (Tensor) –
- Return type:
Tensor
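Quick numerical anchors for these helpers:
>>> import torch
>>> from botorch.utils.probability.utils import ndtr, phi
>>> phi(torch.tensor(0.0))  # approx 0.3989 = 1 / sqrt(2 * pi)
>>> ndtr(torch.tensor(0.0))  # 0.5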
- botorch.utils.probability.utils.swap_along_dim_(values, i, j, dim, buffer=None)[source]¶
Swaps Tensor slices in-place along dimension dim.
When passed as Tensors, i (and j) should be dim-dimensional tensors with the same shape as values.shape[:dim]. The exception to this rule occurs when dim=0, in which case i (and j) should be (at most) one-dimensional when passed as a Tensor.
- Parameters:
values (Tensor) – Tensor whose values are to be swapped.
i (Union[int, LongTensor]) – Indices for slices along dimension dim.
j (Union[int, LongTensor]) – Indices for slices along dimension dim.
dim (int) – The dimension of values along which to swap slices.
buffer (Optional[Tensor]) – Optional buffer used internally to store copied values.
- Returns:
The original values tensor.
- Return type:
Tensor
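A minimal sketch of the in-place swap:
>>> import torch
>>> from botorch.utils.probability.utils import swap_along_dim_
>>> values = torch.arange(6).reshape(3, 2)
>>> _ = swap_along_dim_(values, i=0, j=2, dim=0)  # rows 0 and 2 are exchanged in-place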