botorch.acquisition¶
Acquisition Function APIs¶
Abstract Acquisition Function APIs¶
Abstract base module for all botorch acquisition functions.
- class botorch.acquisition.acquisition.AcquisitionFunction(model)[source]¶
  Bases: torch.nn.modules.module.Module, abc.ABC
Abstract base class for acquisition functions.
Constructor for the AcquisitionFunction base class.
- Parameters
  - model (Model) – A fitted model.
- class botorch.acquisition.acquisition.OneShotAcquisitionFunction(model)[source]¶
  Bases: botorch.acquisition.acquisition.AcquisitionFunction, abc.ABC
Abstract base class for acquisition functions using one-shot optimization.
Constructor for the AcquisitionFunction base class.
- Parameters
  - model (Model) – A fitted model.
- abstract get_augmented_q_batch_size(q)[source]¶
  Get augmented q batch size for one-shot optimization.
- Parameters
  - q (int) – The number of candidates to consider jointly.
- Return type
  int
- Returns
  The augmented size for one-shot optimization (including variables parameterizing the fantasy solutions).
- abstract extract_candidates(X_full)[source]¶
  Extract the candidates from a full “one-shot” parameterization.
- Parameters
  - X_full (Tensor) – A b x q_aug x d-dim Tensor with b t-batches of q_aug design points each.
- Return type
  Tensor
- Returns
  A b x q x d-dim Tensor with b t-batches of q design points each.
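For illustration, a minimal sketch using the qKnowledgeGradient convention documented below, where the augmented size is q_aug = q + num_fantasies (model is assumed to be a fitted model):

>>> qKG = qKnowledgeGradient(model, num_fantasies=64)
>>> q_aug = qKG.get_augmented_q_batch_size(q=2)  # 2 + 64 = 66
>>> X_full = torch.rand(5, q_aug, 3)  # b=5 t-batches of d=3 points
>>> X = qKG.extract_candidates(X_full)  # 5 x 2 x 3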
Analytic Acquisition Function API¶
- class botorch.acquisition.analytic.AnalyticAcquisitionFunction(model, objective=None)[source]¶
  Bases: botorch.acquisition.acquisition.AcquisitionFunction, abc.ABC
Base class for analytic acquisition functions.
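A minimal sketch of a custom subclass (hypothetical class name; input validation omitted), assuming the model's posterior exposes a mean property:

>>> class ScaledPosteriorMean(AnalyticAcquisitionFunction):
...     def forward(self, X):
...         # X: b x 1 x d; returns a b-dim tensor of acquisition values
...         posterior = self.model.posterior(X)
...         return 2.0 * posterior.mean.squeeze(-1).squeeze(-1)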
Base constructor for analytic acquisition functions.
- Parameters
  - model (Model) – A fitted single-outcome model.
  - objective (Optional[ScalarizedObjective]) – A ScalarizedObjective (optional).
Monte-Carlo Acquisition Function API¶
- class botorch.acquisition.monte_carlo.MCAcquisitionFunction(model, sampler=None, objective=None, X_pending=None)[source]¶
  Bases: botorch.acquisition.acquisition.AcquisitionFunction, abc.ABC
Abstract base class for Monte-Carlo based batch acquisition functions.
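A minimal sketch of a custom subclass (essentially simple regret; hypothetical class name), relying on the default sampler and objective described below:

>>> class qMeanOfMax(MCAcquisitionFunction):
...     def forward(self, X):
...         posterior = self.model.posterior(X)  # joint posterior over the q points
...         samples = self.sampler(posterior)  # sample_shape x batch_shape x q x m
...         obj = self.objective(samples)  # sample_shape x batch_shape x q
...         return obj.max(dim=-1)[0].mean(dim=0)  # best over q, averaged over samples
>>> acqf = qMeanOfMax(model)
>>> value = acqf(test_X)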
Constructor for the MCAcquisitionFunction base class.
- Parameters
  - model (Model) – A fitted model.
  - sampler (Optional[MCSampler]) – The sampler used to draw base samples. Defaults to SobolQMCNormalSampler(num_samples=512, collapse_batch_dims=True).
  - objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().
  - X_pending (Optional[Tensor]) – A m x d-dim Tensor of m design points that have been submitted for function evaluation but have not yet been evaluated.
Acquisition Functions¶
Analytic Acquisition Functions¶
Analytic Acquisition Functions that evaluate the posterior without performing Monte-Carlo sampling.
- class botorch.acquisition.analytic.ExpectedImprovement(model, best_f, objective=None, maximize=True)[source]¶
  Bases: botorch.acquisition.analytic.AnalyticAcquisitionFunction
Single-outcome Expected Improvement (analytic).
Computes classic Expected Improvement over the current best observed value, using the analytic formula for a Normal posterior distribution. Unlike the MC-based acquisition functions, this relies on the posterior at a single test point being Gaussian (and requires the posterior to implement mean and variance properties). Only supports the case of q=1. The model must be single-outcome.
EI(x) = E(max(y - best_f, 0)), y ~ f(x)
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> EI = ExpectedImprovement(model, best_f=0.2)
>>> ei = EI(test_X)
Single-outcome Expected Improvement (analytic).
- Parameters
  - model (Model) – A fitted single-outcome model.
  - best_f (Union[float, Tensor]) – Either a scalar or a b-dim Tensor (batch mode) representing the best function value observed so far (assumed noiseless).
  - objective (Optional[ScalarizedObjective]) – A ScalarizedObjective (optional).
  - maximize (bool) – If True, consider the problem a maximization problem.
- forward(X)[source]¶
  Evaluate Expected Improvement on the candidate set X.
- Parameters
  - X (Tensor) – A b1 x … bk x 1 x d-dim batched tensor of d-dim design points. Expected Improvement is computed for each point individually, i.e., what is considered are the marginal posteriors, not the joint.
- Return type
  Tensor
- Returns
  A b1 x … bk-dim tensor of Expected Improvement values at the given design points X.
- class botorch.acquisition.analytic.PosteriorMean(model, objective=None)[source]¶
  Bases: botorch.acquisition.analytic.AnalyticAcquisitionFunction
Single-outcome Posterior Mean.
Only supports the case of q=1. Requires the model’s posterior to have a mean property. The model must be single-outcome.
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> PM = PosteriorMean(model)
>>> pm = PM(test_X)
Base constructor for analytic acquisition functions.
- Parameters
  - model (Model) – A fitted single-outcome model.
  - objective (Optional[ScalarizedObjective]) – A ScalarizedObjective (optional).
- class botorch.acquisition.analytic.ProbabilityOfImprovement(model, best_f, objective=None, maximize=True)[source]¶
  Bases: botorch.acquisition.analytic.AnalyticAcquisitionFunction
Single-outcome Probability of Improvement.
Probability of improvement over the current best observed value, computed using the analytic formula under a Normal posterior distribution. Only supports the case of q=1. Requires the posterior to be Gaussian. The model must be single-outcome.
PI(x) = P(y >= best_f), y ~ f(x)
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> PI = ProbabilityOfImprovement(model, best_f=0.2)
>>> pi = PI(test_X)
Single-outcome analytic Probability of Improvement.
- Parameters
  - model (Model) – A fitted single-outcome model.
  - best_f (Union[float, Tensor]) – Either a scalar or a b-dim Tensor (batch mode) representing the best function value observed so far (assumed noiseless).
  - objective (Optional[ScalarizedObjective]) – A ScalarizedObjective (optional).
  - maximize (bool) – If True, consider the problem a maximization problem.
- class botorch.acquisition.analytic.UpperConfidenceBound(model, beta, objective=None, maximize=True)[source]¶
  Bases: botorch.acquisition.analytic.AnalyticAcquisitionFunction
Single-outcome Upper Confidence Bound (UCB).
Analytic upper confidence bound that comprises the posterior mean plus an additional term: the posterior standard deviation weighted by a trade-off parameter, beta. Only supports the case of q=1 (i.e. greedy, non-batch selection of design points). The model must be single-outcome.
UCB(x) = mu(x) + sqrt(beta) * sigma(x), where mu and sigma are the posterior mean and standard deviation, respectively.
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> UCB = UpperConfidenceBound(model, beta=0.2)
>>> ucb = UCB(test_X)
Single-outcome Upper Confidence Bound.
- Parameters
  - model (Model) – A fitted single-outcome GP model (must be in batch mode if candidate sets X will be).
  - beta (Union[float, Tensor]) – Either a scalar or a one-dim tensor with b elements (batch mode) representing the trade-off parameter between mean and covariance.
  - objective (Optional[ScalarizedObjective]) – A ScalarizedObjective (optional).
  - maximize (bool) – If True, consider the problem a maximization problem.
- class botorch.acquisition.analytic.ConstrainedExpectedImprovement(model, best_f, objective_index, constraints, maximize=True)[source]¶
  Bases: botorch.acquisition.analytic.AnalyticAcquisitionFunction
Constrained Expected Improvement (feasibility-weighted).
Computes the analytic expected improvement for a Normal posterior distribution, weighted by a probability of feasibility. The objective and constraints are assumed to be independent and have Gaussian posterior distributions. Only supports the case q=1. The model should be multi-outcome, with the index of the objective and constraints passed to the constructor.
Constrained_EI(x) = EI(x) * Product_i P(y_i in [lower_i, upper_i]), where y_i ~ constraint_i(x) and lower_i, upper_i are the lower and upper bounds for the i-th constraint, respectively.
Example
>>> # example where the 0th output has a non-negativity constraint
>>> # and the 1st output is the objective
>>> model = SingleTaskGP(train_X, train_Y)
>>> constraints = {0: (0.0, None)}
>>> cEI = ConstrainedExpectedImprovement(model, 0.2, 1, constraints)
>>> cei = cEI(test_X)
Analytic Constrained Expected Improvement.
- Parameters
  - model (Model) – A fitted multi-output model, with the objective and constraints among its outputs.
  - best_f (Union[float, Tensor]) – Either a scalar or a b-dim Tensor (batch mode) representing the best function value observed so far (assumed noiseless).
  - objective_index (int) – The index of the objective.
  - constraints (Dict[int, Tuple[Optional[float], Optional[float]]]) – A dictionary of the form {i: [lower, upper]}, where i is the output index, and lower and upper are lower and upper bounds on that output (resp. interpreted as -Inf / Inf if None).
  - maximize (bool) – If True, consider the problem a maximization problem.
- class botorch.acquisition.analytic.NoisyExpectedImprovement(model, X_observed, num_fantasies=20, maximize=True)[source]¶
  Bases: botorch.acquisition.analytic.ExpectedImprovement
Single-outcome Noisy Expected Improvement (via fantasies).
This computes Noisy Expected Improvement by averaging over the Expected Improvement values of a number of fantasy models. Only supports the case q=1. Assumes that the posterior distribution of the model is Gaussian. The model must be single-outcome.
NEI(x) = E(max(y - max Y_baseline, 0)), (y, Y_baseline) ~ f((x, X_baseline)), where X_baseline are previously observed points.
Note: This acquisition function currently relies on using a FixedNoiseGP (required for noiseless fantasies).
Example
>>> model = FixedNoiseGP(train_X, train_Y, train_Yvar=train_Yvar)
>>> NEI = NoisyExpectedImprovement(model, train_X)
>>> nei = NEI(test_X)
Single-outcome Noisy Expected Improvement (via fantasies).
- Parameters
  - model (GPyTorchModel) – A fitted single-outcome model.
  - X_observed (Tensor) – A n x d Tensor of observed points that are likely to be the best observed points so far.
  - num_fantasies (int) – The number of fantasies to generate. The higher this number the more accurate the model (at the expense of model complexity and performance).
  - maximize (bool) – If True, consider the problem a maximization problem.
Monte-Carlo Acquisition Functions¶
Batch acquisition functions using the reparameterization trick in combination with (quasi) Monte-Carlo sampling. See [Rezende2014reparam] and [Wilson2017reparam].
- Rezende2014reparam
  D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML 2014.
- Wilson2017reparam
  J. T. Wilson, R. Moriconi, F. Hutter, and M. P. Deisenroth. The reparameterization trick for acquisition functions. ArXiv 2017.
- class botorch.acquisition.monte_carlo.qExpectedImprovement(model, best_f, sampler=None, objective=None, X_pending=None)[source]¶
  Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction
MC-based batch Expected Improvement.
This computes qEI by (1) sampling the joint posterior over q points, (2) evaluating the improvement over the current best for each sample, (3) maximizing over q, and (4) averaging over the samples.
qEI(X) = E(max(max Y - best_f, 0)), Y ~ f(X), where X = (x_1,…,x_q)
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> best_f = train_Y.max()
>>> sampler = SobolQMCNormalSampler(1000)
>>> qEI = qExpectedImprovement(model, best_f, sampler)
>>> qei = qEI(test_X)
q-Expected Improvement.
- Parameters
  - model (Model) – A fitted model.
  - best_f (Union[float, Tensor]) – The best objective value observed so far (assumed noiseless).
  - sampler (Optional[MCSampler]) – The sampler used to draw base samples. Defaults to SobolQMCNormalSampler(num_samples=500, collapse_batch_dims=True).
  - objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().
  - X_pending (Optional[Tensor]) – A m x d-dim Tensor of m design points that have been submitted for function evaluation but have not yet been evaluated. Concatenated into X upon forward call. Copied and set to have no gradient.
- class botorch.acquisition.monte_carlo.qNoisyExpectedImprovement(model, X_baseline, sampler=None, objective=None, X_pending=None, prune_baseline=False)[source]¶
  Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction
MC-based batch Noisy Expected Improvement.
This function does not assume a best_f is known (which would require noiseless observations). Instead, it uses samples from the joint posterior over the q test points and previously observed points. The improvement over previously observed points is computed for each sample and averaged.
qNEI(X) = E(max(max Y - max Y_baseline, 0)), where (Y, Y_baseline) ~ f((X, X_baseline)), X = (x_1,…,x_q)
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> sampler = SobolQMCNormalSampler(1000)
>>> qNEI = qNoisyExpectedImprovement(model, train_X, sampler)
>>> qnei = qNEI(test_X)
q-Noisy Expected Improvement.
- Parameters
  - model (Model) – A fitted model.
  - X_baseline (Tensor) – A r x d-dim Tensor of r design points that have already been observed. These points are considered as the potential best design point.
  - sampler (Optional[MCSampler]) – The sampler used to draw base samples. Defaults to SobolQMCNormalSampler(num_samples=500, collapse_batch_dims=True).
  - objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().
  - X_pending (Optional[Tensor]) – A m x d-dim Tensor of m design points that have been submitted for function evaluation but have not yet been evaluated. Concatenated into X upon forward call. Copied and set to have no gradient.
  - prune_baseline (bool) – If True, remove points in X_baseline that are highly unlikely to be the best point. This can significantly improve performance and is generally recommended. In order to customize pruning parameters, instead manually call botorch.acquisition.utils.prune_inferior_points on X_baseline before instantiating the acquisition function.
- class botorch.acquisition.monte_carlo.qProbabilityOfImprovement(model, best_f, sampler=None, objective=None, X_pending=None, tau=0.001)[source]¶
  Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction
MC-based batch Probability of Improvement.
Estimates the probability of improvement over the current best observed value by sampling from the joint posterior distribution of the q-batch. MC-based estimation of a probability involves taking the expectation of an indicator function; to support auto-differentiation, the indicator is replaced with a sigmoid function with temperature parameter tau.
qPI(X) = P(max Y >= best_f), Y ~ f(X), X = (x_1,…,x_q)
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> best_f = train_Y.max()
>>> sampler = SobolQMCNormalSampler(1000)
>>> qPI = qProbabilityOfImprovement(model, best_f, sampler)
>>> qpi = qPI(test_X)
q-Probability of Improvement.
- Parameters
  - model (Model) – A fitted model.
  - best_f (Union[float, Tensor]) – The best objective value observed so far (assumed noiseless).
  - sampler (Optional[MCSampler]) – The sampler used to draw base samples. Defaults to SobolQMCNormalSampler(num_samples=500, collapse_batch_dims=True).
  - objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().
  - X_pending (Optional[Tensor]) – A m x d-dim Tensor of m design points that have been submitted for function evaluation but have not yet been evaluated. Concatenated into X upon forward call. Copied and set to have no gradient.
  - tau (float) – The temperature parameter used in the sigmoid approximation of the step function. Smaller values yield more accurate approximations of the function, but result in gradient estimates with higher variance.
- class botorch.acquisition.monte_carlo.qSimpleRegret(model, sampler=None, objective=None, X_pending=None)[source]¶
  Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction
MC-based batch Simple Regret.
Samples from the joint posterior over the q-batch and computes the simple regret.
qSR(X) = E(max Y), Y ~ f(X), X = (x_1,…,x_q)
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> sampler = SobolQMCNormalSampler(1000)
>>> qSR = qSimpleRegret(model, sampler)
>>> qsr = qSR(test_X)
Constructor for the MCAcquisitionFunction base class.
- Parameters
  - model (Model) – A fitted model.
  - sampler (Optional[MCSampler]) – The sampler used to draw base samples. Defaults to SobolQMCNormalSampler(num_samples=512, collapse_batch_dims=True).
  - objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().
  - X_pending (Optional[Tensor]) – A m x d-dim Tensor of m design points that have been submitted for function evaluation but have not yet been evaluated.
- class botorch.acquisition.monte_carlo.qUpperConfidenceBound(model, beta, sampler=None, objective=None, X_pending=None)[source]¶
  Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction
MC-based batch Upper Confidence Bound.
Uses a reparameterization to extend UCB to qUCB for q > 1 (See Appendix A of [Wilson2017reparam].)
qUCB = E(max(mu + |Y_tilde - mu|)), where Y_tilde ~ N(mu, beta pi/2 Sigma) and f(X) has distribution N(mu, Sigma).
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> sampler = SobolQMCNormalSampler(1000)
>>> qUCB = qUpperConfidenceBound(model, 0.1, sampler)
>>> qucb = qUCB(test_X)
q-Upper Confidence Bound.
- Parameters
  - model (Model) – A fitted model.
  - beta (float) – Controls the tradeoff between mean and standard deviation in UCB.
  - sampler (Optional[MCSampler]) – The sampler used to draw base samples. Defaults to SobolQMCNormalSampler(num_samples=500, collapse_batch_dims=True).
  - objective (Optional[MCAcquisitionObjective]) – The MCAcquisitionObjective under which the samples are evaluated. Defaults to IdentityMCObjective().
  - X_pending (Optional[Tensor]) – A m x d-dim Tensor of m design points that have been submitted for function evaluation but have not yet been evaluated. Concatenated into X upon forward call. Copied and set to have no gradient.
The One-Shot Knowledge Gradient¶
Batch Knowledge Gradient (KG) via one-shot optimization as introduced in [Balandat2019botorch]. For broader discussion of KG see also [Frazier2008knowledge], [Wu2016parallelkg].
- Balandat2019botorch
  M. Balandat, B. Karrer, D. R. Jiang, S. Daulton, B. Letham, A. G. Wilson, and E. Bakshy. BoTorch: Programmable Bayesian Optimization in PyTorch. ArXiv 2019.
- Frazier2008knowledge
  P. Frazier, W. Powell, and S. Dayanik. A Knowledge-Gradient policy for sequential information collection. SIAM Journal on Control and Optimization, 2008.
- Wu2016parallelkg
  J. Wu and P. Frazier. The parallel knowledge gradient method for batch Bayesian optimization. NIPS 2016.
- class botorch.acquisition.knowledge_gradient.qKnowledgeGradient(model, num_fantasies=64, sampler=None, objective=None, inner_sampler=None, X_pending=None, current_value=None)[source]¶
  Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction, botorch.acquisition.acquisition.OneShotAcquisitionFunction
Batch Knowledge Gradient using one-shot optimization.
This computes the batch Knowledge Gradient using fantasies for the outer expectation and either the model posterior mean or MC-sampling for the inner expectation.
In addition to the design variables, the input X also includes variables for the optimal designs for each of the fantasy models. For a fixed number of fantasies, all parts of X can be optimized in a “one-shot” fashion.
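Example

A minimal usage sketch, assuming a fitted model; forward expects the one-shot parameterization with q + num_fantasies points per t-batch described below:

>>> model = SingleTaskGP(train_X, train_Y)
>>> qKG = qKnowledgeGradient(model, num_fantasies=64)
>>> q_aug = qKG.get_augmented_q_batch_size(q=2)  # 2 + 64 = 66
>>> X_full = torch.rand(5, q_aug, train_X.size(-1))  # 5 t-batches
>>> kg = qKG(X_full)  # shape: 5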
q-Knowledge Gradient (one-shot optimization).
- Parameters
  - model (Model) – A fitted model. Must support fantasizing.
  - num_fantasies (Optional[int]) – The number of fantasy points to use. More fantasy points result in a better approximation, at the expense of memory and wall time. Unused if sampler is specified.
  - sampler (Optional[MCSampler]) – The sampler used to sample fantasy observations. Optional if num_fantasies is specified.
  - objective (Optional[AcquisitionObjective]) – The objective under which the samples are evaluated. If None or a ScalarizedObjective, then the analytic posterior mean is used, otherwise the objective is MC-evaluated (using inner_sampler).
  - inner_sampler (Optional[MCSampler]) – The sampler used for inner sampling. Ignored if the objective is None or a ScalarizedObjective.
  - X_pending (Optional[Tensor]) – A m x d-dim Tensor of m design points that have been submitted for function evaluation but have not yet been evaluated.
  - current_value (Optional[Tensor]) – The current value, i.e. the expected best objective given the observed points D. If omitted, forward will not return the actual KG value, but the expected best objective given the data set D u X.
- forward(X)[source]¶
  Evaluate qKnowledgeGradient on the candidate set X.
- Parameters
  - X (Tensor) – A b x (q + num_fantasies) x d Tensor with b t-batches of q + num_fantasies design points each. We split this X tensor into two parts in the q dimension (dim=-2). The first q are the q-batch of design points and the last num_fantasies are the current solutions of the inner optimization problem.

    X_fantasies = X[…, -num_fantasies:, :]  # shape: b x num_fantasies x d
    X_actual = X[…, :-num_fantasies, :]  # shape: b x q x d
- Return type
  Tensor
- Returns
  A Tensor of shape b. For t-batch b, the q-KG value of the design X_actual[b] is averaged across the fantasy models, where X_fantasies[b, i] is chosen as the final selection for the i-th fantasy model. NOTE: If current_value is not provided, then this is not the true KG value of X_actual[b], and X_fantasies[b, :] must be maximized at fixed X_actual[b].
- get_augmented_q_batch_size(q)[source]¶
  Get augmented q batch size for one-shot optimization.
- Parameters
  - q (int) – The number of candidates to consider jointly.
- Return type
  int
- Returns
  The augmented size for one-shot optimization (including variables parameterizing the fantasy solutions).
- extract_candidates(X_full)[source]¶
  We only return X as the set of candidates post-optimization.
- Parameters
  - X_full (Tensor) – A b x (q + num_fantasies) x d-dim Tensor with b t-batches of q + num_fantasies design points each.
- Return type
  Tensor
- Returns
  A b x q x d-dim Tensor with b t-batches of q design points each.
- class botorch.acquisition.knowledge_gradient.qMultiFidelityKnowledgeGradient(model, num_fantasies=64, sampler=None, objective=None, inner_sampler=None, X_pending=None, current_value=None, cost_aware_utility=None, project=<function qMultiFidelityKnowledgeGradient.<lambda>>, expand=<function qMultiFidelityKnowledgeGradient.<lambda>>)[source]¶
  Bases: botorch.acquisition.knowledge_gradient.qKnowledgeGradient
Batch Knowledge Gradient for multi-fidelity optimization.
A version of qKnowledgeGradient that supports multi-fidelity optimization via a CostAwareUtility and the project and expand operators. If none of these are set, this acquisition function reduces to qKnowledgeGradient.
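Example

A minimal sketch, assuming a fitted multi-fidelity model and a hypothetical fitted cost model cost_model; InverseCostWeightedUtility and project_to_target_fidelity are documented later in this module:

>>> cost_utility = InverseCostWeightedUtility(cost_model=cost_model)
>>> qMFKG = qMultiFidelityKnowledgeGradient(
...     model,
...     num_fantasies=64,
...     cost_aware_utility=cost_utility,
...     project=lambda X: project_to_target_fidelity(X),
... )
>>> mfkg = qMFKG(test_X)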
Multi-Fidelity q-Knowledge Gradient (one-shot optimization).
- Parameters
  - model (Model) – A fitted model. Must support fantasizing.
  - num_fantasies (Optional[int]) – The number of fantasy points to use. More fantasy points result in a better approximation, at the expense of memory and wall time. Unused if sampler is specified.
  - sampler (Optional[MCSampler]) – The sampler used to sample fantasy observations. Optional if num_fantasies is specified.
  - objective (Optional[AcquisitionObjective]) – The objective under which the samples are evaluated. If None or a ScalarizedObjective, then the analytic posterior mean is used, otherwise the objective is MC-evaluated (using inner_sampler).
  - inner_sampler (Optional[MCSampler]) – The sampler used for inner sampling. Ignored if the objective is None or a ScalarizedObjective.
  - X_pending (Optional[Tensor]) – A m x d-dim Tensor of m design points that have been submitted for function evaluation but have not yet been evaluated.
  - current_value (Optional[Tensor]) – The current value, i.e. the expected best objective given the observed points D. If omitted, forward will not return the actual KG value, but the expected best objective given the data set D u X.
  - cost_aware_utility (Optional[CostAwareUtility]) – A CostAwareUtility computing the cost-transformed utility from a candidate set and samples of increases in utility.
  - project (Callable[[Tensor], Tensor]) – A callable mapping a batch_shape x q x d tensor of design points to a tensor of the same shape projected to the desired target set (e.g. the target fidelities in case of multi-fidelity optimization).
  - expand (Callable[[Tensor], Tensor]) – A callable mapping a batch_shape x q x d input tensor to a batch_shape x (q + q_e) x d-dim output tensor, where the q_e additional points in each q-batch correspond to additional (“trace”) observations.
- property cost_sampler¶
- forward(X)[source]¶
  Evaluate qMultiFidelityKnowledgeGradient on the candidate set X.
- Parameters
  - X (Tensor) – A b x (q + num_fantasies) x d Tensor with b t-batches of q + num_fantasies design points each. We split this X tensor into two parts in the q dimension (dim=-2). The first q are the q-batch of design points and the last num_fantasies are the current solutions of the inner optimization problem.

    X_fantasies = X[…, -num_fantasies:, :]  # shape: b x num_fantasies x d
    X_actual = X[…, :-num_fantasies, :]  # shape: b x q x d

    In addition, X may be augmented with fidelity parameters as part of the d-dimension. Projecting fidelities to the target fidelity is handled by project.
- Return type
  Tensor
- Returns
  A Tensor of shape b. For t-batch b, the q-KG value of the design X_actual[b] is averaged across the fantasy models, where X_fantasies[b, i] is chosen as the final selection for the i-th fantasy model. NOTE: If current_value is not provided, then this is not the true KG value of X_actual[b], and X_fantasies[b, :] must be maximized at fixed X_actual[b].
Entropy-Based Acquisition Functions¶
Acquisition functions for max-value entropy search (MES) and multi-fidelity MES with noisy observations and trace observations.
References
- Wang2018mves
  Z. Wang and S. Jegelka. Max-value Entropy Search for Efficient Bayesian Optimization. arXiv:1703.01968v3, 2018.
- Takeno2019mfmves
  S. Takeno et al. Multi-fidelity Bayesian Optimization with Max-value Entropy Search. arXiv:1901.08275v1, 2019.
- class botorch.acquisition.max_value_entropy_search.qMaxValueEntropy(model, candidate_set, num_fantasies=16, num_mv_samples=10, num_y_samples=128, use_gumbel=True, maximize=True, X_pending=None)[source]¶
  Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction
The acquisition function for Max-value Entropy Search.
This acquisition function computes the mutual information of max values and a candidate point X. See [Wang2018mves] for a detailed discussion.
The model must be single-outcome. q > 1 is supported through cyclic optimization and fantasies.
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> candidate_set = torch.rand(1000, bounds.size(1))
>>> candidate_set = bounds[0] + (bounds[1] - bounds[0]) * candidate_set
>>> MES = qMaxValueEntropy(model, candidate_set)
>>> mes = MES(test_X)
Single-outcome max-value entropy search acquisition function.
- Parameters
  - model (Model) – A fitted single-outcome model.
  - candidate_set (Tensor) – A n x d Tensor including n candidate points to discretize the design space. Max values are sampled from the (joint) model posterior over these points.
  - num_fantasies (int) – Number of fantasies to generate. The higher this number the more accurate the model (at the expense of model complexity, wall time and memory). Ignored if X_pending is None.
  - num_mv_samples (int) – Number of max value samples.
  - num_y_samples (int) – Number of posterior samples at the specific design point X.
  - use_gumbel (bool) – If True, use Gumbel approximation to sample the max values.
  - X_pending (Optional[Tensor]) – A m x d-dim Tensor of m design points that have been submitted for function evaluation but have not yet been evaluated.
  - maximize (bool) – If True, consider the problem a maximization problem.
- set_X_pending(X_pending=None)[source]¶
  Set pending points.
Informs the acquisition function about pending design points, fantasizes the model on the pending points and draws max-value samples from the fantasized model posterior.
- Parameters
  - X_pending (Optional[Tensor]) – m x d Tensor with m d-dim design points that have been submitted for evaluation but have not yet been evaluated.
- Return type
  None
- class botorch.acquisition.max_value_entropy_search.qMultiFidelityMaxValueEntropy(model, candidate_set, num_fantasies=16, num_mv_samples=10, num_y_samples=128, use_gumbel=True, X_pending=None, maximize=True, cost_aware_utility=None, project=<function qMultiFidelityMaxValueEntropy.<lambda>>, expand=<function qMultiFidelityMaxValueEntropy.<lambda>>)[source]¶
  Bases: botorch.acquisition.max_value_entropy_search.qMaxValueEntropy
Multi-fidelity max-value entropy.
The acquisition function for multi-fidelity max-value entropy search with support for trace observations. See [Takeno2019mfmves] for a detailed discussion of the basic ideas on multi-fidelity MES (note that this implementation is somewhat different).
The model must be single-outcome. q > 1 is supported through cyclic optimization and fantasies.
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> candidate_set = torch.rand(1000, bounds.size(1))
>>> candidate_set = bounds[0] + (bounds[1] - bounds[0]) * candidate_set
>>> MF_MES = qMultiFidelityMaxValueEntropy(model, candidate_set)
>>> mf_mes = MF_MES(test_X)
Single-outcome max-value entropy search acquisition function.
- Parameters
  - model (Model) – A fitted single-outcome model.
  - candidate_set (Tensor) – A n x d Tensor including n candidate points to discretize the design space, which will be used to sample the max values from their posteriors.
  - cost_aware_utility (Optional[CostAwareUtility]) – A CostAwareUtility computing the cost-transformed utility from a candidate set and samples of increases in utility.
  - num_fantasies (int) – Number of fantasies to generate. The higher this number the more accurate the model (at the expense of model complexity and performance); it is only used when X_pending is not None.
  - num_mv_samples (int) – Number of max value samples.
  - num_y_samples (int) – Number of posterior samples at the specific design point X.
  - use_gumbel (bool) – If True, use Gumbel approximation to sample the max values.
  - X_pending (Optional[Tensor]) – A m x d-dim Tensor of m design points that have been submitted for function evaluation but have not yet been evaluated.
  - maximize (bool) – If True, consider the problem a maximization problem.
  - project (Callable[[Tensor], Tensor]) – A callable mapping a batch_shape x q x d tensor of design points to a tensor of the same shape projected to the desired target set (e.g. the target fidelities in case of multi-fidelity optimization).
  - expand (Callable[[Tensor], Tensor]) – A callable mapping a batch_shape x q x d input tensor to a batch_shape x (q + q_e) x d-dim output tensor, where the q_e additional points in each q-batch correspond to additional (“trace”) observations.
Objectives and Cost-Aware Utilities¶
Objectives¶
Objective Modules to be used with acquisition functions.
- class botorch.acquisition.objective.AcquisitionObjective[source]¶
  Bases: torch.nn.modules.module.Module, abc.ABC
Abstract base class for objectives.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- class botorch.acquisition.objective.ScalarizedObjective(weights, offset=0.0)[source]¶
  Bases: botorch.acquisition.objective.AcquisitionObjective
Affine objective to be used with analytic acquisition functions.
For a Gaussian posterior at a single point (q=1) with mean mu and covariance matrix Sigma, this yields a single-output posterior with mean weights^T mu and variance weights^T Sigma weights.
Example
Example for a model with two outcomes:
>>> weights = torch.tensor([0.5, 0.25])
>>> objective = ScalarizedObjective(weights)
>>> EI = ExpectedImprovement(model, best_f=0.1, objective=objective)
Affine objective.
- Parameters
  - weights (Tensor) – A one-dimensional tensor with m elements representing the linear weights on the outputs.
  - offset (float) – An offset to be added to the posterior mean.
- forward(posterior)[source]¶
  Compute the posterior of the affine transformation.
- Parameters
  - posterior (GPyTorchPosterior) – A posterior with the same number of outputs as the elements in self.weights.
- Return type
  GPyTorchPosterior
- Returns
  A single-output posterior.
- class botorch.acquisition.objective.MCAcquisitionObjective[source]¶
  Bases: botorch.acquisition.objective.AcquisitionObjective
Abstract base class for MC-based objectives.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- abstract forward(samples)[source]¶
  Evaluate the objective on the samples.
- Parameters
  - samples (Tensor) – A sample_shape x batch_shape x q x m-dim Tensor of samples from a model posterior.
- Return type
  Tensor
- Returns
  A sample_shape x batch_shape x q-dim Tensor of objective values (assuming maximization).

This method is usually not called directly, but via the objective's `__call__` method.

Example

>>> samples = sampler(posterior)
>>> outcome = mc_obj(samples)
- class botorch.acquisition.objective.IdentityMCObjective[source]¶
  Bases: botorch.acquisition.objective.MCAcquisitionObjective
Trivial objective extracting the last dimension.
Example
>>> identity_objective = IdentityMCObjective()
>>> samples = sampler(posterior)
>>> objective = identity_objective(samples)
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(samples)[source]¶
  Evaluate the objective on the samples.
- Parameters
  - samples (Tensor) – A sample_shape x batch_shape x q x m-dim Tensor of samples from a model posterior.
- Return type
  Tensor
- Returns
  A sample_shape x batch_shape x q-dim Tensor of objective values (assuming maximization).

This method is usually not called directly, but via the objective's `__call__` method.

Example

>>> samples = sampler(posterior)
>>> outcome = mc_obj(samples)
- class botorch.acquisition.objective.LinearMCObjective(weights)[source]¶
  Bases: botorch.acquisition.objective.MCAcquisitionObjective
Linear objective constructed from a weight tensor.
For input samples and mc_obj = LinearMCObjective(weights), this produces mc_obj(samples) = sum_{i} weights[i] * samples[…, i]
Example
Example for a model with two outcomes:
>>> weights = torch.tensor([0.75, 0.25])
>>> linear_objective = LinearMCObjective(weights)
>>> samples = sampler(posterior)
>>> objective = linear_objective(samples)
Linear Objective.
- Parameters
  - weights (Tensor) – A one-dimensional tensor with m elements representing the linear weights on the outputs.
- class botorch.acquisition.objective.GenericMCObjective(objective)[source]¶
  Bases: botorch.acquisition.objective.MCAcquisitionObjective
Objective generated from a generic callable.
Allows construction of arbitrary MC-objective functions from a generic callable. In order to be able to use gradient-based acquisition function optimization, it should be possible to backpropagate through the callable.
Example
>>> generic_objective = GenericMCObjective(lambda Y: torch.sqrt(Y).sum(dim=-1))
>>> samples = sampler(posterior)
>>> objective = generic_objective(samples)
Objective generated from a generic callable.
- Parameters
  - objective (Callable[[Tensor], Tensor]) – A callable mapping a sample_shape x batch_shape x q x m-dim Tensor to a sample_shape x batch_shape x q-dim Tensor of objective values.
- forward(samples)[source]¶
  Evaluate the objective on the samples.
- Parameters
  - samples (Tensor) – A sample_shape x batch_shape x q x m-dim Tensor of samples from a model posterior.
- Return type
  Tensor
- Returns
  A sample_shape x batch_shape x q-dim Tensor of objective values (assuming maximization).
- class botorch.acquisition.objective.ConstrainedMCObjective(objective, constraints, infeasible_cost=0.0, eta=0.001)[source]¶
  Bases: botorch.acquisition.objective.GenericMCObjective
Feasibility-weighted objective.
An Objective that allows maximizing some scalable objective on the model outputs subject to a number of constraints. Constraint feasibility is approximated by a sigmoid function.
mc_acq(X) = objective(X) * prod_i (1 - sigmoid(constraint_i(X)))

TODO: Document functional form exactly.
See botorch.utils.objective.apply_constraints for details on the constraint handling.
Example
>>> bound = 0.0
>>> objective = lambda Y: Y[..., 0]
>>> # apply non-negativity constraint on f(x)[1]
>>> constraint = lambda Y: bound - Y[..., 1]
>>> constrained_objective = ConstrainedMCObjective(objective, [constraint])
>>> samples = sampler(posterior)
>>> objective = constrained_objective(samples)
Feasibility-weighted objective.
- Parameters
  - objective (Callable[[Tensor], Tensor]) – A callable mapping a sample_shape x batch_shape x q x m-dim Tensor to a sample_shape x batch_shape x q-dim Tensor of objective values.
  - constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of dimension sample_shape x batch_shape x q x m to a Tensor of dimension sample_shape x batch_shape x q, where negative values imply feasibility.
  - infeasible_cost (float) – The cost of a design if all associated samples are infeasible.
  - eta (float) – The temperature parameter of the sigmoid function approximating the constraint.
- forward(samples)[source]¶
  Evaluate the feasibility-weighted objective on the samples.
- Parameters
  - samples (Tensor) – A sample_shape x batch_shape x q x m-dim Tensor of samples from a model posterior.
- Return type
  Tensor
- Returns
  A sample_shape x batch_shape x q-dim Tensor of objective values weighted by feasibility (assuming maximization).
Cost-Aware Utility¶
Cost functions for cost-aware acquisition functions, e.g. multi-fidelity KG. To be used in a context where there is an objective/cost tradeoff.
- class botorch.acquisition.cost_aware.CostAwareUtility[source]¶
  Bases: torch.nn.modules.module.Module, abc.ABC
Abstract base class for cost-aware utilities.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- abstract forward(X, deltas, **kwargs)[source]¶
  Evaluate the cost-aware utility on the candidates and improvements.
- Parameters
  - X (Tensor) – A batch_shape x q x d-dim Tensor with q d-dim design points for each t-batch.
  - deltas (Tensor) – A num_fantasies x batch_shape-dim Tensor of num_fantasy samples from the marginal improvement in utility over the current state at X for each t-batch.
- Return type
  Tensor
- Returns
  A num_fantasies x batch_shape-dim Tensor of cost-transformed utilities.
- class botorch.acquisition.cost_aware.GenericCostAwareUtility(cost)[source]¶
  Bases: botorch.acquisition.cost_aware.CostAwareUtility
Generic cost-aware utility wrapping a callable.
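Example

A minimal sketch of a hypothetical cost callable: it receives the candidate set X and the utility increases deltas, and returns cost-transformed utilities (here, dividing by a fidelity-dependent cost, assuming the last input column is a fidelity):

>>> def cost_fn(X, deltas):
...     cost = 1.0 + X[..., -1].sum(dim=-1)  # batch_shape-dim cost
...     return deltas / cost  # num_fantasies x batch_shape
>>> cost_utility = GenericCostAwareUtility(cost_fn)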
Generic cost-aware utility wrapping a callable.
- Parameters
  - cost (Callable[[Tensor, Tensor], Tensor]) – A callable mapping a batch_shape x q x d’-dim candidate set to a batch_shape-dim tensor of costs.
- forward(X, deltas, **kwargs)[source]¶
  Evaluate the cost function on the candidates and improvements.
- Parameters
  - X (Tensor) – A batch_shape x q x d’-dim Tensor with q d’-dim design points for each t-batch.
  - deltas (Tensor) – A num_fantasies x batch_shape-dim Tensor of num_fantasy samples from the marginal improvement in utility over the current state at X for each t-batch.
- Return type
  Tensor
- Returns
  A num_fantasies x batch_shape-dim Tensor of cost-weighted utilities.
- class botorch.acquisition.cost_aware.InverseCostWeightedUtility(cost_model, use_mean=True, cost_objective=None, min_cost=0.01)[source]¶
  Bases: botorch.acquisition.cost_aware.CostAwareUtility
A cost-aware utility using inverse cost weighting based on a model.
Computes the cost-aware utility by inverse-weighting samples U = (u_1, …, u_N) of the increase in utility. If use_mean=True, this uses the posterior mean mean_cost of the cost model, i.e. weighted utility = mean(U) / mean_cost. If use_mean=False, it uses samples C = (c_1, …, c_N) from the posterior of the cost model and performs the inverse weighting on the sample level: weighted utility = mean(u_1 / c_1, …, u_N / c_N).
The cost is additive across multiple elements of a q-batch.
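Example

A minimal sketch, assuming cost_model is a hypothetical fitted single-output model of the (non-negative) evaluation cost:

>>> cost_utility = InverseCostWeightedUtility(cost_model=cost_model)
>>> weighted = cost_utility(X, deltas)  # num_fantasies x batch_shape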
Cost-aware utility that weights the increase in utility by the inverse cost.
- Parameters
  - cost_model (Model) – A Model modeling the cost of evaluating a candidate set X, where X are the same features as in the model for the acquisition function this is to be used with. If no cost_objective is specified, the outputs are required to be non-negative.
  - use_mean (bool) – If True, use the posterior mean, otherwise use posterior samples from the cost model.
  - cost_objective (Optional[MCAcquisitionObjective]) – If specified, transform the posterior mean / the posterior samples from the cost model. This can be used e.g. to un-transform predictions/samples of a cost model fit on the log-transformed cost (often done to ensure non-negativity).
  - min_cost (float) – A value used to clamp the cost samples so that they are not too close to zero, which may cause numerical issues.
- Returns
  The inverse-cost-weighted utility.
- forward(X, deltas, sampler=None, **kwargs)[source]¶
  Evaluate the cost function on the candidates and improvements.
- Parameters
  - X (Tensor) – A batch_shape x q x d-dim Tensor with q d-dim design points for each t-batch.
  - deltas (Tensor) – A num_fantasies x batch_shape-dim Tensor of num_fantasy samples from the marginal improvement in utility over the current state at X for each t-batch.
  - sampler (Optional[MCSampler]) – A sampler used for sampling from the posterior of the cost model (required if use_mean=False, ignored if use_mean=True).
- Return type
  Tensor
- Returns
  A num_fantasies x batch_shape-dim Tensor of cost-weighted utilities.
Utilities¶
Fixed Feature Acquisition Function¶
A wrapper around AcquisitionFunctions to fix certain features for optimization. This is useful e.g. for performing contextual optimization.
- class botorch.acquisition.fixed_feature.FixedFeatureAcquisitionFunction(acq_function, d, columns, values)[source]¶
  Bases: botorch.acquisition.acquisition.AcquisitionFunction
A wrapper around AcquisitionFunctions to fix a subset of features.
Example
>>> model = SingleTaskGP(train_X, train_Y)  # d = 5
>>> qEI = qExpectedImprovement(model, best_f=0.0)
>>> columns = [2, 4]
>>> values = X[..., columns]
>>> qEI_FF = FixedFeatureAcquisitionFunction(qEI, 5, columns, values)
>>> qei = qEI_FF(test_X)  # d' = 3
Derived Acquisition Function by fixing a subset of input features.
- Parameters
  - acq_function (AcquisitionFunction) – The base acquisition function, operating on input tensors X_full of feature dimension d.
  - d (int) – The feature dimension expected by acq_function.
  - columns (List[int]) – d_f < d indices of columns in X_full that are to be fixed to the provided values.
  - values (Union[Tensor, List[float]]) – The values to which to fix the columns in columns. Either a full batch_shape x q x d_f tensor of values (if values are different for each of the q input points), or an array-like of values that is broadcastable to the input across t-batch and q-batch dimensions, e.g. a list of length d_f if values are the same across all t and q-batch dimensions.
- forward(X)[source]¶
  Evaluate base acquisition function under the fixed features.
- Parameters
  - X (Tensor) – Input tensor of feature dimension d’ < d such that d’ + d_f = d.
- Returns
  Base acquisition function evaluated on tensor X_full constructed by adding values in the appropriate places (see _construct_X_full).
General Utilities for Acquisition Functions¶
Utilities for acquisition functions.
- botorch.acquisition.utils.get_acquisition_function(acquisition_function_name, model, objective, X_observed, X_pending=None, mc_samples=500, qmc=True, seed=None, **kwargs)[source]¶
  Convenience function for initializing botorch acquisition functions.
- Parameters
  - acquisition_function_name (str) – Name of the acquisition function.
  - model (Model) – A fitted model.
  - objective (MCAcquisitionObjective) – A MCAcquisitionObjective.
  - X_observed (Tensor) – A m1 x d-dim Tensor of m1 design points that have already been observed.
  - X_pending (Optional[Tensor]) – A m2 x d-dim Tensor of m2 design points whose evaluation is pending.
  - mc_samples (int) – The number of samples to use for (q)MC evaluation of the acquisition function.
  - qmc (bool) – If True, use quasi-Monte-Carlo sampling (instead of iid).
  - seed (Optional[int]) – If provided, perform deterministic optimization (i.e. the function to optimize is fixed and not stochastic).
- Return type
  MCAcquisitionFunction
- Returns
  The requested acquisition function.
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> obj = LinearMCObjective(weights=torch.tensor([1.0, 2.0]))
>>> acqf = get_acquisition_function("qEI", model, obj, train_X)
- botorch.acquisition.utils.get_infeasible_cost(X, model, objective=<function squeeze_last_dim>)[source]¶
  Get infeasible cost for a model and objective.
  Computes an infeasible cost M such that -M < min_x f(x) almost always, so that feasible points are preferred.
- Parameters
  - X (Tensor) – A n x d Tensor of n design points to use in evaluating the minimum. These points should cover the design space well. The more points the better the estimate, at the expense of added computation.
  - model (Model) – A fitted botorch model.
  - objective (Callable[[Tensor], Tensor]) – The objective with which to evaluate the model output.
- Return type
  float
- Returns
  The infeasible cost M value.
Example
>>> model = SingleTaskGP(train_X, train_Y)
>>> objective = lambda Y: Y[..., -1] ** 2
>>> M = get_infeasible_cost(train_X, model, objective)
- botorch.acquisition.utils.is_nonnegative(acq_function)[source]¶
  Determine whether a given acquisition function is non-negative.
- Parameters
  - acq_function (AcquisitionFunction) – The AcquisitionFunction instance.
- Return type
  bool
- Returns
  True if acq_function is non-negative, False if not, or if the behavior is unknown (for custom acquisition functions).
Example
>>> qEI = qExpectedImprovement(model, best_f=0.1)
>>> is_nonnegative(qEI)  # returns True
- botorch.acquisition.utils.prune_inferior_points(model, X, objective=None, num_samples=2048, max_frac=1.0)[source]¶
  Prune points from an input tensor that are unlikely to be the best point.
Given a model, an objective, and an input tensor X, this function returns the subset of points in X that have some probability of being the best point under the objective. This function uses sampling to estimate the probabilities; the higher the number of points n in X, the higher the number of samples num_samples should be to obtain accurate estimates.
- Parameters
  - model (Model) – A fitted model. Batched models are currently not supported.
  - X (Tensor) – An input tensor of shape n x d. Batched inputs are currently not supported.
  - objective (Optional[MCAcquisitionObjective]) – The objective under which to evaluate the posterior.
  - num_samples (int) – The number of samples used to compute empirical probabilities of being the best point.
  - max_frac (float) – The maximum fraction of points to retain. Must satisfy 0 < max_frac <= 1. Ensures that the number of elements in the returned tensor does not exceed ceil(max_frac * n).
- Return type
  Tensor
- Returns
  A n’ x d-dim Tensor containing the subset of points in X, where n’ = min(N_nz, ceil(max_frac * n)), with N_nz the number of points in X that have non-zero (empirical, under num_samples samples) probability of being the best point.
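Example

A minimal usage sketch, assuming a fitted model and observed inputs train_X:

>>> model = SingleTaskGP(train_X, train_Y)
>>> X_pruned = prune_inferior_points(model, train_X, max_frac=0.5)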
- botorch.acquisition.utils.project_to_target_fidelity(X, target_fidelities=None)[source]¶
  Project X onto the target set of fidelities.
This function assumes that the set of feasible fidelities is a box, so projecting here just means setting each fidelity parameter to its target value.
- Parameters
  - X (Tensor) – A batch_shape x q x d-dim Tensor with q d-dim design points for each t-batch.
  - target_fidelities (Optional[Dict[int, float]]) – A dictionary mapping a subset of columns of X (the fidelity parameters) to their respective target fidelity value. If omitted, assumes that the last column of X is the fidelity parameter with a target value of 1.0.
- Return type
  Tensor
- Returns
  A batch_shape x q x d-dim Tensor X_proj with fidelity parameters projected to the provided fidelity values.
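Example

A minimal sketch; with target_fidelities omitted, the last column of X is treated as the fidelity parameter with a target value of 1.0:

>>> X = torch.rand(5, 2, 4)  # batch_shape=5, q=2, d=4
>>> X_proj = project_to_target_fidelity(X)  # X_proj[..., -1] is all 1.0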
- botorch.acquisition.utils.expand_trace_observations(X, fidelity_dims=None, num_trace_obs=0)[source]¶
  Expand X with trace observations.
Expand a tensor of inputs with “trace observations” that are obtained during the evaluation of the candidate set. This is used in multi-fidelity optimization. It can be thought of as augmenting the q-batch with additional points that are the expected trace observations.
Let f_i be the i-th fidelity parameter. Then this function assumes that for each element of the q-batch, besides the fidelity f_i, we will observe additional fidelities f_i1, …, f_iK, where K = num_trace_obs, during evaluation of the candidate set X. Specifically, this function assumes that f_ij = (K-j) / (num_trace_obs + 1) * f_i for all i. That is, the expansion is performed in parallel for all fidelities (it does not expand out all possible combinations).
- Parameters
  - X (Tensor) – A batch_shape x q x d-dim Tensor with q d-dim design points (incl. the fidelity parameters) for each t-batch.
  - fidelity_dims (Optional[List[int]]) – The indices of the fidelity parameters. If omitted, assumes that the last column of X contains the fidelity parameters.
  - num_trace_obs (int) – The number of trace observations to use.
- Return type
  Tensor
- Returns
  A batch_shape x (q + num_trace_obs x q) x d Tensor X_expanded that expands X with trace observations.
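Example

A minimal sketch, assuming the last column of X is the fidelity parameter:

>>> X = torch.rand(5, 2, 4)  # batch_shape=5, q=2, d=4
>>> X_expanded = expand_trace_observations(X, num_trace_obs=2)
>>> X_expanded.shape  # q expands from 2 to 2 + 2 * 2 = 6
torch.Size([5, 6, 4])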