botorch.sampling

Monte-Carlo Sampler API

The base class for sampler modules to be used with MC-evaluated acquisition functions.

Deterministic Sampler

A dummy sampler for use with deterministic models.

class botorch.sampling.deterministic.DeterministicSampler(sample_shape, seed=None)[source]

Bases: StochasticSampler

A sampler that simply calls posterior.rsample, intended to be used with DeterministicModel & DeterministicPosterior.

[DEPRECATED] - Use IndexSampler in conjunction with EnsemblePosterior instead of DeterministicSampler with DeterministicPosterior.

This effectively signals that StochasticSampler is safe to use with deterministic models, since their output is deterministic by definition.

Abstract base class for samplers.

Parameters:
  • sample_shape (torch.Size) – The sample_shape of the samples to generate. The full shape of the samples is given by posterior._extended_shape(sample_shape).

  • seed (Optional[int]) – An optional seed to use for sampling.

Index Sampler

Sampler to be used with EnsemblePosteriors to enable deterministic optimization of acquisition functions with ensemble models.

class botorch.sampling.index_sampler.IndexSampler(sample_shape, seed=None)[source]

Bases: MCSampler

A sampler that calls posterior.rsample_from_base_samples to generate the samples via index base samples.
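
Example

A minimal usage sketch; ensemble_model and test_X are hypothetical names, with ensemble_model assumed to produce an EnsemblePosterior:

>>> sampler = IndexSampler(torch.Size([256]), seed=1234)
>>> posterior = ensemble_model.posterior(test_X)
>>> samples = sampler(posterior)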

Abstract base class for samplers.

Parameters:
  • sample_shape (torch.Size) – The sample_shape of the samples to generate. The full shape of the samples is given by posterior._extended_shape(sample_shape).

  • seed (Optional[int]) – An optional seed to use for sampling.

forward(posterior)[source]

Draws MC samples from the posterior.

Parameters:

posterior (EnsemblePosterior) – The ensemble posterior to sample from.

Returns:

The samples drawn from the posterior.

Return type:

Tensor

Get Sampler Helper

botorch.sampling.get_sampler.get_sampler(posterior, sample_shape, **kwargs)[source]

Get the sampler for the given posterior.

The sampler can be used as sampler(posterior) to produce samples suitable for use in acquisition function optimization via sample average approximation (SAA).
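
Example

A minimal usage sketch; model and test_X are hypothetical names, and the posterior is assumed to be of a type that get_sampler can dispatch on (e.g., a standard GP posterior):

>>> posterior = model.posterior(test_X)
>>> sampler = get_sampler(posterior, sample_shape=torch.Size([512]))
>>> samples = sampler(posterior)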

Parameters:
  • posterior (TorchPosterior) – A Posterior to get the sampler for.

  • sample_shape (Size) – The sample shape of the samples produced by the given sampler. The full shape of the resulting samples is given by posterior._extended_shape(sample_shape).

  • kwargs (Any) – Optional kwargs, passed down to the samplers during construction.

Returns:

The MCSampler object for the given posterior.

Return type:

MCSampler

List Sampler

A SamplerList for sampling from a PosteriorList.

Gaussian Monte-Carlo Samplers

Sampler modules producing N(0,1) samples, to be used with MC-evaluated acquisition functions and Gaussian posteriors.

class botorch.sampling.normal.NormalMCSampler(sample_shape, seed=None)[source]

Bases: MCSampler, ABC

Base class for samplers producing (possibly QMC) N(0,1) samples.

Subclasses must implement the _construct_base_samples method.

Abstract base class for samplers.

Parameters:
  • sample_shape (torch.Size) – The sample_shape of the samples to generate. The full shape of the samples is given by posterior._extended_shape(sample_shape).

  • seed (Optional[int]) – An optional seed to use for sampling.

forward(posterior)[source]

Draws MC samples from the posterior.

Parameters:

posterior (Posterior) – The posterior to sample from.

Returns:

The samples drawn from the posterior.

Return type:

Tensor

class botorch.sampling.normal.IIDNormalSampler(sample_shape, seed=None)[source]

Bases: NormalMCSampler

Sampler for MC base samples using iid N(0,1) samples.

Example

>>> sampler = IIDNormalSampler(torch.Size([1000]), seed=1234)
>>> posterior = model.posterior(test_X)
>>> samples = sampler(posterior)

Abstract base class for samplers.

Parameters:
  • sample_shape (torch.Size) – The sample_shape of the samples to generate. The full shape of the samples is given by posterior._extended_shape(sample_shape).

  • seed (Optional[int]) – An optional seed to use for sampling.

class botorch.sampling.normal.SobolQMCNormalSampler(sample_shape, seed=None)[source]

Bases: NormalMCSampler

Sampler for quasi-MC N(0,1) base samples using Sobol sequences.

Example

>>> sampler = SobolQMCNormalSampler(torch.Size([1024]), seed=1234)
>>> posterior = model.posterior(test_X)
>>> samples = sampler(posterior)

Abstract base class for samplers.

Parameters:
  • sample_shape (torch.Size) – The sample_shape of the samples to generate. The full shape of the samples is given by posterior._extended_shape(sample_shape).

  • seed (Optional[int]) – An optional seed to use for sampling.

Pairwise Monte-Carlo Samplers

class botorch.sampling.pairwise_samplers.PairwiseMCSampler(max_num_comparisons=None, seed=None)[source]

Bases: MCSampler

Abstract class for Pairwise MC Sampler.

This sampler will sample pairwise comparisons. It is to be used together with PairwiseGP and BoTorch acquisition functions (e.g., qKnowledgeGradient).

Parameters:
  • max_num_comparisons (int) – Max number of comparisons drawn within samples. If None, use all possible pairwise comparisons.

  • seed (int) – The seed for np.random.seed. If omitted, use a random seed. May be overwritten by sibling classes or subclasses.

forward(posterior)[source]

Draws MC samples from the posterior and makes comparisons.

Parameters:

posterior (Posterior) – The Posterior to sample from. The returned samples are expected to have output dimension of 1.

Returns:

Posterior sample pairwise comparisons.

Return type:

Tensor

class botorch.sampling.pairwise_samplers.PairwiseIIDNormalSampler(sample_shape, seed=None, max_num_comparisons=None, **kwargs)[source]

Bases: PairwiseMCSampler, IIDNormalSampler

Parameters:
  • sample_shape (torch.Size) – The sample_shape of the samples to generate.

  • seed (Optional[int]) – The seed for the RNG. If omitted, use a random seed.

  • max_num_comparisons (int) – Max number of comparisons drawn within samples. If None, use all possible pairwise comparisons.

  • kwargs (Any) – Catch-all for deprecated arguments.
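
Example

A minimal usage sketch, assuming pairwise_gp is a fitted PairwiseGP and test_X a tensor of test points (both hypothetical names):

>>> sampler = PairwiseIIDNormalSampler(torch.Size([64]), max_num_comparisons=2)
>>> posterior = pairwise_gp.posterior(test_X)
>>> comparisons = sampler(posterior)  # pairwise comparisons built from posterior samples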

class botorch.sampling.pairwise_samplers.PairwiseSobolQMCNormalSampler(sample_shape, seed=None, max_num_comparisons=None, **kwargs)[source]

Bases: PairwiseMCSampler, SobolQMCNormalSampler

Parameters:
  • sample_shape (torch.Size) – The sample_shape of the samples to generate.

  • seed (Optional[int]) – The seed for the RNG. If omitted, use a random seed.

  • max_num_comparisons (int) – Max number of comparisons drawn within samples. If None, use all possible pairwise comparisons.

  • kwargs (Any) – Catch-all for deprecated arguments.

QMC Base Functionality

Quasi Monte-Carlo sampling from Normal distributions.

References:

[Pages2018numprob]

G. Pages. Numerical Probability: An Introduction with Applications to Finance. Universitext. Springer International Publishing, 2018.

class botorch.sampling.qmc.NormalQMCEngine(d, seed=None, inv_transform=False)[source]

Bases: object

Engine for qMC sampling from a Multivariate Normal N(0, I_d).

By default, this implementation uses Box-Muller transformed Sobol samples following pg. 123 in [Pages2018numprob]. To use the inverse transform instead, set inv_transform=True.

Example

>>> engine = NormalQMCEngine(3)
>>> samples = engine.draw(16)

Engine for drawing qMC samples from a multivariate normal N(0, I_d).

Parameters:
  • d (int) – The dimension of the samples.

  • seed (Optional[int]) – The seed with which to seed the random number generator of the underlying SobolEngine.

  • inv_transform (bool) – If True, use inverse transform instead of Box-Muller.

draw(n=1, out=None, dtype=torch.float32)[source]

Draw n qMC samples from the standard Normal.

Parameters:
  • n (int) – The number of samples to draw. As a best practice, use powers of 2.

  • out (Tensor | None) – An optional output tensor. If provided, draws are put into this tensor, and the function returns None.

  • dtype (dtype) – The desired torch data type (ignored if out is provided).

Returns:

An n x d tensor of samples if out=None, and None otherwise.

Return type:

Tensor | None

class botorch.sampling.qmc.MultivariateNormalQMCEngine(mean, cov, seed=None, inv_transform=False)[source]

Bases: object

Engine for qMC sampling from a multivariate Normal N(mu, Sigma).

By default, this implementation uses Box-Muller transformed Sobol samples following pg. 123 in [Pages2018numprob]. To use the inverse transform instead, set inv_transform=True.

Example

>>> mean = torch.tensor([1.0, 2.0])
>>> cov = torch.tensor([[1.0, 0.25], [0.25, 2.0]])
>>> engine = MultivariateNormalQMCEngine(mean, cov)
>>> samples = engine.draw(16)

Engine for qMC sampling from a multivariate Normal N(mu, Sigma).

Parameters:
  • mean (Tensor) – The mean vector.

  • cov (Tensor) – The covariance matrix.

  • seed (Optional[int]) – The seed with which to seed the random number generator of the underlying SobolEngine.

  • inv_transform (bool) – If True, use inverse transform instead of Box-Muller.

draw(n=1, out=None)[source]

Draw n qMC samples from the multivariate Normal.

Parameters:
  • n (int) – The number of samples to draw. As a best practice, use powers of 2.

  • out (Tensor | None) – An optional output tensor. If provided, draws are put into this tensor, and the function returns None.

Returns:

An n x d tensor of samples if out=None, and None otherwise.

Return type:

Tensor | None

Stochastic Samplers

Samplers to enable use cases that are not base sample driven, such as stochastic optimization of acquisition functions.

class botorch.sampling.stochastic_samplers.ForkedRNGSampler(sample_shape, seed=None)[source]

Bases: MCSampler

A sampler using torch.fork_rng to enable replicable sampling from a posterior that does not support base samples.

NOTE: This approach is not a one-to-one replacement for base sample driven sampling. The main missing piece in this approach is that its outputs are not replicable across the batch dimensions. As a result, when an acquisition function is batch evaluated with repeated candidates, each candidate will produce a different acquisition value, which is not compatible with Sample Average Approximation.
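
Example

A minimal usage sketch; model and test_X are hypothetical names for a model whose posterior supports rsample but not base samples:

>>> sampler = ForkedRNGSampler(torch.Size([64]), seed=0)
>>> posterior = model.posterior(test_X)
>>> samples = sampler(posterior)  # replicable across calls for a fixed seed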

Abstract base class for samplers.

Parameters:
  • sample_shape (torch.Size) – The sample_shape of the samples to generate. The full shape of the samples is given by posterior._extended_shape(sample_shape).

  • seed (Optional[int]) – An optional seed to use for sampling.

forward(posterior)[source]

Draws MC samples from the posterior in a fork_rng context.

Parameters:

posterior (Posterior) – The posterior to sample from.

Returns:

The samples drawn from the posterior.

Return type:

Tensor

class botorch.sampling.stochastic_samplers.StochasticSampler(sample_shape, seed=None)[source]

Bases: MCSampler

A sampler that simply calls posterior.rsample to generate the samples. This should only be used for stochastic optimization of the acquisition functions, e.g., via gen_candidates_torch. This should not be used with optimize_acqf, which uses deterministic optimizers under the hood.

NOTE: This ignores the seed option.
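
Example

A minimal usage sketch (model and test_X are hypothetical names), intended for stochastic optimization, e.g., via gen_candidates_torch:

>>> sampler = StochasticSampler(sample_shape=torch.Size([128]))
>>> posterior = model.posterior(test_X)
>>> samples = sampler(posterior)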

Abstract base class for samplers.

Parameters:
  • sample_shape (torch.Size) – The sample_shape of the samples to generate. The full shape of the samples is given by posterior._extended_shape(sample_shape).

  • seed (Optional[int]) – An optional seed to use for sampling.

forward(posterior)[source]

Draws MC samples from the posterior.

Parameters:

posterior (Posterior) – The posterior to sample from.

Returns:

The samples drawn from the posterior.

Return type:

Tensor

Pathwise Sampling

Feature Maps

class botorch.sampling.pathwise.features.maps.FeatureMap(*args, **kwargs)[source]

Bases: TransformedModuleMixin, Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

num_outputs: int
batch_shape: Size
input_transform: TInputTransform | None
output_transform: TOutputTransform | None

class botorch.sampling.pathwise.features.maps.KernelEvaluationMap(kernel, points, input_transform=None, output_transform=None)[source]

Bases: FeatureMap

A feature map defined by centering a kernel at a set of points.

Initializes a KernelEvaluationMap instance:

feature_map(x) = output_transform(kernel(input_transform(x), points)).

Parameters:
  • kernel (Kernel) – The kernel \(k\) used to define the feature map.

  • points (Tensor) – A tensor passed as the kernel’s second argument.

  • input_transform (Optional[TInputTransform]) – An optional input transform for the module.

  • output_transform (Optional[TOutputTransform]) – An optional output transform for the module.
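
Example

A minimal sketch, assuming a gpytorch RBFKernel and a tensor points of shape n x d (RBFKernel, points, and X are illustrative assumptions):

>>> feature_map = KernelEvaluationMap(kernel=RBFKernel(), points=points)
>>> phi = feature_map(X)  # evaluates kernel(X, points), yielding an ... x n feature map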

forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

x (Tensor)

Return type:

Tensor | LinearOperator

property num_outputs: int
property batch_shape: Size

class botorch.sampling.pathwise.features.maps.KernelFeatureMap(kernel, weight, bias=None, input_transform=None, output_transform=None)[source]

Bases: FeatureMap

Representation of a kernel \(k: \mathcal{X}^2 \to \mathbb{R}\) as an n-dimensional feature map \(\phi: \mathcal{X} \to \mathbb{R}^n\) satisfying: \(k(x, x') ≈ \phi(x)^\top \phi(x')\).

Initializes a KernelFeatureMap instance:

feature_map(x) = output_transform(input_transform(x)^{T} weight + bias).

Parameters:
  • kernel (Kernel) – The kernel \(k\) used to define the feature map.

  • weight (Tensor) – A tensor of weights used to linearly combine the module’s inputs.

  • bias (Optional[Tensor]) – A tensor of biases to be added to the linearly combined inputs.

  • input_transform (Optional[TInputTransform]) – An optional input transform for the module.

  • output_transform (Optional[TOutputTransform]) – An optional output transform for the module.

forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

x (Tensor)

Return type:

Tensor

property num_outputs: int
property batch_shape: Size

Feature Map Generators

[rahimi2007random]

A. Rahimi and B. Recht. Random features for large-scale kernel machines. Advances in Neural Information Processing Systems 20 (2007).

[sutherland2015error]

D. J. Sutherland and J. Schneider. On the error of random Fourier features. arXiv preprint arXiv:1506.02785 (2015).

botorch.sampling.pathwise.features.generators.gen_kernel_features(kernel, num_inputs, num_outputs, **kwargs)[source]

Generates a feature map \(\phi: \mathcal{X} \to \mathbb{R}^{n}\) such that \(k(x, x') ≈ \phi(x)^{T} \phi(x')\). For stationary kernels \(k\), defaults to the method of random Fourier features. For more details, see [rahimi2007random] and [sutherland2015error].
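
Example

A minimal sketch, assuming a stationary gpytorch MaternKernel over d-dimensional inputs (d and train_X are hypothetical names):

>>> kernel = MaternKernel(nu=2.5, ard_num_dims=d)
>>> feature_map = gen_kernel_features(kernel, num_inputs=d, num_outputs=1024)
>>> phi = feature_map(train_X)  # phi(x)^T phi(x') approximates k(x, x')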

Parameters:
  • kernel (Kernel) – The kernel \(k\) to be represented via a finite-dim basis.

  • num_inputs (int) – The number of input features.

  • num_outputs (int) – The number of kernel features.

  • kwargs (Any)

Return type:

KernelFeatureMap

Sample Paths

class botorch.sampling.pathwise.paths.SamplePath(*args, **kwargs)[source]

Bases: ABC, TransformedModuleMixin, Module

Abstract base class for BoTorch sample paths.

Initialize internal Module state, shared by both nn.Module and ScriptModule.

class botorch.sampling.pathwise.paths.PathDict(paths=None, join=None, input_transform=None, output_transform=None)[source]

Bases: SamplePath

A dictionary of SamplePaths.

Initializes a PathDict instance.

Parameters:
  • paths (Optional[Mapping[str, SamplePath]]) – An optional mapping of strings to sample paths.

  • join (Optional[Callable[[List[Tensor]], Tensor]]) – An optional callable used to combine each path’s outputs.

  • input_transform (InputTransform | Callable[[Tensor], Tensor] | None) – An optional input transform for the module.

  • output_transform (OutcomeTransform | Callable[[Tensor], Tensor] | None) – An optional output transform for the module.

forward(x, **kwargs)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:
  • x (Tensor)

  • kwargs (Any)

Return type:

Tensor | Dict[str, Tensor]

items()[source]
Return type:

Iterable[Tuple[str, SamplePath]]

keys()[source]
Return type:

Iterable[str]

values()[source]
Return type:

Iterable[SamplePath]

class botorch.sampling.pathwise.paths.PathList(paths=None, join=None, input_transform=None, output_transform=None)[source]

Bases: SamplePath

A list of SamplePaths.

Initializes a PathList instance.

Parameters:
  • paths (Optional[Iterable[SamplePath]]) – An optional iterable of sample paths.

  • join (Optional[Callable[[List[Tensor]], Tensor]]) – An optional callable used to combine each path’s outputs.

  • input_transform (InputTransform | Callable[[Tensor], Tensor] | None) – An optional input transform for the module.

  • output_transform (OutcomeTransform | Callable[[Tensor], Tensor] | None) – An optional output transform for the module.

forward(x, **kwargs)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:
  • x (Tensor)

  • kwargs (Any)

Return type:

Tensor | List[Tensor]

class botorch.sampling.pathwise.paths.GeneralizedLinearPath(feature_map, weight, bias_module=None, input_transform=None, output_transform=None)[source]

Bases: SamplePath

A sample path in the form of a generalized linear model.

Initializes a GeneralizedLinearPath instance.

path(x) = output_transform(bias_module(z) + feature_map(z)^T weight),
where z = input_transform(x).

Parameters:
  • feature_map (FeatureMap) – A map used to featurize the module’s inputs.

  • weight (Union[Parameter, Tensor]) – A tensor of weights used to combine input features.

  • bias_module (Optional[Module]) – An optional module used to define additive offsets.

  • input_transform (InputTransform | Callable[[Tensor], Tensor] | None) – An optional input transform for the module.

  • output_transform (OutcomeTransform | Callable[[Tensor], Tensor] | None) – An optional output transform for the module.
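
Example

A minimal sketch; feature_map, weight, and test_X are hypothetical names, with weight sized to match the feature map's output dimension:

>>> path = GeneralizedLinearPath(feature_map=feature_map, weight=weight)
>>> values = path(test_X)  # combines feature_map(test_X) with weight per the formula above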

forward(x, **kwargs)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

x (Tensor)

Return type:

Tensor

Pathwise Prior Samplers

botorch.sampling.pathwise.prior_samplers.draw_kernel_feature_paths(model, sample_shape, **kwargs)[source]

Draws functions from a Bayesian-linear-model-based approximation to a GP prior.

When evaluated, sample paths produced by this method return Tensors with dimensions sample_dims x batch_dims x [joint_dim], where joint_dim denotes the penultimate dimension of the input tensor. For multioutput models, outputs are returned as the final batch dimension.
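
Example

A minimal usage sketch; model is assumed to be a fitted BoTorch GP (e.g., a SingleTaskGP) and test_X an n x d tensor (both hypothetical names):

>>> prior_paths = draw_kernel_feature_paths(model, sample_shape=torch.Size([16]))
>>> samples = prior_paths(test_X)  # sample_dims x batch_dims x n, per the shape convention above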

Parameters:
  • model (GP) – The prior over functions.

  • sample_shape (Size) – The shape of the sample paths to be drawn.

  • kwargs (Any)

Return type:

GeneralizedLinearPath

Pathwise Posterior Samplers

[wilson2020sampling]

J. Wilson, V. Borovitskiy, A. Terenin, P. Mostowsky, and M. Deisenroth. Efficiently sampling functions from Gaussian process posteriors. International Conference on Machine Learning (2020).

[wilson2021pathwise]

J. Wilson, V. Borovitskiy, A. Terenin, P. Mostowsky, and M. Deisenroth. Pathwise Conditioning of Gaussian Processes. Journal of Machine Learning Research (2021).

class botorch.sampling.pathwise.posterior_samplers.MatheronPath(prior_paths, update_paths, input_transform=None, output_transform=None)[source]

Bases: PathDict

Represents function draws from a GP posterior via Matheron’s rule:

          "Prior path"
               v
(f | y)(·) = f(·) + Cov(f(·), y) Cov(y, y)^{-1} (y - f(X) - ε),
                    \_______________________________________/
                                        v
                                  "Update path"

where = denotes equality in distribution, \(f \sim GP(0, k)\), \(y \sim N(f(X), \Sigma)\), and \(\epsilon \sim N(0, \Sigma)\). For more information, see [wilson2020sampling] and [wilson2021pathwise].

Initializes a MatheronPath instance.

Parameters:
  • prior_paths (SamplePath) – Sample paths used to represent the prior.

  • update_paths (SamplePath) – Sample paths used to represent the data.

  • input_transform (InputTransform | Callable[[Tensor], Tensor] | None) – An optional input transform for the module.

  • output_transform (OutcomeTransform | Callable[[Tensor], Tensor] | None) – An optional output transform for the module.

botorch.sampling.pathwise.posterior_samplers.draw_matheron_paths(model, sample_shape, prior_sampler=<function draw_kernel_feature_paths>, update_strategy=<function gaussian_update>, **kwargs)[source]

Generates function draws from (an approximate) Gaussian process posterior.

When evaluated, sample paths produced by this method return Tensors with dimensions sample_dims x batch_dims x [joint_dim], where joint_dim denotes the penultimate dimension of the input tensor. For multioutput models, outputs are returned as the final batch dimension.
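
Example

A minimal usage sketch; model is assumed to be a fitted BoTorch GP and test_X an n x d tensor (both hypothetical names):

>>> paths = draw_matheron_paths(model, sample_shape=torch.Size([16]))
>>> samples = paths(test_X)  # approximate posterior function draws evaluated at test_X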

Parameters:
  • model (GP) – Gaussian process whose posterior is to be sampled.

  • sample_shape (Size) – Sizes of sample dimensions.

  • prior_sampler (Callable[[GP, Size], SamplePath]) – A callable that takes a model and a sample shape and returns a set of sample paths representing the prior.

  • update_strategy (Callable[[GP, Tensor], SamplePath]) – A callable that takes a model and a tensor of prior process values and returns a set of sample paths representing the data.

  • kwargs (Any)

Return type:

MatheronPath

Pathwise Update Strategies

botorch.sampling.pathwise.update_strategies.gaussian_update(model, sample_values, likelihood=<class 'botorch.utils.types.DEFAULT'>, **kwargs)[source]

Computes a Gaussian pathwise update in exact arithmetic:

(f | y)(·) = f(·) + Cov(f(·), y) Cov(y, y)^{-1} (y - f(X) - ε),
                    \_______________________________________/
                                        V
                            "Gaussian pathwise update"

where = denotes equality in distribution, \(f \sim GP(0, k)\), \(y \sim N(f(X), \Sigma)\), and \(\epsilon \sim N(0, \Sigma)\). For more information, see [wilson2020sampling] and [wilson2021pathwise].
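
Example

A rough sketch of how an update strategy combines with prior paths, loosely mirroring draw_matheron_paths; model is a hypothetical fitted GP with a single set of training inputs:

>>> prior_paths = draw_kernel_feature_paths(model, sample_shape=torch.Size([16]))
>>> (train_X,) = get_train_inputs(model, transformed=True)
>>> update_paths = gaussian_update(model, sample_values=prior_paths(train_X))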

Parameters:
  • model (GP) – A Gaussian process prior together with a likelihood.

  • sample_values (Tensor) – Assumed values for \(f(X)\).

  • likelihood (Likelihood | None) – An optional likelihood used to help define the desired update. Defaults to model.likelihood if it exists, else None.

  • kwargs (Any)

Return type:

GeneralizedLinearPath

Utilities

class botorch.sampling.pathwise.utils.TransformedModuleMixin[source]

Bases: object

Mixin that wraps a module’s __call__ method with optional transforms.

input_transform: InputTransform | Callable[[Tensor], Tensor] | None
output_transform: OutcomeTransform | Callable[[Tensor], Tensor] | None

class botorch.sampling.pathwise.utils.TensorTransform(*args, **kwargs)[source]

Bases: ABC, Module

Abstract base class for transforms that map tensor to tensor.

Initialize internal Module state, shared by both nn.Module and ScriptModule.

abstract forward(values, **kwargs)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:
  • values (Tensor)

  • kwargs (Any)

Return type:

Tensor

class botorch.sampling.pathwise.utils.ChainedTransform(*transforms)[source]

Bases: TensorTransform

A composition of TensorTransforms.

Initializes a ChainedTransform instance.

Parameters:

transforms (TensorTransform) – A set of transforms to be applied from right to left.
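
Example

A minimal sketch, assuming kernel is a gpytorch ScaleKernel and values a feature tensor (both hypothetical names); transforms apply right to left:

>>> transform = ChainedTransform(OutputscaleTransform(kernel), SineCosineTransform())
>>> out = transform(values)  # applies SineCosineTransform first, then OutputscaleTransform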

forward(values)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

values (Tensor)

Return type:

Tensor

class botorch.sampling.pathwise.utils.SineCosineTransform(scale=None)[source]

Bases: TensorTransform

A transform that returns concatenated sine and cosine features.

Initializes a SineCosineTransform instance.

Parameters:

scale (Optional[Tensor]) – An optional tensor used to rescale the module’s outputs.

forward(values)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

values (Tensor)

Return type:

Tensor

class botorch.sampling.pathwise.utils.InverseLengthscaleTransform(kernel)[source]

Bases: TensorTransform

A transform that divides its inputs by a kernel's lengthscales.

Initializes an InverseLengthscaleTransform instance.

Parameters:

kernel (Kernel) – The kernel whose lengthscales are to be used.

forward(values)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

values (Tensor)

Return type:

Tensor

class botorch.sampling.pathwise.utils.OutputscaleTransform(kernel)[source]

Bases: TensorTransform

A transform that multiplies its inputs by the square root of a kernel’s outputscale.

Initializes an OutputscaleTransform instance.

Parameters:

kernel (ScaleKernel) – A ScaleKernel whose outputscale is to be used.

forward(values)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

values (Tensor)

Return type:

Tensor

class botorch.sampling.pathwise.utils.FeatureSelector(indices, dim=-1)[source]

Bases: TensorTransform

A transform that returns a subset of its input's features along a given tensor dimension.

Initializes a FeatureSelector instance.

Parameters:
  • indices (Iterable[int]) – A LongTensor of feature indices.

  • dim (Union[int, LongTensor]) – The dimension along which to index features.

forward(values)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

values (Tensor)

Return type:

Tensor

class botorch.sampling.pathwise.utils.OutcomeUntransformer(transform, num_outputs)[source]

Bases: TensorTransform

Module acting as a bridge for OutcomeTransform.untransform.

Initializes an OutcomeUntransformer instance.

Parameters:
  • transform (OutcomeTransform) – The wrapped OutcomeTransform instance.

  • num_outputs (Union[int, LongTensor]) – The number of outcome features that the OutcomeTransform transforms.

forward(values)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

values (Tensor)

Return type:

Tensor

botorch.sampling.pathwise.utils.get_input_transform(model)[source]

Returns a model’s input_transform or None.

Parameters:

model (GPyTorchModel)

Return type:

InputTransform | None

botorch.sampling.pathwise.utils.get_output_transform(model)[source]

Returns a wrapped version of a model’s outcome_transform or None.

Parameters:

model (GPyTorchModel)

Return type:

OutcomeUntransformer | None

botorch.sampling.pathwise.utils.get_train_inputs(model: Model, transformed: bool = False) → Tuple[Tensor, ...][source]
botorch.sampling.pathwise.utils.get_train_inputs(model: ModelList, transformed: bool = False) → List[...]
botorch.sampling.pathwise.utils.get_train_targets(model: Model, transformed: bool = False) → Tensor[source]
botorch.sampling.pathwise.utils.get_train_targets(model: ModelList, transformed: bool = False) → List[...]