Utilities for model fitting.

botorch.fit.fit_gpytorch_model(mll, optimizer=<function fit_gpytorch_scipy>, **kwargs)[source]

Fit hyperparameters of a GPyTorch model.

On optimizer failures, a new initial condition is sampled from the hyperparameter priors and optimization is retried. The maximum number of retries can be passed in as a max_retries kwarg (default is 5).

Optimizer functions are in botorch.optim.fit.

Parameters

  • mll (gpytorch.mlls.marginal_log_likelihood.MarginalLogLikelihood) – MarginalLogLikelihood to be maximized.

  • optimizer (Callable) – The optimizer function.

  • kwargs (Any) – Arguments passed along to the optimizer function, including max_retries and sequential (controls the fitting of ModelListGP and BatchedMultiOutputGPyTorchModel models) or approx_mll (whether to use gpytorch’s approximate MLL computation); see the sketch after this list.
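
As a sketch of the kwargs pass-through (assuming an mll built as in the example further below; the values here are illustrative, not defaults):

>>> # Retry up to 10 times on optimizer failures and use gpytorch's
>>> # approximate MLL computation (both forwarded via **kwargs).
>>> fit_gpytorch_model(mll, max_retries=10, approx_mll=True)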


Returns

MarginalLogLikelihood with optimized parameters.

Return type

MarginalLogLikelihood

Example

>>> gp = SingleTaskGP(train_X, train_Y)
>>> mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
>>> fit_gpytorch_model(mll)
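
A fuller sketch with the imports the doctest assumes; train_X and train_Y are placeholder tensors with illustrative shapes:

>>> import torch
>>> from botorch.models import SingleTaskGP
>>> from botorch.fit import fit_gpytorch_model
>>> from gpytorch.mlls import ExactMarginalLogLikelihood
>>> # Toy training data: 10 points in 2 dimensions, one outcome column.
>>> train_X = torch.rand(10, 2, dtype=torch.double)
>>> train_Y = torch.rand(10, 1, dtype=torch.double)
>>> gp = SingleTaskGP(train_X, train_Y)
>>> mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
>>> mll = fit_gpytorch_model(mll)  # returns the mll with optimized parameters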

botorch.fit.fit_fully_bayesian_model_nuts(model, max_tree_depth=6, warmup_steps=512, num_samples=256, thinning=16, disable_progbar=False)[source]

Fit a fully Bayesian model using the No-U-Turn Sampler (NUTS).

Parameters

  • model (botorch.models.fully_bayesian.SaasFullyBayesianSingleTaskGP) – SaasFullyBayesianSingleTaskGP to be fitted.

  • max_tree_depth (int) – Maximum tree depth for NUTS.

  • warmup_steps (int) – The number of burn-in steps for NUTS.

  • num_samples (int) – The number of MCMC samples. Note that with thinning, num_samples / thinning samples are retained (see the worked example after this list).

  • thinning (int) – The amount of thinning. Every nth sample is retained.

  • disable_progbar (bool) – A boolean indicating whether to disable the progress bar and diagnostics during MCMC.
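
For example, with the defaults num_samples=256 and thinning=16, every 16th sample is kept, so 256 / 16 = 16 posterior samples are retained for the model.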

Return type

None

Example

>>> gp = SaasFullyBayesianSingleTaskGP(train_X, train_Y)
>>> fit_fully_bayesian_model_nuts(gp)
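
A fuller sketch with imports; train_X and train_Y are placeholder tensors, and the reduced warmup_steps/num_samples/thinning values are illustrative choices to keep the run short, not recommended settings:

>>> import torch
>>> from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP
>>> from botorch.fit import fit_fully_bayesian_model_nuts
>>> train_X = torch.rand(10, 4, dtype=torch.double)
>>> train_Y = torch.rand(10, 1, dtype=torch.double)
>>> gp = SaasFullyBayesianSingleTaskGP(train_X, train_Y)
>>> # Runs NUTS and loads the (64 / 8 = 8) retained samples into the
>>> # model in place; nothing is returned.
>>> fit_fully_bayesian_model_nuts(
...     gp, warmup_steps=128, num_samples=64, thinning=8, disable_progbar=True
... )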