\n", "__References__\n", "\n", "[1] [Takeno, S., et al., _Multi-fidelity Bayesian Optimization with Max-value Entropy Search._ arXiv:1901.08275v1, 2019](https://arxiv.org/abs/1901.08275)\n", "\n", "[2] [Wang, Z., Jegelka, S., _Max-value Entropy Search for Efficient Bayesian Optimization._ arXiv:1703.01968v3, 2018](https://arxiv.org/abs/1703.01968)\n" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### 2. Setting up a toy model\n", "We will fit a standard SingleTaskGP model on noisy observations of the synthetic 2D Branin function on the hypercube $[-5,10]\\times [0, 15]$." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import math\n", "import torch\n", "\n", "from botorch.test_functions import Branin\n", "from botorch.fit import fit_gpytorch_model\n", "from botorch.models import SingleTaskGP\n", "from botorch.utils.transforms import standardize, normalize\n", "from gpytorch.mlls import ExactMarginalLogLikelihood\n", "\n", "torch.manual_seed(7)\n", "\n", "bounds = torch.tensor(Branin._bounds).T\n", "train_X = bounds[0] + (bounds[1] - bounds[0]) * torch.rand(10, 2)\n", "train_Y = Branin(negate=True)(train_X).unsqueeze(-1)\n", "\n", "train_X = normalize(train_X, bounds=bounds)\n", "train_Y = standardize(train_Y + 0.05 * torch.randn_like(train_Y))\n", "\n", "model = SingleTaskGP(train_X, train_Y)\n", "mll = ExactMarginalLogLikelihood(model.likelihood, model)\n", "fit_gpytorch_model(mll);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3. Defining the MES acquisition function\n", "\n", "The `qMaxValueEntropy` acquisition function is a subclass of `MCAcquisitionFunction` and supports pending points `X_pending`. Required arguments for the constructor are `model` and `candidate_set` (the discretized candidate points in the design space that will be used to draw max value samples). 
Other optional parameters include the number of max-value samples $\\mathcal{F}^*$, the number of $\\mathcal{Y}$ samples, and the number of fantasies (used when $q>1$). Two sampling algorithms are supported for drawing the max-value samples: discretized Thompson sampling and the Gumbel sampling approach introduced in [2]; Gumbel sampling is the default. " ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from botorch.acquisition.max_value_entropy_search import qMaxValueEntropy\n", "\n", "candidate_set = torch.rand(1000, bounds.size(1), device=bounds.device, dtype=bounds.dtype)\n", "candidate_set = bounds[0] + (bounds[1] - bounds[0]) * candidate_set\n", "qMES = qMaxValueEntropy(model, candidate_set)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 4. Optimizing the MES acquisition function to get the next candidate points\n", "In order to obtain the next candidate point(s) to query, we need to optimize the acquisition function over the design space. For the $q=1$ case, we can simply call the `optimize_acqf` function in the library. For $q>1$, the joint acquisition function is intractable, so we need to use either sequential or cyclic optimization (multiple cycles of sequential optimization). 
" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(tensor([[1.5350, 0.0758]]), tensor(0.0121))" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from botorch.optim import optimize_acqf\n", "\n", "# for q = 1\n", "candidates, acq_value = optimize_acqf(\n", " acq_function=qMES, \n", " bounds=bounds,\n", " q=1,\n", " num_restarts=10,\n", " raw_samples=512,\n", ")\n", "candidates, acq_value" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(tensor([[-0.3238, 0.6565],\n", " [ 1.5349, 0.0748]]), tensor([0.0135, 0.0065]))" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# for q = 2, sequential optimization\n", "candidates_q2, acq_value_q2 = optimize_acqf(\n", " acq_function=qMES, \n", " bounds=bounds,\n", " q=2,\n", " num_restarts=10,\n", " raw_samples=512,\n", " sequential=True,\n", ")\n", "candidates_q2, acq_value_q2" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(tensor([[-0.3236, 0.6563],\n", " [ 1.5326, 0.0732]]), tensor([0.0101, 0.0064]))" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from botorch.optim import optimize_acqf_cyclic\n", "\n", "# for q = 2, cyclic optimization\n", "candidates_q2_cyclic, acq_value_q2_cyclic = optimize_acqf_cyclic(\n", " acq_function=qMES, \n", " bounds=bounds,\n", " q=2,\n", " num_restarts=10,\n", " raw_samples=512,\n", " cyclic_options={\"maxiter\": 2}\n", ")\n", "candidates_q2_cyclic, acq_value_q2_cyclic" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The use of the `qMultiFidelityMaxValueEntropy` acquisition function is very similar to `qMaxValueEntropy`, but requires additional optional arguments related to the fidelity and cost models. 
We will provide more details on the MF-MES acquisition function in a separate tutorial. " ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 2 }