Optimisation

Hypertunity ships with three types of hyperparameter space exploration algorithms: Bayesian optimisation, random search and grid search. The first is sequential in nature and requires evaluations to update its internal model of the objective function, so that more informed sample suggestions can be generated; the latter two can generate all samples in parallel and do not require updating. In this section we give a brief overview of each.
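For example, a grid search can enumerate its samples in one go, without any feedback from the objective function. The following is a minimal sketch, assuming that GridSearch is exposed at the top level of the package alongside BayesianOptimisation and that a Domain is built from a dictionary of parameter specifications; consult the API reference for the exact signatures:

import hypertunity as ht

domain = ht.Domain({"optimiser": {"adam", "sgd"},        # categorical dimensions only,
                    "batch_size": {64, 128, 256}})       # so the grid is finite
grid_search = ht.GridSearch(domain=domain)
samples = grid_search.run_step(batch_size=6)             # all 6 grid points at once, no model update needed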

Bayesian optimisation

BayesianOptimisation in Hypertunity is a wrapper around GPyOpt.methods.BayesianOptimization, which uses Gaussian Process regression to build a surrogate model of the objective function. It is initialised from a Domain object:

bo = BayesianOptimisation(domain)
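
Putting it together, the optimisation proceeds as a loop of sampling, evaluating and updating the surrogate model. The snippet below is a sketch based on the usage shown in the package examples; the objective function foo is a placeholder and the exact keyword arguments of run_step() may differ:

import hypertunity as ht

domain = ht.Domain({"x": [-10., 10.],                         # continuous interval
                    "y": {"opt1", "opt2"}})                   # categorical options
bo = ht.BayesianOptimisation(domain)
for _ in range(10):
    samples = bo.run_step(batch_size=2, minimise=True)        # propose new samples
    evaluations = [foo(**s.as_dict()) for s in samples]       # evaluate the objective (foo is user-defined)
    bo.update(samples, evaluations)                           # feed results back into the surrogate model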

The BayesianOptimisation optimiser is highly customisable during sampling, which lets the user refine the model dynamically when calling run_step(). However, this approach introduces the computational burden of recomputing the surrogate model at each query. The following example shows how to set the GP model using one of the readily available models from GPy.models, e.g. GPHeteroscedasticRegression:

import GPy

bo = BayesianOptimisation(domain=domain, seed=7)                     # initialise BO optimiser
kernel = GPy.kern.RBF(1) + GPy.kern.Bias(1)                          # create a custom kernel
custom_model = GPy.models.GPHeteroscedasticRegression(..., kernel)   # create a custom model
samples = bo.run_step(model=custom_model)                            # generate samples

Custom optimiser

If none of the predefined optimisers suits your problem, you can easily roll out a custom one. All you have to do is inherit from the base Optimiser class and implement the run_step() method:

class CustomOptimiser(Optimiser):
    def __init__(self, domain, *args, **kwargs):
        super(CustomOptimiser, self).__init__(domain)
        ...

    def run_step(self, batch_size, *args, **kwargs):
        ...
        return [samples]    # return a list of generated Sample objects
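
As a concrete illustration, the sketch below fills in the skeleton with a naive random-sampling strategy. It assumes that the base class stores the search space as self.domain and that Domain exposes a sample() method returning a random point; if those assumptions do not hold for your installation, adapt the body of run_step() accordingly:

class RandomisedOptimiser(Optimiser):
    def __init__(self, domain, seed=None, *args, **kwargs):
        super(RandomisedOptimiser, self).__init__(domain)
        self.seed = seed

    def run_step(self, batch_size=1, *args, **kwargs):
        # draw batch_size independent samples from the domain
        # (assumes Domain.sample() exists, as used by the built-in random search)
        return [self.domain.sample() for _ in range(batch_size)]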