hypertunity.optimisation

Summary

Data classes

EvaluationScore A tuple of the evaluation value of the objective function and a variance if known.
HistoryPoint A tuple of a Sample at which the objective has been evaluated and the corresponding metrics.

Optimisers

Optimiser Abstract base class for all optimisers.
BayesianOptimisation Bayesian Optimiser using GPyOpt as a backend.
GridSearch Grid search pseudo-optimiser.
RandomSearch Uniform random sampling pseudo-optimiser.

API documentation

class EvaluationScore(value, variance=0.0)[source]

A tuple of the evaluation value of the objective function and a variance if known.

class HistoryPoint(sample, metrics)[source]

A tuple of a Sample at which the objective has been evaluated and the corresponding metrics. The metrics are supplied as a dict mapping str metric names to EvaluationScore objects.
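
For illustration, a HistoryPoint could be assembled as follows. This is a sketch: the dict-based Domain constructor and the top-level import are assumptions not documented on this page; domain.sample() is referenced under RandomSearch below:

    from hypertunity import Domain
    from hypertunity.optimisation import EvaluationScore, HistoryPoint

    domain = Domain({"x": [0., 1.]})     # assumed: a list denotes a continuous interval
    sample = domain.sample()             # draw one Sample from the domain
    score = EvaluationScore(value=0.92, variance=0.001)
    # metrics maps str metric names to EvaluationScore objects
    point = HistoryPoint(sample=sample, metrics={"accuracy": score})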

class Optimiser(domain)[source]

Abstract base class for all optimisers.

All optimisers in this package must subclass it and implement its abstract methods.

Every Optimiser instance can be run for a single step using the run_step() method. The Optimiser does not evaluate the objective function itself but only proposes values from its domain. Therefore, an evaluation history must be supplied via the update() method. The history can be erased, and the Optimiser brought back to its initial state, via the reset() method.
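
A minimal sketch of this propose-evaluate-update cycle, using RandomSearch (documented below) as the concrete Optimiser. The dict-based Domain constructor and the s["x"]-style Sample indexing are assumptions not specified on this page; run_step(), update(), history and reset() follow the API below:

    from hypertunity import Domain
    from hypertunity.optimisation import EvaluationScore, RandomSearch

    domain = Domain({"x": [-5., 5.]})               # assumed: list means a continuous interval
    optimiser = RandomSearch(domain, seed=7)

    for _ in range(5):
        samples = optimiser.run_step(batch_size=2)  # the optimiser only proposes samples
        scores = [EvaluationScore(-s["x"] ** 2)     # evaluation happens externally
                  for s in samples]                 # (dict-style access is an assumption)
        optimiser.update(samples, scores)           # feed the evaluations back

    print(len(optimiser.history))                   # 10 accumulated HistoryPoints
    optimiser.reset()                               # erase the history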

__init__(domain)[source]

Initialise the optimiser with a domain.

Parameters:domain – Domain. The domain of the objective function.
property history

Return the accumulated optimisation history.

reset()[source]

Reset the optimiser to the initial state.

abstract run_step(batch_size, *args, **kwargs)[source]

Perform one step of optimisation and suggest the next sample to evaluate.

Parameters:
  • batch_size – (optional) int. The number of samples to suggest at once.
  • *args – optional arguments for the Optimiser.
  • **kwargs – optional keyword arguments for the Optimiser.
Returns:

A List[Sample] with the suggested samples to evaluate.

update(x, fx, **kwargs)[source]

Update the optimiser’s history with new points.

Parameters:
  • x – Sample or List[Sample]. The samples at which the objective function has been evaluated.
  • fx – EvaluationScore or List[EvaluationScore]. The evaluation scores at the corresponding samples.
class BayesianOptimisation(domain, seed=None)[source]

Bayesian Optimiser using GPyOpt as a backend.

__init__(domain, seed=None)[source]

Initialise the optimiser with a domain and an optional random seed.

Parameters:
  • domain – Domain. The domain of the objective function.
  • seed – (optional) int. The seed of the optimiser. Used for reproducibility purposes.
reset()[source]

Reset the optimiser for a fresh start.

run_step(batch_size=1, minimise=False, **kwargs)[source]

Run one step of Bayesian optimisation with a GP regression surrogate model.

The first sample from the domain is chosen at random. Only after the model has been updated with at least one (data point, evaluation score) pair are the GPs built and the acquisition function computed and optimised.

Parameters:
  • batch_size – (optional) int. The number of samples to suggest at once. If larger than one, the optimality of the suggestions is not guaranteed.
  • minimise – (optional) bool. Whether the objective should be minimised.
  • **kwargs – optional keyword arguments which will be passed to the backend GPyOpt.methods.BayesianOptimization optimiser.
Keyword Arguments:
 
  • model – str or GPy.Model object. The surrogate model used by the backend optimiser.
  • kernel – GPy.Kern object. The kernel used by the model.
  • variance – float. The variance of the objective function.
Returns:

A list of batch_size-many Sample instances at which the objective should be evaluated next.

Raises:ExhaustedSearchSpaceError – if the domain is discrete and gets exhausted.
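
A sketch of a Bayesian optimisation loop that minimises a toy objective and passes a custom GPy kernel through **kwargs. The kernel keyword follows the list above; the Domain specification syntax and the Sample indexing are assumptions:

    import GPy
    from hypertunity import Domain
    from hypertunity.optimisation import BayesianOptimisation, EvaluationScore

    domain = Domain({"x": [-2., 2.]})               # assumed domain spec
    bo = BayesianOptimisation(domain, seed=93)

    for _ in range(8):
        # The first suggestion is random; later ones optimise the acquisition.
        samples = bo.run_step(batch_size=1, minimise=True,
                              kernel=GPy.kern.Matern52(input_dim=1))
        fx = (samples[0]["x"] - 0.5) ** 2           # toy objective value
        bo.update(samples[0], EvaluationScore(fx))
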
update(x, fx, **kwargs)[source]

Update the surrogate model with the domain sample x and the function evaluation fx.

Parameters:
  • x – Sample. One sample of the domain of the objective function.
  • fx – float, EvaluationScore or dict. The evaluation score(s) of the objective evaluated at x. If given as a dict, it must map str metric names to EvaluationScore or float results.
  • **kwargs – unused by this model.
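
For illustration, the three accepted forms of fx, continuing the bo optimiser from the sketch above (each call appends one HistoryPoint to the history):

    sample = bo.run_step(batch_size=1)[0]
    bo.update(sample, 0.42)                                  # plain float
    bo.update(sample, EvaluationScore(0.42, variance=0.01))  # score with known variance
    bo.update(sample, {"loss": EvaluationScore(0.42)})       # dict of named metrics
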
class GridSearch(domain, sample_continuous=False, seed=None)[source]

Grid search pseudo-optimiser.

__init__(domain, sample_continuous=False, seed=None)[source]

Initialise the GridSearch optimiser from a discrete domain.

If the domain contains continuous subspaces, they can be sampled by enabling sample_continuous.

Parameters:
  • domain – Domain. The domain to iterate over.
  • sample_continuous – (optional) bool. Whether to sample the continuous subspaces of the domain.
  • seed – (optional) int. Seed for the sampling of the continuous subspace if necessary.
reset()[source]

Reset the optimiser to the beginning of the Cartesian-product walk.

run_step(batch_size=1, **kwargs)[source]

Get the next batch_size samples from the Cartesian-product walk over the domain.

Parameters:batch_size – (optional) int. The number of samples to suggest at once.
Returns:A list of Sample instances from the domain.
Raises:ExhaustedSearchSpaceError – if the (discrete part of the) domain is fully exhausted and no samples can be generated.

Notes

This method does not guarantee that the returned list of Samples will be of length batch_size: since samples are never repeated, the final batches may be shorter once the domain is nearly exhausted.
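
A sketch of exhausting a small discrete domain, as described in the note above. The set-based Domain specification and the import path of ExhaustedSearchSpaceError are assumptions; only the GridSearch calls follow the API above:

    from hypertunity import Domain
    from hypertunity.optimisation import GridSearch
    from hypertunity.optimisation import ExhaustedSearchSpaceError  # assumed import path

    domain = Domain({"lr": {0.001, 0.01, 0.1}, "depth": {2, 4}})  # 3 x 2 = 6 grid points
    gs = GridSearch(domain)

    collected = []
    while True:
        try:
            collected.extend(gs.run_step(batch_size=4))  # the final batch may be shorter
        except ExhaustedSearchSpaceError:
            break
    print(len(collected))                                # 6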

class RandomSearch(domain, seed=None)[source]

Uniform random sampling pseudo-optimiser.

__init__(domain, seed=None)[source]

Initialise the RandomSearch optimiser with a search space.

Parameters:
  • domain – Domain. The domain of the objective function. It will be sampled uniformly using the sample() method of the Domain.
  • seed – (optional) int. The seed for the domain sampling.
run_step(batch_size=1, **kwargs)[source]

Sample the domain uniformly batch_size times.

Parameters:batch_size – (optional) int. The number of samples to return at one step.
Returns:A list of batch_size-many Sample instances.
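
For illustration, uniform sampling over a mixed continuous and categorical domain. The Domain specification syntax is an assumption; the seed makes the draws reproducible, per __init__ above:

    from hypertunity import Domain
    from hypertunity.optimisation import RandomSearch

    domain = Domain({"x": [0., 1.], "activation": {"relu", "tanh"}})  # assumed spec
    rs = RandomSearch(domain, seed=0)
    samples = rs.run_step(batch_size=3)  # three independent uniform draws
    for s in samples:
        print(s)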