hypertunity.optimisation

Summary

Data classes

EvaluationScore
    A tuple of the evaluation value of the objective function and a variance if known.
HistoryPoint
    A tuple of a Sample at which the objective has been evaluated and the corresponding metrics.

Optimisers

Optimiser
    Abstract base class for all optimisers.
BayesianOptimisation
    Bayesian optimiser using GPyOpt as a backend.
GridSearch
    Grid search pseudo-optimiser.
RandomSearch
    Uniform random sampling pseudo-optimiser.
API documentation

class EvaluationScore(value, variance=0.0)[source]
A tuple of the evaluation value of the objective function and a variance if known.

class HistoryPoint(sample, metrics)[source]
A tuple of a Sample at which the objective has been evaluated and the corresponding metrics. The metrics are supplied as a dict mapping a str metric name to an EvaluationScore.
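
To make the two data classes concrete, the sketch below builds a single HistoryPoint by hand. The Domain import path and the dict-based domain specification are assumptions; EvaluationScore and HistoryPoint follow the constructors shown above, and Domain.sample() is the sampling method mentioned later on this page.

    from hypertunity.optimisation import EvaluationScore, HistoryPoint
    from hypertunity.domain import Domain   # assumed import path

    # Assumed dict-based specification: a continuous interval and a discrete set.
    domain = Domain({"learning_rate": [1e-4, 1e-1], "batch_size": {32, 64, 128}})

    sample = domain.sample()                        # one Sample drawn from the domain
    score = EvaluationScore(0.92, variance=0.01)    # evaluation value with a known variance
    point = HistoryPoint(sample=sample, metrics={"accuracy": score})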

class Optimiser(domain)[source]
Abstract base class for all optimisers. It must be implemented by all subclasses in this package.

Every Optimiser instance can be run for a single step using the run_step() method. The Optimiser does not perform the evaluation of the objective function; it only proposes values from its domain. An evaluation history must therefore be supplied via the update() method. The history can be erased and the Optimiser brought back to its initial state via the reset() method.
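
This propose-evaluate-update cycle can be written as a short, optimiser-agnostic driver. A minimal sketch using only the methods documented here (run_step(), update(), the history property); the drive() helper and the toy objective are illustrative, not part of the library.

    from hypertunity.optimisation import Optimiser, EvaluationScore

    def drive(optimiser: Optimiser, objective, n_steps: int = 10):
        """Generic propose-evaluate-update loop for any concrete Optimiser subclass."""
        for _ in range(n_steps):
            samples = optimiser.run_step(batch_size=1)                  # the optimiser proposes samples
            scores = [EvaluationScore(objective(s)) for s in samples]   # evaluation happens outside the optimiser
            optimiser.update(samples, scores)                           # results are fed back into the history
        return optimiser.history                                        # accumulated optimisation history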
__init__(domain)[source]
Initialise the optimiser with a domain.

Parameters:
    domain – Domain. The domain of the objective function.

property history
Return the accumulated optimisation history.

abstract run_step(batch_size, *args, **kwargs)[source]
Perform one step of optimisation and suggest the next sample to evaluate.

Parameters:
    batch_size – (optional) int. The number of samples to suggest at once.
    *args – optional arguments for the Optimiser.
    **kwargs – optional keyword arguments for the Optimiser.

Returns:
    A List[Sample] with the suggested samples to evaluate.

update(x, fx, **kwargs)[source]
Update the optimiser’s history with new points.

Parameters:
    x – Sample or List[Sample]. The samples at which the objective function has been evaluated.
    fx – EvaluationScore or List[EvaluationScore]. The evaluation scores at the corresponding samples.


class BayesianOptimisation(domain, seed=None)[source]
Bayesian optimiser using GPyOpt as a backend.

__init__(domain, seed=None)[source]
Initialise the optimiser’s domain.

Parameters:
    domain – Domain. The domain of the objective function.
    seed – (optional) int. The seed of the optimiser, used for reproducibility purposes.
run_step(batch_size=1, minimise=False, **kwargs)[source]
Run one step of Bayesian optimisation with a GP regression surrogate model.

The first sample of the domain is chosen at random. Only after the model has been updated with at least one (data point, evaluation score) pair are the GPs built and the acquisition function computed and optimised.

Parameters:
    batch_size – (optional) int. The number of samples to suggest at once. If larger than one, there is no guarantee for the optimality of the number of probes.
    minimise – (optional) bool. Whether the objective should be minimised.
    **kwargs – optional keyword arguments which will be passed to the backend GPyOpt.methods.BayesianOptimization optimiser.

Keyword Arguments:
    model – str or GPy.Model object. The surrogate model used by the backend optimiser.
    kernel – GPy.Kern object. The kernel used by the model.
    variance – float. The variance of the objective function.

Returns:
    A list of batch_size-many Sample instances at which the objective should be evaluated next.

Raises:
    ExhaustedSearchSpaceError – if the domain is discrete and gets exhausted.

update(x, fx, **kwargs)[source]
Update the surrogate model with the domain sample x and the function evaluation fx.

Parameters:
    x – Sample. One sample of the domain of the objective function.
    fx – a float, an EvaluationScore or a dict. The evaluation scores of the objective evaluated at x. If given as a dict, it must be a mapping from metric names to EvaluationScore or float results.
    **kwargs – unused by this model.
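
Putting run_step() and update() together, a Bayesian optimisation loop could look like the sketch below. The Domain import path, the dict-based domain specification and the dict-style Sample access are assumptions not covered by this page; the optimiser calls follow the signatures documented above.

    from hypertunity.optimisation import BayesianOptimisation, EvaluationScore
    from hypertunity.domain import Domain   # assumed import path

    # Assumed dict-based domain specification: continuous intervals given as lists.
    domain = Domain({"x": [-5.0, 5.0], "y": [-5.0, 5.0]})
    bo = BayesianOptimisation(domain, seed=93)

    for _ in range(15):
        samples = bo.run_step(batch_size=1, minimise=True)
        for sample in samples:
            value = (sample["x"] - 1.0) ** 2 + (sample["y"] + 2.0) ** 2   # dict-style Sample access is assumed
            bo.update(sample, EvaluationScore(value))                     # a plain float would also be accepted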


class GridSearch(domain, sample_continuous=False, seed=None)[source]
Grid search pseudo-optimiser.

__init__(domain, sample_continuous=False, seed=None)[source]
Initialise the GridSearch optimiser from a discrete domain. If the domain contains continuous subspaces, they can be sampled by enabling sample_continuous.

Parameters:
    domain – Domain. The domain to iterate over.
    sample_continuous – (optional) bool. Whether to sample the continuous subspaces of the domain.
    seed – (optional) int. Seed for the sampling of the continuous subspaces if necessary.

run_step(batch_size=1, **kwargs)[source]
Get the next batch_size samples from the Cartesian-product walk over the domain.

Parameters:
    batch_size – (optional) int. The number of samples to suggest at once.

Returns:
    A list of Sample instances from the domain.

Raises:
    ExhaustedSearchSpaceError – if the (discrete part of the) domain is fully exhausted and no samples can be generated.

Notes
This method does not guarantee that the returned list of Samples will be of length batch_size. This is due to the size of the domain and the fact that samples will not be repeated.
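
Because the discrete domain is walked exhaustively, a grid search loop typically runs until ExhaustedSearchSpaceError is raised. A minimal sketch; the Domain specification format, the import paths for Domain and ExhaustedSearchSpaceError, and the train_and_score() helper are assumptions.

    from hypertunity.optimisation import GridSearch, EvaluationScore
    from hypertunity.domain import Domain                                 # assumed import path
    from hypertunity.optimisation.base import ExhaustedSearchSpaceError   # assumed import path

    # Assumed dict-based specification: discrete sets of values.
    domain = Domain({"learning_rate": {0.1, 0.01, 0.001}, "layers": {2, 3, 4}})
    gs = GridSearch(domain)

    while True:
        try:
            samples = gs.run_step(batch_size=4)   # may return fewer than 4 samples near the end
        except ExhaustedSearchSpaceError:
            break                                 # every grid point has been proposed
        for s in samples:
            gs.update(s, EvaluationScore(train_and_score(s)))   # train_and_score is a hypothetical user function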


class RandomSearch(domain, seed=None)[source]
Uniform random sampling pseudo-optimiser.

__init__(domain, seed=None)[source]
Initialise the RandomSearch search space.

Parameters:
    domain – Domain. The domain of the objective function. It will be sampled uniformly using the sample() method of the Domain.
    seed – (optional) int. The seed for the domain sampling.
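
RandomSearch plugs into the same propose-evaluate-update loop as the other optimisers; the sketch below only shows construction and a single proposal step, with the Domain import path and specification format again assumed.

    from hypertunity.optimisation import RandomSearch
    from hypertunity.domain import Domain   # assumed import path

    # Assumed dict-based specification mixing a continuous interval and a discrete set.
    domain = Domain({"dropout": [0.0, 0.5], "optimizer": {"sgd", "adam"}})
    rs = RandomSearch(domain, seed=0)
    samples = rs.run_step(batch_size=3)   # three independent uniform draws via Domain.sample()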
