BPt.ParamSearch

class BPt.ParamSearch(search_type='RandomSearch', cv='default', n_iter=10, scorer='default', weight_scorer=False, mp_context='loky', n_jobs='default', random_state='default', dask_ip=None, memmap_X=False, search_only_params=None, verbose=0, progress_bar='default', progress_loc=None)

ParamSearch is a special input object, designed to be used with ModelPipeline or Pipeline, that defines a hyper-parameter search strategy.

When passed to Pipeline, its search strategy is applied in the context of any Params set within the base pieces. Specifically, there must be at least one hyper-parameter distribution defined somewhere in the object ParamSearch is passed to. All backend hyper-parameter searches make use of the facebookresearch/nevergrad library.
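As a minimal sketch (assuming the BPt ModelPipeline and Model interfaces, and that the 'svm' model with params=1 supplies a preset hyper-parameter distribution for the search to explore), a ParamSearch might be attached to a pipeline like this:

```python
from BPt import ModelPipeline, Model, ParamSearch

# Hypothetical configuration: the Model's params define the search
# space, and ParamSearch defines how that space is searched.
search = ParamSearch(search_type='RandomSearch', n_iter=60)
pipeline = ModelPipeline(model=Model('svm', params=1),
                         param_search=search)
```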
Parameters
search_type : str, optional
The type of nevergrad hyper-parameter search to conduct. See Search Types for all available options.
You may also pass ‘grid’ here in addition to the supported nevergrad searches. This will use scikit-learn’s grid search instead. Note that in this case some of the other parameters are ignored: weight_scorer, mp_context, dask_ip, memmap_X and search_only_params.
default = 'RandomSearch'
cv : CV, ‘default’ or splits valid arg, optional
The hyper-parameter search works by internally evaluating each combination of parameters. In order to internally evaluate a set of parameters, there must be some type of cross-validation defined. This parameter is used to represent the choice of cross-validation to use. The set of parameters which achieves the highest average score across the folds defined here will be selected.
Passed input should be an instance of CV. If left as the custom str ‘default’, then a CV object will be initialized and used with just the default parameters (a 3-fold cross-validation, repeated once).
Lastly, you may also pass to cv any argument that would be valid to the splits parameter of the CV object. For example, int 3, would indicate a 3-fold CV.
default = 'default'
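To illustrate what a splits-style argument like int 3 means, here is a minimal, library-free sketch of splitting 9 data points into 3 validation folds (this mirrors the idea, not BPt's internal implementation):

```python
def k_fold_indices(n_samples, k):
    """Partition indices 0..n_samples-1 into k contiguous validation folds."""
    indices = list(range(n_samples))
    # Distribute any remainder across the first folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    return folds

print(k_fold_indices(9, 3))  # [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
```

Each candidate set of hyper-parameters is then scored on each validation fold in turn, and the candidate with the best average score wins.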
n_iter : int, optional
This parameter represents the number of different hyper-parameters to try. It can also be thought of as the budget given to the underlying search algorithm.
How well a hyper-parameter search works (i.e., the quality of the chosen parameters) and how long it takes to run are both quite dependent on this parameter and on the passed cv strategy. In general, if the budget is too small, the algorithm will likely not select high-performing hyper-parameters; if too high a value / budget is set, you may end up with overfit, non-generalizable hyper-parameter choices.
Other factors which might influence the ‘right’ number of n_iter to specify are:
  • search_type

    Depending on the underlying search type, it may take a bigger or smaller budget on average to find a good set of hyper-parameters

  • The dimension of the underlying search space

    If you are only optimizing a few, say 2, underlying parameter distributions, this will require a far smaller budget than, say, a very high-dimensional search space.

  • The CV strategy

    The CV strategy defined via cv may make it easier or harder to overfit when searching for hyper-parameters. A good choice of cross-validation strategy can therefore increase the n_iter you can use before overfitting, while a poor choice may limit it.

  • Number of data points / subjects

    Along with CV strategy, the number of data points / subjects will greatly influence how quickly you overfit, and therefore what constitutes a good choice of n_iter.

default = 10
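The role of the budget can be sketched with a toy random search in plain Python (the score function here is made up purely for illustration; nevergrad's actual optimizers are more sophisticated):

```python
import random

def toy_score(c):
    """A made-up validation score, peaking at c == 0.3."""
    return 1.0 - abs(c - 0.3)

def random_search(n_iter, seed=0):
    """Try n_iter random candidates and keep the best-scoring one."""
    rng = random.Random(seed)
    best_c, best_score = None, float('-inf')
    for _ in range(n_iter):
        c = rng.uniform(0.0, 1.0)
        score = toy_score(c)
        if score > best_score:
            best_c, best_score = c, score
    return best_c, best_score

# A larger budget generally gets closer to the optimum at 0.3,
# at the cost of more model evaluations (and more overfitting risk
# when the score comes from a real validation set).
_, score_small = random_search(n_iter=3)
_, score_large = random_search(n_iter=100)
```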
scorer : str or ‘default’, optional
In order for a set of hyper-parameters to be evaluated, a single scorer must be defined. In the case that multiple scorers are passed here, the first one will be used and the others ignored.

Note

If selecting a custom scorer (i.e., anything but ‘default’), be careful to select one appropriate for the underlying problem type.

When passing a custom scorer, you may optionally pass it within a dictionary, where the key is the name that will be used to represent that scorer.
If left as ‘default’, a reasonable scorer based on the problem type is used.
  • ‘regression’ : ‘r2’

  • ‘binary’ : ‘matthews’

  • ‘categorical’ : ‘matthews’

default = 'default'
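The described behavior can be mimicked as a plain lookup (this is an illustrative sketch of the rules above, not BPt's internal code; the function name resolve_scorer is hypothetical):

```python
# Default scorer per problem type, from the mapping above.
DEFAULT_SCORERS = {
    'regression': 'r2',
    'binary': 'matthews',
    'categorical': 'matthews',
}

def resolve_scorer(scorer, problem_type):
    """Mimic the described behavior: 'default' picks by problem type;
    for a dict, only the first entry is used, with its key serving as
    the scorer's display name; anything else is used as-is."""
    if scorer == 'default':
        return DEFAULT_SCORERS[problem_type]
    if isinstance(scorer, dict):
        name = next(iter(scorer))  # first key = display name
        return scorer[name]
    return scorer

print(resolve_scorer('default', 'binary'))  # matthews
```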
weight_scorer : bool or ‘default’, optional
The weight_scorer parameter allows for optionally weighting the scores by the number of subjects within each validation fold. If set to True, the mean score is computed as a weighted average instead.
This parameter is typically only useful in the case where the folds vary dramatically by size (e.g., in the case of a leave-out-group cross-validation, where the groups vary in size).
default = False
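The difference between the two averaging modes can be shown in a few lines of plain Python (a sketch of the idea, not BPt's internal implementation):

```python
def mean_fold_score(scores, fold_sizes, weight=False):
    """Average validation-fold scores, optionally weighting by fold size."""
    if not weight:
        return sum(scores) / len(scores)
    total = sum(fold_sizes)
    return sum(s * n for s, n in zip(scores, fold_sizes)) / total

scores = [0.9, 0.5]   # one big fold, one small fold
sizes = [90, 10]

unweighted = mean_fold_score(scores, sizes)               # ~0.7
weighted = mean_fold_score(scores, sizes, weight=True)    # ~0.86, small fold counts less
```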
mp_context : str, optional
When a hyper-parameter search is launched, there are different ways, through Python, that the multiprocessing can be started (assuming n_jobs > 1). Depending on the system running the code, some options may be more reliable than others.
Valid options are:
  • ‘loky’: Create and use the python library loky backend.

  • ‘fork’: Python default fork mp_context

  • ‘forkserver’: Python default forkserver mp_context

  • ‘spawn’: Python default spawn mp_context

As of version 1.3.6, the ‘loky’ backend is used by default; it is quite reliable across a range of systems and should rarely need to be changed.
default = 'loky'
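The three non-loky options map directly onto Python's standard multiprocessing start methods ('loky' itself comes from the separate loky library, not the stdlib). A quick way to inspect what your platform supports:

```python
import multiprocessing

# List the start methods available on this platform, then build a
# context object for each one that is supported.
available = multiprocessing.get_all_start_methods()
for method in ('fork', 'forkserver', 'spawn'):
    if method in available:
        ctx = multiprocessing.get_context(method)
        print(method, '->', type(ctx).__name__)
```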
n_jobs : int or ‘default’, optional
This parameter can be set in the case that a specific number of jobs (i.e., processors) should be used to run this parameter search.
If left as the default value ‘default’, then the choice of n_jobs will be inherited from the context in which the object this search is associated with is used. This is typically a good choice, and this value can usually be left as ‘default’.
default = 'default'
random_state : int, None or ‘default’, optional
If left as ‘default’, the random_state as set through an associated ProblemSpec will be used. Otherwise, you may specify a specific value here. Either an integer representing a fixed random state or None to specify a new random state each time.
default = 'default'
dask_ip : str or None, optional

If None, the default, then this parameter is ignored. Otherwise, this parameter represents experimental Dask support: the passed value should be a string giving the IP of an existing dask cluster. A dask Client object will then be created and passed this IP in order to connect to the cluster.

Warning

This functionality is still experimental, and will not work if the underlying search_type is ‘grid’.

Built into using dask to evaluate each combination of parameters is pre-scattering the data to each cluster node.

default = None
memmap_X : bool, optional
When passing large in-memory arrays to each parameter search, it can be useful, as a memory-reduction technique, to pass numpy memmap’ed arrays instead. This works around an issue where the loky backend cannot properly pass very large arrays.

Warning

This option can slow down code a large amount, and typically should not be used unless out-of-memory errors are encountered. This option is skipped if the underlying search_type is ‘grid’, or if dask_ip is also set; in the dask case, large data will be pre-scattered instead anyway.

default = False
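The underlying numpy mechanism looks like this (a generic numpy.memmap sketch, not BPt's internal code): the array is written to disk once, then re-opened memory-mapped, so worker processes read pages from the file instead of each holding a full in-memory copy.

```python
import os
import tempfile
import numpy as np

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'X.dat')

    # Write the data to a disk-backed memmap once.
    X = np.arange(12, dtype='float64').reshape(3, 4)
    mm = np.memmap(path, dtype='float64', mode='w+', shape=X.shape)
    mm[:] = X
    mm.flush()

    # Re-open read-only: contents come from the file, not process memory.
    X_mapped = np.memmap(path, dtype='float64', mode='r', shape=(3, 4))
    total = float(X_mapped.sum())
    print(total)  # same values as the original array
```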
search_only_params : dict or None, optional
In some rare cases, you may want to specify that certain parameters be passed only during the nested parameter searches. A dict of parameters can be passed here to accomplish that. For example, if passing:
search_only_params = {'svm classifier__probability': False}
Assuming that probability defaults to True for this svm classifier, probability will be set to False only while exploring nested hyper-parameter options; when fitting the final model with the best parameters found by the search, it will revert to the default (i.e., in this case probability = True).

Note

This may be difficult for non-advanced users to use, as you must pass parameters exactly as how they are represented internally. Using build on the piece of interest may be helpful in figuring out what this looks like for a specific piece and parameter.

To ignore this parameter / option, simply keep the default value of None.
default = None
verbose : int, optional

Controls the verbosity of the search: the higher the value, the more messages will be printed. By default no verbosity (i.e., verbose=0) is used.

default = 0
progress_bar : bool or ‘default’, optional

Include a separate progress bar to track the progress of the parameter search.

Default behavior is to use whatever value progress_bar is set to when calling evaluate(), otherwise False.

default = 'default'
progress_loc : None or str, optional

This is an optional parameter. If set to non-None, it should be passed a str giving the location of a text file to append to after every completed parameter evaluation.

default = None

Methods

copy()

This method returns a deepcopy of the base object.

get_params([deep])

Get parameters for this estimator.

set_params(**params)

Set the parameters of this estimator.