Models#
The available base obj choices for Model are listed below.
The exact str indicator, as passed to the obj param, is given by each sub-heading (within “”).
The available models are further broken down by which problem types they can be used with.
Additionally, a link to each base class’s original documentation and the implemented parameter distributions are shown.
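Conceptually, the str indicator resolves to a base class plus a set of named preset distributions. The sketch below is purely illustrative (the registry dict and `resolve` helper are hypothetical, not BPt's actual internals; only the indicator strings and class paths are taken from the listing):

```python
# Hypothetical sketch of how an obj str indicator could map to a base
# class path and its available param-distribution names. This dict is
# NOT BPt's real implementation, just a model of the lookup.
MODEL_REGISTRY = {
    'dt classifier': ('sklearn.tree.DecisionTreeClassifier',
                      ['default', 'dt classifier dist']),
    'gaussian nb': ('sklearn.naive_bayes.GaussianNB',
                    ['base gnb']),
}

def resolve(obj):
    """Return (base class path, preset distribution names) for an obj string."""
    try:
        return MODEL_REGISTRY[obj]
    except KeyError:
        raise ValueError(f'Unknown model indicator: {obj!r}')

path, dists = resolve('dt classifier')
```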
binary#
“dt classifier”#
Base Class Documentation:
sklearn.tree.DecisionTreeClassifier
Param Distributions
“default”
defaults only
“dt classifier dist”
max_depth: Scalar(lower=1, upper=30).set_mutation(sigma=4.833333333333333).set_bounds(full_range_sampling=True, lower=1, upper=30).set_integer_casting()
min_samples_split: Scalar(lower=2, upper=50).set_mutation(sigma=8.0).set_bounds(full_range_sampling=True, lower=2, upper=50).set_integer_casting()
class_weight: TransitionChoice([None, 'balanced'])
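Scalar entries like the max_depth distribution above describe a bounded numeric parameter; with full_range_sampling=True it is drawn uniformly over its range, and set_integer_casting() rounds the draw to an int. A minimal stand-in for those sampling semantics (a sketch only, not nevergrad's actual code):

```python
import random

def sample_int_scalar(lower, upper, rng=random):
    """Sketch of Scalar(lower, upper) with full_range_sampling=True and
    set_integer_casting(): draw uniformly over the full range, then
    round to the nearest integer clipped back into bounds."""
    value = rng.uniform(lower, upper)
    return max(lower, min(upper, int(round(value))))

# e.g. the max_depth distribution above: Scalar(lower=1, upper=30)
depth = sample_int_scalar(1, 30)
```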
“elastic net logistic”#
Base Class Documentation:
sklearn.linear_model.LogisticRegression
Param Distributions
“base elastic”
max_iter: 100
multi_class: 'auto'
penalty: 'elasticnet'
class_weight: None
solver: 'saga'
l1_ratio: 0.5
“elastic classifier”
max_iter: 100
multi_class: 'auto'
penalty: 'elasticnet'
class_weight: TransitionChoice([None, 'balanced'])
solver: 'saga'
l1_ratio: Scalar(lower=0.01, upper=1).set_mutation(sigma=0.165).set_bounds(full_range_sampling=True, lower=0.01, upper=1)
C: Log(lower=1e-05, upper=100000.0)
“elastic clf v2”
max_iter: 100
multi_class: 'auto'
penalty: 'elasticnet'
class_weight: TransitionChoice([None, 'balanced'])
solver: 'saga'
l1_ratio: Scalar(lower=0.01, upper=1).set_mutation(sigma=0.165).set_bounds(full_range_sampling=True, lower=0.01, upper=1)
C: Log(lower=0.01, upper=100000.0)
“elastic classifier extra”
max_iter: Scalar(lower=100, upper=1000).set_mutation(sigma=150.0).set_bounds(full_range_sampling=True, lower=100, upper=1000).set_integer_casting()
multi_class: 'auto'
penalty: 'elasticnet'
class_weight: TransitionChoice([None, 'balanced'])
solver: 'saga'
l1_ratio: Scalar(lower=0.01, upper=1).set_mutation(sigma=0.165).set_bounds(full_range_sampling=True, lower=0.01, upper=1)
C: Log(lower=1e-05, upper=100000.0)
tol: Log(lower=1e-06, upper=0.01)
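Log(lower, upper) entries such as the C and tol ranges above denote a log-uniform search range, the usual choice for regularization strengths spanning many orders of magnitude. One common way to realize such a draw (a sketch of the semantics, assuming a base-10 log-uniform draw, not nevergrad's implementation):

```python
import math
import random

def sample_log(lower, upper, rng=random):
    """Sketch of a log-uniform draw: sample uniformly in log10 space,
    then map back. Covers ranges like C: Log(lower=1e-05, upper=100000.0)."""
    lo, hi = math.log10(lower), math.log10(upper)
    return 10.0 ** rng.uniform(lo, hi)

c = sample_log(1e-05, 100000.0)
```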
“et classifier”#
Base Class Documentation:
sklearn.ensemble.ExtraTreesClassifier
Param Distributions
“default”
defaults only
“gaussian nb”#
Base Class Documentation:
sklearn.naive_bayes.GaussianNB
Param Distributions
“base gnb”
var_smoothing: 1e-09
“gb classifier”#
Base Class Documentation:
sklearn.ensemble.GradientBoostingClassifier
Param Distributions
“default”
defaults only
“gp classifier”#
Base Class Documentation:
sklearn.gaussian_process.GaussianProcessClassifier
Param Distributions
“base gp classifier”
n_restarts_optimizer: 5
“hgb classifier”#
Base Class Documentation:
sklearn.ensemble.HistGradientBoostingClassifier
Param Distributions
“default”
defaults only
“hgb dist1”
max_iter: Scalar(init=100, lower=3, upper=200).set_mutation(sigma=32.833333333333336).set_bounds(full_range_sampling=False, lower=3, upper=200).set_integer_casting()
“hgb dist2”
max_iter: Scalar(init=100, lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=False, lower=3, upper=500).set_integer_casting()
min_samples_leaf: Scalar(lower=10, upper=100).set_mutation(sigma=15.0).set_bounds(full_range_sampling=True, lower=10, upper=100).set_integer_casting()
max_leaf_nodes: Scalar(init=20, lower=6, upper=80).set_mutation(sigma=12.333333333333334).set_bounds(full_range_sampling=False, lower=6, upper=80).set_integer_casting()
l2_regularization: TransitionChoice([0, Log(lower=1e-05, upper=1)])
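Distributions like l2_regularization: TransitionChoice([0, Log(...)]) nest a distribution inside a categorical choice: first pick "off" versus "on", then, if on, draw the strength log-uniformly. A sketch of that two-stage draw (again only the semantics, with a uniform branch pick assumed, not nevergrad's actual TransitionChoice behavior):

```python
import math
import random

def sample_l2(rng=random):
    """Sketch of TransitionChoice([0, Log(lower=1e-05, upper=1)]):
    choose a branch, then sample the nested Log if that branch is taken."""
    if rng.random() < 0.5:
        return 0  # branch 1: no regularization
    lo, hi = math.log10(1e-05), math.log10(1.0)
    return 10.0 ** rng.uniform(lo, hi)  # branch 2: log-uniform strength

val = sample_l2()
```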
“knn classifier”#
Base Class Documentation:
sklearn.neighbors.KNeighborsClassifier
Param Distributions
“base knn”
n_neighbors: 5
“knn dist”
weights: TransitionChoice(['uniform', 'distance'])
n_neighbors: Scalar(lower=2, upper=25).set_mutation(sigma=3.8333333333333335).set_bounds(full_range_sampling=True, lower=2, upper=25).set_integer_casting()
“lasso logistic”#
Base Class Documentation:
sklearn.linear_model.LogisticRegression
Param Distributions
“base lasso”
max_iter: 100
multi_class: 'auto'
penalty: 'l1'
class_weight: None
solver: 'liblinear'
“lasso C”
max_iter: 100
multi_class: 'auto'
penalty: 'l1'
class_weight: TransitionChoice([None, 'balanced'])
solver: 'liblinear'
C: Log(lower=1e-05, upper=1000.0)
“lasso C extra”
max_iter: Scalar(lower=100, upper=1000).set_mutation(sigma=150.0).set_bounds(full_range_sampling=True, lower=100, upper=1000).set_integer_casting()
multi_class: 'auto'
penalty: 'l1'
class_weight: TransitionChoice([None, 'balanced'])
solver: 'liblinear'
C: Log(lower=1e-05, upper=1000.0)
tol: Log(lower=1e-06, upper=0.01)
“light gbm classifier”#
Base Class Documentation:
BPt.extensions.BPtLGBM.BPtLGBMClassifier
Param Distributions
“base lgbm”
silent: True
“lgbm classifier dist1”
silent: True
boosting_type: TransitionChoice(['gbdt', 'dart', 'goss'])
n_estimators: Scalar(init=100, lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=False, lower=3, upper=500).set_integer_casting()
num_leaves: Scalar(init=20, lower=6, upper=80).set_mutation(sigma=12.333333333333334).set_bounds(full_range_sampling=False, lower=6, upper=80).set_integer_casting()
min_child_samples: Scalar(lower=10, upper=500).set_mutation(sigma=81.66666666666667).set_bounds(full_range_sampling=True, lower=10, upper=500).set_integer_casting()
min_child_weight: Log(lower=1e-05, upper=10000.0)
subsample: Scalar(lower=0.3, upper=0.95).set_mutation(sigma=0.10833333333333332).set_bounds(full_range_sampling=True, lower=0.3, upper=0.95)
colsample_bytree: Scalar(lower=0.3, upper=0.95).set_mutation(sigma=0.10833333333333332).set_bounds(full_range_sampling=True, lower=0.3, upper=0.95)
reg_alpha: TransitionChoice([0, Log(lower=1e-05, upper=1)])
reg_lambda: TransitionChoice([0, Log(lower=1e-05, upper=1)])
class_weight: TransitionChoice([None, 'balanced'])
“lgbm classifier dist2”
silent: True
lambda_l2: 0.001
boosting_type: TransitionChoice(['gbdt', 'dart'])
min_child_samples: TransitionChoice([1, 5, 7, 10, 15, 20, 35, 50, 100, 200, 500, 1000])
num_leaves: TransitionChoice([2, 4, 7, 10, 15, 20, 25, 30, 35, 40, 50, 65, 80, 100, 125, 150, 200, 250])
colsample_bytree: TransitionChoice([0.7, 0.9, 1.0])
subsample: Scalar(lower=0.3, upper=1).set_mutation(sigma=0.11666666666666665).set_bounds(full_range_sampling=True, lower=0.3, upper=1)
learning_rate: TransitionChoice([0.01, 0.05, 0.1])
n_estimators: TransitionChoice([5, 20, 35, 50, 75, 100, 150, 200, 350, 500, 750, 1000])
class_weight: TransitionChoice([None, 'balanced'])
“lgbm classifier dist3”
silent: True
n_estimators: 1000
early_stopping_rounds: 150
eval_split: 0.2
boosting_type: 'gbdt'
learning_rate: Log(init=0.1, lower=0.005, upper=0.2)
colsample_bytree: Scalar(init=1, lower=0.75, upper=1).set_mutation(sigma=0.041666666666666664).set_bounds(full_range_sampling=False, lower=0.75, upper=1)
min_child_samples: Scalar(init=20, lower=2, upper=30).set_mutation(sigma=4.666666666666667).set_bounds(full_range_sampling=False, lower=2, upper=30).set_integer_casting()
num_leaves: Scalar(init=31, lower=16, upper=96).set_mutation(sigma=13.333333333333334).set_bounds(full_range_sampling=False, lower=16, upper=96).set_integer_casting()
class_weight: TransitionChoice([None, 'balanced'])
“linear svm classifier”#
Base Class Documentation:
sklearn.svm.LinearSVC
Param Distributions
“base linear svc”
max_iter: 100
“linear svc dist”
max_iter: 100
C: Log(lower=1, upper=10000.0)
class_weight: TransitionChoice([None, 'balanced'])
“logistic”#
Base Class Documentation:
sklearn.linear_model.LogisticRegression
Param Distributions
“base logistic”
max_iter: 100
multi_class: 'auto'
penalty: 'none'
class_weight: None
solver: 'lbfgs'
“pa classifier”#
Base Class Documentation:
sklearn.linear_model.PassiveAggressiveClassifier
Param Distributions
“default”
defaults only
“random forest classifier”#
Base Class Documentation:
sklearn.ensemble.RandomForestClassifier
Param Distributions
“base rf regressor”
n_estimators: 100
“rf classifier dist best”
n_estimators: Scalar(init=100, lower=10, upper=200).set_mutation(sigma=31.666666666666668).set_bounds(full_range_sampling=False, lower=10, upper=200).set_integer_casting()
class_weight: TransitionChoice([None, 'balanced'])
“rf classifier dist”
n_estimators: Scalar(init=100, lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=False, lower=3, upper=500).set_integer_casting()
max_depth: TransitionChoice([None, Scalar(init=25, lower=2, upper=200).set_mutation(sigma=33.0).set_bounds(full_range_sampling=False, lower=2, upper=200).set_integer_casting()])
max_features: Scalar(lower=0.1, upper=1.0).set_mutation(sigma=0.15).set_bounds(full_range_sampling=True, lower=0.1, upper=1.0)
min_samples_split: Scalar(lower=0.1, upper=1.0).set_mutation(sigma=0.15).set_bounds(full_range_sampling=True, lower=0.1, upper=1.0)
bootstrap: True
class_weight: TransitionChoice([None, 'balanced'])
“ridge logistic”#
Base Class Documentation:
sklearn.linear_model.LogisticRegression
Param Distributions
“base ridge”
max_iter: 100
penalty: 'l2'
solver: 'saga'
“ridge C”
max_iter: 100
solver: 'saga'
C: Log(lower=1e-05, upper=1000.0)
class_weight: TransitionChoice([None, 'balanced'])
“ridge C extra”
max_iter: Scalar(lower=100, upper=1000).set_mutation(sigma=150.0).set_bounds(full_range_sampling=True, lower=100, upper=1000).set_integer_casting()
solver: 'saga'
C: Log(lower=1e-05, upper=1000.0)
class_weight: TransitionChoice([None, 'balanced'])
tol: Log(lower=1e-06, upper=0.01)
“sgd classifier”#
Base Class Documentation:
sklearn.linear_model.SGDClassifier
Param Distributions
“default”
defaults only
“sgd elastic classifier”
loss: 'squared_epsilon_insensitive'
penalty: 'elasticnet'
alpha: Log(lower=1e-05, upper=100000.0)
l1_ratio: Scalar(lower=0.01, upper=1).set_mutation(sigma=0.165).set_bounds(full_range_sampling=True, lower=0.01, upper=1)
class_weight: TransitionChoice([None, 'balanced'])
“sgd classifier big search”
loss: TransitionChoice(['hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron'])
penalty: TransitionChoice(['l2', 'l1', 'elasticnet'])
alpha: Log(lower=1e-05, upper=100.0)
l1_ratio: Scalar(lower=0.01, upper=1).set_mutation(sigma=0.165).set_bounds(full_range_sampling=True, lower=0.01, upper=1)
max_iter: 100
learning_rate: TransitionChoice(['optimal', 'invscaling', 'adaptive', 'constant'])
eta0: Log(lower=1e-06, upper=1000.0)
power_t: Scalar(lower=0.1, upper=0.9).set_mutation(sigma=0.13333333333333333).set_bounds(full_range_sampling=True, lower=0.1, upper=0.9)
early_stopping: TransitionChoice([False, True])
validation_fraction: Scalar(lower=0.05, upper=0.5).set_mutation(sigma=0.075).set_bounds(full_range_sampling=True, lower=0.05, upper=0.5)
n_iter_no_change: Scalar(lower=5, upper=30).set_mutation(sigma=4.166666666666667).set_bounds(full_range_sampling=True, lower=5, upper=30).set_integer_casting()
class_weight: TransitionChoice([None, 'balanced'])
“svm classifier”#
Base Class Documentation:
sklearn.svm.SVC
Param Distributions
“base svm classifier”
kernel: 'rbf'
gamma: 'scale'
probability: True
“svm classifier dist”
kernel: 'rbf'
gamma: Log(lower=1e-06, upper=1)
C: Log(lower=0.0001, upper=10000.0)
probability: True
class_weight: TransitionChoice([None, 'balanced'])
“xgb classifier”#
Base Class Documentation:
xgboost.XGBClassifier
Param Distributions
“base xgb classifier”
verbosity: 0
objective: 'binary:logistic'
“xgb classifier dist1”
verbosity: 0
objective: 'binary:logistic'
n_estimators: Scalar(init=100, lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=False, lower=3, upper=500).set_integer_casting()
min_child_weight: Log(lower=1e-05, upper=10000.0)
subsample: Scalar(lower=0.3, upper=0.95).set_mutation(sigma=0.10833333333333332).set_bounds(full_range_sampling=True, lower=0.3, upper=0.95)
colsample_bytree: Scalar(lower=0.3, upper=0.95).set_mutation(sigma=0.10833333333333332).set_bounds(full_range_sampling=True, lower=0.3, upper=0.95)
reg_alpha: TransitionChoice([0, Log(lower=1e-05, upper=1)])
reg_lambda: TransitionChoice([0, Log(lower=1e-05, upper=1)])
“xgb classifier dist2”
verbosity: 0
objective: 'binary:logistic'
max_depth: TransitionChoice([None, Scalar(init=25, lower=2, upper=200).set_mutation(sigma=33.0).set_bounds(full_range_sampling=False, lower=2, upper=200).set_integer_casting()])
learning_rate: Scalar(lower=0.01, upper=0.5).set_mutation(sigma=0.08166666666666667).set_bounds(full_range_sampling=True, lower=0.01, upper=0.5)
n_estimators: Scalar(lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=True, lower=3, upper=500).set_integer_casting()
min_child_weight: TransitionChoice([1, 5, 10, 50])
subsample: Scalar(lower=0.5, upper=1).set_mutation(sigma=0.08333333333333333).set_bounds(full_range_sampling=True, lower=0.5, upper=1)
colsample_bytree: Scalar(lower=0.4, upper=0.95).set_mutation(sigma=0.09166666666666666).set_bounds(full_range_sampling=True, lower=0.4, upper=0.95)
“xgb classifier dist3”
verbosity: 0
objective: 'binary:logistic'
learning_rate: Scalar(lower=0.005, upper=0.3).set_mutation(sigma=0.049166666666666664).set_bounds(full_range_sampling=True, lower=0.005, upper=0.3)
min_child_weight: Scalar(lower=0.5, upper=10).set_mutation(sigma=1.5833333333333333).set_bounds(full_range_sampling=True, lower=0.5, upper=10)
max_depth: TransitionChoice([3, 4, 5, 6, 7, 8, 9])
subsample: Scalar(lower=0.5, upper=1).set_mutation(sigma=0.08333333333333333).set_bounds(full_range_sampling=True, lower=0.5, upper=1)
colsample_bytree: Scalar(lower=0.5, upper=1).set_mutation(sigma=0.08333333333333333).set_bounds(full_range_sampling=True, lower=0.5, upper=1)
reg_alpha: Log(lower=1e-05, upper=1)
regression#
“ard regressor”#
Base Class Documentation:
sklearn.linear_model.ARDRegression
Param Distributions
“default”
defaults only
“bayesian ridge regressor”#
Base Class Documentation:
sklearn.linear_model.BayesianRidge
Param Distributions
“default”
defaults only
“dt regressor”#
Base Class Documentation:
sklearn.tree.DecisionTreeRegressor
Param Distributions
“default”
defaults only
“dt dist”
max_depth: Scalar(lower=1, upper=30).set_mutation(sigma=4.833333333333333).set_bounds(full_range_sampling=True, lower=1, upper=30).set_integer_casting()
min_samples_split: Scalar(lower=2, upper=50).set_mutation(sigma=8.0).set_bounds(full_range_sampling=True, lower=2, upper=50).set_integer_casting()
“elastic net regressor”#
Base Class Documentation:
sklearn.linear_model.ElasticNet
Param Distributions
“base elastic net”
max_iter: 100
“elastic regression”
max_iter: 100
alpha: Log(lower=1e-05, upper=100000.0)
l1_ratio: Scalar(lower=0.01, upper=1).set_mutation(sigma=0.165).set_bounds(full_range_sampling=True, lower=0.01, upper=1)
“elastic regression extra”
max_iter: Scalar(lower=100, upper=1000).set_mutation(sigma=150.0).set_bounds(full_range_sampling=True, lower=100, upper=1000).set_integer_casting()
alpha: Log(lower=1e-05, upper=100000.0)
l1_ratio: Scalar(lower=0.01, upper=1).set_mutation(sigma=0.165).set_bounds(full_range_sampling=True, lower=0.01, upper=1)
tol: Log(lower=1e-06, upper=0.01)
“et regressor”#
Base Class Documentation:
sklearn.ensemble.ExtraTreesRegressor
Param Distributions
“default”
defaults only
“gb regressor”#
Base Class Documentation:
sklearn.ensemble.GradientBoostingRegressor
Param Distributions
“default”
defaults only
“gp regressor”#
Base Class Documentation:
sklearn.gaussian_process.GaussianProcessRegressor
Param Distributions
“base gp regressor”
n_restarts_optimizer: 5
normalize_y: True
“hgb regressor”#
Base Class Documentation:
sklearn.ensemble.HistGradientBoostingRegressor
Param Distributions
“default”
defaults only
“hgb dist1”
max_iter: Scalar(init=100, lower=3, upper=200).set_mutation(sigma=32.833333333333336).set_bounds(full_range_sampling=False, lower=3, upper=200).set_integer_casting()
“hgb dist2”
max_iter: Scalar(init=100, lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=False, lower=3, upper=500).set_integer_casting()
min_samples_leaf: Scalar(lower=10, upper=100).set_mutation(sigma=15.0).set_bounds(full_range_sampling=True, lower=10, upper=100).set_integer_casting()
max_leaf_nodes: Scalar(init=20, lower=6, upper=80).set_mutation(sigma=12.333333333333334).set_bounds(full_range_sampling=False, lower=6, upper=80).set_integer_casting()
l2_regularization: TransitionChoice([0, Log(lower=1e-05, upper=1)])
“knn regressor”#
Base Class Documentation:
sklearn.neighbors.KNeighborsRegressor
Param Distributions
“base knn regression”
n_neighbors: 5
“knn dist regression”
weights: TransitionChoice(['uniform', 'distance'])
n_neighbors: Scalar(lower=2, upper=25).set_mutation(sigma=3.8333333333333335).set_bounds(full_range_sampling=True, lower=2, upper=25).set_integer_casting()
“lasso regressor”#
Base Class Documentation:
sklearn.linear_model.Lasso
Param Distributions
“base lasso regressor”
max_iter: 100
“lasso regressor dist”
max_iter: 100
alpha: Log(lower=1e-05, upper=100000.0)
“light gbm regressor”#
Base Class Documentation:
BPt.extensions.BPtLGBM.BPtLGBMRegressor
Param Distributions
“base lgbm”
silent: True
“lgbm dist best”
silent: True
lambda_l2: 0.001
boosting_type: TransitionChoice(['gbdt', 'dart'])
min_child_samples: TransitionChoice([1, 5, 7, 10, 15, 20, 35, 50, 100, 200, 500, 1000])
num_leaves: TransitionChoice([2, 4, 7, 10, 15, 20, 25, 30, 35, 40, 50, 65, 80, 100, 125, 150, 200, 250])
colsample_bytree: TransitionChoice([0.7, 0.9, 1.0])
subsample: Scalar(lower=0.3, upper=1).set_mutation(sigma=0.11666666666666665).set_bounds(full_range_sampling=True, lower=0.3, upper=1)
learning_rate: TransitionChoice([0.01, 0.05, 0.1])
n_estimators: TransitionChoice([5, 20, 35, 50, 75, 100, 150, 200, 350, 500, 750, 1000])
“lgbm dist1”
silent: True
boosting_type: TransitionChoice(['gbdt', 'dart', 'goss'])
n_estimators: Scalar(init=100, lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=False, lower=3, upper=500).set_integer_casting()
num_leaves: Scalar(init=20, lower=6, upper=80).set_mutation(sigma=12.333333333333334).set_bounds(full_range_sampling=False, lower=6, upper=80).set_integer_casting()
min_child_samples: Scalar(lower=10, upper=500).set_mutation(sigma=81.66666666666667).set_bounds(full_range_sampling=True, lower=10, upper=500).set_integer_casting()
min_child_weight: Log(lower=1e-05, upper=10000.0)
subsample: Scalar(lower=0.3, upper=0.95).set_mutation(sigma=0.10833333333333332).set_bounds(full_range_sampling=True, lower=0.3, upper=0.95)
colsample_bytree: Scalar(lower=0.3, upper=0.95).set_mutation(sigma=0.10833333333333332).set_bounds(full_range_sampling=True, lower=0.3, upper=0.95)
reg_alpha: TransitionChoice([0, Log(lower=1e-05, upper=1)])
reg_lambda: TransitionChoice([0, Log(lower=1e-05, upper=1)])
“lgbm dist3”
silent: True
n_estimators: 1000
early_stopping_rounds: 150
eval_split: 0.2
boosting_type: 'gbdt'
learning_rate: Log(init=0.1, lower=0.005, upper=0.2)
colsample_bytree: Scalar(init=1, lower=0.75, upper=1).set_mutation(sigma=0.041666666666666664).set_bounds(full_range_sampling=False, lower=0.75, upper=1)
min_child_samples: Scalar(init=20, lower=2, upper=30).set_mutation(sigma=4.666666666666667).set_bounds(full_range_sampling=False, lower=2, upper=30).set_integer_casting()
num_leaves: Scalar(init=31, lower=16, upper=96).set_mutation(sigma=13.333333333333334).set_bounds(full_range_sampling=False, lower=16, upper=96).set_integer_casting()
“linear regressor”#
Base Class Documentation:
sklearn.linear_model.LinearRegression
Param Distributions
“base linear”
fit_intercept: True
“linear svm regressor”#
Base Class Documentation:
sklearn.svm.LinearSVR
Param Distributions
“base linear svr”
loss: 'epsilon_insensitive'
max_iter: 10000.0
“linear svr dist”
loss: 'epsilon_insensitive'
max_iter: 10000.0
C: Log(lower=1, upper=10000.0)
“random forest regressor”#
Base Class Documentation:
sklearn.ensemble.RandomForestRegressor
Param Distributions
“base rf”
n_estimators: 100
“rf dist best”
n_estimators: Scalar(init=100, lower=10, upper=200).set_mutation(sigma=31.666666666666668).set_bounds(full_range_sampling=False, lower=10, upper=200).set_integer_casting()
“rf dist”
n_estimators: Scalar(init=100, lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=False, lower=3, upper=500).set_integer_casting()
max_depth: TransitionChoice([None, Scalar(init=25, lower=2, upper=200).set_mutation(sigma=33.0).set_bounds(full_range_sampling=False, lower=2, upper=200).set_integer_casting()])
max_features: Scalar(lower=0.1, upper=1.0).set_mutation(sigma=0.15).set_bounds(full_range_sampling=True, lower=0.1, upper=1.0)
min_samples_split: Scalar(lower=0.1, upper=1.0).set_mutation(sigma=0.15).set_bounds(full_range_sampling=True, lower=0.1, upper=1.0)
bootstrap: True
“ridge regressor”#
Base Class Documentation:
sklearn.linear_model.Ridge
Param Distributions
“base ridge regressor”
max_iter: 100
solver: 'lsqr'
“ridge regressor best”
max_iter: 1000
solver: 'lsqr'
alpha: Log(lower=0.001, upper=1000000.0)
“ridge regressor dist”
max_iter: 100
solver: 'lsqr'
alpha: Log(lower=0.001, upper=100000.0)
“svm regressor”#
Base Class Documentation:
sklearn.svm.SVR
Param Distributions
“base svm”
kernel: 'rbf'
gamma: 'scale'
“svm dist”
kernel: 'rbf'
gamma: Log(lower=1e-06, upper=1)
C: Log(lower=0.0001, upper=10000.0)
“tweedie regressor”#
Base Class Documentation:
sklearn.linear_model.TweedieRegressor
Param Distributions
“default”
defaults only
“xgb regressor”#
Base Class Documentation:
xgboost.XGBRegressor
Param Distributions
“base xgb”
verbosity: 0
objective: 'reg:squarederror'
“xgb dist1”
verbosity: 0
objective: 'reg:squarederror'
n_estimators: Scalar(init=100, lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=False, lower=3, upper=500).set_integer_casting()
min_child_weight: Log(lower=1e-05, upper=10000.0)
subsample: Scalar(lower=0.3, upper=0.95).set_mutation(sigma=0.10833333333333332).set_bounds(full_range_sampling=True, lower=0.3, upper=0.95)
colsample_bytree: Scalar(lower=0.3, upper=0.95).set_mutation(sigma=0.10833333333333332).set_bounds(full_range_sampling=True, lower=0.3, upper=0.95)
reg_alpha: TransitionChoice([0, Log(lower=1e-05, upper=1)])
reg_lambda: TransitionChoice([0, Log(lower=1e-05, upper=1)])
“xgb dist2”
verbosity: 0
objective: 'reg:squarederror'
max_depth: TransitionChoice([None, Scalar(init=25, lower=2, upper=200).set_mutation(sigma=33.0).set_bounds(full_range_sampling=False, lower=2, upper=200).set_integer_casting()])
learning_rate: Scalar(lower=0.01, upper=0.5).set_mutation(sigma=0.08166666666666667).set_bounds(full_range_sampling=True, lower=0.01, upper=0.5)
n_estimators: Scalar(lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=True, lower=3, upper=500).set_integer_casting()
min_child_weight: TransitionChoice([1, 5, 10, 50])
subsample: Scalar(lower=0.5, upper=1).set_mutation(sigma=0.08333333333333333).set_bounds(full_range_sampling=True, lower=0.5, upper=1)
colsample_bytree: Scalar(lower=0.4, upper=0.95).set_mutation(sigma=0.09166666666666666).set_bounds(full_range_sampling=True, lower=0.4, upper=0.95)
“xgb dist3”
verbosity: 0
objective: 'reg:squarederror'
learning_rate: Scalar(lower=0.005, upper=0.3).set_mutation(sigma=0.049166666666666664).set_bounds(full_range_sampling=True, lower=0.005, upper=0.3)
min_child_weight: Scalar(lower=0.5, upper=10).set_mutation(sigma=1.5833333333333333).set_bounds(full_range_sampling=True, lower=0.5, upper=10)
max_depth: TransitionChoice([3, 4, 5, 6, 7, 8, 9])
subsample: Scalar(lower=0.5, upper=1).set_mutation(sigma=0.08333333333333333).set_bounds(full_range_sampling=True, lower=0.5, upper=1)
colsample_bytree: Scalar(lower=0.5, upper=1).set_mutation(sigma=0.08333333333333333).set_bounds(full_range_sampling=True, lower=0.5, upper=1)
reg_alpha: Log(lower=1e-05, upper=1)
categorical#
“dt classifier”#
Base Class Documentation:
sklearn.tree.DecisionTreeClassifier
Param Distributions
“default”
defaults only
“dt classifier dist”
max_depth: Scalar(lower=1, upper=30).set_mutation(sigma=4.833333333333333).set_bounds(full_range_sampling=True, lower=1, upper=30).set_integer_casting()
min_samples_split: Scalar(lower=2, upper=50).set_mutation(sigma=8.0).set_bounds(full_range_sampling=True, lower=2, upper=50).set_integer_casting()
class_weight: TransitionChoice([None, 'balanced'])
“elastic net logistic”#
Base Class Documentation:
sklearn.linear_model.LogisticRegression
Param Distributions
“base elastic”
max_iter: 100
multi_class: 'auto'
penalty: 'elasticnet'
class_weight: None
solver: 'saga'
l1_ratio: 0.5
“elastic classifier”
max_iter: 100
multi_class: 'auto'
penalty: 'elasticnet'
class_weight: TransitionChoice([None, 'balanced'])
solver: 'saga'
l1_ratio: Scalar(lower=0.01, upper=1).set_mutation(sigma=0.165).set_bounds(full_range_sampling=True, lower=0.01, upper=1)
C: Log(lower=1e-05, upper=100000.0)
“elastic clf v2”
max_iter: 100
multi_class: 'auto'
penalty: 'elasticnet'
class_weight: TransitionChoice([None, 'balanced'])
solver: 'saga'
l1_ratio: Scalar(lower=0.01, upper=1).set_mutation(sigma=0.165).set_bounds(full_range_sampling=True, lower=0.01, upper=1)
C: Log(lower=0.01, upper=100000.0)
“elastic classifier extra”
max_iter: Scalar(lower=100, upper=1000).set_mutation(sigma=150.0).set_bounds(full_range_sampling=True, lower=100, upper=1000).set_integer_casting()
multi_class: 'auto'
penalty: 'elasticnet'
class_weight: TransitionChoice([None, 'balanced'])
solver: 'saga'
l1_ratio: Scalar(lower=0.01, upper=1).set_mutation(sigma=0.165).set_bounds(full_range_sampling=True, lower=0.01, upper=1)
C: Log(lower=1e-05, upper=100000.0)
tol: Log(lower=1e-06, upper=0.01)
“et classifier”#
Base Class Documentation:
sklearn.ensemble.ExtraTreesClassifier
Param Distributions
“default”
defaults only
“gaussian nb”#
Base Class Documentation:
sklearn.naive_bayes.GaussianNB
Param Distributions
“base gnb”
var_smoothing: 1e-09
“gb classifier”#
Base Class Documentation:
sklearn.ensemble.GradientBoostingClassifier
Param Distributions
“default”
defaults only
“gp classifier”#
Base Class Documentation:
sklearn.gaussian_process.GaussianProcessClassifier
Param Distributions
“base gp classifier”
n_restarts_optimizer: 5
“hgb classifier”#
Base Class Documentation:
sklearn.ensemble.HistGradientBoostingClassifier
Param Distributions
“default”
defaults only
“hgb dist1”
max_iter: Scalar(init=100, lower=3, upper=200).set_mutation(sigma=32.833333333333336).set_bounds(full_range_sampling=False, lower=3, upper=200).set_integer_casting()
“hgb dist2”
max_iter: Scalar(init=100, lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=False, lower=3, upper=500).set_integer_casting()
min_samples_leaf: Scalar(lower=10, upper=100).set_mutation(sigma=15.0).set_bounds(full_range_sampling=True, lower=10, upper=100).set_integer_casting()
max_leaf_nodes: Scalar(init=20, lower=6, upper=80).set_mutation(sigma=12.333333333333334).set_bounds(full_range_sampling=False, lower=6, upper=80).set_integer_casting()
l2_regularization: TransitionChoice([0, Log(lower=1e-05, upper=1)])
“knn classifier”#
Base Class Documentation:
sklearn.neighbors.KNeighborsClassifier
Param Distributions
“base knn”
n_neighbors: 5
“knn dist”
weights: TransitionChoice(['uniform', 'distance'])
n_neighbors: Scalar(lower=2, upper=25).set_mutation(sigma=3.8333333333333335).set_bounds(full_range_sampling=True, lower=2, upper=25).set_integer_casting()
“lasso logistic”#
Base Class Documentation:
sklearn.linear_model.LogisticRegression
Param Distributions
“base lasso”
max_iter: 100
multi_class: 'auto'
penalty: 'l1'
class_weight: None
solver: 'liblinear'
“lasso C”
max_iter: 100
multi_class: 'auto'
penalty: 'l1'
class_weight: TransitionChoice([None, 'balanced'])
solver: 'liblinear'
C: Log(lower=1e-05, upper=1000.0)
“lasso C extra”
max_iter: Scalar(lower=100, upper=1000).set_mutation(sigma=150.0).set_bounds(full_range_sampling=True, lower=100, upper=1000).set_integer_casting()
multi_class: 'auto'
penalty: 'l1'
class_weight: TransitionChoice([None, 'balanced'])
solver: 'liblinear'
C: Log(lower=1e-05, upper=1000.0)
tol: Log(lower=1e-06, upper=0.01)
“light gbm classifier”#
Base Class Documentation:
BPt.extensions.BPtLGBM.BPtLGBMClassifier
Param Distributions
“base lgbm”
silent: True
“lgbm classifier dist1”
silent: True
boosting_type: TransitionChoice(['gbdt', 'dart', 'goss'])
n_estimators: Scalar(init=100, lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=False, lower=3, upper=500).set_integer_casting()
num_leaves: Scalar(init=20, lower=6, upper=80).set_mutation(sigma=12.333333333333334).set_bounds(full_range_sampling=False, lower=6, upper=80).set_integer_casting()
min_child_samples: Scalar(lower=10, upper=500).set_mutation(sigma=81.66666666666667).set_bounds(full_range_sampling=True, lower=10, upper=500).set_integer_casting()
min_child_weight: Log(lower=1e-05, upper=10000.0)
subsample: Scalar(lower=0.3, upper=0.95).set_mutation(sigma=0.10833333333333332).set_bounds(full_range_sampling=True, lower=0.3, upper=0.95)
colsample_bytree: Scalar(lower=0.3, upper=0.95).set_mutation(sigma=0.10833333333333332).set_bounds(full_range_sampling=True, lower=0.3, upper=0.95)
reg_alpha: TransitionChoice([0, Log(lower=1e-05, upper=1)])
reg_lambda: TransitionChoice([0, Log(lower=1e-05, upper=1)])
class_weight: TransitionChoice([None, 'balanced'])
“lgbm classifier dist2”
silent: True
lambda_l2: 0.001
boosting_type: TransitionChoice(['gbdt', 'dart'])
min_child_samples: TransitionChoice([1, 5, 7, 10, 15, 20, 35, 50, 100, 200, 500, 1000])
num_leaves: TransitionChoice([2, 4, 7, 10, 15, 20, 25, 30, 35, 40, 50, 65, 80, 100, 125, 150, 200, 250])
colsample_bytree: TransitionChoice([0.7, 0.9, 1.0])
subsample: Scalar(lower=0.3, upper=1).set_mutation(sigma=0.11666666666666665).set_bounds(full_range_sampling=True, lower=0.3, upper=1)
learning_rate: TransitionChoice([0.01, 0.05, 0.1])
n_estimators: TransitionChoice([5, 20, 35, 50, 75, 100, 150, 200, 350, 500, 750, 1000])
class_weight: TransitionChoice([None, 'balanced'])
“lgbm classifier dist3”
silent: True
n_estimators: 1000
early_stopping_rounds: 150
eval_split: 0.2
boosting_type: 'gbdt'
learning_rate: Log(init=0.1, lower=0.005, upper=0.2)
colsample_bytree: Scalar(init=1, lower=0.75, upper=1).set_mutation(sigma=0.041666666666666664).set_bounds(full_range_sampling=False, lower=0.75, upper=1)
min_child_samples: Scalar(init=20, lower=2, upper=30).set_mutation(sigma=4.666666666666667).set_bounds(full_range_sampling=False, lower=2, upper=30).set_integer_casting()
num_leaves: Scalar(init=31, lower=16, upper=96).set_mutation(sigma=13.333333333333334).set_bounds(full_range_sampling=False, lower=16, upper=96).set_integer_casting()
class_weight: TransitionChoice([None, 'balanced'])
“linear svm classifier”#
Base Class Documentation:
sklearn.svm.LinearSVC
Param Distributions
“base linear svc”
max_iter: 100
“linear svc dist”
max_iter: 100
C: Log(lower=1, upper=10000.0)
class_weight: TransitionChoice([None, 'balanced'])
“logistic”#
Base Class Documentation:
sklearn.linear_model.LogisticRegression
Param Distributions
“base logistic”
max_iter: 100
multi_class: 'auto'
penalty: 'none'
class_weight: None
solver: 'lbfgs'
“pa classifier”#
Base Class Documentation:
sklearn.linear_model.PassiveAggressiveClassifier
Param Distributions
“default”
defaults only
“random forest classifier”#
Base Class Documentation:
sklearn.ensemble.RandomForestClassifier
Param Distributions
“base rf regressor”
n_estimators: 100
“rf classifier dist best”
n_estimators: Scalar(init=100, lower=10, upper=200).set_mutation(sigma=31.666666666666668).set_bounds(full_range_sampling=False, lower=10, upper=200).set_integer_casting()
class_weight: TransitionChoice([None, 'balanced'])
“rf classifier dist”
n_estimators: Scalar(init=100, lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=False, lower=3, upper=500).set_integer_casting()
max_depth: TransitionChoice([None, Scalar(init=25, lower=2, upper=200).set_mutation(sigma=33.0).set_bounds(full_range_sampling=False, lower=2, upper=200).set_integer_casting()])
max_features: Scalar(lower=0.1, upper=1.0).set_mutation(sigma=0.15).set_bounds(full_range_sampling=True, lower=0.1, upper=1.0)
min_samples_split: Scalar(lower=0.1, upper=1.0).set_mutation(sigma=0.15).set_bounds(full_range_sampling=True, lower=0.1, upper=1.0)
bootstrap: True
class_weight: TransitionChoice([None, 'balanced'])
“ridge logistic”#
Base Class Documentation:
sklearn.linear_model.LogisticRegression
Param Distributions
“base ridge”
max_iter: 100
penalty: 'l2'
solver: 'saga'
“ridge C”
max_iter: 100
solver: 'saga'
C: Log(lower=1e-05, upper=1000.0)
class_weight: TransitionChoice([None, 'balanced'])
“ridge C extra”
max_iter: Scalar(lower=100, upper=1000).set_mutation(sigma=150.0).set_bounds(full_range_sampling=True, lower=100, upper=1000).set_integer_casting()
solver: 'saga'
C: Log(lower=1e-05, upper=1000.0)
class_weight: TransitionChoice([None, 'balanced'])
tol: Log(lower=1e-06, upper=0.01)
“sgd classifier”#
Base Class Documentation:
sklearn.linear_model.SGDClassifier
Param Distributions
“default”
defaults only
“sgd elastic classifier”
loss: 'squared_epsilon_insensitive'
penalty: 'elasticnet'
alpha: Log(lower=1e-05, upper=100000.0)
l1_ratio: Scalar(lower=0.01, upper=1).set_mutation(sigma=0.165).set_bounds(full_range_sampling=True, lower=0.01, upper=1)
class_weight: TransitionChoice([None, 'balanced'])
“sgd classifier big search”
loss: TransitionChoice(['hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron'])
penalty: TransitionChoice(['l2', 'l1', 'elasticnet'])
alpha: Log(lower=1e-05, upper=100.0)
l1_ratio: Scalar(lower=0.01, upper=1).set_mutation(sigma=0.165).set_bounds(full_range_sampling=True, lower=0.01, upper=1)
max_iter: 100
learning_rate: TransitionChoice(['optimal', 'invscaling', 'adaptive', 'constant'])
eta0: Log(lower=1e-06, upper=1000.0)
power_t: Scalar(lower=0.1, upper=0.9).set_mutation(sigma=0.13333333333333333).set_bounds(full_range_sampling=True, lower=0.1, upper=0.9)
early_stopping: TransitionChoice([False, True])
validation_fraction: Scalar(lower=0.05, upper=0.5).set_mutation(sigma=0.075).set_bounds(full_range_sampling=True, lower=0.05, upper=0.5)
n_iter_no_change: Scalar(lower=5, upper=30).set_mutation(sigma=4.166666666666667).set_bounds(full_range_sampling=True, lower=5, upper=30).set_integer_casting()
class_weight: TransitionChoice([None, 'balanced'])
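`TransitionChoice` differs from an unordered categorical choice in that its options are treated as ordered, and mutation steps to a neighboring option rather than jumping anywhere in the list. A hedged stdlib sketch of that single-step mutation (assuming a step size of one; the real nevergrad parameter has more configuration than this):

```python
import random

def mutate_transition_choice(options, current, rng=random):
    """Step to an adjacent option in the ordered list,
    approximating TransitionChoice mutation."""
    i = options.index(current)
    j = max(0, min(len(options) - 1, i + rng.choice([-1, 1])))
    return options[j]

# e.g. the "sgd classifier big search" loss parameter
losses = ['hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron']
new_loss = mutate_transition_choice(losses, 'log')
assert new_loss in ('hinge', 'modified_huber')
```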
“svm classifier”#
Base Class Documentation:
sklearn.svm.SVC
Param Distributions
“base svm classifier”
kernel: 'rbf'
gamma: 'scale'
probability: True
“svm classifier dist”
kernel: 'rbf'
gamma: Log(lower=1e-06, upper=1)
C: Log(lower=0.0001, upper=10000.0)
probability: True
class_weight: TransitionChoice([None, 'balanced'])
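Conceptually, each preset is a set of keyword arguments for the base class: fixed values (like `kernel: 'rbf'`) are passed as-is, while distribution-valued entries are replaced by sampled values during the hyperparameter search. A sketch of that merge, with hypothetical sampled values (the merging shown here is an illustration, not the library's internal mechanism):

```python
# Fixed params from "base svm classifier"
base = {'kernel': 'rbf', 'gamma': 'scale', 'probability': True}

# Hypothetical draws from the "svm classifier dist" distributions
sampled = {'gamma': 3.2e-4, 'C': 12.5, 'class_weight': 'balanced'}

# Sampled values override the fixed defaults; the result would be
# passed on as keyword arguments, e.g. sklearn.svm.SVC(**params)
params = {**base, **sampled}
assert params['gamma'] == 3.2e-4 and params['probability'] is True
```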
“xgb classifier”#
Base Class Documentation:
xgboost.XGBClassifier
Param Distributions
“base xgb classifier”
verbosity: 0
objective: 'binary:logistic'
“xgb classifier dist1”
verbosity: 0
objective: 'binary:logistic'
n_estimators: Scalar(init=100, lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=False, lower=3, upper=500).set_integer_casting()
min_child_weight: Log(lower=1e-05, upper=10000.0)
subsample: Scalar(lower=0.3, upper=0.95).set_mutation(sigma=0.10833333333333332).set_bounds(full_range_sampling=True, lower=0.3, upper=0.95)
colsample_bytree: Scalar(lower=0.3, upper=0.95).set_mutation(sigma=0.10833333333333332).set_bounds(full_range_sampling=True, lower=0.3, upper=0.95)
reg_alpha: TransitionChoice([0, Log(lower=1e-05, upper=1)])
reg_lambda: TransitionChoice([0, Log(lower=1e-05, upper=1)])
“xgb classifier dist2”
verbosity: 0
objective: 'binary:logistic'
max_depth: TransitionChoice([None, Scalar(init=25, lower=2, upper=200).set_mutation(sigma=33.0).set_bounds(full_range_sampling=False, lower=2, upper=200).set_integer_casting()])
learning_rate: Scalar(lower=0.01, upper=0.5).set_mutation(sigma=0.08166666666666667).set_bounds(full_range_sampling=True, lower=0.01, upper=0.5)
n_estimators: Scalar(lower=3, upper=500).set_mutation(sigma=82.83333333333333).set_bounds(full_range_sampling=True, lower=3, upper=500).set_integer_casting()
min_child_weight: TransitionChoice([1, 5, 10, 50])
subsample: Scalar(lower=0.5, upper=1).set_mutation(sigma=0.08333333333333333).set_bounds(full_range_sampling=True, lower=0.5, upper=1)
colsample_bytree: Scalar(lower=0.4, upper=0.95).set_mutation(sigma=0.09166666666666666).set_bounds(full_range_sampling=True, lower=0.4, upper=0.95)
“xgb classifier dist3”
verbosity: 0
objective: 'binary:logistic'
learning_rate: Scalar(lower=0.005, upper=0.3).set_mutation(sigma=0.049166666666666664).set_bounds(full_range_sampling=True, lower=0.005, upper=0.3)
min_child_weight: Scalar(lower=0.5, upper=10).set_mutation(sigma=1.5833333333333333).set_bounds(full_range_sampling=True, lower=0.5, upper=10)
max_depth: TransitionChoice([3, 4, 5, 6, 7, 8, 9])
subsample: Scalar(lower=0.5, upper=1).set_mutation(sigma=0.08333333333333333).set_bounds(full_range_sampling=True, lower=0.5, upper=1)
colsample_bytree: Scalar(lower=0.5, upper=1).set_mutation(sigma=0.08333333333333333).set_bounds(full_range_sampling=True, lower=0.5, upper=1)
reg_alpha: Log(lower=1e-05, upper=1)
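Entries like `reg_alpha: TransitionChoice([0, Log(lower=1e-05, upper=1)])` in "xgb classifier dist1" nest a distribution inside a categorical: the sampler first picks between the constant `0` and the log-uniform branch, and only samples the inner distribution if that branch is chosen. A stdlib approximation of this two-stage draw (the 50/50 branch probability here is an assumption for illustration):

```python
import math
import random

def sample_reg_term(rng=random):
    """Either exactly 0, or a log-uniform draw in [1e-5, 1], mirroring
    TransitionChoice([0, Log(lower=1e-05, upper=1)])."""
    if rng.random() < 0.5:   # choose the constant branch
        return 0
    # otherwise sample the nested Log(lower=1e-05, upper=1) branch
    return math.exp(rng.uniform(math.log(1e-5), math.log(1)))

reg_alpha = sample_reg_term()
assert reg_alpha == 0 or 1e-5 <= reg_alpha <= 1
```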