BPt.EvalResults
class BPt.EvalResults(estimator, ps, encoders=None, progress_bar=True, store_preds=False, store_estimators=False, store_timing=False, store_cv=True, store_data_ref=True, eval_verbose=0, progress_loc=None, mute_warnings=False, compare_bars=None)
This class is returned from calls to evaluate(), and can be used to store information from evaluate or to compute additional feature importances. It should typically not be initialized by the user.
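For orientation, a minimal, hypothetical usage sketch; the Dataset setup and the particular pipeline pieces (Scaler('standard'), Model('ridge')) are illustrative assumptions, not requirements:

    import BPt as bp

    # Assumes `data` is a BPt.Dataset whose target role has already been set.
    pipeline = bp.Pipeline(steps=[bp.Scaler('standard'), bp.Model('ridge')])

    # evaluate() runs the pipeline under cross-validation and returns
    # an EvalResults instance.
    results = bp.evaluate(pipeline=pipeline, dataset=data,
                          store_preds=True, store_estimators=True)

    print(results.mean_scores)  # dict mapping scorer name -> mean score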
Attributes

all_train_subjects
    This parameter stores the training subjects / index.

all_val_subjects
    This parameter stores the validation subjects / index.

coef_
    This attribute represents the mean coef_ as a numpy array across all folds.
cv
    If store_cv is set to True, a deepcopy of the passed cv splitter will be stored.
estimator
    This parameter stores the passed, unfitted estimator used in this evaluation.
estimators
    If the parameter store_estimators is set to True when calling evaluate(), then this parameter will store the fitted estimators in a list.

feat_names
    The feature names corresponding to any measures of feature importance, stored as a list of lists, where the top-level list represents each fold of cross-validation.
feature_importances_
    This property stores the mean values across fitted estimators, assuming each fitted estimator has a non-empty feature_importances_ attribute.
fis_
    This property stores the mean value across each fold of the CV for either the coef_ or feature_importances_ attribute.
mean_scores
    This parameter stores the mean scores as a dictionary, indexed by the name of each scorer, where each value is the mean score across folds for that scorer.
mean_timing
    This property stores information on the mean fit and scoring times, if timing was requested by the original call to evaluate().

n_folds_
    A quick helper property to get the number of CV folds this object was evaluated with.
n_subjects_
    A quick helper property to get the sum of the lengths of train_subjects and val_subjects.

preds
    If the parameter store_preds is set to True when calling evaluate(), then this parameter will store the predictions from every evaluation fold.

ps
    A saved and pre-processed version of the problem_spec used (with any extra_params applied) when running this evaluation.
score
    This property is a quick helper for accessing the mean score of whichever scorer is first (in the case of multiple scorers).
scores
    This property stores the scores for each scorer as a dictionary of lists, where the keys are the names of the scorers and each list holds the score obtained for each fold, with each index corresponding to a fold of cross-validation.
std_scores
    This parameter stores the standard deviation of the scores as a dictionary, indexed by the name of each scorer, where each value is the standard deviation across evaluation folds for that scorer.
timing
    This property stores information on the fit and scoring times, if requested by the original call to evaluate().

train_subjects
    This parameter stores the training subjects / index.

val_subjects
    This parameter stores the validation subjects / index.

weighted_mean_scores
    This property stores the mean scores across evaluation folds (similar to mean_scores), but weighted by the number of subjects / datapoints in each fold.
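The attributes above can be read directly once evaluation has finished. A short sketch, assuming `results` is the EvalResults instance returned by evaluate():

    # Summary scores, keyed by scorer name.
    print(results.mean_scores)
    print(results.std_scores)       # standard deviation across folds

    # Per-fold scores: a dict of lists, one entry per CV fold.
    print(results.scores)

    # Convenience helpers.
    print(results.score)            # mean score of the first scorer
    print(results.n_folds_)         # number of CV folds
    print(results.n_subjects_)      # total subjects across train + val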
Methods

compare(other[, rope_interval])
    This method performs a statistical comparison between the results from the evaluation stored in this object and another instance of EvalResults (see the sketch following this table).

get_X_transform_df([dataset, fold, ...])
    This method is used as a helper for getting the transformed input data for one of the saved models run during evaluate.
get_coef_()
    This function returns each coef_ value across fitted estimators.

get_feature_importances_()
    This function returns each feature_importances_ value across fitted estimators.
get_fis([mean, abs])
    This method will return a pandas DataFrame with each row a fold and each column a feature, if the underlying model supported either the coef_ or feature_importances_ attribute (see the sketch following this table).

get_inverse_fis([fis])
    Try to inverse transform stored feature importances (either beta weights or automatically calculated feature importances) back to their original space.
get_preds_dfs([drop_nan_targets])
    This function can be used to return the raw predictions made during evaluation as a list of pandas DataFrames.

permutation_importance([dataset, n_repeats, ...])
    This function computes permutation feature importances via the base scikit-learn function sklearn.inspection.permutation_importance().

run_permutation_test([n_perm, dataset, ...])
    Compute significance values for the original results according to a permutation test scheme.
subset_by(group[, dataset, decode_values])
    Generate instances of EvalResultsSubset based on subsets of subjects drawn from different unique groups.

to_pickle(loc)
    Quick helper to save as pickle.
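A hedged sketch of the feature importance helpers above; the n_repeats value is an illustrative choice, and `data` stands in for the original BPt.Dataset:

    # Assumes evaluate() was run with store_estimators=True.
    fis_df = results.get_fis()        # rows are folds, columns are features
    print(fis_df.mean())              # mean importance per feature

    # Where possible, map importances back to the original feature space.
    inv_fis = results.get_inverse_fis()

    # Permutation importances, via sklearn.inspection.permutation_importance.
    perm = results.permutation_importance(dataset=data, n_repeats=10)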
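Likewise, a sketch of comparing, significance-testing, and saving results; n_perm=100 and the file name are illustrative assumptions:

    # `other_results`: a second, hypothetical EvalResults to compare against.
    comparison = results.compare(other_results)

    # Permutation test for the significance of the original scores.
    p_values = results.run_permutation_test(n_perm=100, dataset=data)

    # Save to disk; reload later with the standard pickle module.
    results.to_pickle('results.pkl')

    import pickle
    with open('results.pkl', 'rb') as f:
        reloaded = pickle.load(f)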