Summary Encoder

class category_encoders.quantile_encoder.SummaryEncoder(verbose: int = 0, cols: list[str] = None, drop_invariant: bool = False, return_df: bool = True, handle_missing: str = 'value', handle_unknown: str = 'value', quantiles: Sequence[float] = (0.25, 0.75), m: float = 1.0)[source]

Summary Encoding for categorical features.

It’s an encoder designed for creating richer representations by applying quantile encoding for a set of quantiles.
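
A minimal usage sketch (the data and the printed column names are illustrative; exact output column naming may vary between category_encoders versions):

import pandas as pd
from category_encoders.quantile_encoder import SummaryEncoder

# Toy regression data: one categorical feature and a numeric target.
X = pd.DataFrame({"city": ["a", "a", "b", "b", "b", "c"]})
y = pd.Series([10.0, 12.0, 20.0, 22.0, 24.0, 30.0])

# One encoded column is produced per (feature, quantile) pair.
encoder = SummaryEncoder(cols=["city"], quantiles=(0.25, 0.5, 0.75), m=1.0)
X_encoded = encoder.fit_transform(X, y)
print(X_encoded.columns.tolist())  # e.g. ['city_25', 'city_50', 'city_75']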

Parameters:
verbose: int

integer indicating verbosity of the output. 0 for none.

quantiles: list

list of floats indicating the statistical quantiles. Each quantile in the list produces its own encoded output column for every encoded feature.

m: float

this is the “m” in the m-probability estimate. A higher value of m results in stronger shrinking towards the global quantile. m must be non-negative; use 0 for no smoothing (see the smoothing sketch after this parameter list).

cols: list

a list of columns to encode, if None, all string columns will be encoded.

drop_invariant: bool

boolean for whether or not to drop columns with 0 variance.

return_df: bool

boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).

handle_missing: str

options are ‘error’, ‘return_nan’ and ‘value’, defaults to ‘value’, which returns the target quantile.

handle_unknown: str

options are ‘error’, ‘return_nan’ and ‘value’, defaults to ‘value’, which returns the target quantile.
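
The smoothing controlled by m follows the m-probability estimate idea (see reference [2] in the References section below): the per-category quantile is blended with the global quantile, weighted by the category size n and by m respectively. A minimal sketch of that blend, as an illustration of the idea rather than the library's exact internals:

import numpy as np

def m_estimate_quantile(category_targets, all_targets, q=0.25, m=1.0):
    # Illustrative only: blend the category's target quantile with the global
    # target quantile. With m = 0 the raw category quantile is returned; a
    # larger m shrinks small categories more strongly towards the global value.
    n = len(category_targets)
    category_quantile = np.quantile(category_targets, q)
    global_quantile = np.quantile(all_targets, q)
    return (n * category_quantile + m * global_quantile) / (n + m)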

Methods

fit(X, y)

Fits the encoder according to X and y by fitting the individual encoders.

fit_transform(X[, y])

Fit and transform using target.

get_feature_names()

Deprecated method to get feature names.

get_feature_names_in()

Get the names of all input columns present when fitting.

get_feature_names_out([input_features])

Returns the names of all transformed / added columns.

get_metadata_routing()

Get metadata routing of this object.

get_params([deep])

Get parameters for this estimator.

set_params(**params)

Set the parameters of this estimator.

set_transform_request(*[, override_return_df])

Configure whether metadata should be requested to be passed to the transform method.

transform(X[, y, override_return_df])

Summary encode new data.

References

[1] Quantile Encoder: Tackling High Cardinality Categorical Features in Regression Problems, https://link.springer.com/chapter/10.1007%2F978-3-030-85529-1_14

[2] A Preprocessing Scheme for High-Cardinality Categorical Attributes in Classification and Prediction Problems, equation 7, https://dl.acm.org/citation.cfm?id=507538

[3] On estimating probabilities in tree pruning, equation 1, https://link.springer.com/chapter/10.1007/BFb0017010

[4] Additive smoothing, https://en.wikipedia.org/wiki/Additive_smoothing#Generalized_to_the_case_of_known_incidence_rates

[5] Target encoding done the right way, https://maxhalford.github.io/blog/target-encoding/

fit(X: ndarray | DataFrame | list | generic | csr_matrix, y: list | Series | ndarray | tuple | DataFrame) SummaryEncoder[source]

Fits the encoder according to X and y by fitting the individual encoders.

Parameters:
X: array-like, shape = [n_samples, n_features]

Training vectors, where n_samples is the number of samples and n_features is the number of features.

y: array-like, shape = [n_samples]

Target values.

Returns:
self: encoder

Returns self.
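
A short sketch of the usual fit-then-transform flow on held-out data (column and category names are hypothetical; with the default handle_unknown='value', the unseen category is encoded with the target quantile):

import pandas as pd
from category_encoders.quantile_encoder import SummaryEncoder

X_train = pd.DataFrame({"city": ["a", "a", "b", "b"]})
y_train = pd.Series([1.0, 2.0, 3.0, 4.0])
X_test = pd.DataFrame({"city": ["a", "z"]})  # "z" was never seen during fit

encoder = SummaryEncoder(cols=["city"]).fit(X_train, y_train)
X_test_encoded = encoder.transform(X_test)  # "z" falls back to the target quantile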

fit_transform(X: ndarray | DataFrame | list | generic | csr_matrix, y: list | Series | ndarray | tuple | DataFrame | None = None)[source]

Fit and transform using target.

This also uses the target for transforming, not only for training.

get_feature_names() ndarray[source]

Deprecated method to get feature names. Use get_feature_names_out instead.

get_feature_names_in() ndarray[source]

Get the names of all input columns present when fitting.

These columns are necessary for the transform step.

get_feature_names_out(input_features=None) ndarray[source]

Returns the names of all transformed / added columns.

Note that in sklearn the get_feature_names_out function takes feature_names_in as an argument and determines the output feature names from that input. A fit is usually not necessary there, and where it is, a NotFittedError is raised for unfitted estimators. This encoder always requires a fit and returns the fitted output columns.

Returns:
feature_names: np.ndarray

A list with all feature names transformed or added. Note: potentially dropped features (because the feature is constant/invariant) are not included!
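
For example (a small sketch; the exact names depend on the configured quantiles and may vary by version):

import pandas as pd
from category_encoders.quantile_encoder import SummaryEncoder

X = pd.DataFrame({"city": ["a", "a", "b"]})
y = pd.Series([1.0, 2.0, 3.0])

encoder = SummaryEncoder(cols=["city"]).fit(X, y)
print(encoder.get_feature_names_out())  # e.g. ['city_25', 'city_75'] with the default quantiles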

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing: MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep: bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params: dict

Parameter names mapped to their values.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params: dict

Estimator parameters.

Returns:
self: estimator instance

Estimator instance.
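
A brief illustration of the nested-parameter form (the pipeline step name "encoder" is hypothetical):

from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression
from category_encoders.quantile_encoder import SummaryEncoder

pipe = Pipeline([("encoder", SummaryEncoder()), ("model", LinearRegression())])

# Set parameters directly on the encoder ...
pipe.named_steps["encoder"].set_params(m=5.0, quantiles=(0.1, 0.5, 0.9))

# ... or through the pipeline, using the <component>__<parameter> form.
pipe.set_params(encoder__m=5.0, encoder__quantiles=(0.1, 0.5, 0.9))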

set_transform_request(*, override_return_df: bool | None | str = '$UNCHANGED$') SummaryEncoder

Configure whether metadata should be requested to be passed to the transform method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to transform.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Parameters:
override_return_df: str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for override_return_df parameter in transform.

Returns:
self: object

The updated object.
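
A hedged sketch of how this could be used (metadata routing must be enabled globally; the pipeline step name is hypothetical):

import sklearn
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression
from category_encoders.quantile_encoder import SummaryEncoder

sklearn.set_config(enable_metadata_routing=True)

encoder = SummaryEncoder().set_transform_request(override_return_df=True)
pipe = Pipeline([("encoder", encoder), ("model", LinearRegression())])
# A routing-aware meta-estimator may now pass override_return_df through to transform.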

transform(X: ndarray | DataFrame | list | generic | csr_matrix, y: list | Series | ndarray | tuple | DataFrame | None = None, override_return_df: bool = False) DataFrame | ndarray[source]

Summary encode new data.

Parameters:
X: data to encode.
y: optional target information.
override_return_df: if True, return a numpy array instead of a DataFrame regardless of the return_df parameter.

Returns:
encoded data.