Quantile Encoder
- class category_encoders.quantile_encoder.QuantileEncoder(verbose: int = 0, cols: list[str] = None, drop_invariant: bool = False, return_df: bool = True, handle_missing: str = 'value', handle_unknown: str = 'value', quantile: float = 0.5, m: float = 1.0)[source]
Quantile Encoding for categorical features.
This is a statistically modified version of the target-based MEstimateEncoder, where the selected features are replaced by a statistical quantile of the target instead of its mean. Replacing with the median is the particular case quantile = 0.5. Compared to MEstimateEncoder it has two tunable parameters, m and quantile.
- Parameters:
- verbose: int
integer indicating verbosity of the output. 0 for none.
- quantile: float
float indicating the statistical quantile to compute; 0.5 for the median.
- m: float
this is the “m” in the m-probability estimate. A higher value of m results in stronger shrinking toward the global quantile. m must be non-negative; 0 means no smoothing. See the formula sketch after this parameter list.
- cols: list
a list of columns to encode; if None, all string columns will be encoded.
- drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
- return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
- handle_missing: str
options are ‘error’, ‘return_nan’ and ‘value’, defaults to ‘value’, which returns the target quantile.
- handle_unknown: str
options are ‘error’, ‘return_nan’ and ‘value’, defaults to ‘value’, which returns the target quantile.
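As a sketch of how m and quantile interact (assuming the m-estimate blending of references [2]-[4] below carried over to quantiles as in [1]; this is an illustration, not a quotation of the library's code), a category c observed n_c times would be encoded roughly as

$$\hat{x}_c = \frac{n_c \, q_c + m \, q_{\text{global}}}{n_c + m}$$

where q_c is the chosen quantile of the target restricted to category c and q_global is the same quantile over the whole training target. With m = 0 the category quantile is used unchanged; larger m shrinks the encoding toward the global quantile.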
Methods
- fit(X[, y]): Fits the encoder according to X and y.
- fit_quantile_encoding(X, y): Calculate the quantile encoding mapping.
- fit_transform(X[, y]): Fit and transform using the target information.
- get_feature_names(): Deprecated method to get feature names.
- get_feature_names_in(): Get the names of all input columns present when fitting.
- get_feature_names_out([input_features]): Get the names of all transformed / added columns.
- get_metadata_routing(): Get metadata routing of this object.
- get_params([deep]): Get parameters for this estimator.
- quantile_encode(X_in): Apply quantile encoding.
- set_output(*[, transform]): Set output container.
- set_params(**params): Set the parameters of this estimator.
- set_transform_request(*[, override_return_df]): Configure whether metadata should be requested to be passed to the transform method.
- transform(X[, y, override_return_df]): Perform the transformation to new categorical data.
References
[1] Quantile Encoder: Tackling High Cardinality Categorical Features in Regression Problems, https://link.springer.com/chapter/10.1007%2F978-3-030-85529-1_14
[2] A Preprocessing Scheme for High-Cardinality Categorical Attributes in Classification and Prediction Problems, equation 7, from https://dl.acm.org/citation.cfm?id=507538
[3] On estimating probabilities in tree pruning, equation 1, from https://link.springer.com/chapter/10.1007/BFb0017010
[4] Additive smoothing, from https://en.wikipedia.org/wiki/Additive_smoothing#Generalized_to_the_case_of_known_incidence_rates
[5] Target encoding done the right way, https://maxhalford.github.io/blog/target-encoding/
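A minimal usage sketch (the column name and toy data below are illustrative, not part of the documentation):

```python
import pandas as pd
from category_encoders.quantile_encoder import QuantileEncoder

# Toy regression data: one categorical column and a numeric target.
X = pd.DataFrame({"city": ["a", "a", "a", "b", "b", "c"]})
y = pd.Series([10.0, 14.0, 12.0, 30.0, 34.0, 50.0])

# Replace each category by a smoothed median (quantile=0.5) of the target.
enc = QuantileEncoder(cols=["city"], quantile=0.5, m=1.0)
X_encoded = enc.fit_transform(X, y)
print(X_encoded)
```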
- fit(X: ndarray | DataFrame | list | generic | csr_matrix, y: list | Series | ndarray | tuple | DataFrame | None = None, **kwargs)
Fits the encoder according to X and y.
- Parameters:
- X: array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and n_features is the number of features.
- y: array-like, shape = [n_samples]
Target values.
- Returns:
- self: encoder
Returns self.
- fit_quantile_encoding(X: DataFrame, y: Series) dict[str, Series][source]
Calculate the quantile encoding mapping.
- Parameters:
- X: training data.
- y: target data.
- Returns:
- mapping: dict of column name -> Series mapping each category label to its quantile encoding.
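A hedged sketch of what this mapping could compute, assuming the m-estimate shrinkage toward the global target quantile sketched earlier (an illustrative helper, not the library's verbatim implementation):

```python
import numpy as np
import pandas as pd

def quantile_encoding_mapping(X: pd.DataFrame, y: pd.Series, cols, quantile=0.5, m=1.0):
    """Return {column name -> Series mapping category label -> smoothed target quantile}."""
    prior = np.quantile(y, quantile)  # global target quantile used for shrinkage
    mapping = {}
    for col in cols:
        grouped = y.groupby(X[col])
        stats = grouped.apply(lambda s: np.quantile(s, quantile))  # per-category quantile
        counts = grouped.size()                                    # rows per category
        # m-estimate style blend: small categories are pulled toward the prior
        mapping[col] = (counts * stats + m * prior) / (counts + m)
    return mapping
```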
- fit_transform(X: ndarray | DataFrame | list | generic | csr_matrix, y: list | Series | ndarray | tuple | DataFrame | None = None, **fit_params)
Fit and transform using the target information.
This also uses the target for transforming, not only for training.
- get_feature_names() ndarray
Deprecated method to get feature names. Use get_feature_names_out instead.
- get_feature_names_in() ndarray
Get the names of all input columns present when fitting.
These columns are necessary for the transform step.
- get_feature_names_out(input_features=None) ndarray
Get the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out method takes feature_names_in as an argument and determines the output feature names from the input. A fit is usually not necessary there; where it is, a NotFittedError is raised. Here a fit is always required, and the fitted output columns are returned.
- Returns:
- feature_names: np.ndarray
A numpy array with all feature names transformed or added. Note: potentially dropped features (because the feature is constant/invariant) are not included!
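For example, on the fitted toy encoder above the output names simply echo the encoded input column (illustrative):

```python
enc.fit(X, y)
print(enc.get_feature_names_out())  # e.g. array(['city'], dtype=object)
```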
- get_metadata_routing()
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routing: MetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)
Get parameters for this estimator.
- Parameters:
- deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params: dict
Parameter names mapped to their values.
- set_output(*, transform=None)
Set output container.
See the scikit-learn set_output example (plot_set_output.py) for an example of how to use the API.
- Parameters:
- transform: {“default”, “pandas”, “polars”}, default=None
Configure output of transform and fit_transform.
“default”: Default output format of a transformer
“pandas”: DataFrame output
“polars”: Polars output
None: Transform configuration is unchanged
Added in version 1.4: “polars” option was added.
- Returns:
- self: estimator instance
Estimator instance.
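An illustrative call (assuming pandas is available):

```python
enc = QuantileEncoder(cols=["city"]).set_output(transform="pandas")
# transform / fit_transform should now return pandas DataFrames
```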
- set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter>, so that it's possible to update each component of a nested object.
- Parameters:
- **params: dict
Estimator parameters.
- Returns:
- self: estimator instance
Estimator instance.
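For instance, the constructor parameters documented above can be updated in place:

```python
enc = QuantileEncoder()
enc.set_params(quantile=0.75, m=5.0)  # switch to the 75th percentile with stronger smoothing
```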
- set_transform_request(*, override_return_df: bool | None | str = '$UNCHANGED$') QuantileEncoder
Configure whether metadata should be requested to be passed to the transform method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested and passed to transform if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to transform.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
- Parameters:
- override_return_df: str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the override_return_df parameter in transform.
- Returns:
- self: object
The updated object.
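A minimal illustrative setup (only meaningful inside a routing-aware meta-estimator; a sketch under that assumption):

```python
import sklearn

sklearn.set_config(enable_metadata_routing=True)
enc = QuantileEncoder().set_transform_request(override_return_df=True)
# A routing-aware meta-estimator may now pass override_return_df to transform().
```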
- transform(X: ndarray | DataFrame | list | generic | csr_matrix, y: list | Series | ndarray | tuple | DataFrame | None = None, override_return_df: bool = False)
Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation in order to avoid overfitting. On training data transform should be called with y, on test data without.
- Parameters:
- Xarray-like, shape = [n_samples, n_features]
- yarray-like, shape = [n_samples] or None
- override_return_dfbool
override self.return_df to force a DataFrame to be returned
- Returns:
- p: array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
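For example, with illustrative train/test splits (X_train, y_train, X_test are assumed to exist):

```python
enc = QuantileEncoder(cols=["city"]).fit(X_train, y_train)

X_train_encoded = enc.transform(X_train, y_train)  # training data: pass the target
X_test_encoded = enc.transform(X_test)             # new/test data: no target
```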