M-estimate

class category_encoders.m_estimate.MEstimateEncoder(verbose=0, cols=None, drop_invariant=False, return_df=True, handle_unknown='value', handle_missing='value', random_state=None, randomized=False, sigma=0.05, m=1.0)[source]

M-probability estimate of likelihood.

This is a simplified version of the target encoder, known under names such as m-probability estimate or additive smoothing with known incidence rates. Compared to the target encoder, the m-probability estimate has only one tunable parameter (m), while the target encoder has two (min_samples_leaf and smoothing).
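The underlying smoothing can be sketched in a few lines of plain Python (an illustrative reimplementation of the formula, not the library's actual code): each category is mapped to (positives_in_category + prior * m) / (count_in_category + m), where prior is the global incidence rate of the target.

```python
# Illustrative sketch of the m-probability estimate for a binary target.
# Not the library implementation; the function name is made up.
from collections import Counter

def m_estimate(categories, targets, m=1.0):
    prior = sum(targets) / len(targets)            # global incidence rate
    counts = Counter(categories)                   # n per category
    positives = Counter(c for c, t in zip(categories, targets) if t)
    return {c: (positives[c] + prior * m) / (counts[c] + m) for c in counts}

cats = ["a", "a", "a", "b"]
ys = [1, 1, 0, 1]
enc = m_estimate(cats, ys, m=1.0)
# "a" is pulled from its raw mean 2/3 towards the prior 0.75
```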

Parameters:
verbose: int

integer indicating verbosity of the output. 0 for none.

cols: list

a list of columns to encode; if None, all string columns will be encoded.

drop_invariant: bool

boolean for whether or not to drop encoded columns with 0 variance.

return_df: bool

boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).

handle_missing: str

options are ‘return_nan’, ‘error’ and ‘value’, defaults to ‘value’, which returns the prior probability.

handle_unknown: str

options are ‘return_nan’, ‘error’ and ‘value’, defaults to ‘value’, which returns the prior probability.
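With the default 'value' option, both missing values and categories unseen during fitting are encoded as the prior probability. A minimal sketch of this behaviour (the mapping and prior below are made-up illustration values, not library internals):

```python
# Hypothetical fitted state: per-category encodings plus the global prior.
mapping = {"a": 0.6875, "b": 0.875}   # assumed fitted encodings
prior = 0.75                          # assumed global positive rate

def encode(value):
    """Sketch of handle_missing='value' / handle_unknown='value'."""
    if value is None:                  # missing -> prior
        return prior
    return mapping.get(value, prior)   # unknown category -> prior
```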

randomized: bool

adds normal (Gaussian) noise to the training data in order to reduce overfitting (test data are left untouched).

sigma: float

standard deviation (spread or “width”) of the normal distribution.

m: float

this is the “m” in the m-probability estimate. A higher value of m results in stronger shrinking towards the prior. m must be non-negative.
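How m interpolates between the raw category mean and the prior can be shown with a few hand-picked numbers (the helper and the values are illustrative assumptions, not library code):

```python
# Shrinkage towards the prior as m grows: m = 0 gives the raw category
# mean, and very large m pushes the encoding towards the global rate.
def shrink(positives, count, prior, m):
    return (positives + prior * m) / (count + m)

prior = 0.5                      # assumed global positive rate
raw = shrink(3, 4, prior, 0)     # 0.75, the unsmoothed category mean
mid = shrink(3, 4, prior, 4)     # (3 + 2) / 8 = 0.625, pulled halfway
big = shrink(3, 4, prior, 1000)  # close to the prior 0.5
```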

References

[1] A Preprocessing Scheme for High-Cardinality Categorical Attributes in Classification and Prediction Problems, equation 7, from

https://dl.acm.org/citation.cfm?id=507538

[2] On estimating probabilities in tree pruning, equation 1, from

https://link.springer.com/chapter/10.1007/BFb0017010

[3] Additive smoothing, from

https://en.wikipedia.org/wiki/Additive_smoothing#Generalized_to_the_case_of_known_incidence_rates

Methods

fit(self, X, y, **kwargs) Fit encoder according to X and binary y.
fit_transform(self, X[, y]) Fit the encoder, then transform the training data with the target available.
get_feature_names(self) Returns the names of all transformed / added columns.
get_params(self[, deep]) Get parameters for this estimator.
set_params(self, **params) Set the parameters of this estimator.
transform(self, X[, y, override_return_df]) Perform the transformation to new categorical data.
fit(self, X, y, **kwargs)[source]

Fit encoder according to X and binary y.

Parameters:
X : array-like, shape = [n_samples, n_features]

Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape = [n_samples]

Binary target values.

Returns:
self : encoder

Returns self.

fit_transform(self, X, y=None, **fit_params)[source]
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
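The rule above can be illustrated with a toy target-aware encoder (ToyTargetEncoder is hypothetical, written for this sketch; it uses the m-estimate formula with m = 1): training data are transformed with the target passed in, while test data are transformed without it.

```python
# Hypothetical toy encoder illustrating the transform(X, y) vs transform(X)
# convention. Not the category_encoders implementation.
class ToyTargetEncoder:
    def fit(self, X, y):
        self.prior = sum(y) / len(y)
        self.mapping = {}                     # category -> (count, positives)
        for x, t in zip(X, y):
            n, s = self.mapping.get(x, (0, 0))
            self.mapping[x] = (n + 1, s + t)
        return self

    def transform(self, X, y=None):
        # A real target-aware encoder would use y here (e.g. for noise or
        # leave-one-out on training data); this sketch ignores it.
        return [(s + self.prior) / (n + 1)
                for n, s in (self.mapping.get(x, (0, 0)) for x in X)]

enc = ToyTargetEncoder().fit(["a", "a", "b"], [1, 0, 1])
train_codes = enc.transform(["a", "a", "b"], y=[1, 0, 1])  # training: pass y
test_codes = enc.transform(["a", "c"])                     # test set: no y
```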
get_feature_names(self)[source]

Returns the names of all transformed / added columns.

Returns:
feature_names: list

A list with all feature names transformed or added. Note: potentially dropped features are not included!

transform(self, X, y=None, override_return_df=False)[source]

Perform the transformation to new categorical data.

When the data are used for model training, it is important to also pass the target, so that leave-one-out-style encoding can be applied.

Parameters:
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples], when transforming training data (leave one out)

None, when transforming without target information (such as the test set)

Returns:
p : array, shape = [n_samples, n_numeric + N]

Transformed values with encoding applied.