Generalized Linear Mixed Model Encoder

class category_encoders.glmm.GLMMEncoder(verbose=0, cols=None, drop_invariant=False, return_df=True, handle_unknown='value', handle_missing='value', random_state=None, randomized=False, sigma=0.05, binomial_target=None)[source]

Generalized linear mixed model.

Supported targets: binomial and continuous. For polynomial target support, see PolynomialWrapper.

This is a supervised encoder similar to TargetEncoder or MEstimateEncoder, but there are some advantages:

1. Solid statistical theory behind the technique. Mixed effects models are a mature branch of statistics.

2. No hyper-parameters to tune. The amount of shrinkage is automatically determined through the estimation process. In short, the fewer observations a category has and/or the more the outcome varies for a category, the higher the regularization towards “the prior” or “grand mean”.

3. The technique is applicable for both continuous and binomial targets. If the target is continuous, the encoder returns the regularized difference of the observation’s category from the global mean. If the target is binomial, the encoder returns regularized log odds per category.

In comparison to JamesSteinEncoder, this encoder utilizes generalized linear mixed models from the statsmodels library.

Note: This is an alpha implementation. The API of the method may change in the future.
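
A minimal usage sketch with a binomial target (the column name and data below are illustrative, not part of the library):

import pandas as pd
import category_encoders as ce

X = pd.DataFrame({'color': ['red', 'blue', 'blue', 'green', 'red', 'green']})
y = pd.Series([1, 0, 0, 1, 1, 0])  # binomial target with values {0, 1}

encoder = ce.GLMMEncoder(cols=['color'], binomial_target=True)
X_encoded = encoder.fit_transform(X, y)  # each category is replaced by its regularized log odds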

Parameters
verbose: int

integer indicating verbosity of the output. 0 for none.

cols: list

a list of columns to encode, if None, all string columns will be encoded.

drop_invariant: bool

boolean for whether or not to drop encoded columns with 0 variance.

return_df: bool

boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).

handle_missing: str

options are ‘return_nan’, ‘error’ and ‘value’, defaults to ‘value’, which returns 0.

handle_unknown: str

options are ‘return_nan’, ‘error’ and ‘value’, defaults to ‘value’, which returns 0.

randomized: bool

adds normal (Gaussian) distribution noise to the training data in order to decrease overfitting (test data are left untouched); see the sketch after this parameter list.

sigma: float

standard deviation (spread or “width”) of the normal distribution.

binomial_target: bool

if True, the target must be binomial with values {0, 1} and Binomial mixed model is used. If False, the target must be continuous and Linear mixed model is used. If None (the default), a heuristic is applied to estimate the target type.
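
The sketch below illustrates the noise-related options together with a continuous target (the column name and data are illustrative):

import pandas as pd
import category_encoders as ce

X = pd.DataFrame({'city': ['a', 'a', 'b', 'b', 'b', 'c']})
y = pd.Series([1.2, 0.8, 3.5, 3.9, 4.1, 2.0])  # continuous target

encoder = ce.GLMMEncoder(
    cols=['city'],
    binomial_target=False,  # linear mixed model for a continuous target
    randomized=True,        # add Gaussian noise to the training-time encoding only
    sigma=0.05,             # standard deviation of that noise
    random_state=42,
)
X_train_encoded = encoder.fit_transform(X, y)  # noisy, regularized encodings for training
X_scored = encoder.transform(X)                # no y given, so no noise is added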

References

[1] Gelman, A. and Hill, J. Data Analysis Using Regression and Multilevel/Hierarchical Models, page 253. https://faculty.psau.edu.sa/filedownload/doc-12-pdf-a1997d0d31f84d13c1cdc44ac39a8f2c-original.pdf

Attributes
feature_names

Methods

fit(X[, y])

Fits the encoder according to X and y.

fit_transform(X[, y])

Fit the encoder and transform the training data using the target y.

get_feature_names()

Returns the names of all transformed / added columns.

get_params([deep])

Get parameters for this estimator.

set_params(**params)

Set the parameters of this estimator.

transform(X[, y, override_return_df])

Perform the transformation to new categorical data.


fit(X, y=None, **kwargs)

Fits the encoder according to X and y.

Parameters
X : array-like, shape = [n_samples, n_features]

Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape = [n_samples]

Target values.

Returns
self : encoder

Returns self.

fit_transform(X, y=None, **fit_params)
Encoders that utilize the target must make sure that the training data are transformed with:

transform(X, y)

and not with:

transform(X)
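
For illustration, a sketch of this training/test pattern (the variable names and data are assumptions made for the example):

import pandas as pd
import category_encoders as ce

X_train = pd.DataFrame({'color': ['red', 'blue', 'blue', 'green', 'red', 'green']})
y_train = pd.Series([1, 0, 0, 1, 1, 0])
X_test = pd.DataFrame({'color': ['blue', 'green']})

encoder = ce.GLMMEncoder(cols=['color'])
X_train_encoded = encoder.fit_transform(X_train, y_train)  # internally transforms with the target, i.e. transform(X, y)
X_test_encoded = encoder.transform(X_test)                 # test data: transform without the target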

get_feature_names() → List[str]

Returns the names of all transformed / added columns.

Returns
feature_names: list

A list with all feature names transformed or added. Note: potentially dropped features (because the feature is constant/invariant) are not included!

get_params(deep=True)

Get parameters for this estimator.

Parameters
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params : dict

Parameter names mapped to their values.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object (see the sketch below).

Parameters
**params : dict

Estimator parameters.

Returns
self : estimator instance

Estimator instance.
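
For example, with a scikit-learn Pipeline whose first step is named 'encoder' (the step names and parameter values here are illustrative):

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
import category_encoders as ce

pipe = Pipeline([('encoder', ce.GLMMEncoder()), ('model', LogisticRegression())])
pipe.set_params(encoder__sigma=0.1, encoder__randomized=True)  # nested <component>__<parameter> form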

transform(X, y=None, override_return_df=False)

Perform the transformation to new categorical data.

Some encoders behave differently depending on whether y is given or not. This is mainly due to regularisation in order to avoid overfitting. On training data, transform should be called with y; on test data, without y (a sketch follows at the end of this section).

Parameters
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool

override self.return_df to force the method to return a DataFrame

Returns
p : array or DataFrame, shape = [n_samples, n_features_out]

Transformed values with encoding applied.
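
A self-contained sketch of transforming unseen data, assuming the default handle_unknown='value' (the data are illustrative):

import pandas as pd
import category_encoders as ce

X_train = pd.DataFrame({'color': ['red', 'blue', 'blue', 'green']})
y_train = pd.Series([1, 0, 0, 1])

encoder = ce.GLMMEncoder(cols=['color']).fit(X_train, y_train)

X_new = pd.DataFrame({'color': ['red', 'purple']})  # 'purple' was never seen during fit
X_new_encoded = encoder.transform(X_new)            # the unknown category is encoded as 0 (the prior)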