James-Stein Encoder

class category_encoders.james_stein.JamesSteinEncoder(verbose=0, cols=None, drop_invariant=False, return_df=True, handle_unknown='value', handle_missing='value', model='independent', random_state=None, randomized=False, sigma=0.05)[source]

James-Stein estimator.

For feature value i, James-Stein estimator returns a weighted average of:

  1. The mean target value for the observed feature value i.
  2. The mean target value (regardless of the feature value).

This can be written as:

JS_i = (1-B)*mean(y_i) + B*mean(y)

The question is: what should the weight B be? If we put too much weight on the conditional mean value, we will overfit. If we put too much weight on the global mean, we will underfit. The canonical solution in machine learning is to perform cross-validation. However, Charles Stein came up with a closed-form solution to the problem. The intuition is: if the estimate of mean(y_i) is unreliable (y_i has high variance), we should put more weight on mean(y). Stein put this into an equation as:

B = var(y_i) / (var(y_i)+var(y))
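As a rough numerical illustration of the two formulas above (a minimal sketch with made-up data, not the encoder's internal code):

    import numpy as np

    # Toy example: one categorical value "a" and the full target column.
    y = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0])  # all observations
    y_a = np.array([1.0, 1.0, 0.0])                                   # observations with feature value "a"

    # Shrinkage weight: the noisier the group, the closer B gets to 1.
    B = y_a.var() / (y_a.var() + y.var())

    # James-Stein estimate for value "a": weighted average of group and global mean.
    js_a = (1.0 - B) * y_a.mean() + B * y.mean()
    print(f"B = {B:.3f}, JS_a = {js_a:.3f}")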

The only remaining issue is that we do not know var(y), let alone var(y_i). Hence, we have to estimate the variances. But how can we reliably estimate the variances when we already struggle to estimate the mean values? There are multiple solutions:

  1. If we have the same count of observations for each feature value i and all y_i are close to each other, we can pretend that all var(y_i) are identical. This is called a pooled model.

  2. If the observation counts are not equal, it makes sense to replace the variances with squared standard errors, which penalize small observation counts:

SE^2 = var(y)/count(y)

This is called an independent model (see the sketch below).
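The independent-model idea can be sketched directly from the prose above: per-category variances are replaced by squared standard errors, so small categories are shrunk harder toward the global mean. This is a hedged illustration that follows the text literally, not the library's exact implementation; the function name is invented.

    import pandas as pd

    def independent_js_estimates(x, y):
        """Illustrative per-category James-Stein estimates using squared
        standard errors in place of variances (not the library's exact code)."""
        df = pd.DataFrame({"x": x, "y": y})
        global_mean = df["y"].mean()
        se2_global = df["y"].var(ddof=0) / len(df)           # SE^2 = var(y) / count(y)

        estimates = {}
        for value, group in df.groupby("x"):
            se2_group = group["y"].var(ddof=0) / len(group)  # small groups -> large SE^2
            denom = se2_group + se2_global
            B = se2_group / denom if denom > 0 else 0.0
            estimates[value] = (1.0 - B) * group["y"].mean() + B * global_mean
        return estimates

    print(independent_js_estimates(["a", "a", "a", "b", "b", "c"], [1, 1, 0, 0, 1, 1]))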

The James-Stein estimator has, however, one practical limitation: it was defined only for normal distributions. If you want to apply it to binary classification, which allows only the values {0, 1}, it is better to first convert the mean target value from the bounded interval [0, 1] into an unbounded interval by replacing mean(y) with the log-odds ratio:

log-odds_ratio_i = log(mean(y_i)/mean(y_not_i))

This is called the binary model. The estimation of this model's parameters is, however, tricky and sometimes fails fatally. In those situations, it is better to use the beta model, which generally delivers slightly worse accuracy than the binary model but does not suffer from fatal failures.
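For binary targets, the choice between these variants is made through the model parameter of the encoder. A minimal usage sketch (the column and data are invented for illustration):

    import pandas as pd
    from category_encoders.james_stein import JamesSteinEncoder

    X = pd.DataFrame({"color": ["red", "red", "red", "blue", "blue", "blue", "green", "green", "green"]})
    y = pd.Series([1, 0, 1, 0, 1, 0, 1, 1, 0])

    # 'binary' works on the log-odds scale; 'beta' trades a bit of accuracy
    # for robustness against the fatal failures mentioned above.
    enc_binary = JamesSteinEncoder(cols=["color"], model="binary").fit(X, y)
    enc_beta = JamesSteinEncoder(cols=["color"], model="beta").fit(X, y)

    print(enc_binary.transform(X))
    print(enc_beta.transform(X))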

Parameters:
verbose: int

integer indicating verbosity of the output. 0 for none.

cols: list

a list of columns to encode; if None, all string columns will be encoded.

drop_invariant: bool

boolean for whether or not to drop encoded columns with 0 variance.

return_df: bool

boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).

handle_missing: str

options are ‘return_nan’, ‘error’ and ‘value’, defaults to ‘value’, which returns the prior probability.

handle_unknown: str

options are ‘return_nan’, ‘error’ and ‘value’, defaults to ‘value’, which returns the prior probability.

model: str

options are ‘pooled’, ‘beta’, ‘binary’ and ‘independent’, defaults to ‘independent’.

randomized: bool

adds normal (Gaussian) distribution noise into training data in order to decrease overfitting (testing data are untouched).

sigma: float

standard deviation (spread or “width”) of the normal distribution.
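Putting the parameters together, a minimal end-to-end sketch (the data and column name are invented); with handle_unknown='value', a category not seen during fit is encoded with the prior:

    import pandas as pd
    from category_encoders.james_stein import JamesSteinEncoder

    X_train = pd.DataFrame({"city": ["NY", "NY", "LA", "LA", "SF", "SF"]})
    y_train = pd.Series([1, 0, 1, 1, 0, 1])

    encoder = JamesSteinEncoder(
        cols=["city"],
        model="independent",     # default shrinkage model
        handle_unknown="value",  # unseen categories are encoded with the prior
        handle_missing="value",  # missing values are encoded with the prior
        randomized=True,         # add Gaussian noise to the training output only
        sigma=0.05,              # spread of that noise
    )

    X_train_enc = encoder.fit_transform(X_train, y_train)

    # "Chicago" was never seen during fit, so it receives the prior probability.
    X_test = pd.DataFrame({"city": ["NY", "Chicago"]})
    X_test_enc = encoder.transform(X_test)
    print(X_train_enc)
    print(X_test_enc)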

References

[1] Parametric empirical Bayes inference: Theory and applications, equations 1.19 & 1.20, from https://www.jstor.org/stable/2287098

[2] Empirical Bayes for multiple sample sizes, from http://chris-said.io/2017/05/03/empirical-bayes-for-multiple-sample-sizes/

[3] Shrinkage Estimation of Log-odds Ratios for Comparing Mobility Tables, from https://journals.sagepub.com/doi/abs/10.1177/0081175015570097

[4] Stein’s paradox and group rationality, from http://www.philos.rug.nl/~romeyn/presentation/2017_romeijn_-_Paris_Stein.pdf

[5] Stein’s Paradox in Statistics, from http://statweb.stanford.edu/~ckirby/brad/other/Article1977.pdf

Methods

fit(self, X, y, **kwargs) Fit encoder according to X and binary y.
fit_transform(self, X[, y]) Fit the encoder and transform the training data; encoders that utilize the target require y.
get_feature_names(self) Returns the names of all transformed / added columns.
get_params(self[, deep]) Get parameters for this estimator.
set_params(self, **params) Set the parameters of this estimator.
transform(self, X[, y, override_return_df]) Perform the transformation to new categorical data.
fit(self, X, y, **kwargs)[source]

Fit encoder according to X and binary y.

Parameters:
X : array-like, shape = [n_samples, n_features]

Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape = [n_samples]

Binary target values.

Returns:
self : encoder

Returns self.

fit_transform(self, X, y=None, **fit_params)[source]

Encoders that utilize the target must make sure that the training data are transformed with transform(X, y) and not with transform(X).
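In practice this means the training data go through transform(X, y) (or fit_transform(X, y)), while scoring data go through transform(X) alone; a brief sketch with invented variable names:

    import pandas as pd
    from category_encoders.james_stein import JamesSteinEncoder

    X_train = pd.DataFrame({"fruit": ["apple", "apple", "pear", "pear"]})
    y_train = pd.Series([1, 0, 1, 1])
    X_test = pd.DataFrame({"fruit": ["pear", "apple"]})

    encoder = JamesSteinEncoder(cols=["fruit"])

    # Training data: include the target, i.e. transform(X, y), not transform(X).
    X_train_enc = encoder.fit_transform(X_train, y_train)

    # Test data: no target is available, so transform without y.
    X_test_enc = encoder.transform(X_test)
    print(X_train_enc)
    print(X_test_enc)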
get_feature_names(self)[source]

Returns the names of all transformed / added columns.

Returns:
feature_names: list

A list with all feature names transformed or added. Note: potentially dropped features are not included!

transform(self, X, y=None, override_return_df=False)[source]

Perform the transformation to new categorical data. When the data are used for model training, it is important to also pass the target in order to apply leave one out.

Parameters:
X : array-like, shape = [n_samples, n_features]

y : array-like, shape = [n_samples], when transforming training data (leave one out); None when transforming without target information (such as a test set).

Returns:
p : array, shape = [n_samples, n_numeric + N]

Transformed values with encoding applied.