Hashing

class category_encoders.hashing.HashingEncoder(max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False, return_df=True, hash_method='md5')

A multivariate hashing implementation with configurable dimensionality/precision.

The advantage of this encoder is that it does not maintain a dictionary of observed categories. Consequently, the encoder does not grow in size and accepts new values during data scoring by design.
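A minimal sketch of this behavior (the column name 'color' and the sample values are illustrative only):

import pandas as pd
import category_encoders as ce

train = pd.DataFrame({'color': ['red', 'green', 'blue']})
score = pd.DataFrame({'color': ['yellow']})  # a value never seen during fit

enc = ce.HashingEncoder(n_components=8, cols=['color'])
enc.fit(train)
# No category dictionary is stored, so the unseen value 'yellow'
# is simply hashed into the same 8 output columns.
print(enc.transform(score))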

It is important to understand how max_process and max_sample work before setting them manually; an inappropriate setting can slow down encoding considerably.

Parameters:
verbose: int

integer indicating verbosity of the output. 0 for none.

cols: list

a list of columns to encode; if None, all string columns will be encoded.

drop_invariant: bool

boolean for whether or not to drop columns with 0 variance.

return_df: bool

boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).

hash_method: str

which hashing method to use. Any method from hashlib works.

max_process: int

how many processes to use in transform(). Limited to range(1, 64). By default it uses half of the logical CPUs: for example, a 4C4T CPU gives max_process=2 and a 4C8T CPU gives max_process=4. Set it higher if you have a powerful CPU, but not higher than the number of logical CPUs, as that actually slows down the encoding.

max_sample: int

how many samples each process encodes at a time. This setting is useful on low-memory machines. By default, max_sample = (total number of samples) / max_process: for example, a 4C8T CPU with 100,000 samples gives max_sample=25,000, and a 6C12T CPU with 100,000 samples gives max_sample=16,666. Setting it larger than the default is not recommended.
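As a rough illustration of tuning these two parameters (the values here are arbitrary; the defaults are usually adequate):

import category_encoders as ce

# On an 8-logical-CPU machine with limited memory, one might cap the
# per-process chunk size instead of accepting the default split.
enc = ce.HashingEncoder(
    max_process=4,      # half of the logical CPUs, matching the default rule
    max_sample=10000,   # each process encodes at most 10,000 rows at a time
    n_components=8,
)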

References

[1] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, Josh Attenberg (2009). Feature Hashing for Large Scale Multitask Learning. Proc. ICML. https://alex.smola.org/papers/2009/Weinbergeretal09.pdf

Methods

fit(self, X[, y]) Fit encoder according to X and y.
fit_transform(self, X[, y]) Fit to data, then transform it.
get_feature_names(self) Returns the names of all transformed / added columns.
get_params(self[, deep]) Get parameters for this estimator.
hashing_trick(X_in[, hashing_method, N, …]) A basic hashing implementation with configurable dimensionality/precision.
set_params(self, **params) Set the parameters of this estimator.
transform(self, X[, override_return_df]) Perform the transformation on new data; call _transform() to use a single CPU with all samples.
fit(self, X, y=None, **kwargs)

Fit encoder according to X and y.

Parameters:
X : array-like, shape = [n_samples, n_features]

Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape = [n_samples]

Target values.

Returns:
self : encoder

Returns self.

get_feature_names(self)

Returns the names of all transformed / added columns.

Returns:
feature_names: list

A list with all feature names transformed or added. Note: potentially dropped features are not included!
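For example (assuming the col_<i> naming convention this encoder uses by default; exact names may vary by version):

import pandas as pd
import category_encoders as ce

train = pd.DataFrame({'color': ['red', 'green', 'blue']})
enc = ce.HashingEncoder(n_components=4).fit(train)
print(enc.get_feature_names())  # e.g. ['col_0', 'col_1', 'col_2', 'col_3']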

static hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False)

A basic hashing implementation with configurable dimensionality/precision

Performs the hashing trick on a pandas dataframe, X, using the hashing method from hashlib identified by hashing_method. The number of output dimensions (N), and columns to hash (cols) are also configurable.

Parameters:
X_in: pandas dataframe

the DataFrame on which to perform the hashing trick.

hashing_method: string, optional

the name of the hashlib hashing method to use (default 'md5').

N: int, optional

the number of output dimensions (default 2).

cols: list, optional

the columns to hash.

make_copy: bool, optional

whether to hash a copy of X_in rather than modifying it in place (default False).

Returns:
out : dataframe

A hashing encoded dataframe.
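A short usage sketch of the static method (the sample data is illustrative; make_copy=True leaves the input frame untouched):

import pandas as pd
from category_encoders.hashing import HashingEncoder

df = pd.DataFrame({'fruit': ['apple', 'banana', 'apple']})
# Hash the string column into N=4 output dimensions using md5.
out = HashingEncoder.hashing_trick(df, hashing_method='md5', N=4, make_copy=True)
print(out)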

References

[1] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, Josh Attenberg (2009). Feature Hashing for Large Scale Multitask Learning. Proc. ICML.

transform(self, X, override_return_df=False)

Perform the transformation on new data, distributing the work according to max_process. Call _transform() directly if you want to use a single CPU with all samples.
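A brief sketch of the override flag (this assumes the common category_encoders convention that override_return_df=True forces a DataFrame even when return_df=False):

import pandas as pd
import category_encoders as ce

enc = ce.HashingEncoder(n_components=4, return_df=False)
enc.fit(pd.DataFrame({'color': ['red', 'green']}))

new = pd.DataFrame({'color': ['blue']})
as_array = enc.transform(new)                           # numpy array, since return_df=False
as_frame = enc.transform(new, override_return_df=True)  # assumed: DataFrame regardless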