polylearn.PolynomialNetworkRegressor

class polylearn.PolynomialNetworkRegressor(degree=2, n_components=2, beta=1, tol=1e-06, fit_lower='augment', warm_start=False, max_iter=10000, verbose=False, random_state=None)[source]

Polynomial network for regression (with squared loss).

Parameters:

degree : int >= 2, default: 2

Degree of the polynomial. Corresponds to the order of feature interactions captured by the model. Currently only supports degrees up to 3.

n_components : int, default: 2

Dimension of the lifted tensor.

beta : float, default: 1

Regularization amount for higher-order weights.

tol : float, default: 1e-6

Tolerance for the stopping condition.

fit_lower : {'augment'|None}, default: 'augment'

Whether and how to fit lower-order, non-homogeneous terms.

  • 'augment': adds a dummy column (1 everywhere) in order to capture lower-order terms (including linear terms).

  • None: only learns weights for the given degree.
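To make the 'augment' option concrete, here is a minimal NumPy sketch of the underlying idea (this is illustrative, not polylearn's actual implementation): appending a constant column of ones means a degree-m homogeneous model over the augmented input also captures all interactions of degree lower than m.

```python
import numpy as np

# Toy design matrix: 3 samples, 2 features.
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Conceptually, 'augment' appends a dummy column fixed to 1, so a
# degree-2 homogeneous polynomial over X_aug also contains degree-1
# (linear) and constant terms, e.g. x1 * 1 and 1 * 1.
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])

print(X_aug.shape)  # (3, 3)
```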

warm_start : boolean, optional, default: False

Whether to use the existing solution, if available. Useful for computing regularization paths or pre-initializing the model.

max_iter : int, optional, default: 10000

Maximum number of passes over the dataset to perform.

verbose : boolean, optional, default: False

Whether to print debugging information.

random_state : int seed, RandomState instance, or None (default)

The seed of the pseudo random number generator to use for initializing the parameters.

Attributes:

U_ : array, shape [n_components, n_features, degree]

The learned weights in the lifted tensor parametrization.
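The lifted tensor parametrization can be sketched as follows. In the polynomial network model of Blondel et al. (2016), the prediction is a sum over components of a product of inner products, one per degree slice. The code below is an illustrative NumPy reconstruction from the documented shape of U_, with a randomly generated tensor standing in for learned weights; it is not polylearn's internal code.

```python
import numpy as np

rng = np.random.RandomState(0)
n_components, n_features, degree = 2, 4, 2

# Stand-in for the learned tensor U_, with the documented shape
# [n_components, n_features, degree].
U = rng.randn(n_components, n_features, degree)
x = rng.randn(n_features)

# Prediction: y_hat = sum_s prod_t <U[s, :, t], x>
inner = np.einsum('sft,f->st', U, x)  # inner products, shape [n_components, degree]
y_hat = inner.prod(axis=1).sum()
```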

References

Polynomial Networks and Factorization Machines: New Insights and Efficient Training Algorithms. Mathieu Blondel, Masakazu Ishihata, Akinori Fujino, Naonori Ueda. In: Proceedings of ICML 2016. http://mblondel.org/publications/mblondel-icml2016.pdf

On the computational efficiency of training neural networks. Roi Livni, Shai Shalev-Shwartz, Ohad Shamir. In: Proceedings of NIPS 2014.

Methods

fit(X, y) Fit polynomial network to training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict regression output for the samples in X.
score(X, y[, sample_weight]) Returns the coefficient of determination R^2 of the prediction.
set_params(**params) Set the parameters of this estimator.
__init__(degree=2, n_components=2, beta=1, tol=1e-06, fit_lower='augment', warm_start=False, max_iter=10000, verbose=False, random_state=None)[source]
fit(X, y)

Fit polynomial network to training data.

Parameters:

X : array-like or sparse, shape = [n_samples, n_features]

Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape = [n_samples]

Target values.

Returns:

self : Estimator

Returns self.

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.

predict(X)

Predict regression output for the samples in X.

Parameters:

X : {array-like, sparse matrix}, shape = [n_samples, n_features]

Samples.

Returns:

y_pred : array, shape = [n_samples]

Returns predicted values.

score(X, y, sample_weight=None)

Returns the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.
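The definition above can be checked directly with NumPy (the numbers here are an arbitrary illustrative example, not from polylearn):

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares
v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
r2 = 1.0 - u / v

# Sanity check: a constant model predicting y_true.mean() has u == v,
# hence an R^2 of exactly 0.0.
r2_const = 1.0 - v / v
```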

Parameters:

X : array-like, shape = (n_samples, n_features)

Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)

True values for X.

sample_weight : array-like, shape = [n_samples], optional

Sample weights.

Returns:

score : float

R^2 of self.predict(X) w.r.t. y.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
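The <component>__<parameter> convention can be sketched with a scikit-learn pipeline (this example uses Ridge as a stand-in estimator; the step names 'scale' and 'ridge' are chosen here for illustration):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

# The step named 'ridge' exposes its alpha parameter to the pipeline
# under the nested name ridge__alpha.
pipe = Pipeline([('scale', StandardScaler()), ('ridge', Ridge(alpha=1.0))])
pipe.set_params(ridge__alpha=0.1)

print(pipe.get_params()['ridge__alpha'])  # 0.1
```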

Returns:

self : Estimator

Returns self.