lightning.regression.SGDRegressor

class lightning.regression.SGDRegressor(loss='squared', penalty='l2', alpha=0.01, learning_rate='pegasos', eta0=0.03, power_t=0.5, epsilon=0.01, fit_intercept=True, intercept_decay=1.0, max_iter=10, shuffle=True, random_state=None, callback=None, n_calls=100, verbose=0)[source]

Estimator for learning linear regressors by SGD.

Parameters:

loss : str, ‘squared’, ‘epsilon_insensitive’, ‘huber’

Loss function to be used.

penalty : str, ‘l2’, ‘l1’, ‘l1/l2’

The penalty to be used.

  • l2: ridge
  • l1: lasso
  • l1/l2: group lasso

alpha : float

Weight of the penalty term.

learning_rate : str, ‘pegasos’, ‘constant’, ‘invscaling’

Learning schedule to use.

eta0 : float

Step size.

power_t : float

Power to be used (when learning_rate=‘invscaling’).

epsilon : float

Value to be used for epsilon-insensitive loss.

fit_intercept : bool

Whether to fit the intercept or not.

intercept_decay : float

Value by which the intercept is multiplied (to regularize it).

max_iter : int

Maximum number of iterations to perform.

shuffle : bool

Whether to shuffle data.

callback : callable

Callback function.

n_calls : int

Number of iterations between two calls to the callback.

random_state : RandomState or int

The seed of the pseudo random number generator to use.

verbose : int

Verbosity level.

Methods

fit(X, y) Fit model according to X and y.
get_params([deep]) Get parameters for this estimator.
n_nonzero([percentage]) Return the number of nonzero coefficients.
predict(X) Perform regression on an array of test vectors X.
score(X, y[, sample_weight]) Returns the coefficient of determination R^2 of the prediction.
set_params(**params) Set the parameters of this estimator.
__init__(loss='squared', penalty='l2', alpha=0.01, learning_rate='pegasos', eta0=0.03, power_t=0.5, epsilon=0.01, fit_intercept=True, intercept_decay=1.0, max_iter=10, shuffle=True, random_state=None, callback=None, n_calls=100, verbose=0)[source]
fit(X, y)[source]

Fit model according to X and y.

Parameters:

X : array-like, shape = [n_samples, n_features]

Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape = [n_samples] or [n_samples, n_targets]

Target values.

Returns:

self : regressor

Returns self.
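To make the fitting procedure concrete, here is a minimal NumPy sketch of a plain SGD loop for squared loss with an l2 penalty. For simplicity it uses a constant step size (the ‘constant’ schedule) rather than the default pegasos schedule, omits the intercept, and is an illustration of the general scheme only, not lightning's actual implementation:

```python
import numpy as np

def sgd_fit(X, y, alpha=0.01, eta0=0.03, max_iter=10,
            shuffle=True, random_state=0):
    # Minimal SGD for squared loss + l2 penalty, constant step size.
    rng = np.random.RandomState(random_state)
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    for _ in range(max_iter):
        indices = np.arange(n_samples)
        if shuffle:
            rng.shuffle(indices)
        for i in indices:
            residual = X[i] @ w - y[i]
            grad = residual * X[i] + alpha * w  # loss grad + penalty grad
            w -= eta0 * grad
    return w

rng = np.random.RandomState(42)
X = rng.randn(200, 3)
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w = sgd_fit(X, y)
print(w)  # close to true_w, shrunk slightly by the l2 penalty
```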

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.

predict(X)[source]

Perform regression on an array of test vectors X.

Parameters:

X : array-like, shape = [n_samples, n_features]

Returns:

p : array, shape = [n_samples]

Predicted target values for X.
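Prediction for a fitted linear regressor is the usual linear map. A sketch, assuming the fitted weights and intercept follow the scikit-learn coef_/intercept_ convention:

```python
import numpy as np

def predict(X, coef, intercept=0.0):
    # decision function of a linear regressor: X @ w + b
    return np.asarray(X) @ coef + intercept

X = np.array([[1.0, 2.0], [3.0, 4.0]])
coef = np.array([0.5, -1.0])
print(predict(X, coef, intercept=0.25))  # -> [-1.25 -2.25]
```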

score(X, y, sample_weight=None)

Returns the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.

Parameters:

X : array-like, shape = (n_samples, n_features)

Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)

True values for X.

sample_weight : array-like, shape = [n_samples], optional

Sample weights.

Returns:

score : float

R^2 of self.predict(X) with respect to y.
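The definition of R^2 translates directly to NumPy; a small self-contained sketch computing the score from its formula:

```python
import numpy as np

def r2_score(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    u = ((y_true - y_pred) ** 2).sum()          # residual sum of squares
    v = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares
    return 1.0 - u / v

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
print(r2_score(y_true, y_pred))  # ~0.9486
```

A perfect prediction gives 1.0, and always predicting the mean of y_true gives exactly 0.0, matching the description above.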

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:

self : estimator

Returns self.
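The <component>__<parameter> routing can be illustrated with a small pure-Python sketch (the helper name is hypothetical; the real logic lives in scikit-learn's BaseEstimator.set_params):

```python
def route_params(params):
    # Split keys of the form 'component__parameter' into
    # {component: {parameter: value}}; plain keys belong to
    # the estimator itself and are grouped under ''.
    routed = {}
    for key, value in params.items():
        component, _, name = key.partition('__')
        if name:
            routed.setdefault(component, {})[name] = value
        else:
            routed.setdefault('', {})[key] = value
    return routed

print(route_params({'alpha': 0.1, 'sgd__eta0': 0.05}))
# -> {'': {'alpha': 0.1}, 'sgd': {'eta0': 0.05}}
```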