lightning.regression.SGDRegressor¶
- class lightning.regression.SGDRegressor(loss='squared', penalty='l2', alpha=0.01, learning_rate='pegasos', eta0=0.03, power_t=0.5, epsilon=0.01, fit_intercept=True, intercept_decay=1.0, max_iter=10, shuffle=True, random_state=None, callback=None, n_calls=100, verbose=0)[source]¶
Estimator for learning linear regressors by SGD.
- Parameters
loss (str, 'squared', 'epsilon_insensitive', 'huber') – Loss function to be used.
penalty (str, 'l2', 'l1', 'l1/l2') –
The penalty to be used.
l2: ridge
l1: lasso
l1/l2: group lasso
alpha (float) – Weight of the penalty term.
learning_rate ('pegasos', 'constant', 'invscaling') – Learning schedule to use.
eta0 (float) – Step size.
power_t (float) – Power to be used (when learning_rate='invscaling').
epsilon (float) – Value to be used for epsilon-insensitive loss.
fit_intercept (bool) – Whether to fit the intercept or not.
intercept_decay (float) – Value by which the intercept is multiplied (to regularize it).
max_iter (int) – Maximum number of iterations to perform.
shuffle (bool) – Whether to shuffle data.
callback (callable) – Callback function.
n_calls (int) – Frequency with which the callback must be called: the callback is invoked every n_calls iterations.
random_state (RandomState or int) – The seed of the pseudo random number generator to use.
verbose (int) – Verbosity level.
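To illustrate how the loss, penalty, and learning-rate parameters interact, here is a minimal pure-NumPy sketch of a plain SGD loop for squared loss with an l2 penalty and the 'constant' schedule (step size fixed at eta0). This is an illustrative assumption, not lightning's actual (Cython) implementation, and the function name sgd_squared_l2 is hypothetical.

```python
import numpy as np

# Hypothetical sketch, NOT lightning's actual implementation: plain SGD on
# squared loss with an l2 penalty and the 'constant' schedule (eta = eta0).
def sgd_squared_l2(X, y, alpha=0.01, eta0=0.03, max_iter=10,
                   shuffle=True, random_state=0):
    rng = np.random.RandomState(random_state)
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    for _ in range(max_iter):
        order = rng.permutation(n_samples) if shuffle else np.arange(n_samples)
        for i in order:
            residual = X[i] @ w - y[i]                 # gradient factor of the squared loss
            w -= eta0 * (alpha * w + residual * X[i])  # penalty gradient + loss gradient
    return w
```

With learning_rate='pegasos' the step size would instead decay as 1 / (alpha * t), and with 'invscaling' as eta0 / t ** power_t, where t is the iteration counter.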
- fit(X, y)[source]¶
Fit model according to X and y.
- Parameters
X (array-like, shape = [n_samples, n_features]) – Training vectors, where n_samples is the number of samples and n_features is the number of features.
y (array-like, shape = [n_samples] or [n_samples, n_targets]) – Target values.
- Returns
self – Returns self.
- Return type
regressor
- get_params(deep=True)¶
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- n_nonzero(percentage=False)¶
Return the number of nonzero coefficients; if percentage=True, return it as a fraction of the total number of coefficients.
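A plausible reading of this method, sketched as a standalone NumPy function (an assumption, not lightning's source: it counts the nonzero entries of the learned coefficients, optionally as a fraction):

```python
import numpy as np

# Hypothetical sketch of what n_nonzero likely computes (assumption, not
# lightning's source): count nonzero coefficients, optionally as a fraction.
def n_nonzero(coef, percentage=False):
    count = np.count_nonzero(coef)
    if percentage:
        return count / coef.size
    return count
```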
- predict(X)[source]¶
Perform regression on an array of test vectors X.
- Parameters
X (array-like, shape = [n_samples, n_features]) –
- Returns
p – Predicted target values for X.
- Return type
array, shape = [n_samples]
- score(X, y, sample_weight=None)¶
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.
- Parameters
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.
- Returns
score – \(R^2\) of self.predict(X) w.r.t. y.
- Return type
float
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
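The \(R^2\) formula quoted above can be checked directly in NumPy. The numbers below are an illustrative example, not output from this estimator:

```python
import numpy as np

# Illustrative data (not from SGDRegressor): compute R^2 exactly as defined,
# R^2 = 1 - u/v with u the residual and v the total sum of squares.
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares
v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
r2 = 1 - u / v                             # ~0.9486
```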
- set_params(**params)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
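The <component>__<parameter> convention can be sketched in a few lines of plain Python. This is a simplified illustration of the name-resolution idea, not scikit-learn's implementation; set_nested_params and the dummy attribute names in the test are hypothetical.

```python
# Simplified sketch (not sklearn's code) of how double-underscore names
# resolve: 'sgd__alpha' means "set alpha on the sub-object stored as .sgd".
def set_nested_params(obj, **params):
    for key, value in params.items():
        name, sep, sub_key = key.partition("__")
        if sep:  # nested name: recurse into the named component
            set_nested_params(getattr(obj, name), **{sub_key: value})
        else:    # plain name: set the attribute directly
            setattr(obj, name, value)
    return obj
```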