lightning.regression.FistaRegressor
- class lightning.regression.FistaRegressor(C=1.0, alpha=1.0, penalty='l1', max_iter=100, max_steps=30, eta=2.0, sigma=1e-05, callback=None, verbose=0)
Estimator for learning linear regressors by FISTA.
The objective functions considered take the form
minimize F(W) = C * L(W) + alpha * R(W),
where L(W) is a loss term and R(W) is a penalty term.
- Parameters
penalty (str or Penalty object, {'l2', 'l1', 'l1/l2', 'tv1d', 'simplex'}) –
The penalty or constraint to be used.
l2: ridge
l1: lasso
l1/l2: group lasso
tv1d: 1-dimensional total variation (also known as fused lasso)
simplex: simplex constraint
The method can also take an arbitrary Penalty object, i.e., an instance that implements the projection and regularization methods (see file penalty.py).
C (float) – Weight of the loss term.
alpha (float) – Weight of the penalty term.
max_iter (int) – Maximum number of iterations to perform.
max_steps (int) – Maximum number of steps to use during the line search.
sigma (float) – Constant used in the line-search sufficient decrease condition.
eta (float) – Decrease factor for the line-search procedure. For example, eta=2.0 will decrease the step size by a factor of 2 at each iteration of the line-search routine.
callback (callable) – Callback function.
verbose (int) – Verbosity level.
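For orientation, a minimal usage sketch (it assumes the standard scikit-learn-style fit method and uses synthetic data; the array names are illustrative):

    import numpy as np
    from lightning.regression import FistaRegressor

    # Synthetic sparse regression problem (illustrative only).
    rng = np.random.RandomState(0)
    X = rng.randn(100, 20)
    w_true = np.zeros(20)
    w_true[:5] = 1.0
    y = X.dot(w_true) + 0.01 * rng.randn(100)

    # L1 (lasso) objective: minimize C * L(w) + alpha * ||w||_1.
    reg = FistaRegressor(penalty='l1', C=1.0, alpha=0.1, max_iter=100)
    reg.fit(X, y)

    y_pred = reg.predict(X)
    print(reg.n_nonzero())  # count of nonzero coefficients (assumed semantics)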
- get_params(deep=True)
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- n_nonzero(percentage=False)
- predict(X)
- score(X, y, sample_weight=None)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \(1 - \frac{u}{v}\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.
- Parameters
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.
- Returns
score – \(R^2\) of self.predict(X) w.r.t. y.
- Return type
float
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
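As a small, self-contained sketch of the formula above (y_true and y_pred are illustrative arrays), the manual computation below should agree with r2_score():

    import numpy as np
    from sklearn.metrics import r2_score

    # Illustrative targets and predictions.
    y_true = np.array([3.0, -0.5, 2.0, 7.0])
    y_pred = np.array([2.5, 0.0, 2.0, 8.0])

    u = ((y_true - y_pred) ** 2).sum()          # residual sum of squares
    v = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares
    print(1.0 - u / v)                          # ~0.9486
    print(r2_score(y_true, y_pred))             # same value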
- set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
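A short sketch of the get_params / set_params round trip (the parameter values are illustrative):

    from lightning.regression import FistaRegressor

    reg = FistaRegressor(penalty='l1', alpha=0.1)
    print(reg.get_params()['alpha'])   # 0.1

    # set_params returns the estimator itself, so the call can be chained.
    reg = reg.set_params(alpha=0.5, max_iter=200)
    print(reg.get_params()['alpha'])   # 0.5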