lightning.classification.SAGAClassifier

class lightning.classification.SAGAClassifier(eta='auto', alpha=1.0, beta=0.0, loss='smooth_hinge', penalty=None, gamma=1.0, max_iter=10, n_inner=1.0, tol=0.001, verbose=0, callback=None, random_state=None)

Estimator for learning linear classifiers with the SAGA algorithm.

Solves the following objective:

minimize_w  (1 / n_samples) * \sum_i loss(w^T x_i, y_i)
            + alpha * 0.5 * ||w||_2^2 + beta * penalty(w)
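The objective above can be evaluated directly. The following is a minimal plain-Python sketch (not lightning's implementation) for the case loss='log' with an L1 penalty, assuming labels y_i in {-1, +1}:

```python
import math

def objective(w, X, y, alpha, beta):
    """Evaluate the objective above with logistic loss and an L1 penalty.

    A didactic sketch, not lightning's implementation: loss='log',
    penalty='l1', labels y_i in {-1, +1}.
    """
    n_samples = len(X)
    # Data-fitting term: average logistic loss over the samples.
    loss = sum(
        math.log(1.0 + math.exp(-yi * sum(wj * xij for wj, xij in zip(w, xi))))
        for xi, yi in zip(X, y)
    ) / n_samples
    # alpha * 0.5 * ||w||_2^2  (squared L2 regularization).
    l2 = alpha * 0.5 * sum(wj * wj for wj in w)
    # beta * ||w||_1  (the penalty term, here L1).
    l1 = beta * sum(abs(wj) for wj in w)
    return loss + l2 + l1
```

At w = 0 both regularizers vanish and each logistic loss term equals log(2), so the objective reduces to log(2) regardless of the data.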
Parameters
  • eta (float or {'auto', 'line-search'}, defaults to 'auto') – step size for the gradient updates. If set to 'auto', the step size is computed from the input data. If set to 'line-search', a line search is performed to find the step size for the current iteration.

  • alpha (float) – amount of squared L2 regularization

  • beta (float) – amount of regularization for the penalty term

  • loss (string) – loss to use in the objective function. Can be one of “smooth_hinge”, “squared_hinge” or “log” (for logistic loss).

  • penalty (string or Penalty object) – penalty term to use in the objective function. Can be "l1" or a custom Penalty object (as defined in lightning/impl/sag_fast.pxd).

  • gamma (float) – gamma parameter in the “smooth_hinge” loss (not used for other loss functions)
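One common definition of the smoothed hinge loss, which matches the role gamma plays above (the quadratic smoothing region has width gamma), can be sketched as follows; this is an assumption about the exact formula, not a copy of lightning's code:

```python
def smooth_hinge(z, gamma=1.0):
    """Smoothed hinge loss of a margin z = y * w^T x.

    Sketch of one standard definition: zero beyond margin 1, linear for
    small margins, with a quadratic region of width gamma joining them.
    """
    if z >= 1.0:
        return 0.0                          # correctly classified with margin
    if z <= 1.0 - gamma:
        return 1.0 - z - gamma / 2.0        # linear part, like the plain hinge
    return (1.0 - z) ** 2 / (2.0 * gamma)   # quadratic smoothing region
```

The pieces join continuously: at z = 1 both the zero and quadratic branches give 0, and at z = 1 - gamma both the linear and quadratic branches give gamma / 2.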

  • max_iter (int) – maximum number of outer iterations (also known as epochs).

  • tol (float) – stopping criterion tolerance.

  • verbose (int) – verbosity level. Set positive to print progress information.

  • callback (callable or None) – if given, callback(self) will be called on each outer iteration (epoch).

  • random_state (int or RandomState) – Pseudo-random number generator state used for random sampling.
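To make the parameters above concrete, here is a didactic sketch of one SAGA epoch in plain Python (not lightning's Cython implementation), assuming loss='log' and penalty='l1': each update combines the new per-sample gradient with a stored gradient table and its running average, then applies a proximal (soft-thresholding) step for the L1 term:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def saga_epoch(w, X, y, memory, avg_grad, eta, alpha, beta, rng):
    """One epoch of SAGA updates; a sketch, not lightning's implementation.

    Assumes loss='log' and penalty='l1'.  `memory[i]` stores the last
    per-sample loss gradient and `avg_grad` their running average.
    """
    n_samples, n_features = len(X), len(w)
    for _ in range(n_samples):
        i = rng.randrange(n_samples)
        margin = y[i] * sum(wj * xij for wj, xij in zip(w, X[i]))
        # Per-sample gradient of the logistic loss (L2 term added below).
        coef = -y[i] * sigmoid(-margin)
        grad = [coef * xij for xij in X[i]]
        for j in range(n_features):
            # SAGA direction: new grad - stored grad + running average,
            # plus alpha * w from the squared L2 term.
            step = grad[j] - memory[i][j] + avg_grad[j] + alpha * w[j]
            wj = w[j] - eta * step
            # Proximal step for beta * ||w||_1: soft-thresholding.
            w[j] = math.copysign(max(abs(wj) - eta * beta, 0.0), wj)
            # Keep the gradient table and its average in sync.
            avg_grad[j] += (grad[j] - memory[i][j]) / n_samples
            memory[i][j] = grad[j]
    return w
```

With beta large enough, the soft-thresholding step zeroes out every update, which is how the L1 penalty produces sparse weight vectors.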

decision_function(X)
fit(X, y, sample_weight=None)
Parameters
  • X (numpy array, sparse matrix or RowDataset of size (n_samples, n_features))

  • y (numpy array of size (n_samples,))

  • sample_weight (numpy array of size (n_samples,), optional)

Returns

self – Fitted estimator.

Return type

estimator instance

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict

n_nonzero(percentage=False)
predict(X)
property predict_proba
score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.

Parameters
  • X (array-like of shape (n_samples, n_features)) – Test samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns

score – Mean accuracy of self.predict(X) w.r.t. y.

Return type

float
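The quantity score computes can be sketched in a few lines of plain Python (a sketch of the weighted mean accuracy formula, not lightning's implementation):

```python
def mean_accuracy(y_true, y_pred, sample_weight=None):
    """Weighted mean accuracy: the fraction of (weighted) samples where
    the prediction matches the true label.  Sketch of what score(X, y)
    computes after calling predict(X)."""
    if sample_weight is None:
        sample_weight = [1.0] * len(y_true)
    correct = sum(w for yt, yp, w in zip(y_true, y_pred, sample_weight) if yt == yp)
    return correct / sum(sample_weight)
```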

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance
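The <component>__<parameter> convention can be illustrated with a small sketch, where plain dicts stand in for nested estimators (this is an illustration of the key-splitting convention, not sklearn's actual implementation):

```python
def set_nested_params(estimators, params):
    """Route '<component>__<parameter>' keys to nested components.

    Sketch of the convention set_params follows; `estimators` maps
    component names to plain dicts standing in for sub-estimators.
    """
    for key, value in params.items():
        if "__" in key:
            # 'clf__alpha' means: set 'alpha' on the component named 'clf'.
            component, _, parameter = key.partition("__")
            estimators[component][parameter] = value
        else:
            # A bare key targets the top-level object itself.
            estimators[key] = value
    return estimators
```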