lightning.classification.FistaClassifier

class lightning.classification.FistaClassifier(C=1.0, alpha=1.0, loss='squared_hinge', penalty='l1', multiclass=False, max_iter=100, max_steps=30, eta=2.0, sigma=1e-05, callback=None, verbose=0, prox_args=())

Estimator for learning linear classifiers by FISTA (fast iterative shrinkage-thresholding algorithm).

The objective functions considered take the form

minimize F(W) = C * L(W) + alpha * R(W),

where L(W) is a loss term and R(W) is a penalty term.
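For example, a sparse linear model can be obtained by combining the squared hinge loss with an l1 penalty. The snippet below is an illustrative sketch; the synthetic data and parameter values are arbitrary:

    from sklearn.datasets import make_classification

    from lightning.classification import FistaClassifier

    # Illustrative data; any (n_samples, n_features) design matrix works.
    X, y = make_classification(n_samples=200, n_features=50, random_state=0)

    # C weights the loss term L(W); alpha weights the penalty term R(W).
    clf = FistaClassifier(C=1.0, alpha=0.1, loss='squared_hinge', penalty='l1')
    clf.fit(X, y)

    print(clf.score(X, y))     # mean training accuracy
    print(clf.n_nonzero())     # sparsity of the fitted model, as reported by n_nonzero()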

Parameters:

loss : str, ‘squared_hinge’, ‘log’, ‘modified_huber’, ‘squared’

The loss function to be used.

penalty : str or Penalty object, ‘l2’, ‘l1’, ‘l1/l2’, ‘tv1d’, ‘simplex’

The penalty or constraint to be used.

  • l2: ridge
  • l1: lasso
  • l1/l2: group lasso
  • tv1d: 1-dimensional total variation (also known as fused lasso)
  • simplex: simplex constraint

The penalty can also be an arbitrary Penalty object, i.e., an instance of a class that implements the projection and regularization methods (see penalty.py); a minimal sketch of such a class is given after the parameter list below.

multiclass : bool

Whether to use a direct multiclass formulation (True) or one-vs-rest (False).

C : float

Weight of the loss term.

alpha : float

Weight of the penalty term.

max_iter : int

Maximum number of iterations to perform.

max_steps : int

Maximum number of steps to use during the line search.

sigma : float

Constant used in the line search sufficient decrease condition.

eta : float

Decrease factor for the line-search procedure. For example, eta=2.0 will decrease the step size by a factor of 2 at each iteration of the line-search routine.

callback : callable

Callback function.

verbose : int

Verbosity level.
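
As mentioned under the penalty parameter, an arbitrary Penalty object may also be supplied. The class below is a minimal sketch of an l1 penalty assuming the projection / regularization interface described above; the exact method names and signatures are an assumption and should be verified against penalty.py.

    import numpy as np

    class CustomL1Penalty(object):
        # Hypothetical custom penalty; the interface below follows the
        # description above and should be checked against penalty.py.

        def projection(self, coef, alpha, L):
            # Proximal step: soft-threshold the coefficients by alpha / L,
            # where 1 / L is the current FISTA step size.
            return np.sign(coef) * np.maximum(np.abs(coef) - alpha / L, 0)

        def regularization(self, coef):
            # Value of the penalty term R(W).
            return np.sum(np.abs(coef))

    # clf = FistaClassifier(penalty=CustomL1Penalty(), alpha=0.1)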

Methods

  • decision_function(X)
  • fit(X, y)
  • get_params([deep]): Get parameters for this estimator.
  • n_nonzero([percentage])
  • predict(X)
  • score(X, y[, sample_weight]): Returns the mean accuracy on the given test data and labels.
  • set_params(**params): Set the parameters of this estimator.

__init__(C=1.0, alpha=1.0, loss='squared_hinge', penalty='l1', multiclass=False, max_iter=100, max_steps=30, eta=2.0, sigma=1e-05, callback=None, verbose=0, prox_args=())
get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.
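
A minimal usage sketch (the parameter values shown are illustrative):

    from lightning.classification import FistaClassifier

    clf = FistaClassifier(alpha=0.1, penalty='l1')
    params = clf.get_params()
    print(params['alpha'], params['penalty'])   # 0.1 l1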

score(X, y, sample_weight=None)

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that the entire label set be predicted correctly for each sample.

Parameters:

X : array-like, shape = (n_samples, n_features)

Test samples.

y : array-like, shape = (n_samples,) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like, shape = (n_samples,), optional

Sample weights.

Returns:

score : float

Mean accuracy of self.predict(X) with respect to y.
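
As a sketch on illustrative data, the returned value is the fraction of correctly predicted labels, i.e. it matches scikit-learn's accuracy_score applied to predict(X):

    from sklearn.datasets import make_classification
    from sklearn.metrics import accuracy_score

    from lightning.classification import FistaClassifier

    X, y = make_classification(n_samples=100, n_features=20, random_state=0)
    clf = FistaClassifier()
    clf.fit(X, y)

    # score is the mean accuracy of predict(X) with respect to y.
    assert clf.score(X, y) == accuracy_score(y, clf.predict(X))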

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:

self
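
For instance, with the estimator nested inside a scikit-learn Pipeline (the step name 'fista' below is arbitrary), parameters are addressed with the <component>__<parameter> form:

    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    from lightning.classification import FistaClassifier

    # Plain parameters on the estimator itself:
    clf = FistaClassifier()
    clf.set_params(alpha=0.01, penalty='l1/l2')

    # Nested parameters through a pipeline use <step>__<parameter>:
    pipe = Pipeline([('scale', StandardScaler()), ('fista', FistaClassifier())])
    pipe.set_params(fista__alpha=0.01, fista__penalty='l1/l2')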