lightning.classification.LinearSVC

class lightning.classification.LinearSVC(C=1.0, loss='hinge', criterion='accuracy', max_iter=1000, tol=0.001, permute=True, shrinking=True, warm_start=False, random_state=None, callback=None, n_calls=100, verbose=0)[source]

Estimator for learning a linear support vector machine by coordinate descent in the dual.

Parameters
  • loss (str, 'hinge', 'squared_hinge') – The loss function to be used.

  • criterion (str, 'accuracy', 'auc') – Whether to optimize for classification accuracy or AUC.

  • C (float) – Weight of the loss term.

  • max_iter (int) – Maximum number of iterations to perform.

  • tol (float) – Tolerance of the stopping criterion.

  • shrinking (bool) – Whether to activate shrinking.

  • warm_start (bool) – Whether to activate warm-start.

  • permute (bool) – Whether to permute the coordinates before cycling over them.

  • callback (callable) – Callback function.

  • n_calls (int) – Frequency with which the callback must be called.

  • random_state (RandomState or int) – The seed of the pseudo random number generator to use.

  • verbose (int) – Verbosity level.

Examples

The following example demonstrates how to learn a classification model:

>>> from sklearn.datasets import fetch_20newsgroups_vectorized
>>> from lightning.classification import LinearSVC
>>> bunch = fetch_20newsgroups_vectorized(subset="all")
>>> X, y = bunch.data, bunch.target
>>> clf = LinearSVC().fit(X, y)
>>> accuracy = clf.score(X, y)
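
The non-default parameters, as well as the decision_function and n_nonzero methods listed below, can be exercised in the same way. A minimal sketch, using a synthetic dataset from sklearn.datasets.make_classification purely for illustration, and taking n_nonzero at face value as a count of nonzero coefficients:

>>> from sklearn.datasets import make_classification
>>> from lightning.classification import LinearSVC
>>> X, y = make_classification(n_samples=200, n_features=50, random_state=0)
>>> clf = LinearSVC(loss='squared_hinge', C=0.5, tol=1e-4, random_state=0)
>>> clf = clf.fit(X, y)
>>> scores = clf.decision_function(X)  # per-sample values of the decision function (confidence scores)
>>> nnz = clf.n_nonzero()              # number of nonzero coefficients in the learned model
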
decision_function(X)
fit(X, y)[source]

Fit model according to X and y.

Parameters
  • X (array-like, shape = [n_samples, n_features]) – Training vectors, where n_samples is the number of samples and n_features is the number of features.

  • y (array-like, shape = [n_samples]) – Target values.

Returns

self – Returns self.

Return type

classifier
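
Since the estimator accepts warm_start, successive calls to fit can be chained, for example when sweeping over values of C. A minimal sketch on synthetic data; the assumption here is that warm_start=True lets each refit start from the previous solution, as the parameter description above suggests:

>>> from sklearn.datasets import make_classification
>>> from lightning.classification import LinearSVC
>>> X, y = make_classification(n_samples=200, n_features=50, random_state=0)
>>> clf = LinearSVC(warm_start=True, random_state=0)
>>> for C in (0.01, 0.1, 1.0):
...     _ = clf.set_params(C=C).fit(X, y)  # each refit may reuse the previous solution
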

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict
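
A minimal sketch: the constructor arguments shown at the top of this page are what get_params returns.

>>> from lightning.classification import LinearSVC
>>> params = LinearSVC(C=0.5).get_params()
>>> params['C']
0.5
>>> params['loss']
'hinge'
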

n_nonzero(percentage=False)
predict(X)
property predict_proba
score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.

Parameters
  • X (array-like of shape (n_samples, n_features)) – Test samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns

score – Mean accuracy of self.predict(X) with respect to y.

Return type

float
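
A minimal sketch on synthetic data; uniform sample weights simply reproduce the unweighted mean accuracy:

>>> import numpy as np
>>> from sklearn.datasets import make_classification
>>> from lightning.classification import LinearSVC
>>> X, y = make_classification(n_samples=200, random_state=0)
>>> clf = LinearSVC(random_state=0).fit(X, y)
>>> acc = clf.score(X, y)                                   # unweighted mean accuracy
>>> same = clf.score(X, y, sample_weight=np.ones(len(y)))   # uniform weights give the same value
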

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance
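
For example, inside a Pipeline the parameters of this estimator are addressed with the <component>__<parameter> form described above. A minimal sketch; the step name 'linearsvc' follows make_pipeline's lowercased-class-name convention:

>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from lightning.classification import LinearSVC
>>> pipe = make_pipeline(StandardScaler(), LinearSVC())
>>> _ = pipe.set_params(linearsvc__C=10.0, linearsvc__loss='squared_hinge')
>>> pipe.get_params()['linearsvc__C']
10.0
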