lightning.classification.KernelSVC

class lightning.classification.KernelSVC(alpha=1.0, solver='cg', max_iter=50, tol=0.001, kernel='linear', gamma=0.1, coef0=1, degree=4, random_state=None, verbose=0, n_jobs=1)[source]

Estimator for learning kernel SVMs by Newton’s method.

Parameters:

alpha : float

Weight of the penalty term.

solver : str, 'cg' | 'dense'

Solver used to compute the Newton step.

max_iter : int

Maximum number of iterations to perform.

tol : float

Tolerance of the stopping criterion.

kernel : "linear" | "poly" | "rbf" | "sigmoid" | "cosine" | "precomputed"

Kernel to use. Default: “linear”

degree : int, default=4

Degree for the poly kernel. Ignored by other kernels.

gamma : float, optional

Kernel coefficient for the rbf and poly kernels. Default: 0.1. Ignored by other kernels.

coef0 : float, optional

Independent term in poly and sigmoid kernels. Ignored by other kernels.

random_state : RandomState or int

The seed of the pseudo random number generator to use.

verbose : int

Verbosity level.

n_jobs : int

Number of jobs to use to compute the kernel matrix.
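
Example (a minimal usage sketch; the toy dataset from scikit-learn's make_classification is assumed purely for illustration):

    from sklearn.datasets import make_classification
    from lightning.classification import KernelSVC

    # Toy binary classification problem (illustrative data only).
    X, y = make_classification(n_samples=200, n_features=20, random_state=0)

    # Kernel SVM with an RBF kernel, fit by Newton's method.
    clf = KernelSVC(alpha=1.0, kernel="rbf", gamma=0.1, max_iter=50, random_state=0)
    clf.fit(X, y)

    print(clf.score(X, y))    # mean training accuracy
    print(clf.n_nonzero())    # number of nonzero coefficients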

Methods

decision_function(X) Return the decision function for test vectors X.
fit(X, y) Fit model according to X and y.
get_params([deep]) Get parameters for this estimator.
n_nonzero([percentage]) Return the number of nonzero coefficients.
predict(X) Predict class labels for samples in X.
score(X, y[, sample_weight]) Returns the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
__init__(alpha=1.0, solver='cg', max_iter=50, tol=0.001, kernel='linear', gamma=0.1, coef0=1, degree=4, random_state=None, verbose=0, n_jobs=1)[source]
decision_function(X)[source]

Return the decision function for test vectors X.

Parameters:

X : array-like, shape = [n_samples, n_features]

Returns:

P : array, shape = [n_samples, n_classes]

Decision function for X
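
For illustration, a short sketch of inspecting the returned decision values (the toy data and parameter choices are assumptions, not part of the API):

    from sklearn.datasets import make_classification
    from lightning.classification import KernelSVC

    X, y = make_classification(n_samples=100, n_features=10, random_state=0)
    clf = KernelSVC(kernel="rbf", gamma=0.1, random_state=0).fit(X, y)

    # One row of decision values per test vector.
    scores = clf.decision_function(X[:5])
    print(scores.shape)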

fit(X, y)[source]

Fit model according to X and y.

Parameters:

X : array-like, shape = [n_samples, n_features]

Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape = [n_samples]

Target values.

Returns:

self : classifier

Returns self.
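
Since the constructor accepts kernel="precomputed", fit can also be given an n_samples x n_samples Gram matrix instead of feature vectors. The sketch below builds such a matrix with scikit-learn's rbf_kernel; passing the training Gram matrix back to score is an assumption about how one would evaluate on the training set in this setting:

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from lightning.classification import KernelSVC

    rng = np.random.RandomState(0)
    X = rng.randn(50, 5)
    y = (X[:, 0] > 0).astype(int)

    # With kernel="precomputed", the X passed to fit is the training Gram matrix.
    K = rbf_kernel(X, gamma=0.1)
    clf = KernelSVC(kernel="precomputed").fit(K, y)
    print(clf.score(K, y))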

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.
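
A quick sketch; the returned mapping simply mirrors the constructor arguments listed above:

    from lightning.classification import KernelSVC

    clf = KernelSVC(kernel="poly", degree=4, coef0=1)
    params = clf.get_params()
    print(params["kernel"], params["degree"])   # poly 4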

score(X, y, sample_weight=None)

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:

X : array-like, shape = (n_samples, n_features)

Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like, shape = [n_samples], optional

Sample weights.

Returns:

score : float

Mean accuracy of self.predict(X) wrt. y.
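
A sketch of computing the mean accuracy on a held-out split (train_test_split from scikit-learn is assumed only to produce the split):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from lightning.classification import KernelSVC

    X, y = make_classification(n_samples=300, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = KernelSVC(kernel="rbf", gamma=0.1, random_state=0).fit(X_train, y_train)
    print(clf.score(X_test, y_test))   # mean accuracy of clf.predict(X_test) w.r.t. y_test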

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:

self
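
A sketch of the nested <component>__<parameter> form inside a scikit-learn Pipeline; the step name "svc" is an arbitrary choice for this example:

    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from lightning.classification import KernelSVC

    pipe = Pipeline([("scale", StandardScaler()), ("svc", KernelSVC(kernel="rbf"))])

    # Update the nested estimator's gamma through the pipeline.
    pipe.set_params(svc__gamma=0.5)
    print(pipe.get_params()["svc__gamma"])   # 0.5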