skglm.penalties.L1_plus_L2

class skglm.penalties.L1_plus_L2(alpha, l1_ratio, positive=False)

`\ell_1 + \ell_2` penalty (aka ElasticNet penalty).
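Concretely, assuming the standard scikit-learn-style ElasticNet parametrization (`alpha` the overall regularization strength, `l1_ratio` the mixing parameter between the `\ell_1` and `\ell_2` terms), the penalty of a coefficient vector `w` reads

    alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio) / 2 * ||w||_2^2)

so that `l1_ratio=1` recovers a pure Lasso penalty and `l1_ratio=0` a pure ridge penalty.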

__init__(alpha, l1_ratio, positive=False)
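A minimal usage sketch, assuming the usual skglm pattern of pairing a penalty with a datafit inside GeneralizedLinearEstimator (the estimator and datafit below are not documented on this page and are shown only to illustrate how the penalty is typically consumed):

>>> from skglm import GeneralizedLinearEstimator
>>> from skglm.datafits import Quadratic
>>> from skglm.penalties import L1_plus_L2
>>> penalty = L1_plus_L2(alpha=0.1, l1_ratio=0.7)
>>> model = GeneralizedLinearEstimator(datafit=Quadratic(), penalty=penalty)
>>> # model.fit(X, y) would then solve an ElasticNet-regularized least-squares problem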

Methods

__init__(alpha, l1_ratio[, positive])

alpha_max(gradient0)

Return the penalization value for which 0 is a solution.

generalized_support(w)

Return a mask with non-zero coefficients.

get_spec()

Specify the numba types of the class attributes.

is_penalized(n_features)

Return a binary mask with the penalized features.

params_to_dict()

Get the parameters to initialize an instance of the class.

prox_1d(value, stepsize, j)

Compute the proximal operator (scaled soft-thresholding); see the sketch after this list.

subdiff_distance(w, grad, ws)

Compute the distance of the negative gradient to the subdifferential at w.

value(w)

Compute the L1 + L2 penalty value.
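As an illustration of the methods above, here is a back-of-the-envelope sketch of prox_1d, value, and alpha_max (the helper names are hypothetical and this is not the library source), assuming the standard ElasticNet parametrization alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio) / 2 * ||w||_2^2):

import numpy as np

def elastic_net_prox_1d(value, stepsize, alpha, l1_ratio):
    # Scaled soft-thresholding: soft-threshold by the l1 part,
    # then shrink by the l2 part.
    thresh = stepsize * alpha * l1_ratio
    shrunk = np.sign(value) * max(abs(value) - thresh, 0.0)
    return shrunk / (1.0 + stepsize * alpha * (1.0 - l1_ratio))

def elastic_net_value(w, alpha, l1_ratio):
    # Penalty value for a coefficient vector w.
    return alpha * (l1_ratio * np.sum(np.abs(w))
                    + (1.0 - l1_ratio) / 2.0 * np.sum(w ** 2))

def elastic_net_alpha_max(gradient0, l1_ratio):
    # Smallest alpha for which w = 0 is optimal: at 0 the l2 part
    # contributes no gradient, so only the l1 threshold matters.
    return np.max(np.abs(gradient0)) / l1_ratio

For instance, elastic_net_prox_1d(1.0, 1.0, 0.5, 1.0) reduces to plain soft-thresholding of 1.0 at level 0.5 and returns 0.5, since with l1_ratio=1 the `\ell_2` shrinkage factor is 1.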