
Risk Control

Risk-controlling prediction methods.

Controllers

mapie.risk_control.MultiLabelClassificationController

MultiLabelClassificationController(
    predict_function: Callable[
        [ArrayLike], Union[list[NDArray], NDArray]
    ],
    risk: str = "recall",
    method: Optional[str] = None,
    target_level: Union[float, Iterable[float]] = 0.9,
    confidence_level: Optional[float] = None,
    rcps_bound: Optional[Union[str, None]] = None,
    predict_params: ArrayLike = np.arange(0, 1, 0.01),
    n_jobs: Optional[int] = None,
    random_state: Optional[Union[int, RandomState]] = None,
    verbose: int = 0,
)

Prediction sets for multilabel classification.

This class implements conformal risk-control methods for estimating prediction sets in multilabel classification. Under the hypothesis of exchangeability, it guarantees that the controlled metric (e.g., the recall) is at least 1 - alpha, where alpha is derived from the user-specified target level. For now, the supported risks are recall and precision.
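To make the thresholding mechanism concrete, here is a minimal, self-contained sketch (plain Python, not MAPIE internals): a prediction set keeps every class whose probability exceeds a threshold λ, and lowering λ enlarges the set, which can only increase recall.

```python
def prediction_set(probas, lam):
    """Classes whose predicted probability exceeds the threshold lambda."""
    return [int(p > lam) for p in probas]

def recall(y_true, y_set):
    """Fraction of true labels captured by the prediction set."""
    n_true = sum(y_true)
    if n_true == 0:
        return 1.0  # convention: no positive label to miss
    hits = sum(t * s for t, s in zip(y_true, y_set))
    return hits / n_true

probas = [0.9, 0.4, 0.6]   # one sample, three classes
y_true = [1, 0, 1]
assert recall(y_true, prediction_set(probas, 0.5)) == 1.0  # large set, full recall
assert recall(y_true, prediction_set(probas, 0.7)) == 0.5  # smaller set, lower recall
```

Risk control then amounts to picking λ so that, on average and with a statistical guarantee, 1 - recall stays below alpha.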

PARAMETER DESCRIPTION
predict_function

predict_proba method of a fitted multi-label classifier. It can return either:

  • a list of arrays of length n_classes, where each array is of shape (n_samples, 2) with probabilities of the negative and positive class (as output by MultiOutputClassifier), or
  • an ndarray of shape (n_samples, n_classes) containing positive probabilities, or of shape (n_samples, n_classes, 2) containing negative and positive probabilities (assuming the last dimension is [neg, pos]).

TYPE: Callable[[ArrayLike], Union[list[NDArray], NDArray]]

risk

The risk metric to control ("precision" or "recall"). The selected risk determines which conformal prediction methods are valid:

  • "precision" implies that method must be "ltt"
  • "recall" implies that method can be "crc" (default) or "rcps"

TYPE: str DEFAULT: 'recall'

method

Method to use for the prediction. If risk is "recall", the method can be either "crc" (default) or "rcps". If risk is "precision", the only valid method is "ltt". If None, the default is "crc" for recall and "ltt" for precision.

TYPE: Optional[str] DEFAULT: None

target_level

The minimum performance level for the metric. Must be between 0 and 1. Can be a float or any iterable of floats. By default 0.9.

TYPE: Optional[Union[float, Iterable[float]]] DEFAULT: 0.9

confidence_level

The level of certainty at which the Upper Confidence Bound of the average risk is computed. Can be a float or None. When using method="rcps" or method="ltt" (precision control), it cannot be None and must lie in (0, 1). A higher confidence_level produces larger (more conservative) prediction sets. By default None.

TYPE: Optional[float] DEFAULT: None

rcps_bound

Method used to compute the Upper Confidence Bound of the average risk. Only necessary with the RCPS method. If provided when using CRC or LTT it is ignored and a warning is raised. By default None.

TYPE: Optional[str] DEFAULT: None
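As an illustration of what such an Upper Confidence Bound looks like, here is a minimal sketch of a Hoeffding-style bound for a risk bounded in [0, 1], one classical choice in the RCPS setting [1]. This is illustrative plain Python, not MAPIE's implementation, and the exact bounds MAPIE accepts may differ.

```python
import math

def hoeffding_ucb(r_hat, n, delta):
    """Hoeffding upper confidence bound on the mean of a [0, 1]-bounded
    risk: holds with probability at least 1 - delta over n samples."""
    return r_hat + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# The bound tightens as the calibration set grows...
assert hoeffding_ucb(0.08, 2000, 0.1) < hoeffding_ucb(0.08, 500, 0.1)
# ...and loosens as the requested confidence increases (smaller delta).
assert hoeffding_ucb(0.08, 500, 0.01) > hoeffding_ucb(0.08, 500, 0.1)
```

RCPS then keeps the thresholds whose bound, not just whose empirical risk, stays below alpha.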

predict_params

Array of parameters (thresholds λ) to consider for controlling the risk. Defaults to np.arange(0, 1, 0.01). Length is used to set n_predict_params.

TYPE: Optional[ArrayLike] DEFAULT: arange(0, 1, 0.01)

n_jobs

Number of jobs for parallel processing using joblib via the "loky" backend. For the moment, parallel processing is disabled. If -1, all CPUs are used. If 1 is given, no parallel computing code is used at all, which is useful for debugging. For n_jobs below -1, (n_cpus + 1 + n_jobs) CPUs are used. None is a marker for "unset" that is interpreted as n_jobs=1 (sequential execution).

By default None.

TYPE: Optional[int] DEFAULT: None

random_state

Pseudo random number generator state used for random uniform sampling to evaluate quantiles and prediction sets. Pass an int for reproducible output across multiple function calls.

By default None.

TYPE: Optional[Union[int, RandomState]] DEFAULT: None

verbose

The verbosity level, used with joblib for parallel processing. For the moment, parallel processing is disabled. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. Above 50, the output is sent to stdout.

By default 0.

TYPE: int DEFAULT: 0

ATTRIBUTE DESCRIPTION
valid_methods

List of all valid methods. Either CRC or RCPS

TYPE: List[str]

valid_bounds

List of all valid bound computations, for RCPS only.

TYPE: List[Union[str, None]]

n_predict_params

Number of thresholds on which we compute the risk.

TYPE: int

predict_params

Array of parameters (noted λ in [3]) to consider for controlling the risk.

TYPE: NDArray

risks

The risk for each observation at each threshold.

TYPE: ArrayLike of shape (n_samples_cal, n_predict_params)

r_hat

Average risk for each predict_param.

TYPE: ArrayLike of shape (n_predict_params,)

r_hat_plus

Upper confidence bound for each predict_param, computed with different bounds. Only relevant when method="rcps".

TYPE: ArrayLike of shape (n_predict_params,)

best_predict_param

Optimal threshold for a given alpha.

TYPE: NDArray of shape (n_alpha,)
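As an illustration of how such a threshold can be selected, here is a sketch of the CRC rule from [2] for a loss bounded by 1: keep the largest λ whose inflated empirical risk stays below alpha. The toy grid and values below are hypothetical, and this is not MAPIE's implementation.

```python
def crc_best_lambda(lambdas, r_hat, n, alpha):
    """Largest threshold whose inflated empirical risk satisfies the
    CRC condition (n * r_hat + 1) / (n + 1) <= alpha."""
    valid = [lam for lam, r in zip(lambdas, r_hat)
             if (n * r + 1.0) / (n + 1.0) <= alpha]
    return max(valid) if valid else None

lambdas = [0.1, 0.3, 0.5, 0.7]
r_hat = [0.00, 0.02, 0.10, 0.30]   # empirical risk (1 - recall) grows with the threshold
assert crc_best_lambda(lambdas, r_hat, n=100, alpha=0.1) == 0.3
```

Note the +1/(n+1) inflation: at λ=0.5 the raw empirical risk 0.10 equals alpha, but the inflated value (11/101 ≈ 0.109) exceeds it, so the more conservative λ=0.3 is chosen.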

valid_index

List of lists of indices (one list per alpha) of the thresholds that satisfy FWER control. This attribute is computed when the user wants to control the precision score. Only relevant when risk="precision", as it uses the learn then test (LTT) procedure.

TYPE: List[List[Any]]

valid_predict_params

List of lists (one per alpha) of all thresholds that satisfy FWER control. This attribute is computed when the user wants to control the precision score. Only relevant when risk="precision", as it uses the learn then test (LTT) procedure.

TYPE: List[List[Any]]
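The LTT selection step behind these attributes can be sketched as follows: each threshold receives a p-value for the null hypothesis "risk exceeds alpha", and a Bonferroni correction keeps the thresholds whose p-value falls below delta / K. A simple Hoeffding p-value is used here for illustration; [3] relies on tighter variants such as Hoeffding-Bentkus.

```python
import math

def hoeffding_p_value(r_hat, n, alpha):
    """P-value for the null 'true risk > alpha' given an empirical
    risk r_hat over n samples (loss bounded in [0, 1])."""
    return math.exp(-2.0 * n * max(alpha - r_hat, 0.0) ** 2)

def ltt_valid_indices(r_hats, n, alpha, delta):
    """Indices surviving the Bonferroni-corrected level delta / K."""
    K = len(r_hats)
    return [j for j, r in enumerate(r_hats)
            if hoeffding_p_value(r, n, alpha) <= delta / K]

r_hats = [0.02, 0.05, 0.12, 0.30]  # empirical risks, one per threshold
assert ltt_valid_indices(r_hats, n=1000, alpha=0.1, delta=0.1) == [0, 1]
```

Thresholds whose empirical risk is at or above alpha get a p-value of 1 and can never be selected; the others survive only if the evidence is strong enough after correction.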

sigma_init

First variance in the sigma_hat array. The default value is the same as in the paper implementation [1].

TYPE: Optional[float]

References

[1] Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, and Michael I. Jordan. Distribution-free, risk-controlling prediction sets. CoRR, abs/2101.02703, 2021. URL https://arxiv.org/abs/2101.02703

[2] Angelopoulos, Anastasios N., Stephen, Bates, Adam, Fisch, Lihua, Lei, and Tal, Schuster. "Conformal Risk Control." (2022).

[3] Angelopoulos, A. N., Bates, S., Candès, E. J., Jordan, M. I., & Lei, L. (2021). Learn then test: "Calibrating predictive algorithms to achieve risk control".

Examples:

>>> import numpy as np
>>> from sklearn.multioutput import MultiOutputClassifier
>>> from sklearn.linear_model import LogisticRegression
>>> from mapie.risk_control import MultiLabelClassificationController
>>> X_toy = np.arange(4).reshape(-1, 1)
>>> y_toy = np.stack([[1, 0, 1], [1, 0, 0], [0, 1, 1], [0, 1, 0]])
>>> clf = MultiOutputClassifier(LogisticRegression()).fit(X_toy, y_toy)
>>> mapie_clf = MultiLabelClassificationController(predict_function=clf.predict_proba, target_level=0.7).calibrate(X_toy, y_toy)
>>> y_pi_mapie = mapie_clf.predict(X_toy)
>>> print(y_pi_mapie[:, :, 0])
[[ True False  True]
 [ True False  True]
 [False  True  True]
 [False  True False]]
Source code in mapie/risk_control/multi_label_classification.py
def __init__(
    self,
    predict_function: Callable[[ArrayLike], Union[list[NDArray], NDArray]],
    risk: str = "recall",
    method: Optional[str] = None,
    target_level: Union[float, Iterable[float]] = 0.9,
    confidence_level: Optional[float] = None,
    rcps_bound: Optional[Union[str, None]] = None,
    predict_params: ArrayLike = np.arange(0, 1, 0.01),
    n_jobs: Optional[int] = None,
    random_state: Optional[Union[int, np.random.RandomState]] = None,
    verbose: int = 0,
) -> None:
    self._predict_function = predict_function
    self._risk_name = risk
    self._risk = self._check_and_convert_risk(risk)
    self.method = method
    self._check_method()

    alpha = []
    for target in (
        target_level if isinstance(target_level, Iterable) else [target_level]
    ):
        assert self._risk.higher_is_better, (
            "Current implemented risks (precision and recall) are defined such that "
            "'higher is better'. The 'lower is better' case is not implemented."
        )
        alpha.append(1 - target)  # for higher is better only

    self._alpha = np.array(_check_alpha(alpha))

    self._check_confidence_level(confidence_level)
    self._delta = 1 - confidence_level if confidence_level is not None else None

    self._check_bound(rcps_bound)
    self._rcps_bound = rcps_bound

    self.predict_params = np.asarray(predict_params)
    self.n_predict_params = len(self.predict_params)

    self.n_jobs = n_jobs
    self.random_state = random_state
    self.verbose = verbose
    self._check_parameters()

    self._is_fitted = False

is_fitted property

is_fitted

Returns True if the controller is fitted

compute_risks

compute_risks(
    X: ArrayLike,
    y: ArrayLike,
    _refit: Optional[bool] = False,
) -> MultiLabelClassificationController

Fit the base estimator or use the fitted base estimator on batch data to compute risks. All the computed risks will be concatenated each time the compute_risks method is called.
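This batch accumulation pattern can be sketched with a toy accumulator (illustrative only; RiskAccumulator is a hypothetical stand-in, not a MAPIE class):

```python
class RiskAccumulator:
    """Stacks per-sample risk rows across batches, like repeated
    compute_risks calls, with an optional refit that starts over."""

    def __init__(self):
        self.risks = []   # rows: one per calibration sample

    def add_batch(self, batch_risks, refit=False):
        if refit:
            self.risks = []   # discard previously accumulated batches
        self.risks.extend(batch_risks)

acc = RiskAccumulator()
acc.add_batch([[0.0, 0.1], [0.2, 0.3]])   # batch 1: 2 samples, 2 thresholds
acc.add_batch([[0.1, 0.0]])               # batch 2: 1 more sample
assert len(acc.risks) == 3
acc.add_batch([[0.5, 0.5]], refit=True)   # refit discards earlier batches
assert acc.risks == [[0.5, 0.5]]
```

Once all batches are accumulated, a single call to compute_best_predict_param works on the full risk matrix.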

PARAMETER DESCRIPTION
X

Training data.

TYPE: ArrayLike of shape (n_samples, n_features)

y

Training labels.

TYPE: NDArray of shape (n_samples, n_classes)

_refit

Whether or not to refit from scratch.

By default False

TYPE: Optional[bool] DEFAULT: False

RETURNS DESCRIPTION
MultiLabelClassificationController

The model itself.

Source code in mapie/risk_control/multi_label_classification.py
def compute_risks(
    self,
    X: ArrayLike,
    y: ArrayLike,
    _refit: Optional[bool] = False,
) -> MultiLabelClassificationController:
    """
    Fit the base estimator or use the fitted base estimator on
    batch data to compute risks. All the computed risks will be concatenated each
    time the compute_risks method is called.

    Parameters
    ----------
    X : ArrayLike of shape (n_samples, n_features)
        Training data.

    y : NDArray of shape (n_samples, n_classes)
        Training labels.

    _refit: bool
        Whether or not to refit from scratch.

        By default False

    Returns
    -------
    MultiLabelClassificationController
        The model itself.
    """
    # Checks
    first_call = self._check_compute_risks_first_call()

    X, y = indexable(X, y)

    y = cast(NDArray, y)
    X = cast(NDArray, X)

    self._check_all_labelled(y)
    self.n_samples_ = _num_samples(X)

    # Compute risks
    y_pred_proba = self._predict_function(X)
    y_pred_proba_array = self._transform_pred_proba(y_pred_proba)

    n_lambdas = len(self.predict_params)
    n_samples = len(y_pred_proba_array)

    y_pred_proba_array_repeat = np.repeat(y_pred_proba_array, n_lambdas, axis=2)
    y_pred = (y_pred_proba_array_repeat > self.predict_params).astype(int)

    risk = np.zeros((n_samples, n_lambdas))
    for index_sample in range(n_samples):
        for index_lambda in range(n_lambdas):
            risk[index_sample, index_lambda], _ = (
                self._risk.get_value_and_effective_sample_size(
                    y[index_sample, :], y_pred[index_sample, :, index_lambda]
                )
            )

    if first_call or _refit:
        self._risks = risk
    else:
        self._risks = np.vstack((self._risks, risk))

    return self

compute_best_predict_param

compute_best_predict_param() -> (
    MultiLabelClassificationController
)

Compute optimal predict_params based on the computed risks.

Source code in mapie/risk_control/multi_label_classification.py
def compute_best_predict_param(self) -> MultiLabelClassificationController:
    """
    Compute optimal predict_params based on the computed risks.
    """
    if self._risk == precision:
        self.n_obs = len(self._risks)
        self.r_hat = self._risks.mean(axis=0)
        self.valid_index, _ = ltt_procedure(
            np.expand_dims(self.r_hat, axis=0),
            np.expand_dims(self._alpha, axis=0),
            cast(float, self._delta),
            np.expand_dims(np.array([self.n_obs]), axis=0),
        )
        self.valid_predict_params = []
        for index_list in self.valid_index:
            self.valid_predict_params.append(self.predict_params[index_list])
        check_valid_ltt_params_index(
            predict_params=self.predict_params,
            valid_index=self.valid_index,
            alpha=self._alpha,
        )
        self.best_predict_param, _ = find_precision_best_predict_param(
            self.r_hat, self.valid_index, self.predict_params
        )
    elif self._risk == recall:
        self.r_hat, self.r_hat_plus = get_r_hat_plus(
            self._risks,
            self.predict_params,
            self.method,
            self._rcps_bound,
            self._delta,
            self.sigma_init,
        )
        self.best_predict_param = find_best_predict_param(
            self.predict_params, self.r_hat_plus, self._alpha
        )
    else:
        raise NotImplementedError(
            "risk not implemented. Only 'precision' and 'recall' are currently supported."
        )
    self._is_fitted = True

    return self

calibrate

calibrate(
    X: ArrayLike, y: ArrayLike
) -> MultiLabelClassificationController

Use the fitted base estimator to compute risks and predict_params. Note that for high dimensional data, you can instead use the compute_risks method to compute risks batch by batch, followed by compute_best_predict_param.

Parameters

X: ArrayLike of shape (n_samples, n_features) Training data.

y: NDArray of shape (n_samples, n_classes) Training labels.

Returns

MultiLabelClassificationController The model itself.

Source code in mapie/risk_control/multi_label_classification.py
def calibrate(
    self, X: ArrayLike, y: ArrayLike
) -> MultiLabelClassificationController:
    """
     Use the fitted base estimator to compute risks and predict_params.
     Note that for high dimensional data, you can instead use the compute_risks
     method to compute risks batch by batch, followed by compute_best_predict_param.

     Parameters
     ----------
     X: ArrayLike of shape (n_samples, n_features)
         Training data.

     y: NDArray of shape (n_samples, n_classes)
         Training labels.

     Returns
     -------
    MultiLabelClassificationController
         The model itself.
    """

    self.compute_risks(X, y, _refit=True)
    self.compute_best_predict_param()

    return self

predict

predict(X: ArrayLike) -> NDArray

Prediction sets on new samples based on the target risk level. Prediction sets for a given alpha are deduced from the computed risks.

PARAMETER DESCRIPTION
X

TYPE: ArrayLike

RETURNS DESCRIPTION
NDArray of shape (n_samples, n_classes, n_alpha)
Source code in mapie/risk_control/multi_label_classification.py
def predict(
    self,
    X: ArrayLike,
) -> NDArray:
    """
    Prediction sets on new samples based on the target risk level.
    Prediction sets for a given `alpha` are deduced from the computed
    risks.

    Parameters
    ----------
    X: ArrayLike of shape (n_samples, n_features)

    Returns
    -------
    NDArray of shape (n_samples, n_classes, n_alpha)
    """

    check_is_fitted(self)

    # Estimate prediction sets
    y_pred_proba = self._predict_function(X)
    y_pred_proba_array = self._transform_pred_proba(y_pred_proba)

    y_pred_proba_array = np.repeat(y_pred_proba_array, len(self._alpha), axis=2)
    y_pred_proba_array = (
        y_pred_proba_array > self.best_predict_param[np.newaxis, np.newaxis, :]
    )
    return y_pred_proba_array

mapie.risk_control.SemanticSegmentationController

SemanticSegmentationController(
    predict_function: Callable[
        [ArrayLike], Union[list[NDArray], NDArray]
    ],
    risk: str = "recall",
    method: Optional[str] = None,
    target_level: Union[float, Iterable[float]] = 0.9,
    confidence_level: Optional[float] = None,
    rcps_bound: Optional[Union[str, None]] = None,
    predict_params: ArrayLike = np.arange(0, 1, 0.01),
    n_jobs: Optional[int] = None,
    random_state: Optional[Union[int, RandomState]] = None,
    verbose: int = 0,
)

Bases: MultiLabelClassificationController

Risk controller for semantic segmentation tasks, inheriting from MultiLabelClassificationController.

Source code in mapie/risk_control/multi_label_classification.py
def __init__(
    self,
    predict_function: Callable[[ArrayLike], Union[list[NDArray], NDArray]],
    risk: str = "recall",
    method: Optional[str] = None,
    target_level: Union[float, Iterable[float]] = 0.9,
    confidence_level: Optional[float] = None,
    rcps_bound: Optional[Union[str, None]] = None,
    predict_params: ArrayLike = np.arange(0, 1, 0.01),
    n_jobs: Optional[int] = None,
    random_state: Optional[Union[int, np.random.RandomState]] = None,
    verbose: int = 0,
) -> None:
    self._predict_function = predict_function
    self._risk_name = risk
    self._risk = self._check_and_convert_risk(risk)
    self.method = method
    self._check_method()

    alpha = []
    for target in (
        target_level if isinstance(target_level, Iterable) else [target_level]
    ):
        assert self._risk.higher_is_better, (
            "Current implemented risks (precision and recall) are defined such that "
            "'higher is better'. The 'lower is better' case is not implemented."
        )
        alpha.append(1 - target)  # for higher is better only

    self._alpha = np.array(_check_alpha(alpha))

    self._check_confidence_level(confidence_level)
    self._delta = 1 - confidence_level if confidence_level is not None else None

    self._check_bound(rcps_bound)
    self._rcps_bound = rcps_bound

    self.predict_params = np.asarray(predict_params)
    self.n_predict_params = len(self.predict_params)

    self.n_jobs = n_jobs
    self.random_state = random_state
    self.verbose = verbose
    self._check_parameters()

    self._is_fitted = False

predict

predict(X: ArrayLike) -> NDArray

Prediction sets on new samples based on the target risk level. Prediction sets for a given alpha are deduced from the computed risks.

PARAMETER DESCRIPTION
X

TYPE: ArrayLike

RETURNS DESCRIPTION
NDArray of shape (n_samples, n_classes, n_alpha)
Source code in mapie/risk_control/semantic_segmentation.py
def predict(
    self,
    X: ArrayLike,
) -> NDArray:
    """
    Prediction sets on new samples based on the target risk level.
    Prediction sets for a given `alpha` are deduced from the computed
    risks.

    Parameters
    ----------
    X: ArrayLike of shape (n_samples, n_features)

    Returns
    -------
    NDArray of shape (n_samples, n_classes, n_alpha)
    """

    check_is_fitted(self)

    # Estimate prediction sets
    y_pred_proba = self._predict_function(X)
    y_pred_proba_array = self._transform_pred_proba(y_pred_proba, ravel=False)

    y_pred_proba_array = np.repeat(y_pred_proba_array, len(self._alpha), axis=1)
    y_pred_proba_array = (
        y_pred_proba_array
        > self.best_predict_param[np.newaxis, :, np.newaxis, np.newaxis]
    )
    return y_pred_proba_array

mapie.risk_control.BinaryClassificationController

BinaryClassificationController(
    predict_function: Callable[[ArrayLike], NDArray],
    risk: Risk,
    target_level: Union[float, List[float]],
    confidence_level: float = 0.9,
    best_predict_param_choice: Union[
        Literal["auto"], Risk_str, BinaryClassificationRisk
    ] = "auto",
    list_predict_params: NDArray = np.linspace(
        0, 0.99, 100
    ),
    fwer_method: Union[
        FWER_METHODS, FWERProcedure
    ] = "bonferroni",
)

Controls the risk or performance of a binary classifier.

BinaryClassificationController finds the decision thresholds of a binary classifier that statistically guarantee a risk to be below a target level (the risk is "controlled"). It can be used to control a performance metric as well, such as the precision. In that case, the thresholds guarantee that the performance is above a target level.

Usage:

  1. Instantiate a BinaryClassificationController, providing the predict_proba method of your binary classifier
  2. Call the calibrate method to find the thresholds
  3. Use the predict method to predict using the best threshold

Note: for a given model, calibration dataset, target level, and confidence level, there may not be any threshold controlling the risk.

PARAMETER DESCRIPTION
predict_function

predict_proba method of a fitted binary classifier. Its output must be of shape (len(X), 2).

Or, in the general case of multi-dimensional parameters (thresholds), a function that takes (X, *params) and outputs 0 or 1. This can be useful, e.g., to ensemble multiple binary classifiers with a different threshold for each classifier. In that case, list_predict_params must be provided.

TYPE: Callable[[ArrayLike], NDArray]
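A hypothetical predict_function of the second kind might look like this (the names ensemble_predict, t1, and t2 are illustrative, not part of MAPIE):

```python
def ensemble_predict(X, t1, t2):
    """Combine two per-model scores with one threshold each, returning
    a hard 0/1 prediction, as expected in the multi-dimensional case."""
    return [int(s1 > t1 and s2 > t2) for s1, s2 in X]

scores = [(0.9, 0.8), (0.6, 0.2)]   # (score_model_1, score_model_2) pairs
assert ensemble_predict(scores, 0.5, 0.5) == [1, 0]
```

Here each row of list_predict_params would be a pair (t1, t2), so its shape would be (n_params, 2).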

risk

The risk or performance metric to control. Valid options:

  • An existing risk defined in mapie.risk_control accessible through its string equivalent: "precision", "recall", "accuracy", "fpr" for false positive rate, or "predicted_positive_fraction".
  • A custom instance of BinaryClassificationRisk object

Can be a list of risks in the case of multi risk control.

TYPE: Union[BinaryClassificationRisk, str, List[Union[BinaryClassificationRisk, str]]]

target_level

The maximum risk level (or minimum performance level). Must be between 0 and 1. Can be a list of target levels in the case of multi risk control (length should match the length of the risks list).

TYPE: Union[float, List[float]]

confidence_level

The confidence level with which the risk (or performance) is controlled. Must be between 0 and 1. See the documentation for detailed explanations.

TYPE: float DEFAULT: 0.9

best_predict_param_choice

default="auto" How to select the best threshold from the valid thresholds that control the risk (or performance). The BinaryClassificationController will try to minimize (or maximize) a secondary objective. Valid options:

  • "auto" (default). For mono risk defined in mapie.risk_control, an automatic choice is made. For multi risk, we use the first risk in the list.
  • An existing risk defined in mapie.risk_control accessible through

its string equivalent: "precision", "recall", "accuracy", "fpr" for false positive rate, or "predicted_positive_fraction". - A custom instance of BinaryClassificationRisk object

TYPE: (Union['auto', BinaryClassificationRisk, str],) DEFAULT: 'auto'

list_predict_params

The set of parameters (noted λ in [1]) to consider for controlling the risk (or performance). When predict_function is a predict_proba method, the shape is (n_params,) and the parameter values are used to threshold the probabilities. When predict_function is a general function with multi-dimensional parameters (λ) that outputs 0 or 1, the shape is (n_params, params_dim). Note that performance degrades when len(list_predict_params) is large, as this length is used by the Bonferroni correction [1].

TYPE: NDArray DEFAULT: np.linspace(0, 0.99, 100)

fwer_method

Method used to control the family-wise error rate (FWER).

Supported methods:

  • "bonferroni" : Classical Bonferroni correction. This is the default method. It is valid in all settings but can be conservative, especially when the number of tested parameters is large.
  • "fixed_sequence" : Fixed Sequence Testing (FST) with a single start. Users can use multi-start by instantiating FWERFixedSequenceTesting with any desired number of starts and passing the instance to control_fwer.
  • "bonferroni_holm" : Sequential Graphical Testing corresponding to the Bonferroni–Holm procedure. Suitable for general settings.
  • "split_fixed_sequence" : Split Fixed Sequence Testing (SFST).

TYPE: Union[FWER_METHODS, FWERProcedure] DEFAULT: 'bonferroni'
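The contrast between Bonferroni and fixed sequence testing can be sketched in a few lines: FST tests the parameters in a pre-specified order at the FULL level delta and stops at the first failure, avoiding the delta / K penalty when the ordering is informative. This is an illustrative sketch, not MAPIE's FWER machinery.

```python
def fixed_sequence_valid(p_values_in_order, delta):
    """Indices of hypotheses rejected by single-start fixed sequence
    testing: walk the pre-specified order at full level delta and stop
    at the first non-rejection."""
    valid = []
    for j, p in enumerate(p_values_in_order):
        if p <= delta:
            valid.append(j)
        else:
            break   # stop at the first hypothesis that fails
    return valid

# The fourth p-value would pass in isolation, but the sequence already
# stopped at the third one.
assert fixed_sequence_valid([0.001, 0.02, 0.2, 0.01], delta=0.05) == [0, 1]
```

Bonferroni would instead have tested every parameter, but each at the harsher level delta / 4.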

ATTRIBUTE DESCRIPTION
valid_predict_params

The valid thresholds that control the risk (or performance). Use the calibrate method to compute these.

TYPE: NDArray

best_predict_param

The best threshold that controls the risk (or performance). It is a tuple if multi-dimensional parameters are used. Use the calibrate method to compute it.

TYPE: Optional[Union[float, Tuple[float, ...]]]

p_values

P-values associated with each tested parameter in list_predict_params. In the multi-risk setting, the value corresponds to the maximum over the tested risks.

TYPE: NDArray

Examples:

>>> import numpy as np
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from mapie.risk_control import BinaryClassificationController, precision
>>> X, y = make_classification(
...     n_features=2,
...     n_redundant=0,
...     n_informative=2,
...     n_clusters_per_class=1,
...     n_classes=2,
...     random_state=42,
...     class_sep=2.0
... )
>>> X_train, X_temp, y_train, y_temp = train_test_split(
...     X, y, test_size=0.4, random_state=42
... )
>>> X_calib, X_test, y_calib, y_test = train_test_split(
...     X_temp, y_temp, test_size=0.1, random_state=42
... )
>>> clf = LogisticRegression().fit(X_train, y_train)
>>> controller = BinaryClassificationController(
...     predict_function=clf.predict_proba,
...     risk=precision,
...     target_level=0.6
... )
>>> predictions = controller.calibrate(X_calib, y_calib).predict(X_test)
References

[1] Angelopoulos, Anastasios N., Stephen, Bates, Emmanuel J. Candès, et al. "Learn Then Test: Calibrating Predictive Algorithms to Achieve Risk Control." (2022)

Source code in mapie/risk_control/binary_classification.py
def __init__(
    self,
    predict_function: Callable[[ArrayLike], NDArray],
    risk: Risk,
    target_level: Union[float, List[float]],
    confidence_level: float = 0.9,
    best_predict_param_choice: Union[
        Literal["auto"], Risk_str, BinaryClassificationRisk
    ] = "auto",
    list_predict_params: NDArray = np.linspace(0, 0.99, 100),
    fwer_method: Union[FWER_METHODS, FWERProcedure] = "bonferroni",
):
    self.is_multi_risk = self._check_if_multi_risk_control(risk, target_level)
    self._predict_function = predict_function
    risk_list = risk if isinstance(risk, list) else [risk]
    try:
        self._risk = [
            BinaryClassificationController.risk_choice_map[risk]
            if isinstance(risk, str)
            else risk
            for risk in risk_list
        ]
    except KeyError as e:
        raise ValueError(
            "When risk is provided as a string, it must be one of: "
            f"{list(BinaryClassificationController.risk_choice_map.keys())}"
        ) from e
    target_level_list = (
        target_level if isinstance(target_level, list) else [target_level]
    )
    self._alpha = self._convert_target_level_to_alpha(target_level_list)
    self._delta = 1 - confidence_level

    self._best_predict_param_choice = self._set_best_predict_param_choice(
        best_predict_param_choice
    )

    self._predict_params = list_predict_params
    self.is_multi_dimensional_param = self._check_if_multi_dimensional_param(
        self._predict_params
    )
    self.fwer_method = self._check_fwer_method(fwer_method)
    self._learned_fixed_sequence: Optional[NDArray[Any]] = None

    self.valid_predict_params: NDArray = np.array([])
    self.best_predict_param: Optional[Union[float, Tuple[float, ...]]] = None
    self.p_values: Optional[NDArray] = None

calibrate

calibrate(
    X_calibrate: ArrayLike, y_calibrate: ArrayLike
) -> BinaryClassificationController

Calibrate the BinaryClassificationController. Sets attributes valid_predict_params and best_predict_param (if the risk or performance can be controlled at the target level).

PARAMETER DESCRIPTION
X_calibrate

Features of the calibration set.

TYPE: ArrayLike

y_calibrate

Binary labels of the calibration set.

TYPE: ArrayLike

RETURNS DESCRIPTION
BinaryClassificationController

The calibrated controller instance.

Notes

When using fwer_method="split_fixed_sequence", the learning step must be performed separately on independent data:

  1. bcc.learn_fixed_sequence_order(X_learn, y_learn)
  2. bcc.calibrate(X_calibrate, y_calibrate)

Using the same data for both steps would invalidate guarantees.

Source code in mapie/risk_control/binary_classification.py
def calibrate(  # pragma: no cover
    self, X_calibrate: ArrayLike, y_calibrate: ArrayLike
) -> BinaryClassificationController:
    """
    Calibrate the BinaryClassificationController.
    Sets attributes valid_predict_params and best_predict_param (if the risk
    or performance can be controlled at the target level).

    Parameters
    ----------
    X_calibrate : ArrayLike
        Features of the calibration set.

    y_calibrate : ArrayLike
        Binary labels of the calibration set.

    Returns
    -------
    BinaryClassificationController
        The calibrated controller instance.

    Notes
    -----
    When using `fwer_method="split_fixed_sequence"`,
    the learning step must be performed separately on independent data:

    1. bcc.learn_fixed_sequence_order(X_learn, y_learn)
    2. bcc.calibrate(X_calibrate, y_calibrate)

    Using the same data for both steps would invalidate guarantees.
    """
    y_calibrate_ = np.asarray(y_calibrate, dtype=int)

    original_params = self._predict_params
    if self.fwer_method == "split_fixed_sequence":
        if self._learned_fixed_sequence is None:
            raise ValueError(
                "You must call 'learn_fixed_sequence_order' before 'calibrate' "
                "when using fwer_method='split_fixed_sequence'."
            )
        self._predict_params = self._learned_fixed_sequence

    predictions_per_param = self._get_predictions_per_param(
        X_calibrate, self._predict_params, is_calibration_step=True
    )

    risk_values, eff_sample_sizes = self._get_risk_values_and_eff_sample_sizes(
        y_calibrate_, predictions_per_param, self._risk
    )
    (valid_index, p_values) = ltt_procedure(
        risk_values,
        np.expand_dims(self._alpha, axis=1),
        self._delta,
        eff_sample_sizes,
        True,
        fwer_method=self.fwer_method,
    )
    valid_params_index = valid_index[0]

    self.valid_predict_params = self._predict_params[valid_params_index]

    check_valid_ltt_params_index(
        predict_params=self._predict_params, valid_index=self.valid_predict_params
    )

    if len(self.valid_predict_params) == 0:
        self.best_predict_param = None
    else:
        self._set_best_predict_param(
            y_calibrate_,
            predictions_per_param,
            valid_params_index,
        )

    self.p_values = p_values
    self._predict_params = original_params

    return self

learn_fixed_sequence_order

learn_fixed_sequence_order(
    X_learn: ArrayLike,
    y_learn: ArrayLike,
    beta_grid: NDArray = np.logspace(-25, 0, 1000),
    binary: bool = False,
) -> BinaryClassificationController

Learn an ordered sequence of prediction parameters for split fixed-sequence FWER control.

This method performs the learning step of split fixed-sequence testing. It must be called before calibrate when fwer_method="split_fixed_sequence".

The data provided here must be independent from the calibration data used later in calibrate. Using the same data would invalidate the statistical guarantees.

A typical workflow is to split your calibration dataset:

  • one subset for learning the parameter order
  • one subset for calibration

For each value in beta_grid, the parameter whose p-value vector is closest to the constant vector beta is selected. Duplicate parameters are removed while preserving order, yielding a deterministic testing sequence.
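The ordering step described above can be sketched as follows, simplified to scalar p-values for illustration (the actual procedure compares p-value vectors; this is not MAPIE's implementation):

```python
def learn_order(p_values, beta_grid):
    """For each beta, pick the parameter whose p-value is closest to
    beta; deduplicate while preserving first appearance."""
    order = []
    for beta in beta_grid:
        j = min(range(len(p_values)), key=lambda k: abs(p_values[k] - beta))
        if j not in order:
            order.append(j)
    return order

p_values = [0.30, 0.01, 0.10]      # one p-value per candidate parameter
assert learn_order(p_values, beta_grid=[0.001, 0.08, 0.5]) == [1, 2, 0]
```

Small beta values place the strongest-evidence parameters first, which is what makes the subsequent fixed-sequence pass on independent calibration data efficient.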

PARAMETER DESCRIPTION
X_learn

Features used only to learn the parameter order.

TYPE: ArrayLike

y_learn

Binary labels associated with X_learn.

TYPE: ArrayLike

beta_grid

Grid of target p-values used to construct the ordering. Smaller values prioritize parameters with stronger evidence.

TYPE: NDArray DEFAULT: np.logspace(-25, 0, 1000)

binary

Whether the loss associated with the controlled risk is binary.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
BinaryClassificationController

The controller instance with the learned sequence of ordered prediction parameters.

Notes

This method does NOT perform risk control. It only determines an order of parameters. Statistical guarantees are provided later when calling calibrate.
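The selection rule described above can be sketched in plain NumPy. This is a hypothetical standalone illustration of the ordering logic only (the function name and shapes are assumptions, not the MAPIE API):

```python
import numpy as np

def order_params_by_beta_grid(p_values, params, beta_grid):
    """Order parameters by matching p-value vectors to constant targets.

    p_values: array of shape (n_risks, n_lambdas), p-values per parameter.
    params: list of candidate prediction parameters (length n_lambdas).
    beta_grid: target p-values, typically spanning several orders of magnitude.
    """
    ordered = []
    for beta in beta_grid:
        # Sup-norm distance of each parameter's p-value vector
        # to the constant vector beta
        distances = np.max(np.abs(np.asarray(p_values) - beta), axis=0)
        candidate = params[int(np.argmin(distances))]
        # Deduplicate while preserving first-seen order
        if candidate not in ordered:
            ordered.append(candidate)
    return ordered
```

Small beta values at the start of the grid put the parameters with the strongest evidence first in the testing sequence.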

Source code in mapie/risk_control/binary_classification.py
def learn_fixed_sequence_order(
    self,
    X_learn: ArrayLike,
    y_learn: ArrayLike,
    beta_grid: NDArray = np.logspace(-25, 0, 1000),
    binary: bool = False,
) -> BinaryClassificationController:
    """
    Learn an ordered sequence of prediction parameters for split fixed-sequence FWER control.

    This method performs the learning step of split fixed-sequence testing.
    It must be called before `calibrate` when `fwer_method="split_fixed_sequence"`.

    The data provided here must be independent from the calibration data used later in `calibrate`.
    Using the same data would invalidate the statistical guarantees.

    A typical workflow is to split your calibration dataset:

    - one subset for learning the parameter order
    - one subset for calibration

    For each value in `beta_grid`, the parameter whose p-value vector is
    closest to the constant vector beta is selected. Duplicate parameters are
    removed while preserving order, yielding a deterministic testing sequence.

    Parameters
    ----------
    X_learn : ArrayLike
        Features used only to learn the parameter order.

    y_learn : ArrayLike
        Binary labels associated with X_learn.

    beta_grid : NDArray, default=np.logspace(-25, 0, 1000)
        Grid of target p-values used to construct the ordering.
        Smaller values prioritize parameters with stronger evidence.

    binary : bool, default=False
        Whether the loss associated with the controlled risk is binary.

    Returns
    -------
    BinaryClassificationController
        The controller instance with the learned sequence of ordered prediction parameters.

    Notes
    -----
    This method does NOT perform risk control.
    It only determines an order of parameters.
    Statistical guarantees are provided later when calling `calibrate`.
    """
    y_learn = np.asarray(y_learn, dtype=int)
    predictions_per_param = self._get_predictions_per_param(
        X_learn, self._predict_params, is_calibration_step=True
    )

    r_hat, n_obs = self._get_risk_values_and_eff_sample_sizes(
        y_learn, predictions_per_param, self._risk
    )
    alpha_np = np.expand_dims(self._alpha, axis=1)
    p_values = np.array(
        [
            compute_hoeffding_bentkus_p_value(r_hat_i, n_obs_i, alpha_np_i, binary)
            for r_hat_i, n_obs_i, alpha_np_i in zip(r_hat, n_obs, alpha_np)
        ]
    )

    n_risks, n_lambdas = p_values.shape[:2]
    ordered_predict_params: List[Any] = []

    for beta_value in beta_grid:
        beta_vector: NDArray[np.float64] = np.repeat(beta_value, n_risks)

        distances_to_beta: list[np.float64] = [
            np.max(np.abs(p_values[:, idx, 0] - beta_vector))
            for idx in range(n_lambdas)
        ]

        best_idx = np.argmin(distances_to_beta)
        candidate = self._predict_params[best_idx]

        if self.is_multi_dimensional_param:
            candidate = tuple(candidate.tolist())

        if candidate not in ordered_predict_params:
            ordered_predict_params.append(candidate)

    if self.is_multi_dimensional_param:
        ordered_predict_params = [list(p) for p in ordered_predict_params]

    self._learned_fixed_sequence = np.array(ordered_predict_params, dtype=object)

    return self

predict

predict(X_test: ArrayLike) -> NDArray

Predict using predict_function at the best threshold.

PARAMETER DESCRIPTION
X_test

Features

TYPE: ArrayLike

RETURNS DESCRIPTION
NDArray

NDArray of shape (n_samples,)

RAISES DESCRIPTION
ValueError

If the method .calibrate was not called, or if no valid thresholds were found during calibration.

Source code in mapie/risk_control/binary_classification.py
def predict(self, X_test: ArrayLike) -> NDArray:
    """
    Predict using predict_function at the best threshold.

    Parameters
    ----------
    X_test : ArrayLike
        Features

    Returns
    -------
    NDArray
        NDArray of shape (n_samples,)

    Raises
    ------
    ValueError
        If the method .calibrate was not called,
        or if no valid thresholds were found during calibration.
    """
    if self.best_predict_param is None:
        raise ValueError(
            "Cannot predict. "
            "Either you forgot to calibrate the controller first, "
            "or calibration was not successful."
        )
    return cast(
        NDArray,
        self._get_predictions_per_param(
            X_test,
            np.array([self.best_predict_param]),
        )[0],
    )

mapie.risk_control.BinaryClassificationRisk

BinaryClassificationRisk(
    risk_occurrence: Callable[
        [NDArray[integer], NDArray[integer]], NDArray[bool_]
    ],
    risk_condition: Callable[
        [NDArray[integer], NDArray[integer]], NDArray[bool_]
    ],
    higher_is_better: bool,
)

Define a risk (or a performance metric) to be used with the BinaryClassificationController. Predefined instances are implemented, see mapie.risk_control.precision, mapie.risk_control.recall, mapie.risk_control.accuracy, mapie.risk_control.false_positive_rate, and mapie.risk_control.predicted_positive_fraction.

Here, a binary classification risk (or performance metric) is defined by an occurrence and a condition. Take precision as an example: precision is the number of true positives over the total number of predicted positives. In other words, precision is the average of correct predictions (occurrence) given that those predictions are positive (condition). Programmatically, precision = sum((y_pred == y_true)[y_pred == 1]) / sum(y_pred == 1). Because precision is a performance metric rather than a risk, higher_is_better must be set to True. See the implementation of precision in mapie.risk_control.

Note: any risk or performance metric that can be defined as sum(occurrence if condition) / sum(condition) can in theory be controlled with the BinaryClassificationController, thanks to the Learn Then Test framework [1] and the binary Hoeffding-Bentkus p-values implemented in MAPIE.

Note: by definition, the value of the risk (or performance metric) here is always between 0 and 1.
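The occurrence/condition decomposition can be made concrete with plain NumPy (a standalone sketch of the idea, not the MAPIE implementation):

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0])

# Precision: occurrence = correct prediction, condition = predicted positive
occurrence = y_pred == y_true
condition = y_pred == 1

# Generic form: sum(occurrence if condition) / sum(condition)
precision = occurrence[condition].sum() / condition.sum()  # 2 correct out of 3 predicted positives
```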

PARAMETER DESCRIPTION
risk_occurrence

A function defining the occurrence of the risk for each sample. Must take y_true and y_pred arrays as input and return a boolean array.

TYPE: Callable[[NDArray[integer], NDArray[integer]], NDArray[bool_]]

risk_condition

A function defining the condition of the risk for each sample. Must take y_true and y_pred arrays as input and return a boolean array.

TYPE: Callable[[NDArray[integer], NDArray[integer]], NDArray[bool_]]

higher_is_better

Whether this BinaryClassificationRisk instance is a risk (higher_is_better=False) or a performance metric (higher_is_better=True).

TYPE: bool

ATTRIBUTE DESCRIPTION
higher_is_better

See params.

TYPE: bool

References

[1] Angelopoulos, Anastasios N., Stephen, Bates, Emmanuel J. Candès, et al. "Learn Then Test: Calibrating Predictive Algorithms to Achieve Risk Control." (2022)

Source code in mapie/risk_control/risks.py
def __init__(
    self,
    risk_occurrence: Callable[
        [NDArray[np.integer], NDArray[np.integer]], NDArray[np.bool_]
    ],
    risk_condition: Callable[
        [NDArray[np.integer], NDArray[np.integer]], NDArray[np.bool_]
    ],
    higher_is_better: bool,
):
    self._risk_occurrence = risk_occurrence
    self._risk_condition = risk_condition
    self.higher_is_better = higher_is_better

get_value_and_effective_sample_size

get_value_and_effective_sample_size(
    y_true: NDArray, y_pred: NDArray
) -> Tuple[float, int]

Computes the value of a risk given an array of ground truth labels and the corresponding predictions. Also returns the number of samples used to compute that value.

That number can be different from the total number of samples. For example, in the case of precision, only the samples with positive predictions are used.

In the case of a performance metric, this function returns 1 - perf_value.

PARAMETER DESCRIPTION
y_true

NDArray of ground truth labels, of shape (n_samples,), with values in {0, 1}

TYPE: NDArray

y_pred

NDArray of predictions, of shape (n_samples,), with values in {0, 1}

TYPE: NDArray

RETURNS DESCRIPTION
Tuple[float, int]

A tuple containing the value of the risk between 0 and 1, and the number of effective samples used to compute that value (between 1 and n_samples).

In the case of a performance metric, this function returns 1 - perf_value.

If the risk is not defined (condition never met), the value is set to 1, and the number of effective samples is set to -1.

Source code in mapie/risk_control/risks.py
def get_value_and_effective_sample_size(
    self,
    y_true: NDArray,
    y_pred: NDArray,
) -> Tuple[float, int]:
    """
    Computes the value of a risk given an array of ground
    truth labels and the corresponding predictions. Also returns the number of
    samples used to compute that value.

    That number can be different from the total number of samples. For example, in
    the case of precision, only the samples with positive predictions are used.

    In the case of a performance metric, this function returns 1 - perf_value.

    Parameters
    ----------
    y_true : NDArray
        NDArray of ground truth labels, of shape (n_samples,), with values in {0, 1}

    y_pred : NDArray
        NDArray of predictions, of shape (n_samples,), with values in {0, 1}

    Returns
    -------
    Tuple[float, int]
        A tuple containing the value of the risk between 0 and 1,
        and the number of effective samples used to compute that value
        (between 1 and n_samples).

        In the case of a performance metric, this function returns 1 - perf_value.

        If the risk is not defined (condition never met), the value is set to 1,
        and the number of effective samples is set to -1.
    """
    risk_occurrences = self._risk_occurrence(y_true, y_pred)
    risk_conditions = self._risk_condition(y_true, y_pred)

    effective_sample_size = y_true.size - np.sum(~risk_conditions)
    # Casting needed for MyPy with Python 3.9
    effective_sample_size_int = cast(int, effective_sample_size)
    if effective_sample_size_int != 0.0:
        risk_sum: int = np.sum(risk_occurrences[risk_conditions])
        risk_value = risk_sum / effective_sample_size_int
        if self.higher_is_better:
            risk_value = 1 - risk_value
        return risk_value, effective_sample_size_int
    else:
        # In this case, the corresponding lambda shouldn't be considered valid.
        # In the current LTT implementation, providing n_obs=-1 will result
        # in an infinite p_value, effectively invalidating the lambda
        return 1, -1

FWER Procedures

mapie.risk_control.FWERProcedure

Bases: ABC

Base class for procedures controlling the Family-Wise Error Rate (FWER).

This class defines a unified interface for sequential multiple testing procedures that allocate and update a global error budget delta across a set of hypotheses.

Subclasses implement the strategy that determines:

  • how the error budget is initialized,
  • which hypothesis is tested next,
  • how local significance levels are computed,
  • how the state evolves after a rejection.

The main entry point is run which executes the procedure and returns the indices of rejected hypotheses.

Methods to implement

_init_state(n_lambdas, delta) Initialize internal state.

_select_next_hypothesis(p_values) Return index of next hypothesis to test, or None if no test remains.

_local_significance_levels() Return current local significance levels.

_update_on_reject(hypothesis_index) Update state after a rejection.

run

run(p_values: NDArray, delta: float) -> NDArray[np.int_]

Execute the multiple testing procedure.

PARAMETER DESCRIPTION
p_values

P-values associated with hypotheses.

TYPE: NDArray of shape (n_lambdas,)

delta

Target family-wise error rate.

TYPE: float

RETURNS DESCRIPTION
NDArray[int]

Sorted indices of rejected hypotheses.

Source code in mapie/risk_control/fwer_control.py
def run(self, p_values: NDArray, delta: float) -> NDArray[np.int_]:
    """
    Execute the multiple testing procedure.

    Parameters
    ----------
    p_values : NDArray of shape (n_lambdas,)
        P-values associated with hypotheses.
    delta : float
        Target family-wise error rate.

    Returns
    -------
    NDArray[int]
        Sorted indices of rejected hypotheses.
    """
    p_values = np.asarray(p_values, float)
    n_lambdas = len(p_values)

    self._init_state(n_lambdas, delta)
    rejected_mask: NDArray[np.bool_] = np.zeros(n_lambdas, dtype=bool)

    while True:
        hypothesis_index = self._select_next_hypothesis(p_values)
        if hypothesis_index is None:
            break

        if (
            p_values[hypothesis_index]
            <= self._local_significance_levels()[hypothesis_index]
        ):
            rejected_mask[hypothesis_index] = True
            self._update_on_reject(hypothesis_index)
        else:  # Ignore coverage: Python 3.9 fails to detect this line although it is tested.
            break  # pragma: no cover

    return np.flatnonzero(rejected_mask)

mapie.risk_control.FWERBonferroniHolm

Bases: FWERProcedure

Holm step-down procedure for controlling the FWER [1].

At each step, the hypothesis with the smallest p-value among the remaining ones is tested at level delta / k, where k is the number of hypotheses still active.

The procedure stops when the current hypothesis is not rejected.

Notes

This method strictly dominates Bonferroni in power while preserving strong FWER control.

[1] Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian journal of statistics, 65-70.
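The step-down rule above can be written as a short standalone function. This is an illustrative sketch of the procedure, independent of the FWERProcedure interface:

```python
import numpy as np

def holm_step_down(p_values, delta):
    """Holm step-down: test the smallest remaining p-value at level
    delta / k, where k is the number of hypotheses still active,
    and stop at the first non-rejection."""
    p_values = np.asarray(p_values, dtype=float)
    m = len(p_values)
    rejected = []
    for rank, idx in enumerate(np.argsort(p_values)):
        # Budget grows from delta/m up to delta as hypotheses are rejected
        if p_values[idx] <= delta / (m - rank):
            rejected.append(int(idx))
        else:
            break
    return np.array(sorted(rejected), dtype=int)
```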


mapie.risk_control.FWERFixedSequenceTesting

FWERFixedSequenceTesting(n_starts: int = 1)

Bases: FWERProcedure

Fixed Sequential Testing (ascending) procedure with multi-start for controlling the Family-Wise Error Rate (FWER) [1].

Hypotheses are assumed to be ordered according to a parameter grid such that rejection becomes progressively easier along the sequence.

If multiple starts are used, each start explores a disjoint segment of hypotheses. Starts falling inside already rejected regions are automatically discarded.

PARAMETER DESCRIPTION
n_starts

Number of equally spaced starting points used in the multi-start procedure.

TYPE: int DEFAULT: 1

References

[1] P. Bauer, "Multiple testing in clinical trials," Statistics in Medicine, vol. 10, no. 6, pp. 871-890, 1991.
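The ascending single-start case (n_starts=1) reduces to a simple loop; the sketch below is illustrative only and independent of the FWERProcedure interface:

```python
import numpy as np

def fixed_sequence_test(p_values, delta):
    """Test hypotheses in their given order, each at the full level
    delta, stopping at the first non-rejection."""
    rejected = []
    for idx, p in enumerate(np.asarray(p_values, dtype=float)):
        if p <= delta:
            rejected.append(idx)
        else:
            break
    return np.array(rejected, dtype=int)
```

Because each test spends the full budget delta, power is high as long as the assumed ordering is good; a single early failure stops the sequence.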

Source code in mapie/risk_control/fwer_control.py
def __init__(self, n_starts: int = 1):
    if n_starts <= 0:
        raise ValueError("n_starts must be a positive integer.")
    self.n_starts = n_starts

mapie.risk_control.FWERBonferroniCorrection

Bonferroni procedure for controlling the FWER [1].

Each hypothesis is tested independently at level delta / n_lambdas; all hypotheses whose p-value falls at or below this threshold are rejected.

Notes

This is the simplest FWER-controlling method. It does not adapt to p-values and does not redistribute error budget after rejections.

[1] Bonferroni, C. E. (1936). Teoria statistica delle classi e calcolo delle probabilità.

run

run(p_values: NDArray, delta: float) -> NDArray[np.int_]

Execute the multiple testing procedure.

PARAMETER DESCRIPTION
p_values

P-values associated with hypotheses.

TYPE: NDArray of shape (n_lambdas,)

delta

Target family-wise error rate.

TYPE: float

RETURNS DESCRIPTION
NDArray[int]

Sorted indices of rejected hypotheses.

Source code in mapie/risk_control/fwer_control.py
def run(self, p_values: NDArray, delta: float) -> NDArray[np.int_]:
    """
    Execute the multiple testing procedure.

    Parameters
    ----------
    p_values : NDArray of shape (n_lambdas,)
        P-values associated with hypotheses.
    delta : float
        Target family-wise error rate.

    Returns
    -------
    NDArray[int]
        Sorted indices of rejected hypotheses.
    """
    p_values = np.asarray(p_values, float)
    n_lambdas = len(p_values)
    rejected_mask = p_values <= delta / n_lambdas
    return np.flatnonzero(rejected_mask)

Risk Functions

mapie.risk_control.accuracy module-attribute

accuracy = BinaryClassificationRisk(
    risk_occurrence=lambda y_true, y_pred: y_pred == y_true,
    risk_condition=lambda y_true, y_pred: repeat(
        True, len(y_true)
    ),
    higher_is_better=True,
)

mapie.risk_control.false_positive_rate module-attribute

false_positive_rate = BinaryClassificationRisk(
    risk_occurrence=lambda y_true, y_pred: y_pred == 1,
    risk_condition=lambda y_true, y_pred: y_true == 0,
    higher_is_better=False,
)

mapie.risk_control.precision module-attribute

precision = BinaryClassificationRisk(
    risk_occurrence=lambda y_true, y_pred: (
        y_true.ravel() == y_pred.ravel()
    ),
    risk_condition=lambda y_true, y_pred: y_pred.ravel() == 1,
    higher_is_better=True,
)

mapie.risk_control.recall module-attribute

recall = BinaryClassificationRisk(
    risk_occurrence=lambda y_true, y_pred: (
        y_true.ravel() == y_pred.ravel()
    ),
    risk_condition=lambda y_true, y_pred: y_true.ravel() == 1,
    higher_is_better=True,
)

mapie.risk_control.predicted_positive_fraction module-attribute

predicted_positive_fraction = BinaryClassificationRisk(
    risk_occurrence=lambda y_true, y_pred: y_pred == 1,
    risk_condition=lambda y_true, y_pred: repeat(
        True, len(y_true)
    ),
    higher_is_better=False,
)

mapie.risk_control.positive_predictive_value module-attribute

positive_predictive_value = precision

mapie.risk_control.negative_predictive_value module-attribute

negative_predictive_value = BinaryClassificationRisk(
    risk_occurrence=lambda y_true, y_pred: y_pred == y_true,
    risk_condition=lambda y_true, y_pred: y_pred == 0,
    higher_is_better=True,
)

mapie.risk_control.abstention_rate module-attribute

abstention_rate = BinaryClassificationRisk(
    risk_occurrence=lambda y_true, y_pred: isnan(y_pred),
    risk_condition=lambda y_true, y_pred: repeat(
        True, len(y_true)
    ),
    higher_is_better=False,
)

mapie.risk_control.control_fwer

control_fwer(
    p_values: NDArray,
    delta: float,
    fwer_method: Union[
        FWER_METHODS, FWERProcedure
    ] = "bonferroni",
) -> NDArray

Apply a Family-Wise Error Rate (FWER) control procedure.

This function applies a multiple testing correction to a collection of p-values in order to control the family-wise error rate (FWER) at level delta.

The correction method is selected via the fwer_method argument.

Supported methods are:

  • "bonferroni": classical Bonferroni correction,
  • "bonferroni_holm": Sequential Graphical Testing corresponding to the Bonferroni-Holm procedure,
  • "fixed_sequence": Fixed Sequence Testing (FST),
  • "split_fixed_sequence": Split Fixed Sequence Testing (SFST).

Custom procedures can also be implemented by subclassing FWERProcedure and passing an instance to fwer_method.

PARAMETER DESCRIPTION
p_values

P-values associated with each tested hypothesis.

TYPE: NDArray of shape (n_lambdas,)

delta

Target family-wise error rate. Must be in (0, 1].

TYPE: float

fwer_method

FWER control strategy.

TYPE: {"bonferroni", "bonferroni_holm", "fixed_sequence", "split_fixed_sequence"} or FWERProcedure DEFAULT: "bonferroni"

RETURNS DESCRIPTION
valid_index

Sorted indices of hypotheses rejected under FWER control.

TYPE: NDArray

Notes

fwer_method="fixed_sequence" corresponds to the fixed sequence testing procedure with one start. However, users can use multi-start by instantiating FWERFixedSequenceTesting with any desired number of starts and passing the instance to control_fwer.

If fwer_method="split_fixed_sequence", this function behaves exactly as "fixed_sequence". The distinction exists only upstream, where the ordering of hypotheses may have been learned from separate data.
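To see how the choice of procedure matters, the two simplest strategies can be compared on the same p-values (standalone illustration, not a call into control_fwer):

```python
import numpy as np

p_values = np.array([0.02, 0.001, 0.012, 0.3])
delta = 0.05
n = len(p_values)

# Bonferroni: every hypothesis tested at delta / n
bonferroni_rejects = np.flatnonzero(p_values <= delta / n)

# Fixed sequence: each hypothesis tested at the full delta, in order,
# stopping at the first failure
fst_rejects = []
for i, p in enumerate(p_values):
    if p <= delta:
        fst_rejects.append(i)
    else:
        break
```

Here Bonferroni rejects two hypotheses while fixed sequence testing rejects three: spending the full budget on each ordered test pays off when the ordering is favorable, which is exactly what the split-learned ordering aims for.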

Source code in mapie/risk_control/fwer_control.py
def control_fwer(
    p_values: NDArray,
    delta: float,
    fwer_method: Union[FWER_METHODS, FWERProcedure] = "bonferroni",
) -> NDArray:
    """
    Apply a Family-Wise Error Rate (FWER) control procedure.

    This function applies a multiple testing correction to a collection
    of p-values in order to control the family-wise error rate (FWER)
    at level `delta`.

    The correction method is selected via the `fwer_method` argument.

    Supported methods are:
    - `"bonferroni"`: classical Bonferroni correction,
    - `"bonferroni_holm"`: Sequential Graphical Testing corresponding
      to the Bonferroni-Holm procedure.
    - `"fixed_sequence"`: Fixed Sequence Testing (FST),
    - `"split_fixed_sequence"`: Split Fixed Sequence Testing (SFST).
    - Custom procedures can also be implemented by subclassing `FWERProcedure`
      and passing an instance to `fwer_method`.

    Parameters
    ----------
    p_values : NDArray of shape (n_lambdas,)
        P-values associated with each tested hypothesis.
    delta : float
        Target family-wise error rate. Must be in (0, 1].
    fwer_method : {"bonferroni", "bonferroni_holm", "fixed_sequence", "split_fixed_sequence"} or FWERProcedure instance, default="bonferroni"
        FWER control strategy.

    Returns
    -------
    valid_index : NDArray
        Sorted indices of hypotheses rejected under FWER control.

    Notes
    -----
    fwer_method="fixed_sequence" corresponds to the fixed sequence testing procedure with one start.
    However, users can use multi-start by instantiating FWERFixedSequenceTesting with
    any desired number of starts and passing the instance to control_fwer.

    If fwer_method="split_fixed_sequence", this function behaves exactly as
    "fixed_sequence". The distinction exists only upstream, where the ordering
    of hypotheses may have been learned from separate data.
    """
    p_values = np.asarray(p_values, dtype=float)
    n_lambdas = len(p_values)

    if n_lambdas == 0:
        raise ValueError("p_values must be non-empty.")
    if not (0 < delta <= 1):
        raise ValueError("delta must be in (0, 1].")

    if isinstance(fwer_method, FWERProcedure):
        procedure: Union[FWERProcedure, FWERBonferroniCorrection] = fwer_method
    elif fwer_method == "bonferroni":
        procedure = FWERBonferroniCorrection()
    elif fwer_method in ["fixed_sequence", "split_fixed_sequence"]:
        procedure = FWERFixedSequenceTesting(n_starts=1)
    elif fwer_method == "bonferroni_holm":
        procedure = FWERBonferroniHolm()
    else:
        raise ValueError(
            f"Unknown FWER control method: {fwer_method}. "
            f"Supported methods are: {FWER_IMPLEMENTED}, "
            "or an instance of FWERProcedure."
        )

    return procedure.run(p_values, delta)