mapie.metrics.classification_ssc

mapie.metrics.classification_ssc(y_true: NDArray, y_pred_set: NDArray, num_bins: Optional[int] = None) → NDArray

Compute the Size-Stratified Coverage (SSC) metric proposed in [3], that is, the coverage conditioned on the size of the prediction sets. The sets are ranked by their size (ascending) and then divided into num_bins groups: one coverage value is computed per group.

[3] Angelopoulos, A. N., & Bates, S. (2021). A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511.
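For intuition, the sketch below (plain NumPy, not mapie's implementation; the helper name, the restriction to the 2-D (n_samples, n_class) input shape, and the handling of unobserved set sizes are illustrative assumptions) computes one coverage value per observed set size, mirroring the behaviour described below for num_bins=None:

import numpy as np

def coverage_by_set_size(y_true, y_pred_set):
    # Size of each prediction set, i.e. the number of labels it includes.
    sizes = y_pred_set.sum(axis=1)
    # Whether the true label of each sample belongs to its prediction set.
    covered = y_pred_set[np.arange(len(y_true)), y_true]
    # One coverage value per set size observed in the data
    # (set sizes that never occur are simply skipped in this sketch).
    return {int(s): float(covered[sizes == s].mean()) for s in np.unique(sizes)}

With the arrays used in the Examples section below, this returns {2: 1.0, 3: 0.5, 4: 1.0}: both size-2 sets cover their true label, only one of the two size-3 sets does, and the single size-4 set (which contains every label) necessarily does.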

Parameters
y_true: NDArray of shape (n_samples,)

True labels.

y_pred_set: NDArray of shape (n_samples, n_class, n_alpha) or (n_samples, n_class)

Prediction sets, given as a boolean mask over the labels (True where a label is included in the set).

num_bins: int or None

Number of groups. If None, one coverage value is computed per possible set size (n_classes + 1 values). Should be less than the number of distinct set sizes.

Returns
NDArray of shape (n_alpha, num_bins)

Coverage values, one per group of prediction-set sizes (and per alpha).

Examples

>>> from mapie.metrics import classification_ssc
>>> import numpy as np
>>> y_true = np.array([3, 3, 1, 2, 2])
>>> y_pred_set = np.array([
...    [True, True, True, True],
...    [False, True, False, True],
...    [True, True, True, False],
...    [False, False, True, True],
...    [True, True, False, True]])
>>> print(classification_ssc(y_true, y_pred_set, num_bins=2))
[[1.         0.66666667]]
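
To see where these values come from (shown as a cross-check only, not part of mapie's API), the per-sample set sizes and coverages can be inspected directly:

>>> sizes = y_pred_set.sum(axis=1)
>>> print(sizes)
[4 2 3 2 3]
>>> covered = y_pred_set[np.arange(len(y_true)), y_true]
>>> print(covered)
[ True  True  True  True False]

Sorted by size, the first group gathers the two sets of size 2, both of which cover their true label (coverage 1.0), while the second group gathers the sets of sizes 3, 3 and 4, of which two out of three cover their true label (coverage 0.666...).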