mapie.metrics.classification_coverage_score
- mapie.metrics.classification_coverage_score(y_true: ArrayLike, y_pred_set: ArrayLike) → float
Effective coverage score obtained by the prediction sets.
The effective coverage is the fraction of true labels that lie within their corresponding prediction sets.
- Parameters
- y_true: ArrayLike of shape (n_samples,)
True labels.
- y_pred_set: ArrayLike of shape (n_samples, n_class)
Prediction sets, given as boolean indicators over the labels (True if the label is included in the set).
- Returns
- float
Effective coverage obtained by the prediction sets.
Examples
>>> from mapie.metrics import classification_coverage_score
>>> import numpy as np
>>> y_true = np.array([3, 3, 1, 2, 2])
>>> y_pred_set = np.array([
...     [False, False, True, True],
...     [False, True, False, True],
...     [False, True, True, False],
...     [False, False, True, True],
...     [False, True, False, True]
... ])
>>> print(classification_coverage_score(y_true, y_pred_set))
0.8
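As a rough sketch of what the score measures (not MAPIE's internal implementation), the same value can be recovered from the boolean prediction-set matrix by checking, for each sample, whether the column of its true label is True. Continuing from the arrays defined in the example above:

>>> # For each row i, pick the boolean at column y_true[i]
>>> covered = np.take_along_axis(y_pred_set, y_true.reshape(-1, 1), axis=1)
>>> print(float(covered.mean()))
0.8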