mapie.metrics.expected_calibration_error

mapie.metrics.expected_calibration_error(y_true: ArrayLike, y_scores: ArrayLike, num_bins: int = 50, split_strategy: Optional[str] = None) → float

The expected calibration error (ECE): the weighted average, over bins, of the absolute difference between the mean confidence score and the accuracy within each bin [1].

[1] Naeini, Mahdi Pakdaman, Gregory Cooper, and Milos Hauskrecht. “Obtaining Well Calibrated Probabilities Using Bayesian Binning.” Twenty-Ninth AAAI Conference on Artificial Intelligence. 2015.
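Concretely, with the n samples partitioned into bins B_1, …, B_M according to split_strategy, the usual formulation of the ECE (a sketch of the standard definition; the implementation may differ in details such as how empty bins are handled) is

$$\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n}\,\bigl|\,\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\,\bigr|$$

where acc(B_m) is the accuracy and conf(B_m) the mean confidence score of the samples in bin B_m.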

Parameters
y_true: ArrayLike of shape (n_samples,)

The target values for the calibrator.

y_scores: ArrayLike of shape (n_samples,) or (n_samples, n_classes)

The prediction scores.

num_bins: int

Number of bins used to split y_scores. Any strictly positive integer is allowed. By default 50.

split_strategy: Optional[str]

The strategy used to split the prediction scores into bins. The allowed split strategies are “uniform”, “quantile” and “array split”. By default None.

Returns
float

The ECE (Expected Calibration Error) score.
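
Examples

A minimal usage sketch; the arrays below are illustrative, and num_bins is lowered from its default of 50 to suit the small sample:

>>> import numpy as np
>>> from mapie.metrics import expected_calibration_error
>>> y_true = np.array([0, 0, 1, 1, 1])
>>> # Predicted probabilities of the positive class
>>> y_scores = np.array([0.2, 0.3, 0.6, 0.8, 0.9])
>>> ece = expected_calibration_error(y_true, y_scores, num_bins=2)
>>> # Quantile binning puts (roughly) the same number of samples in each bin
>>> ece_q = expected_calibration_error(
...     y_true, y_scores, num_bins=2, split_strategy="quantile"
... )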