Theoretical Description¶
This document provides detailed descriptions of various metrics used to evaluate the performance of predictive models, particularly focusing on their ability to estimate uncertainties and calibrate predictions accurately.
1. General Metrics¶
Regression Coverage Score¶
The Regression Coverage Score (RCS) calculates the fraction of true outcomes that fall within the provided prediction intervals:

$$\text{RCS} = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1} \left\{ \hat{y}^{\text{low}}_i \leq y_i \leq \hat{y}^{\text{up}}_i \right\}$$

where:

- $n$ is the number of samples,
- $y_i$ is the true value for the $i$-th sample,
- $\hat{y}^{\text{low}}_i$ and $\hat{y}^{\text{up}}_i$ are the lower and upper bounds of the prediction interval, respectively.
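As an illustration, here is a minimal NumPy sketch of this formula (the function name and array arguments are assumptions of this sketch, not a library API):

    import numpy as np

    def regression_coverage(y_true, y_low, y_up):
        """Fraction of true values that fall inside [y_low, y_up]."""
        y_true, y_low, y_up = map(np.asarray, (y_true, y_low, y_up))
        return float(np.mean((y_low <= y_true) & (y_true <= y_up)))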
Regression Mean Width Score¶
The Regression Mean Width Score (RMWS) assesses the average width of the prediction intervals provided by the model:

$$\text{RMWS} = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}^{\text{up}}_i - \hat{y}^{\text{low}}_i \right)$$
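The corresponding sketch is a one-liner (again, names are illustrative only):

    import numpy as np

    def regression_mean_width(y_low, y_up):
        """Average width of the prediction intervals."""
        return float(np.mean(np.asarray(y_up) - np.asarray(y_low)))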
Classification Coverage Score¶
The Classification Coverage Score (CCS) measures how often the true class labels fall within the predicted sets:

$$\text{CCS} = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1} \left\{ y_i \in \hat{C}(x_i) \right\}$$

Here, $\hat{C}(x_i)$ represents the set of predicted labels that could possibly contain the true label for the $i$-th observation $x_i$.
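A minimal sketch, assuming prediction sets are encoded as a boolean matrix of shape (n_samples, n_classes):

    import numpy as np

    def classification_coverage(y_true, pred_sets):
        """Fraction of samples whose true label belongs to the prediction set."""
        y_true = np.asarray(y_true)
        pred_sets = np.asarray(pred_sets, dtype=bool)
        return float(np.mean(pred_sets[np.arange(len(y_true)), y_true]))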
Classification Mean Width Score¶
For classification tasks, the Classification Mean Width Score (CMWS) calculates the average size of the prediction sets across all samples:

$$\text{CMWS} = \frac{1}{n} \sum_{i=1}^{n} \left| \hat{C}(x_i) \right|$$

where $|\hat{C}(x_i)|$ denotes the number of classes included in the prediction set for sample $i$.
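With the same boolean-matrix encoding as above (an assumption of these sketches):

    import numpy as np

    def classification_mean_width(pred_sets):
        """Average number of classes per prediction set."""
        return float(np.mean(np.sum(np.asarray(pred_sets, dtype=bool), axis=1)))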
Size-Stratified Coverage¶
Size-Stratified Coverage (SSC) evaluates how the size of prediction sets or intervals affects their ability to cover the true outcomes [1]. It’s calculated separately for classification and regression:
Regression:

$$\text{SSC}_k = \frac{1}{|I_k|} \sum_{i \in I_k} \mathbf{1} \left\{ \hat{y}^{\text{low}}_i \leq y_i \leq \hat{y}^{\text{up}}_i \right\}, \quad k = 1, \dots, K$$

Classification:

$$\text{SSC}_k = \frac{1}{|I_k|} \sum_{i \in I_k} \mathbf{1} \left\{ y_i \in \hat{C}(x_i) \right\}, \quad k = 1, \dots, K$$

where:

- $K$ is the number of distinct size groups,
- $I_k$ are the indices of samples whose prediction intervals or sets belong to the $k$-th size group.
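The sketch below illustrates the regression case: intervals are split into equal-count width groups and the empirical coverage is reported per group (the grouping strategy and number of groups are assumptions of this sketch):

    import numpy as np

    def regression_ssc(y_true, y_low, y_up, num_groups=3):
        """Coverage stratified by interval width, one value per width group."""
        y_true, y_low, y_up = map(np.asarray, (y_true, y_low, y_up))
        covered = (y_low <= y_true) & (y_true <= y_up)
        # Order samples by interval width and split them into equal-count groups.
        order = np.argsort(y_up - y_low)
        groups = np.array_split(order, num_groups)
        return np.array([covered[idx].mean() for idx in groups])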
Hilbert-Schmidt Independence Criterion¶
The Hilbert-Schmidt Independence Criterion (HSIC) is a non-parametric measure of independence between two variables, applied here to test the independence of interval sizes from their coverage indicators [4]:

$$\text{HSIC} = \frac{1}{(n-1)^2} \operatorname{tr}(K H L H)$$

where:

- $K$ and $L$ are the kernel matrices representing the interval sizes and coverage indicators, respectively,
- $H = I_n - \frac{1}{n} \mathbf{1}\mathbf{1}^{\top}$ is the centering matrix.
This measure is crucial for determining whether certain sizes of prediction intervals are systematically more or less likely to contain the true values, which can highlight biases in interval-based predictions.
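A minimal sketch of this estimator, assuming Gaussian kernels with a fixed bandwidth (the kernel choice, bandwidth, and function names are assumptions, not the library's implementation):

    import numpy as np

    def _gaussian_kernel(x, bandwidth=1.0):
        """Gaussian kernel matrix of a one-dimensional sample."""
        d = x[:, None] - x[None, :]
        return np.exp(-d ** 2 / (2 * bandwidth ** 2))

    def hsic(widths, covered, bandwidth=1.0):
        """Biased HSIC estimate between interval widths and coverage indicators."""
        widths = np.asarray(widths, dtype=float)
        covered = np.asarray(covered, dtype=float)
        n = len(widths)
        K = _gaussian_kernel(widths, bandwidth)
        L = _gaussian_kernel(covered, bandwidth)
        H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
        return float(np.trace(K @ H @ L @ H) / (n - 1) ** 2)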
Coverage Width-Based Criterion¶
The Coverage Width-Based Criterion (CWC) evaluates prediction intervals by balancing their empirical coverage and width. It is designed to both reward narrow intervals and penalize those that do not achieve a specified coverage probability [6].
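One common formulation along these lines combines a normalized mean width with an exponential penalty on the coverage gap; the exact normalization, the penalty strength $\eta$, and the target coverage level $1 - \alpha$ below are assumptions of this sketch rather than a definitive definition:

$$\text{CWC} = \left( 1 - \overline{W}_{\text{norm}} \right) \exp\!\left( -\eta \left( \text{RCS} - (1 - \alpha) \right)^2 \right)$$

where $\overline{W}_{\text{norm}}$ is the mean interval width normalized by the range of the observed target. A minimal NumPy illustration:

    import numpy as np

    def coverage_width_criterion(y_true, y_low, y_up, target_coverage=0.9, eta=30.0):
        """Illustrative CWC: rewards narrow intervals, penalizes missed coverage."""
        y_true, y_low, y_up = map(np.asarray, (y_true, y_low, y_up))
        coverage = np.mean((y_low <= y_true) & (y_true <= y_up))
        # Normalize the mean width by the observed target range (assumption).
        norm_width = np.mean(y_up - y_low) / (y_true.max() - y_true.min())
        return float((1 - norm_width) * np.exp(-eta * (coverage - target_coverage) ** 2))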
Mean Winkler Interval Score¶
The Mean Winkler Interval (MWI) Score evaluates prediction intervals by combining their width with a penalty for intervals that do not contain the observation [8, 10]:

$$\text{MWI} = \frac{1}{n} \sum_{i=1}^{n} \left[ \left( \hat{y}^{\text{up}}_i - \hat{y}^{\text{low}}_i \right) + \frac{2}{\alpha} \left( \hat{y}^{\text{low}}_i - y_i \right) \mathbf{1} \left\{ y_i < \hat{y}^{\text{low}}_i \right\} + \frac{2}{\alpha} \left( y_i - \hat{y}^{\text{up}}_i \right) \mathbf{1} \left\{ y_i > \hat{y}^{\text{up}}_i \right\} \right]$$

where $\alpha$ is the significance level; the penalty is proportional to the distance between $y_i$ and the nearest interval boundary whenever the observation falls outside the interval.
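A direct NumPy transcription of this formula (names are illustrative):

    import numpy as np

    def mean_winkler_interval(y_true, y_low, y_up, alpha=0.1):
        """Mean Winkler interval score: width plus out-of-interval penalties."""
        y_true, y_low, y_up = map(np.asarray, (y_true, y_low, y_up))
        width = y_up - y_low
        below = (y_low - y_true) * (y_true < y_low)  # penalty if y is below the interval
        above = (y_true - y_up) * (y_true > y_up)    # penalty if y is above the interval
        return float(np.mean(width + (2.0 / alpha) * (below + above)))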
2. Calibration Metrics¶
Expected Calibration Error¶
The Expected Calibration Error (ECE) is a metric used to evaluate how well the predicted probabilities of a model align with the actual outcomes [7]. It measures the difference between predicted confidence levels and actual accuracy. The process involves dividing the predictions into bins based on confidence scores and then comparing the accuracy within each bin to the average confidence level of the predictions in that bin. The number of bins is a hyperparameter $M$, and we refer to a specific bin by $B_m$.

For each bin $B_m$, the accuracy and confidence are defined as follows:

$$\text{acc}(B_m) = \frac{1}{|B_m|} \sum_{i \in B_m} \mathbf{1} \left\{ \hat{y}_i = y_i \right\}$$

$$\text{conf}(B_m) = \frac{1}{|B_m|} \sum_{i \in B_m} \hat{p}_i$$

The ECE is then calculated using the following formula:

$$\text{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n} \left| \text{acc}(B_m) - \text{conf}(B_m) \right|$$

where:

- $B_m$ is the set of indices of samples that fall into the $m$-th bin,
- $|B_m|$ is the number of samples in the $m$-th bin,
- $n$ is the total number of samples,
- $\hat{y}_i$ and $\hat{p}_i$ are the predicted label and its confidence score for the $i$-th sample,
- $\text{acc}(B_m)$ is the accuracy within the $m$-th bin,
- $\text{conf}(B_m)$ is the average confidence score within the $m$-th bin.
In simple terms, once the bins have been built from the confidence scores, we compare the mean accuracy of each bin with its mean confidence; the weighted average of their absolute differences is the ECE. Hence, the lower the ECE, the better the calibration. The difference between the average confidence and the actual accuracy within each bin is weighted by the proportion of samples in that bin, ensuring that bins with more samples have a larger influence on the final ECE value.
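A minimal sketch of this computation, assuming y_correct holds 1 when the prediction is correct and y_conf holds the confidence of the predicted label (argument names and the equal-width binning are assumptions):

    import numpy as np

    def expected_calibration_error(y_correct, y_conf, num_bins=10):
        """Binned ECE: weighted mean of |accuracy - confidence| per bin."""
        y_correct, y_conf = np.asarray(y_correct), np.asarray(y_conf)
        edges = np.linspace(0.0, 1.0, num_bins + 1)
        ece = 0.0
        for low, up in zip(edges[:-1], edges[1:]):
            in_bin = (y_conf > low) & (y_conf <= up)
            if in_bin.any():
                acc = y_correct[in_bin].mean()   # accuracy within the bin
                conf = y_conf[in_bin].mean()     # average confidence within the bin
                ece += in_bin.mean() * abs(acc - conf)
        return float(ece)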
Top-Label Expected Calibration Error (Top-Label ECE)¶
The Top-Label Expected Calibration Error (Top-Label ECE) extends the concept of ECE to the multi-class setting [5]. Instead of evaluating calibration over all predicted probabilities, Top-Label ECE focuses on the calibration of the most confident prediction (top label) for each sample. For the top-label class, the accuracy and confidence are computed conditionally on the top label, and the resulting ECE values are averaged over the top labels.

The Top-Label ECE is calculated as follows:

$$\text{Top-Label ECE} = \frac{1}{L} \sum_{l=1}^{L} \sum_{m=1}^{M} \frac{|B_{m,l}|}{n_l} \left| \text{acc}(B_{m,l}) - \text{conf}(B_{m,l}) \right|$$

where:

- $L$ is the number of unique labels,
- $B_{m,l}$ is the set of indices of samples that fall into the $m$-th bin for label $l$,
- $|B_{m,l}|$ is the number of samples in the $m$-th bin for label $l$,
- $n_l$ is the total number of samples for label $l$,
- $\text{acc}(B_{m,l})$ is the accuracy within the $m$-th bin for label $l$,
- $\text{conf}(B_{m,l})$ is the average confidence score within the $m$-th bin for label $l$,
- $M$ is the total number of bins.
For each label, the predictions are binned according to their confidence scores for that label. The calibration error is then calculated for each label separately and averaged across all labels to obtain the final Top-Label ECE value. This ensures that the calibration is measured specifically for the most confident prediction, which is often the most critical for decision-making in multi-class problems.
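The sketch below illustrates this computation from a matrix of predicted probabilities; the shape convention (n_samples, n_classes) and the equal-width binning are assumptions:

    import numpy as np

    def top_label_ece(y_true, y_prob, num_bins=10):
        """Average per-top-label ECE computed on the most confident prediction."""
        y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
        top_label = np.argmax(y_prob, axis=1)
        top_conf = y_prob[np.arange(len(y_true)), top_label]
        edges = np.linspace(0.0, 1.0, num_bins + 1)
        per_label_ece = []
        for label in np.unique(top_label):
            mask = top_label == label
            conf = top_conf[mask]
            correct = (y_true[mask] == label).astype(float)
            ece = 0.0
            for low, up in zip(edges[:-1], edges[1:]):
                in_bin = (conf > low) & (conf <= up)
                if in_bin.any():
                    ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
            per_label_ece.append(ece)
        return float(np.mean(per_label_ece))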
Cumulative Differences¶
Cumulative Differences calculates the cumulative differences between sorted true values and prediction scores, helping to understand how well the prediction scores correspond to the actual outcomes when both are ordered by the score [2]:

$$C_k = \frac{1}{N} \sum_{i=1}^{k} \left( y_{\pi(i)} - \hat{s}_{\pi(i)} \right), \quad k = 1, \dots, N$$

where:

- $\hat{s}_1, \dots, \hat{s}_N$ are the prediction scores and $y_1, \dots, y_N$ their corresponding true values,
- $\pi$ is the permutation which sorts the samples by increasing prediction score, so that the true values and the predicted scores are traversed in the same order.
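A minimal sketch of this computation (function and argument names are illustrative):

    import numpy as np

    def cumulative_differences(y_true, y_score):
        """Cumulative differences C_1, ..., C_N between labels and scores sorted by score."""
        order = np.argsort(y_score)  # permutation sorting the prediction scores
        diffs = np.asarray(y_true)[order] - np.asarray(y_score)[order]
        return np.cumsum(diffs) / len(diffs)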
Kolmogorov-Smirnov Statistic for Calibration¶
The Kolmogorov-Smirnov test was derived in [2, 3, 11]. The idea is to consider the cumulative differences between sorted scores and their corresponding labels and to compare their properties to those of a standard Brownian motion. Let us consider the cumulative differences on sorted scores:

$$C_k = \frac{1}{N} \sum_{i=1}^{k} \left( y_{\pi(i)} - \hat{s}_{\pi(i)} \right)$$

We also introduce a typical normalization scale $\sigma_N$:

$$\sigma_N = \frac{1}{N} \sqrt{\sum_{i=1}^{N} \hat{s}_i \left( 1 - \hat{s}_i \right)}$$

The Kolmogorov-Smirnov statistic is then defined as:

$$G = \frac{\max_{k} |C_k|}{\sigma_N}$$

It can be shown [2] that, under the null hypothesis of well-calibrated scores, this quantity asymptotically (i.e. when $N$ goes to infinity) converges in distribution to the maximum absolute value of a standard Brownian motion over the unit interval $[0, 1]$. [3, 11] also provide closed-form formulas for the cumulative distribution function (CDF) $F_{\max|B|}$ of the maximum absolute value of such a standard Brownian motion. So we state the p-value associated with the statistical test of well calibration as:

$$p_{\text{KS}} = 1 - F_{\max|B|}(G)$$
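The sketch below computes the statistic and approximates the p-value with the classical series expansion for the CDF of the maximum absolute value of a standard Brownian motion; the truncation length and function names are assumptions of this sketch:

    import numpy as np

    def kolmogorov_smirnov_p_value(y_true, y_score, n_terms=100):
        """KS calibration test: statistic G and its asymptotic p-value."""
        order = np.argsort(y_score)
        y, s = np.asarray(y_true)[order], np.asarray(y_score)[order]
        n = len(s)
        cum_diff = np.cumsum(y - s) / n
        sigma = np.sqrt(np.sum(s * (1 - s))) / n
        g = np.max(np.abs(cum_diff)) / sigma
        # P(max_t |B_t| <= g) = (4/pi) * sum_k (-1)^k / (2k+1) * exp(-(2k+1)^2 pi^2 / (8 g^2))
        k = np.arange(n_terms)
        cdf = (4 / np.pi) * np.sum(
            (-1.0) ** k / (2 * k + 1) * np.exp(-((2 * k + 1) ** 2) * np.pi ** 2 / (8 * g ** 2))
        )
        return float(1 - cdf)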
Kuiper’s Test¶
The Kuiper test was derived in [2, 3, 11] and is very similar to the Kolmogorov-Smirnov test. This time, the statistic is defined as:

$$G = \frac{\max_{k} C_k - \min_{k} C_k}{\sigma_N}$$

It can be shown [2] that, under the null hypothesis of well-calibrated scores, this quantity asymptotically (i.e. when $N$ goes to infinity) converges in distribution to the range of a standard Brownian motion over the unit interval $[0, 1]$. [3, 11] also provide closed-form formulas for the cumulative distribution function (CDF) $F_{R}$ of the range of such a standard Brownian motion. So we state the p-value associated with the statistical test of well calibration as:

$$p_{\text{Kuiper}} = 1 - F_{R}(G)$$
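A minimal sketch of the statistic itself; the p-value additionally requires the CDF of the range of a Brownian motion (see [3, 11]), which is omitted here:

    import numpy as np

    def kuiper_statistic(y_true, y_score):
        """Kuiper calibration statistic: normalized range of the cumulative differences."""
        order = np.argsort(y_score)
        y, s = np.asarray(y_true)[order], np.asarray(y_score)[order]
        n = len(s)
        cum_diff = np.cumsum(y - s) / n
        sigma = np.sqrt(np.sum(s * (1 - s))) / n
        return float((cum_diff.max() - cum_diff.min()) / sigma)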
Spiegelhalter’s Test¶
The Spiegelhalter test was derived in [9]. It is based on a decomposition of the Brier score:

$$B = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{s}_i \right)^2$$

where the scores are denoted $\hat{s}_i$ and their corresponding labels $y_i$. This can be decomposed into two terms:

$$B = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{s}_i \right) \left( 1 - 2 \hat{s}_i \right) + \frac{1}{N} \sum_{i=1}^{N} \hat{s}_i \left( 1 - \hat{s}_i \right)$$

It can be shown that the first term has an expected value of zero under the null hypothesis of well calibration. So we interpret the second term as the expected value of the Brier score under the null hypothesis. As for the variance of the Brier score under this null hypothesis, it can be computed as:

$$\operatorname{Var}(B) = \frac{1}{N^2} \sum_{i=1}^{N} \left( 1 - 2 \hat{s}_i \right)^2 \hat{s}_i \left( 1 - \hat{s}_i \right)$$

So we can build a Z-score as follows:

$$Z = \frac{B - \mathbb{E}[B]}{\sqrt{\operatorname{Var}(B)}} = \frac{\sum_{i=1}^{N} \left( y_i - \hat{s}_i \right) \left( 1 - 2 \hat{s}_i \right)}{\sqrt{\sum_{i=1}^{N} \left( 1 - 2 \hat{s}_i \right)^2 \hat{s}_i \left( 1 - \hat{s}_i \right)}}$$

This statistic asymptotically follows a standard normal distribution with cumulative distribution function $\Phi$, so that we state the associated p-value as:

$$p_{\text{Spiegelhalter}} = 1 - \Phi(Z)$$
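A direct transcription of the Z-score and p-value (assuming the one-sided p-value written above):

    import numpy as np
    from scipy.stats import norm

    def spiegelhalter_p_value(y_true, y_score):
        """Spiegelhalter calibration test: Z-score and its normal p-value."""
        y = np.asarray(y_true, dtype=float)
        s = np.asarray(y_score, dtype=float)
        numerator = np.sum((y - s) * (1 - 2 * s))
        denominator = np.sqrt(np.sum((1 - 2 * s) ** 2 * s * (1 - s)))
        z = numerator / denominator
        return float(1 - norm.cdf(z))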
References¶
[1] Angelopoulos, A. N., & Bates, S. (2021). A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511.
[2] Arrieta-Ibarra, I., Gujral, P., Tannen, J., Tygert, M., & Xu, C. (2022). Metrics of calibration for probabilistic predictions. Journal of Machine Learning Research, 23(1), 15886-15940.
[3] Darling, D. A., & Siegert, A. J. F. (1953). The first passage problem for a continuous Markov process. Annals of Mathematical Statistics, 24(4), 624-639.
[4] Feldman, S., Bates, S., & Romano, Y. (2021). Improving conditional coverage via orthogonal quantile regression. Advances in Neural Information Processing Systems, 34, 2060-2071.
[5] Gupta, C., & Ramdas, A. K. (2021). Top-label calibration and multiclass-to-binary reductions. arXiv preprint arXiv:2107.08353.
[6] Khosravi, A., Nahavandi, S., & Creighton, D. (2010). Construction of optimal prediction intervals for load forecasting problems. IEEE Transactions on Power Systems, 25(3), 1496-1503.
[7] Naeini, M. P., Cooper, G., & Hauskrecht, M. (2015). Obtaining well calibrated probabilities using Bayesian binning. Twenty-Ninth AAAI Conference on Artificial Intelligence.
[8] Winkler, R. L. (1972). A decision-theoretic approach to interval estimation. Journal of the American Statistical Association, 67, 187-191. https://doi.org/10.1080/01621459.1972.10481224
[9] Spiegelhalter, D. J. (1986). Probabilistic prediction in patient management and clinical trials. Statistics in Medicine, 5(5), 421-433.
[10] Gneiting, T., & Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102, 359-378 (Section 6.2). https://doi.org/10.1198/016214506000001437
[11] Tygert, M. (2022). Calibration of P-values for calibration and for deviation of a subpopulation from the full population. arXiv preprint arXiv:2202.00100.