MAPIE 1.0.1

Getting Started

  • Quick Start with MAPIE
    • 1. Download and install the module
    • 2. Regression
    • 3. Classification
  • The conformalization (or “calibration”) set
    • 1. Split conformal predictions
      • Split conformal predictions with a pre-trained model
      • Split conformal predictions with an untrained model
    • 2. Cross conformal predictions
  • Choosing the right algorithm
  • MAPIE v1 release notes
    • Introduction
    • API changes overview
    • Python, scikit-learn and NumPy versions support
    • API changes in detail
      • Regression and classification API changes (excluding time series)
        • Classes
        • Workflow and methods
        • Parameters
      • Other API changes
        • Time series
        • Risk control
        • Calibration
        • Mondrian
        • Metrics
        • Conformity scores
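
A minimal sketch of the split conformal regression workflow covered by the Quick Start entries above, assuming the v1 classes listed under MAPIE API below (SplitConformalRegressor, train_conformalize_test_split); argument names such as conformalize_size and prefit are assumptions to check against the Quick Start page itself:

    # Split conformal regression: fit on one set, conformalize on another,
    # then predict intervals on held-out data.
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression
    from mapie.regression import SplitConformalRegressor
    from mapie.utils import train_conformalize_test_split

    X, y = make_regression(n_samples=1000, n_features=4, noise=10.0, random_state=0)

    # Three disjoint sets: one to fit the model, one to conformalize, one to test.
    (X_train, X_conformalize, X_test,
     y_train, y_conformalize, y_test) = train_conformalize_test_split(
        X, y, train_size=0.6, conformalize_size=0.2, test_size=0.2, random_state=0
    )

    mapie_regressor = SplitConformalRegressor(
        estimator=LinearRegression(), confidence_level=0.95, prefit=False
    )
    mapie_regressor.fit(X_train, y_train)
    mapie_regressor.conformalize(X_conformalize, y_conformalize)

    # Point predictions and the corresponding prediction intervals.
    y_pred, y_intervals = mapie_regressor.predict_interval(X_test)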

Measure predictions uncertainty

  • Prediction intervals (regression)
    • Choose the right algorithm
    • Use MAPIE to plot prediction intervals
    • Use MAPIE with a pre-trained model
      • 1. Use a neural network
      • 1.1 Pre-train a neural network
      • 1.2 Use MAPIE to conformalize the models
      • 1.3 Plot results
      • 2. Use LGBM models
      • 2.1 Pre-train LGBM models
      • 2.2 Use MAPIE to conformalize the models
      • 2.3 Plot results
    • All regression examples
      • 1. Quickstart
        • Use MAPIE to plot prediction intervals
        • Use MAPIE on data with gamma distribution
        • Use MAPIE with a pre-trained model
        • Use MAPIE on data with uneven uncertainty
        • Use MAPIE on data with constant uncertainty
        • Tutorial for time series
      • 2. Advanced analysis
        • Conformal Predictive Distribution with MAPIE
        • The symmetric_correction parameter of ConformalizedQuantileRegressor
        • Hyperparameters tuning with CrossConformalRegressor
        • Time series: example of the EnbPI technique
        • Estimating aleatoric and epistemic uncertainties
        • Focus on intervals width
        • Focus on residual normalised score
        • Focus on local (or “conditional”) coverage
        • ConformalizedQuantileRegressor on gamma distributed data
        • Coverage validity of MAPIE for regression tasks
        • Comparison between conformalized quantile regressor and cross methods
      • 3. Simulations from scientific articles
        • Predictive inference with the jackknife+, Foygel-Barber et al. (2020)
        • Adaptive conformal predictions for time series, Zaffran et al. (2022)
        • Predictive inference is free with the Jackknife+-after-Bootstrap, Kim et al. (2020)
      • 4. Other notebooks
    • Theoretical Description
      • 1. The “Naive” method
      • 2. The split method
      • 3. The jackknife method
      • 4. The jackknife+ method
      • 5. The jackknife-minmax method
      • 6. The CV+ method
      • 7. The CV and CV-minmax methods
      • 8. The jackknife+-after-bootstrap method
      • 9. The Conformalized Quantile Regression (CQR) Method
        • Notations and Definitions
        • Mathematical Formulation
      • 10. The ensemble batch prediction intervals (EnbPI) method
      • Key takeaways
      • References
    • Theoretical Description for Conformity Scores
      • 1. The absolute residual score
      • 2. The gamma score
      • 3. The residual normalized score
      • Key takeaways
      • References
  • Prediction sets (classification)
    • Choosing the right algorithm
    • Use MAPIE to plot prediction sets
    • All classification examples
      • 1. Quickstart examples
        • Use MAPIE to plot prediction sets
      • 2. Advanced analysis
        • LAC and APS methods explained
        • Set prediction example in the binary classification setting
        • Cross conformal classification explained
      • 3. Simulations from scientific articles
        • Least Ambiguous Set-Valued Classifiers with Bounded Error Levels, Sadinle et al. (2019)
      • 4. Other notebooks
    • Theoretical Description
      • 1. LAC
      • 2. Top-K
      • 3. Adaptive Prediction Sets (APS)
      • 4. Regularized Adaptive Prediction Sets (RAPS)
      • 5. Split- and cross-conformal methods
      • References
    • The binary classification case
      • Set prediction example in the binary classification setting
        • 1. Conformal Prediction method using the softmax score of the true label
      • Theoretical Description
        • 1. Set Prediction
        • 2. Probabilistic Prediction
        • 3. Calibration
        • References
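
The classification entries above follow the same fit / conformalize / predict pattern, with prediction sets instead of intervals. A minimal sketch assuming the v1 SplitConformalClassifier listed under MAPIE API below; argument names are assumptions as in the regression sketch earlier, and the exact shape returned by predict_set should be checked against the quickstart examples:

    # Split conformal classification: prediction sets at a target coverage level.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from mapie.classification import SplitConformalClassifier
    from mapie.utils import train_conformalize_test_split

    X, y = make_classification(
        n_samples=1000, n_informative=5, n_classes=4, random_state=0
    )

    (X_train, X_conformalize, X_test,
     y_train, y_conformalize, y_test) = train_conformalize_test_split(
        X, y, train_size=0.6, conformalize_size=0.2, test_size=0.2, random_state=0
    )

    mapie_classifier = SplitConformalClassifier(
        estimator=LogisticRegression(), confidence_level=0.95, prefit=False
    )
    mapie_classifier.fit(X_train, y_train)
    mapie_classifier.conformalize(X_conformalize, y_conformalize)

    # Point predictions and, for each test sample, the classes kept in its set.
    y_pred, y_pred_sets = mapie_classifier.predict_set(X_test)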

Control prediction errors

  • Theoretical Description
    • 1. Risk-Controlling Prediction Sets
      • 1.1. General settings
      • 1.2. Bounds calculation
      • 1.2.1. Hoeffding Bound
      • 1.2.2. Bernstein Bound
      • 1.2.3. Waudby-Smith–Ramdas
    • 2. Conformal Risk Control
    • 3. Learn Then Test
    • References
  • Tutorial for recall and precision control for multi-label classification
    • 1. Construction of the dataset
    • 2. Recall control risk with CRC and RCPS
    • 2.1 Fitting PrecisionRecallController
    • 2.2 Results
    • 3. Precision control risk with LTT
    • 3.1 Fitting PrecisionRecallController
    • 3.2 Valid parameters for precision control
  • Risk control notebooks
    • 1. Overview of Recall Control for Multi-Label Classification: recall_notebook
    • 2. Overview of Precision Control for Multi-Label Classification: precision_notebook
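
The risk-control methods listed above replace the empirical risk computed on the calibration set with an upper confidence bound before selecting the control parameter. As a reminder of the standard result behind section 1.2.1, for a risk bounded in [0, 1] estimated from n calibration points, the Hoeffding upper confidence bound at level 1 − δ is:

    \hat{R}^{+}_{\mathrm{Hoeffding}}(\lambda) = \hat{R}_n(\lambda) + \sqrt{\frac{\log(1/\delta)}{2n}}

The Theoretical Description section gives the exact statements, including the tighter Bernstein and Waudby-Smith–Ramdas bounds, as used by MAPIE.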

Calibrate multi-class predictions

  • Theoretical Description
    • Top-Label
    • References
  • Calibration examples
    • 1. Quickstart examples
      • Testing for calibration in binary classification settings
        • 1. Create 1-dimensional dataset and scores to test for calibration
        • 2. Visualizing and testing for miscalibration
    • 2. Advanced analysis
      • Evaluating the asymptotic convergence of p-values
  • Calibration notebooks
    • 1. Top-label calibration for outputs of ML models: notebook
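
The calibration entries above test whether predicted probabilities match observed frequencies. As a concept-level illustration, independent of MAPIE's own helpers in mapie.metrics.calibration, the expected calibration error bins the top-class confidence and compares average confidence to observed accuracy in each bin; a minimal NumPy sketch:

    # Concept-level expected calibration error (ECE): weighted average, over
    # confidence bins, of |observed accuracy - mean confidence| in the bin.
    import numpy as np

    def expected_calibration_error_sketch(y_true, y_proba, n_bins=10):
        confidence = y_proba.max(axis=1)      # top-class probability
        predicted = y_proba.argmax(axis=1)    # predicted label
        correct = (predicted == y_true).astype(float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        total = 0.0
        for low, high in zip(edges[:-1], edges[1:]):
            in_bin = (confidence > low) & (confidence <= high)
            if in_bin.any():
                gap = abs(correct[in_bin].mean() - confidence[in_bin].mean())
                total += in_bin.mean() * gap
        return total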

Question & Answers

  • Metrics: how to measure conformal prediction performance?
    • 1. General Metrics
      • Regression Coverage Score
      • Regression Mean Width Score
      • Classification Coverage Score
      • Classification Mean Width Score
      • Size-Stratified Coverage
      • Hilbert-Schmidt Independence Criterion
      • Coverage Width-Based Criterion
      • Mean Winkler Interval Score
    • 2. Calibration Metrics
      • Expected Calibration Error
      • Top-Label Expected Calibration Error (Top-Label ECE)
      • Cumulative Differences
      • Kolmogorov-Smirnov Statistic for Calibration
      • Kuiper’s Test
      • Spiegelhalter’s Test
    • References
  • Mondrian: how to use prior knowledge on groups when measuring uncertainty?
    • Tutorial: how to ensure fairness across groups with Mondrian
      • 1. Create the noisy dataset
      • 2. Split the dataset into a training set, a conformalization set, and a test set
      • 3. Fit a random forest regressor on the training set
      • 4. Build the classical conformal prediction intervals
        • Conformalize a SplitConformalRegressor on the conformalization set
        • Predict the prediction intervals on the test set
        • Evaluate the coverage score by group
      • 5. Build the Mondrian conformal prediction intervals
        • Conformalize a SplitConformalRegressor on the conformalization set for each group
        • Predict the prediction intervals on the test set
      • 6. Compare the coverage by partition, plot both methods side by side
    • Theoretical Description
      • References
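
The metrics listed above quantify how conformal predictions behave on held-out data. Conceptually, the regression coverage score is the fraction of true targets falling inside their predicted interval, and the mean width score is the average interval length; a minimal NumPy illustration of what these metrics measure (MAPIE's own implementations live in mapie.metrics.regression and mapie.metrics.classification):

    # What coverage and mean width measure, computed from an
    # (n_samples, 2) array of [lower, upper] interval bounds.
    import numpy as np

    def coverage(y_true, y_intervals):
        lower, upper = y_intervals[:, 0], y_intervals[:, 1]
        return float(np.mean((y_true >= lower) & (y_true <= upper)))

    def mean_width(y_intervals):
        return float(np.mean(y_intervals[:, 1] - y_intervals[:, 0]))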

API

  • MAPIE API
    • Regression
      • Conformalizers
        • mapie.regression.SplitConformalRegressor
        • mapie.regression.CrossConformalRegressor
        • mapie.regression.JackknifeAfterBootstrapRegressor
        • mapie.regression.ConformalizedQuantileRegressor
        • mapie.regression.TimeSeriesRegressor
      • Metrics
        • mapie.metrics.regression.regression_coverage_score
        • mapie.metrics.regression.regression_mean_width_score
        • mapie.metrics.regression.regression_ssc
        • mapie.metrics.regression.regression_ssc_score
        • mapie.metrics.regression.hsic
        • mapie.metrics.regression.coverage_width_based
        • mapie.metrics.regression.regression_mwi_score
      • Conformity Scores
        • mapie.conformity_scores.BaseRegressionScore
        • mapie.conformity_scores.AbsoluteConformityScore
        • mapie.conformity_scores.GammaConformityScore
        • mapie.conformity_scores.ResidualNormalisedScore
      • Resampling
        • mapie.subsample.BlockBootstrap
        • mapie.subsample.Subsample
    • Classification
      • Conformalizers
        • mapie.classification.SplitConformalClassifier
        • mapie.classification.CrossConformalClassifier
      • Metrics
        • mapie.metrics.classification.classification_coverage_score
        • mapie.metrics.classification.classification_mean_width_score
        • mapie.metrics.classification.classification_ssc
        • mapie.metrics.classification.classification_ssc_score
      • Conformity Scores
        • mapie.conformity_scores.BaseClassificationScore
        • mapie.conformity_scores.NaiveConformityScore
        • mapie.conformity_scores.LACConformityScore
        • mapie.conformity_scores.APSConformityScore
        • mapie.conformity_scores.RAPSConformityScore
        • mapie.conformity_scores.TopKConformityScore
    • Risk Control
      • mapie.risk_control.PrecisionRecallController
        • PrecisionRecallController
        • Examples using mapie.risk_control.PrecisionRecallController
    • Calibration
      • Conformalizer
        • mapie.calibration.TopLabelCalibrator
      • Metrics
        • mapie.metrics.calibration.expected_calibration_error
        • mapie.metrics.calibration.top_label_ece
        • mapie.metrics.calibration.cumulative_differences
        • mapie.metrics.calibration.kolmogorov_smirnov_cdf
        • mapie.metrics.calibration.kolmogorov_smirnov_p_value
        • mapie.metrics.calibration.kolmogorov_smirnov_statistic
        • mapie.metrics.calibration.kuiper_cdf
        • mapie.metrics.calibration.kuiper_p_value
        • mapie.metrics.calibration.kuiper_statistic
        • mapie.metrics.calibration.length_scale
        • mapie.metrics.calibration.spiegelhalter_p_value
        • mapie.metrics.calibration.spiegelhalter_statistic
    • Utils
      • mapie.utils.train_conformalize_test_split
        • train_conformalize_test_split()
        • Examples using mapie.utils.train_conformalize_test_split

All modules for which code is available

  • mapie.calibration
  • mapie.classification
  • mapie.conformity_scores.bounds.absolute
  • mapie.conformity_scores.bounds.gamma
  • mapie.conformity_scores.bounds.residuals
  • mapie.conformity_scores.classification
  • mapie.conformity_scores.regression
  • mapie.conformity_scores.sets.aps
  • mapie.conformity_scores.sets.lac
  • mapie.conformity_scores.sets.naive
  • mapie.conformity_scores.sets.raps
  • mapie.conformity_scores.sets.topk
  • mapie.metrics.calibration
  • mapie.metrics.classification
  • mapie.metrics.regression
  • mapie.regression.quantile_regression
  • mapie.regression.regression
  • mapie.regression.time_series_regression
  • mapie.risk_control
  • mapie.subsample
  • mapie.utils
