netcal.regression

Probabilistic Regression Calibration Package

Methods for uncertainty calibration of probabilistic regression tasks. A probabilistic regression model provides not only a continuous point estimate but also an associated uncertainty (commonly a Gaussian standard deviation or variance). The methods in this package recalibrate this uncertainty by means of quantile calibration [1], distribution calibration [2], or variance calibration [3], [4].

Quantile calibration [1] requires that a predicted quantile for a quantile level t covers approximately 100·t % of the ground-truth samples.
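Restated from [1] for convenience: with F_X denoting the predicted cumulative distribution function for input X and Y the ground-truth variable, the quantile calibration condition reads

    \forall t \in [0, 1]: \quad \mathbb{P}\left( Y \le F_X^{-1}(t) \right) = t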

Methods for quantile calibration:

  • IsotonicRegression [1].
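For IsotonicRegression, a minimal usage sketch, assuming netcal's scikit-learn-style fit/transform interface with (mean, stddev) tuples as input; the dummy data and the exact return layout of transform (assumed here to be a non-parametric distribution as quantile levels, PDF, and CDF) should be verified against the class documentation of the installed version:

    import numpy as np
    from netcal.regression import IsotonicRegression

    # dummy predictions: Gaussian means/stddevs and ground-truth targets
    mean = np.random.normal(size=100)
    stddev = np.random.uniform(0.5, 2.0, size=100)
    y_true = mean + np.random.normal(scale=1.5, size=100)

    # fit the recalibration map on held-out calibration data
    isotonic = IsotonicRegression()
    isotonic.fit((mean, stddev), y_true)

    # assumed return layout: recalibrated distribution in non-parametric
    # form as quantile levels t with according PDF/CDF values
    t, pdf, cdf = isotonic.transform((mean, stddev))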

Distribution calibration [2] requires that a predicted probability distribution match the observed error distribution. This must hold for all statistical moments.
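Restated from [2]: a model f that maps each input X to a predicted distribution s is distribution-calibrated if the distribution of the target, conditioned on the prediction, recovers the prediction itself, i.e.

    p(y \mid f(X) = s) = s(y) \quad \text{for all } y \text{ and all predicted } s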

Methods for distribution calibration:

  • GPBeta [2].

  • GPNormal [5].

  • GPCauchy [5].
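A hedged sketch of GP-Beta under the same assumed fit/transform interface, reusing the dummy data (mean, stddev, y_true) from the IsotonicRegression example above; n_inducing_points is taken from the class signature listed below, all other constructor arguments are left at their defaults:

    from netcal.regression import GPBeta

    # GP-Beta estimates local Beta calibration parameters via a GP
    gpbeta = GPBeta(n_inducing_points=12)
    gpbeta.fit((mean, stddev), y_true)

    # assumed return layout: non-parametric calibrated distribution as
    # quantile levels t with according PDF/CDF values
    t, pdf, cdf = gpbeta.transform((mean, stddev))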

Variance calibration [3], [4] requires that the predicted variance of a Gaussian distribution matches the observed error variance; equivalently, the predicted standard deviation should match the root mean squared error of the model.
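In the spirit of [3]: for Gaussian predictions with mean \mu_X and standard deviation \sigma_X, variance calibration requires

    \forall \sigma: \quad \sqrt{ \mathbb{E}\left[ (Y - \mu_X)^2 \mid \sigma_X = \sigma \right] } = \sigma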

Methods for variance calibration:

  • VarianceScaling [3], [4].

  • GPNormal [5].
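A minimal sketch of variance recalibration under the same assumed interface, reusing the dummy data from the first example; that transform returns the rescaled standard deviations directly is an assumption to verify against the class documentation:

    from netcal.regression import VarianceScaling

    # maximum-likelihood rescaling of the predicted variances
    varscaling = VarianceScaling()
    varscaling.fit((mean, stddev), y_true)

    # assumed to return the recalibrated standard deviations
    stddev_cal = varscaling.transform((mean, stddev))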

References

[1] Volodymyr Kuleshov, Nathan Fenner, and Stefano Ermon: “Accurate uncertainties for deep learning using calibrated regression.” International Conference on Machine Learning. PMLR, 2018.

[2] Hao Song, Tom Diethe, Meelis Kull, and Peter Flach: “Distribution calibration for regression.” International Conference on Machine Learning. PMLR, 2019.

[3] Dan Levi et al.: “Evaluating and calibrating uncertainty prediction in regression tasks.” arXiv preprint arXiv:1905.11659, 2019.

[4] Max-Heinrich Laves et al.: “Well-calibrated regression uncertainty in medical imaging with deep learning.” Medical Imaging with Deep Learning. PMLR, 2020.

[5] Fabian Küppers, Jonas Schneider, and Anselm Haselhoff: “Parametric and Multivariate Uncertainty Calibration for Regression and Object Detection.” arXiv preprint arXiv:2207.01242, 2022.

Available classes

IsotonicRegression()

Isotonic regression calibration for probabilistic regression models, optionally with multiple independent output dimensions.

VarianceScaling()

Variance recalibration using maximum likelihood estimation, optionally for multiple independent output dimensions.

GPBeta([n_inducing_points, ...])

GP-Beta recalibration method for regression uncertainty calibration, using the well-known Beta calibration method from classification calibration in combination with Gaussian process (GP) parameter estimation.

GPNormal([n_inducing_points, ...])

GP-Normal recalibration method for regression uncertainty calibration that applies temperature scaling to the variance of a normal distribution, using Gaussian process (GP) parameter estimation to obtain an individual scaling parameter for each input.

GPCauchy([n_inducing_points, ...])

GP-Cauchy recalibration method for regression uncertainty calibration that consumes an uncalibrated Gaussian distribution and converts it to a calibrated Cauchy distribution.
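Unlike GP-Beta, GP-Normal and GP-Cauchy yield parametric output distributions. A hedged sketch under the same assumed interface and dummy data as above; that transform returns the recalibrated standard deviation and the Cauchy scale parameter, respectively, is an assumption to verify:

    from netcal.regression import GPCauchy, GPNormal

    # GP-Normal: input-dependent temperature scaling of the variance
    gp_normal = GPNormal(n_inducing_points=12)
    gp_normal.fit((mean, stddev), y_true)

    # assumed to return the recalibrated standard deviations
    stddev_cal = gp_normal.transform((mean, stddev))

    # GP-Cauchy: Gaussian input, calibrated Cauchy output
    gp_cauchy = GPCauchy(n_inducing_points=12)
    gp_cauchy.fit((mean, stddev), y_true)

    # assumed to return the scale parameter of the Cauchy distribution
    cauchy_scale = gp_cauchy.transform((mean, stddev))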

Package for Gaussian process optimization

gp

Regression GP Calibration Package

This package provides the framework for all Gaussian process (GP) recalibration schemes: GP-Beta [2], GP-Normal [3], and GP-Cauchy [3]. The goal of regression calibration using a GP scheme is to achieve distribution calibration, i.e., to match the predicted moments (mean, variance) to the observed ones. In contrast to quantile calibration [1], where only marginal calibration is of interest, distribution calibration [2] is more restrictive: it requires that the predicted moments match the observed ones given a certain probability distribution. Therefore, the authors of [2] propose to use a Gaussian process to estimate the recalibration parameters of a Beta calibration function locally (i.e., matching the observed moments of neighboring samples). GP-Normal and GP-Cauchy follow the same principle but return parametric output distributions after calibration.

References

[1] Volodymyr Kuleshov, Nathan Fenner, and Stefano Ermon: “Accurate uncertainties for deep learning using calibrated regression.” International Conference on Machine Learning. PMLR, 2018.

[2] Hao Song, Tom Diethe, Meelis Kull, and Peter Flach: “Distribution calibration for regression.” International Conference on Machine Learning. PMLR, 2019.

[3] Fabian Küppers, Jonas Schneider, and Anselm Haselhoff: “Parametric and Multivariate Uncertainty Calibration for Regression and Object Detection.” arXiv preprint arXiv:2207.01242, 2022.