In: Statistics and Probability
Describe a case when a modeler might want to evaluate their models using probability calibration plots.
Specifically, the predicted probabilities are divided into a fixed number of buckets along the x-axis. The number of events (class=1) is then counted in each bin and normalized by the bin size. The results are then plotted as a line plot.
These plots are commonly referred to as 'reliability' diagrams in the forecasting literature, although they may also be called 'calibration' plots or curves, since they summarize how well the forecast probabilities are calibrated.
The better calibrated, or more reliable, a forecast is, the closer its points will lie to the main diagonal running from the bottom left to the top right of the plot.
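The procedure described above can be sketched directly in plain NumPy (the function name `reliability_curve` and the toy data are illustrative, not from the original text): bucket the predicted probabilities, count the class=1 events per bucket, and normalize by the bucket size.

```python
import numpy as np

def reliability_curve(y_true, y_prob, n_bins=10):
    """Compute the points of a reliability (calibration) diagram.

    Buckets the predicted probabilities into n_bins fixed-width bins,
    counts the positive (class=1) events per bin, and normalizes by
    the number of samples in the bin.
    """
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    # Fixed-width buckets along the x-axis: [0, 0.1), [0.1, 0.2), ..., [0.9, 1.0]
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.digitize(y_prob, edges[1:-1])  # bin index 0..n_bins-1
    xs, ys = [], []
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            xs.append(y_prob[mask].mean())  # mean predicted probability in the bin
            ys.append(y_true[mask].mean())  # observed event frequency in the bin
    return np.array(xs), np.array(ys)

# Toy example: perfectly calibrated predictions should track the diagonal.
rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 10_000)
y = (rng.uniform(0, 1, 10_000) < p).astype(int)  # events occur with probability p
x_pts, y_pts = reliability_curve(y, p, n_bins=10)
# Plotting x_pts against y_pts (e.g. with matplotlib) gives the line plot;
# a well-calibrated model lies close to the diagonal y = x.
```

Since the toy labels are drawn with exactly the predicted probabilities, the resulting points sit close to the diagonal; a miscalibrated model (e.g. an overconfident one) would bow away from it.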
There are two ways to use this class:
1. prefit
2. cross-validation
In the prefit case, you fit a model on a training dataset and then calibrate this prefit model using a separate hold-out validation dataset.
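The text does not name the class, but the two options above match scikit-learn's `CalibratedClassifierCV`, so the following is a sketch under that assumption, showing the cross-validation way (the base model and dataset here are placeholders):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic binary classification data, purely for illustration.
X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Cross-validation way: the training data is split internally; on each
# fold the base model is fit on one part and calibrated on the held-out
# part, so no separate validation set is needed.
calibrated = CalibratedClassifierCV(GaussianNB(), cv=3, method="sigmoid")
calibrated.fit(X_train, y_train)
probs = calibrated.predict_proba(X_test)[:, 1]  # calibrated probabilities
```

In the prefit way, by contrast, you fit the base model yourself on the training set and then calibrate it on a hold-out validation set, as described above; the cross-validation way trades that extra hold-out set for internal splitting.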
Note: if anything here is unclear, please feel free to ask via the comment box. Thank you.