In: Statistics and Probability
Measures of Predictive Models: Sensitivity and Specificity
Q: Is there a good trade-off that can be made between the sensitivity and specificity?
Sensitivity and specificity are defined as follows:
Sensitivity = TP/(TP + FN): The proportion of observed positive values that were predicted to be positive.
Specificity = TN/(TN + FP): The proportion of observed negatives that were predicted to be negatives.
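As a minimal sketch, the two definitions above can be computed directly from confusion-matrix counts; the counts used here are made-up numbers for illustration:

```python
def sensitivity(tp, fn):
    # Sensitivity = TP / (TP + FN): proportion of observed positives
    # that the test predicted to be positive.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Specificity = TN / (TN + FP): proportion of observed negatives
    # that the test predicted to be negative.
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts for illustration only.
tp, fn, tn, fp = 80, 20, 90, 10
print(sensitivity(tp, fn))  # 0.8
print(specificity(tn, fp))  # 0.9
```

Note that the two measures use disjoint cells of the confusion matrix: sensitivity is computed only from the observed positives (TP + FN), specificity only from the observed negatives (TN + FP).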
The trade-off between the true positive and false positive rates reflects the inverse relationship between sensitivity and specificity, expressed as the ability of a given test to distinguish "noise" from "signal plus noise."
A Receiver Operating Characteristic (ROC) curve is a graphical representation of the trade-off between the true positive and false positive rates for every possible cutoff. By tradition, the plot shows the false positive rate (1 - specificity) on the X axis and the true positive rate (sensitivity, or 1 - the false negative rate) on the Y axis.
The accuracy of a test (its ability to correctly classify cases with a certain condition and cases without the condition) is measured by the area under the ROC curve. An area of 1 represents a perfect test, while an area of 0.5 represents a worthless test, no better than random guessing. The closer the curve follows the left-hand border and then the top border of the ROC space, the more accurate the test: the true positive rate is high while the false positive rate is low. Statistically, more area under the curve means the test identifies more true positives while keeping the proportion of false positives low.
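The construction described above can be sketched in pure Python: sweep every possible cutoff over the scores, record the (false positive rate, true positive rate) pair at each cutoff, and integrate the resulting curve with the trapezoid rule. The scores and labels below are hypothetical, chosen only to illustrate the procedure:

```python
def roc_points(scores, labels):
    """(FPR, TPR) pairs for every possible cutoff over the scores.

    A cutoff of t predicts positive whenever score >= t, so sweeping t
    from high to low traces the curve from (0, 0) toward (1, 1).
    """
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)               # number of observed positives
    neg = len(labels) - pos         # number of observed negatives
    points = [(0.0, 0.0)]           # cutoff above all scores: nothing predicted positive
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Area under the ROC curve via the trapezoid rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

# Hypothetical predicted scores and true labels (1 = positive, 0 = negative).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   1,    0,   0,   0]
print(auc(roc_points(scores, labels)))  # 0.875
```

A curve hugging the upper-left corner pushes the trapezoids toward full height early, which is exactly why a more accurate test yields a larger area.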