Youden's J statistic
Youden's J statistic (also called Youden's index) is a single statistic that captures the performance of a dichotomous diagnostic test. In meteorology, this statistic is referred to as Peirce Skill Score (PSS), Hanssen–Kuipers Discriminant (HKD), or True Skill Statistic (TSS).
Definition
Youden's J statistic is
J = \text{sensitivity} + \text{specificity} - 1 = \text{recall}_{1} + \text{recall}_{0} - 1
where the two right-hand quantities, recall of the positive and of the negative class, are the sensitivity and the specificity, respectively. Thus the expanded formula is:
J = \frac{TP}{TP + FN} + \frac{TN}{TN + FP} - 1 = \frac{TP \times TN - FP \times FN}{(TP + FN)(TN + FP)}
In this equation, TP is the number of true positives, TN the number of true negatives, FP the number of false positives and FN the number of false negatives.
The index was suggested by W. J. Youden in 1950 as a way of summarising the performance of a diagnostic test; however, the formula was published earlier in Science by C. S. Peirce in 1884. Its value ranges from −1 through 1 (inclusive). It is zero when a diagnostic test gives the same proportion of positive results for groups with and without the disease, i.e. the test is useless, and 1 when there are no false positives or false negatives, i.e. the test is perfect. The index gives equal weight to false positive and false negative values, so all tests with the same value of the index misclassify the same proportion of results. While it is possible to obtain a value of less than zero from this equation (e.g. when classification yields only false positives and false negatives), a negative value just indicates that the positive and negative labels have been switched; after correcting the labels, the result will be in the 0 through 1 range.
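The definition above translates directly into code. The following is a minimal sketch (the function name `youden_j` is an illustrative choice, not a standard API) computing J from confusion-matrix counts:

```python
def youden_j(tp: int, fn: int, tn: int, fp: int) -> float:
    """Youden's J from confusion-matrix counts: sensitivity + specificity - 1."""
    sensitivity = tp / (tp + fn)  # true positive rate among diseased subjects
    specificity = tn / (tn + fp)  # true negative rate among healthy subjects
    return sensitivity + specificity - 1

# A perfect test (no false positives or false negatives) gives J = 1:
assert youden_j(tp=50, fn=0, tn=50, fp=0) == 1.0

# A useless test (same positive rate in both groups) gives J = 0:
assert abs(youden_j(tp=25, fn=25, tn=25, fp=25)) < 1e-12
```

Note that swapping the roles of the positive and negative labels negates J, which is why a negative value merely signals switched labels.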

Youden's index is often used in conjunction with receiver operating characteristic (ROC) analysis. The index is defined for all points of an ROC curve, and the maximum value of the index may be used as a criterion for selecting the optimum cut-off point when a diagnostic test gives a numeric rather than a dichotomous result. The index is represented graphically as the height above the chance line, and it is also equivalent to the area under the curve subtended by a single operating point. Because the ROC curve almost always forms a convex curve, the line of this maximum index value is likely to intersect the ROC curve at the point where the ROC curve is closest to the point in the top left corner (i.e. the point closest to no false positive or false negative results).
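Selecting the optimum cut-off by maximizing J can be sketched in a few lines. This is an illustrative implementation assuming higher scores indicate disease and scanning the observed scores as candidate thresholds (the function name `best_cutoff` is hypothetical):

```python
def best_cutoff(scores_diseased, scores_healthy):
    """Return the threshold maximizing Youden's J, classifying a subject
    as positive when its score >= threshold."""
    best_t, best_j = None, float("-inf")
    # Candidate thresholds: every distinct observed score.
    for t in sorted(set(scores_diseased) | set(scores_healthy)):
        sens = sum(s >= t for s in scores_diseased) / len(scores_diseased)
        spec = sum(s < t for s in scores_healthy) / len(scores_healthy)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

t, j = best_cutoff([3, 5, 7, 9], [1, 2, 4, 6])
```

Each candidate threshold corresponds to one operating point on the ROC curve; the maximizing threshold is the point with the greatest height above the chance line.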
Confidence interval
For a given diagnostic test with $n_{\text{sensitivity}}$ diseased subjects and $n_{\text{specificity}}$ healthy subjects, the point estimate of the Youden index is:
\hat{J} = \hat{p}_{\text{sensitivity}} + \hat{p}_{\text{specificity}} - 1
The estimation of the confidence interval depends on whether the threshold (cut-point) used to define a positive result is pre-specified (fixed) or chosen from the data to maximize the index (optimized).
If the threshold is fixed, the sensitivity and specificity are estimated from independent samples, and the variance of the Youden Index is the sum of the variances of the two binomial proportions:
\operatorname{Var}(\hat{J}) = \operatorname{Var}(\hat{p}_{\text{sensitivity}}) + \operatorname{Var}(\hat{p}_{\text{specificity}}) = \frac{\hat{p}_{\text{sensitivity}}(1 - \hat{p}_{\text{sensitivity}})}{n_{\text{sensitivity}}} + \frac{\hat{p}_{\text{specificity}}(1 - \hat{p}_{\text{specificity}})}{n_{\text{specificity}}}
Based on the central limit theorem, the (1 − α) Wald-type confidence interval is calculated as:
CI = \hat{J} \pm z_{1-\alpha/2} \sqrt{\frac{\hat{p}_{\text{sensitivity}}(1 - \hat{p}_{\text{sensitivity}})}{n_{\text{sensitivity}}} + \frac{\hat{p}_{\text{specificity}}(1 - \hat{p}_{\text{specificity}})}{n_{\text{specificity}}}}
where $z_{1-\alpha/2}$ is the critical value from the standard normal distribution (e.g., 1.96 for a 95% confidence interval).
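For a fixed threshold, the Wald interval above can be computed directly from the confusion-matrix counts. A minimal sketch (the function name `youden_wald_ci` is illustrative):

```python
import math

def youden_wald_ci(tp, fn, tn, fp, z=1.96):
    """Wald-type confidence interval for Youden's J at a pre-specified
    threshold; z = 1.96 gives a 95% interval."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    j = sens + spec - 1
    # Variance is the sum of the two independent binomial variances.
    var = sens * (1 - sens) / (tp + fn) + spec * (1 - spec) / (tn + fp)
    half = z * math.sqrt(var)
    return j - half, j + half

lo, hi = youden_wald_ci(tp=45, fn=5, tn=40, fp=10)
```

The interval is symmetric around the point estimate, which is why it can spill outside [−1, 1] for small samples or extreme proportions.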
If the threshold is instead optimized to maximize J, the variance estimate must account for the additional variability of the threshold selection process. In such cases, the Delta method or bootstrapping is required to maintain the nominal coverage probability.
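A percentile bootstrap for the optimized-threshold case can be sketched as follows: resample both groups with replacement and re-optimize the threshold on every resample, so the variability of the threshold selection is reflected in the interval. Function names and the example data are illustrative assumptions:

```python
import random

def bootstrap_ci_optimized_j(diseased, healthy, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for J when the cut-point is chosen from the
    data to maximize J (re-optimized within each bootstrap resample)."""
    rng = random.Random(seed)

    def max_j(d, h):
        # Maximum of J over all candidate thresholds (positive if score >= t).
        return max(
            sum(s >= t for s in d) / len(d) + sum(s < t for s in h) / len(h) - 1
            for t in set(d) | set(h)
        )

    stats = sorted(
        max_j(rng.choices(diseased, k=len(diseased)),
              rng.choices(healthy, k=len(healthy)))
        for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

lo, hi = bootstrap_ci_optimized_j(
    [3, 5, 7, 9, 4, 8, 6, 10], [1, 2, 4, 6, 0, 3, 2, 5], n_boot=500)
```

Because the threshold is re-chosen on each resample, this interval is typically wider than the fixed-threshold Wald interval.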
Alternative estimation methods
While the Wald interval is widely utilized, it may exhibit poor coverage probabilities or produce bounds outside the logical range of [−1, 1] when sample sizes are small or when proportions are near 1. More robust methods include:
- Newcombe "square-and-add" method: Because J can be expressed as a difference of two independent proportions (sensitivity and the false positive rate), the Newcombe method for the difference of proportions—which combines two Wilson score intervals—typically provides better coverage for small samples.
- Logit transformation: Applying a logit transformation ensures the confidence interval remains within the logical range of [−1, 1]. This is typically achieved by calculating the interval for the transformed components (sensitivity and specificity) or by shifting the index to a [0, 1] scale before transformation, then back-transforming the resulting bounds.
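The Newcombe method above can be sketched by writing J as the difference between sensitivity and the false positive rate, computing a Wilson score interval for each proportion, and combining the interval half-widths by "square-and-add" (function names are illustrative):

```python
import math

def wilson(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

def youden_newcombe_ci(tp, fn, tn, fp, z=1.96):
    """Newcombe 'square-and-add' CI for J = sensitivity - FPR, combining
    Wilson intervals for the two independent proportions."""
    n1, n0 = tp + fn, tn + fp
    sens, fpr = tp / n1, fp / n0
    l1, u1 = wilson(tp, n1, z)  # Wilson interval for sensitivity
    l2, u2 = wilson(fp, n0, z)  # Wilson interval for the false positive rate
    d = sens - fpr              # point estimate of J
    lower = d - math.sqrt((sens - l1) ** 2 + (u2 - fpr) ** 2)
    upper = d + math.sqrt((u1 - sens) ** 2 + (fpr - l2) ** 2)
    return lower, upper

lo, hi = youden_newcombe_ci(tp=45, fn=5, tn=40, fp=10)
```

Because the Wilson intervals themselves stay inside [0, 1], the combined bounds behave better than the Wald interval near the extremes.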
Other metrics
Youden's index is also known as deltaP'. It allows for several multiclass generalizations, one of which is (Bookmaker) Informedness. Informedness is the probability of an informed decision (as opposed to a random guess) and takes into account all predictions. However, a low Informedness value does not imply that the model is close to a random model, whereas this is the case for Youden's index in the binary case. A recent multiclass generalization of Youden's J preserves this property.
An unrelated but commonly used combination of basic statistics from information retrieval is the F-score, a (possibly weighted) harmonic mean of recall and precision, where recall = sensitivity = true positive rate. But specificity and precision are totally different measures. F-score, like recall and precision, considers only the so-called positive predictions, with recall being the probability of predicting just the positive class, precision being the probability of a positive prediction being correct, and F-score equating these probabilities under the effective assumption that the positive labels and the positive predictions should have the same distribution and prevalence, similar to the assumption underlying Fleiss' kappa. Youden's J, Informedness, recall, precision and F-score are intrinsically unidirectional, aiming to assess the deductive effectiveness of predictions in the direction proposed by a rule, theory or classifier. DeltaP is Youden's J used to assess the reverse or abductive direction (and generalizes to the multiclass case as Markedness), matching well human learning of associations, rules and superstitions as we model possible causation, while correlation and kappa evaluate bidirectionally.
Matthews correlation coefficient is the geometric mean of the regression coefficient of the dichotomous problem and its dual, where the component regression coefficients of the Matthews correlation coefficient are deltaP and deltaP' (that is, Youden's J or Peirce's I). The main article on Matthews correlation coefficient discusses two different generalizations to the multiclass case, one being the analogous geometric mean of Informedness and Markedness.
Kappa statistics such as Fleiss' kappa and Cohen's kappa are methods for calculating inter-rater reliability based on different assumptions about the marginal or prior distributions, and are increasingly used as chance corrected alternatives to accuracy in other contexts (including the multiclass case). Fleiss' kappa, like F-score, assumes that both variables are drawn from the same distribution and thus have the same expected prevalence, while Cohen's kappa assumes that the variables are drawn from distinct distributions and referenced to a model of expectation that assumes prevalences are independent.
When the true prevalences for the two positive variables are equal, as assumed in Fleiss' kappa and F-score, that is, when the number of positive predictions matches the number of positive classes in the dichotomous (two-class) case, the different kappa and correlation measures collapse to identity with Youden's J, and recall, precision and F-score are similarly identical with accuracy.