FAQ/criteria - CBU statistics Wiki

## Two group discriminant diagnostics

For 2 x 2 tables, four terms are used to summarise the classification table of observed and predicted group membership produced by discriminant procedures such as binary logistic regression.

Let’s call the two groups positive (+) and negative (-) with classification table given below.

|            | True + | True - |
|------------|--------|--------|
| **Pred +** | a      | b      |
| **Pred -** | c      | d      |

The following four quantities are often quoted, and asked for by journals, as a means of evaluating the accuracy of the model fit in the two-group case.

• Sensitivity = a/(a+c) = proportion of those who are really + who are predicted to be +
• Specificity = d/(b+d) = proportion of those who are really - who are predicted to be -
• Positive Predictive Value (PPV) = a/(a+b) = proportion of those predicted as + who really are +
• Negative Predictive Value (NPV) = d/(c+d) = proportion of those predicted as - who really are -
• An example is available using wavelength.
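As an illustration, the four indices can be computed directly from the cell counts of the table above; the counts used here are hypothetical.

```python
# Hypothetical cell counts for the 2 x 2 classification table:
# a = Pred+/True+, b = Pred+/True-, c = Pred-/True+, d = Pred-/True-
a, b, c, d = 40, 10, 5, 45

sensitivity = a / (a + c)  # really + who are predicted +
specificity = d / (b + d)  # really - who are predicted -
ppv = a / (a + b)          # predicted + who really are +
npv = d / (c + d)          # predicted - who really are -

print(f"Sensitivity = {sensitivity:.3f}, Specificity = {specificity:.3f}")
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")
```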

## Producing a classification table in a cross-validation sample

Output from a discriminant procedure such as logistic regression fitted to one data set (the design set) can easily be tested on another (the test set) when there are two groups. Simply compute, for each subject in the test set, the sum of the products of the design-set regression coefficients and that subject's predictor values to produce a score. If the score is positive, allocate the subject to one group (usually the lower-coded group); if it is negative, allocate to the other group (usually the higher-coded group, but check these codings). The classification table comparing predicted to true group membership can then be produced using, for example, the CROSSTABS procedure in SPSS.

In general, for k groups, (k-1) multiple discriminant scores are produced, with one group having a zero score. The predicted group for each individual is the group with the highest discriminant score.
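The two-group scoring step above can be sketched as follows. This is a minimal illustration, not SPSS syntax: the intercept `b0`, coefficients `coefs`, test-set predictors `X_test`, and true codes `y_test` are all hypothetical values, and here a positive score is allocated to the group coded 1 (check the codings used by your own software, as noted above).

```python
import numpy as np

# Hypothetical design-set logistic regression output
b0 = -1.0                      # intercept from the design set
coefs = np.array([0.8, -0.5])  # predictor coefficients from the design set

# Hypothetical test-set predictors and true group codes (0 or 1)
X_test = np.array([[2.0, 1.0], [0.5, 3.0], [1.5, 0.2], [0.1, 2.5]])
y_test = np.array([1, 0, 1, 0])

# Score = sum of coefficients times predictor values (plus intercept)
score = b0 + X_test @ coefs

# Positive score -> group coded 1 here; negative -> group coded 0
pred = (score > 0).astype(int)

# Classification table of predicted (rows) vs true (columns) membership
table = np.zeros((2, 2), dtype=int)
for p, t in zip(pred, y_test):
    table[p, t] += 1
print(table)
```

The resulting `table` plays the same role as the CROSSTABS output: its cells are the a, b, c, d counts from which sensitivity, specificity, PPV, and NPV can be read off.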

## References

Fleiss JL (2003) Statistical Methods for Rates and Proportions, 3rd Edition. New York: Wiley. (In South Wing 5 of the University Library.)

Warrens MJ (2011) Chance-corrected measures for 2 x 2 tables that coincide with weighted kappa. British Journal of Mathematical and Statistical Psychology, 64, 355-365. This quotes the above indices and others, showing that they are special cases of the weighted Cohen's kappa measure of agreement.

FAQ/criteria (last edited 2015-02-02 14:34:05 by PeterWatson)