Diff for "FAQ/lrchi" - CBU statistics Wiki
location: Diff for "FAQ/lrchi"
Differences between revisions 17 and 18
Revision 17 as of 2011-08-12 10:48:40
Size: 3572
Editor: PeterWatson
Comment:
Revision 18 as of 2011-08-12 10:59:21
Size: 4258
Editor: PeterWatson
Comment:
Deletions are marked like this. Additions are marked like this.
Line 37: Line 37:
where D is the a quantity known as the deviance which represents the overall lack of fit of the model (or deviation of subjects' predicted group probabilities from their observed groups). This deviation is represented by deviance residuals which can be outputted in SPSS using the /SAVE DEV subcommand as below. where D is the a quantity known as the deviance which represents the overall lack of fit of the model (or deviation of subjects' predicted group probabilities from their observed groups). This deviation is represented by deviance residuals which can be outputted in SPSS using the /SAVE DEV subcommand as below. 
Line 53: Line 53:
D is then equal to the number of observations multiplied by the mean of D2. D is then equal to the number of observations multiplied by the mean of D2, the squared subject deviance residuals. We can the use this with the outputted omnibus test chi-square to obtain an adjusted likelihood ratio R-squared for the cases where two or more subjects have identical predictor variable values.

Hosmer and Lemeshow further state that R-squared values tend to be lower in logistic regression than the usual linear regression for continuous outcomes and, consequently, can give misleading low indications of the fit of models deemed good using other fit criteria such as area under ROC crurve or percentage correctly classified. They, therefore, recommend using R-squared to compare competing models rather than as a stand-alone effect size.

How do I summarise a fit for a logistic regression model?

Menard (2000) compares various R-squared measures for binary logistic regressions and concludes that the measure based on the log-likelihood ratio chi-square is the most appropriate:

$$ \mbox{R-squared (Likelihood ratio)} = 1 - \frac{\ln(L[m])}{\ln(L[0])} = 1 - \frac{-2 \ln(L[m])}{-2 \ln(L[0])} = \frac{\ln(L[0]) - \ln(L[m])}{\ln(L[0])}$$

where ln(L[m]) and ln(L[0]) are the log likelihoods for the model with predictors and for the model containing only the intercept respectively. The middle form uses -2 times the log likelihood, which is the quantity output by SPSS (and other software), rather than the log likelihood itself. This form of R-squared is also known as McFadden's R-squared.
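As a concrete illustration, McFadden's R-squared can be computed by hand from the -2 log likelihood values that SPSS reports. The sketch below uses made-up values (-2 ln(L[0]) = 140 for the intercept-only model, -2 ln(L[m]) = 120 for the fitted model; the variable names M2LL0, M2LLM and RSQMCF are arbitrary) entered as a one-case dataset:

* Hypothetical -2 log likelihoods for the intercept-only (M2LL0) and fitted (M2LLM) models .
DATA LIST FREE / M2LL0 M2LLM .
BEGIN DATA
140 120
END DATA.
* McFadden's R-squared = 1 - (-2 ln(L[m]))/(-2 ln(L[0])) = 1 - 120/140 = 0.143 .
COMPUTE RSQMCF = 1 - M2LLM/M2LL0 .
EXECUTE.
LIST.

Running this gives RSQMCF = 0.14 for these illustrative values.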

The statistical significance of the predictors may be jointly assessed using twice the change in the log likelihoods in the above expression. This equals 2(ln(L[m]) - ln(L[0])), which is distributed as chi-square(p) if the p predictors jointly have no influence on group membership. This chi-square is computed and output by most software which performs binary logistic regressions. In SPSS, for example, it is the chi-square statistic produced immediately after the predictors are added to the model, under the heading 'Block 1: Method = Enter'. For example, running a logistic regression in SPSS to assess the joint importance of two predictors, p1 and p2, with the syntax below

LOGISTIC REGRESSION  y
  /METHOD = ENTER p1 p2
  /CRITERIA = PIN(.05) POUT(.10) ITERATE(20) CUT(.5) .

we obtain the likelihood ratio chi-square in the output which is of form:

Block 1: Method = Enter
 
Omnibus Tests of Model Coefficients
                Chi-square      df      Sig.
Step 1  Step    3.958           2       .138
        Block   3.958           2       .138
        Model   3.958           2       .138

This may be expressed as chi-square(2) = 3.96, p = 0.14, indicating that together the two predictors, p1 and p2, do not have a statistically significant association with the group variable, y.
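If only the chi-square value and its degrees of freedom are to hand, the p-value can be reproduced using SPSS's SIG.CHISQ function, which returns the upper tail probability of a chi-square distribution. A minimal sketch using the output above (the variable names CHISQ, DF and P are arbitrary):

* p-value for the omnibus chi-square of 3.958 on 2 degrees of freedom .
DATA LIST FREE / CHISQ DF .
BEGIN DATA
3.958 2
END DATA.
COMPUTE P = SIG.CHISQ(CHISQ,DF) .
EXECUTE.
LIST.

This returns p = 0.138, agreeing with the Sig. column of the omnibus test table.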

The above R-squared estimate is also advocated by Train (2003).

Hosmer and Lemeshow (2000) note that the likelihood ratio R-squared does not attain the maximum value of 1.00 when two or more subjects have the same values of their predictor variables. In this case they propose a modification of the likelihood ratio R-squared:

$$\frac{\ln(L[0]) - \ln(L[m])}{\ln(L[0]) - \ln(L[m]) - 0.5D} = \frac{-2\ln(L[0]) - (-2\ln(L[m]))}{-2\ln(L[0]) - (-2\ln(L[m])) + D} = \frac{\mbox{Omnibus test chi-square for m variables}}{\mbox{Omnibus test chi-square for m variables} + \mbox{deviance}}$$

where D is a quantity known as the deviance, which represents the overall lack of fit of the model (the deviation of subjects' predicted group probabilities from their observed groups). This deviation is represented by deviance residuals, which can be output in SPSS using the /SAVE DEV subcommand as below.

LOGISTIC REGRESSION  y
  /METHOD = ENTER p1 p2
  /SAVE = DEV
  /CRITERIA = PIN(.05) POUT(.10) ITERATE(20) CUT(.5) .

* Square the deviance residuals saved as DEV_1 by /SAVE DEV .
COMPUTE D2 = DEV_1*DEV_1 .
EXECUTE.

DESCRIPTIVES
  VARIABLES=D2
  /STATISTICS=MEAN STDDEV MIN MAX .

D is then equal to the number of observations multiplied by the mean of D2, the squared subject deviance residuals. We can then use this with the output omnibus test chi-square to obtain an adjusted likelihood ratio R-squared for cases where two or more subjects have identical predictor variable values.
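For example, suppose (using made-up values) that there are 100 observations and the DESCRIPTIVES output gives a mean of D2 of 1.20; then D = 100 x 1.20 = 120 and, taking the omnibus chi-square of 3.958 from earlier, the adjusted R-squared follows from the right-hand form of the formula above. A sketch (the variable names NOBS, MEAND2, CHISQ, D and RSQADJ are arbitrary):

* Hypothetical inputs: number of observations, mean squared deviance residual, omnibus chi-square .
DATA LIST FREE / NOBS MEAND2 CHISQ .
BEGIN DATA
100 1.20 3.958
END DATA.
* Deviance D and the Hosmer-Lemeshow adjusted likelihood ratio R-squared .
COMPUTE D = NOBS*MEAND2 .
COMPUTE RSQADJ = CHISQ/(CHISQ + D) .
EXECUTE.
LIST.

This gives D = 120 and RSQADJ = 0.03 for these illustrative values.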

Hosmer and Lemeshow further state that R-squared values tend to be lower in logistic regression than in the usual linear regression for continuous outcomes and, consequently, can give misleadingly low indications of the fit of models deemed good by other fit criteria such as the area under the ROC curve or the percentage correctly classified. They therefore recommend using R-squared to compare competing models rather than as a stand-alone effect size.

References

Hosmer, D.W. and Lemeshow, S. (2000). Applied logistic regression (2nd ed.). New York: Wiley.

Menard, S. (2000). Coefficients of determination for multiple logistic regression analysis. American Statistician, 54, 17-24.

Train, K. (2003). Discrete choice methods with simulation. Cambridge: Cambridge University Press.
