Diff for "FAQ/AICreg" - CBU statistics Wiki
How do I compute Akaike's information criterion (AIC) to compare regression models?

Akaike's information criterion is used to compare the efficiency of multivariate models fitted to the same data, combining the degree of fit with the number of terms in the model. Simpler, better-fitting models are preferred and have smaller AICs. AIC can be used as an alternative to the F ratio in stepwise regression to investigate the effectiveness of adding or removing one or more predictors from a model (see an example in the Regression Grad talk).

AIC = n ln(RSS/n) + 2 df(model)

where RSS is the Residual Sum of Squares, which is routinely output by the regression analysis, n is the total sample size, and df(model) is the degrees of freedom of the regression model, i.e. the number of parameters, which equals the number of predictors + 1 (for the intercept). The above formula for AIC is also given on page 63 of Burnham and Anderson (2002).
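
As a rough illustration, the AIC can be computed directly from these quantities. The sketch below is in Python; the RSS and sample-size values are made up purely for illustration and the function name aic is not from any particular package:

import math

def aic(rss, n, n_predictors):
    # AIC = n ln(RSS/n) + 2 df(model), where df(model) = number of predictors + 1 (intercept)
    df_model = n_predictors + 1
    return n * math.log(rss / n) + 2 * df_model

# Illustrative numbers only: two models fitted to the same n = 50 observations
aic_full = aic(rss=120.0, n=50, n_predictors=3)     # model with three predictors
aic_reduced = aic(rss=135.0, n=50, n_predictors=2)  # same model with one predictor dropped
print(aic_full, aic_reduced)                        # the model with the smaller AIC is preferred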

There is also the Bayesian Information Criterion (BIC), or Schwarz's criterion:

BIC = n ln(RSS/n) + (k+1) ln(n)

where n is the total sample size and there are k+1 parameters (the k predictors plus the intercept).

Nagin (1999) suggests using exp(BIC(1) - BIC(2)) as a means of deciding whether one BIC is meaningfully lower than another (page 147; Table 2 on page 148 gives some rules of thumb). This approach is also mentioned in Chapter Four of Nagin (2005).
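
Along the same lines, here is a minimal sketch of the BIC and of Nagin's suggested comparison, again in Python with purely illustrative numbers (the bic function and the two example models are not from any package, and the interpretation of the resulting ratio should be checked against Table 2 of Nagin, 1999):

import math

def bic(rss, n, n_predictors):
    # BIC = n ln(RSS/n) + (k+1) ln(n), with k predictors plus the intercept
    return n * math.log(rss / n) + (n_predictors + 1) * math.log(n)

# Illustrative numbers only: the same two models as in the AIC sketch above
bic_1 = bic(rss=120.0, n=50, n_predictors=3)
bic_2 = bic(rss=135.0, n=50, n_predictors=2)

# Nagin (1999) suggests looking at exp(BIC(1) - BIC(2)) rather than the raw difference;
# see page 147 and Table 2 on page 148 of that paper for rules of thumb on interpreting it
evidence_ratio = math.exp(bic_1 - bic_2)
print(bic_1, bic_2, evidence_ratio)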

References

Burnham, K.P. and Anderson, D.R. (2002) Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, second edition. Springer-Verlag, New York.

(A pdf copy of the above book may also be downloaded for free from here.)

Nagin, D.S. (1999) Analyzing Developmental Trajectories: A Semiparametric, Group-Based Approach. Psychological Methods 4(2), 139-157.
