FAQ/Simon - CBU statistics Wiki


Testing normality including skewness and kurtosis

High levels of skewness (asymmetry) and kurtosis (peakedness) in regression/ANOVA model residuals (which may be saved in SPSS) are undesirable and can undermine these analyses. SPSS reports these values (see the CBSU Stats methods talk on exploratory data analysis). Steve Simon (see here) gives some sound advice on checking normality assumptions, including rules of thumb on just how large skew and kurtosis must be before you should start worrying about doing statistical analyses. His main points, from an e-mail to the SPSSX-L mailing list of 3rd December 2009, are reproduced below:

  • There are no official rules about cut-off criteria to decide just how large skew or kurtosis values must be to indicate non-normality.
  • Avoid using a test of significance, because it has too much power when the assumption of normality is least important and too little power when the assumption of normality is most important.
  • I generally don't get too excited about skewness unless it is larger than +/- 1 or so.

[Note: Hair, J.F., Anderson, R.E., Tatham, R.L. and Black, W.C. (1998). Multivariate Data Analysis (5th Edition). Prentice-Hall: New Jersey, give the same cut-offs for skewness.]

  • Streiner and Norman (1995), in the book "Health Measurement Scales", suggest that if 80%+ of individuals are responding at one end of the scale you have a problem; otherwise it doesn't matter.
  • SPSS defines kurtosis in a truly evil way by subtracting 3 from the value of the fourth central standardized moment. A value of 6 or larger on the true kurtosis (or a value of 3 or more on the perverted definition of kurtosis that SPSS uses) indicates a large departure from normality. Very small values of kurtosis also indicate a deviation from normality, but it is a very benign deviation. This indicates very light tails, as might happen if the data are truncated or sharply bounded on both the low end and the high end.
  • Don't let skewness and kurtosis prevent you from also graphically examining normality. A histogram and/or a Q-Q plot are very helpful here (see the sketch after this list).
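To make these rules of thumb concrete, here is a minimal sketch in Python (an illustrative choice; the wiki's own examples are SPSS macros, which are linked rather than reproduced on this page). Calling scipy.stats with bias=False should reproduce the bias-corrected skewness and the Fisher ("minus 3") excess kurtosis that SPSS reports, and the plots are the histogram and normal Q-Q plot Simon recommends. The simulated residuals are a stand-in for residuals saved from a real model.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
residuals = rng.normal(size=200)  # stand-in for residuals saved from a model

# bias=False gives the bias-corrected estimators, matching what SPSS reports;
# fisher=True subtracts 3, i.e. SPSS-style "excess" kurtosis
skew = stats.skew(residuals, bias=False)
exkurt = stats.kurtosis(residuals, fisher=True, bias=False)
print(f"skewness = {skew:.3f}, excess kurtosis = {exkurt:.3f}")

# Simon's advice: always examine the data graphically as well
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.hist(residuals, bins=20)
ax1.set_title("Histogram of residuals")
stats.probplot(residuals, dist="norm", plot=ax2)  # normal Q-Q plot
ax2.set_title("Normal Q-Q plot")
plt.tight_layout()
plt.show()
```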

What if most of the variables that I have are normal and a few of them are not? In this case, is it possible to use parametric tests?

These lecture notes (page 12) also give the +/- 3 rule of thumb for kurtosis cut-offs. Values of asymmetry and kurtosis between -2 and +2 are considered acceptable evidence of a normal univariate distribution (George & Mallery, 2010). Hair et al. (2010) and Byrne (2010) argue that data may be considered normal if skewness is between -2 and +2 and kurtosis is between -7 and +7. More rules of thumb, attributable to Kline (2011), are given here. A simple screening helper applying these cut-offs is sketched below.
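This hypothetical helper applies the cut-offs quoted above; the function name and default thresholds are illustrative (taken from the Hair et al., 2010 and Byrne, 2010 rules of thumb), not any standard API.

```python
from scipy import stats

def screen_normality(x, skew_cut=2.0, kurt_cut=7.0):
    """Check a variable against the skewness/kurtosis rules of thumb
    quoted above (Hair et al., 2010; Byrne, 2010). Illustrative only:
    a rule of thumb is no substitute for looking at the data."""
    s = stats.skew(x, bias=False)                   # bias-corrected skewness
    k = stats.kurtosis(x, fisher=True, bias=False)  # SPSS-style excess kurtosis
    return {"skew": s, "kurtosis": k,
            "acceptable": abs(s) <= skew_cut and abs(k) <= kurt_cut}
```

For the stricter George & Mallery (2010) rule, call it with skew_cut=2.0 and kurt_cut=2.0.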

Curran et al. (1996) suggest the same moderate normality thresholds of 2.0 and 7.0 for skewness and kurtosis respectively when assessing multivariate normality, which is assumed in factor analyses and MANOVA. There is also a graphical method for assessing multivariate normality: plot the j-th ordered squared Mahalanobis distance of the n observations (computed from the p variables) against the (j-0.5)/n quantile of a chi-square distribution with p degrees of freedom. If the variables have a multivariate normal distribution the plot will form a straight line. Details are in Stevens (2001) and in Johnson and Wichern's 3rd edition (1992). The appendix of DeCarlo's 1997 paper (pdf version here) contains an SPSS macro, and Burdenski (2000) (paper in pdf format here) gives a less flexible non-macro form, also with SPSS code; an SPSS macro version of Burdenski's syntax is given here. Both the DeCarlo and Burdenski syntax plot the ordered Mahalanobis distances against chi-square quantiles, as described above, as a graphical test of multivariate normality. Best-fitting lines can be added to the scatterplots (see here for how to do this in SPSS) to help assess linearity. A sketch of this chi-square Q-Q plot follows.
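Here is that graphical check sketched in Python (again an illustrative translation, not the DeCarlo or Burdenski macros themselves): the n ordered squared Mahalanobis distances are plotted against chi-square quantiles with p degrees of freedom at the (j-0.5)/n points, with a y = x reference line to help assess linearity.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

def chisq_qq_plot(X):
    """Plot the ordered squared Mahalanobis distances of the n rows of X
    (n observations on p variables) against chi-square(p) quantiles.
    Points close to the y = x line support multivariate normality."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    # squared Mahalanobis distance of each row, then sorted
    d2 = np.sort(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))
    # chi-square(p) quantiles at the (j - 0.5)/n points, j = 1..n
    q = stats.chi2.ppf((np.arange(1, n + 1) - 0.5) / n, df=p)
    plt.scatter(q, d2, s=12)
    lim = max(q.max(), d2.max())
    plt.plot([0, lim], [0, lim])  # y = x reference line for assessing linearity
    plt.xlabel(f"Chi-square({p}) quantiles")
    plt.ylabel("Ordered squared Mahalanobis distances")
    plt.title("Chi-square Q-Q plot for multivariate normality")
    plt.show()

# illustrative call on simulated trivariate normal data
rng = np.random.default_rng(1)
chisq_qq_plot(rng.multivariate_normal(np.zeros(3), np.eye(3), size=100))
```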

In addition, Wiedermann, Hagmann and von Eye (2015) present an easy-to-use z-test for comparing the skewnesses of variables (in this case the residuals from different multiple regressions, the model with the more symmetric residuals being preferred).

References

Burdenski, T. (2000). Evaluating univariate, bivariate, and multivariate Normality using graphical and statistical procedures. Multiple Linear Regression Viewpoints, 26(2), 15-28.

Byrne, B. M. (2010). Structural equation modeling with AMOS: Basic concepts, applications, and programming. New York: Routledge.

Curran, P. J., West, S. G., & Finch, J. F. (1996). The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Psychological Methods, 1(1), 16-29.

DeCarlo, L.T. (1997). On the meaning and use of kurtosis. Psychological Methods, 2(3), 292-307.

George, D. & Mallery, P. (2010). SPSS for Windows Step by Step: A Simple Guide and Reference, 17.0 update (10th ed.). Boston: Pearson.

Hair, J., Black, W. C., Babin, B. J. & Anderson, R. E. (2010) Multivariate data analysis (7th ed.). Upper Saddle River, New Jersey: Pearson Educational International.

Johnson, R.A. and Wichern, D.W. (1992). Applied Multivariate Statistical Analysis. 3rd Edition. Prentice-Hall: Englewood Cliffs, New Jersey.

Johnson, R.A. and Wichern, D.W. (2007). Applied Multivariate Statistical Analysis. 6th Edition. Pearson: New Jersey.

Kline, R.B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York: The Guilford Press.

Looney, S.W. (1995). How to use tests for univariate normality to assess multivariate normality. American Statistician, 49(1), 64-70.

Stevens, J.P. (2001). Applied Multivariate Statistics for the Social Sciences. Psychology Press: London.

Streiner, D.L. and Norman, G.R. (1995). Health Measurement Scales. A practical guide to their development and use. 2nd Edition. Oxford Medical Publications, Inc.

Wiedermann, W., Hagmann, M. and von Eye, A. (2015). Significance tests to determine the direction of effects in linear regression models. British Journal of Mathematical and Statistical Psychology, 68, 116-141.
