= How do I adjust R-squared for the number of predictors in a model? =

The $$R^2$$ statistic that summarises a multiple regression has a drawback: it never decreases when further predictors are added. An adjusted form of R-squared can decrease when an uninformative predictor is added, which aids model selection by penalising overly parametrised models.

For a sample of size n with p predictors:

$$\text{Adjusted } R^2 = 1 - (1 - R^2)\,\frac{n-1}{n-p-1}$$

The adjusted $$R^2$$ may take negative values; in such a case it is, by convention, reported as zero.

__Example__

A sample of size 10 has an $$R^2$$ of 0.623 with one predictor. A second predictor, probability, is added, which increases $$R^2$$ to 0.635. The adjusted $$R^2$$ values are 1 - (1 - 0.623)(9/8) = 0.576 for the one-predictor model and 1 - (1 - 0.635)(9/7) = 0.531 for the two-predictor model. Although the second predictor raises $$R^2$$ slightly, it reduces the adjusted $$R^2$$, indicating that it adds little explanatory value to the model.

__Reference__

Cohen, J. & Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences. Lawrence Erlbaum, Hillsdale, NJ.
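The worked example above can be reproduced with a few lines of code. The following is a minimal Python sketch, not part of the cited reference; the function name adjusted_r_squared and the truncation of negative values to zero (following the convention described above) are illustrative choices.

```python
def adjusted_r_squared(r_squared, n, p):
    """Adjusted R-squared for n observations and p predictors.

    Negative values are truncated to zero, following the convention
    described above.
    """
    adj = 1.0 - (1.0 - r_squared) * (n - 1) / (n - p - 1)
    return max(adj, 0.0)


if __name__ == "__main__":
    # Worked example: n = 10, R-squared of 0.623 with one predictor
    # and 0.635 with two predictors.
    print(round(adjusted_r_squared(0.623, n=10, p=1), 3))  # 0.576
    print(round(adjusted_r_squared(0.635, n=10, p=2), 3))  # 0.531
```

Running the sketch shows the adjusted value falling from about 0.576 to about 0.531 when the second predictor is added, even though the unadjusted $$R^2$$ rises.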