# A note on confidence intervals and statistical significance

If two 95% confidence intervals overlap, this does *not* imply that the two statistics on which they are based (e.g. means, odds ratios) fail to differ at the 5 per cent level. In other words, it is possible for the difference between two statistics to be statistically non-zero while their respective confidence intervals still overlap. This typically happens when the difference between the means is significant, but only moderately so.

It *is* true, however, that if a pair of confidence intervals do not overlap, then the difference between the two statistics is statistically non-zero.
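The first claim can be checked numerically. The sketch below uses assumed, illustrative numbers (two large-sample group means with equal standard errors) and a standard normal z test for their difference; the 95% intervals overlap, yet the test is significant at the 5 per cent level.

```python
from math import erf, sqrt

def normal_p_two_sided(z):
    """Two-sided p-value for a standard normal z statistic."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Illustrative (assumed) values: two group means, equal standard errors,
# large samples so a normal approximation is reasonable.
mean_a, mean_b, se = 10.0, 13.0, 1.0

# 95% confidence interval for each mean
ci_a = (mean_a - 1.96 * se, mean_a + 1.96 * se)   # (8.04, 11.96)
ci_b = (mean_b - 1.96 * se, mean_b + 1.96 * se)   # (11.04, 14.96)
overlap = ci_a[1] > ci_b[0]                        # intervals overlap

# z test for the difference: SE(diff) = sqrt(se^2 + se^2)
z = (mean_b - mean_a) / sqrt(se**2 + se**2)
p = normal_p_two_sided(z)

print(f"CIs overlap: {overlap}, p = {p:.3f}")      # overlapping CIs, yet p < .05
```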

The rationale behind this discrepancy is explained in an article from the Cornell University website.

As a special case, if two independent group means have the same standard error (se), then the standard error of the difference between the two means equals sqrt(se^2 + se^2) = sqrt(2 se^2) = sqrt(2) × se.

Now, if both groups have large sample sizes, the difference between the group means is approximately statistically significant if abs(difference in group means) / (sqrt(2) × se) is greater than 2, i.e. if abs(difference in group means) > 2 × sqrt(2) × se ≈ 2.8 se. Since 2.8 se is greater than 2 se, in this special case of equal standard errors, error bars extending one standard error either side of each mean can fail to overlap even when the difference between the means is not statistically significant (namely whenever the difference lies between 2 se and 2.8 se).
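The special case above can be sketched with an assumed difference of 2.5 se, which falls in the awkward zone: it is below the rough 2.8 se significance threshold, yet the ±1 se error bars around the two means do not overlap.

```python
from math import sqrt

se = 1.0                                # assumed common SE of each group mean
se_diff = sqrt(se**2 + se**2)           # = sqrt(2) * se ≈ 1.41 se
threshold = 2 * se_diff                 # ≈ 2.83 se for rough 5% significance

diff = 2.5 * se                         # assumed difference between the means
significant = abs(diff) > threshold     # 2.5 < 2.83 -> not significant
bars_overlap = abs(diff) < 2 * se       # each +/-1 se bar spans 1 se per side

print(significant, bars_overlap)        # neither significant nor overlapping
```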

The following is taken from a reply to the psych-postgrads mailing list:

It depends slightly on what you want to use the outcomes for. A rather simple way is to visualise the confidence intervals. If the 95% CIs are just touching, p ≈ .01. If they are further away from each other, p < .01. Cumming and Finch (2005) state that if the 95% CI arms overlap by up to half their length (so the end of one 95% CI is halfway between the end of the other one and its mean), p is approximately .05.
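This "rule of eye" can be sketched as a rough lookup; this is only an approximation, and it assumes two independent 95% intervals with roughly equal margins of error and reasonably large samples, as Cumming and Finch describe.

```python
def approx_p_from_cis(ci1, ci2):
    """Rough 'rule of eye' after Cumming & Finch (2005) -- a sketch only.
    Assumes independent 95% CIs with roughly equal arm lengths."""
    (lo1, hi1), (lo2, hi2) = sorted([ci1, ci2])    # lower interval first
    overlap = hi1 - lo2                            # > 0: overlap, < 0: gap
    avg_arm = ((hi1 - lo1) + (hi2 - lo2)) / 4      # average arm (half-width)
    frac = overlap / avg_arm                       # overlap as fraction of an arm
    if frac <= 0:
        return "p <= .01 (roughly)"                # separate or just touching
    if frac <= 0.5:
        return "p between about .01 and .05"       # overlap up to half an arm
    return "p probably > .05"

# Just-touching intervals: ends meet, so p is roughly .01
print(approx_p_from_cis((0, 4), (4, 8)))
# Overlap of exactly half an arm: p is roughly .05
print(approx_p_from_cis((0, 4), (3, 7)))
```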

See the following link for an explanation with some fancy pictures, including a formula for working out the sample size required to estimate a mean to a specified precision.

If you want to know more about Effect Sizes and Confidence Intervals in general, I’d recommend reading “Understanding the New Statistics” by Geoff Cumming.

Altman and Bland (2011) give simple formulae to compute a p-value from a confidence interval.
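Their recipe for a difference reported with a 95% CI is: recover the standard error as (upper − lower) / (2 × 1.96), form z = estimate / SE, and convert z to a two-sided p-value (the paper gives the approximation p ≈ exp(−0.717 z − 0.416 z²)). A minimal sketch, assuming a normally distributed estimate and a 95% interval:

```python
from math import exp

def p_from_ci(est, lower, upper):
    """Approximate two-sided p-value for a difference from its 95% CI,
    following Altman & Bland (2011). Assumes a normal sampling distribution."""
    se = (upper - lower) / (2 * 1.96)      # recover the standard error
    z = abs(est) / se                      # standardised test statistic
    return exp(-0.717 * z - 0.416 * z**2)  # their approximation to the p-value

# Assumed example: estimated difference 3.0 with 95% CI (1.04, 4.96),
# so SE = 1 and z = 3; the true two-sided normal p is about .0027.
print(round(p_from_ci(3.0, 1.04, 4.96), 3))
```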

## References

Altman DG and Bland M (2011) How to obtain the p-value from a confidence interval. *BMJ* **343**:d2304.

Cumming G (2012) *Understanding the New Statistics: Effect sizes, confidence intervals and meta-analyses*. Routledge: New York.

Cumming G and Finch S (2005) Inference by eye: Confidence intervals and how to read pictures of data. *American Psychologist* **60**(2):170-180.

Wolfe R and Hanley J (2002) If we're so different, why do we keep overlapping? When 1 plus 1 doesn't make 2. *CMAJ* **166**:65-66.