FAQ/SpssBonferroni - CBU statistics Wiki


Adjusted p Values

When you request a Bonferroni test from SPSS under 'post hoc comparisons', what you get for each pair of means is a p-value ('significance') that is adjusted so that it can be compared directly to .05, assuming .05 is your desired experiment-wise alpha.

For instance, in a three-group experiment, a pairwise comparison (i.e., a t test) that yields a p-value of .016 would be considered significant at the .05 level, because .016 < (.05 / 3). Instead of giving you the actual two-tailed p-value, SPSS multiplies it by 3 in this case and gives you a Bonferroni p of .048 (.016 times 3), which you can see immediately is just under .05 and therefore significant by the Bonferroni test. Put simply, SPSS adjusts the actual p-value by applying the Bonferroni correction backwards. Bland and Altman (1995) illustrate and discuss the Bonferroni method applied to 35 comparisons of means in the pdf file here, which is reproduced on this webpage.

In the general case, without SPSS, you would divide alpha by the total number of possible pairwise comparisons (if conservative enough to use Bonferroni as a post hoc test), and then compare each of your actual (observed, raw, unadjusted) p-values to that shrunken value of alpha. SPSS performs the opposite operation: it multiplies each of your actual p-values by the total number of possible pairs, so each can be compared directly to the experiment-wise alpha.
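As a rough illustration of the equivalence of the two routes (a minimal sketch in Python rather than SPSS, with made-up variable names):

    alpha = 0.05      # desired experiment-wise alpha
    c = 3             # number of pairwise comparisons in a three-group design
    p_raw = 0.016     # observed (unadjusted) two-tailed p-value for one pair

    # Route 1: shrink alpha (the textbook Bonferroni rule).
    route1_significant = p_raw < alpha / c        # 0.016 < 0.0167 -> True

    # Route 2: inflate the p-value (the SPSS-style backwards adjustment).
    p_bonferroni = p_raw * c                      # 0.048
    route2_significant = p_bonferroni < alpha     # 0.048 < 0.05 -> True

    print(p_bonferroni, route1_significant, route2_significant)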

If you follow this recipe, then it can happen that the backwards-corrected p-value is greater than 1! Not a sensible outcome for what is supposed to be a probability value. So SPSS replaces this with 1.000.

What this means is that the raw (unadjusted) p-value is greater than or equal to 1/c, where c is the number of comparisons.
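A short sketch (plain Python, not SPSS code; the helper name is made up) showing the capping rule and the condition under which it kicks in:

    def bonferroni_adjust(p_raw, c):
        """Backwards Bonferroni adjustment, capped at 1 as in SPSS-style output."""
        return min(1.0, p_raw * c)

    c = 3
    for p_raw in (0.016, 0.30, 0.40):
        # The cap is reached exactly when p_raw >= 1/c.
        print(p_raw, p_raw >= 1 / c, bonferroni_adjust(p_raw, c))
    # 0.016 -> 0.048; 0.30 -> 0.90; 0.40 (>= 1/3) -> capped at 1.0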

Sometimes reviewers query this. If you want to quote the SPSS printout, we suggest that you report these p-values in the form "SPSS Bonferroni adjusted p = 0.734" (or whatever the printout says), or state something like "In this section SPSS Bonferroni adjusted p-values are quoted". Alternatively, select Sidak adjusted significance values in SPSS rather than Bonferroni.

The problem of $$p=1.000$$ stems from the collision of two issues.

  • At one end, the standard Bonferroni 'divide by c' correction or, in reverse, 'multiply by c' adjustment is based on the Bonferroni inequality $$p_{\text{experimentwise}} \le 1-(1-p)^c \le cp$$. This inequality becomes increasingly inaccurate for large $$c$$ or large $$p$$ (see the numerical sketch after this list), and when $$p \ge 1/c$$ we get the very uninformative $$p_{\text{adjusted}} \ge 1$$. The SPSS algorithm is to quote an adjusted p-value of 1.000 when the unadjusted p-value is this large.

  • At the other end, SPSS complies with the dominant HybridInferenceModel that is in common currency in research. Instead of following the Neyman-Pearson Hypothesis Testing paradigm, faithfully selecting a value of $$\alpha$$ in advance and then Accepting or Rejecting the result of a significance test by comparing the observed p-value to that preselected criterion value of $$\alpha$$, the researcher wants to take an observed p-value as the starting point, use its smallness as a measure of significance, a la Fisher, and then quote it as if it were a reference value of $$\alpha$$. What the SPSS approach attempts to do is to provide the researcher with an adjusted p-value which can be manipulated as if it were this reference $$\alpha$$.
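A small numerical check of the inequality in the first bullet (an illustrative Python sketch; the values of c and p are arbitrary):

    # Compare the bound 1-(1-p)^c with the Bonferroni bound c*p.
    # They agree closely for small p and small c, but c*p overshoots
    # (and exceeds 1) once p approaches 1/c.
    for c in (3, 10, 35):
        for p in (0.001, 0.01, 0.05, 0.2):
            print(f"c={c:2d} p={p:5.3f}  1-(1-p)^c={1 - (1 - p) ** c:.4f}  c*p={c * p:.3f}")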

Others also give reasons to criticise the use of Bonferroni here. Armstrong (2014) suggests guidelines for when to use Bonferroni.

How do we get round this? The sensible way of patching this up is to use Sidak adjusted significance values. Here we use the formula $$p_{\text{adjusted}} = 1-(1-p_{\text{unadjusted}})^c$$, which can never produce a nonsense estimate of $$p_{\text{adjusted}}$$.
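A minimal sketch of the Sidak adjustment alongside the capped Bonferroni one (hypothetical Python helpers, not the SPSS routines):

    def sidak_adjust(p_raw, c):
        """Sidak-adjusted p-value: 1 - (1 - p)^c, always a valid probability."""
        return 1 - (1 - p_raw) ** c

    def bonferroni_adjust(p_raw, c):
        """Bonferroni-adjusted p-value as reported by SPSS, capped at 1."""
        return min(1.0, p_raw * c)

    for p_raw in (0.016, 0.40):
        print(p_raw, round(sidak_adjust(p_raw, 3), 4), bonferroni_adjust(p_raw, 3))
    # 0.016 -> Sidak 0.0472 vs Bonferroni 0.048; 0.40 -> Sidak 0.784 vs Bonferroni 1.000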

The Sidak formula may also be rewritten as $$p_{\text{unadjusted}} = 1 - \exp(\ln[1-p_{\text{adjusted}}]/c)$$, where exp() is the exponential function. In particular, putting $$p_{\text{adjusted}}$$ equal to 0.05 gives the highest unadjusted p-value that yields a statistically significant result adjusted for the c comparisons. It is very close to the Bonferroni analogue $$p_{\text{adjusted}}/c$$ and may also be evaluated as $$1 - (1 - p_{\text{adjusted}})^{1/c}$$, since

$$p_{\text{unadjusted}} = 1 - \exp(\ln[1-p_{\text{adjusted}}]/c) = 1 - (1 - p_{\text{adjusted}})^{1/c}$$

because

$$\ln[1-p_{\text{adjusted}}]/c = \ln\left[(1 - p_{\text{adjusted}})^{1/c}\right]$$
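The inversion is easy to check numerically (again a sketch in plain Python, not taken from the spreadsheet mentioned below):

    # Highest raw p-value that remains significant at an adjusted 0.05, for c comparisons:
    # Sidak threshold 1-(1-0.05)^(1/c) versus the Bonferroni analogue 0.05/c.
    alpha = 0.05
    for c in (2, 3, 10, 35):
        sidak_threshold = 1 - (1 - alpha) ** (1 / c)
        bonferroni_threshold = alpha / c
        print(f"c={c:2d}  Sidak={sidak_threshold:.6f}  Bonferroni={bonferroni_threshold:.6f}")
    # The two thresholds agree to three or four decimal places, with Sidak slightly larger.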

A spreadsheet is here which works out Sidak and Bonferroni corrections for inputted p-values with two and three comparisons and can be adapted for more (thanks to Fionnuala Murphy for this).

There is some confusion over nomenclature.

References

Armstrong RA (2014) When to use the Bonferroni correction. Ophthalmic & Physiological Optics 34, 502-508.

Bland JM and Altman DG (1995) Multiple significance tests: the Bonferroni method. BMJ 310, 170.