FAQ/Bayes - CBU statistics Wiki

How do I calculate and interpret conditional probabilities?

Gigerenzer (2002) suggests a way of obtaining conditional probabilities using frequencies in a decision tree. An illustrated example of this method applied to prosthetics (Wininger and Johnson, 2018) is here.

Cortina and Dunlap (1997) give an example evaluating the detection rate of a test (positive/negative result) to detect schizophrenia (disorder).

To do this one fixes the following quantities:

  • The base rate of schizophrenia in adults (2%)

  • The test will correctly identify schizophrenia (give a positive result) in 95% of people with schizophrenia (its sensitivity)

  • The test will correctly identify normal individuals (give a negative result) in 97% of people without schizophrenia (its specificity)

Despite these seemingly high accuracy rates, we can show that the test is unreliable: because the base rate is only 2%, most positive results will come from people who do not have schizophrenia.

The frequency tree is a more intuitive way of illustrating the equivalent Bayesian equation:

$$\mbox{P(No disorder | + result) = }\frac{\mbox{P(No disorder) * P(+ result | No disorder)}}{\mbox{P(No disorder) * P(+ result | No disorder) + P(Disorder) * P(+ result | Disorder)}}$$
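As a sanity check, here is a minimal Python sketch of both calculations. The rates come from the example above, while the cohort size of 10,000 used for the frequency-tree version is just an illustrative assumption.

```python
# Illustrative calculation of P(No disorder | + result) using the rates above,
# first with Gigerenzer-style frequencies in a decision tree, then with Bayes' theorem directly.

base_rate = 0.02      # P(Disorder): 2% of adults
sensitivity = 0.95    # P(+ result | Disorder)
specificity = 0.97    # P(- result | No disorder)

# Frequency (decision-tree) version for a hypothetical cohort of 10,000 adults
cohort = 10_000
with_disorder = cohort * base_rate                        # 200 people
without_disorder = cohort - with_disorder                 # 9,800 people
true_positives = with_disorder * sensitivity              # 190 positive results
false_positives = without_disorder * (1 - specificity)    # 294 positive results

p_no_disorder_given_pos = false_positives / (true_positives + false_positives)
print(f"P(No disorder | + result) via frequencies: {p_no_disorder_given_pos:.3f}")  # ~0.607

# Equivalent calculation straight from Bayes' theorem
numerator = (1 - base_rate) * (1 - specificity)
denominator = numerator + base_rate * sensitivity
print(f"P(No disorder | + result) via Bayes: {numerator / denominator:.3f}")         # ~0.607
```

With these figures about 61% of positive results come from people without schizophrenia (equivalently, P(Disorder | + result) is only about 0.39), which is why the test is unreliable despite its apparently high accuracy.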

A talk (with subtitles) further illustrating aspects of conditional probabilities, given by Peter Donnelly (Oxford), a geneticist, is available for viewing here.

Using statistical distributions of likelihoods and priors to obtain posterior distributions

Baguley (2012, pp. 393-395) gives formulae for the posterior mean ($$\mu_\mbox{post}$$) and variance ($$\sigma^2_\mbox{post}$$) for a normal distribution of form $$N(\mu, \sigma^2)$$, with an assumed prior distribution of form $$N(\mu_\mbox{p}, \sigma^2_\mbox{p})$$ and a likelihood distribution, obtained using the sample data, of form $$N(\mu_\mbox{lik}, \sigma^2_\mbox{lik})$$. In particular

$$\sigma^2_\mbox{post} = \left[ \frac{1}{\sigma^2_\mbox{lik}} + \frac{1}{\sigma^2_\mbox{p}} \right]^{-1}$$

$$\mu_\mbox{post} = \frac{\sigma^2_\mbox{post}}{\sigma^2_\mbox{lik}} \mu_\mbox{lik} + \frac{\sigma^2_\mbox{post}}{\sigma^2_\mbox{p}} \mu_\mbox{p}$$
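As a concrete illustration, here is a minimal Python sketch applying these two formulae; the prior N(100, 15^2) and the sample summary (25 observations, mean 110, SD 20) are purely hypothetical values chosen for the example.

```python
import math

# Hypothetical prior: N(mu_p, sigma_p^2)
mu_p, sigma_p = 100.0, 15.0

# Hypothetical sample data summarising the likelihood: 25 observations with
# sample mean 110 and sample SD 20, so the likelihood for the mean is
# approximately N(mu_lik, sigma_lik^2) with sigma_lik = SD / sqrt(n)
n, sample_mean, sample_sd = 25, 110.0, 20.0
mu_lik, sigma_lik = sample_mean, sample_sd / math.sqrt(n)

# Baguley's formulae: precisions (1/variance) add, and the posterior mean is a
# precision-weighted average of the likelihood and prior means
var_post = 1.0 / (1.0 / sigma_lik**2 + 1.0 / sigma_p**2)
mu_post = (var_post / sigma_lik**2) * mu_lik + (var_post / sigma_p**2) * mu_p

print(f"Posterior mean = {mu_post:.2f}, posterior SD = {math.sqrt(var_post):.2f}")
```

Because the likelihood here carries much more precision (the reciprocal of the variance) than the fairly diffuse prior, the posterior mean (about 109.3) sits close to the sample mean; the formula is simply a precision-weighted average of the prior and likelihood means.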

Zoltan Dienes also has a comprehensive website featuring a range of on-line Bayesian calculators, including one that will evaluate posterior means and standard deviations for Normal distributions, here.

Baguley also gives references for obtaining posterior distributions for binomially distributed data, taking a beta distribution as the prior. Because the beta prior is conjugate to the binomial likelihood, the posterior is also a beta distribution; this combination of prior and likelihood is commonly referred to as the beta-binomial model.
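A minimal sketch of this conjugate update, using an arbitrary Beta(2, 2) prior and hypothetical data of 7 successes in 10 trials:

```python
# Conjugate beta-binomial updating: a Beta(a, b) prior combined with k successes
# out of n binomial trials gives a Beta(a + k, b + n - k) posterior.

a_prior, b_prior = 2, 2        # hypothetical prior: Beta(2, 2), centred on 0.5
k, n = 7, 10                   # hypothetical data: 7 successes in 10 trials

a_post, b_post = a_prior + k, b_prior + (n - k)

posterior_mean = a_post / (a_post + b_post)
print(f"Posterior is Beta({a_post}, {b_post}) with mean {posterior_mean:.3f}")  # Beta(9, 5), mean ~0.643
```

Adding the observed successes and failures to the prior's two parameters is all the updating requires, which is what makes the beta prior so convenient for binomial data.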

WinBUGS is freeware for fitting a range of models using simulation (via the Gibbs sampler) and is available from here.

References

Andrews M and Baguley T (2013) Prior approval: the growth of Bayesian methods in psychology. British Journal of Mathematical and Statistical Psychology 66(1) 1–7. Primer article free on-line to CBSU users.

Baguley T (2012) Serious stats: a guide to advanced statistics for the behavioral sciences. New York: Palgrave Macmillan.

Cortina JM, Dunlap WP (1997) On the logic and purpose of significance testing. Psychological Methods 2(2) 161-172.

Gelman A and Shalizi CR (2013) Philosophy and the practice of Bayesian statistics. British Journal of Mathematical and Statistical Psychology 66(1) 8–38. Primer article free to access on-line to CBSU users.

Gigerenzer G (2002) Reckoning with risk: learning to live with uncertainty. London: Penguin.

Kruschke JK (2011) Doing Bayesian data analysis: a tutorial with R and BUGS. Academic Press/Elsevier. For further reading: genuinely accessible to beginners, illustrating the use of prior and posterior probabilities in inference for ANOVAs and other regression models.

Wininger M and Johnson R (2018) Prosthetic hand signals: how Bayesian inference can decode movement intentions and control the next generation of powered prostheses. Significance 15(4) 30-35.
