How do I calculate and interpret conditional probabilities?
Gigerenzer (2002) suggests a way to obtain conditional probabilities using frequencies in a decision tree.
Cortina and Dunlap (1997) give an example evaluating the detection rate of a test (positive/negative result) to detect schizophrenia (disorder).
To do this one fixes the following:
The base rate of schizophrenia in adults (2%)
The test will correctly identify schizophrenia (give a positive result) in 95% of people with schizophrenia
The test will correctly identify normal individuals (give a negative result) in 97% of normal people.
Despite these seemingly high accuracy rates, the low base rate means that a positive result is more likely to come from a normal individual than from someone with schizophrenia, so the test is unreliable as a diagnostic tool.
The frequency tree is a more intuitive way of illustrating the equivalent Bayesian equation:
$$\mbox{P(No disorder | + result)} = \frac{\mbox{P(No disorder)} \times \mbox{P(+ result | No disorder)}}{\mbox{P(No disorder)} \times \mbox{P(+ result | No disorder)} + \mbox{P(Disorder)} \times \mbox{P(+ result | Disorder)}}$$
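As a minimal sketch (not code from any of the sources cited here; the function name and layout are illustrative), the calculation can be checked numerically using the figures fixed above:

```python
# Sketch of the Cortina and Dunlap (1997) schizophrenia example;
# names are illustrative, not taken from the cited sources.

def p_no_disorder_given_positive(base_rate, sensitivity, specificity):
    """P(No disorder | + result) via Bayes' theorem."""
    p_pos_given_disorder = sensitivity      # 0.95: true positive rate
    p_pos_given_healthy = 1 - specificity   # 0.03: false positive rate
    p_pos = (base_rate * p_pos_given_disorder
             + (1 - base_rate) * p_pos_given_healthy)
    return (1 - base_rate) * p_pos_given_healthy / p_pos

# With a 2% base rate, 95% sensitivity and 97% specificity:
print(p_no_disorder_given_positive(0.02, 0.95, 0.97))  # about 0.61
```

With these inputs roughly 61% of positive results come from normal individuals, which is why the test is unreliable despite its apparently high accuracy.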
A talk with subtitles by Peter Donnelly (Oxford), a geneticist, further illustrating aspects of conditional probabilities, is available for viewing here.
Using statistical distributions of likelihoods and priors to obtain posterior distributions
Baguley (2012, pp. 393-395) gives formulae for the posterior mean ($$u_\mbox{post}$$) and variance ($$\sigma_\mbox{post}^2$$) for a normal distribution of form $$N(u, \sigma^2)$$, with an assumed prior distribution of form $$N(u_\mbox{p}, \sigma_\mbox{p}^2)$$ and an obtained likelihood distribution (obtained using sample data) equal to a $$N(\hat{u}_\mbox{lik}, \hat{\sigma}_\mbox{lik}^2)$$. In particular:
$$\sigma_\mbox{post}^2 = \left[ \frac{1}{\hat{\sigma}_\mbox{lik}^2} + \frac{1}{\sigma_\mbox{p}^2} \right]^{-1}$$
$$u_\mbox{post} = \frac{\sigma_\mbox{post}^2}{\hat{\sigma}_\mbox{lik}^2} \hat{u}_\mbox{lik} + \frac{\sigma_\mbox{post}^2}{\sigma_\mbox{p}^2} u_\mbox{p}$$
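These two formulae translate directly into code; the following sketch (function and variable names are illustrative, not Baguley's) applies the precision-weighted update:

```python
# Sketch of the normal-normal updating formulae above.

def normal_posterior(u_p, var_p, u_lik, var_lik):
    """Posterior mean and variance for a normal likelihood with a
    normal prior, combining precisions (inverse variances)."""
    var_post = 1.0 / (1.0 / var_lik + 1.0 / var_p)
    u_post = (var_post / var_lik) * u_lik + (var_post / var_p) * u_p
    return u_post, var_post

# Example: vague prior N(0, 100) combined with a sample estimate N(5, 4)
# pulls the posterior mean only slightly towards the prior.
print(normal_posterior(0.0, 100.0, 5.0, 4.0))  # (about 4.81, about 3.85)
```

Note that the posterior mean is a weighted average of the prior and likelihood means, with the weights determined by their relative precisions.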
Zoltan Dienes also has a comprehensive website featuring a range of on-line Bayesian calculators, including one that will evaluate posterior means and SDs for normal distributions, here.
Baguley also gives references for obtaining posterior distributions for data having a binomial distribution, taking a beta distribution as the prior. The beta prior is conjugate to the binomial likelihood, so the posterior is again a beta distribution; this pairing is commonly referred to as the beta-binomial model (a sketch of the update is given below).
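As a minimal sketch of this conjugate update (a standard result, not taken from Baguley's references; names are illustrative): a Beta(a, b) prior combined with k successes in n binomial trials gives a Beta(a + k, b + n - k) posterior.

```python
# Sketch of the conjugate beta update for binomial data.

def beta_posterior(a, b, k, n):
    """Posterior Beta parameters after observing k successes in n trials."""
    return a + k, b + (n - k)

# Example: uniform Beta(1, 1) prior, 7 successes in 10 trials.
a_post, b_post = beta_posterior(1, 1, 7, 10)
print(a_post, b_post)              # 8 4
print(a_post / (a_post + b_post))  # posterior mean, about 0.67
```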
WinBUGS is freeware for fitting a range of models using simulation (via the Gibbs sampler) and is available from here.
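To give a feel for the kind of simulation WinBUGS automates, here is a minimal Gibbs sampler for the mean and variance of normal data under conjugate priors; this is a sketch under assumed prior settings, not WinBUGS code:

```python
import numpy as np

# Minimal Gibbs sampler for y_i ~ N(mu, sigma_sq) with a N(mu0, tau0_sq)
# prior on mu and an inverse-gamma(a0, b0) prior on sigma_sq. All names
# and prior settings here are illustrative.

def gibbs_normal(y, mu0=0.0, tau0_sq=100.0, a0=1.0, b0=1.0,
                 n_iter=5000, seed=1):
    rng = np.random.default_rng(seed)
    n, ybar = len(y), np.mean(y)
    mu, sigma_sq = ybar, np.var(y)     # start at the sample values
    draws = np.empty((n_iter, 2))
    for i in range(n_iter):
        # mu | sigma_sq, y  ~  normal (precision-weighted, as above)
        v = 1.0 / (1.0 / tau0_sq + n / sigma_sq)
        m = v * (mu0 / tau0_sq + n * ybar / sigma_sq)
        mu = rng.normal(m, np.sqrt(v))
        # sigma_sq | mu, y  ~  inverse-gamma
        a = a0 + n / 2.0
        b = b0 + 0.5 * np.sum((y - mu) ** 2)
        sigma_sq = 1.0 / rng.gamma(a, 1.0 / b)
        draws[i] = mu, sigma_sq
    return draws                        # columns: mu, sigma_sq

y = np.random.default_rng(0).normal(5.0, 2.0, size=50)
samples = gibbs_normal(y)[1000:]        # drop burn-in draws
print(samples.mean(axis=0))             # posterior means of mu, sigma_sq
```

Each iteration draws each parameter from its full conditional distribution given the current values of the others; the resulting draws approximate the joint posterior distribution.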
References
Andrews M and Baguley T (2013) Prior approval: The growth of Bayesian methods in psychology. British Journal of Mathematical and Statistical Psychology 66(1), 1–7. Primer article free on-line to CBSU users.
Baguley T (2012) Serious Stats: A guide to advanced statistics for the behavioral sciences. New York: Palgrave Macmillan.
Cortina JM and Dunlap WP (1997) On the logic and purpose of significance testing. Psychological Methods 2(2), 161-172.
Gelman A and Shalizi CR (2013) Philosophy and the practice of Bayesian statistics. British Journal of Mathematical and Statistical Psychology 66(1), 8–38. Primer article free to access on-line to CBSU users.
Gigerenzer G (2002) Reckoning with risk: learning to live with uncertainty. London: Penguin.
Kruschke JK (2011) Doing Bayesian data analysis: A tutorial with R and BUGS. Academic Press/Elsevier. For further reading: genuinely accessible to beginners, illustrating the use of prior and posterior probabilities in inference for ANOVAs and other regression models.
