Suppose that there are two sampling distributions that satisfy (sorry about the lousy math notation) the proportionality relationship,
Pr1(data | parameter) = k * Pr2(data | parameter)
where k may depend on the data but not on the parameter. Then the same proportionality relationship holds for the prior predictive distributions,
Pr1(data) = Integral { Pr1(data | parameter) Pr(parameter) d(parameter) }
Pr1(data) = Integral { k Pr2(data | parameter) Pr(parameter) d(parameter) }
Pr1(data) = k Integral { Pr2(data | parameter) Pr(parameter) d(parameter) }
Pr1(data) = k Pr2(data)
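To make this step concrete, here is a minimal numerical sketch, assuming a Beta(2, 2) prior and using the binomial / negative-binomial pair mentioned at the end of this comment as the assumed example; it integrates each sampling distribution against the same prior and checks that the two prior predictive values differ only by the data-dependent constant k.

import numpy as np
from scipy import stats
from scipy.integrate import quad

# Assumed concrete example: x = 3 successes in n = 12 Bernoulli(p) trials, observed
# either with n fixed in advance (binomial, Pr1) or by sampling until the x-th
# success (negative binomial, Pr2; scipy counts failures before the x-th success).
n, x = 12, 3
prior = stats.beta(2, 2)  # assumed prior on the success probability p

pr1 = lambda p: stats.binom.pmf(x, n, p)       # Pr1(data | p)
pr2 = lambda p: stats.nbinom.pmf(n - x, x, p)  # Pr2(data | p)

# Prior predictives: integrate likelihood times prior density over p.
pred1, _ = quad(lambda p: pr1(p) * prior.pdf(p), 0, 1)
pred2, _ = quad(lambda p: pr2(p) * prior.pdf(p), 0, 1)

# The ratio k is the same at every p (it depends only on the data),
# and the same k relates the two prior predictives.
k = pr1(0.5) / pr2(0.5)
assert np.allclose([pr1(p) / pr2(p) for p in (0.1, 0.3, 0.7)], k)
assert np.isclose(pred1, k * pred2)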
Now write out Bayes’ theorem:
Pr(parameter | data) = Pr(parameter) Pr1(data | parameter) / Pr1(data)
Pr(parameter | data) = Pr(parameter) k Pr2(data | parameter) / (k Pr2(data))
Pr(parameter | data) = Pr(parameter) Pr2(data | parameter) / Pr2(data)
So it doesn’t matter whether the data were sampled according to Pr1 or Pr2. You can check that the binomial and negative binomial distributions satisfy the proportionality condition by looking them up in Wikipedia.
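If you'd rather check it numerically than look up the formulas, here is a minimal sketch (assuming 3 successes in 12 trials and a Beta(2, 2) prior on a grid): the ratio of the binomial pmf to the negative binomial pmf works out to C(n, x) / C(n - 1, x - 1), which involves only the data, so the normalized posteriors coincide.

import numpy as np
from scipy import stats

# Same assumed example: x = 3 successes in n = 12 Bernoulli(p) trials.
n, x = 12, 3
p = np.linspace(0.01, 0.99, 99)       # grid over the success probability
prior = stats.beta(2, 2).pdf(p)       # assumed Beta(2, 2) prior; any proper prior works

lik1 = stats.binom.pmf(x, n, p)       # number of trials fixed in advance
lik2 = stats.nbinom.pmf(n - x, x, p)  # sample until the x-th success

# The likelihood ratio equals C(n, x) / C(n - 1, x - 1) at every grid point:
# it depends on the data alone, not on p.
ratio = lik1 / lik2
assert np.allclose(ratio, ratio[0])

# Normalizing the grid posteriors cancels that constant, so they are identical.
post1 = lik1 * prior
post1 /= post1.sum()
post2 = lik2 * prior
post2 /= post2.sum()
assert np.allclose(post1, post2)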
Your argument is convincing; I sit corrected.