I’m pretty sure that the Cauchy likelihood, like the other members of the t family, is a scale mixture of normal distributions, with a gamma mixing distribution on the precision (i.e., the inverse of the variance).
EDIT: There’s a paper on this, “Scale Mixtures of Normal Distributions” by Andrews and Mallows (1974), if you want the details.
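This is also easy to check by simulation. A minimal sketch (mine, not from the paper), using the standard Gamma(nu/2, nu/2) mixing distribution on the precision; the Cauchy is the nu = 1 case:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nu = 1.0          # degrees of freedom; nu = 1 is the Cauchy
n = 200_000

# Precision tau ~ Gamma(shape=nu/2, rate=nu/2); numpy takes scale = 1/rate.
tau = rng.gamma(shape=nu / 2, scale=2.0 / nu, size=n)
x = rng.normal(0.0, 1.0 / np.sqrt(tau))

# Sample quantiles of the mixture should match the exact Cauchy quantiles.
for q in (0.5, 0.75, 0.9, 0.95):
    print(q, np.quantile(x, q), stats.cauchy.ppf(q))
```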
Oh, for sure it is. But that only gives it a conditionally conjugate prior, not a fully (i.e., marginally) conjugate prior. That’s great for Gibbs sampling, but not for pen-and-paper computations.
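To make “great for Gibbs sampling” concrete, here is a sketch of my own (under the simplest setup I could think of: Cauchy(mu, 1) data with a N(m0, s0^2) prior on mu, nothing from the thread itself) of the two-block Gibbs sampler that conditional conjugacy buys you:

```python
# Augment each x_i with a latent precision tau_i from the scale-mixture
# representation. The full conditionals are then standard:
#   tau_i | mu, x_i ~ Gamma(shape=1, rate=(1 + (x_i - mu)^2) / 2)
#   mu | tau, x     ~ Normal (conjugate normal-normal update)
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_cauchy(100) + 3.0   # fake data centered at mu = 3
m0, s0 = 0.0, 10.0                   # prior on mu

mu = 0.0
draws = []
for _ in range(5000):
    # tau_i | mu, x_i: Gamma(1, rate) is Exponential(rate)
    rate = (1.0 + (x - mu) ** 2) / 2.0
    tau = rng.exponential(1.0 / rate)        # numpy wants scale = 1/rate
    # mu | tau, x: precision-weighted normal update
    prec = 1.0 / s0**2 + tau.sum()
    mean = (m0 / s0**2 + (tau * x).sum()) / prec
    mu = rng.normal(mean, 1.0 / np.sqrt(prec))
    draws.append(mu)

print(np.mean(draws[1000:]))                 # should be near 3
```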
In the three years since I wrote the grandparent, I’ve found a nice mixture representation for any unimodal symmetric distribution:
Suppose f(x), the pdf of a real-valued X, is unimodal and symmetric around 0. If W is positive-valued with pdf g(w) = -2w f’(w) and U | W ~ Unif(-W, W), then U’s marginal distribution is the same as X’s: the marginal density at u is ∫_{|u|}^∞ g(w)/(2w) dw = ∫_{|u|}^∞ -f’(w) dw = f(u). Proof is by integration-by-parts. ETA: No, wait, it’s direct. Derp. (Integration by parts is still how you check that g integrates to 1.)
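For the Cauchy case, f(x) = 1/(pi(1 + x^2)) gives g(w) = 4w^2/(pi(1 + w^2)^2), and the identity is easy to verify numerically. A quick check I wrote:

```python
import numpy as np
from scipy.integrate import quad

def f(x):  # standard Cauchy density
    return 1.0 / (np.pi * (1.0 + x**2))

def g(w):  # g(w) = -2 w f'(w) for the standard Cauchy
    return 4.0 * w**2 / (np.pi * (1.0 + w**2) ** 2)

print(quad(g, 0.0, np.inf)[0])  # g is a proper density: ≈ 1.0

# Mixing Unif(-w, w) over g should reproduce f pointwise:
# marginal(u) = ∫_{|u|}^∞ g(w) / (2w) dw
for u in (0.0, 0.5, 1.0, 3.0):
    marginal, _ = quad(lambda w: g(w) / (2.0 * w), abs(u), np.inf)
    print(u, marginal, f(u))
```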
I don’t think it would be too hard to convert this width-weighted-mixture-of-uniforms representation to a precision-weighted-mixture-of-normals representation.