Apologies if this question seems naive but I would really appreciate your wisdom.
Is there a reasonable way of applying probability to analogue inference problems?
For example, suppose two substances A and B are measured with a device that produces an analogue value C. Given a history of analogue values, how does one determine the probability that a new reading came from each substance? Unless the analogue values match exactly, how can historical information contribute to the answer without making assumptions about the shape of the probability density function produced by A or B?
If this assumption must be made, how can it be reasonably determined, and, crucially, what events could occur that would lead to it being changed?
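To make the setup concrete, here is a minimal Bayesian sketch in Python. All the numbers are invented, and the Gaussian class-conditional densities fitted to the history are exactly the kind of shape assumption the question is asking about:

```python
import math

# Hypothetical historical analogue readings for each substance (invented numbers).
history = {
    "A": [4.9, 5.1, 5.0, 5.2, 4.8],
    "B": [7.0, 6.8, 7.1, 6.9, 7.2],
}

def gaussian_pdf(x, mu, sigma):
    """Normal density -- the assumed shape of the PDF for each substance."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit(values):
    """Fit mean and (sample) standard deviation to the historical readings."""
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / (len(values) - 1)
    return mu, math.sqrt(var)

def posterior(c, prior=None):
    """P(substance | reading c) via Bayes' rule, equal priors by default."""
    prior = prior or {s: 1 / len(history) for s in history}
    like = {s: gaussian_pdf(c, *fit(v)) for s, v in history.items()}
    z = sum(prior[s] * like[s] for s in history)
    return {s: prior[s] * like[s] / z for s in history}

print(posterior(5.3))  # a reading near A's history strongly favours A
```

The historical values never match the new reading exactly; it is only through the assumed density shape that they bear on the answer at all, which is the crux of the question.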
A real example: the PDF is often modelled as a Gaussian distribution, but more recent approaches tend to use different distributions because of outliers. This seems like the right thing to do, because our visual sense of a distribution can easily identify such points, but is there any more rigorous justification?
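One way to make the justification rigorous is to compare log-densities: a heavy-tailed likelihood such as a Student-t assigns an outlier a far smaller penalty than a Gaussian with the same location and scale, so a single wild reading cannot dominate the fit. A small sketch (the particular parameter values are illustrative):

```python
import math

def log_gaussian(x, mu=0.0, sigma=1.0):
    return -0.5 * math.log(2 * math.pi * sigma**2) - 0.5 * ((x - mu) / sigma) ** 2

def log_student_t(x, nu=3.0, mu=0.0, sigma=1.0):
    # Student-t density with nu degrees of freedom, location mu, scale sigma.
    z = (x - mu) / sigma
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(nu * math.pi) - math.log(sigma))
    return c - (nu + 1) / 2 * math.log(1 + z * z / nu)

# A point six scale-units out costs roughly 19 log-units under the Gaussian
# (quadratic tails) but only about 6 under the t (logarithmic tails),
# so one such point cannot drag the whole fit around.
print(log_gaussian(6.0), log_student_t(6.0))
```

The Gaussian's quadratic log-tails are precisely why a least-squares-style fit chases outliers; the t's much flatter tails formalise the intuition that such points should be discounted.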
Is, in effect, the selection of the underlying model the real challenge of rational decision making, not the inference rules?
Is there a reasonable way of applying probability to analogue inference problems?
Your examples certainly show a grasp of the problem. The solution is first sketched in Chapter 4.6 of Jaynes's Probability Theory: The Logic of Science.
Is, in effect, the selection of the underlying model the real challenge of rational decision making, not the inference rules?
Definitely. Jaynes finishes deriving the inference rules in Chapter 2 and illustrates how to use them in Chapter 3. The remainder of the book deals with “the real challenge”, in particular Chapters 6, 7, 12, 19, and especially 20. In effect, you use Bayesian inference and/or Wald decision theory to choose between underlying models pretty much as you would use them to choose between simple hypotheses. But there are subtleties, to put it mildly. Then again, classical statistics has its subtleties too.
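For a flavour of choosing between underlying models the way you would choose between simple hypotheses, here is a crude sketch (invented data, fixed scale, flat prior on the mean over a grid) that compares a Gaussian and a Student-t model of the same readings by their marginal likelihoods. With one outlier in the data, the log Bayes factor comes out large and positive, favouring the heavy-tailed model:

```python
import math

data = [4.9, 5.1, 5.0, 5.2, 9.0]  # hypothetical readings with one outlier

def log_gauss(x, mu, s=0.2):
    return -0.5 * math.log(2 * math.pi * s * s) - 0.5 * ((x - mu) / s) ** 2

def log_t(x, mu, s=0.2, nu=3.0):
    z = (x - mu) / s
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(nu * math.pi) - math.log(s))
    return c - (nu + 1) / 2 * math.log(1 + z * z / nu)

def log_marginal(loglike):
    """Marginal likelihood, marginalising mu over a grid with a flat prior."""
    grid = [4.0 + 0.01 * i for i in range(601)]  # mu in [4, 10]
    logps = [sum(loglike(x, mu) for x in data) - math.log(len(grid)) for mu in grid]
    m = max(logps)  # log-sum-exp for numerical stability
    return m + math.log(sum(math.exp(p - m) for p in logps))

log_bf = log_marginal(log_t) - log_marginal(log_gauss)
print(f"log Bayes factor (t over Gaussian): {log_bf:.1f}")
```

This is where the subtleties live: the answer depends on the priors over each model's parameters, not just on best-fit likelihoods, which is part of what Jaynes's Chapter 20 works through.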