The objection to the standard approach is put nicely in chapter 10 of Peterson (2009):
Strictly speaking, when Bayesians claim that rational decision makers behave ‘as if’ they act from a subjective probability function and a utility function, and maximise subjective expected utility, this is merely meant to prove that the agent’s preferences over alternative acts can be described by a representation theorem and a corresponding uniqueness theorem… Bayesians do not claim that the probability and utility functions constitute genuine reasons for choosing one alternative over another. This seems to indicate that Bayesian decision theory cannot offer decision makers any genuine action guidance...
Briefly put, the objection holds that Bayesians ‘put the cart before the horse’ from the point of view of the deliberating agent. A decision maker who is able to state a complete preference ordering over uncertain prospects, as required by Bayesian theories, already knows what to do. Therefore, Bayesian decision makers do not get any new, action-guiding information from their theories.
...the output of a decision theory based on the Bayesian approach is a (set of) probability and utility function(s) that can be used to describe [a rational agent] as an expected utility maximizer… What Bayesians use as input data to their theories is exactly what a decision theorist would like to obtain as output.
...a Bayesian decision theory has strictly speaking nothing to tell [an agent] about how to behave. However, despite this, Bayesians frequently argue that a representation theorem and its corresponding uniqueness theorem are normatively relevant for a non-ideal agent in indirect ways. Suppose, for instance, that [an agent] has access to some of his preferences over uncertain prospects, but not all, and also assume that he has partial information about his utility and probability functions. Then, the Bayesian representation theorems can, it is sometimes suggested, be put to work to ‘fill the missing gaps’ of a preference ordering, utility function and probability function, by using the initially incomplete information to reason back and forth, thereby making the preference ordering and the functions less incomplete. In this process, some preferences for risky acts might be found to be inconsistent with the initial preference ordering, and for this reason be ruled out as illegitimate.
That said, this indirect use of Bayesian decision theory seems to suffer from at least two weaknesses. First, it is perhaps too optimistic to assume that the decision maker’s initial information is always sufficiently rich to allow him to fill all the gaps in his preference ordering… The second weakness is that even if the initial information happens to be sufficiently rich to fill the gaps, this manoeuvre offers no theoretical justification for the initial preference ordering over risky acts. Why should the initial preferences be retained? …As long as no preference violates the structural constraints stipulated by the Bayesian theory, everything is fine. But is not this view of practical rationality a bit too uninformative?
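To make the ‘fill the missing gaps’ maneuver concrete, here is a toy sketch in Python. This is my own illustration, not a procedure Peterson gives: the outcomes A, B, C and the indifference probability are invented. A missing utility is recovered from a stated indifference via expected-utility consistency, and the filled-in value can then be used to rule out inconsistent preferences.

```python
# Partial utility information: the agent knows only the utilities of
# the best and worst outcomes, normalized to 1 and 0. (Invented example.)
u = {"A": 1.0, "C": 0.0}

# The agent reports indifference between getting B for sure and a
# lottery yielding A with probability 0.8 and C otherwise. Expected
# utility consistency then forces a value for the missing u(B).
p = 0.8
u["B"] = p * u["A"] + (1 - p) * u["C"]  # u(B) = 0.8

# The filled-in value can also rule preferences out as illegitimate:
# a reported preference for C over B would contradict u(B) > u(C).
print(u)  # {'A': 1.0, 'C': 0.0, 'B': 0.8}
```

Peterson’s two weaknesses both bite here: the agent may not have enough reported indifferences to pin down every missing utility, and nothing in the procedure justifies the stated indifference it starts from.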
I don’t currently view the two weaknesses Peterson raises in the last quoted paragraph as necessitating a non-Bayesian decision theory, but the original concern that Bayesian decision theory doesn’t technically offer any action-guidance is serious, and it is the primary motivation for my interest in Peterson’s ex ante approach.
It’s not obvious to me that the ex ante approach would offer more action-guidance for FAI. Our preferences over acts seem easier to observe than our internal utilities over outcomes. An extrapolation effort might use both kinds of data, of course.
For the moment I was just thinking of the ex ante approach in the context of offering action guidance to humans. The ex post approach can’t offer any direct advice about what to do, because an agent that can state its preferences over acts already knows what to do. What I want is to state how much I value different outcomes and what probability distributions I have over states of affairs, and have a decision theory tell me which action maximizes my expected utility. It seems that Peterson’s ex ante approach is the only one that can provide this.
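Here is a minimal sketch, in Python with made-up states, acts, and numbers, of the computation I want from the ex ante approach: stated utilities over outcomes and a probability distribution over states go in, and the expected-utility-maximizing act comes out.

```python
# Credences over states of the world (invented for illustration).
probabilities = {"rain": 0.3, "sun": 0.7}

# Utilities stated directly by the agent for each (act, state) outcome,
# rather than inferred from preferences over acts.
utilities = {
    ("take_umbrella", "rain"): 5.0,
    ("take_umbrella", "sun"): 3.0,
    ("leave_umbrella", "rain"): 0.0,
    ("leave_umbrella", "sun"): 6.0,
}

acts = {act for act, _ in utilities}

def expected_utility(act):
    """Probability-weighted sum of the utilities of an act's outcomes."""
    return sum(probabilities[s] * utilities[(act, s)] for s in probabilities)

# The action-guiding output: which act to take.
best_act = max(acts, key=expected_utility)
print(best_act, expected_utility(best_act))  # leave_umbrella 4.2
```

Note the direction of inference: the utilities and probabilities are inputs here, whereas on the ex post approach they are outputs recovered from a preference ordering the agent must already possess.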