It’s not obvious to me that the ex ante approach would offer more action-guidance for FAI. Our preferences over acts seem easier to observe than our internal utilities over outcomes. An extrapolation effort might use both kinds of data, of course.
For the moment I was just thinking of the ex ante approach in the context of offering action guidance to humans. The ex post approach can’t offer any direct advice for what to do because an agent that can state its preferences over acts already knows what to do. What I want to do is state how much I value different outcomes and what probability distributions I have over states of affairs, and have a decision theory tell me which action I can take to maximize my expected utility. It seems that Peterson’s ex ante approach is the only approach that can provide this for me.
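The procedure described here — supply utilities over outcomes and a probability distribution over states, and have the decision theory return the expected-utility-maximizing act — can be sketched in a few lines. This is a minimal illustration of the ex ante decision rule, not Peterson's own formalism; all acts, states, outcomes, and numbers are hypothetical.

```python
# Hypothetical probability distribution over states of affairs.
probabilities = {"rain": 0.3, "sun": 0.7}

# outcomes[act][state] names the outcome of performing `act` in `state`.
outcomes = {
    "take_umbrella": {"rain": "dry_but_burdened", "sun": "burdened"},
    "leave_umbrella": {"rain": "soaked", "sun": "unencumbered"},
}

# The agent's stated utilities over outcomes (illustrative numbers).
utilities = {
    "dry_but_burdened": 8,
    "burdened": 6,
    "soaked": 0,
    "unencumbered": 10,
}

def expected_utility(act):
    """Expected utility of an act: sum over states of P(state) * u(outcome)."""
    return sum(
        probabilities[state] * utilities[outcomes[act][state]]
        for state in probabilities
    )

def best_act(acts):
    """The ex ante recommendation: the act with maximal expected utility."""
    return max(acts, key=expected_utility)

print(best_act(outcomes))  # prints "leave_umbrella" (EU 7.0 vs 6.6)
```

The point of the sketch is the direction of inference: preferences over acts are the *output* of the calculation, derived from utilities over outcomes, rather than the primitive input that the axiomatic (ex post) approach starts from.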