You say you are aware of all the relevant LW posts. What about LW comments? Here are two quite insightful ones:
Marcello’s comment about extrapolation, with an interesting short Wei Dai-EY debate below it.
XiXiDu’s recent comment about the context-dependence of preferences.
My most easily articulated problem with CEV is mentioned in this comment, and can be summarized with the following rhetorical question: What if “our wish if we knew more, thought faster, were more the people we wished we were” is to cease existing (or to wirehead)? Can we prove in advance that this is impossible? And if we can’t get such a guarantee, does that mean we should accept wireheading as a possible positive future outcome?
EDIT: Another nice short comment by Wei Dai. It is part of a longer exchange with cousin_it.