This almost has to be false. I personally think CEV sounds like the best direction I currently know about, but maybe the process of extrapolation has a hidden ‘gotcha’. Hopefully a decision theory that can model self-modifying agents (like our extrapolated selves, perhaps, as well as the AI) will help us figure out what we should be asking. Settling on one approach before then seems premature, and in fact neither the SI nor Eliezer has done so.