Obviously this emphasis on CEV is absurd, but I don’t know what the alternatives are. Do you know of any? And what are they?
I’m a fan of the “just solve decision theory and the rest will follow” approach. Some hybrid of “just solve decision theory” and the philosophical intuitions behind CFAI might also do it and might be less likely to spark AGI by accident. And there’s technically the oracle AI option, but I don’t like that one.
And can thinking about CEV be used to generate better alternatives?
Maybe, but it seems to me that the opportunity cost is high. CEV wastes people’s time on “extrapolation algorithms” and thinking about whether preferences sufficiently converge and other problems that generally aren’t on the correct meta level. It also makes people think that AGI requires an ethical solution rather than a make-sure-you-solve-everything-ever-because-this-is-your-only-chance-bucko solution to all philosophy ever.