This line of reasoning, of “AGI respecting human autonomy,” has the problem that our choices, undertaken freely (to whatever extent it is possible to say so), can be bad—not because of external circumstances, but because of us being human. It’s like in The Great Divorce: given an omnipotent, omnibenevolent God, would a voluntary hell exist? This is to say: if you believe in respecting human autonomy, then how you live your life now very much matters, because you are now shaping your to-be-satisfied-for-eternity preferences.
Of course, the answer is that “AGI will figure this out somehow,” which is equivalent to saying “I don’t know.” And that, I think, contradicts the argument “If all goes well, it literally doesn’t matter what you do; how you live is essentially up to you from that point on.”
The correct argument is, IMO: “there is huge uncertainty, so you might as well live your life as you do now—but any other choice is pretty much equally defensible.”