It seems like the preferences of the AI you build are way more important than its experience (not sure if that’s what you mean).
This is because the AI's preferences are going to have a much larger downstream impact?
I’d agree, but with the caveat that there are plausible futures which don’t involve the creation of hyper-rational AIs with well-defined preferences, but rather artificial life with messy, incomplete, inconsistent preferences and morally valuable experiences. More generally, the future of the light cone could be determined by societal/evolutionary factors rather than by any particular agent or agent-y process.
I found your 2nd paragraph unclear...
the goals happen to overlap enough
Is this referring to the goals of having “AIs that have good preferences” and “AIs that have lots of morally valuable experience”?