If you’re saying we should work on enlightenment before working on AGI x-risk, I disagree.
We may well not have the time.
I am very aware that we may not have time.
But sometimes people will make an argument of the form "unless we figure out X, our attempts at resolving x-risk are really, really doomed."
Lots of different people, for lots of different versions of X, actually:
solving agent foundations,
scaling human enlightenment (so that we stop flailing and making things worse all the time),
building a culture that can talk about the fact that we can't see or talk about conflict (and so our efforts to do good in the world get predictably co-opted by extractive forces),
etc.
I definitely want to know if any of those statements are true, for any particular version of X, even if we don’t have time to do X before the deadline.
Not having time to succeed doesn't have any bearing on what is necessary to succeed.