If what you’re saying is no set of beliefs perfectly captures reality then I agree.
If what you’re saying is don’t get attached to your beliefs, I agree. Even rationalists are often anchored to beliefs and preferences much more than is Bayesian optimal. Rationalism provides resistance but not immunity to motivated reasoning.
If you’re saying beliefs can’t be more or less true, or more or less useful, I disagree.
If you’re saying we should work on enlightenment before working on AGI x-risk, I disagree.
We may well not have the time; I am very aware of that.
But sometimes people will make an argument of the form “unless we figure out X, our attempts at resolving x-risk are really, really doomed.”
Lots of different people make this argument, for lots of different versions of X:
solving agent foundations,
scaling human enlightenment (so that we stop flailing and making things worse all the time),
building a culture that can talk about the fact that we can’t see or talk about conflict (and so our efforts to do good in the world get predictably co-opted by extractive forces),
etc.
I definitely want to know if any of those statements are true, for any particular version of X, even if we don’t have time to do X before the deadline.
Not having time doesn’t have any bearing on whether those claims are true.