Suppose the Illusion of Truth effect and the Ambiguity Effect are each biasing how researchers in AI Safety evaluate one of the options below.
If you had to choose, which bias would more likely apply to which option?
A: Aligning AGI to be safe over the long term is possible in principle.
B: Long-term safe AGI is fundamentally impossible.