These biases seem very important to keep in mind!
If “AI safety” refers here only to AI alignment, I’d be happy to read about how overconfidence about the difficulty/safety of one’s approach might exacerbate the unilateralist’s curse.