Giving precise forecasts would give people who are invested in AI progress a chance to dunk on him and undermine his credibility by pointing out precisely when and how he was wrong, while they neglect the gigantic consequences if they themselves are wrong about continued AI capabilities research. Until the incentives he faces change, I expect his behavior to remain roughly the same.
But then anyone who makes a precise bet could lose out in the same way. I assume you don’t believe that betting in general is wrong, so where does the asymmetry come from? Is Yudkowsky excused from betting because he’s actually right?