I think Eliezer is implying here that timelines may be short, or at least that the left tail is fatter than people want to admit. But I think what Sarah feels compelled to respond to is more the vibe that you have no right to think timelines are long. He’s saying that in order to be confident there will be no strong AI within a few years, you need lots of concrete predictions and probabilities; otherwise you’re just pulling things out of [the air] on request, without a model and without updating on evidence. He’s also implying that recent evidence should update you toward sooner being more likely, rather than toward AGI getting one day later in expectation each day. See in particular his fifth point in response to the conference.
It felt off-putting enough to me that I decided to respond at length here to the associated analysis and logic, even though I fully agree that there is no fire alarm, that we need to act now, that most people don’t have models, and so on.
I don’t have enough knowledge of current ML to offer short-term predictions that are worth anything, which is something I want to try to change. In the meantime, though, I don’t think that means I can’t make meaningful long-term predictions, just that they’ll be worse than they would otherwise be.
My take is that Eliezer is saying that we should be aware of the significant probability that AGI takes us unawares, and also that people don’t tend to think enough about their claims. He’s not saying “be certain that it will be soon,” but rather “any claim that it will almost certainly take centuries is suspect if it cannot be backed up with specific, lower-level difficulty claims expressed through estimated times for certain goals to be reached.” I’m not sure whether this goes against your reading of the post, though.