Depends on what you mean by “very high.” If you mean >95% I agree with you. If you mean >50% I don’t.
Deep learning hits a wall for decades: <5% chance. I’m being generous here.
Moore’s law comes to a halt: Even if the price of compute stopped falling tomorrow, it would only push my timelines back a few years. (It would help a lot for >20 year timeline scenarios, but it wouldn’t be a silver bullet for them either.)
Anti-tech regulation being sufficiently strong, sufficiently targeted, and happening sufficiently soon that it actually prevents doom: This one I’m more optimistic about, but I still feel like it’s a <10% chance by default.
Alignment turning out to be easy: I’m also somewhat hopeful about this one, but I still give it a <10% chance. (A back-of-the-envelope combination of these numbers is sketched below.)
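To make the arithmetic behind that list explicit: if you assume (loudly, and probably wrongly) that these escape routes are independent, you can compute the chance that at least one of them pans out. A minimal sketch, using the rough numbers above and omitting the Moore’s-law scenario since, as noted, it mostly delays timelines rather than preventing doom:

```python
# Back-of-the-envelope sketch. The probabilities and the independence
# assumption are illustrative guesses taken from the list above, not a model.
escape_routes = {
    "deep learning hits a wall for decades": 0.05,
    "sufficiently strong, targeted, timely regulation": 0.10,
    "alignment turns out to be easy": 0.10,
}

# P(at least one escape route works) = 1 - P(all of them fail)
p_all_fail = 1.0
for p in escape_routes.values():
    p_all_fail *= 1 - p

print(f"P(at least one escape) ~ {1 - p_all_fail:.2f}")  # ~ 0.23
```

Even under the generous independence assumption, the maybes together buy only a ~23% chance of avoiding doom, i.e. something like 77% doom: enough to support the >50% claim, but not >95%.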
Analogy: Suppose it was 2015 and the question we were debating was “Will any humans be killed by poorly programmed self-driving cars?” A much lower-stakes question but analogous in a bunch of ways.
You could trot out a similar list of maybes to argue that the probability is <95%. Maybe deep learning will hit a wall and self-driving cars won’t be built, maybe making them recognize and avoid pedestrians will turn out to be easy, etc. But it would be wrong to conclude that the probability was therefore <50%.
I’m definitely only talking about probabilities >90%. >50% is justifiable without a strong argument for the disjunctivity of doom.
I like the self-driving car analogy, and I do think the probability in 2015 that a self-driving car would ever kill someone was between 50% and 95% (mostly because of a >5% chance that AGI comes before self-driving cars).