“But I don’t get the confidence about the unaligned AGI killing off humanity. The probability may be 90%, but it’s not 99.9999% as many seem to imply, including Eliezer.”
I think that 90% is also wildly high, and many other people around think so too. But most of them (with perfectly valid criticisms) do not engage in discussions on LW (with some honourable exceptions, e.g. Robin Hanson a few days ago, but how much attention did that draw?).
I don’t have any definite estimate, just that it’s Too Damn High for the path we are currently on. I don’t think anyone has a good argument for it being lower than 5%, or even 50%, but I wouldn’t be surprised if we survived and in hindsight those were justifiable numbers.
I also don’t think there is any good argument for it being greater than 90%, but this is irrelevant since if you’re making a bet on behalf of humanity with total extinction on one side at anything like those probabilities, you’re a dangerous lunatic who should be locked up.
I would say that AGI is by far the greatest near-term existential risk we face, and that the probability of extinction from AGI seems likely to be greater than the probability of ‘merely’ major civilization damage from many things that are receiving a lot more effort to mitigate.
So to me, our civilization’s priorities look very screwed up—which is to be expected from the first (and therefore likely stupidest) animal that is capable of creating AI.
“I don’t think anyone has a good argument for it being lower than 5%, or even 50%,”
That’s false. There are many, many good arguments. In fact, I would go further: many of the pro-doom arguments are themselves very bad. The problem is that the conversation on LW on this topic is badly biased towards one camp, and that creates a distorted picture on this website. People arguing against doom tend to be downvoted far more easily than people arguing for it. I am not saying it isn’t a relevant problem, or that people shouldn’t work on it, etc.