Even if every one of your object-level objections is likely to be right, this wouldn't shift me much on the policies I think we should pursue, because the downside risks from TAI are astronomically large even at small probabilities (unless you discount all future and non-human life to 0). I see Eliezer as making arguments about the worst ways things could go wrong and why it's not guaranteed that they won't go that way. We could get lucky, but we shouldn't count on luck. So even if Eliezer is wrong, he's wrong in ways that, if we adopt policies accounting for his arguments, better protect us from existential catastrophe at the cost of reaching TAI a few decades later, which is a small price to pay against risks that remain enormous even at small probabilities.
I am reasonably sympathetic to this argument, and I agree that the difference between EY’s p(doom) > 50% and my p(doom) of perhaps 5% to 10% doesn’t obviously cash out into major policy differences.
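To make the arithmetic behind this explicit (a minimal sketch; $V$ is a placeholder for the value at stake, not an estimate either of us has made):

$$
\mathbb{E}[\text{loss}] \;=\; p(\text{doom}) \cdot V
$$

If $V$ is taken to be astronomically large, then $0.05\,V$ and $0.5\,V$ are both astronomically large, so the order-of-magnitude disagreement in $p(\text{doom})$ does little work when weighed against a cost on the scale of reaching TAI a few decades later.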
I of course fully agree with EY/Bostrom/others that AI is the dominant risk, that we should be appropriately cautious, etc. This is more about why I find EY's specific classic doom argument uncompelling.
My own doom scenario is somewhat different and more subtle, but mostly beyond the scope of this (fairly quick) summary essay.
You mention here that “of course” you agree that AI is the dominant risk, and that you rate p(doom) somewhere in the 5-10% range.
But that wasn’t at all clear to me from reading the opening to the article.
Eliezer Yudkowsky predicts doom from AI: that humanity faces likely extinction in the near future (years or decades) from a rogue unaligned superintelligent AI system. …
I have evaluated this model in detail and found it substantially incorrect...
As written, that opener suggests to me that you think the overall model of doom being likely is substantially incorrect, not just the details (which I've elided) of the case that it's the default outcome.
I feel it would be very helpful to the reader to ground the article from the outset with the note you've made here: that your argument is with EY's specific doom case, that you retain a significant p(doom), but that it's based on different reasoning.
I agree, and believe it would have been useful if Jacob (the post author) had made this clear in the opening paragraph of the post. I see no point in reading the post if it does not measurably impact my foom/doom timeline probability distribution. I am interested in his doom scenario, however.
I see Eliezer as making arguments about the worst ways things could go wrong and why it’s not guaranteed that they won’t go that way.
Eliezer believes and argues that things go wrong by default, with no way he sees to avoid that. Not just “no guarantee they won’t go wrong”.
It may be that his arguments are sufficient to convince you of “no guarantee they won’t go wrong” but not to convince you of “they go wrong by default, no apparent way to avoid that”. But that’s not what he’s arguing.