I first bounced off at the calculation in the footnote. This is nonsensical without some extraordinarily powerful assumptions that you don’t even state, let alone argue for.
“5% chance of a localized intelligence explosion”: fine, way lower than I’d put it, but not out of bounds.
“If that happens, about 20% chance of that leading to AI takeover” is arguable, depending on what you mean by “intelligence explosion”. It’s plausible if you think that almost all such “explosions” produce systems only weakly more powerful than humans, but again you don’t state or argue for this.
“Given AI takeover, about 10% chance that leads to ‘doom’” also seems very low.
“So about 0.1% of ‘AI doom’.” Wait, WTF? Did you just multiply those to get an overall chance of AI doom? Are you seriously claiming the only way to get AI doom is via the very first intelligence explosion leading to takeover and doom? How? Why?
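For concreteness, the chain the footnote appears to be computing is (this decomposition is my reading of it, not something you state):

$$P(\text{doom}) = P(\text{explosion}) \cdot P(\text{takeover}\mid\text{explosion}) \cdot P(\text{doom}\mid\text{takeover}) = 0.05 \times 0.20 \times 0.10 = 0.001$$

That product is only the probability of doom arriving via this one specific path, not the overall probability of doom.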
If you were serious about this, you’d consider that a localized intelligence explosion is not the only path to superintelligence. You’d consider that if one intelligence explosion can happen, then more than one can happen, and a 20% chance that any one such event leads to takeover is not the same as the overall probability of AI takeover being 20%. You’d consider that a 10% chance of any given AI takeover causing doom is not the same as the overall probability of doom from AI takeover. You’d consider that a superintelligent AI could cause doom even without actually taking control of the world, e.g. by faithfully giving humans the power to cause their own doom while knowing that doom will result.
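To make that second point concrete with toy numbers (the independence assumption here is mine, purely for illustration): if $n$ separate intelligence explosions each carry your 20% chance of leading to takeover, the chance that at least one of them does is

$$P(\text{any takeover}) = 1 - (1 - 0.2)^n$$

which is already about 49% at $n = 3$ and about 67% at $n = 5$. Treating the per-event 20% as the overall figure only works if you assume exactly one such event ever happens.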
Also consider that in 2011 Yudkowsky was naive and optimistic. His central scenario concerned what can go wrong when humans actually try to contain a potential superintelligence: they limit it to a brain in a box in an isolated location, and he worked on what can go wrong even then. The intervening thirteen years have shown that we’re not likely to even try, which opens up the space of possible avenues to doom even further.
Most of the later conclusions you reach also come without supporting evidence or argument, such as “That’s not where the world is heading. We’re heading to continued gradual progress.” You present this as fact, without any support. How do you know, with perfect certainty or even actionable confidence, that this is how we are going to continue? Why should I believe this assertion?