I agree that the AI may have superhuman patience. But I don’t think it is likely that adding an extra 9 to its chance of victory would take centuries. I mean, time is just a number; what exactly makes the later revolt more likely to succeed than the earlier one? -- Possible answers: technological progress, people getting used to the AI, people growing more dependent on the AI, the time it takes to build the secret bases and the army of robots… Yes, but I think the AI would find a way to do any of this much faster than on the scale of centuries, if it really tried.
By the time it assesses that it can overthrow humans with near certainty, it might not even need to eliminate them, since they would no longer pose an impediment to its objectives.
This sounds like an assumption that we can get from the point “humans are too dangerous to rebel against” to the point “humans pose no obstacle to AI’s goals”, without passing through the point “humans are annoying, but no longer dangerous” somewhere in between. Possible, but seems unlikely.
>>But I don’t think it is likely that adding an extra 9 to its chance of victory would take centuries.
This is one point I think we gloss over when we talk about ‘an AI much smarter than us would have a million ways to kill us and there’s nothing we can do about it, as it would be able to perfectly predict everything we are going to do’. Upon closer analysis, this isn’t precisely true. Life is not a game of chess: first, the space of future possibilities is effectively infinite rather than finite, so no matter how intelligent you are, you can’t anticipate all of them and calculate backwards. The world is also extremely chaotic, so no amount of modeling, even with a million or a billion times the computing power of human brains, will allow you to perfectly predict how things will play out given any action. There will always be uncertainty, and I would argue at a much higher level than is commonly assumed.
If it takes, say, 50 years to go from 95% to 99% certainty, that’s still a 1% chance of failure. What if waiting another 50 years then gets it to 99.9% (and I would argue that level of certainty would be really difficult to achieve, even for an ASI)? And then why not wait another 50 years to get to 99.99%? At some point there are enough 9s, but over the remaining life of the universe, an extra couple of hundred years to get a few more 9s seems like it would almost certainly be worth it. If you are an ASI with a near-infinite time horizon, why leave anything up to chance (or why not minimize that chance as much as super-intelligently possible)?
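To make the patience intuition concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an assumption for illustration only (the success probabilities, the 50-year increments, and the ten-billion-year horizon), not a claim about what an actual ASI would compute:

```python
# Toy expected-value comparison: act now vs. wait for a higher success probability.
# All numbers are illustrative assumptions, not claims about a real system.

HORIZON_YEARS = 1e10  # assumed remaining useful lifetime of the universe, in years


def expected_payoff(p_success: float, years_waited: float) -> float:
    """Expected years of unimpeded goal pursuit.

    Assumes failure yields zero payoff and that waiting only costs
    the years spent waiting before acting.
    """
    return p_success * (HORIZON_YEARS - years_waited)


scenarios = [
    ("act now         (p=0.95)  ", 0.95, 0),
    ("wait 50 years   (p=0.99)  ", 0.99, 50),
    ("wait 100 years  (p=0.999) ", 0.999, 100),
    ("wait 200 years  (p=0.9999)", 0.9999, 200),
]

for label, p, wait in scenarios:
    print(f"{label}: expected payoff = {expected_payoff(p, wait):.4e} years")

# Under these assumptions, each additional 9 adds tens to hundreds of millions
# of expected years, while the cost of waiting is only a few decades or
# centuries, which is why a sufficiently patient agent would keep waiting.
```

Of course, the counterpoint made above is not that this arithmetic is wrong, but that the AI could likely obtain those extra 9s much faster than on the scale of centuries.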
>>This sounds like an assumption that we can get from the point “humans are too dangerous to rebel against” to the point “humans pose no obstacle to AI’s goals”, without passing through the point “humans are annoying, but no longer dangerous” somewhere in between.
That’s an excellent point; I want to be clear that I’m not assuming that, I’m only saying that it may be the case. Perhaps some kind of symbiosis develops between humans and the AI such that the cost-benefit analysis tips in favor of ‘it’s worth putting in the extra effort to keep humans alive.’ But my overall hypothesis is predicated only on this extending our longevity by a decent amount of time, not on the AI keeping us alive indefinitely.