On the implicit assumptions (I’m going to add this to the article later, when I think of a better way to put it):
When you assume that the AI needs resources and is going to eat you, that conclusion can be false if any of the following statements (and many others) are false:
The AI is working towards a sufficiently unbounded goal, and needs resources for it. Trivially: an AI that wants to understand the theory of everything may actually accomplish its goal before it gets around to eating you.
More resources actually help it get closer to the goal. This is not necessarily so: for hardware self-improvement, there is speed-of-light lag, and intelligence doesn’t scale that well with volume. When hardware runs at THz speeds, light travels only about 0.3 millimetres per clock cycle, so the speed-of-light lag is a huge problem (see the worked numbers after this list).
The AI assigns no value to anything like human life. Case in point: evolution, as a form of optimizer, doesn’t assign value to any lives, yet it has created another, much more powerful optimizer, the human mind, and now we have Greenpeace trying to sink Japanese whaling vessels, the endangered species list, and so on and so forth.
The AI won’t find a substantially cleverer way to gain the computational (or other) resources it needs. Worrying about this may make as much sense as a caveman worrying that the AI would kill all cavemen because it would obviously want all the mammoth tusks for itself, or would need human bones as structural elements.
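A quick sanity check of the 0.3 mm figure above (a back-of-the-envelope sketch; the 1 THz clock rate is my assumption for the example): distance per cycle = c / f ≈ (3 × 10^8 m/s) / (10^12 Hz) = 3 × 10^-4 m = 0.3 mm. So within a single clock cycle a signal can only reach components roughly a third of a millimetre away, which is why spreading the computation over a large volume runs straight into latency limits.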
Thus, these statements are, logically, assumptions you are implicitly making: you are asserting a huge conjunction. The conjunction fallacy v2.0.