I think maybe I derailed the conversation by saying “disassemble”, when really “kill” is all that’s required for the argument to go through. I don’t know what sort of fight you are imagining humans having with nanotech that imposes substantial additional costs on the ASI beyond the part where it needs to build & deploy the nanotech that actually does the “killing” part, but in this world I do not expect there to be a fight. I don’t think it requires being able to immediately achieve all of your goals at zero cost in order for it to be cheap for the ASI to do that, conditional on it having developed that technology.
The additional costs of human resistance don’t need to be high in an absolute sense. These costs only need to be higher than the benefit of killing humans for your argument to fail.
It is likewise very easy for the United States to invade and occupy Costa Rica—but that does not imply that it is rational for the United States to do so, because the benefits of invading Costa Rica are presumably even smaller than the costs of taking such an action, even without much unified resistance from Costa Rica.
What matters for the purpose of this argument is the relative magnitude of costs vs. benefits, not the absolute magnitude of the costs. It is insufficient to argue that the costs of killing humans are small. That fact alone does not imply that it is rational to kill humans, from the perspective of an AI. You need to further argue that the benefits of killing humans are even larger to establish the claim that a misaligned AI should rationally kill us.
To the extent that your statement “I don’t expect there to be a fight” means you don’t think humans can realistically resist in any way that imposes costs on AIs, that’s essentially what I meant to respond to when I talked about the idea of AIs being able to achieve their goals at “zero cost”.
Of course, if you assume that AIs will be able to do whatever they want without any resistance whatsoever from us, then you can conclude that they will be able to achieve any goals they want without needing to compromise with us. If killing humans doesn’t cost anything, then yes, I agree: the benefits of killing humans, however small, will be higher, and thus it will be rational for AIs to kill humans. I am doubting the claim that the cost of killing humans will be literally zero.
Even if this cost is small, it merely needs to be larger than the benefit of killing humans for AIs to rationally avoid killing us.
See Ben’s comment for why the level of nanotech we’re talking about implies a cost of approximately zero.
I would also add: having more energy in the immediate future means more probes sent out faster to more distant parts of the galaxy, a benefit which may be measured in additional star systems colonized before they disappear beyond the lightcone due to the expansion of the universe. So the benefits are not trivial either.