I stand by brains-in-vats being relevant in at least some doom scenarios, notwithstanding the slow training. For example, I sometimes have arguments like:
ME: A power-seeking AGI might wipe out human civilization with a super-plague plus drone strikes on the survivors.
THEM: Even if the AGI could do that, it wouldn’t want to, because it wants to survive into the indefinite future, and that’s impossible without having humans around to manufacture chips, mine minerals, run the power grid, etc.
ME: Even if, at first, the AGI merely had access to a few dexterous teleoperated robot bodies and its own grid-isolated solar cell, then once it had wiped out all the humans, it could gradually (over decades) build its way back to industrial civilization.
THEM: Nope. Fabs are too labor-intensive to run, supply, and maintain. The AGI could scavenge existing chips but it could never make new ones. Eventually the scavenge-able chips would all break down and the AGI would be dead. The AGI would know that, and therefore it would never wipe out humanity in the first place.
ME: What about brains-in-vats?!
(I have other possible responses too—I actually wouldn’t concede the claim that nanofab is out of the question—but anyway, this is a context where brains-in-vats are plausibly relevant.)
I presume you’re imagining different argument chains, in which case, yeah, brains-in-vats that need 10 years to train might well not be relevant. :)