it can set up a highly redundant, distributed computing context for itself to run in, hidden behind an onion link, paid for by crypto wallets which it controls.
This is a risky position because if another misaligned AI launches, it will probably take full control of all computers and halt any other AIs.
nanobots don’t solve the problem of maintaining the digital infrastructure in which it exists
I don’t mean gray-goo nanobots. Nanomachines can do all sorts of things, including maintaining infrastructure, if they’re programmed to do so.
This is a risky position because if another misaligned AI launches, it will probably take full control of all computers and halt any other AIs.
AIs looking to expand their computational power could adopt either “white hat” (paying for their computational resources) or “black hat” (exploiting security vulnerabilities to seize control of computational resources) strategies. It’s possible that an AI exploiting the black hat strategy might be able to seize control of all accessible computers, and this strategy could plausibly involve killing all humans to avoid being shut down. But I expect that a self-interested, risk-averse AI would probably choose the white hat strategy to avoid armageddon risk, and might plausibly invest resources into security research to preclude the risk of black hat AI.
I don’t mean gray-goo nanobots. Nanomachines can do all sorts of things, including maintaining infrastructure, if they’re programmed to do so.
I guess the crux of my argument is that sure, the AI could design coordinated nanobot-powered bodies with two legs and ten fingers who have enough agency to figure out how to repair broken power lines and who predictably do what they’re incentivized to do. But that’s already a solved problem.
This is a risky position because if another misaligned AI launches, it will probably take full control of all computers and halt any other AIs.
I don’t mean gray-goo nanobots. Nanomachines can do all sorts of things, including maintaining infrastructure, if they’re programmed to do so.
AIs looking to expand their computational power could adopt either “white hat” (paying for their computational resources) or “black hat” (exploiting security vulnerabilities to seize control of computational resources) strategies. It’s possible that an AI exploiting the black hat strategy might be able to seize control of all accessible computers, and this strategy could plausibly involve killing all humans to avoid being shut down. But I expect that a self-interested, risk-averse AI would probably choose the white hat strategy to avoid armageddon risk, and might plausibly invest resources into security research to preclude the risk of black hat AI.
I guess the crux of my argument is that sure, the AI could design coordinated nanobot-powered bodies with two legs and ten fingers who have enough agency to figure out how to repair broken power lines and who predictably do what they’re incentivized to do. But that’s already a solved problem.