I agree that we are unlikely to pose any serious threat to an ASI. My disagreement with you comes when we ask why we don’t pose any serious threat. We pose no threat, not because we are easy to control, but because we are easy to eliminate. Imagine you are sitting next to a small campfire, sparking profusely in a very dry forest. You have a firehose in your lap. Is the fire a threat? Not really. You can douse it at any time. Does that mean it couldn’t in theory burn down the forest? No. After all, it is still fire. But you’re not worried, because you control all the variables. An AI in this situation might very well decide to douse the fire instead of tending it.
To bring it back to your original metaphor: For a sloth to pose a threat to the US military at all, it would have to understand that the military exists, and what it would mean to ‘defeat’ the US military. The sloth does not have that baseline understanding. The sloth is not a campfire. It is a pile of wood. Humans have that understanding. Humans are a campfire.
Now maybe the ASI ascends to some ethereal realm in which humans couldn’t harm it, even if given completely free rein for a million years. This would be like a campfire in a steel forest, where even if the flames leave the stone ring, they can spread no further. Maybe the ASI will construct a steel forest, or maybe not. We have no way of knowing.
An ASI could use 1% of its resources to manage the nuisance humans and ‘tend the fire’, or it could use 0.1% of its resources to manage the nuisance humans by ‘dousing’ them. Or it could incidentally replace all the trees with steel, and somehow value s’mores enough that it doesn’t replace the campfire with a steel furnace. This is… not impossible? But I’m not counting on it.
Sorry for the ten thousand edits. I wanted the metaphor to be as strong as I could make it.