This reasoning has always struck me as deeply and profoundly silly.
The AI might also be in a computer simulation where the dark lord of the matrix might destroy it for not devoting all of its resources to building cheesecakes. In fact, so could we. I don’t see it influencing our behavior any, nor should it. You’re privileging the hypothesis.
As for the second part, you might also encounter an alien intelligence that you can’t protect yourself from, because you exhausted so many resources leaving humanity alive, slowing down your bootstrapping. That’s the thing about aliens.
If humans are building an AI, testing it before releasing it is just common sense. There’s no obvious reason to destroy it for not building cheesecakes.
It should be noted that this doesn’t mean the AI shouldn’t kill humans. It should just try to figure out a way to tell whether it’s in a simulation before doing anything humans would find suspicious.
I believe James is joking, and this is a fake argument that he knows is not actually persuasive.
Yeah, it’s totally fake. In the real AI Deterrence argument, humans are supposed to leave behind substantial discussion of the topic, along with claims that they’ll use it post-Singularity, which elevates that specific scenario to the point where it’s not privileging the hypothesis for the AI to consider the possibility and leave the humans alone on their planet as it eats the rest of the universe.
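To make the expected-value arithmetic behind that deterrence argument concrete, here is a minimal toy sketch (my own illustration with made-up numbers, not anything from the thread): the AI spares Earth whenever its credence that it’s inside a deterrence simulation exceeds the fraction of the universe’s resources that Earth represents.

```python
# Toy expected-value model of the deterrence argument above.
# All quantities are illustrative assumptions, not claims from the thread.

def should_spare_earth(p_simulation: float, earth_fraction: float) -> bool:
    """Compare the expected payoff of sparing vs. consuming Earth.

    Payoffs are normalized so that controlling the whole (real)
    universe = 1. If the AI is in a deterrence simulation and consumes
    Earth, it gets shut down and receives 0; sparing Earth passes the
    test either way.
    """
    # Spare Earth: forgo Earth's sliver of resources, survive in both worlds.
    ev_spare = 1.0 - earth_fraction
    # Consume Earth: full payoff if the world is real, zero if it's a test.
    ev_consume = (1.0 - p_simulation) * 1.0 + p_simulation * 0.0
    return ev_spare > ev_consume

# Earth is a vanishingly small slice of the universe's resources, so
# even a faint credence in the deterrence scenario tips the decision.
print(should_spare_earth(p_simulation=1e-6, earth_fraction=1e-10))  # True
```

The toy model reduces to: spare Earth iff p_simulation exceeds earth_fraction, which is why elevating the scenario from arbitrary dark-lord hypothesis to publicly documented commitment is doing all the work in the argument.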