This raises the question of how the AI could simulate you if its only link to the external world is a text-only terminal. That doesn't seem to be enough data to go on.
It makes for a very scary sci-fi scenario, but I doubt that this situation could actually happen if the AI really is in a box.
Indeed, a similar point seems to apply to the whole anti-boxing argument. Are we really prepared to say that super-intelligence implies being able to extrapolate anything from a tiny number of data points?
It sounds a bit too much like the claim that a sufficiently intelligent being could “make A = ~A” or other such meaninglessness.
Hyperintelligence != magic
Yes, but the AI could take over the world, and given a Singularity, it should be possible to recreate perfect simulations.
So really this example makes more sense if the AI is making a threat about the future.