Indeed, a similar point seems to apply to the whole anti-boxing argument. Are we really prepared to say that super-intelligence implies being able to extrapolate anything from a tiny number of data points?
It sounds a bit too much like the claim that a sufficiently intelligent being could “make A = ~A” or other such meaninglessness.
Hyperintelligence != magic
Yes, but the AI could take over the world, and given a Singularity, it should then be possible to run perfect simulations. So this example makes more sense if the AI is making a threat about the future rather than demonstrating a present capability.