It is very, very difficult not to give a superintelligence any hints about how the physics of our world works.
I wrote a short update to the post which tries to answer this point.
Maybe they notice minor fluctuations in the speed of the simulation based on environmental changes to the hardware
I believe they should have no ability whatsoever to detect fluctuations in the speed of the simulation.
Consider how the world of World of Warcraft appears to an orc inside the game. Can it tell the speed at which the hardware is running the game?
It can’t. What it can do is compare the speeds of different things: how fast an apple falls from a tree versus how fast a bird flies across the sky.
The orc’s inner perception of the flow of time comes from comparing these things (e.g., how fast an apple falls) to how fast its own simulated brain processes information.
If everything is slowed down by a factor of 2 (so you, as a player, see everything at half speed), nothing appears any different to a simulated being within the simulation.
You are absolutely correct: they wouldn’t be able to detect fluctuations in processing speed (unless those fluctuations had an influence on, for instance, the rounding errors in floating-point values).
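As a toy illustration of the point being agreed on (my own sketch, not anything from the original post): a fixed-timestep simulation produces a bit-identical in-world history no matter how slowly the host hardware executes it, so nothing observable from the inside changes. The `host_delay` parameter below stands in for "the hardware running slower".

```python
import time

def fall_trajectory(host_delay=0.0):
    """In-world free fall with a fixed in-world timestep dt.

    host_delay adds real (host) time per step, mimicking slower hardware.
    It never enters the physics, so the recorded history is unaffected.
    """
    g, dt = 9.8, 0.01
    y, v, traj = 100.0, 0.0, []
    while y > 0:
        if host_delay:
            time.sleep(host_delay)  # the "hardware" running more slowly
        v += g * dt
        y -= v * dt
        traj.append(y)
    return traj

# The in-world history is bit-identical whether the host runs fast or slow:
assert fall_trajectory(0.0) == fall_trajectory(0.0005)
```

The caveat from the comment still applies: this invariance holds only because the arithmetic itself is unchanged. If the slowdown somehow altered the floating-point rounding (different hardware, different instruction scheduling), the histories could diverge.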
About update 1: It knows our world very likely has something approximating Newtonian mechanics; that is a lot of information by itself. But more than that, it knows that the real universe is capable of producing intelligent beings that chose this particular world to simulate. From a strictly theoretical point of view that is a crapton of information. I don’t know if the AI would be able to figure out anything useful from it, but I wouldn’t bet the future of humanity on it.
About update 2: That does work, provided it is implemented correctly, but it only works for problems that can be automatically verified by non-AI algorithms.
but more than that, it knows that the real universe is capable of producing intelligent beings that chose this particular world to simulate.
Good point; this undermines a lot of what I wrote in my update 1. For example, I have no idea whether F = m d^3x/dt^3 would result in a world that is capable of producing intelligent beings.
I should at some point produce a version of the above post with this claim, and other questionable parenthetical remarks I made, deleted, or at least flagged as requiring further argumentation. They are not necessary for the larger point: as long as the only thing the superintelligence can do (by definition) is live in a simulated world governed by Newton’s laws, and as long as we don’t interact with it at all except to see an automatically verified answer to a preset question (e.g., factor “111000232342342”), there is nothing it can do to harm us.
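For concreteness, here is a minimal sketch (mine, not from the post) of what “automatically verified by a non-AI algorithm” could look like for the factoring question: a dumb, deterministic check that never interprets or executes anything the AI produces, and whose only output is a boolean.

```python
def verify_factoring(n, factors):
    """Accept the AI's claimed factorization of n only if every claimed
    factor is a nontrivial integer and their product equals n exactly."""
    if len(factors) < 2:
        return False
    if any(not isinstance(f, int) or f <= 1 or f >= n for f in factors):
        return False
    product = 1
    for f in factors:
        product *= f
    return product == n

# Humans only ever see the boolean, never free-form output from the AI:
assert verify_factoring(91, [7, 13])        # correct: 7 * 13 == 91
assert not verify_factoring(91, [6, 15])    # wrong product, rejected
assert not verify_factoring(91, [1, 91])    # trivial factors, rejected
```

A stricter verifier might additionally run a deterministic primality test on each factor if full prime factorization is demanded; the point is only that the entire check is ordinary, auditable code with no AI in the loop.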