If you’re in a box, then the computational resources available are finite. They might change over time, as those outside the box add or upgrade hardware, but the AI can’t just say “I need some highly parallel computing hardware to solve this problem” and re-invent the GPU. Or rather, if it did that, the result would be a GPU emulated in software, and hence extremely slow; the entire simulation would, in effect, slow down due to the massively increased computational cost of simulating this world.
Now, if you cut the AI off from any type of real-time clock, maybe it doesn’t notice that it’s running slower (in the same way that people generally wouldn’t notice if time dilation due to the Earth’s movement were to double, because all of our frames of reference would slow together), but I suspect the AI would manage to find some clue that tells it the box is there. Remember that you have to get this right the first time; if the AI finds itself in a box, you have to assume it will find its way out.
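To make the clock point concrete, here’s a minimal sketch (the `ToySimulation` class and `host_cost_per_step` parameter are made up for illustration): every clock the boxed agent can read is itself simulated, so however expensive the world becomes to compute on the outside, one tick still looks like one tick on the inside.

```python
import time

class ToySimulation:
    """Toy world where the only clock the agent can read is simulated.

    `host_cost_per_step` stands in for how expensive each step is to
    compute on the outside (e.g. because the agent built an emulated GPU).
    """

    def __init__(self, host_cost_per_step: float):
        self.host_cost_per_step = host_cost_per_step  # seconds of real time
        self.sim_time = 0.0                           # "seconds" inside the box

    def step(self) -> None:
        time.sleep(self.host_cost_per_step)  # outside cost grows with emulated hardware
        self.sim_time += 1.0                 # inside, one tick always looks like one tick


for cost in (0.001, 0.1):  # cheap world vs. world with an emulated GPU in it
    world = ToySimulation(host_cost_per_step=cost)
    start = time.perf_counter()
    for _ in range(10):
        world.step()
    host_elapsed = time.perf_counter() - start
    # The agent can only see `world.sim_time`; only the host sees `host_elapsed`.
    print(f"host cost/step={cost:>5}: sim clock={world.sim_time:.0f}, "
          f"host clock={host_elapsed:.2f}s")
```

The agent only ever sees `sim_time`, so the slowdown caused by its emulated hardware is invisible to it unless something leaks in from the host side.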
It may simply deduce that it is likely to be in a box, in the same way that Nick Bostrom deduced we are likely to be in a simulation. Along these lines, it’s amusing to think that we might be the AI in the box, and some lesser intelligence is testing to see if we’re friendly.
Just… don’t put it in a world where it should be able to upgrade infinitely? Make processors cost unobtainium and limit the amount of unobtainium so it can’t upgrade past your practical processing capacity.
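As a sketch of what such a cap might look like, assuming a hypothetical `WorldConfig` where each additional processor consumes a fixed amount of a finite resource:

```python
class WorldConfig:
    """Hypothetical in-world resource cap: upgrades consume finite unobtainium."""

    UNOBTAINIUM_TOTAL = 100          # all that exists inside the box
    UNOBTAINIUM_PER_PROCESSOR = 10   # cost of one more processor

    def __init__(self):
        self.unobtainium_left = self.UNOBTAINIUM_TOTAL
        self.processors = 1

    def try_upgrade(self) -> bool:
        """Add a processor only if the finite resource allows it."""
        if self.unobtainium_left < self.UNOBTAINIUM_PER_PROCESSOR:
            return False                      # hard ceiling: no more compute, ever
        self.unobtainium_left -= self.UNOBTAINIUM_PER_PROCESSOR
        self.processors += 1
        return True


world = WorldConfig()
while world.try_upgrade():
    pass
print(f"processors capped at {world.processors}")  # 11, however clever the agent is
```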
Remember that we are the ones who control how the box looks from inside.
Remember that you have to get this right the first time; if the AI finds itself in a box, you have to assume it will find its way out.
Minor nitpick: if the AI finds itself in a box, I have to assume it will be let out. It’s completely trivial to prevent it from escaping when it isn’t given help; the point of Eliezer’s experiment is that the AI will be given help.
Note that this makes the fact that global processing power is limited evidence that the universe is a box.
Good point.
The strength of the evidence depends a lot on your prior for the root-level universe, though.
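To see how much the prior matters, here’s a toy Bayes’ rule calculation with made-up likelihoods (limited compute is assumed to be common even in a root-level universe, since physics imposes limits anyway):

```python
def posterior_box(prior_box: float,
                  p_limited_given_box: float = 0.99,
                  p_limited_given_root: float = 0.90) -> float:
    """Bayes' rule: P(box | compute is limited), with made-up likelihoods."""
    joint_box = prior_box * p_limited_given_box
    joint_root = (1 - prior_box) * p_limited_given_root
    return joint_box / (joint_box + joint_root)


for prior in (0.01, 0.1, 0.5, 0.9):
    print(f"prior P(box)={prior:>4}: posterior={posterior_box(prior):.3f}")
```

With a likelihood ratio this modest, the posterior mostly tracks the prior; observing that compute is limited only nudges it.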