Yes, and this isn’t necessarily due to naive ‘no mere simulation could feel so real’-type thinking either. One can make a decent argument that the only known method of making a simulation that can fool modern physics labs would be to simulate the entire planet and much of the surrounding space at the level of quantum mechanics, which is computationally intractable even with mature nanotech. Well, that or have an SI constantly tinkering with everyone’s perceptions to make them think everything looks correct, but then you have to suppose that such entities have nothing more interesting to do with their runtime.
I once described a 240 GHz waveguide-structure radio receiver I had built as a “comprehensive analogue simulation of Maxwell’s equations incorporating realistic assumptions about the conductivity of real materials used in waveguide manufacture.” Although this simulation was insanely accurate, it was much more difficult to a) change its parameters and b) measure or calculate its results than with the more traditional digital simulations of Maxwell’s equations we had available.
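For contrast, the digital kind can be sketched in a dozen lines. Here is a toy 1D FDTD (Yee) update for Maxwell’s equations in normalized units, written in Python with numpy as an assumed stand-in for the solvers we actually had; the point is only that changing a parameter here is a one-line edit rather than re-machining hardware:

```python
import numpy as np

# Toy 1D FDTD (Yee scheme) for Maxwell's equations in normalized units,
# Courant number 0.5. Illustrative sketch only, not any particular solver.
n_cells, n_steps = 200, 500
Ez = np.zeros(n_cells)   # electric field
Hy = np.zeros(n_cells)   # magnetic field
for t in range(n_steps):
    Hy[:-1] += 0.5 * (Ez[1:] - Ez[:-1])           # update H from curl of E
    Ez[1:] += 0.5 * (Hy[1:] - Hy[:-1])            # update E from curl of H
    Ez[100] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
```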
Another possibility would be to build simulated perceivers whose perceptions are systematically distorted in such a way that they fail to notice the gaps in the simulated environment, I suppose. That would not require constant deliberate intervention by an intelligence.
Before you build that, just to practice your skills you can build some code that will take a blurry picture and with extremely high accuracy show what the picture would have looked like had the camera been in focus. That problem would of course be much easier than building a simulation with holes in it and then correcting for the missing information in a way that was actually simpler than fixing the simulation in the first place.
I think you might run into limits, based on considerations of information theory, that constrain both tasks, but if you start with the image reconstruction problem you will save a lot of effort.
Before you build that, just to practice your skills you can build some code that will take a blurry picture and with extremely high accuracy show what the picture would have looked like had the camera been in focus.

This has now been done—to a first approximation, at least.
The problem with that program is that the information was already there. The information may have been scattered in a semi-random pattern, but it was still there to be reorganized. In this hypothetical simulation, there is a lack of information. And while you can undo that kind of scattering to recover the sharp image from the blurred one, you cannot create information from nothing.
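To make the “information was already there” point concrete: when the blur kernel is known, the standard inversion is Wiener deconvolution. A minimal sketch in Python with numpy (assumed here; note that the kernel and noise level are given, which is precisely the extra information a gappy simulation lacks):

```python
import numpy as np

def wiener_deblur(blurred, kernel, noise_power=1e-3):
    """Estimate the sharp image from a blurred one via Wiener
    deconvolution. This works because convolution redistributes
    information rather than destroying it, except at frequencies the
    kernel suppresses to (near) zero, where the noise_power term
    stops us amplifying noise instead of recovering signal."""
    H = np.fft.fft2(kernel, s=blurred.shape)         # kernel spectrum
    B = np.fft.fft2(blurred)                         # image spectrum
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)  # Wiener filter
    return np.real(np.fft.ifft2(G * B))
```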
However, the human brain does have some interesting traits which might make it possible for humans to think they are seeing something without creating all the information such a thing would possess. The neocortex has multiple levels. Lower levels detect things like the absence and presence of light, which higher levels turn into lines and curves, which even higher levels turn into shapes, which eventually get interpreted as a specific face (the brain has clusters of a few hundred neurons responsible for each face we have memorized). All you would have to do to make a human brain think it saw someone would be to stimulate the top few hundred neurons; the bottom ones need not be given any information. Imagine a general telling his troops to move somewhere. Each soldier carries out an action and reports to a superior, who passes a summary to their superior, who passes a summary to theirs, until the general gets one message: “Move going fine.” To fool the general (the human) into thinking the move is going fine (that they are interacting with something), you don’t need to forge the entire chain of events (simulate every quark); you just need to give them the message saying everything is going great (stimulate those few hundred neurons). And then when the person in the Matrix looks closer, the Matrix Lords just forge the lower levels temporarily.
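A toy sketch of the point in Python (emphatically not a model of the neocortex; the block-averaging hierarchy is an illustrative assumption): each level of a summarizing hierarchy throws away most of what is below it, so forging the top costs almost nothing.

```python
import numpy as np

rng = np.random.default_rng(0)

def summarize(layer):
    """Each level keeps only a coarse summary of the level below
    (block averages of four values here; purely illustrative)."""
    return layer.reshape(-1, 4).mean(axis=1)

# Honest path: 64 'photoreceptor' values -> 16 -> 4 -> 1 top-level signal.
raw = rng.random(64)
top_honest = summarize(summarize(summarize(raw)))[0]

# Forged path: write the top-level signal directly. The lower levels are
# never computed, yet the 'general' receives exactly the same message.
top_forged = 0.5  # any plausible value will do; nothing below it exists

print(top_honest, top_forged)
```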
The problem with this is that it does not match the premise “humans simulating old Earth to get information.” It would not be giving the future humans any new information they hadn’t created themselves, because they would have to fake that information; they wouldn’t learn anything. It is possible to fool humans in this way, but the only use would be fooling someone for its own sake, and that would require some serious sadism. So there is a scenario in which humans have the computational power and algorithms to make you live in a simulation you think is real, but no reason to do so.
The original hypothetical was to create a simulated agent that merely fails to notice a gap. New information does not need to be added for this; information from around the gap merely needs to be averaged out to create what appears to be not-a-gap (much as human sight doesn’t have a visible hole in the blind spot).
Now, if the intent were to cover the gap with something specific, then your argument would apply. If, however, the intent is simply to cover the gap with the most easily calculated non-gap data, then it becomes possible to do so. (Note that it may still remain possible, in such circumstances, to discover the gap indirectly.)
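A sketch of what “averaging out the gap” can mean in practice, assuming Python with numpy, a 2D field, and a boolean mask over the gap; this is plain diffusion inpainting, nothing cleverer:

```python
import numpy as np

def fill_gap(field, gap_mask, iterations=500):
    """Cover a gap with the cheapest non-gap data: repeatedly replace
    each gap cell with the average of its four neighbours (diffusion
    inpainting). No new information is invented; the surroundings are
    simply smeared inward, blind-spot style."""
    filled = field.copy()
    filled[gap_mask] = filled[~gap_mask].mean()  # crude initial guess
    for _ in range(iterations):
        neighbours = (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                      np.roll(filled, 1, 1) + np.roll(filled, -1, 1)) / 4
        filled[gap_mask] = neighbours[gap_mask]
    return filled
```

Note that the filled region ends up statistically smoother than the real data around it, which is exactly the kind of anomaly by which the gap could still be discovered indirectly.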
Well, given that the alternative ebrownv was considering was ongoing tinkering during runtime by a superintelligence, it’s not quite clear what my ability to build such code has to do with anything.
There’s also a big difference, even for a superintelligence, between building a systematically deluded observer, building a systematically deluded high-precision observer, and building a guaranteed systematically deluded high-precision observer. I’m not sure more than the former is needed for the scenario ebrownv had in mind.
Sure, it might notice something weird one time in a million, but one can probably count on social forces to prevent such anomalous perceptions from being taken too seriously, especially if one patches the simulation promptly on the rare occasions when that fails.