Suppose you were running a simulation, and it had some problems around object permanence, or colors not being quite constant (colors are surprisingly complicated to calculate since some of them depend on quantum effects), or other weird problems. What might you do to deal with that?
One answer might be to make the intelligences you are simulating ignore the types of errors that your system makes. And it turns out that we are blind to many changes around us!
Or conversely, if you are simulating an intelligence that happens to have change blindness, then you worry a lot less about fidelity in the areas that people mostly miss or ignore anyway.
The point is this: reality seems flawless because your brain assumes it is and ignores cases where it isn't, even when the changes are large, like a completely different person taking over halfway through a conversation, or the numerous continuity errors in movies that almost all bounce right off us. So I don't think you can take amazing glitch-free continuity as evidence that we're not in a simulation, since we may not see the bugs.
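To make that fidelity-skipping idea a little more concrete, here's a toy sketch in Python. Everything in it (the names Region, LazySimulator, attended, and the numbers) is invented purely for illustration, not a claim about how any real simulation would work: the simulator spends full effort only on whatever the simulated observer is attending to, and tolerates drift everywhere else on the assumption that change blindness hides it.

```python
from dataclasses import dataclass
import random

@dataclass
class Region:
    name: str
    detail: str = "coarse"   # "coarse" regions are allowed to drift between frames
    state: float = 0.0       # stand-in for whatever the region actually contains

class LazySimulator:
    """Spends full fidelity only on regions the simulated observer attends to."""
    def __init__(self, regions):
        self.regions = {r.name: r for r in regions}

    def step(self, attended):
        """Advance one frame; unattended regions get a cheap, possibly glitchy update."""
        for name, region in self.regions.items():
            if name in attended:
                region.detail = "full"
                region.state += 0.001                      # careful, consistent update
            else:
                region.detail = "coarse"
                region.state += random.uniform(-0.1, 0.1)  # cheap update that may introduce glitches

sim = LazySimulator([Region("conversation_partner"), Region("background_crowd")])
for _ in range(10):
    sim.step(attended={"conversation_partner"})  # inconsistencies pile up only off-attention
```

Obviously any real simulation would be nothing like this simple; the point is just that an attention-gated level-of-detail scheme makes "don't bother being consistent where nobody is looking" an ordinary engineering trick rather than a strange one.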
Doesn’t that mean you have to do an awful lot of work to design everything in tremendous detail, and also fabricate the back story?
I don’t actually follow—how does change blindness in people relate to how much stuff you have to design?
I assumed you meant that you (as the one running the simulation) had arranged for people to be change-blind, which means that there's no particular reason that you yourself would be change-blind.
So you can’t just make the people copies of yourself, or the world a copy of your own world. You have to design them from scratch, and then put together a whole history for the universe so that their having evolved to be change-blind fits with the supposed past.
On edit: and of course you can’t just let them evolve and assume they’ll be change-blind, unless you have a pretty darned impressive ability to predict how that will come out.
OK, wait, I think I get it. It’s an anthropic thing. You happen to be human, and humans happen to be change-blind, so you take advantage of that to run your simulation, and we observe it because you wouldn’t have run the simulation if you (and therefore we) weren’t change-blind. Is that right?