How long is a piece of string?
If you did live in such a world where everything was based around you, then the controller could allocate resources even more efficiently by monitoring your mind and devoting more computational power to making the people around you believable whenever you get suspicious.
This is basically the model I use to estimate the computational power needed in case I am simulated (I don’t care what substrate my consciousness runs on, as long as I have no access to it). Given that my mind apparently retains only a limited number of bits, I’d guess that an efficient simulation could get away with astronomically fewer resources than would be needed to simulate all the atoms in the universe. Modern physics experiments would put considerable demands on the algorithm’s causal bookkeeping, since quantum effects are apparently measured in light from distant stars, but as long as I don’t perform the experiment, the atoms don’t actually need to be simulated.
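A rough back-of-envelope comparison of the two resource budgets. All figures here are loose illustrative assumptions (estimates of human memory capacity vary over many orders of magnitude, and "bits per atom" is a made-up bookkeeping cost), but the gap is so large that the conclusion is insensitive to them:

```python
import math

# All three figures are rough assumptions for illustration only.
ATOMS_IN_OBSERVABLE_UNIVERSE = 1e80   # commonly cited order of magnitude
BITS_PER_ATOM_STATE = 100             # assumed state description per atom
HUMAN_MEMORY_BITS = 1e15              # generous upper-end estimate

full_simulation_bits = ATOMS_IN_OBSERVABLE_UNIVERSE * BITS_PER_ATOM_STATE
mind_only_bits = HUMAN_MEMORY_BITS

# How many orders of magnitude separate the two budgets.
ratio_orders_of_magnitude = math.log10(full_simulation_bits / mind_only_bits)
print(f"Full simulation needs ~10^{ratio_orders_of_magnitude:.0f} times more bits")
```

Even shifting any of these guesses by ten orders of magnitude barely dents the gap, which is the sense in which a mind-only simulation is "astronomically" cheaper.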
http://lesswrong.com/lw/ii5/baseline_of_my_opinion_on_lw_topics/
As long as the experiment conforms to your expectations, they don’t even need to simulate it. The only way they could get into trouble is if you expect a logical contradiction, they fail to spot it in advance, and you eventually work it out.
“Oh, it turns out that was experimental error.”
At least, now that we have better equipment, the earlier result doesn’t seem to be repeatable.
… which is a fairly common scenario.
That’s exactly what I had in mind, although I did specify that the controller would never simulate anybody besides me to the level required to make them people.
Define “make them people”. Because it sounds like what you’re talking about is just the problem of other minds, with “chatbot” substituted for “zombie”.
I’m surprised that it sounded that way to you. I’ve amended my original post to clarify.