Dissolving the Experience Machine Objection
The experience machine objection is often levied against utilitarian theories that hold utility to be a function of observations or brain states. The version of the argument I'm considering here is pared down to its crux, stripping out confounding factors, and goes something like this: imagine you had a machine you could step into that would perfectly simulate your experience of the real world. The objection goes that since most people would feel at least slightly more willing to stay in reality than to enter the machine, there's at least some value to being in the "real" world, and therefore we can't accept any branch of utilitarianism that assumes utility is solely a function of observations or brain states.
I think if you accept the premise that the machine somehow truly simulates reality perfectly and indistinguishably, such that there is absolutely no way of knowing the difference between the simulation and the outside universe, then the simulated universe is essentially isomorphic to reality, and we should be fully indifferent. I'm not sure it even makes sense to say either universe is more "real", since they're literally identical in every way that matters (for the differences we can't observe even in theory, I appeal to Newton's flaming laser sword). Our intuitions here should be closer to those for stepping into an identical parallel universe than for entering a simulation.
However, I think it's not actually possible to build such a perfect experience machine, which would explain our intuition against stepping inside. First, if this machine simulates reality using our knowledge of physics at the time it was built, it's entirely possible that there are huge parts of physics you could never discover inside the machine, since you can never be 100% sure you really know the Theory of Everything. Second, this machine would have to be smaller than the universe in some sense, since it's part of the universe. As a result, the simulation would probably have to cut corners or substantially shrink the simulated universe to compensate.
Both of these limit the possible observations you can have inside the machine, which in turn lets you distinguish between simulation and reality. That makes it perfectly valid to penalize the utility of living inside a simulation by some amount, depending on how strongly you feel about the limitations (and how good the machine is). A penalty doesn't mean other factors can't outweigh it, though. Lots of versions of the objection try to sweeten the deal for the world inside the machine further ("you can experience anything you want" / "you get maximum serotonin" / etc.); this doesn't change the core question of whether our utility function should depend on anything other than observations. If the perks are good enough and you care less about the limitations than the perks, it makes perfect sense to enter the machine; if you care more about the limitations than the perks, it makes perfect sense to stay out.
The crux of the experience machine thought experiment is that even when all else is held constant, we should assign epsilon more utility to whatever is "real", and therefore utility does not depend solely on your observations/brain states. I argue that this epsilon penalty makes sense given the practical limitations of any real experience machine, which is probably what informs our intuitions, and that if you somehow handwaved those limitations away then we truly should be indifferent.
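The shape of the disagreement can be written down compactly. All the symbols below are my own shorthand, not standard notation: $e$ is an agent's full stream of experiences, $U$ its utility, $r$ a flag for whether the world is "real", and $\varepsilon$ the premium the objection places on reality.

```latex
% The experientialist claim: utility is a function of experiences alone.
U = f(e)

% The objection: even with experience streams held identical, reality
% carries an epsilon premium, so U must take a second argument r.
U(e, r_{\text{real}}) = U(e, r_{\text{sim}}) + \varepsilon, \qquad \varepsilon > 0

% The dissolution sketched here: for any physically realizable machine
% the experience streams are NOT identical, so the penalty lives inside
% f after all -- roughly, the chance of running into the machine's
% limitations times how much you care about them.
\varepsilon \approx p_{\text{detect}} \cdot c_{\text{limitations}}
```

On this framing, the argument of the essay is that the second equation only looks like evidence against the first because our intuitions are calibrated to imperfect machines, where the third line is nonzero.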