I don’t think that the ability to simulate without rewarding the simulation is what pushes it over the threshold of “unfair”.
It only seems that way because you’re thinking from the non-simulated agent’s point of view. How do you think you’d feel if you were a simulated agent, and after you made your decision Omega said ‘OK, cheers for solving that complicated puzzle; I’m shutting this reality down now, because you were just a simulation I needed in order to set a problem in another reality’? That sounds pretty unfair to me. Wouldn’t you be saying ‘give me my money, you cheating scum’?
And as has already been pointed out, they’re very different problems. If Omega actually is trustworthy, integrating across all the simulations gives infinite utility to the (simulated) TDT agents and a total of $1,001,000 to the (supposedly non-simulated) CDT agent.
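To make the utility bookkeeping concrete, here is a toy sketch. The assumptions are mine, not spelled out in the thread: each simulated TDT agent one-boxes and, if Omega honours simulated payouts, receives $1,000,000, while the (supposedly real) CDT agent two-boxes and takes both the $1,000,000 and the $1,000.

```python
# Toy payoff bookkeeping for the scenario under discussion.
# Assumption: one-boxing pays the large prize; two-boxing adds the small box.

ONE_BOX_PRIZE = 1_000_000
SMALL_BOX = 1_000

def tdt_total(n_simulations: int) -> int:
    """Summed utility across n rewarded simulated TDT agents."""
    return n_simulations * ONE_BOX_PRIZE

# The CDT agent's total: both boxes, the $1,001,000 figure above.
cdt_total = ONE_BOX_PRIZE + SMALL_BOX
print(cdt_total)  # 1001000

# The simulated TDT agents' summed utility grows without bound
# as the number of (rewarded) simulations increases.
for n in (1, 10, 1000):
    print(tdt_total(n))
```

The point of the sketch is only that the CDT total is fixed while the TDT sum is unbounded in the number of rewarded simulations; whether those simulated payouts should count at all is exactly what the rest of the thread disputes.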
> It only seems that way because you’re thinking from the non-simulated agent’s point of view. How do you think you’d feel if you were a simulated agent, and after you made your decision Omega said ‘OK, cheers for solving that complicated puzzle; I’m shutting this reality down now, because you were just a simulation I needed in order to set a problem in another reality’? That sounds pretty unfair to me. Wouldn’t you be saying ‘give me my money, you cheating scum’?
We were discussing whether it is a “fair” test of the decision theory, not whether it provides a “fair” experience to any people or agents instantiated within the scenario.
> And as has already been pointed out, they’re very different problems. If Omega actually is trustworthy, integrating across all the simulations gives infinite utility to the (simulated) TDT agents and a total of $1,001,000 to the (supposedly non-simulated) CDT agent.
I am aware that they are different problems. That is exactly why the version of the problem in which simulated agents receive utility that the real agent cares about does nothing to address the criticism of TDT that it loses in the version where simulated agents receive no utility. Postulating the former in response to the latter was a failure to apply the Least Convenient Possible World principle.
The complaints about Omega being untrustworthy are weak. Just reformulate the problem so that Omega says to all agents, simulated or otherwise: “You are participating in a game that involves simulated agents, and you may or may not be one of the simulated agents yourself. The agents involved in the game are the following: <describes agents’ roles in the third person>.”
> The complaints about Omega being untrustworthy are weak. Just reformulate the problem so that Omega says to all agents, simulated or otherwise: “You are participating in a game that involves simulated agents, and you may or may not be one of the simulated agents yourself. The agents involved in the game are the following: <describes agents’ roles in the third person>.”
Good point.
That clears up the possibility of summing utility across possible worlds, but it still doesn’t address the fact that the TDT agent is (potentially) being asked to make two decisions while the non-TDT agent is being asked to make only one. That seems to me to make the scenario unfair; it’s what I was trying to get at in the ‘very different problems’ statement.