Given a deterministic toy world that supports computation, like Game of Life, what’s the right way to implement decision-making agents in that world? Keep in mind that, by the diagonal lemma, you can build a Game of Life world containing a computer that knows an exact compressed description of the whole world, including the computer.
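For concreteness, here is a minimal sketch of the Game of Life update rule in Python (the sparse set-of-live-cells representation and the glider check are my own illustrative choices): the next state is a pure function of the current state, so the toy world is deterministic in exactly the sense at issue.

```python
# Conway's Game of Life as a deterministic world: the next state is a pure
# function of the current state, so identical worlds evolve identically.
from collections import Counter

def step(live):
    """One tick of the Life rule (B3/S23)."""
    # For every cell adjacent to a live cell, count its live neighbours.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next tick iff it has 3 live neighbours,
    # or 2 live neighbours and is already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# Sanity check: a glider reappears translated by (1, 1) every 4 ticks.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
world = glider
for _ in range(4):
    world = step(world)
assert world == {(x + 1, y + 1) for x, y in glider}
```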
To me, this framing shows that determinism vs nondeterminism is irrelevant to decision theory. Chris is right that any satisfactory way to handle counterfactuals should focus on what is known or can be inferred by the agent, not on whether the world is deterministic.
Determinism vs indeterminism is relevant to what you can get out of DT: if you can make real choices, you can grab utility by choosing more optimal futures; if you can't, you are stuck with predetermined utility. It's true that you can define a sort of minimal outcome of DT that's available even in a deterministic universe, but to stick to that is to give up on potential utility, and why would a rational agent want to do that?
In what kind of situation would we miss out on potential utility?
In a situation where you are not even trying to make the utility-maximising choice, but only to find out which world you are in.
So like some kind of clone situation where deterministic world views would say both must make the same decision?
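As a toy illustration of that clone point (the agent, observation, and framing here are hypothetical, not anything proposed above): two byte-identical deterministic programs fed the same observation cannot return different decisions, so the counterfactual in which they diverge picks out no physically realisable world.

```python
# Two byte-identical deterministic agents given the same observation
# cannot decide differently, so "my clone defects while I cooperate"
# is not a world this physics can realise.

def agent(observation: str) -> str:
    """A deterministic policy: same input, same output, in every copy."""
    return "cooperate" if "twin" in observation else "defect"

me, clone = agent, agent  # byte-identical copies of the same program
obs = "one-shot prisoner's dilemma against my twin"
assert me(obs) == clone(obs)  # divergence is logically impossible here
```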
Mere determinism leaves you somewhat worse off, because physics can mean you can't choose a logically possible action that yields more utility. Choosing to adopt the view that DT is about understanding the world, not grabbing utility, also leaves you worse off, even if determinism isn't true, because resources that go into the project of understanding the world aren't going into maximising utility. One would expect a businessman to make more money than a scientist.