It is a mistake to focus too much on the world itself as given, since, given precisely what happened, all (strict) counterfactuals are impossible.
Perfect knowledge about everything is only possible if strict determinism holds. The non-existence of real, as opposed to merely logical, counterfactuals follows trivially from determinism, but determinism is a very non-trivial assumption. It is not known whether the world is deterministic; that is a fact which needs to be established empirically, not surreptitiously assumed in armchair reflection on decision theory.
Perfect knowledge about everything is only possible if strict determinism holds.
What’s “perfect knowledge”? If you know there are two copies of you in identical rooms, with different consequences for the decisions taken there, that seems like a perfect enough description of what’s going on. Determinism seems irrelevant in this sense as well: it’s only superficially similar to what it takes for a decision algorithm to be runnable, so that its results affect things, and that property is not “determinism”, just a setup with some sort of laws of computation.
Perfect knowledge about everything is only possible if strict determinism holds
It’s normally quite trivial to extend results from deterministic situations to probabilistically deterministic situations. But are you concerned about the possible existence of libertarian free will?
The non-existence of real, as opposed to merely logical, counterfactuals follows trivially from determinism, but determinism is a very non-trivial assumption
If we already know what decision you are going to take, we can’t answer questions about what decision is best in a non-trivial sense without constructing a new situation where this knowledge has been erased.
It’s normally quite trivial to extend results from deterministic situations to probabilistically deterministic situations. But are you concerned about the possible existence of libertarian free will?
“Probabilistically deterministic” means “indeterministic” for all purposes relevant to the argument. If you are forced to calculate probabilities because more than one thing can actually happen, then you are in a world with real counterfactuals and an open future, and it is therefore automatically false that counterfactuals are only logical.
Given a deterministic toy world that supports computation, like Game of Life, what’s the right way to implement decision-making agents in that world? Keep in mind that, by the diagonal lemma, you can build a Game of Life world containing a computer that knows an exact compressed description of the whole world, including the computer.
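As an illustration of such a deterministic toy world, here is a minimal sketch of one Game of Life update step in Python (the `life_step` name and the set-of-live-cells representation are just illustrative choices, not anything from the original discussion). The point is that the update rule is a pure function of the state: the same world-state always produces the same successor, so anything embedded in the grid, agents included, evolves identically on every run.

```python
from collections import Counter

def life_step(grid):
    """Apply one Game of Life update to a set of live cells.

    `grid` is a set of (x, y) coordinates of live cells on an unbounded
    board. A live cell survives with 2 or 3 live neighbours; a dead cell
    becomes live with exactly 3. The rule is a pure function of `grid`,
    which is what makes this toy world strictly deterministic.
    """
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in grid
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in grid)
    }

# A "blinker" oscillates with period 2, and does so identically on
# every run: no real counterfactuals exist inside this world.
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker
```

Any decision-making agent one implements in such a world would itself be a pattern of cells updated by this same rule, which is what gives the diagonal-lemma construction its bite.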
To me, this framing shows that determinism vs nondeterminism is irrelevant to decision theory. Chris is right that any satisfactory way to handle counterfactuals should focus on what is known or can be inferred by the agent, not on whether the world is deterministic.
Determinism vs indeterminism is relevant to what you can get out of DT: if you can make real choices, you can grab utility by choosing more optimal futures; if you can’t, you are stuck with predetermined utility. It’s kind of true that you can define a sort of minimal outcome of DT that’s available even in a deterministic universe, but to stick to that is to give up on potential utility, and why would a rational agent want to do that?
Mere determinism leaves you somewhat worse off, because physics can mean you can’t choose a logically possible action that yields more utility. Choosing to adopt the view that DT is about understanding the world, not grabbing utility, also leaves you worse off, even if determinism isn’t true, because resources that go into the project of understanding the world aren’t going into maximising utility. One would expect a businessman to make more money than a scientist.
In what kind of situation would we miss out on potential utility?
In a situation where you are not even trying to make the utility-maximising choice, but only to find out which world you are in.
So like some kind of clone situation where deterministic world views would say both must make the same decision?