The problem of counterfactuals is the problem of what we do and should mean when we discuss what “would” have happened “if” something impossible had happened.
...and this bit:
Recall that we seem to need counterfactuals in order to build agents that do useful decision theory—we need to build agents that can think about the consequences of each of their “possible actions”, and can choose the action with best expected-consequences. So we need to know how to compute those counterfactuals.
Agents do not need to calculate what would have happened, if something impossible had happened.
They need to calculate the consequences of their possible actions.
These are all possible, by definition, from the point of view of the agent—who is genuinely uncertain about the action she is going to take. Thus, from her point of view at the time, these scenarios are not “counterfactual”. They do not contradict any facts known to her at the time. Rather they all lie within her cone of uncertainty.
… but nevertheless, all but one are, in fact, logically impossible.
That’s the difference between epistemic and metaphysical possibility. Something can be epistemically possible without being metaphysically possible if one doesn’t know if it’s metaphysically possible or not.
Thanks, that’s exactly what I was trying to say.
According to the MWI, different outcomes of a decision are both epistemic and metaphysical possibilities. At least I think they are; it is hard to say for sure without a definition of what the concept of “metaphysical possibility” refers to.
To explain what I mean: according to the conventional understanding of the MWI, the world splits. There are few pasts and many futures; see, e.g.:
http://www.hedweb.com/everett/everett.htm#do
http://www.hedweb.com/everett/everett.htm#split
Thus, from any starting position, there are many possible futures—and that’s true regardless of what any embedded agents think they know or don’t know about the state of the universe.
What do you mean?
Are you perhaps thinking of a type of classical determinism—that pre-dates the many-worlds perspective...?
I’m thinking of determinism. I don’t know what you mean by “classical” or in what way you think many-worlds is non-”classically”-deterministic (or has any bearing on decision theory).
If all your possible actions are realised in a future multiverse of possibilities, it is not really true that all but one of those actions is “logically impossible” at the point when the decision to act is taken.
Many-worlds doesn’t have a lot to do with decision theory—but it does bear on your statement about paths not taken being “impossible”.
Actually, the way that TDT defines a decision, only one decision is ever logically possible, even under many-worlds. Versions of you that did different things must have effectively computed a different decision-problem.
Worlds can split before a decision—but they can split 1 second before, 1 millisecond before—or while the decision to be made is still being evaluated.
So? Versions of you that choose different strategies must have ended up performing different computations due to splits at whatever time, hence, under TDT, one decision-process still only makes one decision.
If the world splits during the decision process, there is no need for any of the resulting divided worlds to be “logically impossible”.
The idea that all worlds but one are impossible is basically pre-quantum thinking. The basic idea of the MWI is that the world branches—and so there are many future possibilities. Other interpretations reach the same conclusion via other means—usually indeterminacy.
If you flip a quantum coin you may end up with a dead cat and an alive cat in different Everett branches, but that is not what we’re talking about.
What we’re talking about is that if your decision algorithm outputs a different answer, it’s a different algorithm, regardless of where this algorithm is implemented. Same as if you’re getting a result of 5, you’re not calculating 1+1 anymore; you’re doing something else. You may be ignorant of the output of “1+1”, but it’s not mathematically possible for it to be anything other than 2.
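To make the point concrete, here is a toy sketch (the function names and the cooperate/defect framing are illustrative, not from the thread): a deterministic procedure has exactly one output for each input, so anything that "outputs differently" is, extensionally, a different procedure.

```python
def decision_algorithm(observation: str) -> str:
    """A deterministic decision procedure: same input, same output."""
    return "cooperate" if observation == "mirror" else "defect"

# Whatever substrate runs it, the mapping is fixed:
assert decision_algorithm("mirror") == "cooperate"
assert decision_algorithm("noise") == "defect"

# A procedure that returns something else on "mirror" is simply a
# different function, just as a procedure returning 5 is not "1 + 1".
def other_algorithm(observation: str) -> str:
    return "defect"

assert other_algorithm("mirror") != decision_algorithm("mirror")
```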
I can see what you are talking about—but it isn’t what was originally being discussed. To recap, that was whether different actions are logically possible.
That seems like the same question as whether it is logically possible to have different worlds arising from the same initial conditions just before the decision was made—and according to the MWI, that is true: worlds branch.
The actions are the result of different calculations, sure—but the point is that before the decision was made, the world was in one state, and after it was made, it is divided into multiple worlds, with different decision outcomes in different worlds.
I classify that as meaning that multiple actions are possibilities, from a given starting state. The idea that only one path into the future is possible at any instant in time is incorrect. That is what quantum theory teaches—and it isn’t critical which interpretation you pick.
I think you may be confusing the microstate and macrostate here—the microstate may branch every-which-way, but the macrostate, i.e. the computer and its electronic state (or whatever it is the deciding system is), is very highly conserved across branching, and can be considered classically deterministic (the non-conserving paths appear as “thermodynamic” misbehaviour on the macro scale, and are hopefully rare). Since it is this macrostate which represents the decision process, impossible things don’t become possible just because branching is occurring.
For the other perspective see: http://en.wikipedia.org/wiki/Butterfly_effect
Small fluctuations are often rapidly magnified into macroscopic fluctuations.
Computers sometimes contain elements designed to accelerate this process—in the form of entropy generators—which are used to seed random number generators—e.g. see:
http://en.wikipedia.org/wiki/Hardware_random_number_generator#Physical_phenomena_with_quantum-random_properties
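The seeding idea can be sketched with only the Python standard library (a simplified illustration, not a description of any particular hardware design): `os.urandom` draws from the operating system's entropy pool, which on many machines is fed by physical noise sources, and that entropy can then seed an otherwise deterministic generator.

```python
import os
import random

# Draw 16 bytes from the OS entropy pool (fed, on many systems, by
# physical noise sources) and use them to seed a deterministic PRNG.
seed = int.from_bytes(os.urandom(16), "big")
rng = random.Random(seed)

value = rng.random()  # unpredictable across runs, since the seed differs
```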
I don’t think anyone is talking about impossible things becoming possible. The topic is whether considered paths in a decision can be legitimately considered to be possibilities—or whether they are actually impossible.
Counterfactuals don’t need to be about impossible things—and agents do calculate what would have happened, if something different had happened. And it is very hard to know whether it would have been possible for something different to happen.
The problem of counterfactuals is not actually a problem. Goodman’s book is riddled with nonsensical claims.
What can Pearl’s formalism accomplish, that earlier logics could not? As far as I can tell, “Bayes nets” just means that you’re going to make as many conditional-independence assumptions as you can, use an acyclic graph, and ignore time (or use a synchronous clock). But nothing changes about the logic.
I am not sure. I haven’t got much from Pearl so far. I did once try to go through The Art and Science of Cause and Effect—but it was pretty yawn-inducing.
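For what it is worth, the conditional-independence bookkeeping described in the question above is easy to illustrate (toy numbers, hypothetical variables, not from Pearl): for a chain A → B → C, assuming C is independent of A given B lets you store two small conditional tables instead of a full joint, and recover the joint by multiplication.

```python
# Toy chain A -> B -> C with binary variables (made-up numbers).
# The conditional-independence assumption (C independent of A given B)
# lets the joint factor as P(a) * P(b|a) * P(c|b).
p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.3, 1: 0.7}}

def joint(a: int, b: int, c: int) -> float:
    """Joint probability recovered from the factored tables."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Sanity check: the factored joint is a proper distribution.
total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
assert abs(total - 1.0) < 1e-9
```

The saving is in representation (three small tables rather than eight joint entries here, and exponentially more for larger graphs); whether that changes anything about the underlying logic is exactly the question being asked.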
I was replying to this bit in the post:
...and this bit:
It is true that agents do sometimes calculate what would have happened if something in the past had happened a different way—e.g. to help analyse the worth of their decision retrospectively. That is probably not too common, though.