It makes no difference. In fact, many-worlds describes a deterministic universe; it just so happens there are different versions of future-you who experience/do different things, so it’s not “deterministic from your viewpoint”.
So I’d like to argue that it makes at least a little difference. When we engage in practical deliberation, when we think about what to do, we are thinking about what is possible and about ourselves as sources of what is possible. No one deliberates about the necessary, or about anything over which we have no control: we don’t deliberate about what the size of the sun should be, or whether or not modus tollens should be valid.
If we realize that the universe is deterministic, then we may still decide that we can deliberate, but we do now qualify this as a matter of ‘viewpoints’ or something like that. So the little difference this makes is in the way we qualify the idea of deliberation.
So do you agree that there is at least this little difference? Perhaps it is inconsequential, but it does mean that we learn something about what it means to deliberate when we learn we are living in a deterministic universe as opposed to one with a bunch of spontaneous free causes running around.
It all adds up to normality. Everything you do when making a decision is something a deterministic agent can do, and a deterministic agent that deliberates well will achieve higher expected value than deterministic agents that deliberate poorly.
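As an illustrative sketch (the agents, payoffs, and “deliberation” rules here are invented for this comment, not taken from any particular post): two fully deterministic agents face the same menu of gambles, and the one whose fixed decision rule maximizes expected value reliably ends up with more, even though neither has any “spontaneous” freedom.

```python
# Two deterministic agents choose among gambles; each gamble is a list of
# (probability, payoff) pairs. Neither agent has any randomness in its
# decision rule -- yet one characteristically does better than the other.

def expected_value(gamble):
    return sum(p * payoff for p, payoff in gamble)

def careful_agent(options):
    # Deliberates well: picks the option with the highest expected value.
    return max(options, key=expected_value)

def careless_agent(options):
    # Deliberates poorly: always grabs the first option it sees.
    return options[0]

options = [
    [(1.0, 1)],              # a certain 1
    [(0.5, 10), (0.5, 0)],   # a coin flip for 10
    [(0.1, 20), (0.9, 0)],   # a long shot for 20
]

print(expected_value(careful_agent(options)))   # 5.0 (the coin flip)
print(expected_value(careless_agent(options)))  # 1.0 (the certain 1)
```

Both runs are entirely determined by the code and inputs, so the difference between the two agents is a fact about their decision rules, not about any freedom from causality.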
You’re getting closer to the sequence of posts that covers this in more detail, so I’ll just say that I endorse what’s said in this sequence.
It all adds up to normality. Everything you do when making a decision is something a deterministic agent can do, and a deterministic agent that deliberates well will achieve higher expected value than deterministic agents that deliberate poorly.
What is normality exactly? It’s not the ideas and intuitions I came to the table with, unless the theory actually proposes to teach me nothing. My question is this: “what do I learn when I learn that the universe is deterministic?” Do I learn anything that has to do with deliberation? One reasonable answer (and one way to explain the normality point) would just be ‘no, it has nothing to do with action.’ But this would strike many people as odd, since we recognize in our deliberation a distinction between future events we can bring about or prevent, and future states we cannot bring about or prevent.
You’re getting closer to the sequence of posts that covers this in more detail, so I’ll just say that I endorse what’s said in this sequence.
I find I have an extremely hard time understanding some of the arguments in that sequence, after several attempts. I would dearly love to have some of it explained in response to my questions. I find this argument in particular to be very confusing:
But have you ever seen the future change from one time to another? Have you wandered by a lamp at exactly 7:02am, and seen that it is OFF; then, a bit later, looked in again on the “the lamp at exactly 7:02am”, and discovered that it is now ON?
Naturally, we often feel like we are “changing the future”. Logging on to your online bank account, you discover that your credit card bill comes due tomorrow, and, for some reason, has not been paid automatically. Imagining the future-by-default—extrapolating out the world as it would be without any further actions—you see the bill not being paid, and interest charges accruing on your credit card. So you pay the bill online. And now, imagining tomorrow, it seems to you that the interest charges will not occur. So at 1:00pm, you imagined a future in which your credit card accrued interest charges, and at 1:02pm, you imagined a future in which it did not. And so your imagination of the future changed, from one time to another.
This argument (which reappears in the ‘timeless control’ article) seems to hang on a very weird idea of ‘changing the future’. No one I have ever talked to believes that they can literally change a future moment from having one property to having another, and that this change is distinct from a change that takes place over an extent of time. I certainly don’t see how anyone could take this as a way to treat the world as undetermined. This seems like very much a strawman view, born from an equivocation on the word ‘change’.
But I expect I am missing something (perhaps something revealed later on in the more technical stage of the article). Can you help me?
I meant that learning the universe is deterministic should not turn one into a fatalist who doesn’t care about making good decisions (which is the intuition that many people have about determinism), because goals and choices mean something even in a deterministic universe. As an analogy, note that all of the agents in my decision theory sequence are deterministic (with one kind-of exception: they can make a deterministic choice to adopt a mixed strategy), but some of them characteristically do better than others.
Regarding the “changing the future” idea, let’s think of what it means in the context of two deterministic computer programs playing chess. It is a fact that only one game actually gets played, but many alternate moves are explored in hypotheticals (within the programs) along the way. When one program decides to make a particular move, it’s not that “the future changed” (since someone with a faster computer could have predicted in advance what moves the programs would make, the future is in that sense fixed), but rather that of all the hypothetical moves it explored, the program chose one according to a particular set of criteria. Other programs would have chosen other moves in those circumstances, which would have led to different games in the end.
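The chess analogy can be made concrete with a toy deterministic game-player (the “game” and its score table below are invented for illustration): the program considers every legal move in hypothesis, but only one line of play ever becomes actual, and anyone running the same code faster would predict its choice exactly.

```python
# A toy deterministic "game": pick a number from the legal moves; each move's
# value is fixed by a score table. The agent explores every hypothetical move
# internally, then deterministically commits to one. Nothing about the future
# "changes" -- the exploration is itself part of the fully determined process
# that produces the move.

SCORES = {1: 4, 2: 9, 3: 7}  # fixed in advance, so the whole episode is determined

def choose_move(legal_moves):
    # The hypotheticals live "on the map": evaluated, compared, then discarded.
    hypotheticals = {m: SCORES[m] for m in legal_moves}
    # Deterministic criterion: highest score, ties broken by the smaller move.
    return max(sorted(hypotheticals), key=hypotheticals.get)

move = choose_move([1, 2, 3])
print(move)  # 2 -- predictable in advance by anyone who can run this code
```

The unchosen moves were never “possible futures that vanished”; they were entries in the program’s internal table, which is the map/territory point made below.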
When you or I are deciding what to do, the different hypothetical options all feel like they’re on an equal basis, because we haven’t figured out what to choose. That doesn’t mean that different possible futures are all real, and that all but one vanish when we make our decision. The hypothetical futures exist on our map, not in the territory; it may be that no version of you anywhere chooses option X, even though you considered it.
but some of them characteristically do better than others.
A fair point, though I would be interested to hear how the algorithms described in DT relate to action (it can’t be that they describe action, since we needn’t act on the output of a DT, especially given that we’re often akratic). When the metaethics sequence, for all the trouble I have with its arguments, gets into an account of free will, I don’t generally find myself in disagreement. I’ve been looking over that and the physics sequences in the last couple of days, and I think I’ve found the point where I need to do some more reading: I think I just don’t believe either that the universe is timeless, or that it’s a block universe. So I should read Barbour’s book.
Thanks, by the way, for posting that DT series, and for answering my questions. Both have been very helpful.
Does that make more sense?
It does, but I find myself, as I said, unable to grant the premise that statements about the future have truth value. I think I do just need to read up on this view of time.
Thanks, by the way, for posting that DT series, and for answering my questions. Both have been very helpful.
You’re welcome!
I would be interested to hear how the algorithms described in DT relate to action (it can’t be that they describe action, since we needn’t act on the output of a DT, especially given that we’re often akratic).
Yeah, a human who consciously endorses a particular decision theory is not the same sort of agent as a simple algorithm that runs that decision theory. But that has more to do with the messy psychology of human beings than with decision theory in its abstract mathematical form.