It seems to me that your confusion lies in contending that there are two past/present states (HA+A / HB+B) when in fact reality is simply H → S → C. There is one history, one state, and one choice that you will end up making. The idea that there is an HA and an HB and so on is wrong, since that history H has already happened and produced state S.
I guess I invited this interpretation with the phrasing “there are two relevantly-different states of the world I could be in”. But what I meant could be rephrased as “either the propositions ‘HA happened, A is the current state, I will choose CA, FA will happen’ are all true, or the propositions ‘HB happened, B is the current state, I will choose CB, FB will happen’ are all true; the ones that aren’t all true are all false”.
I’m not sure how much that rephrasing would change the rest of your answer, so I won’t spend too much time trying to engage with it until you tell me, but broadly I’m not sure whether you are defending compatibilism or hard determinism. (From context I was expecting the former, but from the text itself I’m not so sure.)
> I’m not sure how much that rephrasing would change the rest of your answer
Well, it makes the confusion more obvious, because now it’s clearer that HA/A and HB/B are complete balderdash. This will be apparent if you try to unpack exactly what the difference between them is, other than your choice. (Specifically, the algorithm used to compute your choice.)
Let’s say I give you a read-only SD card containing some data. You will insert this card into a device that will run some algorithm and output “A” or “B”. The data on the card will not change as a result of the device’s output, nor will the device’s output retroactively cause different data to have been entered on the card! All that will be revealed is the device’s interpretation of that data. To the extent there is any uncertainty about the entire process, it’s simply that the device is a black box—we don’t know what algorithm it uses to make the decision.
So, tl;dr: the choice you make does not reveal anything about the state or history of the world (SD card), except for the part that is your decision algorithm’s implementation. If we draw a box around “the parts of your brain that are involved in this decision”, then you could say that the output choice tells you something about the state and history of those parts of your brain. But even there, there’s no backward causality—it’s again simply resolving your uncertainty about the box, not doing anything to the actual contents, except to the extent that running the decision procedure makes changes to the device’s state.
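The SD-card setup can be sketched in a few lines of Python. The parity rule below is an arbitrary stand-in for the black-box algorithm (we don't know the real one, so any deterministic function will do); the point is that the input is read-only and the output is just the algorithm applied to that input:

```python
def device(card_data):
    # An arbitrary stand-in algorithm, since the device is a black box:
    # output "A" if the data sums to an even number, "B" otherwise.
    return "A" if sum(card_data) % 2 == 0 else "B"

# A tuple models the read-only SD card: it cannot be modified.
card = (3, 1, 4, 1, 5)
output = device(card)

# Running the device changed nothing about the card's data.
assert card == (3, 1, 4, 1, 5)
```

Seeing the output tells you something about the card's contents only to the extent that you already know the algorithm; a surprising output mostly reveals the algorithm itself.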
> broadly I’m not sure whether you are defending compatibilism or hard determinism
As other people have mentioned, rationalists don’t typically think in those terms. There isn’t actually any difference between those two ideas, and there’s really nothing to “defend”. As with myriad other philosophical questions, the question itself is just map-territory confusion or a problem with word definitions.
Human brains have lots of places where it’s easy to slip on logical levels and end up with things that feel like questions or paradoxes when in fact what’s going on is really simple once you put back in the missing terms or expand the definitions properly. (This is often because brains don’t tend to include themselves as part of reality, so this is where the missing definitions can usually be found!)
In the particular case you’ve presented, that tendency manifests in the part where no part of your problem specification explicitly calls out the brain or its decision procedures as components of the process. Once you include those missing pieces, it’s straightforward to see that the only place where hypothetical alternative choices exist is in the decider’s brain, and that no retrocausality is involved.
The parts of reality that do not include your brain are already in some state and already have some history. When you make a decision, you already know what state and history exist for those parts of reality, at least to the extent that the state and history are decision-relevant. What you don’t know is which choice you will make.
You then can imagine CA and CB—i.e., picture them in your brain—as part of running your decision algorithm. Running this algorithm then makes changes to the history and state of your brain—but not to any of the inputs that your brain took in.
Suppose I use the following decision procedure:

1. Make a list of alternatives
2. Give each a score from 1 to 10 and sort the list
3. Flip a coin
4. If it comes up heads, choose the first item
5. If it comes up tails, cross off that item and go back to step 3
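As a sketch (with `score` and `flip` passed in as hypothetical parameters so the procedure is self-contained, and with the last remaining item taken outright so the loop always terminates), the steps above might look like:

```python
import random

def decide(alternatives, score, flip=lambda: random.random() < 0.5):
    # Steps 1-2: list the alternatives, score each, sort best-first.
    ranked = sorted(alternatives, key=score, reverse=True)
    # Steps 3-5: flip a coin; heads takes the top item,
    # tails crosses it off and goes back to the flip.
    while len(ranked) > 1:
        if flip():           # heads: choose the first item
            return ranked[0]
        ranked.pop(0)        # tails: cross it off, flip again
    return ranked[0]         # one item left: take it
```

Nothing here reads or rewrites anything that happened before the procedure started; each step only updates the workspace (the `ranked` list) going forward.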
None of these steps is retrocausal, in the sense of “revealing” or “choosing” anything about the past. As I perform these steps, I am altering H and S of my brain (and workspace) until a decision is arrived at. At no point is there an “A” or “B” here, except in the contents of the list.
Since there is a random element I don’t even know what choice I will make, but the only thing that was “revealed” is my scoring and which way the coin flips went—all of which happened as I went through the process. When I get to the “choice” part, it’s the result of the steps that went before, not something that determines the steps.
This is just an example, of course, but it literally doesn’t matter what your decision procedure is, because it’s still not changing the original inputs of the process. Nothing is retroactively chosen or revealed. Instead, the world-state is being changed by the process of making the decision, in normal forward causality.
As soon as you fully expand your terms to any specific decision procedure, and include your brain as part of the definition of “history” and “state”, the illusion of retrocausality vanishes.
A pair of timelines, showing two possible outcomes, with the decision procedure parenthesized:
H → S → (HA → SA) → CA
H → S → (HB → SB) → CB
The decision procedure operates on history H, state S as its initial input. During the process it will produce a new history and final state, following some path that will result in CA or CB. But CA and CB do not reveal or “choose” anything about the H or S that existed prior to beginning the decision procedure. Instead, the steps go forward in time creating HA or HB as they go along.
It’s as if you said, “isn’t it weird, how if I flip a coin and then go down street A or B accordingly, coming to whichever restaurant is on that street, that the cuisine of the restaurant I arrive at reveals which way my coin flip went?”
No. No. It’s not weird at all! That’s what you should expect to happen! The restaurant you arrived at does not determine the coin flip, the coin flip determines the restaurant.
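The coin-and-restaurant story, made concrete (the street names and cuisines are made up for illustration): inference can run backward while causation runs only forward.

```python
import random

# Hypothetical mapping: which restaurant each coin result leads to.
RESTAURANT = {"heads": "the noodle bar on street A",
              "tails": "the taqueria on street B"}

flip = random.choice(["heads", "tails"])   # the coin flip happens first
dinner = RESTAURANT[flip]                  # the flip determines the restaurant

# Later, knowing where you ate lets you infer the flip,
# but the restaurant never caused the coin to land that way.
inferred = next(f for f, r in RESTAURANT.items() if r == dinner)
assert inferred == flip
```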
As soon as you make the decision procedure a concrete procedure—be it flipping a coin or otherwise—it should hopefully become clear that the choice is the output of the steps taken; the steps taken are not retroactively caused by the output of the process.
The confusion in your original post is that you’re not treating “choice” as a process with steps that produce an output, but rather as something mysterious that happens instantaneously while somehow being outside of reality. If you properly place “choice” as a series of events in normal spacetime, there is no paradox or retrocausality to be had. It’s just normal things happening in the normal order.
LW compatibilism isn’t believing that choice magically happens outside of spacetime while everything else happens deterministically, but rather including your decision procedure as part of “things happening deterministically”.
This is hard to respond to, in part because I don’t recognise my views in your descriptions of them, and most of what you wrote doesn’t have a very obvious-to-me connection to what I wrote. I suspect you’ll take this as further evidence of my confusion, but I think you must have misunderstood me.
> The confusion in your original post is that you’re not treating “choice” as a process with steps that produce an output, but rather as something mysterious that happens instantaneously while somehow being outside of reality.
No I’m not. But I don’t know how to clarify this, because I don’t understand why you think I am. I do think we can narrow down a ‘moment of decision’ if we want to, meaning e.g. the point in time where the agent becomes conscious of which action they will take, or when something that looks to us like a point of no return is reached. But obviously the decision process is a process, and I don’t get why you think I don’t understand or have failed to account for this.
> LW compatibilism isn’t believing that choice magically happens outside of spacetime while everything else happens deterministically, but rather including your decision procedure as part of “things happening deterministically”.
I’m fully aware of that; as far as I know it’s an accurate description of every version of compatibilism, not just ‘LW compatibilism’.
> retrocausal, in the sense of “revealing” or “choosing” anything about the past
How is ‘revealing something about the past’ retrocausal?
> As other people have mentioned, rationalists don’t typically think in those terms. There isn’t actually any difference between those two ideas, and there’s really nothing to “defend”.
There is a difference: the meaning of the words ‘free will’, or in other words the content of the concept ‘free will’. From one angle it’s pure semantics, sure—but it’s not completely boring and pointless, because we’re not in a situation where we all have the exact same set of concepts and are just arguing about which labels to apply to them.
> the only place where hypothetical alternative choices exist is in the decider’s brain
This and other passages make me think you’re still interpreting me as saying that the two possible choices ‘exist’ in reality somewhere, as something other than ideas in brains. But I’m not. They exist in a) my description of two versions of reality that hypothetically (and mutually exclusively) could exist, and b) the thoughts of the chooser, to whom they feel like open possibilities until the choice process is complete. At the beginning of my scenario description I stipulated determinism, so what else could I mean?
> Well, it makes the confusion more obvious, because now it’s clearer that HA/A and HB/B are complete balderdash.
Even with the context of the rest of your comment, I don’t understand what you mean by ‘HA/A and HB/B are complete balderdash’. If there’s something incoherent or contradictory about “either the propositions ‘HA happened, A is the current state, I will choose CA, FA will happen’ are all true, or the propositions ‘HB happened, B is the current state, I will choose CB, FB will happen’ are all true; the ones that aren’t all true are all false”, can you be specific about what it is? Or if the error is somewhere else in my little hypothetical, can you identify it with direct quotes?
Direct quotes:

> Which seems to give me just as much control[4] over the past as I have over the future.

And the footnote:

> whatever I can do to make my world the one with FA in it, I can do to make my world the one with HA in it.
This is only trivially true, in the sense of saying “whatever I can do to arrive at McDonald’s, I can do to make my world the one where I walked in the direction of McDonald’s”. This is ordinary reality and nothing to be “bothered” by—which obviates the original question’s apparent presupposition that something weird is going on.
> If there’s something incoherent or contradictory about “either the propositions ‘HA happened, A is the current state, I will choose CA, FA will happen’ are all true, or the propositions ‘HB happened, B is the current state, I will choose CB, FB will happen’ are all true; the ones that aren’t all true are all false”, can you be specific about what it is?
It’s fine so long as HA/A and HB/B are understood to be the events and states during the actual decision-making process, and not referencing anything before that point, i.e.:
H → S → (HA → A) → CA → FA
H → S → (HB → B) → CB → FB
Think of H as events happening in the world, then written onto a read-only SD card labeled “S”. At this moment, the contents of S are already fixed. S is then fed into a device, which operates on the data and reveals its interpretation of it by outputting the text “A” or “B”. The history of events occurring inside the device will differ according to the content of the SD card, but the content of the card isn’t “revealed” or “chosen” or “controlled” by this process.
> How is ‘revealing something about the past’ retrocausal?
It isn’t; but neither is it actually revealing anything about the past that couldn’t have been ascertained prior to executing the decision procedure or in parallel with it. The decision procedure can only “reveal” the process and results of the decision procedure itself, since that process and result were not present in the history and state of the world before the procedure began.
> I don’t know how to clarify this, because I don’t understand why you think I am. I do think we can narrow down a ‘moment of decision’ if we want to, meaning e.g. the point in time where the agent becomes conscious of which action they will take, or when something that looks to us like a point of no return is reached. But obviously the decision process is a process, and I don’t get why you think I don’t understand or have failed to account for this.
Here is the relevant text from your original post:
> State A: Past events HA have happened, current state of the world is A, I will choose CA, future FA will happen.
> State B: Past events HB have happened, current state of the world is B, I will choose CB, future FB will happen.
These definitions clearly state “I will choose”—i.e., the decision process has not yet begun. But if the decision process hasn’t yet begun, then there is only one world-state, and thus it is meaningless to give that single state two names (HA/A and HB/B).
Before you choose, you can literally examine any aspect of the current world-state that you like and confirm it to your heart’s content. You already know which events have happened and what the state of the world is, so there can’t be two such states, and your choice does not “reveal” anything about the world-state that existed prior to the start of the decision process.
This is why I’m describing HA/A and HB/B in your post as incoherent, and assuming that this description must be based on an instantaneous, outside-reality concept of “choice”, which seems to be the only way the stated model can make any sense (even in its own terms).
In contrast, if you label every point of the timeline as to what is happening, the only logically coherent timeline is H → S → (H[A/B] → A/B) → C[A/B] → F[A/B], where it’s obvious that this is just reality as normal: the decision procedure neither “chooses” nor “reveals” anything about the history of the world prior to the beginning of its execution. (IOW, it can only “reveal” or “choose” or “control” the present and future, not the past.)
But if you were using that interpretation, then your original question appears to have no meaning: what would it mean to be bothered that the restaurant you eat at today will “reveal” which way you flipped the coin you used to decide?