I think a lot of writing/thinking about this topic is needlessly complicated. CDT clearly doesn’t work if its causal model is wrong; I don’t get why there’s any controversy about that. Further, it’s incredibly misleading to use the word “control” when you mean “correlated”. In cases of constrained behavior (by superior modeling or simulation of the perception/decision mechanism), that’s not actually a free cause—the “choice” is actually caused by some upstream event or state of the universe.
You can describe the same thing at two levels of abstraction: “I moved the bishop to threaten my opponent’s queen” vs “I moved the bishop because all the particles and fields in the universe continued to follow their orderly motions according to the fundamental laws of physics, and the result was that I moved the bishop”. The levels are both valid, but it’s easy to spout obvious nonsense by mixing them up: “Why is the chess algorithm analyzing all those positions? So much work and wasted electricity, when the answer is predetermined!!!!” :-P
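To make the chess point concrete, here’s a minimal sketch (a hypothetical toy chooser of my own, not a real engine): the output of best_move is fully determined by its inputs, yet the only way the universe “finds” that predetermined answer is by running the search.

```python
# Toy sketch (hypothetical, not a real chess engine): a fully deterministic
# chooser whose answer is fixed by its inputs -- and yet the search below
# is exactly the "work" that produces that predetermined answer.
def evaluate(state: int) -> int:
    # stand-in for a position evaluator; closer to 7 is better
    return -abs(state - 7)

def best_move(state: int, moves: list[int]) -> int:
    # deterministic deliberation: simulate each candidate move, score the
    # resulting state, and pick the argmax; skip this work and you simply
    # don't get the answer, predetermined or not
    return max(moves, key=lambda m: evaluate(state + m))

print(best_move(3, [1, 2, 4, 8]))  # prints 4, since 3 + 4 == 7
```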
Anyway, I think (maybe) your comment mixes up these two levels. When we say “control”, we’re talking about a thing that only makes sense at the higher (algorithm) level. When we say “the state of the universe is a constraint”, that only makes sense at the lower (physics) level.
For the identical twins, I think if we want to be at the higher (algorithm) level, we can say (as in Vladimir_Nesov’s comment) that “control” is exerted by “the algorithm”, and “the algorithm” is some otherworldly abstraction on a different plane of existence, and “the algorithm” has two physical instantiations, and when “the algorithm” exerts “control”, it controls both of the two physical instantiations.
Or, alternatively, you can say that “control” is exerted by a physical instantiation of the algorithm, but that “control” can influence things in the past etc.
(Not too confident about any of this, sorry if I’m confused.)
Yup, there are different levels of abstraction to model/predict/organize our understanding of the universe. These are not exclusive, nor independent, though—there’s only one actual universe! Mixing (or more commonly switching among) levels is not a problem if all the levels are correct on the aspects being predicted. The lower levels are impossibly hard to calculate, but they’re what actually happens. The higher levels are more accessible, but sometimes wrong. When you get contradictory results at a high level, you know something’s wrong, and have to look at the lower levels (and when this isn’t possible, as it so often isn’t, you kind of have to guess, but you need to be clear that you’re guessing and that your model is being used outside its validity domain).
This is relevant when talking about “control”, as there are some things that “feel” possible (say, moving the bishop to a different-colored space on the board), but actually aren’t (because the lower-level rules don’t work that way).
I am surprised that, to this day, there are people on LW who haven’t yet dissolved free will, despite the topic being covered explicitly (both in the Sequences and in a litany of other posts) over a decade ago.
No, “libertarian free will” (the idea that there exists some notion of “choice” independent of and unaffected by physics) does not exist. Yes, this means that if you are a god sitting outside of the universe, you can (modulo largely irrelevant fuzz factors like quantum indeterminacy) compute what any individual physical subsystem in the universe will do, including subsystems that refer to themselves as “agents”.
But so what? In point of fact, you are not a god. So what bearing does this hypothetical god’s perspective have on you, in your position as a merely-physical subsystem-of-the-universe? Perhaps you imagine that the existence of such a god has deep relevance for your decision-making right here and now, but if so, you are merely mistaken. (Indeed, for some people who consistently make this mistake, it may even be the case that hearing of the god’s existence has harmed them.)
Suppose, then, that you are not God. It follows that you cannot know what you will decide before actually making the decision. So in the moments leading up to the decision, what will you do? Pleading “but it is a predetermined physical fact!” benefits you not at all; whether there exists a fact of the matter is irrelevant when you cannot access that information even in principle. Whatever you do, then—whatever process of deliberation you follow, whatever algorithm you instantiate—must depend on something other than plucking information out of the mind of God.
What might this deliberation process entail? It depends, naturally, on what type of physical system you are; some systems (e.g. rocks) “deliberate” via extremely simplistic means. But if you happen to be a human—or anything else that might be recognizably “agent-like”—your deliberation will probably consist of something like imagining different choices you might make, visualizing the outcome conditional on each of those choices, and then selecting the choice-outcome pair that ranks best in your estimation.
If you follow this procedure, you will end up making only one choice in the end. What does this entail for all of the other conditional futures you imagined? If you were to ask God, he would tell you that those futures were logically incoherent—not simply physically impossible, but incapable of happening even in principle. But as to the question of how much bearing this (true) fact had on your ability to envision those futures, back when you didn’t yet know which way you would choose—the answer is no bearing at all.
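Here is a minimal sketch of that procedure (my own toy; the WORLD_MODEL and RANKING are made-up stand-ins): every conditional future gets “visualized” inside the computation, even though only one choice will ever be enacted.

```python
# A toy deliberator (my illustration; WORLD_MODEL and RANKING are made up):
# imagine each available choice, roll the model forward conditional on it,
# and select the choice-outcome pair that ranks best.
WORLD_MODEL = {                      # imagined outcome for each choice
    "take umbrella": "dry but encumbered",
    "no umbrella": "soaked",
}
RANKING = {"dry but encumbered": 1, "soaked": -5}

def deliberate(choices: list[str]) -> str:
    # every branch is envisioned here, though all but one of these futures
    # will never happen -- and that fact never enters the computation
    imagined = {c: WORLD_MODEL[c] for c in choices}
    return max(imagined, key=lambda c: RANKING[imagined[c]])

print(deliberate(["take umbrella", "no umbrella"]))  # take umbrella
```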
This, then, is all that “free will” is: it is the way it feels, from the inside, to instantiate an algorithm following the decision-making procedure outlined above. In short, free will is an “abstraction” in exactly the sense Steven Byrnes described in the grandparent comment, and your objection
Yup, there are different levels of abstraction to model/predict/organize our understanding of the universe. These are not exclusive, nor independent, though—there’s only one actual universe! Mixing (or more commonly switching among) levels is not a problem if all the levels are correct on the aspects being predicted. The lower levels are impossibly hard to calculate, but they’re what actually happens. The higher levels are more accessible, but sometimes wrong.
is simply incorrect—an error that arose from mixing levels of abstraction that have nothing to do with each other. The “higher-level” picture does not contradict the “lower-level” picture; God’s existence has no bearing on your ability to imagine conditional outcomes, and it is this latter concept that people refer to with terms like “free will” or “control”.
I am surprised that, to this day, there are people on LW who haven’t yet dissolved free will
There is no clear account of this topic. It’s valuable to remain aware of that, so that the situation may be improved. Many of the points you present as evident are hard to interpret, let alone ascertain; it’s not a situation where doubt and disagreement are inappropriate.
The notion of control that makes sense to me is enacting a particular self-fulfilling belief out of a collection of available correct self-fulfilling beliefs. This is not correlation (in the case of correlation, one should look for a single belief in control of the correlated events). But the events controlled by such beliefs may well be instantiated by processes unrelated to the algorithm that determines which of the possible self-fulfilling beliefs gets enacted, that is, unrelated to the algorithm that controls the events. The belief itself doesn’t have to be explicitly instantiated at all; it’s part of the algorithm’s abstract computation. The processes instantiating the events, and those channeling the algorithm, only have to be understood by the algorithm that discovers correct self-fulfilling beliefs and decides which one to enact. These processes don’t have to themselves be controlled by it; in fact, them not being controlled makes for a better setup, since this way the belief under consideration is more specific. (I don’t understand the preferred alternative you refer to with “free cause”.)
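A rough sketch of how I read this (a hypothetical formalization of my own, not necessarily what’s intended): treat a “belief” as a predicted action, call it correct when adopting it makes it come true, and let “control” be the choice among the correct ones.

```python
# Hypothetical toy (my reading of the comment, not a canonical definition):
# a belief is self-fulfilling iff the agent, having adopted it, enacts it.
def act_given_belief(belief: str) -> str:
    # a stubborn toy agent: it enacts "cooperate" or "defect" as believed,
    # but if it believes "flip" it just defects anyway -- so "flip" is not
    # a correct self-fulfilling belief
    return belief if belief in ("cooperate", "defect") else "defect"

def utility(action: str) -> int:
    return {"cooperate": 3, "defect": 1}[action]

candidates = ["cooperate", "defect", "flip"]
# keep only the beliefs that would come true if enacted
correct = [b for b in candidates if act_given_belief(b) == b]
# "control" on this reading: picking which correct belief to enact
print(max(correct, key=utility))  # cooperate
```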
I typed too fast, and combined “free choice” and “uncaused action”. I don’t have a gears-level understanding of causality that admits of BOTH the predictability of a decision AND an algorithm that “decides which one to enact”. It seems to me that in order to be predictable, the decision has to be caused by some observable configuration BEFORE the prediction. That is, there is an upstream cause of both the prediction and the decision.
I disagree that “control” is misleading. Or rather, I think that the concept of “controlling” something is sort of weird and maybe doesn’t make sense when you drill down on it; and the best ways of turning it into a concept that makes sense (and has practical importance) tend to require taking some correlations into account.
Just saying “correlation” also isn’t sufficient, because not all correlations are preserved across different actions you can take.
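A toy illustration of that asymmetry (my own example, in the spirit of the smoking-lesion problem): a hidden common cause produces an observational correlation between action and outcome, and the correlation vanishes the moment you intervene on the action.

```python
# My toy example (hypothetical setup): a hidden common cause makes action
# and outcome correlate observationally, but forcing the action breaks
# the correlation -- so "correlated" alone can't do the work of "control".
import random

def world(forced_action=None):
    gene = random.random() < 0.5                 # hidden common cause
    action = gene if forced_action is None else forced_action
    outcome = gene                               # caused by the gene only
    return action, outcome

random.seed(0)
observed = [world() for _ in range(10_000)]
print(sum(a == o for a, o in observed) / len(observed))   # 1.0: perfect
forced = [world(forced_action=True) for _ in range(10_000)]
print(sum(o for _, o in forced) / len(forced))            # ~0.5: gone
```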
I think we’re agreed there. IMO, THAT is the complexity to focus on—what causes decisions, and what counterfactual decisions are “possible”.
It would require asymmetries and counterfactuals, but they exist, or at least there are good enough approximations to them.
PS. What effect would the incomprehensibility of control have on the Control Problem?
This. The deterministic prisoner’s dilemma reminds me a lot of quantum entanglement and Bell’s theorem experiments—except it doesn’t even have THAT amount of mystery; it’s just plain old correlation. If I pick two boxes, put $1000 into one, and send them both at near lightspeed in opposite directions, you’re not doing FTL signalling when you open one and find the money, thus deducing instantly that the other is empty. This is the same, but it feels weird because intelligences are involved; however, unless you believe in a supernatural source of free will (in which case CDT is the right choice regardless, and you could reasonably defect), intelligences should be subject to the same exact causal chains as boxes full of money.
I agree. EPR means a lack of local determinism, so the solution is ambiguous between indeterminism and nonlocal determinism.
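Taking the box analogy and running it as a toy simulation (my own example): two physically separate copies of one deterministic decision procedure, fed the same input, must agree, so learning one output “instantly” fixes the other with no signal passing between them.

```python
# Toy sketch (my example, echoing the boxes): a common upstream cause --
# the shared function and input -- produces the correlation; nothing
# travels between the two evaluations.
def twin_decision(world_state: int) -> str:
    # both "twins" are instantiations of this exact deterministic function
    return "cooperate" if world_state % 2 == 0 else "defect"

here = twin_decision(42)        # evaluated in this room
far_away = twin_decision(42)    # evaluated a light-year away
assert here == far_away         # plain old correlation, not FTL influence
print(here, far_away)           # cooperate cooperate
```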