I found parts of your framing quite original and I’m still trying to understand all the consequences.
Firstly, I’m also opposed to characterising the problem in terms of logical counterfactuals. I’ve argued before that Counterfactuals are an Answer Not a Question, although maybe it would have been clearer to say that they are a Tool Not a Question instead. Strictly speaking, it doesn’t make sense to ask what maths would be like if 1+1=3, as it doesn’t, but we can construct a paraconsistent logic where it makes sense to do something analogous to pretending 1+1=3. So maybe one form of “logical counterfactual” could be useful for solving these problems, but that doesn’t mean it makes sense to ask what logical counterfactuals are, as though they were ontologically basic, as though they were in the territory not the map, as though they were a single unified concept.
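To make the “pretending” concrete, here is a toy sketch (entirely my own illustration, not drawn from any formal paraconsistent system): instead of asserting 1+1=3 inside ordinary arithmetic, where explosion would let us derive anything, we confine the pretence to a rewrite rule on the specific subterm, leaving the rest of arithmetic untouched.

```python
# A toy "quarantined pretence": rewrite the literal subterm "1+1" to "3"
# before evaluating, rather than adding 1+1=3 as an axiom (which would
# make everything derivable by explosion).

def pretend_eval(expr):
    """Evaluate an arithmetic expression as if 1+1 were 3."""
    return eval(expr.replace("1+1", "3"))

print(pretend_eval("(1+1)*2"))  # 6  -- "as if" 1+1 were 3
print(pretend_eval("2+2"))      # 4  -- unrelated facts are unaffected
```

The point is only that the pretence is quarantined: facts not involving the rewritten subterm are evaluated exactly as before.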
Secondly, “free will” is such a loaded word that using it in a non-standard fashion simply obscures and confuses the discussion. Nonetheless, I think you are touching upon an important point here. I have a framing which I believe helps clarify the situation. If there’s only one possible decision, this gives us a Trivial Decision Problem. So to have a non-trivial decision problem, we’d need a model containing at least two decisions. If we actually did have libertarian free will, then our decision problems would always be non-trivial. However, in the absence of this, the only way to avoid triviality would be to augment the factual with at least one counterfactual.
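The triviality framing can be stated in a few lines of code (a minimal sketch; representing a decision problem as a dict of options is my own illustration): without a second, counterfactually supplied option, there is nothing to decide.

```python
# A decision problem with fewer than two modelled options is trivial:
# there is no decision left to make.

def is_trivial(decision_problem):
    return len(decision_problem["options"]) < 2

# The factual world alone contains only the action that actually happens.
factual_only = {"options": ["one-box"]}

# Augmenting the factual with a counterfactual makes the problem non-trivial.
augmented = {"options": ["one-box", "two-box"]}

print(is_trivial(factual_only))  # True
print(is_trivial(augmented))     # False
```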
Counterfactual non-realism: Hmm… I see how this could be a useful concept, but the definition given feels a bit vague. For example, recently I’ve been arguing in favour of what counts as a valid counterfactual being at least partially a matter of social convention. Is that counterfactual non-realism?
Further, it seems a bit strange to associate material conditionals with counterfactual non-realism. Material conditionals only provide the outcome when we have a consistent counterfactual. So, either a) we believe in libertarian free will, or b) we use something like the erasure approach to remove information such that we have multiple consistent possibilities (see https://www.lesswrong.com/posts/BRuWm4GxcTNPn4XDX/deconfusing-logical-counterfactuals). Proof-based UDT doesn’t quite use material conditionals; it uses a paraconsistent version of them instead. Although, maybe I’m just being too pedantic here. In any case, we can find ways of making paraconsistent logic behave as expected in any scenario; however, it would require a separate ground. That is, it isn’t enough that the logic merely seems to work: we should be able to provide a separate reason why using a paraconsistent logic in that way is good.
Also, another approach which kind of aligns with counterfactual non-realism is to say that, given the state of the universe at any particular time, we can determine the past and future, and that there are no counterfactuals beyond those we generate by imagining state Y at time T instead of state X. So, to imagine counterfactually taking action Y, we replace the agent doing X with another agent doing Y and flow causation both forwards and backwards. (See this post for more detail). It could be argued that these count as counterfactuals, but I’d align it with counterfactual non-realism as it doesn’t have decision counterfactuals as separate ontological elements.
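A minimal sketch of this picture (the dynamics and numbers are my own toy example, chosen only because a bijection on integers is trivially reversible): substituting a different state at time T and flowing the deterministic dynamics in both directions changes the past as well as the future.

```python
# Toy reversible, deterministic world: a bijection on integers.
def step(state):
    return state + 3

def unstep(state):
    return state - 3

def trajectory(state_at_t, t, horizon):
    """Flow causation backwards to time 0 and forwards to `horizon`."""
    traj = {t: state_at_t}
    s = state_at_t
    for k in range(t - 1, -1, -1):       # backwards from T
        s = unstep(s)
        traj[k] = s
    s = state_at_t
    for k in range(t + 1, horizon + 1):  # forwards from T
        s = step(s)
        traj[k] = s
    return traj

factual = trajectory(state_at_t=10, t=2, horizon=4)
# Counterfactual: imagine state 11 at time 2 instead. The whole history
# AND future differ, including times before T.
counterfactual = trajectory(state_at_t=11, t=2, horizon=4)
print(factual[0], counterfactual[0])  # 4 5 -- the past changes too
```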
Policy-dependent source code—this is actually a pretty interesting framing. I’ve always defaulted to thinking about counterfactuals in terms of actions, but when we’re talking about things in terms of problems like Counterfactual Mugging, characterising counterfactuals in terms of policy might be more natural. It’s strange that this feels fresh to me—I mean UDT takes this approach—but I never considered the possibility of non-UDT policy counterfactuals. I guess from a philosophical perspective it makes sense to first consider whether policy-dependent source code makes sense and then, if it does, to further ask whether UDT makes sense.
As a side note—one thing I don’t understand is why more people don’t seem to want to use just the word “will” without the “free” part in front of it.
It seems like a much more straightforward and less fraught term, and something that we obviously have. Do we have a “will”? Obviously yes—we want things, we choose things, etc. Is that will “free”? Well what does that mean?
EDIT: I feel like this is a case of philosophers baking in a confusion into their standard term. It’d be like if instead of space we always talked about “absolute space”. And then post-Einstein people argued about whether “absolute space” existed or not, without ever just using the term “space” just by itself.
Philosophers talk about free will because it is contentious and therefore worth discussing philosophically, whereas will, qua wants and desires, isn’t.
cf. the silly physicists who insist on talking about dark matter, when anyone can see that ordinary matter exists.
Philosophers talk about free will because it is contentious and therefore worth discussing philosophically, whereas will, qua wants and desires, isn’t.
Fair point. But then why do so many (including philosophers) make statements like, “we seem to have free will”, or “this experience of apparent free will that we have requires explanation.”
If ‘free will’ in those statements means something different from ‘will’, then it seems like they’re assuming the (wrong) explanation.
cf. the silly physicists who insist on talking about dark matter, when anyone can see that ordinary matter exists.
If physicists often used the term “dark matter” in ways that suggested it’s the same thing as people’s folk concept of matter, then I’d agree that they were silly.
Fair point. But then why do so many (including philosophers) make statements like, “we seem to have free will”, or “this experience of apparent free will that we have requires explanation.”
Why specific philosophers say specific things is usually explained by the philosophers themselves, since it is hard to gain a reputation in the field by making unsupported assertions. But you seem to be making the point that it is strange that any philosopher argues in favour of free will, since, according to you, it is obviously non-existent. The answer to that is that you are not capable of reproducing all the arguments for or against a claim yourself, so your personal guesswork is not a good guide to how plausible something is.
“this experience of apparent free will that we have requires explanation.”
Doesn’t everything require explanation? Even your man Yudkowsky offers an explanation of the feeling of free will.
If physicists often used the term “dark matter” in ways that suggested it’s the same thing as people’s folk concept of matter, then I’d agree that they were silly.
Physicists do use the word “matter” in a sense that departs from folk usage. For instance, they assert that it is mostly nothingness, and that it is equivalent to energy.
But you seem to be making the point that it is strange that any philosopher argues in favour of free will, since, according to you, it is obviously non-existent.
I didn’t mean that just the philosophers who believe in (libertarian, contra-causal) free will make statements like “we seem to have free will”, or “this experience of apparent free will that we have requires explanation”. I’ve heard those statements even from those questioning such free will.
They’ll say, “we seem to have free will, but actually it’s an illusion”.
What I do not see is proponents of determinism saying that “free will” is the wrong term, that most of the intuitive properties that our wants and choices seem to have are satisfied by the idea of a “will” plain and simple. And then starting the argument from there about whether there are additional properties that that will has, or seems to have, such that it’s reasonable to append the term “free” to the front.
Maybe it’s popularizers that I have to blame, rather than philosophers. I’m not sure. My complaint is that somehow the standard sides of the debate came to be labeled “free will” vs “determinism” rather than “uncaused will” vs “determined will”.
I think the “free will” vs “determinism” framing unfairly makes it seem like whether any wanting or choosing is happening is at stake, such that people had to come up with the special term “compatibilism” for the position that “no no, there’s still wanting and choosing going on”.
If you started the debate with everyone agreeing, “obviously there’s some form of wanting and choosing happening,” and then asking, “but what form does it take and where does it come from? Can it be said to be caused by anything?” then I think the natural terms for the two camps would be something like “uncaused will” and “determined will”.
I think those terms accurately describe the major sides of the popular debate and are less likely to prejudice people’s intuitions in favor of the free/uncaused will side.
So what I don’t understand is: why don’t proponents of determinism push that framing?
Proponents of determinism tend to say that libertarian free will doesn’t exist, but compatibilist free will might. It is likely that they are expressing the same idea as you, but in different language.
Secondly, “free will” is such a loaded word that using it in a non-standard fashion simply obscures and confuses the discussion.
Wikipedia says “Free will is the ability to choose between different possible courses of action unimpeded.” SEP says “The term “free will” has emerged over the past two millennia as the canonical designator for a significant kind of control over one’s actions.” So my usage seems pretty standard.
For example, recently I’ve been arguing in favour of what counts as a valid counterfactual being at least partially a matter of social convention.
All word definitions are determined in large part by social convention. The question is whether the social convention corresponds to a definition (e.g. with truth conditions) or not. If it does, then the social convention is realist, if not, it’s nonrealist (perhaps emotivist, etc).
Material conditionals only provide the outcome when we have a consistent counterfactual.
Not necessarily. An agent may be uncertain over its own action, and thus have uncertainty about material conditionals involving its action. The “possible worlds” represented by this uncertainty may be logically inconsistent, in ways the agent can’t determine before making the decision.
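This can be illustrated with a small sketch (the probabilities and utilities are invented for illustration): the agent assigns probability to material conditionals of the form “action = a → utility ≥ u” directly from its uncertainty over worlds, with the conditional counting as true in any world where its antecedent is false, and no consistent counterfactual world ever being constructed.

```python
# Hypothetical worlds: (probability, action taken there, utility there).
worlds = [
    (0.6, "one-box", 1_000_000),
    (0.4, "two-box", 1_000),
]

def p_material_conditional(action, bound):
    """P(action = a -> utility >= bound) under the agent's uncertainty.

    The material conditional holds in a world whenever its antecedent is
    false (the agent takes a different action there) or its consequent is
    true (the utility meets the bound)."""
    return sum(p for p, a, u in worlds if a != action or u >= bound)

print(p_material_conditional("one-box", 500_000))  # 1.0
print(p_material_conditional("two-box", 500_000))  # 0.6
```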
Proof-based UDT doesn’t quite use material conditionals, it uses a paraconsistent version of them instead.
I don’t understand this? I thought it searched for proofs of the form “if I take this action, then I get at least this much utility”, which is a material conditional.
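A toy stand-in for that search (emphatically not real proof-based UDT, which searches for proofs in a formal theory; here “provable” is replaced by direct evaluation of a known environment, and the payoffs are invented) keeps the shape of the procedure: establish conditionals of the form “if I take action a, I get at least utility u” and pick the action with the best established bound.

```python
def env(action):
    # Hypothetical environment with illustrative payoffs.
    return {"one-box": 1_000_000, "two-box": 1_000}[action]

def best_action(actions, utility_bounds):
    """Pick the action with the strongest establishable lower bound."""
    best = None
    for a in actions:
        for u in sorted(utility_bounds, reverse=True):
            # Toy "proof" of the conditional "a -> utility >= u":
            # just evaluate the environment directly.
            if env(a) >= u:
                if best is None or u > best[1]:
                    best = (a, u)
                break  # strongest bound for this action found
    return best[0]

print(best_action(["one-box", "two-box"], [1_000, 1_000_000]))  # one-box
```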
So, to imagine counterfactually taking action Y we replace the agent doing X with another agent doing Y and flow causation both forwards and backwards.
Policy-dependent source code does this; one’s source code depends on one’s policy.
I guess from a philosophical perspective it makes sense to first consider whether policy-dependent source code makes sense and then if it does further ask whether UDT makes sense.
I think UDT makes sense in “dualistic” decision problems that are already factorized as “this policy leads to these consequences”. Extending it to a nondualist case brings up difficulties, including the free will / determinism issue. Policy-dependent source code is a way of interpreting UDT in a setting with deterministic, knowable physics.
Not quite. The way you are using it doesn’t necessarily imply real control; it may be imaginary control.
All word definitions are determined in large part by social convention
True. Maybe I should clarify what I’m suggesting. My current theory is that there are multiple reasonable definitions of counterfactual and it comes down to social norms as to what we accept as a valid counterfactual. However, it is still very much a work in progress, so I wouldn’t be able to provide more than vague details.
The “possible worlds” represented by this uncertainty may be logically inconsistent, in ways the agent can’t determine before making the decision.
I guess my point was that this notion of counterfactual isn’t strictly a material conditional due to the principle of explosion. It’s a “paraconsistent material conditional”, by which I mean the algorithm is limited in such a way as to prevent this explosion.
Policy-dependent source code does this; one’s source code depends on one’s policy.
Hmm… good point. However, were you flowing this all the way back in time? For example, if you change someone’s source code, you’d also have to change the person who programmed them.
I think UDT makes sense in “dualistic” decision problems
The way you are using it doesn’t necessarily imply real control; it may be imaginary control.
I’m discussing a hypothetical agent who believes itself to have control. So its beliefs include “I have free will”. Its belief isn’t “I believe that I have free will”.
It’s a “paraconsistent material conditional”, by which I mean the algorithm is limited in such a way as to prevent this explosion.
Yes, that makes sense.
However, were you flowing this all the way back in time?
Yes (see thread with Abram Demski).
What do you mean by dualistic?
Already factorized as an agent interacting with an environment.
Hmm, yeah, this could be a viable theory. Anyway, to summarise the argument I make in Is Backwards Causation Necessarily Absurd?: I point out that since physics is pretty much reversible, instead of A causing B, we could also imagine B causing A with time running backwards. On this view, it would be reasonable to say that one-boxing (backwards-)caused the box to be full in Newcomb’s problem. I only sketched the theory because I don’t have enough physics knowledge to evaluate it properly. But the point is that we can give a justification for a non-standard model of causality.
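As a sketch of how the backwards-causation reading handles Newcomb’s problem (illustrative only; the payoffs are the standard ones, and the “reversible dynamics” are compressed into a single function): if the dynamics are reversible, the predictor’s earlier state is a deterministic function of the later choice, so we may as well say the choice (backwards-)causes the box contents.

```python
def box_contents(choice):
    # Running the reversible dynamics "backwards" from the choice to the
    # predictor's earlier state: a perfect predictor fills the opaque box
    # exactly when the agent one-boxes.
    return 1_000_000 if choice == "one-box" else 0

def payoff(choice):
    # Two-boxers also take the transparent $1,000 box.
    return box_contents(choice) + (1_000 if choice == "two-box" else 0)

print(payoff("one-box"))  # 1000000
print(payoff("two-box"))  # 1000
```

Under this reading, one-boxing dominates because the choice and the box contents are not independent: the earlier state is determined by the later choice.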