Secondly, “free will” is such a loaded word that using it in a non-standard fashion simply obscures and confuses the discussion.
Wikipedia says “Free will is the ability to choose between different possible courses of action unimpeded.” SEP says “The term “free will” has emerged over the past two millennia as the canonical designator for a significant kind of control over one’s actions.” So my usage seems pretty standard.
For example, recently I’ve been arguing that what counts as a valid counterfactual is at least partially a matter of social convention.
All word definitions are determined in large part by social convention. The question is whether the social convention corresponds to a definition (e.g. with truth conditions) or not. If it does, then the social convention is realist, if not, it’s nonrealist (perhaps emotivist, etc).
Material conditionals only provide the outcome when we have a consistent counterfactual.
Not necessarily. An agent may be uncertain over its own action, and thus have uncertainty about material conditionals involving its action. The “possible worlds” represented by this uncertainty may be logically inconsistent, in ways the agent can’t determine before making the decision.
Proof-based UDT doesn’t quite use material conditionals, it uses a paraconsistent version of them instead.
I don’t understand this? I thought it searched for proofs of the form “if I take this action, then I get at least this much utility”, which is a material conditional.
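For concreteness, here is a minimal sketch of the kind of proof search I have in mind (my own illustration, not the literal implementation; `provable` is just a stand-in for some bounded proof search in a formal theory):

```python
# Minimal sketch (my own illustration): search for provable material
# conditionals of the form "Agent() = a -> U() >= u" and take the action
# with the best provable utility guarantee. `provable` is a stand-in for a
# bounded proof search, here just an arbitrary callable.

def proof_based_udt(actions, utilities, provable):
    """Take the action with the best provable utility guarantee."""
    # Search from the highest utility guarantee downwards, so the first
    # provable conditional found is the best one.
    for u in sorted(utilities, reverse=True):
        for a in actions:
            # The sentence searched for is a material conditional about
            # the agent's own action.
            if provable(f"Agent() = {a} -> U() >= {u}"):
                return a
    return actions[0]  # fallback if no conditional is provable

# Toy usage: a "prover" that only knows two facts about a fixed payoff table.
known = {"Agent() = 1 -> U() >= 10", "Agent() = 0 -> U() >= 5"}
print(proof_based_udt([0, 1], [5, 10], lambda s: s in known))  # -> 1
```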
So, to imagine counterfactually taking action Y we replace the agent doing X with another agent doing Y and flow causation both forwards and backwards.
Policy-dependent source code does this; one’s source code depends on one’s policy.
I guess from a philosophical perspective it makes sense to first consider whether policy-dependent source code makes sense and then, if it does, to further ask whether UDT makes sense.
I think UDT makes sense in “dualistic” decision problems that are already factorized as “this policy leads to these consequences”. Extending it to a nondualist case brings up difficulties, including the free will / determinism issue. Policy-dependent source code is a way of interpreting UDT in a setting with deterministic, knowable physics.
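To make “already factorized” concrete, here is a toy sketch (my own illustration, with made-up names): in the dualistic setting the problem statement itself supplies the map from whole policies to consequences, and UDT is just an argmax over that map.

```python
# Toy sketch of the "dualistic" setting (my own illustration): the decision
# problem is handed to us already factorized as policy -> consequences,
# so choosing updatelessly is just an argmax over whole policies.

def udt_choose_policy(policies, consequences_of, utility):
    """Pick the policy whose stated consequences score best."""
    return max(policies, key=lambda policy: utility(consequences_of(policy)))

# Newcomb-like usage: "one-box" / "two-box" are whole policies, and the
# problem statement tells us directly what each one leads to.
consequences = {"one-box": 1_000_000, "two-box": 1_000}
print(udt_choose_policy(list(consequences), consequences.get, lambda c: c))
# -> "one-box"
```

The difficulty in the nondualist case is precisely that a deterministic, knowable physics doesn’t hand you the consequences_of map as a primitive.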
Not quite. The way you are using it doesn’t necessarily imply real control; it may be imaginary control.
All word definitions are determined in large part by social convention
True. Maybe I should clarify what I’m suggesting. My current theory is that there are multiple reasonable definitions of counterfactual and it comes down to social norms as to what we accept as a valid counterfactual. However, it is still very much a work in progress, so I wouldn’t be able to provide more than vague details.
The “possible worlds” represented by this uncertainty may be logically inconsistent, in ways the agent can’t determine before making the decision.
I guess my point was that this notion of counterfactual isn’t strictly a material conditional due to the principle of explosion. It’s a “para-consistent material conditional” by which I mean the algorithm is limited in such a way as to prevent this explosion.
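As a rough illustration of the kind of limitation I have in mind (my own sketch, not a claim about how any actual implementation handles it): before acting on a provable conditional, the agent could also check that the antecedent isn’t refutable within the same proof budget, since a refutable antecedent is exactly what would make the material conditional hold by explosion.

```python
# Rough illustration only (my own sketch): guard against acting on conditionals
# whose antecedent is refutable, since "Agent() != a" being provable would make
# "Agent() = a -> U() >= u" hold vacuously (explosion / spurious conditionals).

def guarded_proof_based_udt(actions, utilities, provable):
    for u in sorted(utilities, reverse=True):
        for a in actions:
            antecedent_refutable = provable(f"Agent() != {a}")
            conditional_provable = provable(f"Agent() = {a} -> U() >= {u}")
            if conditional_provable and not antecedent_refutable:
                return a
    return actions[0]  # fallback if nothing survives the guard
```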
Policy-dependent source code does this; one’s source code depends on one’s policy.
Hmm… good point. However, were you flowing this all the way back in time? For example, if you changed someone’s source code, you’d also have to change the person who programmed them.
I think UDT makes sense in “dualistic” decision problems…
The way you are using it doesn’t necessarily imply real control; it may be imaginary control.
I’m discussing a hypothetical agent who believes itself to have control. So its beliefs include “I have free will”. Its belief isn’t “I believe that I have free will”.
It’s a “para-consistent material conditional” by which I mean the algorithm is limited in such a way as to prevent this explosion.
Yes, that makes sense.
However, were you flowing this all the way back in time?
Yes (see thread with Abram Demski).
What do you mean by dualistic?
Already factorized as an agent interacting with an environment.
Hmm, yeah, this could be a viable theory. Anyway, to summarise the argument I make in Is Backwards Causation Necessarily Absurd?: since physics is pretty much reversible, instead of A causing B we could equally imagine B causing A with time running backwards. In this view, it would be reasonable to say that one-boxing (backwards-)caused the box to be full in Newcomb’s problem. I only sketched the theory because I don’t have enough physics knowledge to evaluate it, but the point is that we can give a justification for a non-standard model of causality.