I don’t think you understand EY’s position at all.
The actual argument can be summarized more like this: “If free will means anything, then it must mean our algorithm’s ability to determine our actions. Therefore free will is not only compatible with determinism, it’s absolutely dependent on determinism. If our mind’s state didn’t determine our actions, it is then that there would be no possibility of free will.
The sort of confusion that takes free will to be incompatible with determinism derives from people picturing themselves as being restrained by physics instead of being part of physics.”
I’d take that, minus the crucial dependence on determinism. A system can contain stochastic elements and yet be compatible with free will.
The more randomness there is in the system, the less your actions are determined by your mind’s state, and the less you control your actions.
It’s not obvious that a deterministic system, such as a billiard ball, is in control of its actions just because it is deterministic. Control is making choices between possible courses of action. If a system is deterministic, the possibilities it considers are merely hypothetical; if it is indeterministic, they are real possibilities that could actually happen. It is not at all clear that the latter is not a lack of control.
I believe the billiard ball to be a meaningless analogy because billiard balls have no minds, make no considerations over futures, and have no preferences over futures either. As such billiard balls do not “choose” and do not have wills (free or otherwise).
By “making choices between” do you mean just “having a conscious preference between” or do you mean “affecting the probability (positively or negatively) of each possible action occurring, according to said conscious preferences”?
Consider the configuration space of the preferences of a conscious mind A, and the configuration space of an action B. For A to control B means for the various possible configurations of the preferences of mind A to constrain differently the various probability weights in the configuration space of action B.
E.g. if the configuration of my mind is that I’m a “Fringe” fan, this makes it directly more likely that I’ll watch the Fringe series finale. So I have control over my personal action of watching the series.
On the other hand I can’t control my heartbeat directly. It is still deterministic in a physical sense (indeed more so than me watching Fringe), but its probability is unconstrained by my preferences. So again my conscious mind’s state A doesn’t constrain the configuration space of B, and I don’t have control over my heartbeat.
Lastly, let’s consider an effectively indeterministic system, e.g. dice (use quantum dice for the nitpickers). I can throw the dice, and I can hope for a particular number, but “indeterministic” pretty much means by definition that their results aren’t determined by a previous state, which includes my preferences. So I have no control over the dice’s outcome, no matter how much I would prefer one possible state over another.
So, yeah: determinism by itself isn’t sufficient—the core of the issue is how much my preferences determine the probability weights in the configuration space of actions.
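To make that notion of control concrete, here is a minimal toy sketch of my own (the function names and probability numbers are invented assumptions, not anything from EY or the thread): control is just how much changing the preference state shifts the probability of the action.

```python
def action_probability(preference, action):
    """Toy probability that the action happens, given a preference state.

    The numbers are invented purely for illustration.
    """
    if action == "watch_fringe_finale":
        # The action's probability is constrained by the mind's configuration.
        return 0.9 if preference == "fringe_fan" else 0.1
    # Heartbeats and quantum dice ignore what the mind prefers.
    return 0.5

def control(action):
    """Crude measure of control: how much varying the preference shifts the probability."""
    return abs(action_probability("fringe_fan", action) -
               action_probability("not_a_fan", action))

for action in ("watch_fringe_finale", "next_heartbeat", "quantum_die_shows_six"):
    print(action, control(action))
# watch_fringe_finale 0.8   -> preferences constrain the outcome: control
# next_heartbeat 0.0        -> deterministic but preference-independent: no control
# quantum_die_shows_six 0.0 -> indeterministic and preference-independent: no control
```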
That’s kind of what I was getting at.
Neither. The point I went on to is that both count.
That isn’t an argument against indeterminism-based FW, if it was meant to be.
Can you then explain what you mean by the phrase “making choices between”?
I’ll resummarize my point, and I hope you explain where you disagree with it this time (frankly, this style of discussion, where you don’t seem to want to volunteer much information, is rather tiring for me).
I know no meaning of “control of A over B” which doesn’t have to do with A causally helping to determine the probabilities of B’s configuration space. The more it affects those probabilities, the more control A has over B. If those probabilities are not determined by A at all, then obviously A has no control over B. So the complete “indeterminism” of an action means the utter lack of control of A over B.
Can you please tell me where you start disagreeing with the above paragraph?
I should have said neither specifically. It was intended to cover both the more detailed options.
You haven’t straightforwardly answered the question of whether you are arguing against indeterminism based free will.
No one is talking about complete indeterminism. Also, a non-deterministic process A can still control B in your sense.
I consider libertarian free will not only false but self-contradictory. In short, not only does it not exist, I don’t see how it could possibly exist (for coherent definitions of determinism and free will) even in a hypothetical universe.
If there’s a distinction you’re making between libertarian free will and “indeterminism-based” free will, sorry but I’m not aware of the distinction.
Then separate the indeterministic parts of a system from the deterministic parts, and the argument still applies: you can’t determine the probabilities of the indeterministic parts, therefore you can’t control them, therefore the more indeterministic parts there are, the smaller your maximum possible control over the whole becomes.
If you have any control, it must be over the parts whose probabilities you can determine, and only to the extent that you can determine them; in short, the more deterministic something is, the greater the maximum possible control you can have over it. This again seems pretty self-evident to me.
In short, what supporters of libertarian free will are claiming about determinism (that it would eliminate free will) is actually correct about indeterminism.
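A minimal sketch of that “more indeterminism, less maximum possible control” point (again my own toy illustration; the mixing formula and names are assumptions, not anything from the thread): let the outcome be a weighted mix of a preference-determined part and a purely random part, and see how far the preference can still move the outcome.

```python
import random

def outcome(preference, noise_fraction, luck):
    """Toy outcome: a mix of a preference-determined part and an indeterministic part."""
    return (1 - noise_fraction) * preference + noise_fraction * luck

luck = random.random()  # the part that no prior state (preferences included) determines
for noise_fraction in (0.0, 0.25, 0.5, 1.0):
    # Maximum possible influence of the preference: how far the outcome moves
    # when the preference swings from 0 to 1, holding the random part fixed.
    influence = abs(outcome(1.0, noise_fraction, luck) - outcome(0.0, noise_fraction, luck))
    print(noise_fraction, influence)
# 0.0 1.0, 0.25 0.75, 0.5 0.5, 1.0 0.0:
# the larger the indeterministic share, the smaller the maximum possible control.
```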
I was talking about A as mind-state, e.g. preferences (values, ethics, etc), not the decision-making process (let’s call it D) that connects the preferences and the choice B.
The more the outcome of D is determined by A, the more control those preferences, values, ethics (in short the person) has over B.
This again seems so obvious to me as to be practically a tautology.
Where’s the argument that the indeterministic model [of libertarian free will] is incoherent?
Indeterminism-based free will is naturalistic libertarian FW.
That depends on what you mean by “you”. That your brain thinks thoughts does not mean that you, the person, are not thinking thoughts. Decisions made by your neural subsystems are made by you, the person. You (some homunculus?) don’t need to pre-think your thoughts for them to be yours, nor do you need to pre-choose your choices.
What does “you” mean there?
A deterministic brain might be a nice toy for an immaterial homunculus, but we are dealing with naturalism here. We are dealing with how a system can choose between possible actions. Indeterminism means the possibilities are real possibilities.
But where’s the choice?
Of course, that’s my whole point. That my brain is making choices doesn’t mean that I’m not making choices.
It doesn’t matter for the purpose of the question. No matter how you define yourself, my statement still applies. Personally I’d define it as my personality which includes my preferences, my values, my ways of thinking, etc. But as I said it doesn’t matter for the purpose of the question. For any person’s definition of “you” the statement still applies.
Okay, look. When you say “where’s the choice?” I can only understand your question as saying “where’s the decision process?” The answer is that the decision process happens physically in your brain.
So “the choice” is very real and physically occurring in your brain.
If you mean something else with choice other than “decision process”, then please clarify what you mean.
That’s not what I mean. I mean that any deterministic process can be divided into stages, such that stage 1 “controls” stage 2 and so on. But because it is deterministic, every probability is 1. But choice is choice between options. Where are the other options, the things you could have done but didn’t?
You have subjective uncertainty about what you will do, so you know only of a set of hypothetical actions, given by descriptions that you can use. Even though only one of these will actually take place, your decision algorithm is working with the whole set, it can’t work with the actual action in particular, because it doesn’t know what it is. So in one sense, “options” may refer to this element of the decision algorithm.
The decision process is a selection between modelled actions and between modelled futures—it isn’t making a selection between actual physical futures, one real and others not.
E.g. if I decide to step forward, but just before I do so someone pulls me back, my choice was equally real even though, against my will, I failed to actualize it; my decision process still concluded.
Indeed if I’m insane and make a choice to flap my wings and fly, my decision process is still real even if the action I decide to take is physically impossible and my model of my available options is horribly flawed.
So, the “other options”, same as the option you pick, they’re all representations encoded in your brain, and physically real at that level.
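As a minimal sketch of “options as representations” (my own illustration; the option list and scores are invented): the decision procedure ranges over modelled options that all exist as data inside the decider, even though only one of them gets actualized, and even when one of them is physically impossible.

```python
def decide(modelled_options, preference_score):
    """A decision process selects among *modelled* options, i.e. data inside the
    decider, not among alternative physical futures."""
    return max(modelled_options, key=preference_score)

# All three options exist as representations; one is physically impossible,
# yet the decision process over them is perfectly real.
scores = {"step forward": 2, "stay put": 1, "flap my wings and fly": 0}
choice = decide(scores.keys(), scores.get)
print(choice)  # "step forward" -- the choice is made even if someone then
               # pulls me back and it never gets actualized.
```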
That’s a description of the deterministic model. Where’s the argument that the indeterministic model is incoherent?
Please post this question in direct response to the comment where I called the indeterministic model incoherent, in order to have a cleaner structure in the discussion.
Yes… and yet, the slightest touch of indeterminism does not immediately wipe out the possibility of free will. You said it was absolutely dependent on determinism. That is false. Was that not clear?
If I say that a forest fire is absolutely dependent on the presence of oxygen in the atmosphere, it doesn’t follow that the “slightest touch” of nitrogen would immediately wipe out the possibility of fires.
And yet the fire would still be absolutely dependent on the presence of oxygen.
If “determinism” is taken to mean the theory that the past uniquely and completely determines the future (“hard” determinism?), then the more accurate analogy would be to say that “forest fires are absolutely dependent on an atmosphere of pure oxygen”.
At this point the dispute becomes a linguistic triviality, I think.
My position is as follows: If some elements of a system are deterministic and others non-deterministic, then if free will is expressed anywhere it can only be expressed with the deterministic elements, not with the non-deterministic ones; much as fire is fueled by the oxygen in the atmosphere, not by the nitrogen of the atmosphere.
(Control requires presence of determinism, doesn’t require absence of randomness. There is no dichotomy in the intended sense.)
I think that position of yours is correct (and well put), regardless of what EY may or may not think.
Many people are offended at the thought of being controlled by physics when they are in fact a part of physics.
That answers the more relevant question of “What’s all the stupid fuss over this supposed question of free will?” People treat it like it’s some big mysterious conundrum, when that feeling of mystery should tell them they are confused and should check their premises.
Our ability to determine our decisions need not be deterministic. Not all algorithms are deterministic.
See two-stage theories. The mind/body system has to reliably put a decision into practice once it has been made, but that doesn’t imply the decision-making has to be deterministic.
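For what it’s worth, a minimal sketch of the two-stage idea as I understand it (a purely illustrative toy; the candidate actions, scores, and function names are my own assumptions): an indeterministic first stage throws up candidate options, and a reliable, preference-determined second stage selects among them.

```python
import random

ACTIONS = ["walk", "read", "watch Fringe", "sleep", "work", "call a friend"]
PREFERENCES = {"watch Fringe": 3, "read": 2, "call a friend": 2, "walk": 1, "sleep": 1, "work": 0}

def generate_candidates(rng, n=3):
    """Stage 1 (indeterministic): which options come to mind is left to chance."""
    return rng.sample(ACTIONS, n)

def select(candidates):
    """Stage 2 (deterministic): reliably pick the candidate the mind's preferences rank highest."""
    return max(candidates, key=PREFERENCES.get)

rng = random.Random()  # stands in for whatever indeterminism the brain contains
candidates = generate_candidates(rng)
print(candidates, "->", select(candidates))
```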