So with respect to free will, we can instead ask the question, “Why would humans feel like they have free will?” If we can answer this well enough, then hopefully we can dissolve the original question.
Only we can’t. The original question was whether some organisms have the ability to make choices that aren’t fully determined by outside circumstances. That isn’t addressed by answering the question “why would humans feel like they have free will”.
a) humans have FW and feel they do
b) humans don’t have FW, but feel they do
c) humans have FW but feel they don’t
d) humans don’t have or feel they have FW
Yudkowsky shows a way in which (b) could be possible. But he doesn’t show that (a) is impossible. IOW, he doesn’t address the original question at all.
Is the original question a bad one which should be replaced? Some approaches to answering the original question are unfathomable (e.g. the idea of FW as a fundamental tertium datur beyond determinism and indeterminism), others are not. Some naturalistic theories of FW are potentially empirically testable, so throwing out the question involves throwing out a set of empirically respectable theories.
ETA:
The falling tree question: recognising that different sides in the argument are really using different definitions does dissolve the question. But Yudkowsky’s approach to Free Will is not analogous, because there is no side of the debate that unknowingly defines FW as the feeling of being able to make choices as opposed to the ability. EY introduced that definition. (There is a disagreement about compatibilist versus libertarian notions of Free Will, which I have deliberately omitted for simplicity, but that is still not analogous to the Falling Tree problem because the various sides are quite aware that their definitions differ. “It all depends what you mean by…”)
“Outside circumstances” including what? Your definition is too vague.
As far as I’ve been able to tell, the question is confused. Before you ask what it is, first you must define what free will is, in a rigorous and exclusive manner; your definition shouldn’t include things you don’t want it to include, nor should it exclude things you don’t want to exclude. You’ve managed to include everything you want included, but your definition fails to exclude things you don’t want included—namely, your definition includes Eliezer’s definition.
Because Eliezer isn’t describing how we could experience free will even where free will doesn’t exist, he’s offering a definition of what free will is. Once you use Eliezer’s definition, the confusion goes away—the question becomes meaningless.
Anything. If my choices are fully determined by anything outside of me, they are not my choices.
Your definition is too vague.
I didn’t specify which outside circumstances because it doesn’t matter.
namely, your definition includes Eliezer’s definition.
No. Feelings aren’t abilities. An ability to choose does not conceptually include a feeling of freedom.
Because Eliezer isn’t describing how we could experience free will even where free will doesn’t exist, he’s offering a definition of what free will is.
A re-definition. A different definition. Hence he is not answering or dissolving the original question.
Once you use Eliezer’s definition, the confusion goes away—the question becomes meaningless.
That is false. Once you start using a different definition, you start talking about something else. Changing the subject is not dissolving the question.
Define “outside of me.” Does a proton in your brain count as outside of you? What about a neuron?
Outside of my control systems, my CNS.
To what extent? If something outside your nervous system enters into your nervous system—say, LSD—does it qualify as internal? Does it only count if you chose to imbibe it, or would it also count if somebody else forced you to, or if circumstance forced it upon you (say, you consumed something unknowingly)?
Whatever. I have given as much detail as is needed for a philosophical definition. Definitions aren’t theories.
Intentional preservation of vagueness. You’re either a troll or a mystic. I think the “troll” description is probably less insulting in this context.
Oh good grief. You can call anything vague if you set the bar high enough. Am I being significantly more vague than EY was?
ETA:
Whoops, looks like the people who write the Skeptic’s Dictionary are mystical trolls too:
“Free will is a concept in traditional philosophy used to refer to the belief that human behavior is not absolutely determined by external causes, but is the result of choices made by an act of will by the agent.”
Yes. I understood precisely what Eliezer was referring to.
Whereas I have no idea whatsoever what you’re referring to. Elaborating:
You state that the question of free will comes down to:
“Whether some organisms have the ability to make choices that aren’t fully determined by outside circumstances.”
When asked to define “outside circumstances,” drilling down, it becomes anything outside the central nervous system.
Which leaves the question in an uncomfortable position whereby it is calling dualism a form of determinism. Indeed, any solution which posits a non-reductionist answer to the question of free will is being called determinism by your definition.
Worse still, your formulation is completely senseless in the reductionist form you’ve left it; you deny non-reductionist answers, but you implicitly deny all reductionist answers as well, because they’ve -already- answered your question: No choice happens whatsoever that is “fully determined” by things outside your central nervous system, that denies the very -concept- of reductionism. Your question maintains meaning only as rhetoric. To say Eliezer hasn’t answered it in that context is to complain that he didn’t preface his arguments with a statement that the brain is the organ which is making these choices.
Which leads me right back to “You have to be trolling.”
Which leaves the question in an uncomfortable position whereby it is calling dualism a form of determinism. Indeed, any solution which posits a non-reductionist answer to the question of free will is being called determinism by your definition.
A dualist would regard their immaterial mind as internal. I was giving a non-dualist answer to the question “what is outside” because I thought there weren’t any dualists round here. Are you a dualist? Am I being vague because I correctly anticipated your background assumptions?
Worse still, your formulation is completely senseless in the reductionist form you’ve left it; you deny non-reductionist answers, but you implicitly deny all reductionist answers as well, because they’ve -already- answered your question: No choice happens whatsoever that is “fully determined” by things outside your central nervous system, that denies the very -concept- of reductionism.
Events happen that are fully determined by outside events, for instance if someone pushes you out of a window. We wouldn’t call them free choices, but so what? All that means is that I have correctly identified what free choice is about: my definition picks out the set of free choices.
Your question maintains meaning only as rhetoric.
I have no idea what you mean by that.
To say Eliezer hasn’t answered it in that context is to complain that he didn’t preface his arguments with a statement that the brain is the organ which is making these choices.
He hasn’t answered the question of FW because he hasn’t said anything at all about whether or not brains can make choices that are not entirely determined by outside events.
Then philosophical definitions must not be enough to answer questions. Hardly new information.
[Edited for tone.]
But Yudkowsky’s approach to Free Will is not analogous, because there is no side of the debate that unknowingly defines FW as the feeling of being able to make choices as opposed to the ability.
The problem is that one side unknowingly defines “choices” as the ability for a person to make choices and at the same time have the universe not determine those choices, as if the person isn’t a subelement of the universe.
Once you realize that the person is a subelement of the universe and that each choice determined by the person is therefore necessarily determined by the universe, the question of free will is dissolved. Yes the past state of the universe determines everything, but that doesn’t reduce the extent that the person determines something because the person isn’t outside the universe.
The one thing I don’t remember mentioned is the opposite effect (but maybe I missed it): if you experience a failure to accomplish something, the free will explanation is likely to make you stop investigating the root cause, leaving it as a mystery.
One side knowingly defines free choices as choices that aren’t entirely determined by outside influences.
“Metaphysical freedom [...] one of the two main kinds, involves not being completely governed by deterministic causal laws.” —Oxford Companion to Philosophy.
Once you realize that the person is a subelement of the universe and that each choice determined by the person is therefore necessarily determined by the universe,
That makes no sense. Determinism is not true just because everything is part of the universe. Believers in indeterminism don’t deny that everything is part of the universe. All you can conclude from the claim that people are made of atoms is that whatever power of choice or volition they have, however free or unfree, is implemented by atoms. But implementation is not determinism. The claim that people are made of atoms excludes supernatural libertarian free will, the theory that free will is implemented by some immaterial spirit. It does not exclude naturalistic libertarian free will or compatibilism. Since it leaves multiple options open, it is not “the answer”.
Moreover, one should not expect the problem of free will to have a one-line solution that only states something that is already believed by most philosophers (that’s philosophers, not theologians).
One side knowingly defines free choices as choices that aren’t entirely determined by outside influences.
They consider that a single stochastic element in a decision process suffices to make the decision process “free will”, even if the stochastic element (to the extent it’s stochastic) by definition wouldn’t have any causal connection to a person’s motivation or values?
People I’ve argued with on the internet regarding free will tend to believe the opposite, that non-deterministic free will somehow imbues more meaning to their choices, though expressed as in the above paragraph it would clearly imbue less meaning to their choices. (something’s meaning is the extent and the ways it’s connected to things we value, and random elements aren’t)
“Determinism is not true just because everything is part of the universe.”
I didn’t say it was. I said that as each person is part of the universe, therefore “everything determined by the person is determined by the universe”.
That by itself would allow some things to be truly random (e.g. if the collapse interpretation of Quantum Mechanics was true), but I was specifically talking about the things that are determined to the extent that they’re determined. I don’t know how much clearer I can make it than this.
Also, I think this is the last time I respond to this discussion of free will. I think my position has been made as clear as I can make it, and I think your responses (both now and with the previous account) haven’t yet provided me with even a single useful counterpoint. So for me to keep on discussing this seems of negative utility.
They consider that a single stochastic element in a decision process suffices to make the decision process “free will”, even if the stochastic element (to the extent it’s stochastic) by definition wouldn’t have any causal connection to a person’s motivation or values?
Indeterministic choices can have a connection to the agent’s values that is not deterministically causal. Take six things you like doing, write them on small pieces of paper, and glue them to a die. However the die lands, it will not be against your values. Is that “causal connection”? Maybe, in a broad sense. However, only strict predetermination of the undetermined is excluded. That is not enough to bring about complete separation of indeterministic choices and values.
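A minimal sketch of the die mechanism just described, for concreteness (the activity names are invented placeholders): the roll itself is left to chance, but every face of the die was chosen by the agent, so whatever comes up is never against the agent’s values.

```python
import random

# Hypothetical, made-up list: six activities this agent already endorses.
valued_activities = [
    "reading", "hiking", "cooking", "playing piano", "gardening", "chess",
]

def indeterministic_choice(options):
    """Pick one option at random: the selection is not determined,
    but the menu of options is fixed by the agent's own values."""
    return random.choice(options)

choice = indeterministic_choice(valued_activities)
print(choice)                       # e.g. "hiking" -- varies from run to run
print(choice in valued_activities)  # always True: never against the agent's values
```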
People I’ve argued with on the internet regarding free will tend to believe the opposite, that non-deterministic free will somehow imbues more meaning to their choices, though expressed as in the above paragraph it would clearly imbue less meaning to their choices. (something’s meaning is the extent and the ways it’s connected to things we value, and random elements aren’t)
Since the above is not in fact a problem, indeterministic freedom does lend more meaning to choices. If it is true, elements of the future world can be traced back to my decisions in a way that stops there—whereas under determinism I am just one link in a very long chain.
That’s a non sequitur.
I didn’t say it was. I said that as each person is part of the universe, therefore “everything determined by the person is determined by the universe”.
Okay, I said I wasn’t gonna respond again, but I’d like to give you one last hypothetical, and then ask you a question regarding it.
Alice and Bob are taken by aliens and each (separately) given 4 choices arranged in a 2x2 table.
Column A, Row 1: Carl is promoted to a significantly higher-paying position that he’ll also be enjoying more.
Column A, Row 2: Carl is (unknowingly to him) implanted with a well-designed artificial heart which will be sure to secure his health against all heart-related issues.
Column B, Row 1: Carl is demoted to a significantly lower-paying position that he’ll also be enjoying less.
Column B, Row 2: Carl is (unknowingly to him) implanted with a badly-designed artificial heart which will be sure to worsen his health in regards to heart-related issues.
“Choose Column and Row for the action you want to take,” say the Aliens. “Of the ones you choose, please state also which column or row will be Definite, and which will be Stochastic.”
“What do you mean by ‘definite’ and ‘stochastic’?” ask both Alice and Bob.
“We’ll definitely do something in the element which you pronounce Definite, but there’s only a 55% chance we’ll go with the element you deem Stochastic—we’ll be flipping a non-fair coin to determine that one, with your choice corresponding to just the more likely side.”
Alice does her calculation. Her values and ethics all deterministically argue in favor of giving primary importance to Column A (the ‘good’ results) -- she’s definite about that, nor can she imagine a recognizable self of hers that would choose Column B against a random individual. Then she calculates with significantly less certainty that A2 (the better-heart cell) seems better than the A1 cell (the better-job cell). “For ‘Definite’ I pick Column A; for ‘Stochastic’ I pick Row 2 -- in short, the better heart with 55% probability, and the better job with 45% probability,” she tells the aliens.
“Apologies,” the aliens say, “but the coin went the other way than your preference, and we’ll have to do A1 instead—give Carl the better job instead of the better heart.”
Bob does his calculation. He has very strong ethics against people messing with other people’s bodies against their will. Even being given unfairly a worse job pales in comparison to the gross aversion Bob has against unconsented medical procedures. So, with great certainty following deterministically from Bob’s values, Bob chooses “Row 1” as his Definite element. He’s significantly less certain about Column A versus Column B. Promoting or demoting a random individual he’s not aware of—either could be judged fair or unfair if he had knowledge about Carl which he doesn’t. With some uncertainty he goes for A1 rather than B1. “For Definite I choose Row 1 (the jobs row). For Stochastic I choose Column A—in short, give him the better job.”
“Congratulations, the unfair coin we flipped went with your choice. A1 it’ll be.”
So, after a decision process with both stochastic and deterministic elements, Alice and Bob both ended up causing the selection of A1. But Alice had “A” as the deterministic element, and Bob had “1” as the deterministic element.
Now here’s my question: If you had to estimate their characters, values and personalities, wouldn’t you be able to attribute more meaning to the Deterministic element, instead of the one left to partial randomness? The partially random element would indeed completely mislead you in regards to Alice’s decision process.
“If it is true, elements of the future world can be traced back to my decisions in a way that stops there—whereas under determinism I am just one link in a very long chain.”
You assign good connotations to “stops there” and bad connotations to “one link in a very long chain”. But when I speak about “meaning”, I don’t mean ‘good meaning’ or ‘bad meaning’, I mean the amount of measurable information we can derive from the choice in question. Meaning as a metric which could theoretically be measurable in bits. And there’s 0 bits of information that can be derived from a truly random element. But from “one link in a very long chain” we can derive bits of information about both the past and the future—what the person may have likely done in the past, what they’re likely to choose in the future.
Now I’m hopefully done.
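To put a rough number on the “bits” talk above, here is a toy calculation (my own illustration, not part of the exchange; it assumes the hidden preference is a single fair coin flip and that the Stochastic element is honoured 55% of the time, as in the alien scenario). It computes how much an observer learns about the preference from the realized outcome in each case.

```python
from math import log2

def binary_entropy(p):
    """Entropy of a coin with bias p, in bits."""
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

# The hidden preference is modelled as one fair coin flip: 1 bit of uncertainty.
# For this simple symmetric setup, the mutual information between the preference
# and the realized outcome is 1 - H(match probability).

info_definite   = 1.0 - binary_entropy(1.0)   # outcome always matches: 1.0 bit
info_stochastic = 1.0 - binary_entropy(0.55)  # matches 55% of the time: ~0.007 bits
info_random     = 1.0 - binary_entropy(0.5)   # independent of the preference: 0.0 bits

print(info_definite, info_stochastic, info_random)
```

On those assumptions the Definite element carries a full bit about the chooser, the 55% Stochastic element carries roughly 0.007 bits, and a truly random element carries none; whether that is the sense of “meaning” that matters for free will is exactly what the two sides above dispute.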
Now here’s my question: If you had to estimate their characters, values and personalities, wouldn’t you be able to attribute more meaning to the Deterministic element, instead of the one left to partial randomness? The partially random element would indeed completely mislead you in regards to Alice’s decision process.
I don’t see how any of that is relevant to FW. Firstly, you are not contrasting deterministic decision making by an individual with stochastic decision making by an individual; the stochastic decision is supplied by someone else. It is not a roll of one’s personal die, with one’s personal values pasted onto its sides. The selection of choices is arbitrary and unconnected with Alice and Bob’s values.
Secondly, your notion of meaning, or information content, is one that hinges on how much information an external observer can get out of someone else’s choice. That is quite orthogonal to the issue of whether FW makes your choices more meaningful to you.
Perhaps you think deterministic decisions are expressive of an individual’s psychology, because they can be predicted from an individual’s psychology. But if you can predict someone’s decisions, why should they believe that they have nonetheless made a free choice?
You assign good connotations to “stops there” and bad connotations to “one link in a very long chain”. But when I speak about “meaning”, I don’t mean ‘good meaning’ or ‘bad meaning’, I mean the amount of measurable information we can derive from the choice in question. Meaning as a metric which could theoretically be measurable in bits. And there’s 0 bits of information that can be derived from a truly random element. But from “one link in a very long chains” we can derive bits of information about both the past and the future—what the person may have likely done in the past, what they’re likely to choose in the future.
And what’s that got to do with free choice?
But if you can predict someone’s decisions, why should they believe that they have nonetheless made a free choice?
Someone being free is always understood to mean something roughly equal to “able to act according to one’s own desires”; it doesn’t mean “unpredictable”.
Act on desires one happens to have, or act on desires one has originated?
Can you try to say what the difference is? At this point I think you are tying yourself up in semantic knots.
An obvious objection to “one is free if one is able to act according to one’s desires” is that one’s desires might be implanted, e.g. by brainwashing.
But it is not obvious where the border lies between brainwashing/indoctrination and simply sharing information. If we are discussing a mutual acquaintance (let’s call her Alice) and I tell you that she did some not nice action yesterday, you may have a desire to shun her the next time you two meet. Is that desire “your own”?
One could say that it is because you simply used your knowledge of her past actions to decide for yourself that you should shun her. On the other hand, one could say that I basically am controlling your actions, because me telling you what I said has affected your actions.
You can very easily make lots of other borderline cases like this yourself, and in fact they come up in real life very often. Consider the case where parents “indoctrinate” their kids with their religion. When the kid grows up to follow that religion, was it the kid’s own choice? Again, we find that the distinction is not complete. If the kid had not been raised in that religion, he likely would not be following it. But this is how most people in the world got their religion. I doubt that you go around to everyone and say that deep down they don’t really believe in it… But that is a separate discussion.
Anyway, what I am trying to say is that for every desire one has originated, there likely was some (external) reason why they have that desire. Like me telling them how nasty Alice had been, or their parents telling them that god exists. (And maybe Alice was nasty, or maybe she wasn’t; maybe god doesn’t exist or maybe he does; but that has no relevance.) In any case, that desire was caused by the outside factor, which shows that it is not very meaningful to try to separate out which desires were caused by outside factors. (As they all are to some extent or another.)
But it is not obvious where the border lies between brainwashing/indoctrination and simply sharing information.
Lots of borders aren’t obvious. Why should that present a special problem in this case?
One could say that it is because you simply used your knowledge of her past actions to decide for yourself that you should shun her. On the other hand, one could say that I basically am controlling your actions, because me telling you what I said has affected your actions.
Anyway, what I am trying to say is that for every desire one has originated, there likely was some (external) reason why they have that desire.
I don’t see why I should regard a desire as being originated when it also has some deterministic external cause. If, OTOH, a “reason” is just an influence, or partial cause, then it is compatible with partial origination.
I don’t see why I would have to do either. I need both the internal disposition to shun her, and the information. It is not either/or.