Do you feel that without possible futures it’s not actually a choice? Like, imagine a piece of rock. There can be events E1, E2, E3 that happen to it at different moments of time. Being part of the causal universe, the rock partially causes these events. But it doesn’t choose anything. Is it similar to how you feel about human choice under determinism?
As an intuition pump, imagine also a rock in a non-deterministic universe where either E3 or E3′ happens after E2. And also imagine a human in a deterministic one. Would the indeterministic rock be more free than the deterministic one? Would it be more free than a human in a deterministic universe? Where does this extra freedom come from?
To me, free will means something like ‘ability to choose between different possible futures’. And if there’s no forward-in-time branching, there’s only one possible future. (I admit that ‘branching’ is very under-defined here, and so is ‘possible’.)
There is this intuitive, vague feeling that freedom of will has something to do with the possibility of alternatives. People feel it, but do not have an actual model of how this all works together. And the thing is, this intuition is true. Just, as it happens, not in the way people initially think it is.
Here is a neat compatibilist model, according to which you (and not a rock) have the ability to select between different outcomes in a deterministic universe, and which explicitly specifies what ‘possible’ means: possibility is in the mind, and so is the branching of futures. When you are executing your decision-making algorithm, you mark some outcomes as ‘possible’ and backpropagate from them to the current choice you are making. Thus, your mental map of reality has branches of possible futures between which you are choosing. By design, the algorithm doesn’t allow you to choose an outcome you deem impossible. If you already know for certain what you will choose, then you’ve already chosen. So the initial intuition is kind of true. You do need ‘possible futures’ to exist so that you can have free will: so that you can exercise the decision-making ability that separates you from the rock. But the possibility and the branching futures do not need to exist separately from you. They can just be part of your mind.
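The picture above is concrete enough to sketch as a toy program. Everything here (the function names, the example actions and outcomes) is an illustrative invention, not anyone’s actual proposal: the point is only that a fully deterministic algorithm can hold the branching of ‘possible futures’ entirely inside its own model and still select between outcomes.

```python
# Toy sketch of the compatibilist model: the branching of 'possible
# futures' lives inside the agent's map, yet the (fully deterministic)
# algorithm still selects between outcomes. All names are illustrative.

def choose(actions, predict, utility, deemed_possible):
    """Deterministically pick the action whose predicted future is best,
    considering only the futures the agent marks as 'possible'."""
    branches = {a: predict(a) for a in actions}          # mental map of futures
    live = {a: f for a, f in branches.items() if deemed_possible(f)}
    # By design, the algorithm cannot select an outcome deemed impossible.
    return max(live, key=lambda a: utility(live[a]))

# A rock runs no such algorithm; this agent does, and that is the difference.
action = choose(
    actions=["study", "party"],
    predict=lambda a: {"study": "pass exam", "party": "fail exam"}[a],
    utility=lambda f: {"pass exam": 1.0, "fail exam": 0.0}[f],
    deemed_possible=lambda f: True,
)
print(action)  # -> study
```

Note that `deemed_possible` is the agent’s own judgment, part of its map: the ‘branches’ never exist anywhere except inside the running algorithm.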
And when you truly understand it, the alternatives seem kind of ridiculous, to be honest. Why would parts of your decision-making process exist outside of your mind? What does it even mean for possible futures to exist separately from the mind that is modelling them? The whole point of the future is that it is not there yet, so even the ‘actual future’ doesn’t exist at the moment of decision making. And then there is a whole other layer of non-existence with the concept of ‘physical possibility’.
What does it even mean for possible futures to exist separately from the mind that is modelling them?
Not physically, but platonic objects that serve as semantics for formal syntax make sense, and only the syntax straightforwardly exists in the mind, not the semantics it admits. So these are the parts of decision making that exist outside of your mind, in the same sense as mathematical objects exist outside of a mathematician’s mind.
Good point. I’m equating logical existence with existence in one’s mind in this post, but if we don’t do that, then indeed we can say that possible futures exist platonically, just as mathematical objects do.
I’m equating logical existence with existence in one’s mind in this post
But then the territory is in the mind? The distinction is the mind’s blindness to most of the details of the platonic objects it reasons about; thus they are a separate existence, only partially observed.
I should clarify that I’m not arguing for libertarianism here, just trying to understand the appeal of (and sometimes arguing against) compatibilism.
(FWIW, I don’t think libertarian free will is definitely incoherent or impossible, and combined with my incompatibilism that makes me in practice a libertarian-by-default: if I’m free to choose which stance to take, libertarianism is the correct one. Not that that helps much in resolving any of the difficult downstream questions, e.g. about when and to what extent people are morally responsible for their choices.)
Here is a neat compatibilist model, according to which you (and not a rock) have the ability to select between different outcomes in a deterministic universe, and which explicitly specifies what ‘possible’ means: possibility is in the mind, and so is the branching of futures. When you are executing your decision-making algorithm, you mark some outcomes as ‘possible’ and backpropagate from them to the current choice you are making. Thus, your mental map of reality has branches of possible futures between which you are choosing. By design, the algorithm doesn’t allow you to choose an outcome you deem impossible. If you already know for certain what you will choose, then you’ve already chosen. So the initial intuition is kind of true. You do need ‘possible futures’ to exist so that you can have free will: so that you can exercise the decision-making ability that separates you from the rock. But the possibility and the branching futures do not need to exist separately from you. They can just be part of your mind.
I’m sorry to give a repetitive response to a thoughtful comment, but my reaction to this is the predictable one: I don’t think I’m failing to understand you, but what you’re describing as free will is what I would describe as the illusion of free will.
Aside from the semantic question, I suspect a crux is that you are confident that libertarian free will is ‘not even wrong’, i.e. almost meaninglessly vague in its original form and incoherent if specified more precisely? So the only way to rescue the concept is to define free will in such a way that we only need to explain why we feel like we have the thing we vaguely gesture at when we talk about libertarian free will.
If so, I disagree: I admit that I don’t have a good model of libertarian free will, but I haven’t seen sufficient reason to completely rule it out. So I prefer to keep the phrase ‘free will’ for something that fits with my (and I think many other people’s) instinctive libertarianism, rather than repurpose it for something else.
I should clarify that I’m not arguing for libertarianism here, just trying to understand the appeal of (and sometimes arguing against) compatibilism.
The major appeal of compatibilism for me is that there is an actual model describing how freedom of will works: how it depends on the notion of possibility, how it allows us to distinguish between entities that have free will and entities that do not, how it corresponds to laymen’s intuitions and usage of the term, and how it adds up to normality while solving practical matters such as questions of personal responsibility.
I’ve yet to see anything with a similar level of clarity from any other perspective on the matter.
So the only way to rescue the concept is to define free will in such a way that we only need to explain why we feel like we have the thing we vaguely gesture at when we talk about libertarian free will.
I don’t think the explanation I’ve given you can be said to be just about the feeling of free will. That’s part of it. But it also explains the actual decision-making algorithm corresponding to these feelings. This algorithm is executing in reality. And having this algorithm executed on your brain gives you new abilities compared to not having one (back to the person-and-rock example). Nor is this algorithm just about your beliefs. At this point, calling it “an illusion” seems very semantically weird to me, especially when there isn’t a proper model of what the non-illusion is supposed to be.
Could you help me understand why your choice of definitions is like that? Do you call it illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision making algorithm and so only one can really happen? But isn’t it the same with indeterminism? Or is it because the possible futures in your mind do not correspond to something outside of it?
I agree that your model is clearer and probably more useful than any libertarian model I’m aware of (with the possible exception, when it comes to clarity, of some simple models that are technically libertarian but not very interesting).
Do you call it illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision making algorithm and so only one can really happen?
Something like that. The SEP says “For most newcomers to the problem of free will, it will seem obvious that an action is up to an agent only if she had the freedom to do otherwise.”, and basically I a) have not let go of that naive conception of free will, and b) reject the analyses of ‘freedom to do otherwise’ that are consistent with complete physical determinism.
I know it seems like the alternatives are worse; I remember getting excited about reading a bunch of Serious Philosophy about free will, only to find that the libertarian models that weren’t completely mysterious were all like ‘mostly determinism, but maybe some randomness happens inside the brain at a crucial moment, and then everything downstream of that counts as free will for some reason’.
But basically I think there’s enough of a crack in our understanding of the world to allow for the possibility that either a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics; or b) libertarian free will is real but just inherently baffling, like consciousness (qualia) or some of the impossible ontological questions.
But what’s the difference between deterministic and indeterministic universes here? In either case we have a decision-making algorithm. In either case there will be only one actual output of it. The only difference I see is something that can be called “unpredictability in principle” or “decision instability”. If we run the exact same decision-making algorithm again in the exact same context multiple times, in a deterministic universe we get the exact same output every time, while in an indeterministic universe the outputs will differ. So it leads us to this completely unsatisfying perspective:
‘mostly determinism, but maybe some randomness happens inside the brain at a crucial moment, and then everything downstream of that counts as free will for some reason’.
Notice also that even if it’s impossible to actually run the same decision-making algorithm in the same context from inside this deterministic universe, this will still not be satisfying for your intuition. Because what if someone outside of the universe is recreating a whole simulation of our universe in exact detail and is thus completely able to predict my decisions? It doesn’t even matter whether these beings outside of the universe with their simulation exist. It’s just the principle of the thing.
And the thing is, the intuition of requiring “decision instability” isn’t that obvious to a newcomer to the problem of free will. It’s a specific and weird bullet to swallow. How do people arrive at this? I suspect it goes something like this: when we imagine multiple exact replications of our decision-making algorithm always coming to the same conclusion, it feels as if we are not free to come to another conclusion, and thus our decision making wasn’t free in the first place. I think this is a very subtle goalpost shift.
Originally, we do not demand from the concept of freedom of will the ability to retroactively change our decisions. When you made a choice five minutes ago, you do not claim to lack free will unless you can time-travel back and make a different choice. We cannot change a choice we’ve already made. But that doesn’t mean the choice wasn’t free.
The situation with recreating your decision-making algorithm in the exact same conditions as before is exactly that. You’ve already made the choice. And now you can’t retroactively make it different. But this doesn’t mean that the choice wasn’t free in the first place.
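The “decision instability” contrast above can be made concrete with a small sketch. The example contexts and the use of an unseeded random generator as a stand-in for indeterminism are purely illustrative assumptions; the point is only that the sole observable difference between the two universes is what happens when the identical procedure is rerun in the identical context.

```python
# Deterministic vs. 'indeterministic' reruns of the same decision procedure
# in the same context. The RNG here is only an analogy for indeterminism.

import random

def decide(context, rng=None):
    if rng is None:
        # Deterministic rule: same context in, same choice out, every time.
        return max(context, key=context.get)
    # 'Indeterministic' variant: the same context can yield different
    # choices on different runs.
    return rng.choice(sorted(context))

context = {"tea": 2, "coffee": 3}

deterministic_runs = {decide(context) for _ in range(100)}
print(deterministic_runs)  # always {'coffee'}

rng = random.Random()      # unseeded: stands in for genuine indeterminism
indeterministic_runs = {decide(context, rng) for _ in range(100)}
# almost certainly both options appear: identical reruns, differing outputs
```

Nothing else about the two procedures differs: each produces exactly one actual output per run, which is the point being argued above.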
But basically I think there’s enough of a crack in our understanding of the world to allow for the possibility that either a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics; or b) libertarian free will is real but just inherently baffling, like consciousness (qualia) or some of the impossible ontological questions.
I think there is a case to be made here for a “Generalised God of the Gaps” principle.
The only difference I see is something that can be called “unpredictability in principle” or “decision instability”.
Note that there is no fact that decision-making actually is an algorithm: that’s just an assumption rationalists favour.
Note that everyone subjectively experiences some amount of “decision instability”: you might be unable to make a decision, or immediately regret one.
So the territory is much more in favour of decision instability than your favoured map.
a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics;
Some libertarians already have mechanistic (up to indeterminism) theories, eg. Robert Kane.
The major appeal of compatibilism for me is that there is an actual model describing how freedom of will works,
I.e., it doesn’t. Compatibilism has to manage expectations.
But it also explains the actual decision-making algorithm corresponding to these feelings.
Libertarians can say that free agency is the execution of an algorithm, too. It’s just that it would be an indeterministic algorithm.
(Incidentally, no one has put forward any reason that any algorithm should feel like anything).
Do you call it illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision making algorithm and so only one can really happen? But isn’t it the same with indeterminism?
No. An indeterministic coin-flip has two really possible outcomes.
Libertarians can say that free agency is the execution of an algorithm, too. It’s just that it would be an indeterministic algorithm.
A libertarian algorithmic explanation has to be quite different from a compatibilist one. At the very least, it needs to account for the source of the connection between the possible futures in your mind and ‘real’ possible futures and for the nature of this ‘realness’; it needs its own way to reduce ‘couldness’ and ‘possibility’ to being, a model of what happens to all the alternative future branches, of how previously undeterminable events become determined by actually happening in the present, and of how the combination of determinable and indeterminable events produces free will. If you think these are answered questions, please make a separate post about it.
(Incidentally, no one has put forward any reason that any algorithm should feel like anything).
Not really relevant, but here is a reason for you. Feeling X is having a representation of X in your model of self. Some things are encoded so as to have a representation in it and some are not, depending on whether this information is deemed important for the central planning agent by evolution. Global decision making is extremely important, and maybe even the reason why the central planning agent exists in the first place, so the steps of this algorithm are encoded in the model of the self.
No. An indeterministic coin-flip has two really possible outcomes.
Call them real as much as you want; it’s still either heads or tails when you actually flip the coin, not both.
I.e., it doesn’t.
Sigh. We’ve had multiple opportunities to discuss these issues before, and sadly you haven’t managed to explain anything about libertarianism to my satisfaction and kept talking past me. I’m not sure whether that’s more your fault or mine, but in any case I’d like to discuss these questions with someone who I have more hope will understand my position and explain theirs. So this is my last reply to you in this thread. I repeat my request that you write your own post on the matter if you think you have something to say. Frankly, I find the fact that you write replies in a thread addressed to compatibilists a bit gauche.
A libertarian algorithmic explanation has to be quite different from a compatibilist one
Of course: they have to explain more.
At the very least, it needs to account for the source of the connection between the possible futures in your mind and ‘real’ possible futures,
Of course, but that’s just a special case of accurate map-making, not some completely unique problem.
the nature of this ‘realness’
Determinism is a special case of indeterminism. Indeterminism is tautologically equivalent to real possibilities. Since determinism is the special case, it is more in need of defense than the general case.
I explained that in my PM of 1st July 2022, which you never replied to.
I previously said that determinism is just the special case of indeterminism in which every transition has probability 1.0. Likewise, a causal diagram is a special case of a probabilistic state transition diagram.
If a causal diagram is a full explanation of determinism, a probabilistic state transition diagram is a full explanation of indeterminism.
What, in general, is the problem? If you know what a word means, it is usually easy to figure out what its opposite means. “Poor” means not-rich… so if you know what “rich” means, you know what “poor” means without additional information or additional concepts. Why would “indeterminism” be an exception to the rule?
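The special-case claim is easy to state formally. This is only a minimal sketch under the stated assumption that a transition system is represented as a map from states to successor-probability maps (the state names and representation are invented for illustration): a deterministic system is exactly a stochastic one whose transition probabilities are all 0 or 1.

```python
# 'A causal diagram is a special case of a probabilistic state transition
# diagram': deterministic = all transition probabilities are 0 or 1.

deterministic = {            # causal diagram: each state has one certain successor
    "A": {"B": 1.0},
    "B": {"C": 1.0},
}
indeterministic = {          # general case: probability may be spread out
    "A": {"B": 0.7, "C": 0.3},
    "B": {"C": 1.0},
}

def is_deterministic(system):
    """True iff every transition probability in the system is 0 or 1."""
    return all(p in (0.0, 1.0)
               for successors in system.values()
               for p in successors.values())

print(is_deterministic(deterministic))    # True
print(is_deterministic(indeterministic))  # False
```

On this representation, the same formalism describes both cases, which is what makes the “full explanation” claim above at least well-posed.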
how previously undeterminable events become determined by actually happening in the present, and how the combination of determinable and indeterminable events produces free will. If you think these are answered questions, please make a separate post about it.
No libertarian makes the claim that undeterminable events become determined. Undetermined future events eventually happen, which does not make them causally determined in retrospect. (Once they have happened, we can determine their values, but that is a different sense of “determine”.)
I have already explained that in my July 1st reply, quoting previous explanations I had already given.
It just seems that that’s the way the universe is. And while not a really satisfying answer, it at least makes sense. If the universe allows causality, no surprise that we have causality. Compare to this: the universe doesn’t allow some things to be determinable, but we still somehow are able to determine them. This seems like an obvious contradiction to me and is the reason I can’t grasp an understanding of indeterminism on a gut level.
It’s not a contradiction because the two “determines” mean different things.
That’s conflating two meanings of “determined”. There’s an epistemic meaning, by which you “determine” that something has happened: you gain positive, or “determinate”, knowledge of it. And there’s causal determinism, the idea that a situation can only turn out or evolve in one particular way. They are related, but not in such a way that you can infer causal determinism from epistemic determination. You can have determinate knowledge of an indeterminate coin flip.
Feeling X is having a representation of X in your model of self
No one has put forward a reason why having a representation of X should feel like anything.
Call them real as much as you want; it’s still either heads or tails when you actually flip the coin, not both.
You are saying what...? That there cannot have been two possibilities, because there is only one actuality? But that there can be is the whole point of the word “possibility”, even for in-the-mind possibilities.
We’ve had multiple opportunities to discuss these issues before, and sadly you haven’t managed to explain anything about libertarianism to my satisfaction
You ignored my long message of July 1st. It’s not that I am not trying to communicate.
Why do you think LFW is real? The only naturalistic frameworks I’ve seen that support LFW are ones like Penrose’s Orch-OR, which postulate that ‘decisions’ are quantum (any process caused by the collapse of the quantum states of the brain). But it seems unlikely that the brain behaves as a coherent quantum state. If the brain is classical, decisions are macroscopic and they are determined, even in Copenhagen.
And in this sense, what you have is some inherent randomness within the decision-making algorithms of the brain; there’s no special capability of the self to ‘freely’ choose while at the same time not being determined by its circumstances, just a truly random factor in the decision-making process.
I’m not saying it’s real—just that I’m not convinced it’s incoherent or impossible.
And in this sense, what you have is some inherent randomness within the decision-making algorithms of the brain
This might get me thrown into LW jail for posting under the influence of mysterianism, but:
I’m not convinced that there can’t be a third option alongside ordinary physical determinism and mere randomness. There’s a gaping hole in our (otherwise amazingly successful and seemingly on the way to being comprehensive) physical picture of reality: what the heck is subjective experience? From the objective, physical perspective there’s no reason anything should be accompanied by feelings; but each of us knows from direct experience that at least some things are. To me, the Hard Problem is real but probably completely intractable. Likewise, there are some metaphysical questions that I think are irresolvably mysterious—Why is there anything? Why this in particular? -- and they point to the fact that our existing concepts, and I suspect our brains, are inadequate to the full description or explanation of reality. This is of course not a good excuse for an anything-goes embrace of baseless speculation or wishful thinking; but the link between free will and consciousness, combined with the baffling mystery of consciousness (in the qualia sense), leaves me open to the possibility that free will is something weird and different from anything we currently understand and maybe even inexplicable.
Thank you for your answer.
Do you feel that without possible futures it’s not actually a choice? Like, imagine a piece of rock. There are can be events E1, E2, E3 that happen to it at different moments of time. Being part of the causal universe, the rock partually causes these events. But it doesn’t choose anything. Is it similar to how you feel abut human choice under determenism?
As an intuition pump, imagine also a rock in a non-deterministic universe where either E3 or E3′ happens after E2. And also imagine a human in a deterministic one. Would the indeterministic rock be more free than deterministic one? Would it be more free than a human in a deterministic universe? WHere does this extra freedom comes from?
There is this intuitive vague feeling that freedom of will has to do something with possibility of alternatives. People feel it, but do not have an actual model of how this all work together. And the thing is, this intuition is true. Just, as it happens, not the way people initially think it is.
Here is a neat compatibilist model, according to which you (and not a rock) have an ability to select between different outcomes in a deterministic universe and which explicitly specify what ‘possible’ mean: possibility is in the mind and so is the branching of futures. When you are executing your decision making algorithm you mark some outcomes as ‘possible’ and backpropagate from them to the current choice you are making. Thus, your mental map of the reality has branches of possible futures between which you are choosing. By design, the algorithm doesn’t allow you to choose an outcome you deem impossible. If you already know for certain what you will choose, than you’ve already chosen. So the initial intuition is kind of true. You do need ‘possible futures’ to exist so that you can have free will: perform your decision making ability which separates you from the rock. But the possibility, and branching futures do not need to exist separately of you. They can just be part of your mind.
And when you truly understand it, the alternatives seem kind of ridiculous, to be honest. Why would parts of your decision making process exist outside of your mind? What does it even mean for possible futures to exist separately of the mind that is modelling them? The whole point of future is not there yet, so even the ‘actual future’ doesn’t exist at the moment of decision making. And then there is the whole other layer of non-existence with the concept of ‘physical possibility’.
Not physically, but platonic objects that serve as semantics for formal syntax make sense, and only syntax straightforwardly exists in the mind, not semantics it admits. So these are the parts of decision making that exist outside of your mind, in the same sense as mathematical objects exist outside of a mathematician’s mind.
Good point. I’m equalizing between logical existence and existence in one’s mind in this post, but if we don’t do that then indeed we can say that possible futures exist platonically just as mathematical objects.
But then territory is in the mind? The distinction is mind’s blindness to most of the details of the platonic objects it reasons about, thus they are separate existence only partially observed.
I should clarify that I’m not arguing for libertarianism here, just trying to understand the appeal of (and sometimes arguing against) compatibilism.
(FWIW, I don’t think libertarian free will is definitely incoherent or impossible, and combined with my incompatibilism that makes me in practice a libertarian-by-default: if I’m free to choose which stance to take, libertarianism is the correct one. Not that that helps much in resolving any of the difficult downstream questions, e.g. about when and to what extent people are morally responsible for their choices.)
I’m sorry to give a repetitive response to a thoughtful comment, but my reaction to this is the predictable one: I don’t think I’m failing to understand you, but what you’re describing as free will is what I would describe as the illusion of free will.
Aside from the semantic question, I suspect a crux is that you are confident that libertarian free will is ‘not even wrong’, i.e. almost meaninglessly vague in its original form and incoherent if specified more precisely? So the only way to rescue the concept is to define free will in such a way that we only need to explain why we feel like we have the thing we vaguely gesture at when we talk about libertarian free will.
If so, I disagree: I admit that I don’t have a good model of libertarian free will, but I haven’t seen sufficient reason to completely rule it out. So I prefer to keep the phrase ‘free will’ for something that fits with my (and I think many other people’s) instinctive libertarianism, rather than repurpose it for something else.
The major appeal of compatibilism for me is that there is an actual model, describing how freedom of will works, how it depends on the notion of possibility, allows to distinguish between entities that have free will and entities who do not and how it corresponds to the layman intuitions and usage of the term and adds up to normality while solving practical matters such as the questions of personal responsibility.
I’ve yet to see anything with similar level of clarity from any other perspective on the matter.
I don’t think that the explanation I’ve given you can be said to be just about the feeling of free will. It’s part of it. But also it explains the actual decision making algorith, corresponding to these feelings. This algorith is executing in reality. And having this algorithm being executed on your brain gives new abilities compared to not having one (back to a person and a rock example). Neither this algorithm is just about your beliefs. At this moment calling it “an illusion” seems very semantically weird to me. Especially when there isn’t a propper model of what non-illusion supposed to be.
Could you help me understand why your choice of definitions is like that? Do you call it illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision making algorithm and so only one can really happen? But isn’t it the same with indeterminism? Or is it because the possible futures in your mind do not correspond to something outside of it?
I agree that your model is clearer and probably more useful than any libertarian model I’m aware of (with the possible exception, when it comes to clarity, of some simple models that are technically libertarian but not very interesting).
Something like that. The SEP says “For most newcomers to the problem of free will, it will seem obvious that an action is up to an agent only if she had the freedom to do otherwise.”, and basically I a) have not let go of that naive conception of free will, and b) reject the analyses of ‘freedom to do otherwise’ that are consistent with complete physical determinism.
I know it seems like the alternatives are worse; I remember getting excited about reading a bunch of Serious Philosophy about free will, only to find that the libertarian models that weren’t completely mysterious were all like ‘mostly determinism, but maybe some randomness happens inside the brain at a crucial moment, and then everything downstream of that counts as free will for some reason’.
But basically I think there’s enough of a crack in our understanding of the world to allow for the possibility that either a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics; or b) libertarian free will is real but just inherently baffling, like consciousness (qualia) or some of the impossible ontological questions.
But what’s the difference between determinist and indeterminist universes here? In any case we have a decision making algorithm. In any case there will be only one actual output of it. The only difference I see is something that can be called “unpredictability in principle” or “desicion instability”. If we run the exact same decision making algorithm again in the exact same context multiple times, in determenist universe we get the exact same output every time, while in indeterminist universe the outputs will differ. So it leads us to this completely unsatisfying perspective:
Notice also, that even if it’s impossible to actually run the same decision making algorithm in the same context from inside this determinist universe, this will still not be satisfying for your intuition. Because what if someone outside of the universe is recreating a whole simulation of our universe in exact details and thus completely able to predict my desicions? It doesn’t even matter if these beings outside of the universe with their simulation exist. It’s just the principle of things.
And the thing is, the intition of requiring “desicion instability” isn’t that obvious for the newcomer to the problem of free will. It’s a specific and weird bullet to swallow. How do people arrive to this? I suspect that it goes something like that: When we imagine multiple exact replications of our decision making algorithm always comming to the same conclusion, it feels that we are not free to come to the other conclusion, thus our desicion making isn’t free in the first place. I think this is a very subtle goalpost shift.
Originally we do not demand of the concept of free will the ability to retroactively change our decisions. If you made a choice five minutes ago, you don’t claim to lack free will just because you can’t travel back in time and choose differently. We cannot change a choice we’ve already made, but that doesn’t mean the choice wasn’t free.
Recreating your decision-making algorithm in exactly the same conditions as before is exactly that situation. You’ve already made the choice, and now you can’t retroactively make it different. But this doesn’t mean the choice wasn’t free in the first place.
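The replay thought experiment above can be made concrete with a toy sketch in Python (the agents, options, and utilities below are purely hypothetical illustrations, not a model anyone in the thread proposed): replaying a deterministic decision procedure in an identical context yields the same output every time, while a procedure with an irreducibly random step can diverge across replays.

```python
import random

def run_decision(algorithm, context, trials=5):
    """Replay the same decision-making algorithm in the same context several times."""
    return [algorithm(context) for _ in range(trials)]

def deterministic_agent(context):
    # Deterministic universe: the output is fixed by the context alone,
    # so every replay yields the same choice.
    return max(context["options"], key=lambda o: context["utility"][o])

def indeterministic_agent(context):
    # Indeterministic universe: an irreducibly random factor enters the
    # process, so replays in the identical context can diverge.
    rng = random.Random()  # fresh, unseeded randomness on every run
    noisy = {o: context["utility"][o] + rng.gauss(0, 1) for o in context["options"]}
    return max(context["options"], key=noisy.get)

context = {"options": ["tea", "coffee"], "utility": {"tea": 0.6, "coffee": 0.5}}
print(run_decision(deterministic_agent, context))    # always the same answer
print(run_decision(indeterministic_agent, context))  # answers may differ between trials
```

Note that “decision instability” here is nothing more than the injected noise; the rest of the procedure is identical in both cases, which is the point of the argument above.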
I think there is a case for a “Generalised God of the Gaps” principle to be made here.
Note that it is not an established fact that decision-making actually is an algorithm: that’s just an assumption rationalists favour.
Note that everyone subjectively experiences some amount of “decision instability”—you might be unable to make a decision, or immediately regret one.
So the territory is much more in favour of decision instability than your favoured map.
Some libertarians already have mechanistic (up to indeterminism) theories, e.g. Robert Kane.
I.e., it doesn’t. Compatibilism has to manage expectations.
Libertarians can say that free agency is the execution of an algorithm, too. It’s just that it would be an indeterministic algorithm.
(Incidentally, no one has put forward any reason that any algorithm should feel like anything).
No. An indeterministic coin-flip has two really possible outcomes.
A libertarian algorithmic explanation has to be quite different from a compatibilist one. At a minimum, it needs to account for the source of the connection between the possible futures in your mind and the ‘real’ possible futures, and for the nature of this ‘realness’; it needs its own way of reducing ‘couldness’ and ‘possibility’ to being; and it needs a model of what happens to all the alternative future branches, of how previously undeterminable events become determined by actually happening in the present, and of how the combination of determinable and indeterminable events produces free will. If you think these are answered questions, please make a separate post about it.
Not really relevant, but here is a reason for you. Feeling X is having a representation of X in your model of self. Some things are encoded to have a representation in it and some are not, depending on whether that information is deemed important for the central planning agent by evolution. Global decision making is extremely important, and maybe even the reason the central planning agent exists in the first place, so the steps of this algorithm are encoded in the model of the self.
Call them real as much as you want; it’s still either heads or tails when you actually flip the coin, not both.
Sigh. We’ve had multiple opportunities to discuss these issues before, and sadly you haven’t managed to explain anything about libertarianism to my satisfaction and have kept talking past me. I’m not sure whether that’s more your fault or mine, but in any case I’d rather discuss these questions with someone who I have more hope will understand my position and explain theirs. So this is my last reply to you in this thread. I repeat my request: write your own post on the matter if you think you have something to say. Frankly, I find it a bit gauche that you write replies in a thread addressed to compatibilists.
Of course: they have to explain more.
Of course, but that’s just a special case of accurate map-making, not some completely unique problem.
Determinism is a special case of indeterminism. Indeterminism is tautologically equivalent to real possibilities. Since determinism is the special case, it is more in need of defense than the general case.
I explained that in my PM of 1st July 2022, which you never replied to.
No libertarian makes the claim that undeterminable events become determined. Undetermined future events eventually happen, which does not make them causally determined in retrospect. (Once they have happened, we can determine their values, but that is a different sense of “determine”.)
I have already explained that in my July 1st reply, quoting previous explanations I had already given.
No one has put forward a reason why having a representation of X should feel like anything.
You are saying what...? That there cannot have been two possibilities, because there is only one actuality? But that there can be is the whole point of the word “possibility”, even for in-the-mind possibilities.
You ignored my long message of July 1st. It’s not that I am not trying to communicate.
Why do you think LFW is real? The only naturalistic frameworks I’ve seen that support LFW are ones like Penrose’s Orch-OR, which postulate that ‘decisions’ are quantum (any process caused by the collapse of quantum states in the brain). But it seems unlikely that the brain behaves as a coherent quantum state. If the brain is classical, decisions are macroscopic and they are determined, even under Copenhagen.
And in that case, what you have is some inherent randomness within the decision-making algorithms of the brain. There is no special capability of the self to ‘freely’ choose while not being determined by its circumstances; there is just a truly random factor in the decision-making process.
I’m not saying it’s real—just that I’m not convinced it’s incoherent or impossible.
This might get me thrown into LW jail for posting under the influence of mysterianism, but:
I’m not convinced that there can’t be a third option alongside ordinary physical determinism and mere randomness. There’s a gaping hole in our (otherwise amazingly successful and seemingly on the way to being comprehensive) physical picture of reality: what the heck is subjective experience? From the objective, physical perspective there’s no reason anything should be accompanied by feelings; but each of us knows from direct experience that at least some things are. To me, the Hard Problem is real but probably completely intractable. Likewise, there are some metaphysical questions that I think are irresolvably mysterious—Why is there anything? Why this in particular?—and they point to the fact that our existing concepts, and I suspect our brains, are inadequate to the full description or explanation of reality. This is of course not a good excuse for an anything-goes embrace of baseless speculation or wishful thinking; but the link between free will and consciousness, combined with the baffling mystery of consciousness (in the qualia sense), leaves me open to the possibility that free will is something weird and different from anything we currently understand and maybe even inexplicable.