I agree that your model is clearer and probably more useful than any libertarian model I’m aware of (with the possible exception, when it comes to clarity, of some simple models that are technically libertarian but not very interesting).
Do you call it an illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision-making algorithm, and so only one can really happen?
Something like that. The SEP says “For most newcomers to the problem of free will, it will seem obvious that an action is up to an agent only if she had the freedom to do otherwise.”, and basically I a) have not let go of that naive conception of free will, and b) reject the analyses of ‘freedom to do otherwise’ that are consistent with complete physical determinism.
I know it seems like the alternatives are worse; I remember getting excited about reading a bunch of Serious Philosophy about free will, only to find that the libertarian models that weren’t completely mysterious were all like ‘mostly determinism, but maybe some randomness happens inside the brain at a crucial moment, and then everything downstream of that counts as free will for some reason’.
But basically I think there’s enough of a crack in our understanding of the world to allow for the possibility that either a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics; or b) libertarian free will is real but just inherently baffling, like consciousness (qualia) or some of the impossible ontological questions.
But what’s the difference between determinist and indeterminist universes here? In either case we have a decision-making algorithm. In either case there will be only one actual output of it. The only difference I see is something that could be called “unpredictability in principle” or “decision instability”. If we run the exact same decision-making algorithm again in the exact same context multiple times, in a determinist universe we get the exact same output every time, while in an indeterminist universe the outputs will differ. So it leads us to this completely unsatisfying perspective:
‘mostly determinism, but maybe some randomness happens inside the brain at a crucial moment, and then everything downstream of that counts as free will for some reason’.
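The re-run contrast can be made concrete with a toy sketch (the “tea/coffee” decision and the flip probability are invented purely for illustration):

```python
import random

def deterministic_decision(state):
    # Output depends only on the input state: same context, same choice.
    return "tea" if state % 2 == 0 else "coffee"

def indeterministic_decision(state, rng):
    # Same algorithm, but a random event can flip the outcome
    # at a "crucial moment".
    base = deterministic_decision(state)
    if rng.random() < 0.1:  # the injected indeterminism
        return "coffee" if base == "tea" else "tea"
    return base

# Re-run the exact same algorithm in the exact same context many times.
runs_det = {deterministic_decision(42) for _ in range(1000)}
rng = random.Random(0)  # seeded only so the example is reproducible
runs_indet = {indeterministic_decision(42, rng) for _ in range(1000)}

print(runs_det)    # a single output, every time
print(runs_indet)  # typically both outputs appear
```

The only observable difference between the two functions is exactly the "decision instability" described above: the deterministic version always produces one output per context, while the indeterministic one scatters its outputs across re-runs.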
Notice also that even if it’s impossible to actually run the same decision-making algorithm in the same context from inside this determinist universe, that will still not satisfy your intuition. Because what if someone outside of the universe is recreating a whole simulation of our universe in exact detail, and is thus able to predict my decisions completely? It doesn’t even matter whether these beings outside the universe, with their simulation, actually exist. It’s the principle of the thing.
And the thing is, the intuition requiring “decision instability” isn’t that obvious to a newcomer to the problem of free will. It’s a specific and weird bullet to swallow. How do people arrive at it? I suspect it goes something like this: when we imagine multiple exact replications of our decision-making algorithm always coming to the same conclusion, it feels that we are not free to come to the other conclusion, and thus that our decision-making wasn’t free in the first place. I think this is a very subtle goalpost shift.
Originally we do not demand, as part of the concept of free will, the ability to retroactively change our decisions. When you made a choice five minutes ago, you do not claim to lack free will unless you can time-travel back and make a different choice. We cannot change a choice we’ve already made. But that doesn’t mean the choice wasn’t free.
The situation with recreating your decision-making algorithm in the exact same conditions as before is exactly that. You’ve already made the choice. And now you can’t retroactively make it different. But this doesn’t mean that the choice wasn’t free in the first place.
But basically I think there’s enough of a crack in our understanding of the world to allow for the possibility that either a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics; or b) libertarian free will is real but just inherently baffling, like consciousness (qualia) or some of the impossible ontological questions.
I think there is a case for a “Generalised God of the Gaps” principle to be made here.
The only difference I see is something that could be called “unpredictability in principle” or “decision instability”.
Note that it is not an established fact that decision-making actually is an algorithm: that’s just an assumption rationalists favour.
Note that everyone subjectively experiences a degree of “decision instability”: you might be unable to make a decision, or immediately regret one you have made.
So the territory is much more in favour of decision instability than your favoured map.
a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics;
Some libertarians already have mechanistic (up to indeterminism) theories, e.g. Robert Kane.