But it would be better to read something written by an expert, like this:
https://plato.stanford.edu/entries/freewill/#ArguForRealFreeWill
It’s a good recommendation, thank you. But whether it’s better depends on the individual reader. For me, way back when I got stuck on the idea that both determinism and randomness seemed incompatible with free will, what got me out of it was someone asking me, “Well, what is it you want from your free will? If what you want is to act for reasons, then determinism doesn’t take that away.” That changed the way I thought just enough for further reflection and a bit more reading to get me the rest of the way.
Stylistically, I found the Free Will sequence helped me examine and internalize the relevant ideas far more intuitively than more academic philosophy sources ever did, because what I needed was to be beaten over the head with the point that it wasn’t actually mysterious. Reading summaries of past arguments by various philosophers, most of whom were either partly religious in outlook or unable/unwilling (for many reasons) to engage with the nature of physical reality as we moderns understand it, had never been enough for me.
Neither does indeterminism. But determinism takes away an open, changeable future.
Naturalistic libertarian free will isn’t mysterious, and isn’t recommended in the Sequences either. The Sequences do not give a unique solution.
Not arguing with that, for sure. Still, just knowing, at a gut level, that non-mysterious solutions exist was a critical step on my own journey.
Indeterminism makes the problem harder; randomness means there is no part of “me” (physical or otherwise) deciding what I do, and I don’t know of any non-random indeterministic conception of free will. I’ve looked, and I haven’t seen anything that would even suggest the shape of what such a thing could look like.
Supernatural solutions don’t actually address the question of determinism at all, despite sometimes claiming to do so (at best, they hide the gears somewhere unobservable-in-principle).
And I don’t think the more psychological arguments about belief in free will and uncertainty about your own mind-state or predictability within the world are likely to be helpful to the OP, given the content of the post and prior comments.
Randomness means there is no one part of you deciding everything, no ghostly string-puller. But naturalistic determinism means that, too.
The scientific question of free will becomes the question of how the machine behaves: whether it has the combination of unpredictability, self-direction, self-modification, and so on, that might characterise free will… depending on how you define free will.
(Or one may be hung up on the idea that if there is one little bit of randomness, then everything is attributable to that.)
I don’t know what you mean by “I’ve looked”, but Robert Kane and Tony Dore have such theories.
If you’re talking about the two-stage model, I’m aware of it but haven’t read their original writings. Still, I don’t see how that could possibly help make my choices more or less “free” in any sense I care about for any philosophical, motivational, or moral reason.
If I am deterministically selecting among options generated within myself by an indeterministic process, sure, that’s possible, and I appreciate that it’s an actual question we could find an answer to. But I’ve never been able to see why I might prefer that situation to deterministically choosing among states generated by any other process that’s outside my control, whether it happens inside my body or not, and whether it’s deterministic or not. (Yes, I realize I am essentially rejecting the idea that I should consider the option-generating indeterministic process to be part of “me.” Maybe that’s a mistake, but that’s how my me-concept is (currently) shaped.)
To put it another way: Imagine I am playing a game where I (deterministically) deliberate and choose among options presented to me. Why does the question of whether my choice is free or not depend on whether the process that generated the list of options is deterministic or not? Why does it depend on whether the option-generating indeterministic module is located inside or outside my body?
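The structure of that question can be made concrete with a toy sketch. Everything below (the option pool, the utility table, the function names) is made up for illustration; it is not anyone’s actual theory. The point it shows: stage one samples options (standing in for an indeterministic generator), and stage two deterministically picks among them. Swapping the random sampler for a fixed list would leave the choosing stage completely untouched.

```python
import random

# Hypothetical pool of actions and a made-up utility table.
POOL = ["study", "exercise", "nap", "write", "cook"]
UTILITY = {"study": 2, "exercise": 3, "nap": 1, "write": 5, "cook": 4}

def generate_options(rng, k=3):
    # Stage 1: the "indeterministic" option generator, stood in for by
    # pseudo-randomness. Replacing this with any fixed list of options
    # changes nothing about the choosing stage below.
    return rng.sample(POOL, k)

def choose(options):
    # Stage 2: fully deterministic deliberation; always picks the
    # highest-utility option on offer.
    return max(options, key=UTILITY.get)

options = generate_options(random.Random(1))
decision = choose(options)
```

Whether `generate_options` is truly random, pseudo-random, or a hard-coded list, `choose` deliberates in exactly the same way, which is the sense in which the chooser’s operation doesn’t seem to depend on the generator’s determinism.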
Separately, I also have a hard time with the implication that the question of free will could depend on which interpretation of quantum mechanics is (more nearly) true. If Many Worlds is correct, then it is no longer true that the future is indeterministic; it is only true that different parts of current-me will (deterministically) no longer be in communication with one another in the future.
(Continuing with the game-themed thought experiments, because they’re readily available and easy to describe.) This idea feels as strange to me as saying that a contestant’s answers on Who Wants to Be a Millionaire become more or less free if you take away or use the 50-50 lifeline. I don’t mean that to be flippant. In some sense it’s true: all of a sudden there are fewer options to freely choose among. But it is also a deterministic fact about the world that at some point in the future, that lifeline may be invoked, and certain options but not others will disappear. To me that seems like a strange hook to hang my self-concept, will, and moral responsibility from.
I didn’t say it was that way round, any more than that you are indeterministically choosing between deterministically generated options.
In the big picture, this is happening in an indeterministic universe. So what you get is really being able to change things, to bring about futures that aren’t inevitable; and it really being you who changes things, since the causal chain begins at you.
Indeterminism is, tautologously, freedom from determinism. The standard argument against libertarian free will depends on the universe working in a certain way, i.e. being deterministic. The claim that libertarian free will depends on the universe being indeterministic is a corollary.
Why would it be a deterministic fact in an indeterministic world?
Yes, and determinism isn’t the thing I want freedom from. External control is, mostly.
The “may” is important there, and I intended it to be a probabilistic may, not a permission-granting may. It is a deterministic fact that it might be invoked, not that it necessarily will.
Values differ. But it’s strange for rationalists not to care about the openness of the future when the whole AI safety project is about steering towards a non-dystopian future.
Yes, agreed, that is one of the points of disagreement about free will. I find it more strange to think the future is more steerable in a world where you can’t predict the outcomes of actions even in principle.
In the case of steering the future and AI, the thing in question is more about who is doing the steering, less about the gears-level question of how steering works as a concept. It’s similar to how a starving man cares more about getting a loaf of bread than he does about getting a lesson on the biochemistry of fermentation. Whether humans or AIs or aliens decide the direction of the future, they all do so from within the same universal laws and mechanisms. Free will isn’t a point of difference among options, and it isn’t a lever anyone can pull that affects what needs to be done.
I am also happy to concede that, yes, creating an unfriendly AI that kills all humans is a form of steering the future. Right off a cliff, one time. That’s very different from steering in a direction I want to steer (or be steered) in. It’s also very different from retaining the ability to continue to steer and course-correct.
Determinism doesn’t give you perfect predictive ability, since you can still have limitations of cognition and information. Indeterminism doesn’t have to take it away, either: it’s a feature of two-stage theories that the indeterminism is mostly at the decision-making stage, not the decision-executing stage.
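That division of labour can also be sketched in a few lines. This is a hypothetical toy (the coin-flip generator, the lookup-table world, and all names are invented, not any published theory): chance affects only which candidate actions come to mind, while the action-to-outcome link stays deterministic, so the agent can still predict the results of whatever it chooses.

```python
import random

def world(action):
    # Deterministic execution stage: given a chosen action, the outcome
    # is fixed, so the agent can predict the results of its actions.
    outcomes = {"left": "safe path", "right": "cliff"}
    return outcomes[action]

def deliberate(rng):
    # Indeterministic generation stage: whether each candidate action
    # comes to mind at all involves chance (a coin flip per candidate).
    candidates = [a for a in ("left", "right") if rng.random() < 0.9]
    # Deterministic selection: evaluate each surfaced candidate by its
    # known consequence and avoid the cliff whenever possible.
    safe = [a for a in candidates if world(a) != "cliff"]
    if safe:
        return safe[0]
    return candidates[0] if candidates else "left"

action = deliberate(random.Random(0))
outcome = world(action)
```

The indeterminism here changes which options get considered, but never scrambles what a given action leads to, which is the two-stage theorist’s point about predictability surviving.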
Says who? If we are predetermined to be killed by ASI, that’s that: all our current efforts are in vain.
No, free will is a point about whether there are options at all.
You can’t “retain” that ability to steer, since, under determinism, you never had it in the first place.