If I am deterministically selecting among options generated within myself by an indeterministic process,
I didn’t say that was the case any more than indeterministically choosing between deterministically generated options.
Sure, that’s possible, and I appreciate that it’s an actual question we could find an answer to. But I’ve never been able to see why I might prefer that situation to deterministically choosing among states generated by any other process that’s outside my control, whether it happens inside my body or not, and whether it’s deterministic or not.
In the big picture, this is happening in an indeterministic universe. So what you get is really being able to change things, to bring about futures that aren’t inevitable; and when you change things, the causal chain begins at you.
Why does the question of whether my choice is free or not depend on whether the process that generated the list of options is deterministic or not?
Indeterminism is, tautologously, freedom from determinism. The standard argument against libertarian free will depends on the universe working in a certain way, i.e. being deterministic. The claim that libertarian free will depends on the universe being indeterministic is a corollary.
But it is also a deterministic fact about the world that at some point in the future, that lifeline may be invoked
Why would it be a deterministic fact in an indeterministic world?
Indeterminism is, tautologously, freedom from determinism.
Yes, and determinism isn’t the thing I want freedom from. External control is, mostly.
Why would it be a deterministic fact in an indeterministic world?
The “may” is important there, and I intended it to be a probabilistic may, not a permission-granting may. It is a deterministic fact that it might be invoked, not that it necessarily will.
Yes, and determinism isn’t the thing I want freedom from. External control is, mostly.
Values differ. But it’s strange for rationalists not to care about the openness of the future when the whole AI safety thing is about steering towards a non-dystopian future.
Yes, agreed, that is one of the points of disagreement about free will. I find it more strange to think the future is more steerable in a world where you can’t predict the outcomes of actions even in principle.
In the case of steering the future and AI, the thing in question is more about who is doing the steering, less about the gears-level question of how steering works as a concept. It’s similar to how a starving man cares more about getting a loaf of bread than he does about getting a lesson on the biochemistry of fermentation. Whether humans or AIs or aliens decide the direction of the future, they all do so from within the same universal laws and mechanisms. Free will isn’t a point of difference among options, and it isn’t a lever anyone can pull that affects what needs to be done.
I am also happy to concede that, yes, creating an unfriendly AI that kills all humans is a form of steering the future. Right off a cliff, one time. That’s very different from steering in a direction I want to steer (or be steered) in. It’s also very different from retaining the ability to continue to steer and course correct.
Yes, agreed, that is one of the points of disagreement about free will. I find it more strange to think the future is more steerable in a world where you can’t predict the outcomes of actions even in principle.
Determinism doesn’t give you perfect predictive ability, since you can still have limitations of cognition and information. Indeterminism doesn’t have to take it away, either: it’s a feature of two-stage theories that the indeterminism is mostly at the decision-making stage, not the decision-executing stage.
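The two-stage structure can be made concrete with a small sketch. This is only an illustration of the shape of the theory, not any particular author's model: a pseudorandom draw stands in for the indeterministic option-generating stage, and a fixed evaluation function stands in for the deterministic selection stage, so that given the same options and the same values, the same choice always follows.

```python
import random


def generate_options(n=5, rng=random):
    # Stage 1 (indeterministic): candidate actions arise from a
    # stochastic process. Random draws stand in here for whatever
    # physical indeterminism supplies the alternatives.
    return [rng.uniform(0, 1) for _ in range(n)]


def choose(options, value):
    # Stage 2 (deterministic): given the same options and the same
    # evaluation function, the selection is fixed -- the agent picks
    # the maximum-value option every time.
    return max(options, key=value)


# A stand-in for the agent's values: a fixed, deterministic function.
def prefer_moderation(x):
    return -abs(x - 0.5)


options = generate_options()
decision = choose(options, prefer_moderation)
# Re-running stage 2 on the same options always yields the same choice:
assert choose(options, prefer_moderation) == decision
```

The point the sketch makes is the one above: randomness is confined to where the options come from, while the step that executes the decision is fully predictable from the agent's values, so indeterminism need not cost you the ability to predict the outcomes of your actions.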
In the case of steering the future and AI, the thing in question is more about who is doing the steering, less about the gears-level question of how steering works as a concept.
Says who? If we are predetermined to be killed by ASI, that’s that: all our current efforts are in vain.
Free will isn’t a point of difference among options,
No, it’s a point about whether there are options.
It’s also very different from retaining the ability to continue to steer and course correct.
Which you can’t “retain”, since you never had it, under determinism.