Yes, and determinism isn’t the thing I want freedom from. External control is, mostly.
Values differ. But it’s strange for rationalists not to care about the openness of the future when the whole AI safety project is about steering towards a non-dystopian future.
Yes, agreed, that is one of the points of disagreement about free will. I find it more strange to think the future is more steerable in a world where you can’t predict the outcomes of actions even in principle.
In the case of steering the future and AI, the thing in question is more about who is doing the steering, less about the gears-level question of how steering works as a concept. It’s similar to how a starving man cares more about getting a loaf of bread than he does about getting a lesson on the biochemistry of fermentation. Whether humans or AIs or aliens decide the direction of the future, they all do so from within the same universal laws and mechanisms. Free will isn’t a point of difference among options, and it isn’t a lever anyone can pull that affects what needs to be done.
I am also happy to concede that, yes, creating an unfriendly AI that kills all humans is a form of steering the future. Right off a cliff, one time. That’s very different from steering in a direction I want to steer (or be steered) in. It’s also very different from retaining the ability to continue to steer and course correct.
Yes, agreed, that is one of the points of disagreement about free will. I find it more strange to think the future is more steerable in a world where you can’t predict the outcomes of actions even in principle.
Determinism doesn’t give you perfect predictive ability, since you can still have limitations of cognition and information. Indeterminism doesn’t have to take it away, either: it’s a feature of two-stage theories that the indeterminism is mostly at the decision-making stage, not the decision-executing stage.
In the case of steering the future and AI, the thing in question is more about who is doing the steering, less about the gears-level question of how steering works as a concept.
Says who? If we are predetermined to be killed by ASI, that’s that—all our current efforts are in vain.
Free will isn’t a point of difference among options,
No, it’s a point about whether there are options.
It’s also very different from retaining the ability to continue to steer and course correct.
Which you can’t “retain”, since you never had it, under determinism.