I’m trying to understand where exactly in your approach you sneak in the free will...
For counterfactual nonrealism, it’s simply the uncertainty an agent has about their own action, while believing themselves to control their action.
For policy-dependent source code, the “different possibilities” correspond to different source code. An agent with fixed source code can only take one possible action (from a logically omniscient perspective), but the counterfactuals change the agent’s source code, getting around this constraint.
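To make this concrete, here’s a rough sketch (my own illustration, not from the post) in Python, using Newcomb’s problem only as a familiar example; the names `predictor`, `payoff`, and `act` are hypothetical. The point is that the counterfactual ranges over source code: we evaluate “what if I had acted differently?” by swapping in a different program, and the rest of the world (here, a predictor that reads the code) responds accordingly.

```python
# Sketch of policy-dependent source code counterfactuals on Newcomb's problem.
# The environment sees the agent's source code, so counterfactuals are taken
# over the code itself, not over the actions of one fixed program.

def predictor(source_code: str) -> bool:
    """Predict whether the agent one-boxes by simulating its code."""
    env = {}
    exec(source_code, env)
    return env["act"]() == "one-box"

def payoff(source_code: str) -> int:
    """Newcomb payoffs: box B holds $1,000,000 iff the predictor expects one-boxing."""
    env = {}
    exec(source_code, env)
    action = env["act"]()
    million_in_b = predictor(source_code)
    if action == "one-box":
        return 1_000_000 if million_in_b else 0
    else:  # two-box
        return 1_001_000 if million_in_b else 1_000

# Counterfactuals vary the *source code*, not the output of a fixed program:
one_boxer = "def act():\n    return 'one-box'"
two_boxer = "def act():\n    return 'two-box'"

for name, code in [("one-boxer", one_boxer), ("two-boxer", two_boxer)]:
    print(name, payoff(code))   # one-boxer: 1000000, two-boxer: 1000
```

So “could have acted differently” is cashed out as “could have had different source code”, and the predictor’s behavior varies along with the action, which is how the fixed-source-code constraint is avoided.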
I think when modeling a complex or not-entirely-understood system, probabilities may be a more effective framework.
Just as a program whose output was already known probably wouldn’t need to be run, we don’t know what we’ll decide before we decide; we do know afterwards, but we’re not sure how we could have predicted the outcome in advance.