As in, you could score some actions, but then there isn’t a sense in which you “can” choose one according to any criterion.
I’ve noticed that issue as well. Counterfactuals are more a convenient model/story than something to be taken literally. You’ve grounded decisions by taking counterfactuals to exist a priori. I ground them by noting that our desire to construct counterfactuals is ultimately based on evolved instincts and/or behaviours, so these stories aren’t just arbitrary stories, but a way of leveraging the lessons that evolution has instilled in us. Given this explanation, I’m curious: why do we still need choices to be actual?
Do you think of counterfactuals as a speedup on evolution? Could this be operationalized by designing AIs that quantilize on some animal population, therefore not being far from the population distribution, but still surviving/reproducing better than average?
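To make the quantilization idea concrete, here is a minimal sketch, assuming we can sample behaviours from the animal population’s distribution and score them with some fitness proxy. The names `base_sampler`, `fitness`, and `q` are hypothetical placeholders for illustration, not anyone’s actual proposal:

```python
import random

def quantilize(base_sampler, fitness, q=0.1, n_samples=1000):
    """Approximate a q-quantilizer: draw candidate actions from the
    base distribution, then sample uniformly from the top q fraction
    as ranked by the fitness proxy. Staying within the support of the
    base distribution is what keeps the agent close to the population,
    while the top-q filter selects for above-average fitness."""
    candidates = [base_sampler() for _ in range(n_samples)]
    candidates.sort(key=fitness, reverse=True)
    top = candidates[:max(1, int(q * n_samples))]
    return random.choice(top)

# Toy usage: the base distribution and fitness proxy here are made up.
action = quantilize(
    base_sampler=lambda: random.gauss(0.0, 1.0),
    fitness=lambda a: -abs(a - 0.5),
    q=0.05,
)
```

The point of the sketch is just that quantilizing never samples outside the population distribution, so the resulting behaviour can’t stray arbitrarily far from it the way a pure fitness-maximizer could.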
Speedup on evolution?
Maybe? It might work okayish, but I doubt the best solution is that speculative.