Remember the counterfactual zombie principle: you are only an implication. Your decision or your knowledge says only what it would be if you existed; you can't assume that you do exist.
When you counterfactually consider A6, you consider how the world-with-A6 would be, but you don't assume that it exists, and so you can't infer that it has the highest utility. You are right that your copy in world-with-A6 would also choose A6, but that still need not be an action of maximum utility, since the situation is not guaranteed to exist. For the action that you do choose, you may know that you've chosen it; but for an action you only counterfactually consider, you don't assume that you choose it. (In causal networks, this seems to correspond to cutting the action node off from yourself before setting it to a value.)
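The parenthetical about causal networks can be made concrete. Here is a minimal sketch, assuming a toy network with hypothetical node names (`agent`, `action`, `world`): a do-style graph surgery first severs the action node from its parents, and only then clamps it to a value, so nothing in the intervened graph lets you infer back from the action to the agent having chosen it.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    parents: list = field(default_factory=list)
    value: object = None

def do(network: dict, action: str, value) -> dict:
    """Return a copy of the network with `action` cut off from its
    parents and clamped to `value` (Pearl-style graph surgery)."""
    surgered = {n: Node(node.name, list(node.parents), node.value)
                for n, node in network.items()}
    surgered[action].parents = []   # cut the action node off from the agent first
    surgered[action].value = value  # only then set it to a value
    return surgered

# Hypothetical toy network: the agent's decision feeds the action node,
# which in turn determines the world.
network = {
    "agent":  Node("agent"),
    "action": Node("action", parents=["agent"]),
    "world":  Node("world",  parents=["action"]),
}

# Counterfactually considering A6: the surgered network describes how
# world-with-A6 would be, without implying that the agent chose A6.
world_with_a6 = do(network, "action", "A6")
assert world_with_a6["action"].parents == []  # no path back to the agent
```

The ordering is the point: because the edge into the action node is removed before the value is set, evaluating the surgered network can never license the inference "A6 was set, therefore I chose it, therefore this world exists."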