Yes, it is similar, and your state of knowledge is a (small) part of the state of the Universe, unless you are a Cartesian dualist. Indeed we don’t have “a complete model of the universe including yourself”, and learning the agent’s decision theory, among other things, is a useful way toward understanding the agent’s actions. There is nothing logically counterfactual about it.
If you know that the other (deterministic or probabilistic) agent is a clone of you with exactly the same set of inputs as you have, you know that they will do exactly the same thing you do (potentially flipping a coin the same way you do, though possibly with a different outcome if they are probabilistic). There is no known magic in the universe that would allow for anything else. Unless they are an inexact clone, of course, because in the zoomed-out view you don’t see the small differences that can lead to very different outcomes.
If you know that the other (deterministic or probabilistic) agent is a clone of you with exactly the same set of inputs as you do, you know that they will do exactly the same thing you do (potentially flipping a coin the same way you do, though possibly with a different outcome if they are probabilistic)
That isn’t literally true of a probabilistic agent: you can’t use a coin to predict another coin. It is sort-of true that you find a similar statistical pattern... but that is rather beside the point of counterfactuals: if anything is probabilistic (really, and not merely as a result of limited information), it is indeterministic, and if anything is indeterministic, then it might have happened another way, and there is your (real, not logical) counterfactual.
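The disagreement above can be made concrete with a toy sketch (all names here are hypothetical illustrations, not anyone's actual model): a deterministic agent is a pure function of its inputs, so an exact clone fed the same inputs must act identically, whereas a probabilistic agent flips its own independent coin, so its clone can reproduce the statistics but not necessarily any individual outcome.

```python
import random

def deterministic_agent(inputs):
    # Pure function of its inputs: an exact clone with the same
    # inputs necessarily produces the same action.
    return sum(inputs) % 2

def probabilistic_agent(inputs, rng):
    # Flips its own coin: a clone has the same distribution over
    # actions, but an independent source of randomness.
    return rng.random() < 0.5

inputs = [3, 1, 4]

# Exact deterministic clones always agree.
assert deterministic_agent(inputs) == deterministic_agent(inputs)

# Probabilistic clones with independent coins agree only statistically.
my_coin = random.Random(1)      # my randomness
clone_coin = random.Random(2)   # the clone's independent randomness
mine = [probabilistic_agent(inputs, my_coin) for _ in range(10_000)]
clone = [probabilistic_agent(inputs, clone_coin) for _ in range(10_000)]

# The overall frequencies match closely...
print(abs(sum(mine) - sum(clone)) / 10_000)
# ...but the individual flips typically do not line up flip-for-flip.
print(mine[:10] == clone[:10])
```

This is just the "similar statistical pattern" point: the clone is predictable at the level of distributions, not at the level of individual coin flips, which is exactly where a real (rather than logical) counterfactual would live.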