> My point is just that "prior / equilibrium selection problem" is a subset of the "you don't know everything about the other player" problem, which I think you agree with?
I see two problems: one of trying to coordinate on priors, and one of trying to deal with having not successfully coordinated. Which is easier depends on the setting: whether we're applying this to CAIS, HRI, or a multipolar scenario. Sometimes it's easier to coordinate on a prior beforehand, sometimes it's easier to be robust to differing priors, and sometimes you have to go for a bit of both. I think it's reasonable to call both of these solution techniques for the "prior / equilibrium selection problem", but the two framings shoot for different solutions, both of which I view as necessary sometimes.
> The strategy of agreeing on a joint welfare function is already a heuristic and isn't an optimal strategy; it feels very weird to suppose that initially a heuristic is used and then we suddenly switch to pure optimality.
I don't really know what you mean by this. Specifically, I don't know from whose perspective it isn't optimal, or under what beliefs.
A few things to point out:
The strategy of agreeing on a joint welfare function and optimizing it is an optimal strategy for some belief in infinitely repeated settings (by the folk theorem, almost any strategy is optimal for some belief; see the sketch after this list).
Since we're currently making norms for these interactions, we are also designing these beliefs. This means we can make it the case that holding that belief is justified in future deployments.
If we want to talk about "optimality" of "equilibrium selection procedures" or "coordination norms", we need a metric that says some outcomes are "better" than others. This is not a utility function for the agents, but for us as the norm designers. Social welfare seems like a good choice for this (a toy illustration follows below).
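As one concrete instance of the folk-theorem point above, here's a minimal sketch (mine, not from the original discussion; the payoff numbers and helper names are illustrative): in an infinitely repeated Prisoner's Dilemma, always cooperating, which maximizes joint welfare, beats defecting whenever you believe the other player runs grim trigger and the discount factor is high enough.

```python
# Stage-game payoffs for the row player in a Prisoner's Dilemma:
# mutual cooperation R, temptation T, mutual defection P (illustrative values).
R, T, P = 3.0, 5.0, 1.0

def discounted_value(stream, delta, horizon=1000):
    """Approximate the discounted sum of an (effectively infinite) payoff stream."""
    return sum(pay * delta ** t for t, pay in enumerate(stream(horizon)))

def cooperate_forever(horizon):
    # The joint-welfare-maximizing path: mutual cooperation every round.
    return [R] * horizon

def defect_against_grim(horizon):
    # One round of the temptation payoff, then grim trigger punishes forever.
    return [T] + [P] * (horizon - 1)

for delta in (0.3, 0.6, 0.9):
    v_coop = discounted_value(cooperate_forever, delta)
    v_defect = discounted_value(defect_against_grim, delta)
    print(f"delta={delta}: cooperate={v_coop:.2f}, defect={v_defect:.2f}")

# Cooperating wins exactly when delta >= (T - R) / (T - P) = 0.5, so
# "optimize the joint welfare function" is an optimal strategy under the
# belief that the other player runs grim trigger with a high enough delta.
```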
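And for the last point, a toy illustration of social welfare as the norm designer's metric (again my sketch, with made-up stag-hunt payoffs): the game has two pure equilibria, and the agents' own utility functions can't tell us which norm to institute, but summing utilities across agents does.

```python
# Illustrative stag-hunt payoffs: (row action, col action) -> (row u, col u).
STAG, HARE = "stag", "hare"
payoffs = {
    (STAG, STAG): (4, 4),
    (STAG, HARE): (0, 3),
    (HARE, STAG): (3, 0),
    (HARE, HARE): (3, 3),
}

def is_pure_nash(a_row, a_col):
    """No player gains by unilaterally deviating."""
    u_row, u_col = payoffs[(a_row, a_col)]
    row_ok = all(payoffs[(d, a_col)][0] <= u_row for d in (STAG, HARE))
    col_ok = all(payoffs[(a_row, d)][1] <= u_col for d in (STAG, HARE))
    return row_ok and col_ok

equilibria = [(a, b) for a in (STAG, HARE) for b in (STAG, HARE)
              if is_pure_nash(a, b)]
selected = max(equilibria, key=lambda profile: sum(payoffs[profile]))

print("pure equilibria:", equilibria)     # both (stag, stag) and (hare, hare)
print("norm-designer's pick:", selected)  # (stag, stag): welfare breaks the tie
```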