Doesn’t Bostrom’s model of “naive unilateralists” by definition preclude updating on the behavior of other group members?
Yeah, this is right; it’s what I tried to clarify in the second paragraph.
Isn’t updating on the beliefs of others (as signaled by their behavior) an example of adopting a version of the “principle of conformity” that he endorses as a solution to the curse? If so, it seems like you are framing a proof of Bostrom’s point as a rebuttal of it.
The introduction of the post tries to explain how it relates to Bostrom et al’s paper (i.e., I’m not rebutting Bostrom et al). But I’ll say some more here.
You’re broadly right about the principle of conformity. The paper suggests a few ways of implementing it, one of which is simply being rational. But the authors stop short of endorsing this route because they consider it mostly unrealistic; I tried to point to some reasons it might not be. Bostrom et al are sceptical because (i) their argument assumes identical priors and (ii) it would be surprising for humans to be this thoughtful anyway. The derivation above should help motivate why identical priors are sufficient but not necessary for the main upshot, and what I included in the conclusion suggests that many humans (or at least some firms) actually do the rational thing by default.
But the main point of the post is to do what I explained in the introduction: correct misconceptions and clarify. My experience of informal discussions of the curse suggests people think of it as a flaw of collective action that applies to agents simpliciter, and I wanted to spell out why that is a mistake. I think the formal framework I used captures the relevant intuition better than the one used in Bostrom et al.
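For concreteness, the basic curse dynamic among naive unilateralists can be sketched as a toy simulation. This is my own illustrative construction, not the model from the post or the paper: each agent sees a noisy private estimate of an action’s true common value, and a naive unilateralist acts whenever their own estimate is positive, ignoring what everyone else’s inaction would signal.

```python
import random

def naive_group_acts(true_value, n_agents, noise_sd, rng):
    """The action happens if at least one agent's private estimate is positive."""
    return any(rng.gauss(true_value, noise_sd) > 0 for _ in range(n_agents))

def act_frequency(true_value=-1.0, n_agents=10, noise_sd=1.0,
                  trials=10_000, seed=0):
    """Estimate how often a group of naive unilateralists takes the action."""
    rng = random.Random(seed)
    acted = sum(naive_group_acts(true_value, n_agents, noise_sd, rng)
                for _ in range(trials))
    return acted / trials

# With a clearly harmful action (true value -1), a lone agent rarely acts,
# but the maximum of many noisy estimates is biased upward, so a group of
# ten naive unilateralists acts most of the time.
p_single = act_frequency(n_agents=1)
p_group = act_frequency(n_agents=10)
```

The curse is exactly this gap between `p_single` and `p_group`: the chance that *someone* overestimates the value grows with group size. An agent who updates on the others’ visible inaction (the rational behavior discussed above) would raise their threshold for acting and close that gap.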
That makes sense. Thank you for the explanation!