Sorry for the late reply; I don’t have much time for LW these days, sadly.
Based on some of your comments, perhaps I’m operating under a different definition of group vs. individual rationality? If uncoordinated individuals making locally optimal choices would lead to a suboptimal global outcome, and this is generally known to the group, then they must act to rationally solve the coordination problem, not merely fall back to non-coordination. A bunch of people unanimously playing D in the prisoner’s dilemma are clearly not, in any coherent sense, rationally maximizing individual outcomes. Thus I don’t really see such a scenario as presenting a group vs. individual conflict, but rather a practical problem of coordinated action. Certainly, the need to solve such problems applies to any rational agent, not just humans.
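For concreteness, here’s a toy sketch of that point; the payoff numbers are the usual illustrative ones, not anything from the post:

```python
# Toy symmetric prisoner's dilemma payoffs for the row player (illustrative numbers only).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # cooperate against a defector
    ("D", "C"): 5,  # defect against a cooperator
    ("D", "D"): 1,  # mutual defection
}

# Each unilateral switch to D looks locally optimal...
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]
assert PAYOFF[("D", "D")] > PAYOFF[("C", "D")]
# ...yet the "everyone defects" outcome leaves every single player worse off than mutual cooperation.
assert PAYOFF[("D", "D")] < PAYOFF[("C", "C")]
```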
The part about giving undue weight to unlikely ideas by miscalibrating confidence levels to motivate behavior (which seems to comprise about half the post) seems strictly human-oriented. Absent human cognitive biases, the decision to examine low-confidence ideas is just another coordination issue with no special features; in fact it’s an unusually tractable one, since a passable solution exists (random choice, as per CannibalSmith’s comment, which was also my immediate thought) even under the presumption that coordination is not only expensive but essentially impossible!
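Here’s a minimal sketch of the coordination-free random-choice solution I have in mind; the idea names and weights are invented purely for illustration:

```python
import random

# Hypothetical ideas with rough confidence levels; names and numbers are made up.
ideas = {"idea_a": 0.70, "idea_b": 0.25, "idea_c": 0.05}

def pick_idea_to_examine(rng: random.Random) -> str:
    """Each individual independently samples an idea in proportion to its confidence.

    No communication is needed: across many individuals, low-confidence ideas still
    receive roughly their proportional share of the group's attention.
    """
    names, weights = zip(*ideas.items())
    return rng.choices(names, weights=weights, k=1)[0]

# E.g. 1000 independent individuals, none of whom ever talk to each other.
counts = {name: 0 for name in ideas}
for seed in range(1000):
    counts[pick_idea_to_examine(random.Random(seed))] += 1
print(counts)  # low-confidence ideas get examined roughly in proportion to their weight
```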
Overall, any largely symmetric, fault-tolerant coordination problem that can be trivially resolved by a quasi-Kantian maxim of “always take the action that would work out best if everyone took that action” is a “problem” only insofar as humans are unreliable and will probably screw up; thus any proposed solution is necessarily non-general.
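Spelled out as a decision rule, that maxim is just something like the following sketch (the payoff numbers are placeholders):

```python
def kantian_choice(actions, group_payoff):
    """Pick the action that works out best if everyone takes that same action.

    group_payoff(a) stands in for each individual's outcome when the whole group
    plays a; in a symmetric, fault-tolerant problem this rule needs no communication.
    """
    return max(actions, key=group_payoff)

# Using toy prisoner's-dilemma-style numbers: everyone-plays-C beats everyone-plays-D.
uniform_outcome = {"C": 3, "D": 1}  # illustrative payoff when all players take the same action
print(kantian_choice(["C", "D"], uniform_outcome.__getitem__))  # -> "C"
```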
The situation is much stickier in other cases: for instance, if coordination costs are comparable to the gains from coordination, or if it’s not clear that every individual can reasonably expect to prefer the group-optimal outcome, or if the optimal actions are asymmetric in ways that aren’t locally obvious, or if the optimal group action isn’t amenable to a partition/parallelize/recombine approach. But none of those holds in your example! Perhaps that sort of thing is what Eliezer et al. are working on, but (due to the aforementioned time constraints) I haven’t kept up with LW, so you’ll have to forgive me if this is all old hat.
At any rate, the tl;dr version: wedrifid’s “Anything an irrational agent can do due to an epistemic flaw a rational agent can do because it is the best thing for it to do.” and the associated comment thread pretty much cover what I had in mind when I left the earlier comment. Hope that clarifies matters.