Do they really want the same thing? It’s hard to tell when we’re also offering other tasty treats. Prostitutes will do things for cocaine that they wouldn’t do otherwise, and pretend to enjoy themselves.
Maybe they would, maybe they wouldn’t. But if rationalism doesn’t at least offer something comparable to other options, many people won’t even try.
As for Aumann, Robin Hanson points out that we’d often do better to update toward “average beliefs” than toward the beliefs of our chosen in-group. So it appears I can already maximize my “Aumann benefit” by conversing with random strangers. It seems to me that the benefit of a rationalist group is precisely the opposite: to discover good arguments and important data we were previously unaware of, which we find convincing regardless of source. If we value our own mere opinions too highly, we’ve already lost.
Robin’s argument in that link seems to be that taking pleasure in disagreement with average beliefs is, all else equal, a bad thing; it’s certainly not an argument in favor of updating toward average beliefs. Aumann agreement only strictly applies to ideal rationalists with shared assumptions, but as a rule of thumb one should update toward other agents’ beliefs based on the demonstrated rationality of their belief-forming process.
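As a toy sketch of that rule of thumb, using simple linear opinion pooling (my own illustration, not Aumann’s theorem or anything Robin proposes): if my credence in a claim is $p$, another agent reports credence $q$, and $w \in [0,1]$ is my estimate of the demonstrated rationality of their belief-forming process, I might update to

$$p' = (1 - w)\,p + w\,q,$$

so a fully trusted process moves me all the way to their belief, and a fully untrusted one leaves mine unchanged.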
“But if rationalism doesn’t at least offer something comparable to other options, many people won’t even try.”
True. I think the goal here is a bit more complex than “maximize the number of self-proclaimed rationalists”, though.
“Robin’s argument in that link seems to be that taking pleasure in disagreement with average beliefs is, all else equal, a bad thing; it’s certainly not an argument in favor of updating toward average beliefs.”
I was presenting it as an argument for updating toward average beliefs rather than toward in-group beliefs, but you’re still right that it’s really making an unrelated point.
“Aumann agreement only strictly applies to ideal rationalists with shared assumptions, but as a rule of thumb one should update toward other agents’ beliefs based on the demonstrated rationality of their belief-forming process.”
I find such demonstrations quite difficult to identify. Doing so requires both confidence in the correctness of their conclusions and, to a lesser extent, confidence that the beliefs you observe aren’t being selected for by other rationalists.
“Maybe they would, maybe they wouldn’t. But if rationalism doesn’t at least offer something comparable to other options, many people won’t even try.”
So why should we want to attract such people?
We know why cult groups usually try to attract as many people as possible: they’re just raw material to them, explicitly or implicitly.
How is it to our benefit to adopt an r-strategy rather than a K-strategy?
Among other reasons, because mass opinion often influences decisions, e.g. in politics, in ways that impact everyone, including us. The greater the average rationality of the masses, the better those decisions are likely to be.
Rational arguments, being restricted to sanity, are not optimized for swaying the masses for political gain.
It’s not a good idea to fight irrationality’s strengths with rationality’s weaknesses.