Not feasible. Let’s aim for a more modest goal, say, better PR and functional communities.
Moreover, not this community’s comparative advantage. Why do we think we’d be any better than anyone else at running the world? And why wouldn’t we be subject to free-riders, power-seekers, and rationalists-of-fortune if we started winning?
We think we’d be better at running the world because we think rationalists should be better at pretty much everything that benefits from knowing the truth. If we didn’t believe that, we wouldn’t be (aspiring) rationalists. And just because we couldn’t do it perfectly doesn’t mean we’re not better than the alternatives.
Overconfidence seems like a poor qualification.
And yet confidence seems a good one. The question is how much is too much, which can really only be verified after the fact.
And just because we couldn’t do it perfectly doesn’t mean we’re not better than the alternatives.

I wonder how well a group would do whose members didn’t study how to think and instead devoted themselves to not letting emotions interfere with their decisions. All of its work would be advances, I think; there would be no analog to the “valley of rationality” in which people lose touch with their intuitions and make poor decisions.
I dispute your claim.
In fact, I would assert the exact opposite: that attempting to remove emotions from decision-making is what causes the “valley of rationality.” Furthermore, I suspect it is a necessary transitional phase, comparable in its horrific necessity to the process of re-breaking a mangled shinbone so that it can heal straight.
I’m well disposed towards your viewpoint on that.
attempting to remove emotions from decision-making is what causes the “valley of rationality.”

I disagree with the implication of this. I think the main causes are misusing tools like Bayesian updating, and asking what a rationalist would do and then trying to do that.
Insofar as poorly calibrated emotions are part of the problem, one must subtract the problems they would have caused under non-aspiring-rationalist conditions from those caused under aspiring-rationalist conditions to calculate what aspiring rationalism is responsible for. I don’t think this usually leaves much left over, positive or negative.
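To make that accounting concrete, here is a minimal sketch in Python; the numbers and the function name are purely hypothetical illustrations, not anything from the thread.

```python
# Minimal sketch of the subtraction described above.
# All figures are hypothetical; "problems" stands in for whatever
# count of emotion-driven mistakes one cares about.

def net_effect_of_aspiring_rationalism(problems_aspiring: float,
                                       problems_baseline: float) -> float:
    """Problems under aspiring-rationalist conditions minus the
    problems the same poorly calibrated emotions would have caused
    under non-aspiring-rationalist conditions."""
    return problems_aspiring - problems_baseline

# E.g., 11 such mistakes per year while aspiring vs. 10 anyway:
print(net_effect_of_aspiring_rationalism(11, 10))  # 1: not much left over
```

The point of the sketch is only that the raw count under aspiring-rationalist conditions is not the relevant quantity; the difference from the counterfactual baseline is.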
Functional communities would be nice. I’m not so sure that better PR is the way to go. Why not no PR? Why not subtle induction via existing infrastructure? Let the people who most deserve to be here be the ones who will find us. Let us not go out with blaring trumpet, but with fishing lure.
What specific concerns make you disagree with its feasibility?
We have neither the numbers, the organizational skill, nor the social skills to be good at this. There is a joke that organizing libertarians is like herding cats, and the same principle seems to be partly true here for the same reason: LW draws a lot of smart, contrarian people. The exception would be a technological way to conquer the world, say the Singularity, but that demands an entirely different organizational strategy, namely channeling all efforts into FAI.