Elo’s a nice guy, but I have no idea what he’s talking about either.
Maybe rationality improves your quality of life or subjective well-being; there is certainly evidence for that.
But in terms of accomplishing more material and outwardly visible goals, you’re right that the evidence is scant. CFAR and EA could be evidence, but there are a lot of non-rationalist institutions that perform well too.
I think overall the success of EA and Rationality is pretty visible. Open Phil has access to over 10 billion dollars, which makes it one of the 10 largest foundations in the world; we have successfully created safety teams at many of the world’s top AI labs; we have had many of the world’s top entrepreneurs and researchers speak at our conferences; and we generally seem to be doing much better at achieving our goals than almost anyone would have naively expected had you asked them in 2010.
Obviously, not everyone who reads LessWrong suddenly develops superpowers, and generally as communities grow the average level of success or competence goes down, but in aggregate I think we are doing pretty well.
(Note: I don’t think most of Open Phil would self-identify as rationalist, but their focus on AI alignment in particular seems heavily influenced by the rationality community, and in general I think a lot of the staff at Open Phil are executing the kind of algorithms we usually describe here as “the art of rationality” (and many of them have read LessWrong and found it quite valuable).)
I agree with the thrust of this comment, but I’d like to push back against “have had many of the world’s top entrepreneurs and researchers speak at our conferences” as a measure of success (although perhaps it’s a predictor in the right context).
Agree that it’s a weaker one; I guess it comes up for me because I worked more directly on it :P