The rationalists are winning. You are not looking carefully enough.
Can you be more specific? What evidence leads you to believe that rationalists are winning?
Elo’s a nice guy, but I have no idea what he’s talking about either.
Maybe rationality improves your quality of life or subjective well-being; there is certainly some evidence for that.
But in terms of accomplishing more material and outwardly visible goals, you’re right that the evidence is scant. CFAR and EA could be evidence, but there are a lot of non-rat institutions that perform well too.
I think overall the success of EA and Rationality is pretty visible. Open Phil has access to over 10 billion dollars, which makes it one of the 10 largest foundations in the world; we have successfully created safety teams at many of the world’s top AI labs; we have had many of the world’s top entrepreneurs and researchers speak at our conferences; and we generally seem to be doing much better at achieving our goals than I bet almost anyone would have naively expected had you asked them in 2010.
Obviously, not everyone who reads LessWrong suddenly develops superpowers, and generally as communities grow the average level of success or competence goes down, but in aggregate I think we are doing pretty well.
(Note: I don’t think most of Open Phil would self-identify as rationalist, but their focus on AI alignment in particular seems heavily influenced by the rationality community, and I think a lot of the staff at Open Phil are executing the kind of algorithms we usually describe here as “the art of rationality.” Many of them have also read LessWrong and found it quite valuable.)
I agree with the thrust of this comment, but I’d like to push back against “have had many of the world’s top entrepreneurs and researchers speak at our conferences” as a measure of success (although perhaps it’s a predictor in the right context).
Agree that it’s a weaker one; I guess it’s one that comes up for me because I worked more directly on it :P
No. You will have to see for yourself. Of course you’d have to be looking for that to work.
...? “Winning” isn’t just an abstraction, actually winning means getting something you value. Now, maybe many rationalists are in fact winning, but if so, there are specific values we’re attaining. It shouldn’t be hard to delineate them.
It should look like, “This person got a new job that makes them much happier, that person lost weight on an evidence-based diet after failing to do so on a string of other diets, this other person found a significant other once they started practicing Alicorn’s self-awareness techniques and learned to accept their nervousness on a first date...” It might even look like, “This person developed a new technology and is currently working on a startup to build more prototypes.”
In none of these cases should it be hard to explain how we’re winning, nor should Tim’s “not looking carefully enough” be an issue. Even if the wins are limited to subjective well-being, you should at least be able to explain that! Do you believe that we’re winning, or do you merely believe you believe it?