Two questions:
how should we measure the “success of rationalists”?
how big a success should we expect?
These are related, because the bigger the success, the easier it would be to measure. Like, if rationalists openly took over the world, created a Global Rationalist Government, fixed poverty worldwide, made cryonics mandatory, etc., no one would be asking whether the rationalists are truly winning.
On the other hand, if I had a magical button which guaranteed that every rationalist would, within five years, have twice as much wealth (compared to the counterfactual world where the magic button does not work), its effects would be mostly invisible. Maybe the affected people would notice, but even they could easily attribute the effects to something else, such as their own hard work or luck. From the outside: the millionaires would remain millionaires, the employees would mostly remain employees, and the poor would mostly remain poor.
This gives us two versions of the complaint:
if we expect a big success, why don’t we clearly observe it?
if we expect only a small success, what makes us believe it exists when we are not measuring it?
I believe that many people have achieved the “small” kind of success, but yeah, I don’t have solid data to back this statement. I wish I did.
Also, we had some “big” successes. Specifically: AI safety becoming a mainstream topic, or the existence of effective altruism. These are easy to downplay, but I think that 10 years ago, few people would have predicted that AI safety would become a mainstream topic.
I also believe that we could do much better, and I hope that one day we will. I think an important task will be learning to cooperate better. (Which includes learning how to avoid being exploited by malicious actors, who unfortunately also sometimes target our community. But the solution is not keeping everyone at internet distance; the solution is to develop some mechanism of “trust but verify”.)
On the other hand, the stories about Beisutsukai, Anasûrimbor Kellhus, or recursively self-improving AI primed us to expect a “rationalist fast takeoff”, of which there is no evidence so far. The rationalist takeoff, if there is such a thing, is definitely slow. I wonder if there is a way to make it faster; I think our behavior is far from optimal, but I wonder whether it can realistically get better.