I’m not sure if this addresses all of the things you’re saying. If not, let me know.
I’m not claiming that all or even most rationalists actually are successful in leaning closer to Real Rationality than Hollywood Rationality. I’m claiming that a very large majority 1) endorse and 2) aspire towards the former rather than the latter.
Incremental Progress and the Valley talks about the relationship between rationality and winning. In short, what the post says, and what I think the majority opinion amongst rationalists is, is that in the long run rationality does bring you closer to winning, but 1) a given step forward towards being more rational sometimes moves you a step back on winning rather than forward, and 2) our art isn't really at the point where it leads to a sizeable increase in winning.
As for convincing people about the threat of AI:
1) I don’t think the art of rationality has spent much time on persuasion, compared to, say, probability theory.
2) I think there’s been some amount of effort put towards persuasion. People reference Influence: The Psychology of Persuasion by Robert Cialdini a fair bit.
3) People very much care about anything even remotely relevant to lowering the chance of unfriendly AI or any other existential risk and will be extremely open to any ideas you or others have on how to do better.
4) There very well might be some relatively low-hanging fruit in terms of getting better at persuading others in the context of AI risk.
5) Convincing people of the importance of AI risk is a pretty difficult thing, so lack of success very well might say more about the difficulty of the task than about competence.