This may be true in other communities, but I think if you're more status-motivated in AI safety and EA, you're more likely to be concerned about potential downside risks, especially post-SBF.
Instead of trying to maximize the good, I see a lot of people trying to minimize the chance that things go poorly in a way that could reflect badly on them.
You generally get more respect, funding, and social standing if you're visibly concerned about downside risks and reputational hazards.
If anything, the more status-oriented you are in EA, the more likely you are to care about downside risks, because of the Copenhagen interpretation of ethics: being seen to engage with a problem makes you blameworthy for it, so the safest move status-wise is to be conspicuously cautious.