This may be true in other communities, but I think if you're more status-motivated in AI safety and EA, you're more likely to be concerned about potential downside risks, especially post-SBF.
Instead of trying to maximize the good, I see a lot of people trying to minimize the chance that things go poorly in a way that could look bad for them.
You are generally given more respect, funding, and social standing if you are very concerned about downside risks and reputation hazards.
If anything, the more status-oriented you are in EA, the more likely you are to care about downside risks, because of the Copenhagen interpretation of ethics.
Great review! Thanks for sharing.
I'm curious: even if you didn't achieve fundamental well-being, do you feel like there were any intermediate improvements in your general well-being?
Also, did you end up continuing to try the extended course which they offered?
I remember they offered that to me, since I hadn't attained fundamental well-being. I'd totally been meaning to do it but never followed through, which itself suggests the structured class format really was quite helpful.