I think the possibility of living for a googol years vastly outweighs whatever happiness I’d get directly from any job. Everyone I’ve seen comment on the topic (including Eliezer Yudkowsky: https://www.lesswrong.com/posts/vwnSPgwtmLjvTK2Wa/amputation-of-destiny) agrees that making the world a better place is an essential part of happiness, and the window of opportunity for doing that might well close in a hundred years or so, once AI can do everything for us.
If you want to be happy, find a career that you enjoy! (But spend more of your time on personal relationships and a fulfilling social scene.)
Making the world a better place can indeed be fulfilling and contribute to personal happiness, but I would not recommend AI safety work on that basis.