I have considered running a business, and various other ways to make more money, but I have actually moved away from making money and towards trying hard to do what I can to make the future more likely to go well. A good-outcome singularity floats all boats!
So anyone who thinks they have even remotely the sort of competence that could help build aligned AI should work on that, even if they think they aren't of the highest caliber or aren't seeing immediate positive results from their initial attempts. I'm definitely of the opinion that the more people we can get working on it the better (provided the fumbling of the worst doesn't hinder the best thinkers).
Further thoughts here: https://www.lesswrong.com/posts/WRSLEdvsbucBDiast/how-to-plan-for-a-radically-uncertain-future?commentId=YFXi8TrKvbCvxbkYT