I think there’s not much more that most individuals can do about x-risk as a full-time pursuit than we can do as aware and interested civilians.
I also think that unfriendly AI Foom is a small part of the disaster space, compared to the current volume of unfriendly natural intelligence we face. An increase in the destructive power of small (or not-so-small) groups of humans seems 20-1000x more likely (and I lean toward the higher end of that range) to filter us than a single AI entity, or a small number of them, becoming powerful enough to do so.
Some thought to far-mode issues is worthwhile, and you might be able to contribute a bit as a funder or hobbyist, but for most people, including most rationalists, it shouldn’t be your primary drive.
So it would be better to work on computer security? Or on education, so that we raise fewer unfriendly natural intelligences?
Also, AI safety research benefits AI research in general, and AI research in general benefits humanity. Again, only marginal contributions?
Or on healthcare or architecture or garbage collection or any of the billion things humans do for each other.