So you think that other people could contribute much more to x-risk mitigation?
“marginal” in that sentence was meant literally—the additional contribution to the cause that you’re considering. Actually, I think there’s not much room for anybody to contribute large amounts to x-risk mitigation. Most people (and since I know nothing of you, I put you in that class) will do more good for humanity by working at something that improves near-term situations than by working on theoretical and unlikely problems.
So you think there’s not much we can do about x-risk? What makes you think that? Or, alternatively, if you think that only a few people can do much good in x-risk mitigation, what properties enable them to do that?
Oh, and why do you consider AI safety a “theoretical [or] unlikely” problem?
I think that there’s not much more most individuals can do about x-risk as a full-time pursuit than they can as aware and interested civilians.
I also think that an unfriendly AI Foom is a small part of the disaster space, compared to the current volume of unfriendly natural intelligence we face. An increase in the destructive power of small (or not-so-small) groups of humans seems 20-1000x more likely (and I generally lean toward the higher end of that range) to filter us than a single AI entity, or a small number of them, becoming powerful enough to do so.
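To make that ratio concrete (treating 20 and 1000 purely as the illustrative bounds above, and pretending for the moment that human misuse and AI Foom were the only two filters in play), a likelihood ratio λ in that range puts the human-misuse share of the filter risk at

$$\frac{P(\text{human})}{P(\text{human}) + P(\text{AI})} = \frac{\lambda}{\lambda + 1} \approx 0.95 \text{ to } 0.999,$$

which is what I mean by AI Foom being a small part of the disaster space.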
So it would be better to work on computer security? Or on education, so that we raise fewer unfriendly natural intelligences?
Also, AI safety research benefits AI research in general, and AI research in general benefits humanity. Again, only marginal contributions?
Or on healthcare or architecture or garbage collection or any of the billion things humans do for each other.
Some thought to far-mode issues is worthwhile, and you might be able to contribute a bit as a funder or hobbyist, but for most people, including most rationalists, it shouldn’t be your primary drive.