There are two common arguments I hear against spending significant personal effort on the topic of AI risk. I don’t necessarily endorse either of these (though a variant of #1 is my biggest fear on the topic—there’s not much about prevention of AI divergence that isn’t pretty nightmarish when applied to prevention of human value divergence).
1) There’s no coherent account of how this work addresses any actual risk, so there’s little reason to believe that working on AI risk actually reduces it. Working on decision theory and moral philosophy may be useful for other things, of course.
2) On the margin (of your effort and results), there are other topics where your personal contribution would be greater and would have a larger overall impact on the future.