Non-rhetorically, what’s the difference between AI risk questions and ordinary scientific questions, in this respect? “There aren’t clear / precise / interesting / tractable problems” is a thing we hear, but why do we hear it about AI risk rather than about other fields whose problems start out similarly ill-defined? Hasn’t a lot of scientific work begun with imprecise, intuitive questions, or no? Clearly there’s some difference.
In fact, starting a scientific field, as opposed to continuing one, is poorly funded in general; it’s not just AI risk. Another way to say this is that AI risk, as a scientific field, is pre-paradigmatic.