Yes! I actually just discussed this with one of my advisors (an expert on machine learning), and he told me that if he could get funding for it, he would definitely be interested in dedicating a good chunk of his time to researching AGI safety. (For any funders who might read this and might be interested in providing that funding, please reach out to me by email: Aryeh.Englander@jhuapl.edu. I’m going to try to reach out to some potential funders next week.)
I think that there are a lot of researchers who are sympathetic to AI risk concerns, but they either lack the funding to work on it or they don’t know how they might apply their area of expertise to do so. The former can definitely be fixed if there’s an interest from funding organizations. The latter can be fixed in many cases by reaching out and talking to the researcher.
There’s been discussion about there being a surplus of funding in EA and not enough people who want to get funded to do important work. If that is true, shouldn’t it be relatively easy for your presumably competent advisor to get such funding to work on AI safety?
I think it might be quite recent that mainstream academics are expressing this sentiment, probably because the harbingers are now obvious, and yes, this probably wouldn’t be that hard for certain EA funders to support. Or how much money does he actually want?
Hopefully. I have a feeling it won’t be so easy, but we’ll see.
If it ends up not being easy, that seems to me like it means we are in fact funding-constrained. Is that true, or am I missing something?
(The advisor in question is just one person. If it were only them who wanted to work in AI safety but couldn’t due to a lack of funds, that wouldn’t be a big deal. But I am assuming that there are lots of people in a similar boat, in which case the lack of funding would be an important problem.)
(I know this topic has been discussed previously. I bring it up again here because the situation with this advisor seems like a really good concrete example.)
My impression—which I kind of hope is wrong—has been that it is much easier to get an EA grant the more you are an “EA insider” or have EA insider connections. The only EA connection that my professor has is me. On the other hand, I understand the reluctance to some degree in the case of AI safety because funders are concerned that researchers will take the money and go do capabilities research instead.
Non-rhetorically, what’s the difference between AI risk questions and ordinary scientific questions, in this respect? “There aren’t clear / precise / interesting / tractable problems” is a thing we hear, but why do we hear that about AI risk as opposed to other fields with similarly ill-defined problems? Hasn’t a lot of scientific work started out by asking imprecise, intuitive questions, or no? Clearly there’s some difference.
In fact, starting a scientific field, as opposed to continuing one, is poorly funded in general; it’s not just AI risk. Another way to say this is that AI risk, as a scientific field, is pre-paradigmatic.