MIRI has stopped all funding of safety research (to focus on advocacy), explaining that the research they have been funding (which does not have the problem of helping the AI project more than it helps the AI safety project) cannot bear fruit quickly enough to materially affect our chances of survival.
I don’t see how that’s relevant to my comment.
MIRI has plenty of stored money that it could use to continue funding technical safety research, but MIRI leadership assesses that it is not worth funding (even though MIRI was the first funder of AI safety research). MIRI leadership has enough experience and a good enough track record that this assessment should have some bearing on any conversation about “other (less funded & staffed) approaches” to AI safety.
Do you see the relevance now?
Yes, I do. I agree with Eliezer and Nate that the work MIRI was previously funding likely won’t yield many useful results, but I don’t think it’s correct to generalize that to all agent foundations everywhere. E.g. I’m bullish on natural abstractions, singular learning theory, comp mech, incomplete preferences, etc., none of which (except natural abstractions) was on Eliezer or Nate’s radar to my knowledge.
In the future I’d also recommend actually arguing for the position you’re trying to take, instead of citing an org you trust. You should probably trust Eliezer, Nate, and MIRI far less than you do if you’re unable to argue for their position without reference to the org itself. In this circumstance I can see where MIRI is coming from, so it’s no problem on my end, but if I didn’t know where MIRI was coming from, I would be pretty annoyed. I also expect my comment here won’t change your mind much, since you probably have a different idea of where MIRI is coming from, and your crux may not be any object-level point but the meta-level question of how good Eliezer & Nate are at judging research directions, which determines how much you defer to them & MIRI.