I should note that I’m not sure whether OpenAI is a point against this claim or not (I think not but for complicated reasons). My vague impression is that they do tend to have their own set of assumptions, and are working on reasonably concrete things (I think those assumptions are wrong but am not that confident).
I do lean towards OpenAI and MIRI both being fully funded; OpenAI just seems to be getting a lot more funding due to Elon’s involvement and generally being more “traditionally prestigious”.
Further thoughts here:
Insofar as I think OpenAI shouldn’t be funded, it’s because I think it might be actively harmful.
(epistemic status: I am not very informed about the current goings-on at OpenAI; this is a random person repeating hearsay and making the best guesses they can without doing a thorough review of their blog, let alone talking to them)
The reason it might be actively harmful is that a lot of their work seems to be actually developing AI rather than doing AI safety research, and sharing AI developments with the world in ways that might accelerate progress.
MIRI is the only organization I know of working directly on AI safety that I’ve heard talk extensively about differential technological development, i.e., doing research that would only help build Aligned AGI and doesn’t accelerate generic AI that might feed into Unaligned AGI.