Insofar as I think OpenAI shouldn’t be funded, it’s because I think it might be actively harmful.
(epistemic status: I am not very informed about the current goings on at OpenAI, this is a random person rambling hearsay and making the best guesses they can without doing a thorough review of their blog, let alone talking to them)
The reason it might be actively harmful is that much of their work seems to be actually developing AI rather than doing AI safety research, and they share AI developments with the world that might accelerate progress.
MIRI is the only organization I know of working directly on AI safety that I’ve heard talk extensively about differential technological development — i.e., doing research that would only help build Aligned AGI, and that doesn’t accelerate generic AI capabilities that might feed into Unaligned AGI.