My vague impression was that FHI was skeptical about the value of openness in AI development. Is that incorrect? AI strikes me as a “dual use” technology analogous to nuclear physics, which can be used for both nuclear power plants (benign) and nuclear bombs (not benign). I’m not sure whether it’s good to make dual-use technologies public. Also, if you believe in something like the capabilities vs. alignment model, it seems like you’d want people working on alignment to collaborate more (to speed up alignment research), but you’d prefer that those working on capabilities fumble alone in the dark?
I suppose one bright spot is that insofar as ML researchers believe in something like the capabilities vs. alignment model, they will naturally be incentivized to keep capabilities research to themselves, but to publish alignment research in order to help others avoid triggering an unfortunate accident?
Edit: sorry, I see you addressed this; unfortunately I don’t see a delete comment button.