What about the possibility of persuading the top several biggest actors (DeepMind, FAIR, etc.) to agree to something like that?
My understanding is that this has been tried, at various levels of strength, ever since OpenAI published its charter. My sense is that MIRI's idea of "safety-conscious" looks like this, which it guessed was different from OpenAI's sense; I kind of wish that had been a public discussion back in 2018.