The “alignment problem” that humanity urgently needs to solve is precisely the problem of aligning cognitive work that can be leveraged to prevent the proliferation of technology that destroys the world. Once that is solved, humanity can afford to take as much time as it needs to solve everything else.
Hi Mitchell, what would be the best thing to read about MIRI’s latest thinking on this issue (what you call Plan B)?
I don’t actually know. I only found out about this a few months ago. Before that, I thought they were still directly trying to solve the problem of “Friendly AI” (as it used to be known, before “alignment” became a buzzword).
This is the thread where I learned about Plan B.
Maybe this comment sums up the new attitude:
Thanks Mitchell, that’s helpful.
I think we need a lot more serious thinking about Plan B strategies.