World 3 doesn’t strike me as a thing you can get in the critical period when AGI is a new technology. Worlds 1 and 2 sound approximately right to me, though the way I would say it is roughly: We can use math to better understand reasoning, and the process of doing this will likely improve our informal and heuristic descriptions of reasoning too, and will likely involve us recognizing that we were in some ways using the wrong high-level concepts to think about reasoning.
I haven’t run the characterization above by any MIRI researchers, and different MIRI researchers have different models of how the world is likeliest to achieve aligned AGI. Also, I think it’s generally hard to say what a process of getting less confused is likely to look like when you’re still confused.