This feels like a rather different attitude compared to the “rocket alignment” essay. They’re maybe both compatible, but the emphasis seems very different.

Agreed!

In terms of MIRI’s 2017 ‘strategic background’ outline, I’d say that these look like they’re in tension because they’re intervening on different parts of a larger plan. MIRI’s research has historically focused on:
For that reason, MIRI does research to intervene on step 8 from various angles, such as by examining holes and anomalies in the field’s current understanding of real-world reasoning and decision-making. We hope thereby to reduce our own confusion about alignment-conducive AGI approaches and ultimately to help make it feasible for developers to construct adequate “safety-stories” in an alignment setting. As we improve our understanding of the alignment problem, our aim is to share new insights and techniques with leading or up-and-coming developer groups, who we’re generally on good terms with.
I.e., our perspective was something like ‘we have no idea how to do alignment, so we’ll fiddle around in the hope that new theory pops out of our fiddling, and that this new theory makes it clearer what to do next’.
In contrast, Bob in the OP isn’t proposing a way to try to get less confused about some fundamental aspect of intelligence. He’s proposing a specific plan for how to actually design and align an AGI in real life:
“Let’s suppose we had a perfect solution to outer alignment. I have this idea for how we could solve inner alignment! First, we could get a human-level oracle AI. Then, we could get the oracle AI to build a human-level agent through hardcoded optimization. And then—”
This is also important, but it’s part of planning for step 6, not part of building toward step 8 (or prerequisites for 8).
“Bob isn’t proposing a way to try to get less confused about some fundamental aspect of intelligence”
This might be what I missed. I thought he might be. (E.g., “let’s suppose we have” sounds to me more like a brainstorming “mood” than a solution proposal.)