Christiano cares more about making aligned AIs that are competitive with unaligned AIs, whereas MIRI is more willing to settle for an AI with very narrow capabilities, even at the cost of competitiveness.
Looking at the transcript, it seems like “AI with very narrow capabilities” is referring to the “copy-paste a strawberry” example. It seems to me that the point of the strawberry example (see Eliezer’s posts 1, 2, and Dario Amodei’s comment here) is that by creating an AGI that can copy and paste a strawberry, we necessarily solve most of the alignment problem. So it isn’t the case that MIRI is aiming for an AI with very narrow capabilities (even task AGI is supposed to perform pivotal acts).
Someone from MIRI can chime in. I think that MIRI researchers are much happier to build AI that solves a narrow range of tasks, and isn’t necessarily competitive. I think I’m probably the most extreme person on this spectrum.