MIRI’s position seems to be that humans do need a lot of external support / error correction (see CEV) and this is a hard problem, but not so hard that it will likely turn out to be a blocking issue.
Note that Eliezer is currently more optimistic about task AGI than CEV (for the first AGI built), and I think Nate is too. I’m not sure what Benya thinks.
Oh, right, I had noticed that, and then forgot and went back to my previous model of MIRI. I don’t think Eliezer ever wrote down why he changed his mind about task AGI or how he is planning to use one. If the plan is something like “buy enough time to work on CEV at leisure”, then possibly I have much less disagreement on “metaphilosophical paternalism” with MIRI than I thought.