I think I’d be happy with a summary of persistent disagreement where Jonah or Scott said, “I don’t think MIRI’s efforts are valuable because we think that AI in general has made no progress on AGI for the last 60 years / I don’t think MIRI’s efforts are priorities because we don’t think we’ll get AGI for another 2-3 centuries, but aside from that MIRI isn’t doing anything wrong in particular, and it would be an admittedly different story if I thought that AI in general was making progress on AGI / AGI was due in the next 50 years”.
I think that your paraphrasing

I don’t think MIRI’s efforts are valuable because I think that AI in general has made no progress on AGI for the last 60 years, but aside from that MIRI isn’t doing anything wrong in particular, and it would be an admittedly different story if I thought that AI in general was making progress on AGI.

is pretty close to my position.

I would qualify it by saying:

I’d replace “no progress” with “not enough progress for there to be a known research program with a reasonable chance of success.”

I have high confidence that some of the recent advances in narrow AI will contribute (whether directly or indirectly) to the eventual creation of AGI (contingent on this event occurring), just not necessarily in a foreseeable way.

If I discover that there’s been significantly more progress on AGI than I had thought, then I’ll have to reevaluate my position entirely. I could imagine updating in the direction of MIRI’s FAI work being very high value, or I could imagine continuing to believe that MIRI’s FAI research isn’t a priority, for reasons different from my current ones.
Agreed-on summaries of persistent disagreement aren’t ideal, but they’re more conversational progress than usually happens, so… thanks!