Seems pretty reasonable to me. I mean, there are still a number of places where I think, “sure, but how exactly do we do that step?”.
One thing that has been worrying me is that we really don’t seem ready to figure out the ethical aspects of wildly out-of-distribution stuff like uploaded humans and nanotech. So I agree that the goal of “go just barely far enough out of distribution to prevent unaligned AGIs from being built, then go slow and think things through carefully” is a good one. I also agree that “actually get the AI to model and care about the real world” seems like a necessary precursor to the “stop other AGIs” goal.
Some problems that concern me are things like, “assuming that many actors are racing towards AGI, and that some of them will be state actors with well-guarded secret labs, doesn’t this seem like a dangerously high power level to have our hopefully-but-not-definitely-aligned AGI operating at?” Also, getting that much of a lead in the capabilities race seems impractical: by the time we had it, the competitors would likely have close-to-insanely-powerful proto-AGIs of their own, raising the bar for defeating them even higher.
So I worry that trying to race for a pivotal act is not the best path forward. Can you think of other ways forward?