I applaud the post. I think this is a step in the right direction of trying to consider the whole problem realistically. I think everyone working on alignment should have a written statement like this of a route to survival and flourishing; it would help us collectively improve our currently-vague thinking, and it would help better target the limited effort we have to devote to alignment research.
My statement would be related but less ambitious. Yours is what we should do if the clarity and will existed; I really hope there’s a viable route through the more limited things we realistically could do.
I’m afraid most of your proposed approaches still fall short of being fully realistic, given the inefficiencies of public debate and government decision-making. Fortunately, I think there is a route to survival and autonomy that’s a less narrow path; I have a draft in progress on my proposed plan, working title “a coordination-free plan for human survival”. Getting the US gov’t to nationalize AGI labs does sound plausible, but it seems unlikely to happen until they’ve already produced human-level AGI or are near to doing so.
I think my proposed path differs based on two cruxes about AGI, and probably some others about how government decision-making works. First, my timelines include fairly short ones, like the roughly three years Aschenbrenner and some other OpenAI insiders hold. Second, I see general AGI as actually being easier than superhuman tool AI: once real continuous, self-directed learning is added to foundation-model agents (it already exists but needs improvement), having an agent learn with human help becomes an enormous advantage over human-designed tool AI.
There’s much more to say, but I’m out of time, so I’ll leave it at that and hope to come back with more detail.