Yes, it’s competently executed
Is it?
It certainly signals that the authors have a competent grasp of the AI industry and its mainstream models of what’s happening. But is it actually competent AI-policy work, even under the e/acc agenda?
My impression is that no, it’s not. It seems to live in an e/acc fanfic about a competent US racing to AGI, not in reality. It vaguely recommends doing a thousand things that would be nontrivial to execute if the Eye of Sauron were looking directly at them, and the Eye is very much not doing that. On the contrary, the wider Trump administration is doing things that directly contradict the most key recommendations here (energy, chips, science, talent, “American allies”), and this document seems to pretend this isn’t happening. A politically effective version of this document would have been written in a very different way; this one seems to be written mainly for entertainment/fantasizing purposes.
Like, it demonstrates that the people tasked with thinking about AI in the Trump administration have a solid enough understanding of the AI industry to recognize which policies would accelerate capability research. But that understanding hasn't translated into capability-positive policy decisions so far. Is there reason to think this plan's publication is going to turn that around...?
Is my take here wrong? I don't have much experience here; this is a strong opinion weakly held. (Addressing that question to @Zvi as well.)
Well, an aligned Singularity would probably be relatively pleasant, since the entities fueling it would consider causing this sort of vast distress a negative and try to avoid it. Indeed, if you trust them not to drown you, there would be no need for this sort of frantic grasping-at-straws.
An unaligned Singularity would probably also be more pleasant than this, since the entities fueling it would likely try to make it look aligned, and the span of time between the treacherous turn and everyone dying would likely be short.
This scenario covers a sort of "neutral-alignment/non-controlled" Singularity, where there's no specific superintelligent actor (or coalition) in control of the whole process, and it's instead guided by... market forces, I guess? With AGI labs continually releasing new models for private/corporate use, providing the tools/opportunities you can try to grasp to avoid drowning. I think this is roughly how things would go under "mainstream" models of AI progress (e.g., AI 2027). (I don't expect it to actually go this way; I don't think LLMs can power the Singularity.)