I agree this is plausible—though in the foundationality/dependency bucket I also wouldn't rule out any of:

- a misaligned AGI just straight-up appropriating hardware and executing a coup, bypassing existing software/AI infrastructure
- a latently deceptive AGI itself becoming 'foundational' in the sense above, with large amounts of value dependent on its distribution, perhaps mainly through unwitting human aid
- emotional dependence on, and welfare concern for, non-dangerous AI transferring to later systems and hamstringing humanity's chances of cooperating to constrain later, dangerous deployments