Honestly? If that is the case, just act as if you have some painless terminal disease with one or two years to live. Do a bunch of things you've always wanted to do: maybe try a bunch of LSD to see what that's about, skydive a bit, make peace with your parents, etc.
At the one-year mark, I don't think the Overton window contains any actions that would actually be effective at preventing AGI. Which leaves us with morally odious solutions like:
A coordinated strike against all AI labs in the US and Canada. But even that doesn't actually buy much time if all the key insights have already been published; it'll just mean that China gets there shortly after.
Provoking nuclear war between China, Russia, and the US while keeping some secret bunker base with enough survivors to outlast whatever fallout there is. Maybe relocate all AI safety researchers to New Zealand?
Release a deadly pandemic? Maybe use CRISPR to have it target people based on DNA, which you've collected by giving out free lemonade at AI conferences?
As you can see, the intersection between the set of pleasant solutions and the set of effective solutions is empty if we're at the T minus one year mark. All the effective solutions I can see involve killing lots of people over the belief that AI will be dangerous, which means you'd darn well better have an unshakeable degree of confidence in that idea, which I don't have.
I think there are pleasant and potentially effective measures:
Offer a free vacation to some top AI experts.
Label decaf coffee as regular and give it to the labs.
DDoS Stack Overflow.