Agree—I think people need to be prepared for “try-or-die” scenarios.
One unfun one I’ll toss into the list: “Company A is 12 months from building Cthulhu, and governments truly do not care and there is extremely strong reason to believe that will not change in the next year. All our policy efforts have failed, our existing technical methods are useless, and the end of the world has come. Everyone report for duty at Company B, we’re going to try to roll the hard six.”
If Company A is 12 months from building Cthulhu, we fucked up upstream. Also, I don’t understand why you’d want to play the AI arms race—you have better options. They expect an AI arms race. Use other tactics. Get into their OODA loop.
Unsee the frontier lab.
...yes? I think my scenario explicitly assumes that we’ve fucked up upstream in many, many ways.
Oh, by that I meant something like “yeah, I really think it is not a good idea to focus on an AI arms race”. See also: Slack matters more than any other outcome.