There’s something elegantly self-validating about the pivotal act strategy (if it works as intended). Imagine you ate all the world’s compute with some magic-like technology. People might be mad at first, but they would also be amazed at what just happened. You would have conclusively demonstrated that AI research is incredibly dangerous.
Still, it seems almost impossible to envision medium-sized or even large-ish teams pulling this off in a way that actually works, as opposed to triggering nuclear war or accidentally destroying the world. I think there might be a use for pivotal acts once you’ve gotten a large player like a powerful government on board (and have improved/reformed their decision-making structure), or once you’ve become the next Amazon with the help of narrow AI. I don’t see the strategy working for a smaller lab, and I agree with the OP that the costs to trust-building options are large.