When I referred to pivotal acts, I meant the use of extremely powerful enforcement tools, of the sort implied in AGI Ruin. That is, enforcement tools that make an actual impact in extending timelines[1]. Perhaps I should adopt a more precise term for this from now on.
It is hard for me to imagine how there could be consensus within a US government organization capable of launching a superhuman-enforcement-tool-based pivotal act (such as a three-letter agency) to initiate a moratorium, much less consensus across the US government, or between the US and the EU (especially given the rather interesting strategy the EU is trying with its AI Act).
Given this belief, I continue to consider all superhuman-enforcement-tool-based pivotal acts unilateral. My use of the word “unilateral” points to the fact that the organizations and people who currently have non-trivial influence over the state of the world and its future would almost entirely be blindsided by the pivotal act, resulting in a destruction of trust, chaos, and increased conflict. And I currently believe this is actually more likely to increase P(doom), or existential risk for humanity, even if it extends the foom timeline.
[1] Although not preventing ASI creation entirely. The destruction of humanity’s potential is also an existential risk, and our resulting inability to create a utopia would be too painful to bear.