“a considerable fraction of the remaining AI x-risk facing humanity stems from people pulling desperate (unsafe) moves with AGI to head off other AGI projects”
In your post “Pivotal Act” Intentions, you wrote that you disagree with contributing to race dynamics by planning to invasively shut down AGI projects, because the targeted AGI projects would, in reaction, try to maintain the ability to implement their own pet theories of how safety/alignment should work, leading to more desperation, more risk-taking, and less safety overall.
Could you give some very rough estimates here? How much more risk-taking do you expect as a function of how many prominent “AI safety”-affiliated people declare invasive pivotal act intentions? How much risk-taking do you expect in the alternative, where there are other pressures (economic, military, social, whatever) but no pressure from pivotal act threats? How much safety (probability of AGI not killing everyone) do you think this buys? You write:
“15% of AGI dev teams (weighted by success probability) would destroy the world more-or-less immediately”
What about non-immediate destruction, in each of those alternatives?
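To make sure I'm reading the 15% figure the way you intend, here is the interpretation I have in mind (my notation, not yours): if $p_i$ is the probability that AGI dev team $i$ is the first to succeed, and $D$ is the set of teams that would destroy the world more-or-less immediately upon succeeding, then

$$\frac{\sum_{i \in D} p_i}{\sum_{i} p_i} = 0.15.$$

If you mean something different by “weighted by success probability”, I'd be curious how you'd formalize it.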