Though I think “how hard is world takeover” is mostly a function of the first two axes?
I claim it is almost entirely orthogonal. Examples of concrete disagreements here are easy to find once you go looking:
If AGI tries to take over the world, everyone will coordinate to resist
Existing computer security works
Existing physical security works
I claim these don’t reduce cleanly to the form “it is possible to do [x]”, because at a high level this mostly reduces to “the world is not on fire because:”
existing security measures prevent attacks effectively (not a vulnerable world)
vs.
existing law enforcement deters attackers effectively (vulnerable world)
existing people are mostly not evil (vulnerable world)
There is some projection onto the “how feasible are things” axis, where we don’t have very good existence proofs:
can an AI convince humans to perform illegal actions?
can an AI write secure software to prevent a counter-coup?
etc.
These are all much, much weaker capabilities than anything involving nanotechnology or other “indistinguishable from magic” scenarios.
And of course Meta makes everything worse. There was a presentation at Black Hat or DEF CON by one of their security guys about how it’s easier to go after attackers than to close security holes. In this way they contribute to making the world more vulnerable. I’m having trouble finding it, though.