Not a direct response: it's been argued (e.g. I think Paul made this point in his 2nd 80k podcast interview?) that this isn't very realistic, because the low-hanging fruit of easy-to-attack systems will already have been picked by slightly less advanced AI systems. This wouldn't apply if you're *already* in a discontinuous regime (but then the argument becomes circular).
Also not a direct response: it seems likely that some AIs will be much more (or less) cautious than humans, because they may (e.g. implicitly) have very different discount rates. So AIs might take very risky gambles, which cuts both ways: we might get more sinister stumbles (a good thing), but they might also readily risk the earth (a bad thing).
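To make the discount-rate point concrete, here's a toy sketch (my own illustration, not from the podcast or the original post; all the numbers and the one-shot-gamble setup are assumptions): an agent that discounts the future steeply will accept a gamble that risks losing everything for a large immediate payoff, while a patient agent with the same beliefs won't.

```python
# Toy model (hypothetical numbers): how an agent's discount factor changes
# the appeal of a risky gamble vs. a safe steady payoff stream.

def discounted_value(per_step_payoff: float, discount: float, horizon: int) -> float:
    """Present value of a constant payoff stream under geometric discounting."""
    return sum(per_step_payoff * discount**t for t in range(horizon))

def gamble_value(p_win: float, win_now: float, discount: float, horizon: int) -> float:
    """Risky gamble: with prob p_win, a large immediate payoff plus the safe
    stream; with prob 1 - p_win, lose everything (payoff 0 forever)."""
    return p_win * (win_now + discounted_value(1.0, discount, horizon))

for discount in (0.5, 0.99):  # steep vs. shallow discounting
    safe = discounted_value(1.0, discount, horizon=1000)
    risky = gamble_value(p_win=0.5, win_now=20.0, discount=discount, horizon=1000)
    print(f"discount={discount}: safe={safe:.1f}, risky={risky:.1f}, "
          f"take gamble? {risky > safe}")
```

With a discount factor of 0.5 the safe stream is worth ~2 and the gamble ~11, so the impatient agent gambles; at 0.99 the stream is worth ~100 and the gamble ~60, so the patient agent declines. Same payoffs and probabilities, opposite behaviour.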
Yes.