I think “feedback loops have a cap” is a much easier claim to defend than the implied “AI feedback loops will cap out before they can hurt humanity at an x-risk level”. That second claim is especially hard to defend if, e.g., general-intelligence abilities + computational speed let the AI develop some other thing (like a really bad plague) that can hurt humanity at an x-risk level. Intelligence, itself, can figure out, harness, and accelerate the other feedback loops.