I think you’re saying that the fact that no historical feedback loop has ever destroyed the Earth (nor transformed it into a state which would not support human life) could be explained by the Anthropic Principle? Sure, that’s true enough. I was aiming more to build an intuition that it’s very common and normal for feedback loops to eventually reach a limit, since the historical record offers many examples.
Intuition aside: given the sheer number of historical feedback loops that have failed to destroy the Earth, it seems unavoidable that either (a) there are some fundamental principles at play that tend to place a cap on feedback loops, at least in the family of alternative universes that this universe has been sampled from, or (b) we have to lean on the Anthropic Principle very very hard indeed. It’s not hard to articulate causes for (a); for instance, any given feedback loop arises under a particular set of conditions, and once it has progressed sufficiently, it will begin to alter its own environment to the point where those conditions may no longer apply. (The forest fire consumes all available fuel, etc.)
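A standard toy model of this self-limiting dynamic is logistic growth, in which the growth rate of a quantity $x$ falls off as $x$ approaches a carrying capacity $K$ (the symbols here are the conventional ones for this model, not anything from the discussion above):

$$\frac{dx}{dt} = r x \left(1 - \frac{x}{K}\right)$$

For small $x$ this behaves like pure exponential feedback ($\dot{x} \approx r x$), but the growth of $x$ itself erodes the conditions that fueled it, and the loop caps out as $x$ approaches $K$.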
I think “feedback loops have a cap” is a much easier claim to defend than the implied “AI feedback loops will cap out before they can hurt humanity at an x-risk level”. That second claim is especially hard to defend if, e.g., general-intelligence abilities plus computational speed let the AI develop some other thing (like a really bad plague) that can hurt humanity at an x-risk level. Intelligence, itself, can figure out, harness, and accelerate the other feedback loops.