I think you present a good argument for plausibility, but it would take a stronger argument for me to think this is likely to be important.
You mention proofs. I imagine they’re correct, and that they rely on infinite time passing. Everything that’s possible will happen given infinite time; whether this would happen before the heat death of the universe is the more relevant question.
For this to happen on a timescale that matters, it seems you’re positing an incompetent superintelligence. It hasn’t devoted enough of its processing to monitoring for these effects and correcting them when they happen. As a result, it eventually fails at its own goals.
This seems like it would only happen to an ASI with particular blind spots, despite its intelligence.
This counts as disagreeing with some of the premises—which ones in particular?
Re “incompetent superintelligence”: denotationally yes, connotationally no. Yes in the sense that its competence is insufficient to keep the consequences of its actions within the bounds of its initial values. No in the sense that the purported reason for this failure is that the task is categorically impossible, and so cannot be solved with better resource allocation.
To be clear, I am summarizing arguments made elsewhere, which posit neither infinite time passing nor timescales too long to matter.