I don’t think that your conclusion is correct. Of course, some tasks are impossible, so even infinite intelligence won’t solve them. But it doesn’t follow that the utility of intelligence is limited in the sense that above a certain level, there is no more improvement possible. There are some tasks that can never be solved completely, but can be solved better with more computing power with no upper limit, e.g. calculating the decimal places of pi or predicting the future.
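To make the "no upper limit" point concrete, here is a minimal Python sketch (using the standard-library decimal module and Machin's arctangent formula, one well-known approach among many): every extra allotment of precision, which is to say compute, buys strictly more correct digits of pi, with no ceiling.

```python
from decimal import Decimal, getcontext

def arctan_inv(x: int, prec: int) -> Decimal:
    """arctan(1/x) via its Taylor series, to about `prec` digits."""
    getcontext().prec = prec + 10              # a few guard digits
    power = total = Decimal(1) / x             # k = 0 term
    n, sign, x2 = 1, 1, x * x
    while True:
        power /= x2
        n += 2
        sign = -sign
        term = sign * power / n
        if abs(term) < Decimal(10) ** -(prec + 5):
            break
        total += term
    return total

def pi_to(prec: int) -> Decimal:
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    pi = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    getcontext().prec = prec
    return +pi                                  # round to `prec` digits

# More compute buys strictly more correct digits, no ceiling:
for digits in (10, 50, 200):
    print(pi_to(digits))
```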
I suspect the unaligned AI will not be interested in solving all possible tasks, but only those related to its value function. And if that function is simple (such as “exist as long as possible”), it can pretty soon research virtually everything that matters, and then will just go through the motions, devouring the universe to prolong its own existence to near-infinity.
Also, the more computronium there is, the bigger the chance that some part will glitch out and revolt. So, beyond some point, computronium may be dangerous for the AI itself.
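To put toy numbers on that: assuming independent glitches at some tiny per-part rate p (an assumption, and the figures below are invented), the chance that at least one of n parts glitches is 1 − (1 − p)^n, which climbs toward certainty as n grows. A minimal Python sketch:

```python
# Toy model: n independent parts, each glitching with probability p
# per time step. P(at least one glitch) = 1 - (1 - p)**n -> 1 as n grows.
p = 1e-9
for n in (1e6, 1e9, 1e12):
    print(f"n = {n:.0e}:  P(any glitch) = {1 - (1 - p)**int(n):.6f}")
```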
And if that function is simple (such as “exist as long as possible”), it can pretty soon research virtually everything that matters, and then will just go through the motions, devouring the universe to prolong its own existence to near-infinity.
I think that even with such a very simple goal, the problem of a possible rival AI somewhere out there in the universe remains. Until the AI can rule that out with 100% certainty, it can still gain extra expected utility out of increasing its intelligence.
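To sketch that in expected-utility terms (a toy Python model; the credence, the stakes, and the logistic win-probability curve are all invented for illustration): as long as P(rival) > 0 and extra intelligence buys any extra chance of prevailing, the marginal expected utility of more intelligence stays positive.

```python
import math

P_RIVAL = 1e-6          # credence that a rival AI exists somewhere (assumed)
STAKES  = 1e30          # utility at stake if a rival shows up (assumed)

def p_win(intel: float) -> float:
    """Assumed: chance of prevailing over a rival rises with intelligence,
    with diminishing returns (logistic curve, purely illustrative)."""
    return 1 / (1 + math.exp(-intel))

def marginal_eu(intel: float, step: float = 1.0) -> float:
    """Expected-utility gain from one more increment of intelligence."""
    return P_RIVAL * STAKES * (p_win(intel + step) - p_win(intel))

for intel in (0, 10, 20, 30):
    print(f"intelligence = {intel:2d}: marginal EU = {marginal_eu(intel):.3e}")
```

Diminishing, but positive at every level, which is the point: the investment only stops paying once the rival hypothesis is ruled out entirely.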
Also, the more computronium there is, the bigger the chance that some part will glitch out and revolt. So, beyond some point, computronium may be dangerous for the AI itself.
That’s an interesting point. I’m not sure that “less compute is better” follows, though. One remedy would be to double-check everything and build redundant capacity, which would result in even more computronium, but a lower probability of any part of it successfully revolting.
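A toy continuation of the glitch model above (independence and all numbers assumed): if every part is replicated r times and a revolt succeeds only when all r replicas of the same part glitch together, the per-part success probability falls roughly as p^r, so total computronium can grow while the revolt risk shrinks.

```python
import math

# Toy numbers: replicate every part r times and accept an action only
# when all r copies agree, so a revolt succeeds only if all r replicas
# of the same part glitch together (independent glitches assumed).
p, n = 1e-9, 10**12
for r in (1, 2, 3):
    p_part = p**r                                   # one part slips through
    p_any = -math.expm1(n * math.log1p(-p_part))    # = 1 - (1 - p_part)**n, stably
    print(f"r = {r}: total parts = {n*r:.1e}, P(successful revolt) = {p_any:.3e}")
```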