I find this thought pattern frustrating: that these AIs possess unimaginable magic powers. Even with our limited brains, we can imagine all the way past the current limits of physics, including potential worlds where the AI could manipulate space-time in ways we don’t know how to.
I’ve seen people imagining computronium and omni-universal computing clusters; figuring out ways to generate negentropy; literally rewriting the laws of the universe; bootstrapped nano-factories; using the principle of non-locality to effect changes at the speed of light with only packets of energy. What additional capabilities do they need?
FAI will be unpredictable in what it does and how, but we’ve already imagined outcomes and capabilities past anything achievable, into what amounts to omnipotence.
I would be extremely surprised if a superintelligence doesn’t devise physical capabilities that are beyond science fiction and go some way into fantasy. I don’t expect it to be literally omnipotent, but to at least have Clarkean “sufficiently advanced technology”. We may recognise some of its limits, or we may not.
“Computronium” just means an arrangement of matter that does the most effective computation possible given the constraints of physical law with available resources. It seems reasonable to suppose that technology created by a superintelligence could approach that.
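For concreteness, the “constraints of physical law” appealed to here can be made quantitative. Below is a minimal sketch (my own illustration, not from the comment) of two standard bounds often cited in computronium discussions: the Landauer limit on the energy cost of erasing one bit, and the Bremermann limit on how many bits per second a given mass can process. Treating these as the ceiling for any computing substrate is the framing assumption.

```python
# Hedged illustration: two standard physical bounds on computation.
# The constants are standard physics; their use as a ceiling on any
# possible computing substrate ("computronium") is the assumption.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def landauer_limit(temp_kelvin: float) -> float:
    """Minimum energy (J) to erase one bit at the given temperature."""
    return K_B * temp_kelvin * math.log(2)

def bremermann_limit(mass_kg: float) -> float:
    """Upper bound on bits/second processable by a given mass: m*c^2/h."""
    return mass_kg * C**2 / H

print(f"Landauer @ 300 K: {landauer_limit(300):.3e} J/bit")
print(f"Bremermann, 1 kg: {bremermann_limit(1.0):.3e} bit/s")
```

At room temperature the Landauer cost is about 2.9e-21 joules per bit, and one kilogram of matter tops out around 1.36e50 bits per second; the gap between those numbers and today’s hardware is the headroom the comment is gesturing at.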
Bootstrapped nano-factories are possible with known physics, and biology already does most of it. We just can’t do the engineering to generalize it to things we want. To suppose that a superintelligence can’t do the engineering either seems much less justified than supposing that it can.
The rest are far more speculative, but I don’t think any of them can be completely ruled out. I agree that the likelihood of any single one of these is tiny, but I disagree in that I expect the aggregate of “capabilities that are near omnipotent by our standards” to be highly likely.
I posit that we’ve imagined basically everything available with known physics, and extended into theoretical physics. We don’t need to capitulate to the ineffability of a superintelligence; known plus theoretical capabilities already suffice to absolutely dominate if managed by an extremely competent entity.
I find this thought pattern frustrating: that these AIs possess unimaginable magic powers.
I do think that an advanced enough AGI might possess powers that are literally unimaginable for humans, because of our cognitive limitations. (Can a chimpanzee imagine a Penrose mechanism?)
Although that’s not the point of my post. The point is that the FAI might have plans of such depth, scope, and complexity that humans could perceive some of its actions as hostile (e.g. global destructive mind uploading, as described in the post). I’ve edited the post to make it clearer.
I agree with the conclusions now that you’ve brought up the point of the incomprehensibility of an advanced mind: FAI almost certainly will have plans that we deem hostile but that are to our benefit. Monkeys being vaccinated seems like a reasonable analogy. I want us to move past “we couldn’t imagine their tech” to the more reasonable “we couldn’t imagine how they did their tech”.