How would you build such an AI? Most, if not all, proposals for developing a super-human AI require extensive feedback between the AI and its environment. A machine cannot iteratively learn to become super-intelligent if it has no way of testing self-improvements against the real universe and getting feedback from its operators, can it?
I’ll allow that if an extremely computationally expensive simulation of the real world were available, it is at least conceivable that the AI could iteratively make itself smarter by testing improvements against the simulation.
However, this poses a problem. At some point N years from today, it is predicted that we will have computer hardware advanced enough to support a super-intelligent AI. (N can be negative, for those who believe that day is in the past.) So we need some amount of computational power X; I think the Whole Brain Emulation roadmap can give you a guesstimate for X.
But to also simulate enough of the universe, in sufficient detail, for the AI to learn against it, we need an additional amount of computational power Y. Y is a big number, and most likely bigger than X. Thus, there will be years (decades? centuries?) during which X is available to a sufficiently well-funded group, but X + Y is not.
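To make that window concrete, here is a minimal back-of-the-envelope sketch. Everything in it is an illustrative assumption, not an estimate from the Whole Brain Emulation roadmap: the hardware doubling time, the ratio Y/X, and the smooth exponential growth itself.

```python
# Back-of-the-envelope sketch of the "X is affordable but X + Y is not" window.
# All numbers here are illustrative assumptions, not estimates from the
# Whole Brain Emulation roadmap.

import math

DOUBLING_YEARS = 1.5   # assumed doubling time for affordable compute (Moore's-law-ish)
X = 1.0                # normalize the AI's own compute requirement to 1
Y_OVER_X = 1_000.0     # assume the world simulation costs 1000x the AI itself

# If affordable compute grows as 2**(t / DOUBLING_YEARS), then the delay between
# affording X and affording X + Y is log2((X + Y) / X) doubling periods.
lag_years = DOUBLING_YEARS * math.log2((X + Y_OVER_X * X) / X)
print(f"Window where X is affordable but X + Y is not: ~{lag_years:.1f} years")
# With these assumptions the window is ~15 years; at Y/X = 10**6 it grows to ~30.
```

Note that under exponential growth the window only scales with log(Y/X), so "centuries" would require either an astronomically larger Y or much slower hardware progress than assumed here.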
It’s entirely reasonable to suppose that we will have to deal with AIs (and survive them...) before we ever have the ability to build this kind of box.