I think this sort of view mythologizes “intelligence” too much. I used to think along similar lines too, but then I tried to figure out how it works, and… Intelligence isn’t some sort of arcane, philosophically exalted substance, and superintelligences are not gods or Lovecraftian entities.
It’s just an algorithm. Probably some sort of general-purpose planning procedure hooked up to a (structure isomorphic to a hierarchically-arranged set of) utility function(s), implemented in an approximate manner due to computational limitations, with lots of little optimizations like cached heuristics thrown in. Nothing worth worshiping, certainly.
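To make that concrete, here is a toy sketch of the decomposition I have in mind: bounded lookahead over actions, scored by a utility function, with a memoized evaluator standing in for "cached heuristics." Every name, the action set, and the world model are invented for illustration; this is not a claim about how any actual mind or AI system is built.

```python
# Illustrative sketch only: a toy "planner + utility function + cached heuristics"
# decomposition. All specifics here are hypothetical.

from functools import lru_cache

ACTIONS = ("wait", "explore", "build")   # toy action set
MAX_DEPTH = 3                            # stands in for limited compute


def transition(state: int, action: str) -> int:
    """Toy world model: how a state changes under an action."""
    return state + {"wait": 0, "explore": 1, "build": 2}[action]


def utility(state: int) -> float:
    """The thing the planner is 'hooked up to': a scalar preference over outcomes."""
    return -abs(10 - state)              # prefer states near 10


@lru_cache(maxsize=None)                 # "cached heuristics": memoize evaluations
def evaluate(state: int, depth: int) -> float:
    """Approximate value of a state via depth-limited lookahead."""
    if depth == 0:                       # approximation forced by the compute budget
        return utility(state)
    return max(evaluate(transition(state, a), depth - 1) for a in ACTIONS)


def plan(state: int) -> str:
    """General-purpose planning procedure: pick the action with the best lookahead value."""
    return max(ACTIONS, key=lambda a: evaluate(transition(state, a), MAX_DEPTH - 1))


print(plan(0))  # "build": the quickest route toward the preferred region of state space
```

The only point of the sketch is that each piece on the list – search, preferences, approximation, caching – is an ordinary algorithmic component, which is exactly the deflationary claim being made about "intelligence."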
Superintelligences would be “superior” to us only in the sense of having more compute available to them and, say, larger working-memory bandwidth. They would think dramatically faster, yes, and draw inferences that wouldn’t easily occur to us. But they would not be somehow more general than we are, because human general intelligence is already as “general” as an intelligence can be. We can comprehend any mathematical object; everything in this universe is describable in mathematical terms, and any concept a superintelligence can invent will be made of math as well; hence we would, in principle, be able to understand anything a superintelligence thinks.
Granted, for particularly complex ideas that would take a lot of time – the same way it would take you a while to explain quantum field theory to Newton. But that is the relevant comparison here: Newton to a modern physicist, not cats to humans.