A helpful way of thinking about 2 is to imagine something less intelligent than humans trying to predict how humans would overpower it.
You could imagine a gorilla thinking “there’s no way a human could overpower us. I would just punch it if it came into my territory.”
The actual way a human would overpower it is literally impossible for the gorilla to understand (invent writing, build a global economy, invent chemistry, build a tranquilizer dart gun...)
The AI in the AI takeover scenario is that jump of intelligence and creativity above us. There’s literally no way a puny human brain could predict what tactics it would use. I’d imagine it almost definitely involves inventing new branches of science.
I’d suggest there may be an upper bound to intelligence, because intelligence is bound by time and any AI lives in time just as we do. It can’t gather information from the environment any faster than the environment yields it. It cannot automatically gather all the right information. It cannot know what it does not know.
The system of information gathering, brain propagation, and cellular change runs at a certain speed for us. We cannot know whether it is even possible to run faster.
One of my "magical thinking" criticisms of AI is the assumption that it suddenly becomes virtually omniscient. Is that AI observing mold cultures and about to discover penicillin? Is it doing some extremely narrow gut bacteria experiment to reveal the source of some disease? No, it's not, because there are infinite experiments to run. It cannot know what it does not know. Some discoveries are Petri dishes and long stretches of time in the physical world, and they require a level of observation the AI may not possess.
Yes, physical constraints do impose an upper bound. However, I would be shocked if human-level intelligence were anywhere close to that upper bound. The James Webb Space Telescope has an upper bound on the level of detail it can see, based on things like available photons and diffraction, but it's still far beyond what we can detect with the naked human eye.
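To put rough numbers on that gap, here's a minimal back-of-the-envelope sketch using the Rayleigh diffraction criterion; the wavelengths and aperture sizes are assumed typical values, not exact instrument specs:

```python
# Rough comparison of diffraction-limited angular resolution
# using the Rayleigh criterion: theta ~ 1.22 * wavelength / aperture.
ARCSEC_PER_RAD = 206_265  # arcseconds in one radian

def rayleigh_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Smallest resolvable angle, in arcseconds, for a circular aperture."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

# JWST: ~6.5 m mirror observing near-infrared light (~2 micrometres, assumed).
jwst = rayleigh_limit_arcsec(2.0e-6, 6.5)

# Naked eye: ~5 mm pupil in visible light (~550 nm, assumed).
eye = rayleigh_limit_arcsec(550e-9, 5e-3)

print(f"JWST diffraction limit: {jwst:.3f} arcsec")   # ~0.08 arcsec
print(f"Naked-eye limit:        {eye:.1f} arcsec")    # ~28 arcsec (optics alone)
print(f"Ratio:                  ~{eye / jwst:.0f}x finer")
```

Both instruments hit a hard physical ceiling, yet one resolves detail hundreds of times finer than the other, which is the point: an upper bound existing doesn't mean we're near it.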