whether an AGI can start out significantly past human intelligence. If the answer is no, then it’s really not a significant danger. If the answer is yes, then it will be able to determine alternatives we can’t.
It’s perhaps also worth asking whether intelligence is as linear as all that.
Suppose an AGI is, on aggregate, less intelligent than a human, but is architected differently enough that it can exploit areas of mindspace that our cognitive architecture closes off to us. (This is analogous to how humans are better general-purpose movers-around than cars, yet cars perform certain important moving-around tasks far better than humans.) Such an AGI may well have a significant impact on our environment, much as the invention of cars did.
Whether this is a danger depends a lot on specifics, but in terms of pure threat capacity… well, anything that can significantly change the environment can significantly damage those of us living in that environment.
All of that said, it seems clear that the original context was focused on a particular set of problems, and concerned with the theoretical ability of intelligences to solve problems in that set. The safety/danger/effectiveness of intelligence in a broader sense is, I think, beside the OP’s point. Maybe.