On many useful cognitive tasks (chess, theoretical research, invention, mathematics, etc.), beginner/dumb/unskilled humans are closer to a chimpanzee/rock than to peak humans
All of these tasks require some amount of learning: AIXI can't play chess if it has never been told the rules or seen any other information about chess.
So a more reasonable comparison would probably involve comparing people of different IQs who have made comparable efforts to learn a topic.
Intelligence often doesn't look like solving the same problems better, but like solving new problems. In many cases, problems are almost boolean: either you can solve them or you can't. The problems you mentioned are all within the range of human variation, neither so trivial that any human can do them, nor so advanced that no human can do them.
Among humans, +6 SD g-factor humans do not in general seem as much more capable relative to +3 SD g-factor humans as +3 SD g-factor humans are relative to median humans.
This is a highly subjective judgement, and there is no particularly strong reason to think that human intelligence follows a Gaussian distribution. The more you select for humans with extremely high g factors, the more you Goodhart to the specifics of the g-factor tests. This Goodharting is relatively limited, but it is still present at +6 SD.
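For a sense of scale, here is a quick sketch of what a strictly Gaussian model of g would imply; the figures are just standard normal tail probabilities, not a claim about real populations:

```python
from scipy.stats import norm

# Upper-tail rarity under a strictly Gaussian model of g.
# If the real distribution departs from Gaussian in the tails, or the
# tests Goodhart at the extremes, these numbers mean very little.
for sd in (3, 6):
    tail = norm.sf(sd)  # P(Z > sd) for a standard normal
    print(f"+{sd} SD: roughly 1 in {1 / tail:,.0f}")

# prints roughly "1 in 741" and roughly "1 in a billion"
```

At +6 SD a Gaussian model predicts only a handful of such people alive, which is exactly the regime where test-specific Goodharting swamps whatever the test was supposed to measure.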
3.0. I believe that, for similar levels of cognitive investment, narrow optimisers outperform general optimisers on narrow domains.
I think this is both trivially true and pragmatically false. Suppose some self-modifying superintelligence needs to play chess. It will probably just write a chess algorithm and put most of its compute into that. This will be nearly equal to the same algorithm without the general AI attached. (It will probably be slightly worse at chess, because the superintelligence keeps an eye out in case something else happens: a pure chess algorithm can't notice a riot in the spectator stands, while a superintelligence would probably devote a little compute to checking for such possibilities.)
However, this is an algorithm written by a superintelligence, and it is likely to beat the pants off any human-written algorithm.
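A toy sketch of the allocation pattern described above; every name here is hypothetical, and the 1% monitoring budget is an arbitrary stand-in:

```python
# Toy sketch: a general agent hands almost all of its compute budget to a
# narrow solver, reserving a sliver for monitoring the wider environment.

def narrow_chess_search(budget: int) -> str:
    """Stand-in for a specialised chess engine: spends `budget` node
    evaluations and returns a move."""
    return f"best move found after {budget} evaluations"

def environment_emergency() -> bool:
    """Stand-in for situational awareness, e.g. noticing a riot
    in the spectator stands."""
    return False  # nothing unusual this tick

def general_agent_play(total_budget: int, monitor_fraction: float = 0.01) -> str:
    monitor_budget = int(total_budget * monitor_fraction)
    if environment_emergency():  # the small ongoing cost of generality
        return "handle the emergency instead of playing"
    # The remaining ~99% goes into the narrow optimiser, so the general
    # agent plays almost exactly as well as the standalone engine.
    return narrow_chess_search(total_budget - monitor_budget)

print(general_agent_play(1_000_000))
```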
4.1. I expect it to be much more difficult for any single agent to attain decisive cognitive superiority to civilisation, or to a relevant subset of civilisation.
Being smarter than civilization is not a high bar at all. Governments often make utterly dumb decisions, and the average person believes a load of nonsense. Some processes in civilization seem to run on a soft minimum of the intelligences of the individuals contributing to them; others run on the mean. Some processes, like the stock market, are hard for most humans to beat, but are still beaten a little by the experts.
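To make the "soft minimum" idea concrete, here is a minimal sketch; the temperature parameter and the sample IQ scores are illustrative assumptions, not anything from the original comment:

```python
import math

def soft_minimum(xs, temperature=10.0):
    """Smooth minimum via exp(-x/T) weighting: as T -> 0 this approaches
    min(xs); as T -> infinity it approaches the plain mean."""
    weights = [math.exp(-x / temperature) for x in xs]
    return sum(w * x for w, x in zip(weights, xs)) / sum(weights)

contributors = [85, 100, 100, 115, 145]  # hypothetical IQ scores
print(f"mean:         {sum(contributors) / len(contributors):.1f}")  # ~109
print(f"soft minimum: {soft_minimum(contributors):.1f}")  # ~90, dragged toward the weakest
```

A process running on the soft minimum barely benefits from adding a genius; a process running on the mean does.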
My intuition is that the level of cognitive power required to achieve absolute strategic dominance is crazily high.
My intuition is that the comparison to a +12 SD human is about as useful as comparing heavy construction equipment to top athletes. Machines usually operate on a different scale from humans. The +12 SD runner isn't that much faster than the +6 SD runner, especially because, as you reach the peaks of athletic performance, humans are running close to biological limits and the gap between top competitors narrows.