It’s a bit of a strange question—why care if humans will solve everything that an AI will solve?
Because I started out convinced that human cognition is qualitatively closer to superintelligent cognition than it is to many expressions of animal cognition (I find the “human–ant dynamic” a very poor analogy for the difference between human cognition and superintelligent cognition).
But ok.
Suppose you set an AI to solving a really big instance of a problem it’s really good at, an instance so big that solving it takes an appreciable fraction of the lifespan of the universe.
In that case you already seem to be granting that it may take humans much longer to solve it, which I would assume could mean that humans run out of time, or out of resources in the universe, before they solve it.
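To make “an appreciable fraction of the lifespan of the universe” concrete, here’s a rough back-of-the-envelope sketch; the throughput figure is an arbitrary assumption, purely for illustration:

```python
# Back-of-the-envelope: how long does an exhaustive search over 2**n candidates
# take, assuming (arbitrarily) a machine that checks 10**18 candidates per second?

CHECKS_PER_SECOND = 1e18        # assumed throughput, not a real hardware spec
AGE_OF_UNIVERSE_S = 4.35e17     # ~13.8 billion years, in seconds

for n in (80, 100, 120, 140):
    seconds = 2**n / CHECKS_PER_SECOND
    print(f"n = {n:3d}: {seconds:9.2e} s = {seconds / AGE_OF_UNIVERSE_S:9.2e} universe-lifetimes")
```

Somewhere around n ≈ 120 the search already outlasts the universe, so the instance doesn’t have to be exotic for this regime to kick in.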
This may make the AI system more powerful than humans in the sense in which I defined “powerful”, but it doesn’t match my intuitive notion of “more powerful”. It feels like the AI system is still just more effective/efficient.
When I mentioned “the expected lifetime of the universe”, I was trying to gesture at monkeys typing randomly at a typewriter eventually producing the works of Shakespeare.
There are problems that humans have no better way of solving than random brute-force search. But I think any problem that human civilisation (starting from 2022) would spend millennia trying to solve only via random brute force is probably a problem that superintelligences also have no option but to attack by random brute force.
Superintelligences cannot learn maximum-entropy distributions either; when there is no structure in the data to exploit, being smarter doesn’t help.
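A toy sketch of that point, with a simple pattern-matching predictor standing in for “something much smarter than guessing” (the predictor, the context length, and the sample sizes are all just illustrative):

```python
import random

# On an i.i.d. uniform (maximum-entropy) bit stream there is no structure to
# exploit, so even a predictor that hunts for patterns only hits chance accuracy.

random.seed(0)
bits = [random.getrandbits(1) for _ in range(3_000)]   # max-entropy source

def predict_next(history, k=6):
    """Guess the next bit as the most common continuation of the last k bits."""
    if len(history) <= k:
        return 0
    ctx = tuple(history[-k:])
    follows = [history[i + k] for i in range(len(history) - k)
               if tuple(history[i:i + k]) == ctx]
    return max(set(follows), key=follows.count) if follows else 0

correct = sum(predict_next(bits[:i]) == bits[i] for i in range(1_000, 2_000))
print(f"accuracy ≈ {correct / 1_000:.3f}  (chance = 0.500)")
```

However clever the predictor, the next bit is independent of everything it has seen, so nothing does better than 0.5 in expectation.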
And even if I did decide to concede this point (though it doesn’t map neatly to what I wanted), the class of problems that human civilisation can solve still seems closer to the class of problems that a superintelligence can solve than to the class of problems that a chimpanzee can solve.
But honestly, this does little to pump the intuition for me that human brains are not universal.
“Qualitatively closer” for what purpose?
The main distinction I’m drawing is something like: humans can do useful things, like build rockets, that chimpanzees can never do.
Superintelligences can do useful things … “more effectively/efficiently than humans can”. There doesn’t seem to be a gap of not being able to do the thing at all.
Yes, but the appropriate way to draw the line likely depends on what the purpose of drawing the line is, which is why I am asking about the purpose.
I’ve heard people analogise the gap between humans and superintelligences to the gap between humans and ants, and that felt wrong to me, so I decided to investigate it?
To clarify, I would not consider that analogy cruxy at all. I don’t tend to think of humans vs ants when reasoning about humans vs superintelligences; instead I just think directly about humans vs superintelligences.
We could imagine a planet-scale AI observing what’s going on all over the world and coordinating giant undertakings as part of that. Its strategy could exploit subtle details in different locations that just happen to line up, unlike humans, who have to delegate to others when the physical scale gets too big and who therefore face extremely severe bottleneck problems. Since such an AI would be literally, physically as big relative to us as we are relative to ants, it doesn’t seem like an unreasonable comparison to make.
But idc, I don’t really tend to make animal comparisons when it comes to AGI.