The main distinction I’m drawing is something like this: humans can do useful things, such as build rockets, that chimpanzees can never do.
Superintelligences can do useful things … “more effectively/efficiently than humans can”. There doesn’t seem to be a corresponding gap of not being able to do the thing at all.
Yes, but the appropriate way to draw the line likely depends on what the line is being drawn for, which is why I am asking about the purpose.
I’ve heard people analogise the gap between humans and superintelligences to the gap between humans and ants, and that felt wrong to me, so I decided to investigate it.
To clarify, I would not consider that analogy cruxy at all. When reasoning about humans vs superintelligences, I don’t go via humans vs ants; I just think about humans vs superintelligences directly.
We could imagine a planet-scale AI observing what’s going on all over the world and coordinating giant undertakings on that basis. Its strategy could exploit subtle details in different locations that just happen to line up, unlike humans, who have to delegate to others once the physical scale gets too big and who therefore face severe bottleneck problems. Since such an AI would be literally, physically as big relative to us as we are relative to ants, the comparison doesn’t seem unreasonable.
But I don’t care much either way; I don’t really tend to make animal comparisons when it comes to AGI.