So on one hand, I sort of agree with this. For example, I think people giving IQ tests to LLMs and trying to draw strong conclusions from that (e.g. about how far off we are from ASI) is pretty silly. Human minds share an architecture that LLMs don’t, and IQ tests measure differences along some dimension within the space of variation of that architecture, within our current cultural context. I think an actual ASI will have a mind that works quite differently and will quickly blow right past the IQ scale, similar to your example of eagles and hypersonic aircraft.
On the other hand, humans just sort of do obviously vary a ton in abilities, in a way we care about, despite the above? Like, just look around? Read about Von Neumann? Get stuck for days trying to solve a really (subjectively) hard math problem, and then see how quickly someone a bit smarter was able to solve it? One might argue this doesn’t matter if we can’t feasibly find anyone capable of solving alignment inside the variation of the human architecture. But Yudkowsky, and several others who are aware of and understand the problem, do exist; so why not see what happens if we push a bit further? I sort of have this sense that once you’re able to understand a problem, you probably don’t need to be that much smarter to solve it, if it’s the sort of problem that’s amenable to intelligence at all.
On another note: I can imagine that, from the perspective of evolution in the ancestral environment, human intelligence variation appeared “small”, in that it didn’t cash out in much fitness advantage; and it’s just in the modern environment that IQ ends up conferring massive advantages in ability to think abstractly or something, which actually does cash out in stuff we care about.