GPT-4 is far below the village-idiot level at most things a village idiot uses their brain for, despite surpassing humans at next-token prediction.
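For concreteness, “next-token prediction” here means per-token cross-entropy: how well the model assigns probability to each next token given the text so far (lower is better), the task on which language models have been measured to beat humans. Here’s a minimal sketch of the measurement, using GPT-2 as a stand-in since GPT-4’s weights aren’t public (the model choice and example text are purely illustrative):

```python
# Minimal sketch: score a text by average next-token cross-entropy.
# GPT-2 stands in for GPT-4 here, since GPT-4's weights aren't public.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels=input_ids makes the model score each token
    # against the true next token (internally shifted by one).
    outputs = model(**inputs, labels=inputs["input_ids"])

# Average negative log-likelihood per token, in nats (lower is better)
print(f"per-token cross-entropy: {outputs.loss.item():.2f} nats")
```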
Could you give some examples? I take it that what Eliezer meant by village-idiot intelligence is less “specifically does everything a village idiot can do” and more “is as generally intelligent as a village idiot”. I feel like the list of things GPT-4 can do that a village idiot can’t would look much more indicative of general intelligence than the list of things a village idiot can do that GPT-4 can’t. (As opposed to AlphaZero, where the extent of the list is “can play some board games really well”)
I just can’t imagine anyone interacting with a village idiot and GPT-4 and concluding that the village idiot is smarter. If the average village idiot had the same capabilities as today’s GPT-4, and GPT-4 had the same capabilities as today’s village idiots, I feel like it would be immediately obvious that we hadn’t gotten village-idiot-level AI yet. My thinking here is still pretty messy, though, so I’m very open to having my mind changed.
Something like this plausibly came up in the Eliezer/Paul dialogues from 2021, but I couldn’t find it with a cursory search. Eliezer has also acknowledged in various places being wrong about what kind of results the current ML paradigm would get, which is probably a superset of this specific point.
Just skimmed the dialogues, couldn’t find it either. I have seen Eliezer acknowledge what you said but I don’t really see how it’s related; for example, if GPT-4 had been Einstein-level then that would look good for his intelligence-gap theory but bad for his suspicion of the current ML paradigm.
The big one is obviously “make long-time-scale plans to navigate a complicated 3D environment, while controlling a floppy robot.”
I agree with Qumeric’s comment: the point is that the modern ML paradigm is incompatible with having a single scale for general intelligence. Even given the same amount of processing power as a human brain, modern ML would spend it on a smaller model with a simpler architecture, exposed to orders of magnitude more training data; and that training data would be pre-gathered text or video (or maybe a simple simulation) that could be fed in at massive rates, rather than anything slow and real-time. (The rough sketch below puts numbers on that data gap.)
The intelligences this produces are hard to put on a nice linear scale leading from ants to humans.
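To make “orders of magnitude” concrete, here is a back-of-envelope sketch. Every figure in it is a loose assumption rather than a measurement: a commonly cited ~10T-token ballpark for frontier LLM training runs, and a deliberately generous estimate of how many words a human hears or reads.

```python
# Back-of-envelope comparison of training-data exposure.
# All numbers are loose public ballparks, not measurements.

llm_training_tokens = 1e13   # ~10T tokens: rough figure for a frontier LLM run
human_words_per_day = 3e4    # ~30k words heard/read daily (generous)
human_days = 20 * 365        # language exposure accumulated by age 20

human_word_exposure = human_words_per_day * human_days
ratio = llm_training_tokens / human_word_exposure

print(f"human lifetime word exposure: ~{human_word_exposure:.1e} words")
print(f"LLM / human exposure ratio:   ~{ratio:,.0f}x")
```

Even with generous numbers on the human side, the gap comes out around four to five orders of magnitude.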
The big one is obviously “make long-time-scale plans to navigate a complicated 3D environment, while controlling a floppy robot.”
This is like judging a dolphin on its tree-climbing ability and concluding it’s not as smart as a squirrel. That’s not what it was built for. In a large number of historically human domains, GPT-4 will dominate the village idiot and most other humans too.
Can you think of examples where it actually makes sense to compare GPT-4 and the village idiot, and where the latter easily dominates? Language input/output is still a pretty large domain.