Theoretically we could measure it by having humans play “the language model game”: repeatedly trying to predict the next word in a text. How often you’d get the next word wrong is a rough proxy for your natural loss. Of course, you’d get better at this game as you went along, just as LMs do, so what we’d really want to measure is how well you’d do after playing for a few days.
There might have been a psychological study that resembles this. (I don’t know.) We could probably also replicate it via citizen science: create a website where you play this game, and get people to play it. My prediction is that DL LMs are already far superior to even the best humans at this game. (Note that this doesn’t mean I think DL is smarter than humans.)
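If the game also asked players to put a probability on each guess (rather than just a single best word), the resulting score would be directly comparable to an LM’s reported loss. A minimal sketch of that bookkeeping, with a made-up `guesses` list standing in for real game data:

```python
import math

# Each entry: the probability the player assigned to the word that actually came next.
# These numbers are made up purely for illustration.
guesses = [0.30, 0.05, 0.60, 0.01, 0.15]

# Cross-entropy in nats: average negative log probability on the true next words.
# This is the same quantity reported as "loss" for language models.
loss_nats = -sum(math.log(p) for p in guesses) / len(guesses)

# Perplexity: the effective branching factor implied by that loss.
perplexity = math.exp(loss_nats)

print(f"loss = {loss_nats:.3f} nats, perplexity = {perplexity:.1f}")
```

Of course, a human can’t realistically assign a probability to every word in the vocabulary, which is why measuring this properly takes a more indirect setup.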
Such a game already exists! See https://rr-lm-game.herokuapp.com/whichonescored2 and https://rr-lm-game.herokuapp.com/. I’ve been told humans tend to do pretty badly at these games (I didn’t do too well myself), so if you get discouraged and want a similar style of game that’s perhaps a bit more fun (if slightly less relevant to the question at hand), I recommend https://www.redactle.com/.

Regardless, I guess I’m thinking of loss (in humans) in the more abstract sense of “what’s the distance between the correct answer and the human-given answer [to an arbitrary question about the real world]?” If there’s some mathematically necessary positive amount of loss that humans must incur at a minimum, that would seemingly imply there are fundamental limits on how well human cognition can model reality.
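For the next-word framing, at least, a standard information-theoretic decomposition says there is such a floor: for any predictor $Q$ of text drawn from the true distribution $P$,

$$H(P, Q) = H(P) + D_{\mathrm{KL}}(P \,\|\, Q) \ge H(P),$$

with equality only when $Q$ matches $P$ exactly. The entropy $H(P)$ of the text itself is an irreducible minimum loss that no predictor, human or machine, can get below.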
Yes, humans are way worse than even GPT-1 at next-token prediction, even after practicing for an hour.
EDIT: These results are now posted here
Is there some reasonable-ish way to think about loss in the domain(s) where humans are (currently) superior? (This might be equivalent to asking for a test of general intelligence, if one wants to be fully comprehensive.)
The scoring for that first game is downright bizarre. The optimal strategy for picking probabilities is not to report the actual relative likelihoods of the options, yet the instructions say “don’t overthink it”. In order to do well, you must overthink it.
(I run the team that created that game. I made the guess-most-likely-next-token game and Fabien Roger made the other one.)
The optimal strategy for picking probabilities in that game is to say what your probability for those two next tokens would have been if you hadn’t updated on being asked about them. What’s your problem with this?
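For intuition on why honestly reporting your probability can be the best move under a suitable scoring rule, here is a generic illustration with the logarithmic score. This is only a toy stand-in, not the game’s actual formula, and the numbers are made up:

```python
import math

# Suppose your true belief that token A (rather than token B) comes next is p_true.
# Under the logarithmic scoring rule you earn log(q) if A occurs and log(1 - q)
# if B occurs, where q is the probability you reported for A.
p_true = 0.7

def expected_log_score(q, p=p_true):
    # Expected score of reporting q, averaged over your own belief p.
    return p * math.log(q) + (1 - p) * math.log(1 - q)

candidates = [i / 100 for i in range(1, 100)]           # possible reports 0.01 .. 0.99
best = max(candidates, key=expected_log_score)
print(f"belief = {p_true}, best report = {best:.2f}")   # prints 0.70
```

The subtlety the parent comment points at is the conditioning: the game wants the probability you would have assigned before updating on the fact that these two particular tokens were selected to show you, which is presumably what makes the real scoring more involved than this toy version.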
It’s a bit of a shame that this scoring system is somewhat complicated. But I don’t know how to construct a simpler game that would still let us infer human perplexity, without bias, from what the humans do.
Yeah, if anyone builds a better version of this game, please let me know!