Using LLMs is an intellectual skill. I would be astonished if IQ was not pretty helpful for that.
I don’t think it is all that helpful, adjusting for the tasks that people do, after years of watching people use LLMs. Smart people are often too arrogant and proud, and know too much. “It’s just a pile of matrix multiplications and a very complicated if function and therefore can’t do anything” is the sort of thing only a smart person can convince themselves of, whereas a dumb person thinking “I ask the smart little man in the magic box my questions and I get answers” is getting more out of it. (The benefits of LLM usage are also highly context-dependent: you’ll find studies showing LLMs assist the highest performers most, but also ones showing they help the lowest most.) In 2020, for example, the more you knew about AI, the dumber your uses of GPT-3 were, because you ‘knew’ that it couldn’t do anything, so you had to hold its hand through everything and phrase everything in baby talk, etc. You had to unlearn everything you knew and anthropomorphize it to meaningfully explore prompting. This requires a certain flexibility of mind that has less to do with IQ and more to do with, say, schizophrenia: the people in Cyborgism, who do the most interesting things with LLMs, are not extraordinarily intelligent. They are, however, kinda weird and crazy.
Smart people are often too arrogant and proud, and know too much.
I thought that might be the case. If you looked at GPT-3 or 3.5, then, the higher the quality of your own work, the less helpful (and, potentially, the more destructive and disruptive) it is to substitute in the LLM’s work; so higher IQ in these early years of LLMs may correlate with dismissing them and having little experience using them.
But this is a temporary effect. Those who initially dismissed LLMs will eventually come round; and, among younger people, especially as LLMs get better, higher-IQ people who try LLMs for the first time will find them worthwhile and use them just as much as their peers. And if you have two people who have both spent N hours using the same LLM for the same purposes, higher IQ will help, all else being equal.
Of course, if you’re simply reporting a correlation you observe, then all else is likely not equal. Please think about selection effects, such as those described here.