Using LLMs is an intellectual skill. I would be astonished if IQ was not pretty helpful for that.
For editing adults, it is a good point that lots of them might find a personality tweak very useful; e.g., if it gave them a big bump in motivation, that would likely be worth more than, say, 5–10 IQ points. An adult is in a good position to tell what the delta is between their current personality and what might be ideal for their situation.
Deliberately tweaking personality does raise some “dual use” issues. Is there a set of genes that makes someone very unlikely to leave their abusive cult, or makes them loyal, obedient citizens of their tyrannical government, or makes them never join the hated outgroup political party? I would be pretty on board with a norm of not doing research into that. Basic “Are there genes that cause personality disorders that ~everyone agrees are bad?” is fine; “motivation” as one undifferentiated category seems fine; Big 5 traits … have some known correlations with political alignment, which brings it into territory I’m not very comfortable with, but if it goes no farther than that, it might be fine.
Using LLMs is an intellectual skill. I would be astonished if IQ was not pretty helpful for that.
I don’t think it is all that helpful, adjusting for the tasks that people do, after years of watching people use LLMs. Smart people are often too arrogant and proud, and know too much. “It’s just a pile of matrix multiplications and a very complicated if-function and therefore can’t do anything” is the sort of thing only a smart person can convince themselves of, whereas a dumb person thinking “I ask the smart little man in the magic box my questions and I get answers” is getting more out of it. (The benefits of LLM usage are also highly context-dependent: you’ll find studies showing LLMs assist the highest performers most, but also ones showing they help the lowest most.) Like in 2020: the more you knew about AI, the dumber your uses of GPT-3 were, because you ‘knew’ that it couldn’t do anything, so you had to hold its hand through everything and phrase everything in baby talk, etc. You had to unlearn everything you knew and anthropomorphize it to meaningfully explore prompting. This requires a certain flexibility of mind that has less to do with IQ and more to do with, say, schizophrenia: the people in Cyborgism, who do the most interesting things with LLMs, are not extraordinarily intelligent. They are, however, kinda weird and crazy.
Smart people are often too arrogant and proud, and know too much.
I thought that might be the case. If you looked at GPT-3 or 3.5, then the higher the quality of your own work, the less helpful (and, potentially, the more destructive and disruptive) it was to substitute in the LLM’s work; so higher IQ in these early years of LLMs may correlate with dismissing them and having little experience using them.
But this is a temporary effect. Those who initially dismissed LLMs will eventually come round; and, among younger people, especially as LLMs get better, higher-IQ people who try LLMs for the first time will find them worthwhile and use them just as much as their peers. And if you have two people who have both spent N hours using the same LLM for the same purposes, higher IQ will help, all else being equal.
Of course, if you’re simply reporting a correlation you observe, then all else is likely not equal. Please think about selection effects, such as those described here.