For what it’s worth, I think even current, primitive-compared-to-what-will-come LLMs sometimes do a good job of (choosing words carefully here) compiling information packages that a human might find useful in increasing their understanding. It’s very scattershot and always at risk of unsolicited hallucination, but in certain domains that are well and diversely represented in the training set, and for questions that have more or less objective answers, AI can genuinely aid insight.
The problem is the gulf between can and does. For reasons elaborated in the post, most people are disinclined to invest in deep understanding if a shallower version will get the near-term job done. (This is in no way unique to a post-AI world, but in a post-AI world the shallower versions are falling from the sky and growing on trees.)
My intuition is that the fix would have to be something pretty radical involving incentives. Non-paternalistically, we’d need to incentivise real understanding and/or disincentivise the shortcuts. Carrots usually being better than sticks, perhaps a system of micro-rewards for those who use GenAI in provably ‘deep’ ways? [awaits a beatdown in the comments]
I think the micro-rewards thing could work very naturally in a tutor-style setup.
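One way that might cash out (a toy sketch; the reward size, the pass threshold, and the idea that the quiz and grading helpers are LLM-backed are all invented for illustration): the tutor only pays out a micro-reward when you pass a comprehension check on the answer you just received.

```python
# Toy sketch of micro-rewards in a tutor loop. Everything here is an
# assumption for illustration: the reward size, the pass threshold, and
# the hypothetical LLM-backed helpers `make_quiz` and `grade`.
REWARD_PER_PASS = 0.05   # e.g. credits toward future queries
PASS_THRESHOLD = 0.8

def make_quiz(answer_text: str) -> list[str]:
    """Hypothetical LLM-backed helper: comprehension questions about the answer."""
    raise NotImplementedError

def grade(questions: list[str], responses: list[str]) -> float:
    """Hypothetical LLM-backed helper: score the user's responses in [0, 1]."""
    raise NotImplementedError

def tutor_turn(answer_text: str) -> float:
    """Show the answer, quiz the user on it, and pay out only on a pass."""
    print(answer_text)
    questions = make_quiz(answer_text)
    responses = [input(q + "\n> ") for q in questions]
    score = grade(questions, responses)
    return REWARD_PER_PASS if score >= PASS_THRESHOLD else 0.0
```

The point of gating the reward on the quiz rather than on usage is that it pays for demonstrated understanding, not for time spent in the tool.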
All I can think of is how, with current models plus a little more Dakka, genAI could deeply research a topic.

It wouldn’t be free. You might pay a fee, with varying package prices. The model then buys a one-time task license for, say, 10 reference books on the topic. It reads each one, translating the original text and images into a distilled version that focuses on details relevant to your prompt.

It assembles a set of “notes”, where each note directly cites the text it was drawn from. (And another model session validates that each citation is accurate.)

From the notes, it constructs a summary, an essay, or whatever form the output needs to take.

Master’s-thesis grade in 10 minutes and under $100...
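To make the pipeline concrete, here’s a minimal Python sketch. Everything in it is an assumption for illustration: `complete()` stands in for whatever LLM API you’d actually call, the `CLAIM ||| QUOTE` note format is invented, and the licensing/payment step is elided entirely.

```python
from dataclasses import dataclass

# `complete` is a placeholder for a real LLM API call; the name and
# signature are assumptions, not any particular provider's client.
def complete(prompt: str) -> str:
    raise NotImplementedError

@dataclass
class Note:
    claim: str   # distilled detail relevant to the user's prompt
    source: str  # which reference book it came from
    quote: str   # the verbatim passage the claim rests on

def distill(book_text: str, source: str, topic: str) -> list[Note]:
    """Steps 1-2: read one licensed book, extract notes tied to direct quotes."""
    raw = complete(
        f"From the following book, list facts relevant to '{topic}'. "
        f"Format each as 'CLAIM ||| VERBATIM QUOTE', one per line:\n\n{book_text}"
    )
    notes = []
    for line in raw.splitlines():
        if "|||" in line:
            claim, quote = line.split("|||", 1)
            notes.append(Note(claim.strip(), source, quote.strip()))
    return notes

def validate(note: Note, book_text: str) -> bool:
    """Step 3: a second model session checks each citation against the source."""
    if note.quote not in book_text:  # cheap exact-match check first
        return False
    verdict = complete(
        f"Does this passage support this claim? Answer YES or NO.\n"
        f"Passage: {note.quote}\nClaim: {note.claim}"
    )
    return verdict.strip().upper().startswith("YES")

def deep_research(topic: str, library: dict[str, str]) -> str:
    """Steps 1-4 end to end; `library` maps book title -> licensed full text."""
    notes = []
    for title, text in library.items():
        notes += [n for n in distill(text, title, topic) if validate(n, text)]
    note_list = "\n".join(f"- {n.claim} [{n.source}: \"{n.quote}\"]" for n in notes)
    return complete(
        f"Write an essay on '{topic}' using only these validated, cited "
        f"notes:\n{note_list}"
    )
```

One design note: requiring a verbatim quote in every note is what makes the validation step cheap; a hallucinated quote fails the exact-match check before you spend a second model call on it.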