Reading online comments about AI advances in non-tech-related spaces (look for phrases like “it’s been programmed to” and such – what underlying model-of-AI does that imply?), directly talking to people who are not particularly interested in technology, and reflecting on my own basic state of knowledge about modern AI when I’d been a tech-savvy but not particularly AI-interested individual.
Generally speaking, appreciating what “modern AIs are trained, not programmed” implies requires a fairly fine-grained model of AI. Not something you can develop by osmosis yet – you’d need to actually read a couple articles on the topic, actively trying to understand it. I’m highly certain most people never feel compelled to do that.
I suppose I don’t have hard data, though; that’s fair. If there are any polls or similar data available, I’d appreciate seeing them.
Edit: Edited the post a bit to make my epistemic status on the issue clear. I’m fairly confident, but it’s indeed prudent to point out where that confidence comes from.
Anecdotally, I have also noticed this: when I tell people what I do, the thing that most frequently surprises them is that we don’t know how these things work.
As you implied, if you don’t understand how NNs work, your natural closest analogue to ChatGPT is conventional software, which is at least understood by its programmers. This isn’t even people being dumb about it; it’s just a lack of knowledge about a specific piece of technology, and a lack of knowledge that there is something to know: that NNs are in fact qualitatively different from other programs.
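To make that distinction concrete, here is a toy sketch of my own (the spam example, the word-level model, and all the numbers are made-up illustrations, not anything from the thread): a “programmed” check whose rule its author can simply read, next to a tiny “trained” classifier whose behavior ends up encoded in learned weights.

```python
# Toy contrast between "programmed" and "trained" behavior.
# Everything here is an illustrative assumption, not a real system.
import math
from collections import defaultdict

# Programmed: the rule is written out explicitly, so its author can read
# it and say exactly why any given output was produced.
def programmed_is_spam(message: str) -> bool:
    return "free money" in message.lower()

# Trained: the behavior lives in learned numbers. A tiny logistic-regression-
# style model picks up one weight per word from labeled examples; afterwards
# its decisions come from those weights, not from a human-written rule.
examples = [
    ("claim your free money now", 1),
    ("free money inside", 1),
    ("meeting moved to 3pm", 0),
    ("lunch again tomorrow", 0),
]

weights = defaultdict(float)  # one weight per word, all starting at zero
bias = 0.0
learning_rate = 0.5

def trained_spam_probability(message: str) -> float:
    score = bias + sum(weights[word] for word in message.lower().split())
    return 1.0 / (1.0 + math.exp(-score))  # sigmoid squashes the score into (0, 1)

# Training loop: repeatedly nudge the weights so predictions move toward
# the labels (plain gradient descent on the logistic loss).
for _ in range(200):
    for text, label in examples:
        error = trained_spam_probability(text) - label
        bias -= learning_rate * error
        for word in text.lower().split():
            weights[word] -= learning_rate * error

print(programmed_is_spam("FREE MONEY!!!"))            # True, and the "why" is one readable line
print(trained_spam_probability("free money inside"))  # high, but the "why" is spread across weights
print(dict(weights))                                  # just numbers, not an inspectable rule
```

The point of the sketch is only the asymmetry: the first function is its own explanation, while the second one’s “logic” exists nowhere except in the learned numbers, which is the qualitative difference most people have had no occasion to learn about.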
As you implied, if you don’t understand how NNs work, your natural closest analogue to ChatGPT is conventional software, which is at least understood by its programmers.
It’s worth noting that conventional software is also often not fully understood. The ranking algorithms of the major tech companies are complex enough that there may be no single human who fully understands them.