If you talk to a very uninformed person about AI, you'll find they have all sorts of broken assumptions. I was once in a Twitter discussion with someone who claimed that ChatGPT doesn't hallucinate. Their evidence was that they'd asked ChatGPT whether it hallucinates, and it said that it didn't.
I think your beliefs about the future of AI could fit into an analogy too: AI will be like immigrants. They’ll take our jobs and change the character of society.
So, perhaps analogies are a crude tool that is sometimes helpful.
People aren’t blank slates though.