Maybe talking isn’t the best way to communicate with LLMs
Trust your intuition—Kahneman’s book misses the forest for the trees
Hey, thanks for the comment. While I agree that “they’ll believe what they’re programmed to”, I feel that framing is a bit too near-term. I was imagining a farther future where artificial intelligence goes beyond current LLMs and current political scenarios.
I’m doing these thought experiments about a future where artificial intelligence (whether as an evolution of current-day LLMs or through some other mechanism) has enough agency to seek out novel information. Such agency would be given to the AI not out of benevolence, but simply as a way to get it to do its job properly.
In such a scenario, saying that the AI will just believe what it is programmed to believe is akin to saying that a human child will never come to believe anything different from the education / indoctrination (take your pick) they were given. That’s a good general rule of thumb, but not a given.
Fair enough! Also, thank you for the comment; I found myself nodding along to all your observations.