I think that’s true, but not very important (in the short term). *On Bullshit* was first published in 1986, and was a humorous but useful categorization of a whole lot of human communication output. ChatGPT is truth-agnostic (except for fine-tuning and output tuning), but still pretty good on a whole lot of general topics. Human choice of which GPT outputs to highlight or use in further communication can be bullshit or truth-seeking, depending on the human intent.
In the long-term, of course, the idea is absolutely core to all the alignment fears and to the expectation that AI will steamroller human civilization because it doesn’t care.