I should’ve specified—the orgs carefully train them to refuse to say things. I don’t think they specifically train them to say things the orgs like or believe. The refusals are intentional; the bias is accidental, IMO.
And every source has bias.
So, do you want people to quit saying they googled for an answer? I’d just like them to say where they got the answer so I can judge how biased it might be.
Exactly. Please stop saying this. It’s not semi-objective. The trend of casually treating LLMs as an arbiter of truth leads to moral decay.
This is obviously untrue; orgs spend lots of effort making sure their AI doesn’t say things that would give them bad press, for example.