These sorts of behavioral choices are determined by the feedback given by the people who train the AI; they have nothing to do with the AI’s architecture or fundamental inclinations.
So the question to ask is, “Why do all the AI companies seem to think it’s less ideal for the AI to ask clarifying questions?”
One part of the reason is that single-turn reinforcement is a lot easier to do. It’s hard to judge whether a chatbot’s answer will end up being helpful when its current turn consists of just a clarifying question.
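To make that concrete, here’s a minimal toy sketch of the incentive problem, assuming replies are scored one (prompt, reply) pair at a time. The reward rule below is invented purely for illustration (real reward models are learned networks, not keyword checks), but it shows why a clarifying question earns little credit when each turn is judged in isolation:

```python
# Toy illustration of single-turn scoring (hypothetical, not any lab's
# actual pipeline): each reply is rated on its own, with no view of
# whether the conversation eventually succeeds.

def single_turn_reward(prompt: str, reply: str) -> float:
    """Score one (prompt, reply) pair in isolation."""
    # Invented stand-in rule: a reply that defers with a question gives
    # the rater nothing to judge as "helpful" yet, so it scores low.
    if reply.rstrip().endswith("?"):
        return 0.1  # clarifying question: no answer to evaluate
    # Crude proxy for "actually attempted an answer".
    return 1.0 if len(reply.split()) > 5 else 0.3

dialogue = [
    ("Can you fix my code?", "Sure, which error message are you seeing?"),
    ("Can you fix my code?", "The bug is an off-by-one in the loop bound."),
]

for prompt, reply in dialogue:
    print(f"{single_turn_reward(prompt, reply):.1f} <- {reply}")
```

Under a scheme like this, the clarifying question (0.1) always loses to a confident guess (1.0), even when the question would have made the conversation as a whole more useful. A multi-turn setup could instead assign reward to the whole dialogue, letting a good clarifying question share credit for the final answer, but that is harder to collect ratings for.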
Yes, I assumed it was a conscious choice (of the company that develops an A.I.) and not a limitation of the architecture. Still, the single-turn reinforcement explanation confuses me: while it may increase the probability that any individual turn is useful, it makes conversations far less useful overall unless the AI happens to correctly ‘guess’ what you mean, as my interaction over the hallucinated Instagram feature attests.