I created an account simply to say this sounds like an excellent idea. Right until it encounters the real world.
There is a large issue that would have to be addressed before this could work in practice: “This call may be monitored for quality assurance purposes.” In other words, the lack of privacy will need to be dealt with, and it may lead many users to immediately choose a different AI agent that values their privacy more highly. The narrative of user data being consumed to generate ‘AI slop’ is a powerful memetic influence, and I believe it would be difficult for any commercial enterprise to communicate an implementation of this in a way that is not concerning to paying customers.
Can you imagine a high-priced consultant putting a tape recorder on the table and saying, “This meeting may be recorded for quality assurance purposes”? It would simultaneously deter the business from sharing the internal information necessary for the project’s success and cast doubt on the competence and quality of that consultant.
It will be necessary to find a way to accurately communicate how this back channel will be used to improve the service without simultaneously driving away paying customers.
I do agree this would be bad if it were secret and not privacy-preserving.
I think we need to separate out the use cases. For reporting bad behavior (not quite bad enough to trigger red flags and account closure), a system like Clio seems ideal.
But what if the model is uncertain about something and wants to ask for clarification, while the user isn’t doing anything wrong, just posing a difficult question?
In such a case, as a user, I’d want the model to explicitly ask me for permission to send its question to Anthropic staff, and I’d want to approve the exact wording. There are plenty of situations where that could be great!
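To make what I mean concrete, here’s a rough, purely hypothetical sketch of such a consent gate. Every name here (`request_escalation`, `EscalationRequest`, `ask_user`) is invented for illustration and isn’t anything Anthropic actually exposes; the point is just that nothing leaves the conversation unless the user sees and approves the exact wording.

```python
# Purely hypothetical sketch of a consent-gated escalation flow; none of
# these names come from Anthropic's actual products or APIs.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class EscalationRequest:
    question: str        # the exact wording, as approved (and possibly edited) by the user
    approved: bool = False


def request_escalation(
    question: str,
    ask_user: Callable[[str], Optional[str]],
) -> Optional[EscalationRequest]:
    """Ask the user for permission before anything leaves the conversation.

    `ask_user` shows the user the exact wording the model wants to send and
    returns the approved text (possibly edited by the user), or None if the
    user declines.
    """
    approved_text = ask_user(question)
    if approved_text is None:
        return None  # user declined; nothing is sent anywhere
    return EscalationRequest(question=approved_text, approved=True)


if __name__ == "__main__":
    def fake_user_prompt(text: str) -> Optional[str]:
        print(f"Model wants to ask Anthropic staff:\n  {text!r}")
        return text  # in a real UI the user could edit the wording or decline

    req = request_escalation("How should I handle this ambiguous request?", fake_user_prompt)
    print("Sent for review" if req else "Nothing sent")
```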