The advertising question was just an example of the general trust question. Another example is that a chatbot may come to seem unreliable through “not understanding” the words it produces. Here it’s common for current LLMs to periodically give the impression of “not understanding what they say” by periodically producing output that’s contradictory to what they previous outputted or which involves an inappropriate use of a word. Just consider a common complaint between humans is “you don’t know what love means”. Yet another example is this. Large language models today are often controlled by engineered prompts and hackers have had considerable success getting around any constraints which these prompts impose. This sort of unreliability indicates any “promise” of a chatbot is going to be questionable, which can be seen as a violation of trust.
I think that humans are generally very good at intuitively understanding contextuality in interpersonal patterns
Well, one aspect here is the “a chatbot relationship can be a real relationship” assumption seems to imply that some of the contexts of a chatbot relationship would be shared with a real relationship. Perhaps this would be processed by most people as “a real relationship but still not the same as a relationship with a person” but there are some indications that this might not always happen. The Google engineer who tried to get a lawyer for a chatbot they believed was being “held captive” by Google comes to mind.
As far as humans being good at context, it depends what one means by good. On the one hand, most people succeed most of the time at treating other people according to broad social relationship those other people fall into. IE, treating children as children, bosses as bosses, platonic friends as platonic friends etc. But one should consider that some of the largest challenges people face in human society involve changing their social relationship with another person—starting changing a stranger relationship or platonic friendship relationship to a romantic relationship, changing a boyfriend/girlfriend relationship to a husband/wife relationship, even changing a acquaintance relationship to a friendship relationship and changing a stranger relationship to an employee relationship, etc. This type of transition is hard for people virtually by definition since it involves various kinds of competition. These are considered “life’s challenges”.
A lot of human “bad behavior” is attributed to one person using pressure to force one of these relationship changes or in reacting to “losing” in the context of a social relationship (being dumped, fired, divorced, etc). And lot of socializing involves teaching humans to not violate social norms as they attempt these changes of relationships. Which comes back to the question of whether a chatbot would help teach a person to “gracefully” move between these social relationships. Again, I’m not saying a chatbot romance would automatically be problematic but I think these issues need addressing.
The advertising question was just an example of the general trust question. Another example is that a chatbot may come to seem unreliable through “not understanding” the words it produces. Here it’s common for current LLMs to periodically give the impression of “not understanding what they say” by periodically producing output that’s contradictory to what they previous outputted or which involves an inappropriate use of a word. Just consider a common complaint between humans is “you don’t know what love means”. Yet another example is this. Large language models today are often controlled by engineered prompts and hackers have had considerable success getting around any constraints which these prompts impose. This sort of unreliability indicates any “promise” of a chatbot is going to be questionable, which can be seen as a violation of trust.
Well, one aspect here is the “a chatbot relationship can be a real relationship” assumption seems to imply that some of the contexts of a chatbot relationship would be shared with a real relationship. Perhaps this would be processed by most people as “a real relationship but still not the same as a relationship with a person” but there are some indications that this might not always happen. The Google engineer who tried to get a lawyer for a chatbot they believed was being “held captive” by Google comes to mind.
As far as humans being good at context, it depends what one means by good. On the one hand, most people succeed most of the time at treating other people according to broad social relationship those other people fall into. IE, treating children as children, bosses as bosses, platonic friends as platonic friends etc. But one should consider that some of the largest challenges people face in human society involve changing their social relationship with another person—starting changing a stranger relationship or platonic friendship relationship to a romantic relationship, changing a boyfriend/girlfriend relationship to a husband/wife relationship, even changing a acquaintance relationship to a friendship relationship and changing a stranger relationship to an employee relationship, etc. This type of transition is hard for people virtually by definition since it involves various kinds of competition. These are considered “life’s challenges”.
A lot of human “bad behavior” is attributed to one person using pressure to force one of these relationship changes or in reacting to “losing” in the context of a social relationship (being dumped, fired, divorced, etc). And lot of socializing involves teaching humans to not violate social norms as they attempt these changes of relationships. Which comes back to the question of whether a chatbot would help teach a person to “gracefully” move between these social relationships. Again, I’m not saying a chatbot romance would automatically be problematic but I think these issues need addressing.