Whenever I receive a message containing high-conflict content and I have enough peace of mind, I tend to translate it in my head into feelings and needs. So in this case there is also a function involved, F(feelings, original message) = output, but it is running on biological hardware.
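For concreteness, here is a minimal sketch of what that function might look like on silicon instead of neurons. The `complete` helper, the function name, and the prompt wording are all assumptions for illustration, not a real API; the point is only that the output is a tentative guess to start a conversation from, not a verdict.

```python
# Hypothetical sketch of the "translation function" described above.
# `complete` stands in for any text-completion model such as GPT-3;
# nothing here is a real API.

def complete(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError("wire up a real completion API here")

def guess_feelings_and_needs(message: str) -> str:
    """Translate a high-conflict message into a tentative guess at the
    sender's feelings and unmet needs."""
    prompt = (
        "The following message was written by someone who is upset.\n\n"
        f"Message: {message}\n\n"
        "Tentatively guess what the writer might be feeling and which "
        "unmet needs could be behind the message:"
    )
    return complete(prompt)
```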
When I do this myself I’m not 100% right. In fact, I’m probably not even 50% right, and that really doesn’t matter. The translation doesn’t have to be perfect. The moment I realize that this person might be sad because a certain need is not being fulfilled, I can start asking questions.
It takes some effort to learn what is alive in the other person, and it often doesn’t matter if your guess is wrong. People deeply appreciate it when you’re curious about their inner lives.
On the sender’s side it can do exactly the same thing. GPT-3 guesses what is alive in me, opening up the opportunity for me to find out myself. You’re worried about 10% being lost in translation. In my personal experience, most people (myself included) are completely unaware of their feelings and needs when they’re upset. They get so absorbed in their situation that it causes a lot of unnecessary suffering.
How much of this value could be equally accomplished by a pop-up that, rather than trying to “translate” the message, reminds the user that the other person has feelings and needs that they may be having trouble expressing, and that the user should consider that, without the AI attempting to state its best guess of what those feelings and needs are? I think that alternative would address substantially all of my objections while possibly preserving nearly all of the value.
I think that is a fair point; I honestly don’t know.
Intuitively, the translation seems more likely to help me become less reactive. I can think of two reasons:
1. It would unload some of the cognitive effort at the moment it is most difficult, making it easier to switch the focus to feelings and needs.
2. It would make the message different every time, attuned to the situation. I would worry that the pop-up would be clicked away automatically after a while, because it is always the same.
Having said that, I would definitely be open to any solution that achieves the same result.