I actually think this idea is not good, and would oppose its widespread adoption. For level-setting, I also want to say that I upvoted the post as I think it’s an idea worth thinking about.
I have noticed that in the last few years many services have already been rolling out more and more features like this: not so much "anger translators", but other sorts of suggested replies and so on. My problem with these is that they feel very intrusive, with computers inserting themselves into even our most intimate conversations. For example, Gmail's suggested replies and autocompletes show up on all messages unless disabled. I would be fine with this for pro forma interactions, such as with customer service reps or whatever, but they also show up when I'm conversing with family and close friends.
The problem is that it feels impossible to just completely ignore the suggestions: either I want to accept one, or I feel myself intentionally changing my intended reply to avoid one. Either way, the system is changing how I communicate with my loved ones in a way that I really dislike. It reminds me of one of the most memorable lines from The Matrix (from memory, not verbatim): "1999, the peak of your civilization. Of course, I say your civilization, because as soon as we [AIs] started thinking for you, it really became our civilization." To the extent that AI can automate more and more work, that seems great to me, but automating interactions with our friends and family sounds like dystopia.
I agree that if your device automatically translated everything you send and receive without you having any control over it, that would be incredibly dystopian, but that is not what I'm suggesting.
This would be a service that people actively choose and pay for; GPT-3 is not free. If you don't like it, don't use it.
I agree that once you're using the service it would be very difficult to ignore the suggestions, but again, that is the whole point. The goal of the app is to intervene in a high-conflict situation and point you in the direction of the other person's feelings and needs, or to remind you of your own.
Now, it might be true that at this point GPT-3 is not very good at reminding you of the other person's feelings and needs; I can give you that. And it might have been a mistake on my part to only use examples generated by GPT-3.
But whenever I speak to someone who is angry or upset, I wish that I could hear their feelings and needs. I believe that people, myself included, find it difficult to express or hear those when the shit hits the fan.
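To make the kind of translation I have in mind a bit more concrete, here is a rough sketch. It is purely illustrative: the prompt wording, the nvc_translate helper, and the complete stub are placeholders of mine, not a real product or a specific GPT-3 API call.

```python
# Illustrative sketch only: `complete` stands in for whatever GPT-3
# completion call the app would use; it is not a real API signature.
def complete(prompt: str) -> str:
    raise NotImplementedError("placeholder for a GPT-3 completion call")

NVC_PROMPT = """Rewrite the message below as a tentative guess at the
sender's feelings and unmet needs, in the spirit of Nonviolent Communication.
Do not add accusations or advice.

Message: {message}

Translation:"""

def nvc_translate(message: str) -> str:
    """Return a tentative feelings-and-needs translation of a message."""
    return complete(NVC_PROMPT.format(message=message))

# The app would show nvc_translate(...) as a *suggestion* next to the
# original message; it would never silently replace what the sender wrote.
```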
Yes, but if you apply this concept, you still won't be hearing their feelings and needs. You will be hearing the output of some function F(feelings, original message). Likely one of the following:
(A) GPT on the sender's side, guesses 90% right: you might hear that 90% and lose the 10% that GPT did not guess.
(B) GPT on the sender's side, guesses 80% right: you might hear mostly the 20%, because the sender, consciously or unconsciously, alters their original message after seeing GPT's mostly-right-but-still-wrong guess to emphasize the distance between their true feelings and the guess.
(C) GPT on the sender's side, guesses 80% right: the sender might tweak the output so that it's close to GPT's output but not identical; maybe this captures 100% of their feelings and needs, or maybe 10% is still lost trying to wedge their amorphous vibe into GPT's format.
(D) GPT on the recipient's side: you see the original message, and now GPT is totally guessing at what the other person might be feeling. If it's wrong, you have no human feedback in the moment to correct it, so its output is at best useless and at worst will lead you to form overconfidently wrong beliefs about the sender's emotional state.
(E) GPT nowhere, but the app is popular in your community: you see an NVC-ish message from someone and now have to guess whether it is an original message or the output of F(feelings, original message).
Note also that a recipient definitely has no way to know whether they are in world (A), (B), or (C), and depending on the relationship might also be in world (E).
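To spell out what I mean by F, here is a schematic sketch of mine with placeholder names; nothing here is the proposed app's actual design. In worlds (A) through (C) the model sits on the sender's side and the sender can react to its guess before anything is sent, while in world (D) it sits on the recipient's side and guesses with no feedback at all.

```python
# Schematic sketch only: `gpt_guess` is a placeholder for whatever model
# the app would call; the function names are mine, not the app's design.
from typing import Callable

def gpt_guess(message: str) -> str:
    """The model's guess at the feelings/needs behind a message (stub)."""
    raise NotImplementedError("placeholder for a GPT-3 call")

def sender_side(draft: str, sender_edit: Callable[[str, str], str]) -> str:
    # Worlds (A)-(C): the sender sees the guess and may rewrite around it,
    # so whatever the recipient receives has already passed through F.
    guess = gpt_guess(draft)
    return sender_edit(draft, guess)

def recipient_side(received: str) -> str:
    # World (D): the model guesses on the recipient's side with no feedback
    # from the sender, so a wrong guess simply becomes the recipient's belief.
    return gpt_guess(received)
```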
Whenever I receive a message containing high-conflict content and I have enough peace of mind, I tend to translate it in my head into feelings and needs. So in this case there is also a function F(feelings, original message) = output involved, but it is running on biological hardware.
When I do this myself I'm not 100% right. In fact, I'm probably not even 50% right, and that really doesn't matter. The translation doesn't have to be perfect. The moment I realize that this person might be sad because a certain need is not being fulfilled, I can start asking questions.
It takes some effort to learn what is alive in the other person, and it often doesn't matter if your guess is wrong. People deeply appreciate it when you're curious about their inner lives.
On the sender's side it can do exactly the same thing. GPT-3 guesses what is alive in me, opening up the opportunity for me to find out for myself. You're worried about 10% being lost in translation; in my personal experience, most people (myself included), when they're upset, are completely unaware of their feelings and needs. They get so absorbed by their situation that it causes a lot of unnecessary suffering.
How much of this value could be equally accomplished by a pop-up that, rather than trying to "translate" the message, reminds the user that the other person has feelings and needs that they may be having trouble expressing, and that the user should consider that, without any attempt by the AI to state its best guess of what those feelings and needs are? Because I think that alternative would address substantially all of my objections while possibly preserving nearly all of the value.
I think that is a fair point; I honestly don't know.
Intuitively, the translation seems like it would do more to help me become less reactive than a generic reminder. I can think of two reasons:
It would unload some of the cognitive effort when it is most difficult, making it easier to switch the focus to feelings and needs.
It would make the message different every time, attuned to the situation. I would worry that the pop-up would be automatically clicked away after a while, because it is always the same.
But having said that, it is a fair point and I would definitely be open to any solution that would achieve the same result.