This sounds like an interesting idea; it would be nice to have a tool that reminds us to be our better selves. However, it seems quite problematic to let GPT mediate our communications. Besides the issues discussed in the other comments, I also think letting GPT handle the translation encourages a more superficial form of NVC. I think the main reason NVC works is not that the framework itself lets people communicate using non-violent language. Rather, it works because using the framework encourages people to be more emotionally aware and empathetic.
As a sender, NVC helps me because:
It makes me more aware of the inferences I am making (which may be false) based on my observations (which may not be complete). It reminds me that what I perceive is only my interpretation, which may not be true!
It helps me pay more attention to what I’m feeling and what I’m lacking, so I can focus more on problem solving (i.e. finding ways to meet my needs), rather than emotional arguments.
Note that sometimes, cooperative problem solving is not our goal, in which case we may not want to use NVC.
As a listener, NVC helps me because:
It reminds me that people usually have a valid reason why they are upset, which helps me be more empathetic. For example, I used to think people were angry at me because I didn’t do what they said, but later realised it was because they genuinely found my actions upsetting (e.g. they may find messiness distracting, whereas I don’t notice it).
Note that this is not always true, such as when someone is saying something to manipulate you.
My concerns with using GPT to handle the NVC translation are as follows:
I suspect an NVC translator would encourage people to “outsource” this thinking to GPT. This would lead to people following the framework without genuinely going through the process of thinking through their (or their partner’s) emotions and needs. This means people don’t really get to practise the NVC skills, and so don’t truly benefit from NVC.
Knowing that others may be using a translator may also make the conversation feel less genuine because it becomes easy to fake it (e.g. maybe they are actually really mad at me and can’t be bothered to make the effort to go through the NVC thought process, and are just using the translator because they believe it will get a better response).
When it’s presented as a translation, it gives the impression that the translation is the real answer, rather than just one of many possible answers.
(Also, I think some of the examples reflect a different problem that won’t be solved by NVC translation. For example, I think a better approach for the COVID vaccine example would be asking the other person why they believe what they believe, rather than asking them to read the research with you.)
The idea of a tool to help open up the conversation is interesting though, so here are two possible alternatives I can think of (which may not be feasible):
Instead of translating your words, the app detects messages with strong negative emotions and prompts you to reconsider before you send them, similar to how Outlook reminds you if you mention “attachment” in your email but don’t attach any files. This should be something the user enables themselves, so it’s like a reminder from our calmer selves to our more impulsive selves to think things through before sending.
Instead of providing a single translation, the app suggests possible feelings/needs based on other people’s experiences, while making it clear that yours may be different. For example, “Sometimes people say X because they are feeling Y1 and want Z1, or because they are feeling Y2 and want Z2. Do you feel like this applies to you? Or maybe you’re feeling something else?”
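To make the first alternative concrete, here is a minimal sketch of the pre-send check. Everything here is hypothetical: a real app would use a proper sentiment model rather than this toy word list, and the names (`negativity_score`, `should_prompt_reconsider`, the threshold) are just illustrative.

```python
# Toy sketch of the "pre-send reminder" idea: flag strongly negative
# messages before they go out, the way Outlook flags a missing attachment.
# The lexicon and threshold are placeholders, not a real sentiment model.
NEGATIVE_WORDS = {"hate", "stupid", "never", "always", "ridiculous", "useless"}

def negativity_score(message: str) -> float:
    """Fraction of words that appear in the (toy) negative lexicon."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in NEGATIVE_WORDS)
    return hits / len(words)

def should_prompt_reconsider(message: str, threshold: float = 0.2) -> bool:
    """Return True if the app should show a 'send anyway?' prompt."""
    return negativity_score(message) >= threshold
```

The point of the design is that nothing is rewritten on the user’s behalf: the check only interrupts, and the decision (and the thinking) stays with the sender.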