I think this is a clever and interesting application of GPT, but I’ve downvoted the post. Blunting feelings by filtering them through a robot is… nasty. I’m most opposed to the idea that this should be implemented on the receiver’s side. A machine that shields cheating partners from having to read a “fuck you” is a negative; I don’t want it in the world.
An implementation that only allows optional translations on the sender’s side is better, but still not something I like; RamblinDash’s comment covers the issues there well.
Can you explain why you think this blunts feelings? My intention is for it to do the exact opposite.
Take the example you give: shielding a partner from having to read a “fuck you”. In that case you would still read the message containing “fuck you” first, and GPT-3 might then suggest that this person is feeling sad, vulnerable, and angry. From my perspective that is exactly the opposite of “blunting feelings”: I would be pointed towards the inner life of the other person.
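To make that concrete, here is a minimal sketch of the receiver-side version I have in mind, assuming the OpenAI Python client; the model name and the prompt wording are placeholders I made up for illustration, not a finished implementation. The point is that the original text is never hidden or rewritten, the model only adds a guess at the feelings behind it.

    # Rough sketch only: the original message is shown unchanged, and the model
    # merely adds a guess at the feelings behind it. Assumes the OpenAI Python
    # client ("pip install openai"); the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def suggest_feelings(message: str) -> str:
        """Return a short guess at the emotions an angry message may be expressing."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {
                    "role": "system",
                    "content": (
                        "You will be shown a message someone received. Do not "
                        "rewrite or soften it. Briefly suggest which feelings "
                        "(e.g. hurt, anger, fear) the sender may be expressing."
                    ),
                },
                {"role": "user", "content": message},
            ],
        )
        return response.choices[0].message.content

    original = "fuck you"
    print(original)  # the message itself stays visible, unsoftened
    print("Possible feelings behind this:", suggest_feelings(original))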
Hmm. Maybe blunting isn’t the right word. I can’t really describe the feeling I want to point to, but it’s sort of like “tech elites translating the language of the masses into something the elites find more palatable”—if that makes sense.