OpenAI previously argued that they called ChatGPT "ChatGPT," rather than giving it a human-sounding name like Siri, Cortana, or Alexa, to help users stay aware that they are talking to an AI instead of a regular human. Sam Altman framed this as a safety measure: it likely reduces the chance that people fall in love with the AI and then do things it tells them to that aren't a good idea.
Choosing a celebrity voice like Scarlett Johansson's violated that safety principle OpenAI previously professed to hold.
While this isn’t the most important safety principle, if they violate safety principles they profess to hold for shallow reasons, it becomes unlikely they will stick to more important safety principles when there is actually a huge incentive to break them.
Scarlett says:
He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI.
If that’s true, then OpenAI essentially wants to emotionally manipulate people into being less cautious about AI than they would naturally be inclined to be.
That’s very true. I remember seeing Sam talk in Melbourne a year ago when he was on the “world tour”. Talking about people getting emotionally attached to GPT, or using it for therapy, made him clearly uncomfortable. Or at least, that’s what he seemed to be signaling. I really did believe it made him incredibly squeamish.