[Question] What should the norms around AI voices be?

In previous discussions of AI risk, an AI's ability to be highly persuasive is often cited as a potential danger. Humans find some voices more persuasive than others.

If we can trust Scarlett Johansson’s description of her interactions with OpenAI, OpenAI wanted to use her voice to increase how much users trust its model. A model that users trust more is likely also more persuasive.

AI companies could also run multivariate tests on slight variations of their voices to maximize user engagement, which would likely push the voices further in the direction of being more persuasive.

Zvi recently argued that it’s fine for OpenAI to provide users with maximally compelling voices if users want those voices, and that OpenAI shouldn’t get pushback for it.

Are we as a community no longer worried about the persuasive power of AIs? As someone who does not work directly in AI safety myself, I wonder why this aspect seems underexplored by AI safety researchers.