Impersonating a human would present a potentially thorny deployment and societal issue, since it could be quite confusing for users to interact with AIs that claim to be humans. The proposed EU AI Act likewise includes a ban on AI systems impersonating humans where this is not clearly recognizable to the user.
I am not sure I understand the problem or the conflict with the EU AI Act. It seems to me that clear recognizability (you simulated Bill Gates on your computer, you know that you simulated it, and you don't claim to other humans that the actual Bill Gates said it) should most likely resolve the issue.
Yes, one could, e.g., have a clear disclaimer above the chat window saying that this is a simulation and not the real Bill Gates. I still think this is a bit tricky. E.g., the simulated Bill Gates could be really persuasive and insist that the disclaimer is wrong, and some users might end up believing him rather than the disclaimer. Moreover, even if the user believes the disclaimer on a conscious level, the impersonation might still have a subconscious effect. E.g., imagine an AI friend or companion that repeatedly reminds you that it is just an AI, versus one that pretends to be a human. The one that pretends to be a human might build more intimacy with the user, even if on an abstract level the user knows that it's just an AI.
I don't actually know whether this would conflict in any way with the EU AI Act. I agree that the disclaimer may be enough for the purposes of the Act.