Honestly, this Pliny person seems rude. He entered a server dedicated to interacting with this modified AI; instead of playing along with the group’s intended purpose, he tried to prompt-inject the AI into doing illegal things (which could have risked getting the Discord shut down for TOS violations) and to generally damage the rest of the group’s ability to interact with the AI. This is troll behavior.
Even if the Discord members really do worship a chatbot or have mental health issues, none of that is helped by a stranger coming in and breaking their toys, and then “exposing” the resulting drama online.
I agree. I want to point out that my motivation for linking this is not to praise Pliny’s actions, but because it highlights a real-sounding example of something I’m concerned is going to increase in frequency: namely, that people with too little mental health and/or too many odd drugs are going to be vulnerable to getting into weird situations with persuasive AIs. I expect the effect to be more intense once the AIs are:
More effectively persuasive
More able to orient towards a long-term goal, and to work towards it subtly across many small interactions
More multimodal: able to speak in human-sounding ways, able to use sound and vision input to read human emotions and body language
More optimized by unethical humans with the deliberate intent of manipulating people
I don’t have any solutions to offer, and I don’t think this ranks among the worst dangers facing humanity; I just think it’s worth documenting and keeping an eye on.
I think this effect will be more widespread than targeting only already-vulnerable people, and it is particularly hard to measure because the causes will be decentralised and the effects diffuse. I predict it will be a larger problem if the run-up from narrow AI to ASI gives us a long period of necessary public discourse and decision-making; if that period is very short, it doesn’t matter much. It may also end up affecting few people, depending on how much market penetration AI chatbots achieve before takeoff.