I agree. I want to point out that my motivation for linking this is not to praise Pliny’s actions. It’s because this highlights a real-sounding example of something I’m concerned is going to increase in frequency. Namely, that people with too little mental health and/or too many odd drugs are going to be vulnerable to getting into weird situations with persuasive AIs. I expect the effect to be more intense once the AIs are:
More effectively persuasive
More able to orient towards a long-term goal and to work towards it subtly across many small interactions
More multimodal: able to speak in human-sounding ways, and to use audio and visual input to read human emotions and body language
More optimized by unethical humans with the deliberate intent of manipulating people
I don’t have any solutions to offer, and I don’t think this ranks among the worst dangers facing humanity; I just think it’s worth documenting and keeping an eye on.
I think this effect will be more widespread than just already-vulnerable people, and it is particularly hard to measure because the causes will be decentralised and the effects diffuse. I predict it being a larger problem if, in the run-up between narrow AI and ASI, we have a longer period of necessary public discourse and decision-making. If that period is very short, then it doesn’t matter much. It may also affect relatively few people, depending on how much market penetration AI chatbots achieve before takeoff.