I basically agree with this comment, and the basic reason I am much more pessimistic about sane AI governance than a lot of LWers is precisely that I expect LLMs to be more persuasive than humans, and there's very strong evidence for it.

Here's an RCT, pre-registered, on this topic. While I find the sample size a little low (I'd like it to be more in the realm of 1000-2000 randomly selected people), this is the only type of study that can ensure the effects are causal without relying much on your priors. So the fact that it shows large persuasion effects from LLMs is really strong evidence for the belief that AI systems are better than humans at persuading people when given access to personal data and interaction.
https://arxiv.org/abs/2403.14380
More generally, it provides evidence for Sam Altman's thesis that superhumanly persuasive AI will come long before AI that is superhuman in every other field.