Possible counterpoint: people aren’t as unique as we like to flatter ourselves into imagining. I worked on machine learning for ad targeting for several years, and my takeaway is that the majority of people, at the scale of many millions, fall into groups. It’s not hard to pattern-match a person to a behavioral group and then use a strategy tailored to that group to manipulate the individual. So it’s not just you who would need intense privacy standards, but everyone sufficiently similar to you, and that just isn’t feasible. We should focus more on the fact that creating a manipulative AI agent at all is a dangerous idea, and less on trying to narrowly protect ourselves. Even protecting yourself and your in-group doesn’t help much if the majority of society gets powerfully manipulated and becomes a tool of the puppet-master AI.
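To make the "pattern-match to a group" step concrete, here's a minimal sketch using scikit-learn's KMeans as a stand-in for whatever clustering a real targeting pipeline uses. The features, cluster count, and per-group strategy table are all made up for illustration; the point is only that an individual's privacy doesn't prevent assignment, since the clusters were learned from everyone else's data.

```python
# Hypothetical illustration: cluster users by behavior, then map a
# new (privacy-conscious) individual to the nearest group's strategy.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy behavioral features per user, e.g.
# [clicks/day, session length in minutes, night-activity ratio]
users = rng.random((10_000, 3)) * [50, 120, 1]

# Learn a handful of behavioral groups from the population at large.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(users)

# One tailored targeting strategy per group (placeholder labels).
strategies = {i: f"strategy_{i}" for i in range(5)}

# Even a user who never contributed training data gets matched to a
# group via the people sufficiently similar to them who did.
new_user = np.array([[12.0, 45.0, 0.3]])
group = kmeans.predict(new_user)[0]
print(f"user assigned to group {group}, targeted with {strategies[group]}")
```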