Yes, but on lesswrong, at least, we’ve been exposed to enough social psychology to understand why that’s a dangerous intrinsic goal to have. It’s certainly seductive, but aren’t there better things to do with increased agency than to seek to dominate other potential agents?
Unless there’s a friendly AI which has been built in secret somewhere, we’re still all human, with all the weaknesses and foibles of human nature. And though we might try to mitigate those weaknesses, one of the biggest weaknesses in human nature is the belief that we have already mitigated them, which leads us to stop trying.
Status interactions are a big part of the human psyche. We signal in many ways—posture, facial expression, selection of clothing, word choice—and we respond to such signals automatically. If a man steps up to one and asks for directions to the local primary school, one would look at his signals before replying. Is he carrying a container of petrol and a box of matches, does he have a crazed look in his eye? Perhaps better to direct him to the local police station. Is he in a nice suit, smartly dressed, with well-shined shoes, accompanied by a small child in a brand-new school uniform? He probably has legitimate business at the school. And in between the two, there’s a whole range of potential sets of signals; and where there are signals, there are those who subvert the signals. Social hackers, I guess one could call them. And where such people exist—well, is it a good thing to pay attention to the signals or not? How much importance should one place on these signals, when the signals themselves could be subverted? And how should one signal oneself—for any behaviour is a signal of some sort?
Except that what’s being discussed here is the exploitation of those weaknesses, not their mitigation. And seeking to exploit those weaknesses as an end in and of itself leads to a particular kind of affective death spiral that rationalists claim to want to avoid, so I’m trying to raise a “what’s up with that?” signal before a particular set of adverse cultural values locks in.
Ah, I see; so while I’m saying that I expect that some exploitation will happen with high probability in any sufficiently large social group, you are trying to point out the negative side of said exploitation and thus cut it off, or at least reduce it, at an early stage.