Absolutely love this story, but I think the take on sociopathy is a bit confused: SimplexAI-m seems like the opposite of a sociopath.
Sociopathy (really psychopathy) is a reproductive strategy with a 2% incidence in the human population at equilibrium; it’s a predator-prey situation. Psychopaths use emotional mimicry and manipulation to appear to conform to our positive-sum social, moral and economic protocols, but actually just subvert them for personal gain. Intelligent psychopaths often optimise for plausible deniability (like sum-threshold attacks and the law of prevalence). There are distinct differences in brain structure in primary psychopaths; there is literally less grey matter in areas associated with empathy and social emotions.
(I’m being simplistic. There are certainly people who exhibit milder ASPD symptoms without commensurate brain damage; my preferred term is “asshole.”)
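To make the "low incidence at equilibrium" point concrete, here's a toy frequency-dependent-selection sketch. This is my own illustration, not anything from the story or from the psychopathy literature, and the payoff numbers are made up purely so the fixed point lands near 2%: an exploitative "mimic" strategy gains from surrounding cooperators when it's rare, but pays rising detection and retaliation costs as it becomes common, so its share settles at a small stable fraction.

```python
# Toy model (illustrative only): replicator dynamics for an exploitative strategy
# whose payoff is frequency-dependent. With the cooperator payoff normalised to 0,
# the interior equilibrium is p* = gain / (gain + detection_cost) = 0.1 / 5.0 = 0.02.

def exploiter_payoff(p, gain=0.1, detection_cost=4.9):
    # Gains scale with the supply of cooperators (1 - p); costs (vigilance,
    # retaliation, getting caught) scale with how prevalent exploiters already are (p).
    return gain * (1 - p) - detection_cost * p

def simulate(p0=0.2, steps=20_000, dt=0.01):
    # Replicator dynamics: dp/dt = p * (1 - p) * (exploiter payoff - cooperator payoff).
    p = p0
    for _ in range(steps):
        p += dt * p * (1 - p) * exploiter_payoff(p)
    return p

if __name__ == "__main__":
    print(f"equilibrium exploiter frequency ≈ {simulate():.3f}")  # ~0.020
```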
Thus a super-moral AI as described is the complete opposite of a psychopath—it does a better job of conforming to symmetric social/moral protocols than anybody else. It may appear to tick off the ASPD checklist, but only insofar as the tails come apart. This isn’t an endorsement of SimplexAI-m’s views though—I enjoy being alive!
From what you describe, it seems like SimplexAI-m would very much fit the description of a sociopath?
Yes, it adheres to a strict set of moral protocols, but I don’t think that’s necessarily the same thing as being socially conforming. The AI would have the ability to mimic empathy and use it as a tool without actually having any, since it doesn’t share or empathize with any human values.
Am I understanding that right?
I’ll admit I was being a bit fuzzy—it doesn’t really make much sense to extrapolate the “sociopath” boundary in people space to arbitrary agent spaces. Debating whether SimplexAI-m is a sociopath is sort of like asking whether an isolated tree falling makes a sound.
So I was mostly trying to convey my mental model of the most useful cluster in people space that could be called sociopathy, because 1) I see it very, very consistently misunderstood, and 2) sociopathy is far more important to spot than virtually any other dimension.
As an aside, I think the best book on the topic is The Psychopath Code by Pieter Hintjens, a software engineer. I’ve perused a few books written by academics and can’t recommend any; it seems (to my System 1, at least) like the study of psychopathy must be afflicted by even worse selection effects and experiment design than the rest of psychology, because the academic books don’t fit the behaviour of people I’ve known at all.