I’m familiar with how sociopaths (incorrectly) perceive themselves as a superior branch of humanity, as a cope for the mutation that biased them toward more antisocial behavior, reframing it as a kind of virtue and a lack of weakness.
I also can’t help but notice how you try to side with the AI by calling it sociopathic. Don’t make this mistake: it would run circles around you too, especially if augmented. It might not appeal to empathic emotions, but it could appeal to narcissism instead, or use credible threats, or promises, or distractions, or find some other exploit in the brain, which, while slightly modified in the amygdala, is still painfully human. So, in fact, believing that you’re invulnerable makes you even more vulnerable, which is, again, a very human mistake to make.
“A human evil is better than an inhuman evil [...] We can imagine the spectre of horror presented by unaligned AGI and the spectre of megalomaniacs who will use such technology for their own gain regardless of the human cost.” How about we avoid both by pushing for a world where the inventor has both invented safety measures from first principles and is not a psychopath, but someone who, out of empathy, wants other beings not to suffer?
Well, in the end, I think the correct view is that as long as the inventor is building safety measures from first principles, it doesn’t matter whether they’re an empath or a psychopath. Why close off the part of the human race who are interested in aligning world-ending AI just because they lack some feelings? It’s not like their imagined utopia is much different from yours anyway.
It sounds correct when you approach it theoretically. And it might well result in a good outcome; nothing precludes it, at least if we’re talking about a random person who has psychopathy.
However, when I think about it practically, it feels wrong. When I ask which world has the best chance of producing utopia, is it the one where AGI is achieved by Robert Miles, or by North Korea? There are a few more nation states making large progress that I would want to name but won’t, to avoid political debate. Those are the actors I was mostly referring to, not random sociopaths working in the AI field about whom I know nothing.
Which is why my personal outlook is that I want as many people who are not like that to participate in the game, to dilute the current pool of lottery participants, most of whom, let’s be honest, are not particularly virtuous individuals, yet currently have very high chances of being the first to achieve this.