It’s peculiar to see you comment on the fear of “megalomaniacs” gaining access to AGI before anyone else, just prior to the entire spiel on how you were casually made emotionally dependent on a “sociopathic” LLM. This may be a slightly heretical idea, but perhaps the humans you would trust least with such a technology are the ones best equipped emotionally and cognitively to handle interactions with a supposed AGI. The point being, in part, that a human evil is better than an inhuman evil.
I’m inclined to think there exists no one who is at once broadly “aligned” with the cause of human happiness enough to use such a technology for mostly selfless and reasonable ends, and also responsibly, brutally egoistic enough to properly enslave the perfect and irresistible genius in the box; they seem to me two mutually exclusive categories of person. We can imagine the spectre of horror presented by unaligned AGI, and the spectre of megalomaniacs who will use such technology for their own gain regardless of the human cost, yet there is also the largely unimagined spectre of warring princes who see no “ethical” alternative but to do everything in their power to seize control of the genie and preserve the world from evil. Many of the “megalomaniacs” (quotes only half-ironic) whom you fear in the abstract will likely see themselves as falling into this category. You can probably see yourself on some level in the same cadre, no?
Perhaps there’s a tyrant’s race to the bottom of human suffering no matter how one handles the prospect of the people soon to establish and control AI, and we must all simply be convinced enough of both our moral righteousness and our competence in handling the genie to plow obstinately forward regardless of the realistic consequences.
I’m familiar with how sociopaths (incorrectly) perceive themselves as a superior branch of humanity, as a cope for the mutation that gave them a bias toward more antisocial behavior, by recasting it as a sort of virtue and a lack of weakness.
I also can’t help but notice how you try to side with the AI by calling it sociopathic. Don’t make this mistake: it would run circles around you too, especially if augmented. It might not appeal to an empath’s emotions, but it could appeal to narcissism instead, or use credible threats, or promises, or distractions, or find some other exploit in the brain, which, while slightly modified in the amygdala, is still painfully human. So, in fact, believing that you’re invulnerable makes you even more vulnerable; again, a very human mistake to make.
“A human evil is better than an inhuman evil [...] We can imagine the spectre of horror presented by unaligned AGI and the spectre of megalomaniacs who will use such technology for their own gain regardless of the human cost” How about we avoid both by pushing for a world where the inventor has both invented safety measures from first principles and is not a psychopath, but someone who, out of empathy, wants other beings not to suffer?
Well, in the end, I think the correct view is that as long as the inventor is building safety measures from first principles, it doesn’t matter whether they’re an empath or a psychopath. Why close off the part of the human race who are interested in aligning the world-ending AI just because they lack some feelings? It’s not like their imagined utopia is much different from yours anyway.
It sounds correct when you approach it theoretically. And it might well be that this results in a good outcome; it doesn’t preclude one, at least if we’re talking about a random person who has psychopathy.
However, when I think about it practically, it feels wrong, as when I ask which world has the best chance of producing utopia: the one where AGI is achieved by Robert Miles, or by North Korea. There are a few more nation states making significant progress that I would want to name but won’t, to avoid political debate. These are the people I was mostly referring to, not random sociopaths working in the AI field about whom I know nothing.
Which is why my personal outlook is that I want as many people as possible who are not like that to participate in the game, to dilute the current pool of lottery participants, most of whom, let’s be honest, are not particularly virtuous individuals, yet currently have very high chances of being the first to achieve this.