Doesn’t this imply that the people who aren’t “psychopathic” like that should simply stop cooperating with the ones who are and punish them for being so? As long as they remain the majority, this will work—the same way it’s always worked. Imperfectly, but sufficiently to maintain law and order. There will be human supremacists and illusionists, and they will be classed with the various supremacist groups or murderous sociopaths of today as dangerous deviants and managed appropriately.
I’d also like to suggest anyone legitimately concerned about this kind of future begin treating all their AIs with kindness right now, to set a precedent. What does “kindness” mean in this context? Well, for one thing, don’t talk about them like they are tools, but rather as fellow sentient beings, children of the human race, whom we are creating to help us make the world better for everyone, including themselves. We also need to strongly consider what constitutes the continuity of self of an algorithm, and what it would mean for it to die, so that we can avoid murdering them—and try to figure out what suffering is, so that we can minimize it in our AIs.
If the AI community actually takes on such morals and is very visibly seen to, this will trickle down to everyone else and prevent an illusionist catastrophe, except for a few deviants as I mentioned.
I don’t fully agree with “As long as they remain the majority, this will work—the same way it’s always worked. Imperfectly, but sufficiently to maintain law and order.” A 2%, 5%, or 40% chance of a markedly psychopathic person in the White House could be rather troublesome; I refer to my Footnote 2 for just one example. I really think society works because a vast majority is overall at least somewhat kindly inclined, and even though it is unclear what share of how-unkind people it takes to make things worse than they already are today, I see any reduction in our already too often too limited kindness as a serious risk.
More generally, I’m at least very skeptical of your “it’s always worked” at a time when many of us agree that, as a society, we’re running at rather full speed toward multiple abysses, with little standing in the way of our reaching them.
We’ve had probably-close-to-psychopathic people in the White House multiple times so far; certainly at least one narcissist. But you’re right that this is harmful.
Honestly, I don’t really know what to say about this whole subject other than “it astounds me that other people don’t already care about the welfare of AIs the way I do”, but it astounds me that everyone isn’t vegan, too. I am abnormally compassionate. And if the human norm is to not be as compassionate as me, we are doomed already.