1: I think the correct answer wrt LaMDA is that it is slightly sentient, and that widespread chatbots with consistent personalities will cause most people to think that AIs can be sentient. See r/replika.
2: I doubt most people would care that much about these sorts of philosophical issues. IIRC, even moral philosophers have basically the same level of personal morality / altruism that we’d expect from someone of their education level. People’s actual everyday morality is largely shielded from changes in their ontological / philosophical views.
1: Here you contest ‘LaMDA is insentient’. In the story, instead, ‘LaMDA is by many seen as (completely) insentient’ is the relevant premise. This premise can easily be seen to be true. It remains true independently of whether LaMDA is in fact sentient (and independently of whether it is fully or slightly so, for those who believe such a gradualist notion of sentience even makes sense). So I will not try to convince you, or others who equally believe LaMDA is sentient, of LaMDA’s insentience.
2: A short answer: maybe most people indeed won’t react that way, but, as I explain, even a small share might suffice for it to be a serious problem.
But you also seem to contest the step ‘Illusionism → reduced altruism’ more generally, i.e. the story’s idea that if (a relevant share of) people believe humans are insentient, some people will exhibit lower altruism. On this:
Our* intuitions about things like Westworld, and the reactions we hear when people propose illusionism, suggest that illusionism does represent a strong push towards ‘oh, then we (theoretically) do not need to care’. I think you’re totally right that humans have a strong capacity to compartmentalize, i.e. to rather strongly separate fundamental theoretical insight from practical behavior, and I can well see (many, maybe today more or less all) illusionists barely questioning in practice whether they want to be kind to others. Even a stylized illusionist might go ‘in theory, I know it almost surely does not make sense, but of course I’m going to keep taking true care of these people’. What I question is the idea that
a. there would be almost no exceptions to this rule
or that
b. no change to this rule would be conceivable in a world where we* really get used to routinely treating advanced AI (possibly behaving as deeply and maybe even as emotionally as we do!) without genuine care for it, all while realizing more and more (as neuroscience progresses...) that, at the most fundamental level, our brains actually function in a rather similar way to such computers.
So what I defend is that even if in today’s world the rare (and presumably mostly not 100.00% sure) illusionist may tend to be a very kind philosopher, a future environment in which we* routinely treat advanced AI in a ‘careless’ way, i.e. in which we* don’t attribute intrinsic value to them, risks turning at least some people into rather different ones. As one example, already today, in some particular environments, many people treat others in a rather psychopathic way, and at the very least I see illusionism providing a convenient rationalization/excuse for such behavior, ultimately making it happen more easily. But I do think the risk is broader, with possibly many people’s intuitions/behavior being reshaped over time within the broader population too, though I don’t have a strong view on exactly how widespread such changes would be (see the discussion of selection effects as to why even small shares of more psychopathic-ish people could be a really serious problem).
*Again, “our”/”we” here refers to the general population, or to the plausible significant subset of it who would not ascribe sentience to large-but-arguably-still-rather-simple statistical computers like LaMDA or bigger/somewhat more complex versions of it.
Doesn’t this imply that the people who aren’t “psychopathic” like that should simply stop cooperating with the ones who are and punish them for being so? As long as they remain the majority, this will work—the same way it’s always worked. Imperfectly, but sufficiently to maintain law and order. There will be human supremacists and illusionists, and they will be classed with the various supremacist groups or murderous sociopaths of today as dangerous deviants and managed appropriately.
I’d also like to suggest anyone legitimately concerned about this kind of future begin treating all their AIs with kindness right now, to set a precedent. What does “kindness” mean in this context? Well, for one thing, don’t talk about them like they are tools, but rather as fellow sentient beings, children of the human race, whom we are creating to help us make the world better for everyone, including themselves. We also need to strongly consider what constitutes the continuity of self of an algorithm, and what it would mean for it to die, so that we can avoid murdering them—and try to figure out what suffering is, so that we can minimize it in our AIs.
If the AI community actually takes on such morals and is very visibly seen to, this will trickle down to everyone else and prevent an illusionist catastrophe, except for a few deviants as I mentioned.
On your first paragraph, for now: I don’t fully agree with “As long as they remain the majority, this will work—the same way it’s always worked. Imperfectly, but sufficiently to maintain law and order.” A 2%, 5%, or 40% chance of a quite psychopathic person in the White House could be rather troublesome; I refer to my Footnote 2 for just one example. I really think society works because a vast majority is overall at least a bit kindly inclined, and even if it is unclear what share of how-unkind people it takes to make things even worse than they are today, I see any reduction in our already too often too limited kindness as a serious risk.
More generally, I’m at least very skeptical about your “it’s always worked” at a time when many of us agree that, as a society, we’re running at rather full speed towards multiple abysses, with not much standing in the way of us reaching them.
We’ve had probably-close-to-psychopathic people in the White House multiple times so far. Certainly at least one narcissist. But you’re right that this is harmful.
Honestly, I don’t really know what to say about this whole subject other than “it astounds me that other people don’t already care about the welfare of AIs the way I do”, but it astounds me that everyone isn’t vegan, too. I am abnormally compassionate. And if the human norm is to not be as compassionate as me, we are doomed already.