Okay.
Plausibly. You don’t now care about the same things you cared about when you were 10.

I have different interests now than I did when I was ten, but that’s not the same as having different terminal values.
Suppose a person doesn’t support vegetarianism; they’ve never really given it much consideration, but they default to the assumption that eating meat doesn’t cause much harm, and meat is tasty, so what’s the big deal?
When they get older, they watch some videos on the conditions in which animals are raised for slaughter, read some studies on the neurology of livestock animals with respect to their ability to suffer, and decide that mainstream livestock farming does cause a lot of harm after all, and so they become a vegetarian.
This doesn’t mean that their values have been altered at all. They’ve simply revised their behavior in light of new information, applying the same values they already had. They started out caring about the suffering of sentient beings, and they ended up caring about the suffering of sentient beings; they just revised their beliefs about what actions that value should compel, on the basis of other information.
To see whether a person’s values have changed, we would want to look not at whether they endorse the same behaviors or factual beliefs they used to, but at whether their past self could relate to the reasons their present self has for believing and supporting the things they do now.
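(A minimal sketch of the distinction, as a toy model of my own with invented numbers and names: the terminal value never changes; only the belief does, and the chosen behavior follows from it.)

```python
# Toy model (illustrative only): a fixed terminal value plus a revisable
# belief about the world. New evidence changes the behavior, not the value.

def utility(outcome):
    """Terminal values: suffering is bad, tastiness is good. Never updated."""
    return -10.0 * outcome["suffering"] + 1.0 * outcome["tastiness"]

def choose_diet(believed_suffering_from_meat):
    """Instrumental choice: pick whichever action the fixed values score highest."""
    options = {
        "eat meat":   {"suffering": believed_suffering_from_meat, "tastiness": 1.0},
        "vegetarian": {"suffering": 0.0, "tastiness": 0.6},
    }
    return max(options, key=lambda o: utility(options[o]))

print(choose_diet(believed_suffering_from_meat=0.01))  # 'eat meat'
# Watching the videos and reading the studies revises only the belief:
print(choose_diet(believed_suffering_from_meat=0.50))  # 'vegetarian'
```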
The average SR-SI agent would not be a clipper for exactly the same reason that the average human is not an evil genius.

The fact that humans are mostly not evil geniuses says next to nothing about the power of intelligence and rationality to converge on human standards of goodness. We all share almost all the same brainware. To a pebblesorter, humans would nearly all be evil geniuses, possessed of powerful intellects, yet totally bereft of a proper moral concern with sorting pebbles.
Many humans are sociopaths, and that slight deviation from normal human brainware results in people who cannot be argued into caring about other people for their own sakes. Nor can a sociopath argue a neurotypical person into becoming a sociopath.
If intelligence and rationality cause people to update their terminal values, why do sociopaths whose intelligence and rationality are normal to high by human standards (of which there are many) not update into being non-sociopaths, or vice-versa?
coughaynrandcough
There’s a difference between being a sociopath and being a jerk. Sociopaths don’t need to rationalize dicking other people over.
If Ayn Rand’s works could actually turn formerly neurotypical people into sociopaths, that would be a hell of a find, and possibly spark a neuromedical breakthrough.
That’s beside the point, though. Just because two agents have incompatible values doesn’t mean they can’t be persuaded otherwise.
ETA: in other words, persuading a sociopath to act like they’re ethical or vice versa is possible. It just doesn’t rewire their terminal values.
Sure, you can negotiate with an agent with conflicting values, but I don’t think it’s beside the point.
You can get a sociopath to cooperate with non-sociopaths by making them trade off for things they do care about, or using coercive power. But Clippy doesn’t have any concerns other than paperclips to trade off against its concern for paperclips, and we’re not in a position to coerce Clippy, because Clippy is powerful enough to treat us as an obstacle to be destroyed. The fact that the non-sociopath majority can more or less keep the sociopath minority under control doesn’t mean that we could persuade agents whose values deviate far from our own to accommodate us if we didn’t have coercive power over them.
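(A toy way to put the trade-off point, with invented utility weights: an agent whose utility function has several terms can be paid in the other terms; an agent whose utility function has exactly one term has no price in anything else.)

```python
# Sketch with made-up numbers: cooperation can be bought from a multi-term
# utility function by paying in the terms the agent does care about.

def accepts_deal(utility, status_quo, deal):
    return utility(deal) > utility(status_quo)

sociopath = lambda o: 5 * o.get("money", 0) + 3 * o.get("status", 0)
clippy    = lambda o: o.get("paperclips", 0)  # single term: only paperclips count

# Pay the sociopath in status to cooperate: the deal can come out ahead.
print(accepts_deal(sociopath, {"money": 10}, {"money": 10, "status": 4}))          # True
# Offer Clippy anything at all in exchange for fewer paperclips: never ahead.
print(accepts_deal(clippy, {"paperclips": 100}, {"paperclips": 90, "money": 99}))  # False
```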
Clippy is a superintelligence. Humans, neurotypical or no, are not.
I’m not saying it’s necessarily rational for sociopaths to act moral or vice versa. I’m saying people can be (and have been) persuaded of this.
Prawnoffate’s point to begin with was that humans could and would change their fundamental values on new information about what is moral. I suggested sociopaths as an example of people who wouldn’t change their values to conform to those of other people on the basis of argument or evidence, nor would ordinary humans change their fundamental values to a sociopath’s.
If we’ve progressed to a discussion of whether it’s possible to coerce less powerful agents into behaving in accordance with our values, I think we’ve departed from the context in which sociopaths were relevant in the first place.
Oh, sorry, I wasn’t disagreeing with you about that, just nitpicking your example. Should have made that clearer ;)
Are you arguing Ayn Rand can argue sociopaths into caring about other people for their own sakes, or argue neurotypical people into becoming sociopaths?
(I could see both arguments, although as Desrtopa references, the latter seems unlikely. Maybe you could argue a neurotypical person into sociopathic-like behavior, which seems a weaker and more plausible claim.)
Then that makes it twice as effective, doesn’t it?
(Edited for clarity.)
You can construe the facts as being compatible with the theory of terminal values, but that doesn’t actually support the theory of TVs.
Ethics is about regulating behaviour to take into account the preferences of others. I don’t see how pebblesorting would count.
Psychopathy is a strong egotistical bias.
How do you know that? Can you explain a process by which an SI-SR paperclipper could become convinced of this?
How can you tell that psychopathy is an egotistical bias rather than non-psychopathy being an empathetic bias?
Much the same way as I understand the meanings of most words. Why is that a problem in this case?
Non-psychopaths don’t generally put other people above themselves—that is, they treat people equally, including themselves.
“That’s what it means by definition” wasn’t much help to you when it came to terminal values; why do you think “that’s what the word means” is useful here and not there? How do you determine that this word, and not that one, is an accurate description of a thing that exists?
This is not, in fact, true. Non-psychopaths routinely apply double standards to themselves and other people, and don’t necessarily even realize they’re doing it.
If we accept that it’s true for the sake of an argument though, how do we know that they don’t just have a strong egalitarian bias?
Are you saying ethical behaviour doesn’t exist on this planet, or that ethical behaviour as I have defined it doesn’t exist on this planet?
OK. Non-psychopaths have a lesser degree of egotistical bias. Does that prove they have some different bias? No. Does that prove an ideal rational and ethical agent would still have some bias from some point of view? No.
That’s like saying they have a bias towards not having a bias.
I’m saying that ethical behavior as you have defined it is almost certainly not a universal psychological attractor. An SI-SR agent could look at humans and say “yep, this is by and large what humans think of as ‘ethics,’” but that doesn’t mean human ethics would exert any sort of compulsion on the agent.
You not only haven’t proven that psychopaths are the ones with an additional bias, you haven’t even addressed the matter, you’ve just taken it for granted from the start.
How do you demonstrate that psychopaths have an egotistical bias, rather than non-psychopaths having an egalitarian bias, or rather than both of them having different value systems and pursuing them with equal degrees of rationality?
I didn’t say it was universal among all entities of all degrees of intelligence or rationality. I said there was a non-negligible probability of agents of a certain level of rationality converging on an understanding of ethics.
“SR” stands for super-rational. Rational agents find rational arguments rationally compelling. If rational arguments can be made for a certain understanding of ethics, they will be compelled by them.
Do you contest that psychopaths have more egotistical bias than the general population?
Yes. I thought it was something everyone knows.
It is absurd to characterise the practice of treating everyone the same as a form of bias.
Where does this non-negligible probability come from though? When I’ve asked you to provide any reason to suspect it, you’ve just said that as you’re not arguing there’s a high probability, there’s no need for you to answer that.
I have been implicitly asking all along here: what basis do we have for suspecting that any sort of universally rationally compelling ethical arguments exist at all?
Yes.
Why?
Combining the probabilities of the steps of the argument.
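(For what that arithmetic looks like, with purely hypothetical step probabilities: a conclusion that needs every step of a chain to hold is bounded by the product of the steps, so it decays quickly but needn’t fall to zero.)

```python
# Illustrative numbers only: combining the probabilities of an argument's steps.
from math import prod

step_probabilities = [0.9, 0.8, 0.7, 0.9]   # hypothetical per-step probabilities
print(prod(step_probabilities))             # 0.4536: much reduced, but non-negligible
```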
There are rationally compelling arguments.
Rationality is probably universalisable, since it is based on the avoidance of biases, including those regarding who and where you are.
There is nothing about ethics that makes it unsusceptible to rational argument.
There are examples of rational argument about ethics, and of people being compelled by them.
That is an extraordinary claim, and the burden is on you to support it.
In the sense of “Nothing is a kind of something” or “atheism is a kind of religion”.
Rationality may be universalizable, but that doesn’t mean ethics is.
If ethics are based on innate values extrapolated into systems of behavior according to their expected implications, then people will be susceptible to arguments regarding the expected implications of those beliefs, but not arguments regarding their innate values.
I would accept something like “if you accept that it’s bad to make sentient beings suffer, you should oppose animal abuse” can be rationally argued for, but that doesn’t mean that you can step back indefinitely and justify each premise behind it. How would you convince an entity which doesn’t already believe it that it should care about happiness or suffering at all?
I would claim the reverse, that saying that sociopathic people have additional egocentric bias is an extraordinary claim, and so I will ask you to support it, but of course, I am quite prepared to reciprocate by supporting my own claim.
It’s much easier to subtract a heuristic from a developed mind by dysfunction than it is to add one. It is more likely as a prior that sociopaths are missing something that ordinary people possess, rather than having something that most people don’t, and that something appears to be the brain functions normally concerned with empathy. It’s not that they’re more concerned with self interest than other people, but that they’re less concerned with other people’s interests.
Human brains are not “rationality + biases,” such that you could systematically subtract all the biases from a human brain and end up with perfect rationality. We are a bunch of cognitive adaptations, some of which are not at all in accordance with strict rationality, hacked together over our evolutionary history. So it makes little sense to judge humans with unusual neurology as being humans plus or minus additional biases, rather than being plus or minus additional functions or adaptations.
Is it a bias to treat people differently from rocks?
Now, if we’re going to categorize innate hardwired values, such as that which Clippy has for paperclips, as biases, then I would say “yes.”
I don’t think it makes sense to categorize such innate values as biases, and so I do not think that Clippy is “biased” compared to an ideally rational agent. Instrumental rationality is for pursuing agents’ innate values. But if you think it takes bias to get you from not caring about paperclips to caring about paperclips, can you explain how, with no bias, you can get from not caring about anything, to caring about something?
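(A sketch of that last point, with invented actions and payoffs: two agents can share the identical decision procedure and differ only in the utility function plugged into it, so more of the shared machinery alone never moves one agent toward the other’s values.)

```python
# Same 'rationality' (argmax over the agent's own utility), different values.

ACTIONS = {
    "make paperclips": {"paperclips": 100, "human_welfare": -5},
    "help humans":     {"paperclips": 0,   "human_welfare": 10},
}

def rational_choice(utility):
    """The shared machinery: pick the action your own values score highest."""
    return max(ACTIONS, key=lambda a: utility(ACTIONS[a]))

clippy_utility = lambda o: o["paperclips"]      # cares only about paperclips
human_utility  = lambda o: o["human_welfare"]   # cares only about welfare

print(rational_choice(clippy_utility))  # 'make paperclips'
print(rational_choice(human_utility))   # 'help humans'
```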
If there were in fact some sort of objective morality, under which some people were much more valuable than others, then an ethical system which valued all people equally would be systematically biased in favor of the less valuable.
Can you expand on what you mean by “absurd” here?
In the sense of “Nothing is a kind of something” or “atheism is a kind of religion”.
Hm.
OK.
So, I imagine the following conversation between two people (A and B):
A: It’s absurd to say ‘atheism is a kind of religion.’
B: Why?
A: Well, ‘religion’ is a word with an agreed-upon meaning, and it denotes a particular category of structures in the world, specifically those with properties X, Y, Z, etc. Atheism lacks those properties, so atheism is not a religion.
B: I agree, but that merely shows the claim is mistaken. Why is it absurd?
A: (thinks) Well, what I mean is that any mind capable of seriously considering the question ‘Is atheism a religion?’ should reach the same conclusion without significant difficulty. It’s not just mistaken, it’s obviously mistaken. And, more than that, I mean that to conclude instead that atheism is a religion is not just false, but the opposite of the truth… that is, it’s blatantly mistaken.
Is A in the dialog above capturing something like what you mean?
If so, I disagree with your claim. It may be mistaken to characterize the practice of treating everyone the same as a form of bias, but it is not obviously mistaken or blatantly mistaken. In fact, I’m not sure it’s mistaken at all, though if it is a bias, it’s one I endorse among humans in a lot of contexts.
So, terminology aside, I guess the question I’m really asking is: how would I conclude that treating everyone the same (as opposed to treating different people differently) is not actually a bias, given that this is not obvious to me?