Much the same way as I understand the meanings of most words. Why is that a problem in this case?
“That’s what it means by definition” wasn’t much help to you when it came to terminal values; why do you think “that’s what the word means” is useful here and not there? How do you determine that this word, and not that one, is an accurate description of a thing that exists?
Non-psychopaths don’t generally put other people above themselves; that is, they treat people equally, including themselves.
This is not, in fact, true. Non-psychopaths routinely apply double standards to themselves and other people, and don’t necessarily even realize they’re doing it.
If we accept that it’s true for the sake of an argument though, how do we know that they don’t just have a strong egalitarian bias?
How do you determine that this word, and not that one, is an accurate description of a thing that exists?
Are you saying ethical behaviour doesn’t exist on this planet, or that ethical behaviour as I have defined it doesn’t exist on this planet?
This is not, in fact, true. Non-psychopaths routinely apply double standards to themselves and other people, and don’t necessarily even realize they’re doing it.
OK. Non-psychopaths have a lesser degree of egotistical bias. Does that prove they have some different bias? No. Does that prove an ideal rational and ethical agent would still have some bias from some point of view?
No
This is not, in fact, true. Non-psychopaths routinely apply double standards to themselves and other people, and don’t necessarily even realize they’re doing it.
That’s like saying they have a bias towards not having a bias.
Are you saying ethical behaviour doesn’t exist on this planet, or that ethical behaviour as I have defined it doesn’t exist on this planet?
I’m saying that ethical behavior as you have defined it is almost certainly not a universal psychological attractor. An SI-SR agent could look at humans and say “yep, this is by and large what humans think of as ‘ethics,’” but that doesn’t mean it would exert any sort of compulsion on it.
OK. Non-psychopaths have a lesser degree of egotistical bias. Does that prove they have some different bias? No. Does that prove an ideal rational and ethical agent would still have some bias from some point of view? No
You not only haven’t proven that psychopaths are the ones with an additional bias, you haven’t even addressed the matter, you’ve just taken it for granted from the start.
How do you demonstrate that psychopaths have an egotistical bias, rather than non-psychopaths having an egalitarian bias, or rather than both of them having different value systems and pursuing them with equal degrees of rationality?
I’m saying that ethical behavior as you have defined it is almost certainly not a universal psychological attractor.
I didn’t say it was universal among all entities of all degrees of intelligence or rationality. I said there was a non-negligible probability of agents of a certain level of rationality converging on an understanding of ethics.
An SI-SR agent could look at humans and say “yep, this is by and large what humans think of as ‘ethics,’” but that doesn’t mean it would exert any sort of compulsion on it.
“SR” stands for super-rational. Rational agents find rational arguments rationally compelling. If rational arguments can be made for a certain understanding of ethics, they will be compelled by them.
You not only haven’t proven that psychopaths are the ones with an additional bias,
Do you contest that psychopaths have more egotistical bias than the general population?
you’ve just taken it for granted from the start.
Yes. I thought it was something everyone knows.
rather than non-psychopaths having an egalitarian bias,
It is absurd to characterise the practice of treating everyone the same as a form of bias.
I didn’t say it was universal among all entities of all degrees of intelligence or rationality. I said there was a non-negligible probability of agents of a certain level of rationality converging on an understanding of ethics.
Where does this non-negligible probability come from though? When I’ve asked you to provide any reason to suspect it, you’ve just said that as you’re not arguing there’s a high probability, there’s no need for you to answer that.
“SR” stands for super-rational. Rational agents find rational arguments rationally compelling. If rational arguments can be made for a certain understanding of ethics, they will be compelled by them.
I have been implicitly asking all along here, what basis do we have for suspecting at all that any sort of universally rationally compelling ethical arguments exist at all?
Do you contest that psychopaths have more egotistical bias than the general population?
Yes.
It is absurd to characterise the practice of treating everyone the same as a form of bias.
Where does this non-negligible probability come from though?
Combining the probabilities of the steps of the argument.
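A minimal sketch of what “combining the probabilities of the steps” could amount to, assuming the steps are independent and using made-up numbers (neither the step count nor the credences come from the exchange above):

```python
# Illustrative sketch only: hypothetical per-step credences, not values
# taken from the discussion. If an argument has several independent steps,
# the probability that the whole chain holds is the product of the steps.
step_probabilities = [0.8, 0.7, 0.6, 0.5]

combined = 1.0
for p in step_probabilities:
    combined *= p

print(round(combined, 3))  # 0.168 -- low, but arguably non-negligible
```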
I have been implicitly asking all along here, what basis do we have for suspecting at all that any sort of universally rationally compelling ethical arguments exist at all?
There are rationally compelling arguments.
Rationality is probably universalisable, since it is based on the avoidance of biases, including those regarding who and where you are.
There is nothing about ethics that makes it unsusceptible to rational argument.
There are examples of rational argument about ethics, and of people being compelled by them.
Do you contest that psychopaths have more egotistical bias than the general population?
Yes.
That is an extraordinary claim, and the burden is on you to support it.
It is absurd to characterise the practice of treating everyone the same as a form of bias.
Why?
In the sense of “Nothing is a kind of something” or “atheism is a kind of religion”.
Rationality is probably universalisable, since it is based on the avoidance of biases, including those regarding who and where you are.
There is nothing about ethics that makes it unsusceptible to rational argument.
There are examples of rational argument about ethics, and of people being compelled by them.
Rationality may be universalizable, but that doesn’t mean ethics is.
If ethics are based on innate values extrapolated into systems of behavior according to their expected implications, then people will be susceptible to arguments regarding the expected implications of those beliefs, but not arguments regarding their innate values.
I would accept something like “if you accept that it’s bad to make sentient beings suffer, you should oppose animal abuse” can be rationally argued for, but that doesn’t mean that you can step back indefinitely and justify each premise behind it. How would you convince an entity which doesn’t already believe it that it should care about happiness or suffering at all?
That is an extraordinary claim, and the burden is on you to support it.
I would claim the reverse: that saying sociopathic people have an additional egocentric bias is an extraordinary claim, so I will ask you to support it; of course, I am quite prepared to reciprocate by supporting my own claim.
It’s much easier to subtract a heuristic from a developed mind by dysfunction than it is to add one. It is more likely as a prior that sociopaths are missing something that ordinary people possess, rather than having something that most people don’t, and that something appears to be the brain functions normally concerned with empathy. It’s not that they’re more concerned with self-interest than other people, but that they’re less concerned with other people’s interests.
Human brains are not “rationality + biases,” such that you could systematically subtract all the biases from a human brain and end up with perfect rationality. We are a bunch of cognitive adaptations, some of which are not at all in accordance with strict rationality, hacked together over our evolutionary history. So it makes little sense to judge humans with unusual neurology as being humans plus or minus additional biases, rather than being plus or minus additional functions or adaptations.
In the sense of “Nothing is a kind of something” or “atheism is a kind of religion”.
Is it a bias to treat people differently from rocks?
Now, if we’re going to categorize innate hardwired values, such as that which Clippy has for paperclips, as biases, then I would say “yes.”
I don’t think it makes sense to categorize such innate values as biases, and so I do not think that Clippy is “biased” compared to an ideally rational agent. Instrumental rationality is for pursuing agents’ innate values. But if you think it takes bias to get you from not caring about paperclips to caring about paperclips, can you explain how, with no bias, you can get from not caring about anything, to caring about something?
If there were in fact some sort of objective morality, under which some people were much more valuable than others, then an ethical system which valued all people equally would be systematically biased in favor of the less valuable.
So, I imagine the following conversation between two people (A and B):
A: It’s absurd to say ‘atheism is a kind of religion.’
B: Why?
A: Well, ‘religion’ is a word with an agreed-upon meaning, and it denotes a particular category of structures in the world, specifically those with properties X, Y, Z, etc. Atheism lacks those properties, so atheism is not a religion.
B: I agree, but that merely shows the claim is mistaken. Why is it absurd?
A: (thinks) Well, what I mean is that any mind capable of seriously considering the question ‘Is atheism a religion?’ should reach the same conclusion without significant difficulty. It’s not just mistaken, it’s obviously mistaken. And, more than that, I mean that to conclude instead that atheism is a religion is not just false, but the opposite of the truth… that is, it’s blatantly mistaken.
Is A in the dialog above capturing something like what you mean?
If so, I disagree with your claim. It may be mistaken to characterize the practice of treating everyone the same as a form of bias, but it is not obviously mistaken or blatantly mistaken. In fact, I’m not sure it’s mistaken at all, though if it is a bias, it’s one I endorse among humans in a lot of contexts.
So, terminology aside, I guess the question I’m really asking is: how would I conclude that treating everyone the same (as opposed to treating different people differently) is not actually a bias, given that this is not obvious to me?
How do you know that? Can you explain a process by which an SI-SR paperclipper could become convinced of this?
How can you tell that psychopathy is an egotistical bias rather than non-psychopathy being an empathetic bias?