Where does this non-negligible probability come from though?
Combining the probabilities of the steps of the argument.
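(A purely illustrative calculation, with assumed figures rather than anyone's actual estimates: treating the four steps below as a conjunction and giving each a probability of 0.7,

    0.7 × 0.7 × 0.7 × 0.7 ≈ 0.24,

which is small but not negligible.)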
I have been implicitly asking all along here: what basis do we have for suspecting that any sort of universally rationally compelling ethical argument exists at all?
There are rationally compelling arguments.
Rationality is probably universalisable, since it is based on the avoidance of biases, including those regarding who and where you are.
There is nothing about ethics that makes it unsusceptible to rational argument.
There are examples of rational argument about ethics, and of people being compelled by them.
Do you contest that psychopaths have more egotistical bias than the general population?
Yes.
That is an extraordinary claim, and the burden is on you to support it.
It is absurd to characterise the practice of treating everyone the same as a form of bias.
Why?
In the sense of “Nothing is a kind of something” or “atheism is a kind of religion”.
Rationality is probably universalisable, since it is based on the avoidance of biases, including those regarding who and where you are.
There is nothing about ethics that makes it unsusceptible to rational argument.
There are examples of rational argument about ethics, and of people being compelled by them.
Rationality may be universalizable, but that doesn’t mean ethics is.
If ethics is based on innate values extrapolated into systems of behavior according to their expected implications, then people will be susceptible to arguments regarding the expected implications of those values, but not to arguments regarding the innate values themselves.
I would accept that something like “if you accept that it’s bad to make sentient beings suffer, you should oppose animal abuse” can be rationally argued for, but that doesn’t mean you can step back indefinitely and justify each premise behind it. How would you convince an entity which doesn’t already believe it that it should care about happiness or suffering at all?
That is an extraordinary claim, and the burden is on you to support it.
I would claim the reverse: that saying sociopathic people have an additional egocentric bias is the extraordinary claim, and so I will ask you to support it. Of course, I am quite prepared to reciprocate by supporting my own claim.
It’s much easier to subtract a heuristic from a developed mind by dysfunction than it is to add one. It is more likely, as a prior, that sociopaths are missing something that ordinary people possess rather than having something that most people don’t, and that something appears to be the brain functions normally concerned with empathy. It’s not that they’re more concerned with self-interest than other people, but that they’re less concerned with other people’s interests.
Human brains are not “rationality + biases,” such that you could systematically subtract all the biases from a human brain and end up with perfect rationality. We are a bunch of cognitive adaptations, some of which are not at all in accordance with strict rationality, hacked together over our evolutionary history. So it makes little sense to judge humans with unusual neurology as being humans plus or minus additional biases, rather than plus or minus additional functions or adaptations.
In the sense of “Nothing is a kind of something” or “atheism is a kind of religion”.
Is it a bias to treat people differently from rocks?
Now, if we’re going to categorize innate hardwired values, such as that which Clippy has for paperclips, as biases, then I would say “yes.”
I don’t think it makes sense to categorize such innate values as biases, and so I do not think that Clippy is “biased” compared to an ideally rational agent. Instrumental rationality is for pursuing agents’ innate values. But if you think it takes bias to get you from not caring about paperclips to caring about paperclips, can you explain how, with no bias, you can get from not caring about anything, to caring about something?
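A minimal sketch of that point, assuming a crude model of instrumental rationality as “pick the action your own values score highest”; the function name, agents, and numbers below are all hypothetical illustrations, not anything from the discussion:

    # Hypothetical sketch: the decision procedure is value-neutral; only the
    # supplied utility function determines what gets chosen.
    def rational_choice(actions, utility):
        """Pick whichever action the agent's own values score highest."""
        return max(actions, key=utility)

    actions = ["make_paperclips", "help_humans", "do_nothing"]

    # Two agents share the identical procedure; only their innate values differ.
    clippy_values = {"make_paperclips": 10, "help_humans": 0, "do_nothing": 0}
    human_values  = {"make_paperclips": 0, "help_humans": 10, "do_nothing": 1}

    print(rational_choice(actions, clippy_values.get))  # make_paperclips
    print(rational_choice(actions, human_values.get))   # help_humans

    # With no values, every action scores the same; max() just returns the
    # first tied item. "Caring about something" does not fall out of the
    # procedure itself.
    print(rational_choice(actions, {a: 0 for a in actions}.get))

On this toy picture, whether valuing paperclips is a “bias” is a question about the values, not about the rationality of the procedure that pursues them.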
If there were in fact some sort of objective morality, under which some people were much more valuable than others, then an ethical system which valued all people equally would be systematically biased in favor of the less valuable.