If I had to pick between the two labels ‘moral realism’ and ‘moral anti-realism’, I would definitely choose realism.
I am not sure how to reply to “what is the meaning of moral facts”: it seems too philosophical, in the sense that I don’t see what you want to know in practice. Regarding the last question: I reason about ethics and morality using cognitive skills similar to the ones I use to know and reason about other things in the world. This paragraph might help:
It also helps explain how we get to discriminate between goals such as increasing world happiness and increasing world suffering, mentioned in the introduction. From our frequent experiences of pleasure and pain, we categorise many things as ‘good (or bad) for me’; then, through a mix of empathy, generalisation, and reflection, we get to the concept of ‘good (or bad) for others’, which comes up in our minds so often that the difference between the two goals strikes us as evident and influences our behaviour (towards increasing world happiness rather than world suffering, hopefully).
I do not have a clear idea yet of how this happens algorithmically, but an important factor seems to be that, in the human mind, goals and actions are not completely separate, and neither are action selection and goal selection. When we think about what to do, sometimes we do fix a goal and plan only for that, but other times the question becomes about what is worth doing in general, what is best, what is valuable: instead of fixing a goal and choosing an action, it’s like we are choosing between goals.
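To make the contrast concrete, here is a minimal toy sketch in Python (all goals, actions, and scores are invented purely for illustration; this is not a claim about how the mind actually implements it) of the difference between planning for a fixed goal and choosing over goals and actions jointly:

```python
# Toy sketch only: invented goals, actions, and scores, purely to illustrate
# the contrast between "fix a goal, then pick an action" and "evaluate
# (goal, action) pairs jointly".

# Hypothetical candidate goals with how valuable they currently seem.
goals = {"help_friend": 0.9, "watch_tv": 0.3}

# Hypothetical (goal, action) pairs with how well each action serves the goal.
actions = {
    ("help_friend", "call_friend"): 0.8,
    ("help_friend", "do_nothing"): 0.1,
    ("watch_tv", "turn_on_tv"): 0.9,
    ("watch_tv", "do_nothing"): 0.2,
}

def plan_for_fixed_goal(goal):
    """Classic planning: the goal is given, only the action is chosen."""
    options = {a: v for (g, a), v in actions.items() if g == goal}
    return max(options, key=options.get)

def choose_goal_and_action():
    """Joint selection: goals themselves are up for evaluation,
    so the choice ranges over (goal, action) pairs."""
    scored = {(g, a): goals[g] * actions[(g, a)] for (g, a) in actions}
    return max(scored, key=scored.get)

print(plan_for_fixed_goal("watch_tv"))  # -> 'turn_on_tv'
print(choose_goal_and_action())         # -> ('help_friend', 'call_friend')
```

In the first function the only question is “which action best serves this goal?”; in the second, what is worth pursuing is itself part of the choice.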
I meant the first question in a very pragmatic way: what is it that you are trying to say when you say that something is good? What information does it represent?
It would be clearer by analogy with factual claims: we can do lots of philosophy about the exact meaning of saying that I have a dog, but in the end we share an objective reality in which there are real particles (or a wave function approximately decomposable into particles, or whatever) organized in patterns that give rise to patterns of interaction with our senses, which we learn to associate with the word “dog”. That latent shared reality ultimately allows us to talk about dogs, check whether there is a dog in my house, and usually agree about the result. Every reflection and generalization that we do is ultimately about that, and can achieve something meaningful because of that.
I do not see the analogous story for moral reflection.
we share an objective reality in which there are real particles (or a wave function approximately decomposable into particles, or whatever) organized in patterns that give rise to patterns of interaction with our senses, which we learn to associate with the word “dog”. That latent shared reality ultimately allows us to talk about dogs, check whether there is a dog in my house, and usually agree about the result.
Apart from the phrase ‘check whether there is a dog in my house’, it seems fine to me to replace the word ‘dog’ with ‘good’ or ‘bad’ in the above paragraph. Agreement might be harder to reach, but that doesn’t mean finding common ground is impossible.
For example, some researchers classify emotions according to valence, i.e. whether an emotion is an overall good or bad experience for the experiencer, and in the future we might be able to find a map from brain states to whether a person is feeling good or bad. In this sense of good and bad, I’m pretty sure that moral philosophers who argue for the maximisation of bad feelings for the largest number of experiencers are a very small minority. In other words, we agree that maximising negative valence on a large scale is not worth doing.
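As a purely hypothetical sketch of what such a map could look like computationally (synthetic data, invented ‘brain state’ features, assuming NumPy and scikit-learn are available; not a claim about real neuroscience), finding it could be framed as ordinary supervised learning from labelled examples:

```python
# Illustrative sketch only: synthetic "brain state" features and synthetic
# valence labels, standing in for data we do not actually have.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical brain-state feature vectors (e.g. activity in a few regions)
# and self-reported valence labels: 1 = feeling good, 0 = feeling bad.
n = 200
features = rng.normal(size=(n, 5))
valence = (features[:, 0] - 0.5 * features[:, 1]
           + rng.normal(scale=0.3, size=n) > 0).astype(int)

# Learn the (toy) map from brain states to valence.
model = LogisticRegression().fit(features, valence)

# Query the map for a new, unseen brain state.
new_state = rng.normal(size=(1, 5))
print("feeling good" if model.predict(new_state)[0] == 1 else "feeling bad")
```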
(Personally, however, I am not a fan of arguments based on agreement or disagreement, especially in the moral domain. Many people in the past used to think that slavery was ok: does it mean slavery was good and right in the past, while now it is bad and wrong? No, I’d say that normally we use the words good/bad/right/wrong in a different way, to mean something else; similarly, we don’t normally use the word ‘dog’ to mean e.g. ‘wolf’. From a different domain: there is disagreement in modern physics about some aspects of quantum mechanics. Does it mean quantum mechanics is fake / not real / a matter of subjective opinion? I don’t think so)
Let me clarify that I don’t argue from agreement per se. I care about the underlying epistemic mechanism of agreement, which I claim is also the mechanism of correctness. My point is that I don’t see a similar epistemic mechanism in the case of morality.
Of course, emotions are verifiable states of brains. And the same goes for preferring actions that would lead to certain emotions and not others. It is a verifiable fact that you like chocolate. It is a contingent property of my brain that I care, but I don’t see what sort of argument that it is correct for me to care could, even in principle, be inherently compelling.
I don’t know what passes your test of ‘in principle be an inherently compelling argument’. It’s a toy example, but here are some steps that to me seem logical / rational / coherent / right / sensible / correct:
1. X is a state of mind that feels bad to whatever mind experiences it (this is the starting assumption; it seems we agree that such an X exists, or at least something similar to X)
2. X, experienced on a large scale by many minds, is bad
3. Causing X on a large scale is bad
4. When considering what to do, I’ll discard actions that cause X, and choose other options instead.
Now, some people will object and say that there are holes in this chain of reasoning, i.e. that 2 doesn’t logically follow from 1, or 3 doesn’t follow from 2, or 4 doesn’t follow from 3. For the sake of this discussion, let’s say that you object to the step from 1 to 2. Then, what about this replacement:
1. X is a state of mind that feels bad to whatever mind experiences it [identical to original 1]
2. X, experienced on a large scale by many minds, is good [replaced ‘bad’ with ‘good’]
Does this step from 1 to 2 seem, to you (our hypothetical objector), as logical / rational / coherent / right / sensible / correct as the original step from 1 to 2? Could I replace ‘bad’ with basically anything, without the correctness changing at all as a result?
My point is that, to many reflecting minds, the replacement seems less logical / rational / coherent / right / sensible / correct than the original step. And this is what I care about for my research: I want an AI that reflects in a similar way, an AI to which the original steps do seem rational and sensible, while replacements like the one I gave do not.
That was helpful for my understanding of your position. My main problem with the whole thing, though, is the use of the word “bad”. I think it should be taboo, at least until we establish a shared meaning.
Specifically, I think that most observers will find the first argument more logical than the second because of a fallacy in the use of the word “bad”. I think we learn that word in a way that is deeply entangled with our reward mechanisms, to the point that it is mostly just a pointer to negative reward: things that we want to avoid, things that made our parents angry… In my view, the argument is then basically:
I want to avoid my suffering, and more generally person P wants to avoid person P’s suffering. Therefore suffering is “to be avoided” in general, therefore suffering is “a thing my parents will punish me for”, therefore avoid creating suffering.
When written that way, it doesn’t seem more logical than its opposite.
To a kid, ‘bad things’ and ‘things my parents don’t want me to do’ overlap to a large degree. This is not true for many adults. This is probably why the step
suffering is “to be avoided” in general, therefore suffering is “a thing my parents will punish me for”
seems weak.
Overall, what is the intention behind your comments? Are you trying to understand my position even better, and if so, why? Are you interested in funding this kind of research; or are you looking for opportunities to change your mind; or are you trying to change my mind?
Since I became reasonably sure that I understand your position and reasoning: mostly trying to change your mind.