I view the Romans as PETA members would view me. I have justifications for my actions, as I’m sure Romans had for their actions. That’s just the nature of the human condition.
What would it mean for the PETA member to be right? Does it just mean that the PETA member has sympathy for chickens, whereas you and I do not? Or is there something testable going on here?
It doesn’t seem to me that the differences among the PETA members, us, and the Romans are at all unclear. They are differences in the parties’ moral universe, so to speak: the PETA member sees a chicken as morally significant; you and I see a Scythian, Judean, or Gaul as morally significant; and the Roman sees only another Roman as morally significant. (I exaggerate slightly.)
A great deal of moral progress has been made through the expansion of the morally significant, that is, through the recognition of other tribes (and kinds of beings) as relevant objects of moral concern. Richard Rorty has argued that it is this sympathy or moral sentiment — and not the knowledge of moral facts — which makes the practical difference in causing a person to act morally; and that this in turn depends on living in a world where you can expect the same from others.
This is an empirical prediction: Rorty claims that expanding people’s moral sympathies to include more others, and giving them a world in which they can expect others to do the same in turn, is a more effective way of producing good moral consequences than moral philosophizing is. I wonder what sort of experiment would provide evidence one way or the other.
That’s an interesting link to Rorty; I’ll have to read it again in some more detail. I really appreciated this quote:
We have come to see that the only lesson of either history or anthropology is our extraordinary malleability. We are coming to think of ourselves as the flexible, protean, self-shaping animal rather than as the rational animal or the cruel animal.
That really seems to hit it for me. That flexibility, the sense that we can step beyond being warlike, or even calculating, seems to be critical to what morals are all about. I don’t want to make it sound like I’m against a generally moral culture, where happiness is optimized (or some other value I like personally). I just don’t think moral philosophizing would get us there. I’ll have to read up more on the moral sentiments approach. I have read some of Rorty’s papers, but not his major works. I would be interested to see these ideas of his paired with meme theory. Describing moral sentiment as a meme that enters a positive feedback loop, in which groups that carry it outlast groups that don’t, seems very plausible to me.
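That feedback loop is easy enough to caricature in code. Here is a minimal toy model, entirely my own illustration with made-up parameters and a made-up survival rule (nothing Rorty or meme theory actually specifies): groups whose members carry a “moral sentiment” meme are more likely to survive a round and be copied, and the meme also spreads by imitation inside surviving groups.

```python
# Toy group-selection sketch (illustrative only): groups with more carriers of the
# "moral sentiment" meme survive more often; survivors are copied to refill the
# population, and non-carriers inside surviving groups occasionally imitate carriers.
import random

random.seed(0)

N_GROUPS, GROUP_SIZE, ROUNDS = 20, 10, 50

# Each group is a list of booleans: True means the member carries the meme.
groups = [[random.random() < 0.2 for _ in range(GROUP_SIZE)]
          for _ in range(N_GROUPS)]

for _ in range(ROUNDS):
    survivors = []
    for g in groups:
        coop = sum(g) / len(g)                  # fraction of meme carriers in the group
        if random.random() < 0.5 + 0.5 * coop:  # more carriers -> more likely to survive
            # Inside a surviving group, non-carriers occasionally imitate carriers.
            g = [m or (random.random() < 0.1 * coop) for m in g]
            survivors.append(g)
    if survivors:  # surviving groups are copied to restore the population size
        groups = [list(random.choice(survivors)) for _ in range(N_GROUPS)]

carriers = sum(sum(g) for g in groups)
print(f"meme carriers after {ROUNDS} rounds: {carriers}/{N_GROUPS * GROUP_SIZE}")
```

Nothing this crude could distinguish Rorty’s claim from its rivals, of course; it only shows the kind of dynamics an actual experiment or model would have to measure.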
I’ll have to think more about your PETA question. I think it goes beyond sympathy. I don’t know how to test the positions though. I don’t think viewing chickens as being equally morally significant would lead to a much better world (for humans—chickens are a different matter). Even with the moral sentiment view, I don’t see how each side could come to a clear resolution.
I do wonder what would constitute “good moral consequences” in this context. If it’s being defined as the practical extension of goodwill, or of its tangible signs, then the argument seems very nearly tautological.
Not to put too fine a point on it, but part of Rorty’s argument seems to be that if you don’t already have a reasonably good sense for what “good moral consequences” would be, then you’re part of the problem. Rorty claims that philosophical ethics has been largely concerned with explaining to “psychopaths” like Thrasymachus and Callicles (the characters in Plato’s Republic and Gorgias who argue that might makes right) why they would do better to be moral; but that the only way for morality to win out in the real world is to avoid bringing agents into existence that lack moral sentiment:
It would have been better if Plato had decided, as Aristotle was to decide, that there was nothing much to be done with people like Thrasymachus and Callicles, and that the problem was how to avoid having children who would be like Thrasymachus and Callicles.
As far as I can tell, this fits perfectly into the FAI (Friendly AI) project, which is concerned with bringing into existence superhuman AI that does have a sense of “good moral consequences” before someone else creates one that doesn’t.
You can’t write an algorithm based on “if you don’t get it, you’re part of the problem”. You can get away with telling that to your children, sort of, but only because children are very good at synthesizing behavioral rules from contextual cues. Rorty’s advice might be useful as a practical guide to making moral humans, but it only masks the underlying issue: if the only way for morality to win in the real world is to avoid bringing amoral agents into existence, then there must already exist a well-bounded set of moral utility functions for agents to follow. It doesn’t tell us much about what such a set might contain, giving only a loose suggestion that good morality functions tend to be relatively subject-independent.
Now, to encode a member of such a set into an AI (which may or may not end up being Friendly depending on how well those functions generalize outside the human problem domain), you need a formalization of it. To teach one implicitly, you need a formalization of something analogous (but not necessarily identical) to the social intuitions that human children use to derive their morals, which is most likely a harder problem. And if you have such a formalization, explaining an instance of moral behavior to a rational sociopath is as easy as running it on particular inputs.
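To make that last step concrete, here is a deliberately crude sketch of the two properties in play; the welfare numbers and the sum-of-welfare aggregation rule are arbitrary assumptions of mine, not a proposal for an actual moral utility function. A formalized utility function is subject-independent if relabelling who is who leaves its value unchanged, and “explaining” a moral choice then amounts to evaluating the function on particular outcomes.

```python
# Crude stand-in for "a formalized moral utility function" (illustrative only).
from itertools import permutations

def utility(outcome: dict[str, float]) -> float:
    """Aggregate welfare without caring whose welfare it is (here, just the sum)."""
    return sum(outcome.values())

# Two candidate outcomes of some action, written as welfare assignments to subjects.
keep_promise = {"alice": 5.0, "bob": 5.0}
break_promise = {"alice": 9.0, "bob": -2.0}

# "Explaining" the moral choice to a rational agent = exhibiting the computed utilities.
print(utility(keep_promise), utility(break_promise))  # 10.0 7.0

# Subject-independence: permuting which subject receives which welfare
# leaves the utility unchanged.
values = list(keep_promise.values())
assert all(utility(dict(zip(keep_promise, p))) == utility(keep_promise)
           for p in permutations(values))
```

Whether anything in the neighborhood of this generalizes outside the human problem domain is, as noted above, exactly the open question.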
Presented with an irrational sociopath you’re out of luck, but I can’t think of any ethical systems that don’t have that problem.