I disagree. Rabbits have the “should” in their algorithm: they search for plans that “could” be executed and converge on the plans satisfying their sense of “good”. This is a process similar to the one operating in humans or fruit flies and very unlike the one operating in rocks. The main difference is that it seems difficult to persuade a rabbit of anything, but it’s equally difficult to persuade a drunk Vasya from the 6th floor that flooding the neighbors is really bad. Animals (and even fruit flies) can adapt, can change their behavior in the same situation as a result of being exposed to a training context, can start selecting different plans in the same circumstances. They don’t follow the same rituals as humans do, and they don’t exchange moral arguments at the same level of explicitness as humans, but as cognitive algorithms go, they have all the details of “should”. Not all humans can be persuaded by valid moral arguments either; some would need really deep reconfiguration before such arguments started working, in ways not yet accessible to modern medicine and equally unlike the normal rituals of moral persuasion. What would a more intelligent, more informed rabbit want? Would rabbits uploaded into a rabbit-Friendly environment experience moral progress?
Reconstructing the part of the original argument that I think is valid: I agree that rabbits don’t possess the faculties for moral argument in the same sense as humans do, but that is a different phenomenon from the fundamental “should” of cognitive algorithms. Discussing this difference requires understanding the process of human-level moral argument, not just the process of goal-directed action or the process of moral progress. Behavior changes in different ways, and I’m not sure there is a meaningful threshold at which adaptation becomes moral progress; there might be.
This is cogent and forceful, but still wrong, I think. There’s something to morality beyond the presence of a planning algorithm. I can’t currently imagine what that might be, though, so maybe you’re right that the difference is one of degree and not kind.
I think part of the confusion is that Eliezer is distinguishing morality as a particular aspect of human decision-making, while a lot of the comments seem to want to count any decision-making criterion as a kind of generalized morality.
Morality may just be a deceptively simple word that covers extremely complex aspects of how humans choose, and justify, their actions.