1: Harris compares pursuing moral goals to pursuing health and claims they are fundamentally similar (i.e. both part of the basic purview of science). This is what I’m disputing here.
2: See the reply I’ve already made, both here and in my other argument.
3: Harris could claim that a question of the worth of animals could be solved by checking the brains of humans, but this raises the question of why human brains are the only ones taken into account. In addition, human brains are likely to contradict one another on the subject; one could take some kind of average, but what makes that average authoritative? (A toy sketch of this problem follows the list.)
4: Harris claims all morality is about the well-being of conscious creatures. That’s what I’m objecting to here.
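To make the averaging worry in 3) concrete, here is a toy sketch (the 0–1 “moral weight” scale and the sample judgments are hypothetical, not anything Harris proposes): averaging deeply split brain-readings yields a verdict that no individual brain actually holds, and the verdict moves with whoever happens to be sampled.

```python
# Hypothetical brain-readings of "how much does an animal's suffering
# count, relative to a human's?" on a 0-1 scale.
judgments = [1.0, 1.0, 0.0, 0.0, 0.0]  # a deeply split population

mean = sum(judgments) / len(judgments)
print(mean)  # 0.4 -- an "average morality" that no single brain holds

# Sample a different population and the answer simply moves:
judgments_elsewhere = [1.0, 0.0, 0.0, 0.0, 0.0]
print(sum(judgments_elsewhere) / len(judgments_elsewhere))  # 0.2
```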
I think 3) is your strongest point; may I try to expand on it?
I wonder, what is Sam’s response to utility monsters, small chances of large effects, and torture vs. dust specks? In saying that science can answer moral questions by examining the well-being of humans, isn’t he making the unspoken assumption that there is a way to combine the diverse “well-being-values” of different humans into one single number by which to order outcomes, and, more importantly, that science can find this method? Then the question remains: how shall science do this? Is this function to be found anywhere in nature? Perhaps in the brains of conscious beings? What if these beings hold different views on what is “fair”?
I simply can’t imagine what one would measure to determine the “correct” distribution of happiness, although that may just be a failure of my imagination.
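To make the aggregation problem concrete, here is a minimal sketch (the numbers and the three candidate rules are my own illustrative assumptions, not anything Harris proposes): the same per-person well-being measurements get ordered differently depending on which aggregation rule is chosen, so measurement alone doesn’t settle a utility-monster case.

```python
# A toy comparison of three candidate rules for collapsing per-person
# well-being scores into one number. All figures are made up.

def total(wellbeing):
    """Classical total utilitarianism: sum everyone's well-being."""
    return sum(wellbeing)

def average(wellbeing):
    """Average utilitarianism: mean well-being per person."""
    return sum(wellbeing) / len(wellbeing)

def maximin(wellbeing):
    """Rawls-style rule: judge a world by its worst-off member."""
    return min(wellbeing)

# Utility-monster-style choice: one agent gains enormously while
# everyone else ends up slightly worse off.
status_quo    = [10, 10, 10, 10]
monster_world = [1000, 1, 1, 1]

for rule in (total, average, maximin):
    better = "monster_world" if rule(monster_world) > rule(status_quo) else "status_quo"
    print(f"{rule.__name__}: prefers {better}")

# total and average prefer the monster world; maximin prefers the status
# quo. The brain-scans fix the inputs, but not the rule, and hence not
# the verdict.
```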
Sam would be subject to all the usual objections to utilitarianism, altruism, and moral objectivism available in the existing literature. He has justified not addressing that literature with a glib comment that he was sparing people from boredom. As I said before, he is fundamentally unserious and even dishonest in arguing his case.
He should have appointed a separate judge for his contest. If he’s just going to brush off legitimate criticism, this whole contest doesn’t make sense.
this raises the question of why human brains are the only ones taken into account.
Harris has decided to define “good” as “that thing in human brains which typically corresponds to the word good”.
Under this definition, an agent using an orange/blue compass rather than a black/white one doesn’t have a different morality; rather, it’s simply unconcerned with moral questions. “Good” and “moral” are defined as the human-specific-value-thingies. That is why only human brains are taken into account: because they are embedded in his definition of “good”.
Yes, but he’s effectively ignoring a significant number of ethical questions about “why humans?” In addition, the principle that all humans are weighted about equally appears to be significant in his morality.