I wish this viewpoint were more common, but judging from the OP’s score, it is still in the minority.
I just picked up Sam Harris’s latest book, The Moral Landscape, which is all about the idea that it is high time science invaded religion’s turf and claimed objective morality as a subject of scientific inquiry.
Perhaps the time has also come for science to reclaim theism and the related set of questions and cosmologies. The future (or perhaps even the present) is rather clearly a place where there are super-powerful beings that create beings like us and generally have total control over their created realities. It’s time we discussed this rationally.
Sam Harris is misguided at best in the major conclusions he draws about objective morality. See this blog post by Sean Carroll, which links to his previous posts on the subject.
My views on “reclaiming” theism are summed up by ata’s previous comment:
I recall that a while ago there was a brief thread where someone was arguing that phlogiston theory was actually correct, as long as you interpret it as identical to the modern scientific model of fire. I react to things like this similarly: theism/God were silly mistakes; let’s move on and not get attached to old terminology. Rehabilitating the idea of “theism” to make it refer to things like the Simulation Hypothesis seems pointless; how does lumping those concepts together with Yahweh (as far as common usage is concerned) help us think about the more plausible ones?
Sam Harris is misguided at best in the major conclusions he draws about objective morality. See this blog post by Sean Carroll, which links to his previous posts on the subject.
Have you read Less Wrong’s metaethics sequence? It and The Moral Landscape reach pretty much the same conclusions, except about the true nature of terminal values, which is a major conclusion, but only one among many.
Sean Carroll, on the other hand, gets absolutely everything wrong.
Given that the full title of the book is “The Moral Landscape: How Science Can Determine Human Values,” I think that conclusion is the major one, and certainly the controversial one. “Science can help us judge things that involve facts” and similar ideas aren’t really news to anyone who understands science. Values aren’t a certain kind of fact.
I don’t see where Sean’s conclusions are functionally different from those in the metaethics sequence. They’re presented in a much less philosophically rigorous form, because Sean is a physicist, not a philosopher (and so am I). For example, this statement of Sean’s:
But there’s no reason why we can’t be judgmental and firm in our personal convictions, even if we are honest that those convictions don’t have the same status as objective laws of nature.
Given that the full title of the book is “The Moral Landscape: How Science Can Determine Human Values,” I think that conclusion is the major one, and certainly the controversial one. “Science can help us judge things that involve facts” and similar ideas aren’t really news to anyone who understands science. Values aren’t a certain kind of fact.
To be accurate, Harris should have inserted the word “Instrumental” before “Values” in his book’s title, and left out the paragraphs where he argues that the well-being of conscious minds is the basis of morality for reasons other than that the well-being of conscious minds is the basis of morality. There would still be at least two thirds of the book left, and there would still be a huge number of people who would find it controversial, and I’m not just talking about religious fundamentalists.
I don’t see where Sean’s conclusions are functionally different from those in the metaethics sequence. They’re presented in a much less philosophically rigorous form, because Sean is a physicist, not a philosopher (and so am I). For example, this statement of Sean’s:
[...]
and this one of Eliezer’s:
[...]
seem to express the same sentiment, to me.
If you really object to Sean’s writing, take a look at Russell Blackford’s review of the book. (He is a philosopher, and a transhumanist one at that.)
The difference is huge. Eliezer and I do believe that our ‘convictions’ have the same status as objective laws of nature (although we assign lower probability to some of them, obviously).
There would still be at least two thirds of the book left, and there would still be a huge number of people who would find it controversial, and I’m not just talking about religious fundamentalists.
I wouldn’t limit “people who don’t understand science” to “religious fundamentalists,” so I don’t think we really disagree. A huge number of people find evolution to be controversial, too, but I wouldn’t give much credence to that “controversy” in a serious discussion.
The difference is huge. Eliezer (and I) do believe that our ‘convictions’ have the same status as objective laws of nature (although we assign lower probability to some of them, obviously).
The quantum numbers which an electron possesses are the same whether you’re a human or a Pebblesorter. There’s an objectively right answer, and therefore objectively wrong answers. Convictions/terminal values cannot be compared in that way.
I understand what Eliezer means when he says:
If you identify rightness with this huge computational property, then moral judgments are subjunctively objective (like math), subjectively objective (like probability), and capable of being true (like counterfactuals).
but he later says
Finally I realized that there was no foundation but humanity—no evidence pointing to even a reasonable doubt that there was anything else—and indeed I shouldn’t even want to hope for anything else—and indeed would have no moral cause to follow the dictates of a light in the sky, even if I found one.
That’s what the difference is, to me. An electron would have its quantum numbers whether or not humanity existed to discover them. 2 + 2 = 4 is true whether or not humanity is around to think it. Terminal values are higher-level, less fundamental in nature, because humanity (or other intelligent life) has to exist in order for them to exist. We can find what’s morally right based on terminal values, but we can’t find terminal values that are objectively right in the sense that they exist whether or not we do.
Careful. The quantum numbers are no more than a basis for describing an electron. I can describe a stick as spanning a distance 3 meters wide and 4 long, while a Pebblesorter describes it as being 5 meters long and 0 wide, and we can both be right. The same thing can happen when describing a quantum object.
Yes, I should have been more careful with my language. Thanks for pointing it out. Edited.
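(To spell out the arithmetic in the stick example above: the two observers are describing the same stick in rotated coordinate bases, and they agree on the basis-independent quantity, the length:
\[
\sqrt{3^2 + 4^2} = \sqrt{9 + 16} = 5 = \sqrt{5^2 + 0^2}.
\]
Neither basis is objectively more correct; only the invariant is.)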
I wouldn’t limit “people who don’t understand science” to “religious fundamentalists,” so I don’t think we really disagree. A huge number of people find evolution to be controversial, too, but I wouldn’t give much credence to that “controversy” in a serious discussion.
Okay, let me make my claim stronger, then: a huge number of people who understand science would find the truncated version of TML described above controversial: a big fraction of the people who usually call themselves moral nihilists or moral relativists.
The quantum numbers which define an electron are the same whether you’re a human or a Pebblesorter. There’s an objectively right answer, and therefore objectively wrong answers. Convictions/terminal values cannot be compared in that way.
I’m saying that there is an objectively right answer, that terminal values can be compared (in a way that is tautological in this case, but that is fundamentally the only way we can determine the truth of anything). See this comment.
Do you believe it is true that “For every natural number x, x = x”? Yes? Why do you believe that? Well, you believe it because for every natural number x, x = x. How do you compare this axiom to “For every natural number x, x != x”?
Anyway, at least one of us is misunderstanding the metaethics sequence, so this exchange is rather pointless unless we want to get into a really complex conversation about a sequence of posts that has to total at least 100,000 words, and I don’t want to. Sorry.
In quick approximation, what was this conclusion?
That terminal values are like axioms, not like theorems. That is, they’re the things without which you cannot actually ask the question, “Is this true?”
You can say or write the words “Is”, “this”, and “true” without having axioms related to that question somewhere in your mind, of course, but you can’t mean anything coherent by the sentence. Someone who asks, “Why terminal value A rather than terminal value B?” and expects (or gives) an answer other than “Because of terminal value A, obviously!”* is confused.
*That’s assuming that A really is a terminal value of the person’s moral system. It could be an instrumental value; people have been known to hold false beliefs about their own minds.
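(A minimal Lean 4 sketch of the axiom/theorem analogy above; the names are mine, for illustration only. A theorem has a justification inside the system; an axiom does not, so asking “why axiom A?” gets you nothing deeper than A itself.

```lean
-- An axiom is assumed, not derived: the system simply asserts it.
axiom A : Prop          -- plays the role of a terminal value
axiom a_holds : A       -- asserted without proof

-- A theorem is derived from the axioms: it has a justification
-- inside the system, like an instrumental value.
theorem follows_from_A : A := a_holds
```
)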
I just started reading it, and really picked it up because I needed something for the train in a hurry. In part I read the likes of Harris just to get a better understanding of what makes a popular book. As far as I’ve read into Harris’s thesis about objective morality, I see it as rather hopeless; it depends ultimately on the notion of a timeless universal human brain architecture, which is mythical even today, posthuman future aside.
Carroll’s point at the end about attempting to find the ‘objective truth’ about what is the best flavor of ice cream echoes my thoughts so far on The Moral Landscape.
The interesting part wasn’t his theory, it was the idea that the entire belief space currently held by religion is now up for grabs.
With regard to ata’s previous comment, I don’t agree at all.
Theism is not some single atomic belief. It is an entire region in belief space. You can pull out many of the sub-beliefs and reduce them to atomic binary questions which slice idea-space, such as:
Was this observable universe created by a superintelligence?
Those in the science camp used to be pretty sure the answer to that was no, but it turns out they may very well be wrong, and the theists may have guessed correctly all along (Simulation Argument).
Did superintelligences intervene in earth’s history? How do they view us from a moral/ethical standpoint? And so on . . .
These questions all have definitive answers, and with enough intelligence/knowledge/computation they are all probably answerable.
You can say “theism/God” were silly mistakes, but how do you rationalize that when we now know that true godlike entities are the likely evolutionary outcome of technological civilizations and common throughout the multiverse?
I try not to rationalize.
I don’t think we should reward correct guesses that were made for the wrong reasons (and are only correct by certain stretches of vocabulary). Talking about superintelligences is more precise and avoids vast planes of ambiguity and negative connotations, so why not just do that?
I don’t think it is any stretch of vocabulary to use the word ‘god’ to describe future superintelligences.
If the belief is correct, it can’t also be a silly mistake.
The entire idea that one must choose words carefully to avoid ‘vast planes of ambiguity and negative connotations’ is at the heart of the ‘theism as taboo’ problem.
The SA so far stands to show that the central belief of broad theism is basically correct. Let’s not split hairs on that and just admit it. If that is true however then an entire set of associated and dependent beliefs may also be correct, and a massive probability update is in order.
Avoiding the ‘negative connotations’ suggests to me a flawed process of consciously or subconsciously distancing any possible mental interpretation of the Singularity and the SA from anything that resembles theistic belief.
I suspect most people tend to do this because of belief inertia, the true difficulty of updating, and social signaling issues arising from being associated with a category of people who believe in the wrong versions of a right idea for insufficient reasons.
The SA so far stands to show that the central belief of broad theism is basically correct.
“The universe was created by an intelligence” is the central belief of deism, not theism. Whether or not the intelligence would interact with the universe, for what reasons, and to what ends, are open questions.
Also, at this point I’m more inclined to accept Tegmark’s mathematical universe description than the simulation argument.
wrong versions of a right idea
That seems oxymoronic to me.
There are superficial similarities between the simulation argument and theism, but, for example, the idea of worship/deference in the latter is a major element that the former lacks. The important question is: will using theistic terminology help with clarity and understanding for the simulation argument? The answer does not appear to be yes.
The SA so far stands to show that the central belief of broad theism is basically correct.
“The universe was created by an intelligence” is the central belief of deism, not theism. Whether or not the intelligence would interact with the universe, for what reasons, and to what ends, are open questions.
You’re right; I completely agree with the above in terms of the theism/deism distinction. The SA supports deism while allowing for theism but leaving it as an open question. My term “broad theism” was meant to include both theism and deism; perhaps that category already has a name, but I’m not sure.
Also, at this point I’m more inclined to accept Tegmark’s mathematical universe description than the simulation argument.
I find the SA has much stronger support—Tegmark requires the additional belief that other physical universes exist for which we can never possibly find evidence for or against.
There are superficial similarities between the simulation argument and theism, but, for example, the idea of worship/deference in the latter is a major element that the former lacks.
Some fraction of simulations probably have creators who desire some form of worship/deference; the SA turns this into a question of frequency or probability. I of course expect that worship-desiring creators are highly unlikely. Regardless, worship is not a defining characteristic of theism.
The important question is: will using theistic terminology help with clarity and understanding for the simulation argument?
I see it as the other way around. The SA gives us a reasonable structure within which to (re)-evaluate theism.
I find the SA has much stronger support—Tegmark requires the additional belief that other physical universes exist for which we can never possibly find evidence for or against.
How could we find evidence of the universe simulating our own, if we are in a simulation? They’re both logical arguments, not empirical ones.
Regardless, worship is not a defining characteristic of theism.
The SA gives us a reasonable structure within which to (re)-evaluate theism.
I really don’t see what is so desirable about theism that we ought to define it to line up near-perfectly with the simulation argument in order to use it and related terminology. Any rhetorical scaffolding for dealing with Creators that theists have built up over the centuries is dripping with the negative connotations I referenced earlier. What net advantage do we gain by using it?
How could we find evidence of the universe simulating our own, if we are in a simulation? They’re both logical arguments, not empirical ones.
If, say, in 2080 we have created a number of high-fidelity historical recreations of 2010, with billions of sentient virtual humans, which are (from their perspective) nearly indistinguishable from our original 2010, then much of the uncertainty in the argument is eliminated.
(some uncertainty always remains, of course)
The other distinct possibility is that our simulation reaches some endpoint and possible re-integration, at which point it would be obvious.
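(For reference, Bostrom’s simulation argument makes the earlier “question of frequency or probability” precise with a single fraction; the notation follows his 2003 paper. If \(f_p\) is the fraction of human-level civilizations that reach a posthuman stage and run ancestor simulations, and \(\bar{N}\) is the average number of such simulations each runs, then the fraction of observers with human-type experiences who are simulated is
\[
f_{\mathrm{sim}} = \frac{f_p \bar{N}}{f_p \bar{N} + 1}.
\]
The 2080 scenario above amounts to directly observing that \(f_p \bar{N}\) is large, which pushes \(f_{\mathrm{sim}}\) toward 1.)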
tl;dr—If you’re going to equate morality with taste, understand that when we measure either of the two, taking agents into account is a factor we can’t leave out.
I’ll be upfront about not having read Sam Harris’s book yet, though I did read the blog review to get a general idea. Nonetheless, I take issue with the following point:
Carroll’s point at the end about attempting to find the ‘objective truth’ about what is the best flavor of ice cream echoes my thoughts so far on The Moral Landscape.
I’ve found that an objective truth about the best flavor of ice cream can be found if one figures out which disguised query they’re after. (Am I looking for “If I had to guess, what would random person z’s favorite flavor of ice cream be, with no other information?” or am I looking for something else?)
This attempt to make morality too subjective to measure by relating it to taste has always bothered me, because people always ignore a main factor here: agents should be part of our computation. When I want to know what flavor of ice cream is best, I take into account people’s preferences. If I want to know what would be the most moral action, I need to take into account its effects on people (or on myself, should I be a virtue ethicist; or how it aligns with my rules, should I be a deontologist). Admittedly the latter is tougher than the former, but that doesn’t mean we have no hope of dealing with it objectively. It just means we have to do the best we can with what we’re given, which may mean a lot of individual subjectivity.
In his book Stumbling on Happiness, Daniel Gilbert writes about studying the subjective as objectively as possible when he decides on the three premises for understanding happiness:
1] Using imperfect tools sucks, but it’s better than no tools.
2] An honest, real-time insider view is going to be more accurate than our current best outside views.
3] Abuse the law of large numbers to get around the imperfections of 1] and 2] (a.k.a. measure often)
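(Returning to the ice-cream question above, here is a minimal Python sketch of what “taking agents into the computation” can look like; the survey data and weights are invented for illustration. The “best” flavor is just the answer to the disguised query “which flavor maximizes weighted preference among the agents I care about?”

```python
from collections import Counter

# Invented survey data: each agent reports a favorite flavor.
preferences = {
    "alice": "chocolate",
    "bob": "vanilla",
    "carol": "chocolate",
    "dave": "strawberry",
}

# Optional per-agent weights; set non-human or irrelevant agents
# to 0.0 to exclude them from the computation.
weights = {"alice": 1.0, "bob": 1.0, "carol": 1.0, "dave": 1.0}

def best_flavor(prefs, weights):
    """Answer the disguised query: which flavor maximizes
    weighted preference among the agents we care about?"""
    scores = Counter()
    for agent, flavor in prefs.items():
        scores[flavor] += weights.get(agent, 0.0)
    return scores.most_common(1)[0][0]

print(best_flavor(preferences, weights))  # -> chocolate
```
)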
This attempt to make morality too subjective to measure by relating it to taste has always bothered me, because people always ignore a main factor here: agents should be part of our computation.
I perhaps should have elaborated more, or thought through my objection to Harris more clearly, but in essence I believe the problem is not that of finding an objective morality given people’s preferences; it’s objectively determining what people’s preferences should be.
There is an objective best ice cream flavor given a certain person’s mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor?
My attempt at a universal objective morality might take some maximization of value given our current preferences and then evolve it into the future, maximizing over some time window. Perhaps you need to extend that time window to the very end. This would lead to some form of cosmism—directing everything towards some very long term universal goal.
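(One minimal way to formalize the “time window” idea; the notation here is mine, not the commenter’s. Choose the policy \(a\) that maximizes aggregate preference satisfaction \(V\) over a horizon \(T\), and push \(T\) out toward “the very end”:
\[
a^{*} = \arg\max_{a} \int_{0}^{T} V(a, t)\,dt, \qquad T \to \infty.
\]
)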
This post was clearer than your original, and I think we agree more here than we did before, which may partially be an issue of communication styles/methods/etc.
I believe the problem is not that of finding an objective morality given people’s preferences; it’s objectively determining what people’s preferences should be.
This I agree with, but it’s more for the gut response of “I don’t trust people to determine other people’s values.” I wonder if the latter could be handled objectively, but I’m not sure I’d trust humans to do it.
There is an objective best ice cream flavor given a certain person’s mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor?
My reflex response to this question was “No,” followed by “Wait, wouldn’t I weight human minds much more heavily than raccoon minds if I were figuring out human preferences?” I then thought that through and latched onto: agents still matter; if I’m trying to model “best ice cream flavor to humans,” I give the rough category of human-minds more weight than other minds. Heck, I hardly have a reason to include such other minds at all, and instrumentally they would likely be detrimental. So in that particular generalization we disagree, but I’m getting the feeling we agree here more than I had guessed.
This I agree with, but it’s more for the gut response of “I don’t trust people to determine other people’s values.” I wonder if the latter could be handled objectively, but I’m not sure I’d trust humans to do it.
We already have to deal with this when we raise children. Western societies generally favor granting individuals great leeway in modifying their preferences and shaping the preferences of their children. We also place much less value on the children’s immediate preferences. But even this freedom is not absolute.
I wish this viewpoint were more common, but judging from the OP’s score, it is still in the minority.
Hard to say; my sense is that those of us endorsing/sympathetic to/tolerant of Will’s position were pretty persuasive in this thread. The OP’s score went up from where it was when I first read the post.
I just picked up Sam Harris’s latest book, The Moral Landscape, which is all about the idea that it is high time science invaded religion’s turf and claimed objective morality as a subject of scientific inquiry.
I’m in complete agreement with Dreaded_Anomaly on this. Harris is excellent on the neurobiology of religion, as an anti-apologist, and as a commentator on the status of atheism as a public force. But he is way out of his depth as a moral philosopher. Carroll’s reaction is pretty much dead on. Even by the standards of the ethical realists, Harris’s arguments just aren’t any good. As philosophy, they’d be unlikely to meet the standards for publication.
Now, once you accept certain controversial things about morality, much of what Harris says does follow. And from what I’ve seen, Harris says some interesting things on that score. But it’s hard to get excited when the thesis the book got publicized with is so flawed.