Your example seems to provide an instance where S is false. You just assert that it isn’t like that:
It seems clear that whether the cube actually exists cannot possibly be relevant to the utility of P
Why?
P certainly desires the cube to exist, but I believe that it cannot be part of P’s utility function.
Again, why? You haven’t really said anything about why you’d think that...
Also, it seems pretty clear that things outside of your head can matter. Suppose an evil demon offers you a choice: either
your family will be tortured, but you will think that they’re fine, or
your family will be fine, but you will think that they’re being tortured.
And of course, all memory of the encounter with the demon will be erased.
I think most people would take the second option, and gladly! That seems pretty strong prima facie evidence that stuff outside people’s heads matters to them. So I guess I’d disagree with S. Oh, and I’m (sort of) an anti-realist.
In your example, I agree that almost everyone would choose the second option, but my point is that they will be worse off because they make that choice. It is an act of altruism, not an act which will increase their own utility. (Possibly the horror they would experience in making choice 1 would outweigh their future suffering under choice 2, but after the choice is made they are definitely worse off having made the second choice.)
I say that the cube cannot be part of P’s utility function, because whether the cube exists in this example is completely decoupled from whether P believes the cube exists, since P trusts the oracle completely, and the oracle is free to give false data about this particular fact. P’s belief about the cube is part of the utility function, but not the actual fact of whether the cube exists.
Well, again, you’re kind of just asserting your claim. Prima facie, it seems pretty plausible that whatever function evaluates how well off a person is could take into account things outside of their mental states. Looking after one’s family isn’t often thought of as especially altruistic, because it’s something that usually matters very deeply to the person, even bracketing morality.
Your second paragraph is genuinely circular: the whole question was whether your example shows that S is false, but you appeal to the fact that
whether the cube exists in this example is completely decoupled from whether P believes the cube exists
This is only relevant if we already think S is true. You can’t use it to support that very claim!
Look, if it helps, you can define utility*, which is utility that doesn’t depend on anything outside the mental state of the agent, as opposed to utility**, which does. Then you can get frustrated at all these silly people who seem to mistakenly think they want to maximize their utility** instead of their utility*. Or just perhaps they actually do know what they want? Utility* is a perfectly fine concept, it’s just not one that is actually much use in relation to human decision-making.
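To make the distinction concrete, here’s a toy sketch in Python (the names and payoff numbers are purely illustrative inventions of mine, not part of the original thought experiment):

```python
# Toy sketch of the utility*/utility** distinction (illustrative only;
# the field names and payoffs are hypothetical).

from dataclasses import dataclass

@dataclass
class Situation:
    believes_cube_exists: bool  # part of P's mental state
    cube_exists: bool           # a fact about the world outside P's head

def utility_star(s: Situation) -> float:
    """utility*: depends only on the agent's mental state."""
    return 1.0 if s.believes_cube_exists else 0.0

def utility_double_star(s: Situation) -> float:
    """utility**: may also depend on facts outside the agent's head."""
    return 1.0 if s.believes_cube_exists and s.cube_exists else 0.0

# The oracle scenario: P believes the cube exists, but it does not.
p = Situation(believes_cube_exists=True, cube_exists=False)
print(utility_star(p))         # 1.0 -- maximal by utility*
print(utility_double_star(p))  # 0.0 -- not maximal by utility**
```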
Look, if it helps, you can define utility*, which is utility that doesn’t depend on anything outside the mental state of the agent, as opposed to utility**, which does. Then you can get frustrated at all these silly people who seem to mistakenly think they want to maximize their utility** instead of their utility*.
Someone can want to maximize utility**, and this is not necessarily irrational, but if they do this they are choosing to maximize something other than their own well-being.
Perhaps they are being altruistic and trying to improve someone else’s well-being at the expense of their own, like in your torture example. In this example, I don’t believe that most people who choose to save their family believe that they are maximizing their own well-being; I think they realize they are sacrificing their well-being (by maximizing utility** instead of utility*) in order to increase the well-being of their family members. I think that anyone who does believe they are maximizing their own well-being when saving their family is mistaken.
Perhaps they do not have any legitimate reason for wanting something other than their own well-being. Going back to the gold cube example, think of why P wants the cube to exist. P could want it to exist because knowing that gold cubes exist makes them happy. If this is the only reason, then P would probably be perfectly happy to accept a deal where their mind is altered so that they believe the cube exists, even though it does not. If, however, P thinks there is something “good” about the cube existing, independent of their mind, they would (probably) not take this deal. Both of these actions are perfectly rational, given P’s beliefs about morality, but in the second case, P is mistaken in thinking that the existence of the cube is good by itself. This is because in either case, after accepting the deal, P’s mental state is exactly the same, so P’s well-being must be exactly the same. Further, nothing else in this universe is morally relevant, and P was simply mistaken in thinking that the existence of the gold cube was a fundamentally good thing. (There might be other reasons for P to want the cube. Perhaps P just has an inexplicable urge for there to be a cube; in this case it is unclear whether they would take the deal, but taking it would surely still increase their well-being.)
Well, again, you’re kind of just asserting your claim. Prima facie, it seems pretty plausible that whatever function evaluates how well off a person is could take into account things outside of their mental states.
It seems implausible to me that this function could exist independent of a mind or outside of a mind. You seem to be claiming that two people with identical mental states could have different levels of well-being. This seems absurd to me. I realize I am not providing much of an argument for this claim, but the idea that someone’s well-being could depend upon something that has no connection with their mental states whatsoever strongly violates my moral intuitions. I expected that other people would share this intuition, but so far no one has said that they do, so perhaps this intuition is unusual. (One could argue that P is correct in believing that the cube has moral value/utility independent of any sentient being, but this seems even more absurd.)
In any case, I think S is basically equivalent to saying that utility (or moral value, however you want to define it) reduces to mental states.
P.S. I think you quoted more than you meant to above.
Okay, I just think you seem to have some pretty radically different intuitions about what counts toward someone’s well-being.
One other thing: you seem to be assuming that the only reasons someone can have to act are either
that it promotes their well-being, or
that there is some moral reason.
I think this isn’t true, and it’s especially not true if you’re defining well-being as you are. So you present the options for P as
they want to have the happy-making belief that the cube exists, or
they think there is something “good” about the cube existing
but these aren’t exhaustive: P could just want the cube to exist, not in order to produce mental states in themself or for a moral reason. If you’re now claiming that actually no one desires anything other than that they come to have certain mental states, that’s even more controversial, and I would say even more obviously false ;)
I said that there could be other reasons for P to want the cube to exist. If someone has a desire whose fulfillment will not be good for them in any way, or good for any other sentient being, that’s fine, but I do not think that a desire of this type is morally relevant in any way. Further, if someone claimed to have such a desire, knowing that fulfilling it served no purpose other than simply fulfilling it, I would believe them to be confused about what desire is. Surely the desire would have to be causing them at least some discomfort, or some sort of urge to fulfill it. Without that, what does desire even mean?
But that doesn’t really have much to do with whether S is true. Like I said, it seems clearly true to me that identical mental states imply identical well-being. If you don’t agree, I don’t really have any way to convince you other than what I’ve already written.
It may not matter whether there is gold in them thar hills, but it does matter what the oracle says. So I think you have misstated P’s utility function. P wants the oracle to tell him the gold exists; that is his utility function. And realizing that, you cannot say that it doesn’t matter what the oracle really tells him, because it does.
I don’t think P’s hypothesized stupid reliance on a lying oracle binds us to ignore what P really wants and thus call it only a state of mind. He needs that physical communication from something other than his mind, the oracle.
I am stipulating that P really, truly wants the gold to exist (in the same way that you would want there not to exist a bunch of people who are being tortured, ceteris paribus). Whether P should be trusting the oracle is beside the point. The difference between these scenarios is that you are correct in believing that people being tortured is morally bad. However, your well-being would not be affected by whether the people are actually being tortured, only by your belief about how likely this is. Of course, you would still try to stop the torture if you could, even if you knew that you would never learn whether you were successful, but this is mainly an act of altruism.
My main point is probably better expressed as “Beings with identical mental states must be equally well off”. Disagreeing with this seems absurd to me, but apparently a lot of people do not share this intuition.
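Put schematically (this is my own rough formalization, where $M$ maps a being to its complete mental state and $W$ to its level of well-being):

$$\forall\, a, b:\quad M(a) = M(b) \;\Rightarrow\; W(a) = W(b)$$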
Also, you could easily eliminate the oracle in the example by just stating that P spontaneously comes to believe the cube exists for no reason. Or we could imagine that P has a perfectly realistic hallucination of the oracle. The fact that P’s belief is unjustified does not matter. According to S, the reasons for P’s mental state are irrelevant.
Whether P should be trusting the oracle is beside the point.
No, it isn’t. You are claiming that P “really” wants the gold to exist, but you are also claiming that P thinks that at least one of the definitions of “the gold exists” is “the oracle said the gold exists.” You are flummoxed by the paradox of P feeling just as happy due to a false belief in gold as he would based on a true belief in gold, and you are ignoring the thing that ACTUALLY made him happy: the oracle telling him the gold was real.
How surprising should it be that ignoring the real-world causes of something produces paradoxes? P’s happiness doesn’t depend on the gold existing in reality, but it does depend on something in reality causing him to believe the gold exists. And if the gold doesn’t exist in reality, P’s happiness is not changed, but if the reality that led him to believe the gold existed is reversed, if the oracle tells him (truly or falsely) that the gold doesn’t exist, then his happiness is changed.
I actually have not a clue what this example’s connection to moral realism might be, either supporting it or denying it. But I am pretty clear that what you present as a “real mental result without a physical cause because the gold does not matter” is merely a case of you taking a hypothesized fool at his word and ignoring the REAL physical cause of P’s happiness or sadness. Or from a slightly different tack, if P defined “gold exists” as “oracle tells me gold exists”, then P’s claim that his utility is the gold is equivalent to a claim that his utility is being told there is gold.
P’s happiness has a real cause in the real world. Because P is an idiot, he misunderstands what that cause means, but even P recognizes that the cause of his happiness is what the oracle told him.
No, it isn’t. You are claiming that P “really” wants the gold to exist, but you are also claiming that P thinks that at least one of the definitions of “the gold exists” is “the oracle said the gold exists.”
I do not claim that. I claim that P believes the cube exists because the oracle says so. He could believe it exists because he saw it through a telescope, or because he saw it fly in front of his face and then away into space. Whatever reason he has for “knowing” the cube exists carries some degree of uncertainty. He is happy because he has a strong belief that the gold exists. Moreover, my point stands regardless of where P gets his knowledge. Imagine, for example, that P believes strongly that the cube does not exist, because the existence of the cube violates Occam’s razor. It is still the case (in my opinion) that whether he is correct does not alter his well-being.
How surprising should it be that ignoring the real-world causes of something produces paradoxes?
I do not think that this is a paradox; it seems intuitively obvious to me. In fact, I’m not entirely sure that we disagree on anything. You say “P’s happiness doesn’t depend on the gold existing in reality, but it does depend on something in reality causing him to believe the gold exists.” I think others on this thread would argue that P’s happiness does change depending on the existence of the gold, even if what the oracle tells him is the same either way.
I actually have not a clue what this example’s connection to moral realism might be,
Maybe nothing, I just suspected that moral anti-realists would be less likely to accept S. My main question is just whether other people share my intuition that S is true (and what their reasons for agreeing or disagreeing are).
P’s happiness has a real cause in the real world. Because P is an idiot, he misunderstands what that cause means, but even P recognizes that the cause of his happiness is what the oracle told him.
I’m not sure I understand what you’re saying. P believes that the oracle is telling him the cube exists because the cube exists. P is of course mistaken, but everything else the oracle told him was correct, so he strongly believes that the oracle tells him things only because they are true. Whether this is a reasonable belief for P to have is not relevant. You seem to be saying that if something has no causal effect on someone, it cannot affect their well-being. I agree with that, but other people do not.