Well, again, you’re kind of just asserting your claim. Prima facie, it seems pretty plausible that whatever function evaluates how well off a person is could take into account things outside of their mental states. Looking after your family isn’t often thought of as especially altruistic, because it’s something that usually matters very deeply to the person, even bracketing morality.
Your second paragraph is genuinely circular: the whole argument was about whether the example showed that S was false, but you appeal to the fact that
whether the cube exists in this example is completely decoupled from whether P believes the cube exists
This is only relevant if we already think S is true. You can’t use it to support that very claim!
Look, if it helps, you can define utility*, which is utility that doesn’t depend on anything outside the mental state of the agent, as opposed to utility**, which does. Then you can get frustrated at all these silly people who seem to mistakenly think they want to maximize their utility** instead of their utility*. Or perhaps they actually do know what they want? Utility* is a perfectly fine concept; it’s just not one that is actually much use in relation to human decision-making.
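To make the distinction concrete, here’s a minimal formal sketch (M, W, u*, and u** are just labels I’m introducing for this, not anything we’ve defined elsewhere):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Minimal sketch of the utility* / utility** distinction.
% M and W are assumed labels: mental states and external world states.
Let $M$ be the set of an agent's possible mental states and $W$ the set of
possible states of the external world. Then
\begin{align*}
  u^{*}  &\colon M \to \mathbb{R}          && \text{(utility*: depends only on the mental state)} \\
  u^{**} &\colon M \times W \to \mathbb{R} && \text{(utility**: may also depend on the world)}
\end{align*}
Utility* is just the special case of utility** that is constant in its
second argument: $u^{**}(m, w) = u^{*}(m)$ for every $w \in W$.
\end{document}
```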
Look, if it helps, you can define utility*, which is utility that doesn’t depend on anything outside the mental state of the agent, as opposed to utility**, which does. Then you can get frustrated at all these silly people who seem to mistakenly think they want to maximize their utility** instead of their utility*.
Someone can want to maximize utility**, and this is not necessarily irrational, but if they do this, they are choosing to maximize something other than their own well-being.
Perhaps they are being altruistic and trying to improve someone else’s well-being at the expense of their own, as in your torture example. In that example, I don’t believe that most people who choose to save their family believe they are maximizing their own well-being; I think they realize they are sacrificing their well-being (by maximizing utility** instead of utility*) in order to increase the well-being of their family members. I think that anyone who does believe they are maximizing their own well-being when saving their family is mistaken.
Perhaps they do not have any legitimate reason for wanting something other than their own well-being. Going back to the gold cube example, consider why P wants the cube to exist. P could want it to exist because knowing that gold cubes exist makes them happy. If this is the only reason, then P would probably be perfectly happy to accept a deal where their mind is altered so that they believe the cube exists, even though it does not. If, however, P thinks there is something “good” about the cube existing, independent of their mind, they would (probably) not take this deal. Both of these actions are perfectly rational, given P’s beliefs about morality, but in the second case P is mistaken in thinking that the existence of the cube is good by itself. This is because in either case, after accepting the deal, P’s mental state is exactly the same, so P’s well-being must be exactly the same. Further, nothing else in this universe is morally relevant, so P was simply mistaken in thinking that the existence of the gold cube was a fundamentally good thing. (There might be other reasons for P to want the cube. Perhaps P just has an inexplicable urge for there to be a cube; in this case it is unclear whether they would take the deal, but taking it would surely still increase their well-being.)
Well, again, you’re kind of just asserting your claim. Prima facie, it seems pretty plausible that whatever function evaluates how well off a person is could take into account things outside of their mental states.
It seems implausible to me that this function could exist independent of a mind or outside of a mind. You seem to be claiming that two people with identical mental states could have different levels of well-being. This seems absurd to me. I realize I am not providing much of an argument for this claim, but the idea that someone’s well-being could depend upon something that has no connection with their mental states whatsoever strongly violates my moral intuitions. I expected that other people would share this intuition, but so far no one has said that they do, so perhaps this intuition is unusual. (One could argue that P is correct in believing that the cube has moral value/utility independent of any sentient being, but this seems even more absurd.)
In any case, I think S is basically equivalent to saying that utility (or moral value, however you want to define it) reduces to mental states.
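To spell out what I mean by that reduction, here’s a minimal sketch (wb and f are just names I’m introducing for this, not anything defined above):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch of thesis S as a factoring claim; 'wb' and 'f' are assumed names.
Thesis $S$: well-being factors through the mental state alone, i.e.\ there
is some function $f$ such that
\[
  \mathrm{wb}(m, w) = f(m)
  \quad \text{for all mental states } m \text{ and world states } w.
\]
In the cube deal, P ends up in the same mental state $m$ whether the cube
exists ($w_1$) or not ($w_2$); given $S$,
\[
  \mathrm{wb}(m, w_1) = f(m) = \mathrm{wb}(m, w_2),
\]
so P's well-being is the same in both cases. Reducing utility to mental
states, in this sense, is exactly the factoring condition above.
\end{document}
```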
P.S. I think you quoted more than you meant to above.
Okay, I just think you have some pretty radically different intuitions about what counts towards someone’s well-being.
One other thing: you seem to be assuming that the only reasons someone can have to act are either
the promotion of their own well-being, or
some moral reason.
I think this isn’t true, and it’s especially not true if you’re defining well-being as you are. So you present the options for P as
they want to have the happy-making belief that the cube exists
they think there is something “good” about the cube existing
but these aren’t exhaustive: P could just want the cube to exist, not in order to produce mental states in themselves or for a moral reason. If you’re now claiming that actually no one desires anything other than that they come to have certain mental states, that’s even more controversial, and I would say even more obviously false ;)
I said that there could be other reasons for P to want the cube to exist. If someone has a desire whose fulfillment will not be good for them in any way, or good for any other sentient being, that’s fine, but I do not think that a desire of this type is morally relevant in any way. Further, if someone claimed to have such a desire, knowing that fulfilling it served no purpose other than simply fulfilling it, I would believe them to be confused about what desire is. Surely the desire would have to at least cause them discomfort, or some sort of urge to fulfill it. Without that, what does desire even mean?
But that doesn’t really have much to do with whether S is true. Like I said, it seems clearly true to me that identical mental states imply identical well-being. If you don’t agree, I don’t really have any way to convince you other than what I’ve already written.