Surely you would have to be superstitious to refuse!
“Desire” denotes your utility function (things you want). “Pleasure” denotes subjectively nice-feeling experiences. These are not necessarily the same thing.
There’s nothing superstitious about caring about stuff other than your own mental state.
Indeed they are not necessarily the same thing, which is why my utility function should not value that which I “want” but that which I “like”! The top-level post all but concludes this. The conclusion the author draws just does not follow from what came before. The correct conclusion is that we may still be able to “just” program an AI to maximize pleasure. What we “want” may be complex, but what we “like” may be simple. In fact, that would be better than programming an AI to make the world into what we “want” but not necessarily “like”.
If you mean that others’ mental states matter equally much, then I agree (but this distracts from the point of the experience machine hypothetical). Anything else couldn’t possibly matter.
Why’s that?
A priori, nothing matters. But sentient beings cannot help but make value judgements regarding some of their mental states. This is why the quality of mental states matters.
Wanting something out there in the world to be some way, regardless of whether anyone will ever actually experience it, is different. A want is a proposition about reality whose apparent falsehood makes you feel bad. Why should we care about arbitrary propositions being true or false?
You haven’t read or paid much attention to the metaethics sequence yet, have you? Or do you simply disagree with pretty much all the major points of the first half of it?
Also relevant: Joy in the merely real
I remember starting it, and putting it away because yes, I disagreed with so many things. Especially the present subject; I couldn’t find any arguments for the insistence on placating wants rather than improving experience. I’ll read it in full next week.
An unsupported strong claim. Dozens of implications and necessary conditions in evolutionary psychology if the claim is assumed true. No justification. No arguments. Only one or two weak points looked up by the claim’s proponent.
I think you may be confusing labels and concepts. Maximizing hedonistic mental states means, to the best of my knowledge, programming a hedonistic imperative directly into DNA for a full-maximal state constantly from birth, regardless of conditions or situations, and then stacking up humans as much as possible so that as many of them as possible feel as good as possible. If any of the humans move, they could prove to be a danger to the efficient operation of this system, and letting them move thus becomes a net negative. It follows that, in the process of optimization, all human mobility should be removed; for a superintelligence, removing limbs and any other means of mobility from “human” DNA is probably trivial.
But since they’re all feeling the best they could possibly feel, then it’s all good, right? It’s what they like (having been programmed to like it), so that’s the ideal world, right?
Edit: See Wireheading for a more detailed explanation and context of the possible result of a happiness-maximizer.
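The optimization pressure behind this scenario can be made concrete with a deliberately minimal sketch (the action names and pleasure numbers here are hypothetical, purely for illustration): if the objective is defined over nothing but experienced pleasure, the optimizer prefers direct stimulation over any world-preserving alternative.

```python
# Toy sketch of a pleasure-only objective. All values are made up
# for illustration; this is not anyone's actual proposal.

actions = {
    "improve_world": {"pleasure": 0.7, "world_state_intact": True},
    "wirehead":      {"pleasure": 1.0, "world_state_intact": False},
}

def pleasure_only_utility(outcome):
    # Values only the quality of mental states; everything else,
    # including "world_state_intact", is invisible to the objective.
    return outcome["pleasure"]

best = max(actions, key=lambda a: pleasure_only_utility(actions[a]))
print(best)  # -> wirehead
```

The point of the sketch is only that whatever the objective function ignores (here, everything except "pleasure") gets sacrificed whenever doing so buys even a marginal gain on what it does measure.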
This comment has justification. I don’t see how this would affect evolutionary psychology. I’m not sure if I’m parsing your last sentence here correctly; I didn’t “look up” anything, and I don’t know what the weak points are.
Assuming that the scenario you paint is plausible and the optimal way to get there, then yeah, that’s where we should be headed. One of the explicit truths of your scenario is that “they’re all feeling the best they could possibly feel”. But your scenario is a bad intuition pump. You deliberately constructed this scenario so as to manipulate me into judging what the inhabitants experience as less than that, appealing to some superstitious notion of true/pure/honest/all-natural pleasure.
You may be onto something when you say I might be confusing labels and concepts, but I am not saying that the label “pleasure” refers to something simple. I am only saying that the quality of mental states is the only thing we should care about (note the word should, I’m not saying that is currently the way things are).
No. I deliberately re-used a similar construct to Wireheading theories to expose more easily that many people disagree with this.
There’s no superstition of “true/pure/honest/all-natural pleasure” in my model—right now, my current brain feels extreme anti-hedons towards the idea of living in Wirehead Land. Right now, and to my best reasonable extrapolation, I and any future version of “myself” will hate and disapprove of wireheading, and would keep doing so even once wireheaded, if not for the fact that the wireheading necessarily overrides this in order to achieve maximum happiness by re-wiring the user to value wireheading and nothing else.
The “weak points” I spoke of are that you consider some “weaknesses” of your position, namely others’ mental states, but those are not the weakest points of your position; nor are you using the strongest “enemy” arguments to judge your own position; and the other pieces of data also indicate that there’s mind-killing going on.
The quality of mental states is presumably the only thing we should care about—my model also points towards “that” (same label, probably not same referent). The thing is, that phrase is so open to interpretation (What’s “should”? What’s “quality”? How meta do the mental states go about analyzing themselves and future/past mental states, and does the quality of a mental state take into account the bound-to-reality factor of future qualitative mental states? etc.) that it’s almost an applause light.
Yes, but they disagree because what they want is not the same as what they would like.
The value of others’ mental states is not a weakness of my position; I just considered them irrelevant for the purposes of the experience machine thought experiment. The fact that hooking up to the machine would take away resources that could be used to help others weighs against hooking up. I am not necessarily in favor of wireheading.
I am not aware of weaknesses of my position, nor in what way I am mind-killing. Can you tell me?
[...] it’s almost an applause light.
Yes! So why is nobody applauding? Because they disagree with some part of it. However, the part they disagree with is not what the referent of “pleasure” is, or what kind of elaborate outside-world engineering is needed to bring it about (which has instrumental value on my view), but the part where I say that the only terminal value is in mental states that you cannot help but value.
The burden of proof isn’t actually on my side. A priori, nothing has value. I’ve argued that the quality of mental states has (terminal) value. Why should we also go to any length to placate desires?
To a rationalist, the “burden of proof” is always on one’s own side.
Hm, a bit over-condensed. More like the burden of proof is on yourself, to yourself. Once you have satisfied that, argument should be an exercise in communication, not rhetoric.
Agree completely.
This would seem to depend on the instrumental goal motivating the argument.