You don’t know how good “as good as can possibly be” is yet.
I want to continue to be someone who thinks things and does stuff, even at a cost in happiness.
But surely the cost in happiness that you’re willing to accept isn’t infinite. For example, presumably you’re not willing to be tortured for a year in exchange for a year of thinking and doing stuff. Someone who has never experienced much pain might think that torture is no big deal, and accept this exchange, but he would be mistaken, right?
How do you know you’re not similarly mistaken about wireheading?
I’m a bit skeptical that the term “mistaken” even applies when we’re talking about technology that would let us modify our minds to an arbitrary degree. One could easily imagine a mind that (say) wants to be wireheaded for as long as the wireheading goes on, but ceases to want it the moment the wireheading stops. (I.e., both the wireheaded and the non-wireheaded versions prefer their current state and wouldn’t want to change it.) Can we really say that one of them is “mistaken”, or wouldn’t it be more accurate to say that they simply have different preferences?
EDIT: Expanded this to a top-level post.
Interesting problem! Perhaps there’s a maximum utility I assign to happiness, which increasing happiness approaches only asymptotically?
Yes, I think that’s quite possible, but I don’t know whether it’s actually the case or not. A big question I have is whether any of our values scale up to the size of the universe; in other words, whether they avoid asymptotically approaching an upper bound well before we’ve used up the resources in the universe. See also my latest post, http://lesswrong.com/lw/1oj/complexity_of_value_complexity_of_outcome/, where I talk about some related ideas.
The maximum amount of pleasure is finite too.
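To make the asymptote idea from the exchange above concrete, here is a minimal sketch of what a bounded utility-of-happiness function could look like. The specific functional form and the symbols $U_{\max}$ and $h_0$ are purely illustrative assumptions, not something proposed in the thread:

$$U(h) \;=\; U_{\max}\left(1 - e^{-h/h_0}\right), \qquad \lim_{h \to \infty} U(h) = U_{\max}$$

Here $h$ stands for the amount of happiness, $U_{\max}$ is the hypothetical cap on how much utility happiness can contribute, and $h_0$ sets how quickly returns diminish. Under a function like this, each additional unit of happiness is worth less than the last and wireheading can never contribute more than $U_{\max}$ in total, so a fixed value placed on continuing to think things and do stuff could eventually dominate the trade-off.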