Let’s use the example of the Much Better Life Simulator from the post of a similar name, which is less repellent than a case of pure orgasmium. My objections to it are these:
1: Involves memory loss. (Trivially fixable without changing the basic thought experiment; it was originally introduced to avoid marring the pleasure but I think I’m wired strangely with regard to information’s effect on my mood.)
2: Machine does not allow interaction with other real people. (Less-trivially fixable, but still very fixable. Networked MBLSes would do the trick, and/or ones with input devices to let outsiders communicate with folks who were in them.)
If these objections were repaired and there were no “gotcha” side effects I haven’t thought of, I would enter an MBLS with only negligible misgivings, which are not endorsed and would be well within my ability to dismiss.
Let’s consider another case: suppose my neurochemistry were altered so I just had a really high happiness set point, and under ordinary circumstances was generally pleased as punch (but had comparable emotional range to what I have now, and reacted in isomorphic ways to events, so I could dip low when unpleasant things happened and soar high when pleasant things happened). I have no objections to this whatsoever, assuming as is customary for thought experiments that the neurochemistry alteration has no other effects.
Let’s consider a third: orgasmium. “I”, in some sense of that word, am turned into an optimally efficient enjoying-thing. I will assume for the sake of charitableness that the enjoying-thing can experience a full range of sensations that I find enjoyable. I’ll try to restrain myself from the “it’s not me” argument aside from some scare quotes, because I don’t think I can express that line of thought in a way comprehensible to someone who diverges so significantly in intuition.
I don’t object to creating orgasmium (ceteris paribus). I think if you’re going to create orgasmium or, say, a rock, go with the former, because orgasmium is enjoying itself and the rock definitely is doing less than that. But I would object to being replaced with orgasmium myself.
I have both of the objections I mentioned regarding the MBLS to this scenario, and several more.
1: It does not seem like a transmuted orgasmium version of “me” would remember much (except maybe how nice everything has always been for all time). Remembering things is not universally enjoyable, and anyway it’s rarely the most enjoyable thing I could be doing; this faculty would be replaced. This objection weakens, but does not evaporate, if all my memories are stored somewhere in the orgasmium, and simply never happen to be bothered with.
2: Also, orgasmium would not interact with real people (it would just directly have the pleasing sensations associated with doing so). Networking orgasmium would make it a less efficient enjoying-thing (the other nodes might say less than maximally enjoyable things), and would seem to violate the thought experiment.
3: Orgasmium would not react to changes in the world. The MBLS involves a complete, complex simulation that I could react to, and (with the repairs above) other real people as well. The neurochemistry scenario I introduced stipulates that I don’t lose emotional range; I just add a positive number to all the values (see the sketch after this list). Orgasmium would be a less effective enjoying-thing if it allowed this sort of fluctuation; it just turns it all up to eleven and tapes the button down. I do not approve of losing this ability.
4: Orgasmium would not have an interest in accomplishing many of my goals, and would probably not have the cognitive complexity to do it anyway. Most of these goals boil down to interacting with people in some way (writing for an audience), so that folds into the above.
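To make the contrast in objection 3 concrete, here is a minimal, purely illustrative sketch (the names and numbers are invented for the example, not part of anyone’s proposal): the raised-set-point scenario adds a constant offset to every mood value while preserving the response to events, whereas orgasmium pins mood at the maximum no matter what happens.

```python
# Illustrative sketch only; the offset and scale are made-up numbers.

MAX_MOOD = 11.0  # "turns it all up to eleven and tapes the button down"

def current_mood(event_valence: float) -> float:
    """Current wiring: mood simply tracks how good or bad the event is."""
    return event_valence  # e.g. -5 for a loss, +5 for a wedding

def raised_set_point_mood(event_valence: float, offset: float = 4.0) -> float:
    """Set-point scenario: add a positive number to all the values;
    the dips and soars relative to baseline are preserved."""
    return event_valence + offset

def orgasmium_mood(event_valence: float) -> float:
    """Orgasmium: mood is pinned at the maximum; the event no longer matters."""
    return MAX_MOOD

if __name__ == "__main__":
    for valence in (-5.0, 0.0, 5.0):
        print(valence, current_mood(valence),
              raised_set_point_mood(valence), orgasmium_mood(valence))
```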
I think in general this boils down to: I don’t want to lose capacities that I currently have. (Capacity to access information, capacity to interact with humans, capacity to experience emotional range.) There are some capacities that I don’t happen to care about (capacity to affect physical objects instead of just indistinguishable simulations thereof), and I would trade those in for a relatively modest increase in enjoyment if the offer were on the table.
2: Machine does not allow interaction with other real people. (Less-trivially fixable, but still very fixable. Networked MBLSes would do the trick, and/or ones with input devices to let outsiders communicate with folks who were in them.)
How could you tell the difference? Let’s say I claim to have built an MBLS that doesn’t contain any sentients whatsoever and invite you to test it for an hour. (I guarantee you it won’t rewire any preferences or memories; no cheating here.) Do you expect not to be happy? I have taken great care that emotions like loneliness or guilt won’t arise and that you will have plenty of fun. What would be missing?
Like in my response to Yasuo, I find it really weird to distinguish states that have no different experiences, that feel exactly the same.
Let’s consider another case: suppose my neurochemistry were altered so I just had a really high happiness set point [...] but had comparable emotional range to what I have now [...] so I could dip low when unpleasant things happened [...]
Why would you want that? To me, that sounds like deliberately crippling a good solution. What good does it do to be in a low mood when something bad happens? I’d assume that this isn’t an easy question to answer and I’m not calling you out on it, but “I want to be able to feel something bad” sounds positively deranged.
(I can see uses with regards to honest signaling, but then a constant high set-point and a better ability to lie would be preferable.)
It does not seem like a transmuted orgasmium version of “me” would remember much [...]. Remembering things is not universally enjoyable, and anyway it’s rarely the most enjoyable thing I could be doing; this faculty would be replaced.
Yes, I would imagine orgasmium to have essentially no memory, or only as much as is necessary for survival and normal operation. Why does that matter? You already have a very unreliable and sparse memory. You wouldn’t lose anything great in orgasmium; it would always be present. I can only think of the intuition “the only way to access some of the good things that happened to me, right now, is through my memory, so if I lost it, those good things would be gone”. Orgasmium is always amazing.
But then, that can’t be exactly right, as you say you’d be more at ease having memories you simply never use. I can’t understand this. If you don’t use them, how can they possibly affect your well-being, at any point? How can you value something that doesn’t have a causal connection to you?
I think in general this boils down to: I don’t want to lose capacities that I currently have.
How do you know that? I’m not trying to play the postmodernism card “How do we know anything?”, I’m genuinely curious how you arrived at this conclusion. If I try to answer the question “Do I care about losing capacities?”, I go through thought experiments, try to imagine scenarios that differ only in which capacities I have, and then see what emotional reaction comes up. But then I’m still answering the question based on my (anticipated and real) rewards, so I’m really deciding which state I would enjoy more and picking the more enjoyable (or less painful) one. Wireheading, however, is always maximally enjoyable, so it seems I should always choose it.
(For completeness, I would normally agree with you that losing capacities is bad, but only because losing optimization power makes it harder to arrive at my goals. If I saw no need for more power, e.g. because I’m already maximally happy and there’s a system to ensure sustainability, I’d happily give up everything.)
(Finally, I really appreciate your detailed and charitable answer.)
How could you tell the difference? Let’s say I claim to have built an MBLS that doesn’t contain any sentients whatsoever and invite you to test it for an hour. (I guarantee you it won’t rewire any preferences or memories; no cheating here.) Do you expect not to be happy? I have taken great care that emotions like loneliness or guilt won’t arise and that you will have plenty of fun. What would be missing?
I’d probably test such a thing for an hour, actually, and for all I know it would be so overwhelmingly awesome that I would choose to stay, but I expect that, assuming my preferences and memories remained intact, I would rather be out among real people. My desire to be among real people is related to, but not dependent on, my tendency towards loneliness, and guilt hadn’t even occurred to me (I suppose I’d think I was being a bit of a jerk if I abandoned everybody without saying goodbye, but presumably I could explain what I was doing first?). I want to interact with, say, my sister, not just with an algorithm that pretends to be her and elicits similar feelings without actually having my sister on the other end.
Why would you want that? To me, that sounds like deliberately crippling a good solution. What good does it do to be in a low mood when something bad happens? I’d assume that this isn’t an easy question to answer and I’m not calling you out on it, but “I want to be able to feel something bad” sounds positively deranged.
In a sense, emotions can be accurate in something like the way beliefs can. I would react similarly badly to the idea of having pleasant, inaccurate beliefs. It would be mistaken (given my preferences about the world) to feel equally happy when someone I care about has died (or something else bad) as when someone I care about gets married (or something else good).
(I can see uses with regards to honest signaling, but then a constant high set-point and a better ability to lie would be preferable.)
Lying is wrong.
You already have a very unreliable and sparse memory.
I know. It is one of the many terrible things about reality. I hate it.
I can only think of the intuition “the only way to access some of the good things that happened to me, right now, is through my memory, so if I lost it, those good things would be gone”. Orgasmium is always amazing.
Memories are a way to access reality-tracking information. As I said, remembering stuff is not consistently pleasant, but that’s not what it’s about.
How can you value something that doesn’t have a causal connection to you?
Counterfactually.
How do you know that? I’m not trying to play the postmodernism card “How do we know anything?”, I’m genuinely curious how you arrived at this conclusion.
Well, I wrote everything above that in my comment, and then noticed that there was this pattern, and didn’t immediately come up with a counterexample to it.
I think it’s fine if you want to wirehead. I do not advocate interfering with your interest in doing so. But I still don’t want it.
Apologies for coming to the discussion very, very late, but I just ran across this.
If I saw no need for more power, e.g. because I’m already maximally happy and there’s a system to ensure sustainability, I’d happily give up everything.
How could you possibly get into this epistemic state? That is, how could you possibly be so sure of the sustainability of your maximally happy state, without any intervention from you, that you would be willing to give up all your optimization power?
(This isn’t the only reason why I personally would not choose wireheading, but other reasons have already been well discussed in this thread and I haven’t seen anyone else zero in on this particular point.)
WoW already qualifies as that sort of MBLS for some subset of the world.
I tried WoW—weekend free trial. Didn’t see what the fuss was about.
That’s because your life is better than WoW.
I’m rarely attacked by horrifying monsters, that’s one thing. I also have less of a tendency to die than my character demonstrated.