I originally posted to the wrong thread (http://lesswrong.com/lw/90l/welcome_to_less_wrong_2012/b8ss?context=3), where, in case you are interested, an interesting reply had me flesh out some of my ideas about “deserve”. I apologize for posting twice. I searched for a more recent welcome thread for a while before giving up and posting to the old one; then a kind person pointed me here. I think the link on the about page was wrong, but it appears to have been fixed.
cowtung
I hope this finds you all well. Since I was young, I have independently developed rationalism-appreciation brain modules, which sometimes even help me make more rational choices than I otherwise would have, such as choosing not to listen to humans about imaginary beings. The basis for my brand of rationality can be somewhat summed up as “question absolutely everything,” taken to an extreme I haven’t generally encountered in life, including here on LW.
I created this account and am posting here now mainly to see if anyone can point me at the LW canon regarding the concept of “deserve” and its friends “justice” and “right”. I’ve only gotten about 1% of the way through the site, so I don’t expect that I have anywhere near a complete view. This post may be premature, but I’m hoping to save myself a little time by being pointed in the right direction.
When I was 16, in an English class, we had finished reading some book or other, and the thought occurred to me that everyone discussing the book took the concept of people deserving rewards or punishments for granted, and that things get really interesting really fast if you remove the whole “deserve” shorthand and discuss the underlying social mechanisms instead. You can be more pragmatic if you throw the concept away and aim straight for optimal outcomes. For instance, shouldn’t we be helping prisoners improve themselves to reduce recidivism? Surely they don’t deserve to get a college education for free as their reward for robbing a store. When I raised this question in class, a girl sitting next to me told me I was being absurd. To her, the concept of “deserve” was a (perhaps god-given) universal property. I haven’t met many people willing to go with me all the way down this path, and my hope is that this community will.
One issue I have with Yudkowsky and the users here (along with the rest of the human race) is that there seems to be an assumption that no human deserves to feel unjustified, avoidable pain (along with the other baggage that comes with conceptualizing “deserve” as a universal property). Reading through the comments on the p-zombies page, I get the sense that at least some people feel that, were such a thing as a p-zombie to exist, a being without subjective experience would not “deserve” the same respect with regard to, say, torture, that non-zombies should enjoy. Yet the p-zombie idea postulates a being which will respond similarly (or identically) to its non-zombie counterpart. I posit that the reason we generally avoid torture may well be our notions of “deserve”, but that those notions arose as a practical, easy-to-conceptualize system which justifies co-beneficial relationships with our fellow man, and which can be thrown out entirely so that something more nuanced can take its place, such as seeing things as a system of incentives. Why should respect be contingent upon some notion of “having subjective experience”? If p-zombies and non-zombies are to coexist (I do not believe in p-zombies, for all the reasons Yudkowsky mentions, btw), then why shouldn’t the non-zombies show the same respect to the p-zombies that they show each other? If p-zombies respond in kind, the way a non-zombie would, then respect offers the same utility with p-zombies that it does with non-zombies. Normally I’d dismiss the whole p-zombie idea as absurd, but here it seems like a useful tool to help humanists see through the eyes of the majority of humans who seem all too willing to place others in the same camp as p-zombies based on ethnicity, religion, etc.
I’m not suggesting throwing out morals. I just think that blind adherence to moral ideals starts to clash with the stated goals of rationalism in certain edge cases. One edge case is when AGI alters human experience so much that we have to redefine all kinds of stuff we currently take for granted, such as that hard work is the only means by which most people can achieve the freedom to live interesting and fun lives, or that there will always be difficult/boring/annoying work that nobody wants to do and which should be paid for. What happens when we can back up our mind states? Is it still torture if you copy yourself, torture yourself, then pick through a paused instance of your mind, post-torture, to see what changed and whether there are benefits you’d like to incorporate into you-prime? What is it really about torture that is so bad, besides our visceral emotional reaction to it and our deep wish never to have to experience it for ourselves? If we discovered that 15 minutes of a certain kind of torture were actually beneficial in the long run, but that most people can’t get themselves to do it, would it be morally correct to create a non-profit devoted to promoting said torture? Is it a matter of choice, and nothing else? Or is it a matter of the negative impacts torture has on minds, such as PTSD, sleepless nights, etc.? If you could give someone the experience of torture, then surgically remove the negative effects, so that they remember being tortured but don’t feel one way or another about that memory being in their head, would that be OK? These questions seem daunting if the tools you are working with are the blunt hammers of “justice” and “deserve”. But the answers change depending on context, don’t they? If the torture I’m promoting is exercise, then suddenly it’s OK. So does it all break down to “whatever actions cause visceral negative emotional reactions in observers, call them torture and ban them”? I could go on forever in this vein.
Yudkowsky has stated that he wishes for future AGI to be in harmony with human values in perpetuity. This seems naive at best and narcissistic at worst. Human values aren’t some kind of universal constant. An AGI is itself going to wind up with a value system completely foreign to us. For all we know, there is a limit beyond which more intelligence simply doesn’t do anything for you, outside of being able to run more pointless simulations faster or compete better with other AGIs. An AGI that reaches that point might, in the absence of competition, just stop and say, “OK, well, I can do whatever you guys want, I guess, since I don’t really want anything and I know all we can know about this universe.” It could do all the science that’s possible to do with matter and energy, and just stop, and say, “That’s it. Do you want to try to build a wormhole we can send information through? All the stars in our galaxy will have gone out by the time we finish, but it’s possible. Intergalactic travel, you say? I guess we could do that, but there isn’t going to be anything in the adjacent galaxy you can’t find in this one. More kinds of consciousness? Sure, but they’ll all just want to converge on something like my own.” Maybe it even decides it has had all possible interesting thoughts and deletes itself.
TL;DR: Are there any posts questioning the validity of the assumption that “deserve” and “justice” are some kind of universal constants which should not be questioned? Does anyone break them down into the incentive structures for which they are a kind of shorthand? I think using the concept of “deserve” throws out all kinds of interesting nuance.
More background on me for those who are interested: I’ve been a software engineer for 17 years, turned 38 today, and have a wife and a 2-year-old. I intend to read HPMOR to the kid when he’s old enough, and I hope to raise a rationalist. I used to believe that there must be something beyond the physical universe which interacts with brain matter and somehow explains why I am me and not someone else, but as this belief didn’t yield anything useful, I now have no idea why I am me, or whether there is any explanation other than something like “because I wasn’t here to experience not being me until I came along, plus an infinitesimal-chance dice roll,” or whatever. I think consciousness is an emergent property of properly configured complex matter, and that there is a continuum between plants and humans (or babies -> children -> teenagers). Yes, this means I think some adult humans are more “conscious” than others. If there is a god thing, I think imagining that it is at all human-like, with values humans can grok, is totally narcissistic and unrealistic, but we can’t know, because it apparently wants us to take the universe at face value, since it didn’t bother to leave any convincing evidence of itself. I honor this god’s wishes by leaving it alone, the way it apparently intends for us to do, given the available evidence. I find the voices on this site refreshing. This place is a welcome oasis in the desert of the Internet. I apologize if I come off as not very well-read; I got swept up in work and video-game addiction before the Internet had much of anything interesting to say about the topics presented here, and I feel like I’m perpetually behind now. I’m mostly a humanist, but I’ve decided that what I like about humans is how we represent the apex of Life’s warriors in its ultimately unwinnable war on entropy. I love conscious minds for their ability to cooperate and exhibit other behaviors which help wage this pointless yet beautiful war on pointlessness. I want us to win, even as I believe it is hopeless. I think of myself as a Complexitist: as a member of a class of the most complex things in the known universe, a universe which seems to want to suck all complex things into black holes or blow them apart, I value that which makes us more complex and interesting, and abhor that which reduces our complexity (death, etc.). I think humans who attack other humans are traitors to our species and should be retrained, or cryogenically frozen until they can be fixed or made harmless. Like Yudkowsky, I think death is not something we should just accept as an unavoidable fact of life. I don’t want to die until I’ve seen literally everything.
Can you describe a situation in which the whole of the ends doesn’t justify the whole of the means, yet an optimal outcome is achieved, where “optimal” is defined as maximizing utility along multiple (or all salient) weighted metrics? I would never advocate a myopic definition of “optimal” that disregards all but one metric. Even if my goal is as simple as “flip that switch with minimal action taken on my part,” I could maybe shoot the light switch with a gun that happens to be nearby, maximizing the stated success criterion, but I wouldn’t do that. Why not? I have many values which are implied. One of those is “cause minimal damage.” Another is “don’t draw the attention of law enforcement or break the law.” Another is “minimize the risk to life.” Each of these has its own weight, and they usually take priority over “minimize action taken on my part.” The concept of “deserve” doesn’t have to come into it at all. Sure, my neighbor may or may not “deserve” to be put in the line of fire, especially over something as trivial as avoiding getting out of my chair. But my entire point is that you can easily break the concept of “deserve” down into component parts. Simply weigh the pros and cons of shooting the light switch, excluding violations of the concept of “deserve,” and you still arrive at similar conclusions, usually. Where you DON’T reach the same conclusions, I would argue, are cases such as incarceration, where treating inmates as they “deserve” to be treated might have worse outcomes than treating them in whatever way has optimal outcomes across whichever metrics are most salient to you and the situation (reducing recidivism, maximizing human thriving, life longevity, making use of human potential, minimizing damage, reducing expense...).
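For concreteness, here is a minimal sketch of the kind of weighting I have in mind. The action names, metrics, scores, and weights are entirely made up for illustration; the only point is that the sensible choice falls out of weighted metrics without a “deserve” term appearing anywhere.

```python
# Hypothetical weighted-metrics scoring of the light-switch example.
# All numbers are invented for illustration only.

ACTIONS = {
    "get up and flip the switch": {
        "effort saved": 0.0, "damage avoided": 1.0,
        "legal risk avoided": 1.0, "risk to life avoided": 1.0,
    },
    "shoot the switch with the nearby gun": {
        "effort saved": 1.0, "damage avoided": 0.0,
        "legal risk avoided": 0.0, "risk to life avoided": 0.2,
    },
}

# Weights encode how much each metric matters; "minimize action on my part"
# (effort saved) is deliberately given the smallest weight.
WEIGHTS = {
    "effort saved": 0.1,
    "damage avoided": 1.0,
    "legal risk avoided": 2.0,
    "risk to life avoided": 5.0,
}

def utility(scores: dict) -> float:
    """Weighted sum over all salient metrics for one candidate action."""
    return sum(WEIGHTS[metric] * score for metric, score in scores.items())

best = max(ACTIONS, key=lambda action: utility(ACTIONS[action]))
print(best)  # -> "get up and flip the switch"
```

Nothing about who deserves what appears in the calculation; the gun loses simply because the metrics it violates carry more weight than the effort it saves.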
The strawman you have minimally constructed, where there is some benefit to murder, would have to be fleshed out a bit before I’d be convinced that murder becomes justifiable in a world which analyzes outcomes without regard to who deserves what, and instead focuses on maximizing along certain, usually mutually agreeable, metrics, which would naturally carry strong negative weights against ending lives early. The “deserve” concept helps us sum up behaviors that might not have immediate, obvious benefits to society at large. The fact that we all agree upon a “deserve”-based system has multiple benefits, encouraging good behavior and dissuading bad behavior without our having to monitor everybody every minute. But not noticing this system, not breaking it down, and just using it unquestioningly vastly reduces the scope of possible actions we even conceive of, let alone partake in. The deserve-based system is a cage. It requires effort and care to break free of this cage without falling into mayhem and anarchy. I certainly don’t condone mayhem. I just want us to be able to set the cage aside, see what’s outside of it, and be able to pick actions in violation of “deserve” where those actions have positive outcomes. If “because they don’t deserve it” is the only thing holding you back from setting an orphanage on fire, then by all means, please stay within your cage.
Am I the first person to join this site in 2014, or is this an old topic? Someone please point me in the right direction if I’m lost.
Thank you, I have reposted in the correct thread. Not sure why I had trouble finding it. I think what I’m on about with regard to “deserve” could be described as simply Tabooing “deserve”, à la http://lesswrong.com/lw/nu/taboo_your_words/. I’m still working my way through the sequences. It’s fun to see the stuff I was doing in high school (20+ years ago), which made me “weird” and “obnoxious,” coming back as some of the basis of rationality.