I’ve had a rather unsettling night’s sleep, contemplating scenarios where I’m forced to choose between slight variations on violations of my body and mind, disconnect from reality, and loss of everyone I’ve ever loved. It was worth it, though, since I’ve come up with a less convenient version:
If choice D included, within the simulation, versions of my loved ones that were ultimately hollow, but convincing enough that I could be satisfied with them by choosing not to look too closely, and further if the VR included a society with complex, internally-consistent dynamics of a sort that are impossible in the real world but endlessly fascinating to me, and if in option C I would know that such a virtual world existed but be permanently denied access to it (in such a way that seemed consistent with the falsely-remembered death of my loved ones), that would make D quite a bit more tempting.
However, I would still choose the ‘actual reality’ option, because it has better long-term recovery prospects. In that situation, my loved ones aren’t actually dead, so I’ve got some chance of reconnecting with them or benefiting from the indirect consequences of their actions; my map is broken, but I still have access to the territory, so it could eventually be repaired.
Ok, that is a better effort to find a less convenient world, but you still seem to be avoiding the conflict between optimizing the actual state of reality and optimizing your perception of reality.
Assume that in Scenario C you know you will never see your loved ones again, and that you will never realize they are still alive.
More generally, if you come up with some reason why optimizing your expected experience of your loved ones happens to produce the same result as optimizing the actual lives of your loved ones, despite the dilemma being constructed to introduce a disconnect between these concepts, then imagine that reason does not work. Imagine the dilemma is tightened to eliminate that reason. For the purposes of this thought experiment, don’t worry if this requires you to occupy some epistemic state that humans cannot ordinarily achieve, or grants strange, arbitrary powers to the agents forcing you to make this decision. Planning a reaction to this absurd scenario is not the point. The point is to figure out and compare to what extent you care about the actual state of the universe, and to what extent you care about your perceptions.

My own answer to this dilemma is option C, because then my loved ones are actually alive and well, full stop.
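To make the distinction concrete (the notation here is just an illustrative sketch of what I mean, not part of the dilemma as stated): let w be the actual state of the world and obs(w) be what you perceive of it. An agent who cares about the territory is maximizing some U(w) defined over world-states; an agent who cares only about the map is maximizing some V(obs(w)) defined over perceptions. The dilemma is built so that the two come apart: C scores higher under U, since your loved ones are actually alive and well, while D scores higher under V, since your experiences are more pleasant. Which option you pick is therefore evidence about which of the two quantities you are actually trying to optimize.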
Assume that in Scenario C you know you will never see your loved ones again, and that you will never realize they are still alive.
Fair enough. I’d still pick C, since it also includes the options of finding someone else to be with, or somehow coming to terms with living alone.
The point is to figure out and compare to what extent you care about the actual state of the universe, and to what extent you care about your perceptions.
Thank you for clarifying that.
Most of all, I want to stay alive, or if that’s not possible, keep a viable breeding population of my species alive. I would be suspicious of anyone who claimed to be the result of an evolutionary process but did not value this.
If the ‘survival’ situation seems to be under control, my next priority is constructing predictive models. This requires sensory input and thought, preferably conscious thought. I’m not terribly picky about what sort of sensory input exactly, but more is better (so long as my ability to process it can keep up, of course).
After modeling, it gets complicated. I want to be able to effect changes in my surroundings, but a hammer does me no good without the ability to predict that striking a nail will change the nail’s position. If my perceptions are sufficiently disconnected from reality that the connection can never be reestablished, objective two is in an irretrievable failure state, and any higher goal is irrelevant.
That leaves survival. Neither C nor D explicitly threatens my own life, but with perma-death on the table, either of them might mean me expiring somewhere down the line. D explicitly involves my loved ones (all or at least most of whom are members of my species) being killed for arbitrary, nonrepeatable reasons, which constitutes a marginal reduction in genetic diversity without a corresponding increase in fitness for any conceivable, let alone relevant, environment.
So, I suppose I would agree with you in choosing C primarily because it would leave my loved ones alive and well.
Most of all, I want to stay alive, or if that’s not possible, keep a viable breeding population of my species alive. I would be suspicious of anyone who claimed to be the result of an evolutionary process but did not value this.

Be careful about confusing evolution’s purposes with the purposes of the product of evolution. Is mere species survival what you want, or what you predict you want, as a result of inheriting evolution’s values (which doesn’t actually work that way)?

You are allowed to assign intrinsic, terminal value to your loved ones’ well-being, and to choose option C because it better achieves that terminal value, without having to justify it further by appeals to inclusive genetic fitness. Knowing this, do you still say you are choosing C because of a small difference in genetic diversity?
But, getting back to the reason I presented the dilemma, it seems that you do in fact have preferences over what happens after you die, and so your utility function, representing your preferences over possible futures that you would now attempt to bring about, cannot be uniformly 0 in the cases where you are dead.
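To put that in symbols (purely illustrative notation, nothing you committed to above): write U(w) for how strongly you now prefer a possible future w, including futures in which you are dead. If, for w1 = ‘I die, and a viable population of my species carries on’ and w2 = ‘I die, and my species goes extinct’, you would now act to bring about w1 rather than w2, then U(w1) > U(w2). So U is not constant, and in particular not uniformly 0, over the futures in which you no longer exist.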
I am not claiming to have inherited anything from evolution itself. The blind idiot god has no DNA of its own, nor could it have preached to a younger, impressionable me. I decided to value the survival of my species, assigned intrinsic, terminal value to it, because it’s a fountain for so much of the stuff I instinctively value.
Part of objective two is modeling my own probable responses, so an equally-accurate model of my preferences with lower Kolmogorov complexity has intrinsic value as well. Of course, I can’t be totally sure that it’s accurate, but that particular model hasn’t let me down so far, and if it did (and I survived) I would replace it with one that better fit the data.
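Stated slightly more formally (my own illustrative notation, not a precise commitment): if two candidate models M1 and M2 predict my responses equally well on everything observed so far, I prefer whichever has the shorter description, roughly the one minimizing K(M); and when the preferred model gets a prediction wrong, it is replaced by one that fits the new data.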
If my species survives, there’s some possibility that my utility function, or one so similar as to be practically indistinguishable, will be re-instantiated at some point. Even without resurrection, cryostasis, or some other clear continuity, enough recombinant exploration of the finite solution-space for ‘members of my species’ will eventually result in repeats. Admittedly, the chance is slim, which is why I overwhelmingly prefer the more direct solution of immortality through not dying.
In short, yes, I’ve thought this through and I’m pretty sure. Why do you find that so hard to believe?
The entire post above is actually a statement that you value the survival of our species instrumentally, not intrinsically. If it were an intrinsic value for you, then contemplating any future in which humanity becomes smarter and happier and eventually leaves behind the old bug-riddled bodies we started with should fill you with indescribable horror. And in my experience, very few people feel that way, and many of those who do (e.g., Leon Kass) do so as an outgrowth of a really strong signaling process.
I don’t object to biological augmentations, and I’m particularly fond of the idea of radical life-extension. Having our bodies tweaked, new features added and old bugs patched, that would be fine by me. Kidneys that don’t produce stones, but otherwise meet or exceed the original spec? Sign me up!
If some sort of posthumans emerged and decided to take care of humans in a manner analogous to present-day humans taking care of chimps in zoos, that might be weird, but having someone incomprehensibly intelligent and powerful looking out for my interests would be preferable to a poke in the eye with a sharp stick.
If, on the other hand, a posthuman appears as a wheel of fire, explains that it’s smarter and happier than I can possibly imagine, and further that any demographic which could produce individuals psychologically equivalent to me is a waste of valuable mass, so I need to be disassembled now, that’s where the indescribable horror kicks in. Under those circumstances, I would do everything I could to keep being, or to set up some possibility of coming back, and it wouldn’t be enough.
You’re right. Describing that value as intrinsic was an error in terminology on my part.
I decided to value the survival of my species, assigned intrinsic, terminal value to it, because it’s a fountain for so much of the stuff I instinctively value.
Right, because if you forgot everything else that you value, you would be able to rederive it all from that fountain, which would make you an agent like the one described in Thou Art Godshatter:
Such agents would have sex only as a means of reproduction, and wouldn’t bother with sex that involved birth control. They could eat food out of an explicitly reasoned belief that food was necessary to reproduce, not because they liked the taste, and so they wouldn’t eat candy if it became detrimental to survival or reproduction. Post-menopausal women would babysit grandchildren until they became sick enough to be a net drain on resources, and would then commit suicide.
Or maybe not. See, the value of a theory is not just what it can explain, but what it can’t explain. It is not enough that your fountain generates your values; it must also not generate any other values.
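In probability terms (a standard Bayesian gloss, not something spelled out above): an observation E about your values supports the fountain hypothesis H over its negation only through the likelihood ratio P(E|H) / P(E|¬H). A fountain loose enough to have generated whatever values you happened to turn out with assigns comparable likelihood to every possible E, so that ratio stays near 1 and your actual values lend it essentially no support. It earns credit only if there are values it forbids, observations with P(E|H) close to 0, which we then fail to see.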
Did you miss the part where I said that the value I place on the survival of my species is secondary to my own personal survival?
I recognize that, for example, nonreproductive sex has emotional consequences and social implications. Participation in a larger social network provides me with access to resources of life-or-death importance (including, but certainly not limited to, modern medical care) that I would be unable to maintain, let alone create, on my own. Optimal participation in that social network seems to require at least one ‘intimate’ relationship, to which nonreproductive sex can contribute.
As for what my theory can’t explain: If I ever take up alcohol use for social or recreational purposes, that would be very surprising; social is subsidiary to survival, and fun is something I have when I know what’s going on. Likewise, it would be a big surprise if I ever attempt suicide. I’ve considered possible techniques, but only as an academic exercise, optimized to show the subject what a bad idea it is while there’s still time to back out. I can imagine circumstances under which I would endanger my own health, or even life, to save others, but I wouldn’t do so lightly. It would most likely be part of a calculated gambit to accept a relatively small but impressive-looking immediate risk in exchange for social capital necessary to escape larger long-term risks. The idea of deliberately distorting my own senses and/or cognition is bizarre; I can accept other people doing so, provided they don’t hurt me or my interests in the process, but I wouldn’t do it myself. Taking something like caffeine or Provigil for the cognitive benefits would seem downright Faustian, and I have a hard time imagining myself accepting LSD unless someone was literally holding a gun to my head. I could go on.