I am not claiming to have inherited anything from evolution itself. The blind idiot god has no DNA of its own, nor could it have preached to a younger, impressionable me. I decided to value the survival of my species, assigned intrinsic, terminal value to it, because it’s a fountain for so much of the stuff I instinctively value.
Part of objective two is modeling my own probable responses, so an equally-accurate model of my preferences with lower Kolmogorov complexity has intrinsic value as well. Of course, I can’t be totally sure that it’s accurate, but that particular model hasn’t let me down so far, and if it did (and I survived) I would replace it with one that better fit the data.
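To make that concrete, here’s a toy sketch of the selection rule I’m gesturing at. Kolmogorov complexity is uncomputable, so compressed description length stands in as a crude proxy, and both ‘models’ are strings I’ve invented purely for illustration:

```python
import zlib

def description_length(model_source: str) -> int:
    """Crude proxy for Kolmogorov complexity: compressed size in bytes."""
    return len(zlib.compress(model_source.encode()))

# Two hypothetical descriptions of the same preferences, assumed to
# reproduce my observed choices equally well.
model_a = "prefer(x) = survives(me) and survives(species) and fun(x)"
model_b = ("prefer(x) = (survives(me) and survives(species) and fun(x) "
           "and not tuesday) or (tuesday and fun(x) and survives(me) "
           "and survives(species))")  # logically equivalent, just longer

# Given equal accuracy, the shorter description wins.
best = min([model_a, model_b], key=description_length)
print(best)
```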
If my species survives, there’s some possibility that my utility function, or one sufficiently similar as to be practically indistinguishable, will be re-instantiated at some point. Even without resurrection, cryostasis, or some other clear continuity, enough recombinant exploration of the finite solution-space for ‘members of my species’ will eventually result in repeats. Admittedly, the chance is slim, which is why I overwhelmingly prefer the more direct solution of immortality through not dying.
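For the curious, the ‘slim chance’ arithmetic looks something like this; both N and k are numbers I’ve made up for illustration, and the independence assumption is doing a lot of work:

```python
import math

# If minds practically indistinguishable from mine occupy one slot out
# of N possible configurations, and the future instantiates k minds
# independently at random (a strong simplifying assumption; N and k
# are invented), then P(at least one repeat) = 1 - (1 - 1/N)**k.
N = 1e20  # hypothetical size of the relevant mind-space
K = 1e12  # hypothetical number of future minds instantiated

# log1p/expm1 keep the arithmetic accurate when 1/N is too small to
# survive the subtraction inside (1 - 1/N).
p_repeat = -math.expm1(K * math.log1p(-1.0 / N))
print(f"P(at least one repeat) ~ {p_repeat:.1e}")  # ~1.0e-08: slim indeed
```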
In short, yes, I’ve thought this through and I’m pretty sure. Why do you find that so hard to believe?
The entire post above is actually a statement that you value the survival of our species instrumentally, not intrinsically. If it were an intrinsic value for you, then contemplating any future in which humanity becomes smarter and happier and eventually leaves behind the old bug-riddled bodies we started with should fill you with indescribable horror. And in my experience, very few people feel that way, and many of those who do (e.g. Leon Kass) do so as an outgrowth of a really strong signaling process.
I don’t object to biological augmentations, and I’m particularly fond of the idea of radical life-extension. Having our bodies tweaked, new features added and old bugs patched, that would be fine by me. Kidneys that don’t produce stones, but otherwise meet or exceed the original spec? Sign me up!
If some sort of posthumans emerged and decided to take care of humans in a manner analogous to present-day humans taking care of chimps in zoos, that might be weird, but having someone incomprehensibly intelligent and powerful looking out for my interests would be preferable to a poke in the eye with a sharp stick.
If, on the other hand, a posthuman appears as a wheel of fire, explains that it’s smarter and happier than I can possibly imagine, and further that any demographic which could produce individuals psychologically equivalent to me is a waste of valuable mass, so I need to be disassembled now, that’s where the indescribable horror kicks in. Under those circumstances, I would do everything I could to keep being, or to set up some possibility of coming back, and it wouldn’t be enough.
You’re right. Describing that value as intrinsic was an error in terminology on my part.
I decided to value the survival of my species, assigned intrinsic, terminal value to it, because it’s a fountain for so much of the stuff I instinctively value.
Right, because if you forgot everything else that you value, you would be able to rederive that you are an agent as described in Thou Art Godshatter:
Such agents would have sex only as a means of reproduction, and wouldn’t bother with sex that involved birth control. They could eat food out of an explicitly reasoned belief that food was necessary to reproduce, not because they liked the taste, and so they wouldn’t eat candy if it became detrimental to survival or reproduction. Post-menopausal women would babysit grandchildren until they became sick enough to be a net drain on resources, and would then commit suicide.
Or maybe not. See, the value of a theory is not just what it can explain, but what it can’t explain. It is not enough that your fountain generates your values; it must also not generate any other values.
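Here’s a toy Bayesian rendering of that point, with outcome labels and numbers invented for illustration: a theory that spreads its probability over every outcome gains almost nothing from any observation, while one that forbids most outcomes gains a lot when its narrow prediction comes true.

```python
# A theory that 'explains everything' spreads its probability over all
# outcomes; a theory that forbids most outcomes concentrates it.
outcomes = ["godshatter_values", "pure_fitness_maximizer", "paperclipper"]

vague_theory = {o: 1 / 3 for o in outcomes}   # rules nothing out
sharp_theory = {"godshatter_values": 0.98,    # forbids (almost) the rest
                "pure_fitness_maximizer": 0.01,
                "paperclipper": 0.01}

observed = "godshatter_values"
# Likelihood ratio: how strongly the observation favors the sharp theory.
print(sharp_theory[observed] / vague_theory[observed])  # ~2.94
```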
Did you miss the part where I said that the value I place on the survival of my species is secondary to my own personal survival?
I recognize that, for example, nonreproductive sex has emotional consequences and social implications. Participation in a larger social network provides me with access to resources of life-or-death importance (including, but certainly not limited to, modern medical care) that I would be unable to maintain, let alone create, on my own. Optimal participation in that social network seems to require at least one ‘intimate’ relationship, to which nonreproductive sex can contribute.
As for what my theory can’t explain: If I ever take up alcohol use for social or recreational purposes, that would be very surprising; social is subsidiary to survival, and fun is something I have when I know what’s going on. Likewise, it would be a big surprise if I ever attempt suicide. I’ve considered possible techniques, but only as an academic exercise, optimized to show the subject what a bad idea it is while there’s still time to back out. I can imagine circumstances under which I would endanger my own health, or even life, to save others, but I wouldn’t do so lightly. It would most likely be part of a calculated gambit to accept a relatively small but impressive-looking immediate risk in exchange for social capital necessary to escape larger long-term risks. The idea of deliberately distorting my own senses and/or cognition is bizarre; I can accept other people doing so, provided they don’t hurt me or my interests in the process, but I wouldn’t do it myself. Taking something like caffeine or Provigil for the cognitive benefits would seem downright Faustian, and I have a hard time imagining myself accepting LSD unless someone was literally holding a gun to my head. I could go on.
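The arithmetic behind that ‘calculated gambit’, with every probability invented purely for illustration, would run roughly like this:

```python
# Take the visible risk only if the social capital it buys cuts
# long-term risk by more than the gamble costs in survival odds.
# All numbers below are made up for the sake of the example.
p_stunt_death = 0.01               # the impressive-looking immediate risk
p_longterm_death_baseline = 0.20   # long-term risk without the capital
p_longterm_death_with_capital = 0.10  # long-term risk after cashing it in

p_survive_gamble = (1 - p_stunt_death) * (1 - p_longterm_death_with_capital)
p_survive_abstain = 1 - p_longterm_death_baseline

print(p_survive_gamble, p_survive_abstain)  # ~0.891 vs 0.8: take the gamble
```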