The way I frame this question is: “What if I executed my personal-volition-extrapolating FAI, it ran, created a pretty light show, and then did nothing, and I checked over the code many times with many people who also knew the theory and we all agreed that it should have worked, then tried again with completely different code many (maybe 100, or 1,000, or millions of) times, sometimes extrapolating somewhat different volitions with somewhat different dynamics, and each time it produced the same pretty light show and then did nothing. Let’s say I have spent a few thousand years on this while running as an upload. Now what?”
In this scenario there’s no optimization reason I shouldn’t just execute cached thoughts. In fact, that’s pretty much what anything I do in this scenario amounts to doing. Executing cached thoughts does, of course, happen lawfully, so there is a reason to dress in black, etc., in that sense. I used to be pretty good at writing sad but mostly non-gloomy poetry and denouncing people as fools. Might be even more fun to do that with other modified upload copies of myself. When that got old, maybe use my knowledge of FAI theory to build myself a philosophy-of-math oracle neural module. Hard to guess how my actions would differ once it was brought online. It seems to me that it might add up to normality, because there might be an irreducible difference between utility for me and utility for an external AGI even if it was an extrapolation of my volition, but for now I’m a blind man speculating on the relative merits of Picasso and Van Gogh.
Honestly, I’m much less concerned about this scenario than I once was. I’m pretty convinced that there are ways to extrapolate me that do something even if they discover infinite computing power.
Dynamically linked: No one but nerds and children cares what moral philosophies say anyway, at least not in a way that affects their actions. You, TGGP, and Unknown are very atypical. Poke is much closer to correct. If anything, when the dust settled the world would be more peaceful if most people understood the proof.
Eric Mesoy: If utilities = 0, then dying from malnourishment isn’t horrible.
Andy M: Your answer sounds more appropriate for a fairly shallow 20-year-old who discovers that the world, or his life, will end in six months than for someone whose utilities are set to zero or whose morality is lost.
Constant, Pablo, and especially Sebastian: clearly thought out! I should probably start reading your comments more carefully in the future.
Laura: Why unsympathetic? My guess is that you still confuse my and Eliezer’s aspirations with some puerile Nietzschean ambition. I like who I am now too, thank you very much, and if my extrapolated volition does want to replace who I am, it is for reasons that I would approve of if I knew them; that is, what it will replace me with is not “completely different, incomprehensible, and unsympathetic”. That’s the difference between a positive and a negative singularity. Death isn’t abhorrent; rather, life/experience/growth/joy/flourishing/fulfillment is good, and a universe more full of them is more good than one less full, whether viewed from inside or from outside. Math is full of both death and flourishing and is not lessened by the former.
Phil: Very entertaining and thoughtful post.