Why won’t QM save you from a quantum-hangover, where you either do or don’t get a terrible headache and an IQ impairment of 20 points?
Just because. Really, that’s all there is to it. It so happens that Yvain’s (alleged) subjectively objective preferences are such that, given the existence of a non-hangover branch, the existence of a hangover branch is less desirable than if death were substituted for the hangover in that branch.
This is not entirely unrelated to the natural outcome of ‘average utilitarianism’. Under that value system the simplest solution is to go around killing the unhappiest person in the world until the remaining people would, on average, be made less happy by the removal of the lowest-happiness person. Sure, this is totally crazy. But if you have crazy preferences you should do crazy things.
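To make the arithmetic concrete, here is a minimal sketch with entirely made-up happiness numbers and an assumed per-killing ‘grief penalty’ on the survivors; none of these figures come from the discussion. It just illustrates why removing anyone below the current mean raises the mean, so the policy only stops once the act of removal itself costs the survivors more than the average gains.

```python
# Sketch only: naive average utilitarianism with hypothetical numbers.
# Removing a below-average member always raises the mean, so the
# "kill the unhappiest" policy only halts when the assumed grief penalty
# on the survivors outweighs the improvement in the average.

happiness = [1.0, 2.0, 5.0, 7.0, 9.0]   # made-up happiness scores
grief_penalty = 0.5                     # assumed cost of each killing to every survivor

def mean(xs):
    return sum(xs) / len(xs)

while len(happiness) > 1:
    current = mean(happiness)
    # Drop the unhappiest person, then charge every survivor the grief penalty.
    survivors = [h - grief_penalty for h in sorted(happiness)[1:]]
    if mean(survivors) <= current:      # removal no longer improves the average
        break
    happiness = survivors
    print(f"killed the unhappiest; new average = {mean(happiness):.2f}")
```

With these particular numbers the loop keeps going until only one person is left, which is the point: the policy is perfectly consistent with the stated value system and still looks crazy.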
Well, that was kind of a rhetorical question; I want to make him less keen on quantum suicide by appealing to the fact that (my guess) he does not expect MWI to save him from experiencing the quantum-hangover.
Edit: to explain, I myself believe death to be a real value ranging from epsilon to 1 rather than a binary one: the epsilon being the shared memories remaining in other people, other nearby people who think like you, etc., and the 1 being the state where you never forget anything at all, with living somewhere close to 1 and amnesias somewhere between 0 and that. Having anything binary-valued seems to result in really screwed-up behaviours. (By ‘other people nearby’ I mean within distances that are much smaller than your information content; other people at distances whose relative coordinate takes as many bits to specify as your brain contains should, it seems to me, be counted as substantially different, even though I can’t quite pin down why.)
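One possible way to make that ‘real-valued death’ idea concrete, as a sketch only: treat survival value as a monotone function of the fraction of your information that persists, with a small floor for the memories other people hold of you. The single ‘fraction preserved’ parameter, the epsilon floor, and the linear shape are my own simplifications, not anything the comment commits to.

```python
# Sketch of a continuous (non-binary) survival value, under the assumptions above.
EPSILON = 0.01   # assumed residual value: memories of you retained by other people

def survival_value(fraction_preserved: float) -> float:
    """Map the fraction of your information that persists (0..1) to a value in [EPSILON, 1]."""
    fraction_preserved = max(0.0, min(1.0, fraction_preserved))
    return EPSILON + (1.0 - EPSILON) * fraction_preserved

print(survival_value(1.0))   # fully intact: 1.0
print(survival_value(0.7))   # partial amnesia: somewhere between epsilon and 1
print(survival_value(0.0))   # dead, but remembered by others: epsilon
```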
It was a question that got a straight answer, one in accord with Yvain’s position. To the extent that such rhetorical questions get answers that do not require the rhetoric’s target to change their mind, the implied argument can be considered refuted.
In practical social usage, rhetorical questions can indeed often be used as a way to make arguments that an opponent is not allowed to respond to. Here on LessWrong we are free to reject that convention, so not only are literal answers to rhetorical questions acceptable, but arguments hidden behind rhetorical questions should be considered at least as open to criticism as arguments made openly.
Well, it is the case that humans often strive to have consistent goal systems (perhaps minimizing their descriptive complexity?), so while he can just say ‘because I defined my goal system to be this’, he is also likely to think it over and try to come up with some general principle that does not have a weird discontinuity at the point where the amnesia becomes too much like death for his taste.
Edit: I think we are talking about different issues; I’m not making a point about his utility function, I’m making the point that he expects to become ‘him, 20 points dumber and with a terrible headache’, who is a rather different person, rather than to become someone in a galaxy far, far away who doesn’t have the hangover and is thus more similar to his pre-hangover self.
Yvain does present a consistent goal system. It is one that may appear either crazy or morally abhorrent to us, but all indications are that it is entirely consistent. If you were attempting to demonstrate to Yvain an inconsistency in his value system that requires arbitrary complexity to circumvent, then you failed.
I think you’re misunderstanding me; see the edit. The point I am making is not so much about his values as about his expectations of subjective experience.
Yvain’s expectations of subjective experience actually seem sane to me. Only his values (and so expected decisionmaking) are weird.
Well, my argument is that you can propose a battery of possible partial quantum-suicide setups involving a machine that partially destroys you (e.g. you are anaesthetised and undergo a lobotomy with varying extents of cutting, or something of that sort, such as administration of a sub-lethal dose of neurotoxin). At some point there is so little of you left that you are as good as dead; at some other point there is so much of you left that you don’t really expect to be quantum-saved. Either he has some strange continuous function in between, which I am very curious about, or he has a discontinuity, which is weird. (And I am guessing a discontinuity, but I’d be interested to hear about the function.)
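Here is a sketch of the two shapes being contrasted: an ‘expectation of being quantum-saved’ that rises continuously with how much of you the machine destroys, versus one with a hard threshold. The sigmoid, the step, and all the parameters are made up purely for illustration; neither is claimed to be Yvain’s actual function.

```python
# Sketch of the continuous vs. discontinuous alternatives, with made-up parameters.
import math

def continuous_rescue_expectation(damage: float, midpoint: float = 0.5,
                                  steepness: float = 10.0) -> float:
    """Smoothly rising expectation of being 'quantum-saved' as destruction grows."""
    return 1.0 / (1.0 + math.exp(-steepness * (damage - midpoint)))

def discontinuous_rescue_expectation(damage: float, threshold: float = 0.5) -> float:
    """Step function: below the threshold you expect to wake up damaged,
    at or above it you expect to find yourself in the undamaged branch."""
    return 1.0 if damage >= threshold else 0.0

for d in (0.1, 0.4, 0.5, 0.6, 0.9):
    print(d,
          round(continuous_rescue_expectation(d), 3),
          discontinuous_rescue_expectation(d))
```

The battery of setups in the comment above is essentially probing which of these two shapes (if either) describes his expectations.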