It was a question that got a straight answer consistent with Yvain’s position. To the extent that such rhetorical questions get answers that do not require the rhetoric victim to change their mind, the implied argument can be considered refuted.
In practical social usage, rhetorical questions can indeed often be used as a way to make arguments that an opponent is not allowed to respond to. Here on LessWrong we are free to reject that convention, so not only are literal answers to rhetorical questions acceptable, but arguments hidden behind rhetorical questions should be considered at least as subject to criticism as arguments made openly.
Well, humans often strive to have consistent goal systems (perhaps minimizing their descriptive complexity?), so while he can simply say ‘because I defined my goal system to be this’, he is also likely to try to come up with some general principle that does not have weird discontinuities at the point where the amnesia becomes too much like death for his taste.
Edit: I think we are talking about different issues; I’m not making a point about his utility function, I’m making a point that he expects to become ‘him, 20 points dumber and with a terrible headache’, who is a rather different person, rather than to become someone in a galaxy far, far away who doesn’t have the hangover and is thus more similar to his pre-hangover self.
Yvain does present a consistent goal system. It is one that may appear either crazy or morally abhorrent to us, but all indications are that it is entirely consistent. If you were attempting to demonstrate to Yvain an inconsistency in his value system that requires arbitrary complexity to circumvent, then you failed.
I think you’re misunderstanding me; see the edit. The point I am making is not so much about his values as about his expectations of subjective experience.
Yvain’s expectations of subjective experience actually seem sane to me. Only his values (and so his expected decision-making) are weird.
Well, my argument is that you can propose a battery of possible partial quantum suicide setups involving a machine that partially destroys you (e.g. you are anaesthetised and undergo a lobotomy with varying extents of cutting, or something of that sort, such as administration of a sublethal dose of neurotoxin). At some point there’s so little of you left that you’re as good as dead; at some other point there’s so much of you left that you don’t really expect to be quantum-saved. Either he has some strange continuous function in between, which I am very curious about, or he has a discontinuity, which is weird. (I am guessing a discontinuity, but I’d be interested to hear about the function.)
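To make the shape of that function concrete, here is a minimal sketch (purely my own illustration with arbitrary assumed parameters, not anything Yvain has stated) of the two candidate shapes: a smooth fall-off versus a hard cutoff, mapping the fraction of the brain destroyed to the degree one expects to survive the procedure as oneself.

```python
# Illustrative only: two candidate shapes for the map from "fraction of
# brain destroyed" (d in [0, 1]) to "degree to which I expect to survive
# this as me". The midpoint/cutoff/steepness values are arbitrary assumptions.
import math

def continuous_survival(d, midpoint=0.5, steepness=10.0):
    """Smooth interpolation: survival expectation falls off gradually with d."""
    return 1.0 / (1.0 + math.exp(steepness * (d - midpoint)))

def discontinuous_survival(d, cutoff=0.5):
    """Hard cutoff: full survival below the threshold, none above it."""
    return 1.0 if d < cutoff else 0.0

for d in [0.0, 0.25, 0.49, 0.51, 0.75, 1.0]:
    print(f"d={d:.2f}  continuous={continuous_survival(d):.3f}  "
          f"step={discontinuous_survival(d):.0f}")
```

If it is the smooth shape, the interesting follow-up is what determines the midpoint and steepness; if it is the step, the interesting follow-up is where the cutoff sits and why.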