If you are a Boltzmann brain, none of this is real and you will blink out of existence in the next second. If you think a happy thought, that’s a good thing. If you move to rescue the child, you will be under stress, and no child will end up being rescued.
If you don’t like the Boltzmann brain gamble, substitute something else where you have it on good authority that nothing is real except your own happiness or whatever.
(My answer is that the tiny possibility that “none of this is real” is wrong is much more important, in the sense that more value is at stake, than the mainline possibility that none of this is real, so the mainline Boltzmann case more or less washes out in the noise and I act as if the things I see are real.)
EDIT: The curiosity thing is a fake justification: I find it suspicious that moving to save the child also happens to be the most interesting experiment you could run.
The injunction “I can’t be in such an epistemic state, so I will go with the autopilot” is a good solution that I hadn’t thought of. But then in the case of pure morality, without epistemic concerns and whatnot, which is better: save the very unlikely child, or think a happy thought? (My answer is above, but I still take the injunction in practice.)
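The wash-out argument above can be made concrete with a toy expected-value comparison. This is a minimal sketch: every probability and utility below is an illustrative assumption, not a number claimed anywhere in the thread.

```python
# Toy expected-value sketch of the "washes out in the noise" argument.
# All numbers below are illustrative assumptions.

p_not_real = 0.999          # assumed probability that you are a Boltzmann brain
v_happy_thought = 1.0       # small payoff of a happy thought, if nothing is real
v_child_saved = 1_000_000   # payoff at stake if the world turns out to be real

# Expected value of acting as if the world is real (rescuing the child):
ev_rescue = (1 - p_not_real) * v_child_saved

# Expected value of optimizing for the Boltzmann case (thinking a happy thought):
ev_happy = p_not_real * v_happy_thought

# Even a tiny chance of "it's real" dominates, because far more value is at stake.
assert ev_rescue > ev_happy
```

The conclusion is insensitive to the exact numbers as long as the value at stake in the “it’s real” branch is large enough relative to the odds ratio.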
Yes, I was aware the curiosity thing is not a valid reason, which is why I only qualify it as “+V”. There are other options which give much greater +V. It is not an optimum.
Regarding your description of the Vs, I guess I’m a bit skewed in that regard. I don’t perceive happy thoughts and stress/sadness as clean-cut + and − utilities. Ceteris paribus, I find stress to be positive utility against the backdrop of “lack of anything”. I think there’s a Type 1 / Type 2 thing going on, with the “conscious” assigning some value to what’s automatic or built-in, but I don’t remember the right vocabulary, and recreating a proper terminology from reductions would take a lot of time better spent studying up on the already-established conventions. Basically, I consciously value all feelings equivalently, with a built-in valuation of what my instinct / human built-in devices value too, such that many small forms of pain are actually more pleasant than not feeling anything in particular, but strong pain is less pleasant than a temporary lack of feeling.
Stuck in a two-branch decision-theoretic problem between “lifelong torture” and “lifelong lack of sensation or feeling”, my current conscious mind is edging towards the former, assuming the latter means I don’t get that rush from curiosity and figuring stuff out anymore. Of course, in practice I’m not quite so sure that none of the built-in mechanisms I have in my brain would get me to choose otherwise.
Anyway, just wanted to chip in that the utilitarian math for the “if I’m a Boltzmann brain, I want a happy thought rather than a bit of stress” case isn’t quite so clear-cut for me personally, since the happy thought might not “succeed” in being produced or in being really happy, and the stress might be valued positively anyway and is probably more likely to “succeed”. This isn’t the real motivation for my choices (so it’s an excuse/rationalization if I decide based on this), but it’s an interesting bit of detail and trivia, IMO.
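The non-monotonic valuation described in this comment (mild sensations, even mildly painful ones, beating the blank baseline, while strong pain falls below it) could be sketched as a toy utility function. The quadratic shape and the threshold are purely illustrative assumptions, not a model anyone in the thread proposed.

```python
def valence(intensity: float) -> float:
    """Toy value of a feeling relative to the 'lack of anything' baseline (0).

    The quadratic shape and the 5.0 threshold are illustrative
    assumptions, not a real psychological model.
    """
    threshold = 5.0  # assumed intensity where a feeling stops beating numbness
    return intensity * (threshold - intensity)

assert valence(0.0) == 0.0   # no feeling at all sits at the baseline
assert valence(2.0) > 0.0    # a small pain is better than feeling nothing
assert valence(8.0) < 0.0    # strong pain is worse than a lack of feeling
```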
Well, if I have evidence that I’m a special kind of telekinetic who can only move stuff with his mind when not physically moving (i.e. not sending signals to my own muscles), rather than a Boltzmann brain, then unless I’m missing something I really do prefer staying immobile and saving the child with my thoughts instead of jumping in and wasting a lot of energy (assuming there are no long-term consequences, like other people seeing me save a child with my mind). But I’d still jump in anyway, because my mental machinery overrides the far knowledge that I can almost certainly do it without moving.
It would take a lot of actual training to overcome this and start actually using the telekinesis. I think in such a situation an ideal rationalist would use telekinesis instead of jumping in the water, not to mention the practical advantages of saving the child faster and in a safer manner (also with no risk to yourself!), assuming you have that level of control over your telekinetic powers.
That’s a good one, I lean towards jumping in as well, but you are right that the ideal says “use the force”.
Doesn’t fit the Pascal’s Wager pattern, though…
EDIT: It seems a reliable hypothesis that intuition will go with whatever is best in the near-mode case, never mind this “probability” and “utility” stuff.
Well, to make it fit Pascal’s Wager pattern a bit more, assume that you’re aware that telekinetics like you sometimes have a finite, very small amount of physical energy you can spend during your entire life, and once you’re out of it you die. You have unlimited “telekinetic energy”. Saving the child is, if this is true, going to chop off a good 95% of your remaining lifespan and permanently sacrifice any possibility of becoming immortal.
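Under that modification the choice does take on the Pascal’s Wager shape: a tiny probability attached to an enormous stake dominates the comparison. A minimal sketch with made-up numbers (all of them assumptions):

```python
# Toy expected-value sketch of the modified telekinesis wager.
# All probabilities and values below are illustrative assumptions.

p_energy_theory = 0.001   # assumed chance the finite-physical-energy theory is true
v_child = 50.0            # assumed value of rescuing the child (either way)
v_lifespan_lost = 95.0    # value of the ~95% of remaining lifespan at risk
v_immortality = 1e6       # huge assumed value of the shot at immortality

# Jumping in always saves the child, but if the theory is true it costs
# most of your lifespan plus any possibility of immortality:
ev_jump = v_child - p_energy_theory * (v_lifespan_lost + v_immortality)

# Telekinesis saves the child at no physical cost (given full control):
ev_telekinesis = v_child

# The tiny-probability, huge-stakes branch dominates the comparison:
assert ev_telekinesis > ev_jump
```

As in the original wager, the result is driven almost entirely by the size of the stake (here, immortality), not by the small probability attached to it.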
If you move to rescue the child, you will be under stress and no child will end up being rescued.
Boltzmann brains aren’t actually able to put themselves under stress, any more than they can rescue children or even think.
Aside from this, I’m not sure I accept the assumption that I should care about the emotional experiences of Boltzmann brains (or representations of there being such experiences). That is, I believe I reject:
If you are a boltzmann brain, none of this is real and you will blink out of existence in the next second. If you think a happy thought, that’s a good thing.
For the purpose of choosing my decisions and decision-making strategy to optimize the universe towards a preferred state, I would weigh influence over the freaky low-entropy part of the universe (i.e. what we believe exists) more than influence over the ridiculous amounts of noise that happen to include Boltzmann brains of every kind, even if my decisions had any influence over the latter at all.
There is a caveat: the above would be different if I were able to colonize and exploit the high-entropy parts of the universe somehow, but even then it wouldn’t be the noise that includes Boltzmann brains that I valued, but whatever little negentropy remained to be harvested. If I happened to seek out and find copies of myself within the random fluctuations and preserve them, then I would consider what I was doing to be, roughly speaking, creating clones of myself via a rather eccentric and inefficient engineering process, ‘search for a state matching the specification, then remove everything else’ rather than ‘put stuff into a state matching the specification’.
You’re right, an actual Boltzmann brain would not have time to do either. It was just an illustrative example to get you to think of something like Pascal’s Wager with inverted near-mode and far-mode.
If you don’t like the Boltzmann brain gamble, substitute something else where you have it on good authority that nothing is real except your own happiness or whatever.
It was just an illustrative example to get you to think of something like Pascal’s Wager with inverted near-mode and far-mode.
It was mainly the Boltzmann brain component that caught my attention. Largely because yesterday I was considering how the concept of “Boltzmann’s Marbles” bears on when and whether there was a time that could make the statement “There was only one marble in the universe” true.
Interesting. Again, substitute a new example that does have the desired properties.
Or is that the wrong way around? Hmm.