Of course it doesn’t actually work on humans. The question is: should it work?
Well, that feels like an obvious no. I’m human, though, so that sense of obviousness is worth very little.
My thought is to compare, with full weighing of the evidence (including that anyone making a threat is more likely to make some other, more credible one than this), the EV of a policy of refusing Pascal’s Mugging (accepting a few occasional, very tiny odds of a very huge calamity) against the EV of a policy of giving in to it.
A policy of handing over the money seems like posting “Please Pascal-mug me!” publicly on reddit or a facebook full of rationalists or something. You’re bound to make the odds shoot up cumulatively by inviting more instances of the mugging, including the case where someone takes your money and still somehow executes the hugely -EV thing they’re threatening you with. The EV clearly seems better under a policy of refusing the mugging, doesn’t it?
This seems to support the idea that a fully rational TDT agent would refuse Pascal’s Mugging, IMO. Feedback / why-you’re-wrong appreciated.
Let’s move away from the signalling and such, so that such a policy does not lead to a larger-than-five-bucks loss (though no amount of signalling loss actually outweighs the 3^^^3 * P(3^^^3) term).
Assume you receive some mild evidence that is more likely in the case of an imminent -EV singularity than an imminent +EV singularity. Maybe you find a memo floating out of some “abandoned” basement window that contains some design notes for a half-assed FAI. Something that updates you toward the bad case (but only by a very little amount; barely credible). Do you sneak in and microwave the hard drive? Our current best understanding of an ideal decision theorist says it would.
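To put a rough number on “a very little amount”, here is a minimal sketch of the update and the resulting EV comparison. Every number in it (the prior odds, the likelihood ratio, the utilities) is a made-up placeholder, and it generously assumes microwaving the drive fully averts the bad case; it is only meant to show the shape of the calculation.

```python
# Minimal sketch (all numbers made up): a barely-credible clue, a huge stake.

prior_odds = 1e-6         # prior odds that this basement hosts an imminent -EV singularity
likelihood_ratio = 2.0    # the memo is assumed twice as likely if the bad case is true
posterior_odds = prior_odds * likelihood_ratio
p_bad = posterior_odds / (1 + posterior_odds)    # convert odds back to a probability

u_averted = 1e15          # placeholder stake; the real stake would be far larger
u_cost = 1e3              # trespassing, one ruined hard drive, some embarrassment

# Assumes (generously) that microwaving the drive averts the bad case outright.
ev_microwave = p_bad * u_averted - u_cost
ev_walk_away = 0.0

print(p_bad)                          # still tiny, about 2e-6
print(ev_microwave > ev_walk_away)    # True: the huge stake dominates the small cost
```

The posterior barely moves, but the size of the stake still swamps the small, certain cost of acting.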
I tried to make that isomorphic to the meat of the problem. Do you think I got it? We face problems isomorphic to that every day, and we don’t tend to act on them.
Now consider that you observe some reality-glitch that causes you to conclude that you are quite sure you are a Boltzmann brain. At the same time, you see a child that is drowning. Do you think a happy thought (the best idea if you are a Boltzmann brain, IMO), or move quickly to save the child (a much better idea, but only in the case where the child is real)? I would still try to save the child (I hope), as would an ideal decision theorist.
Those two examples are equivalent as far as our ideal decision theorist is concerned, but our intuitions about them point in opposite directions. Who’s wrong?
I lean toward our intuition being wrong, because it depends on a lot of irrelevant stuff, like whether the utilities are near-mode (drowning child, microwaved hard drive) or far-mode (+/-EV singularity, Boltzmann brain). Also, all the easy ways to make the decision theorist come out wrong don’t work unless you are ~100% sure that unbounded expected utility maximization is stupid.
On the other hand, I’m still not about to start paying out on Pascal’s Wager.
In this example I very much agree, but not for any magical sentiment or urge. I simply don’t trust my brain, and my ability to gather knowledge and infer/deduce the right things, enough to override the high base rate that there is an actual drowning child I should go save. It would take way more conclusive evidence and experimentation to confirm the Boltzmann brain hypothesis, and then some more to make sure the drowning child is actually such a phenomenon (did I get that right? I have no idea what a Boltzmann brain is).
Regarding the first example, that’s a very good case. I don’t see myself facing situations that could be framed similarly very often, though, to be honest. In such a case I would probably do something equivalent to the hard-drive-microwaving tactic, but I would first attempt to run failsafes in case I’m the one seeing things wrong; this is comparable to the injunction against taking power into your own hands because you’re “obviously” able to make the best use of it. There are all kinds of reasons I might be wrong about the FAI, and might be doing something wrong by microwaving the drive. Faced with a hard now-or-never setting with immediate, permanent consequences, a clean two-path decision tree (astronomically unlikely in the real world; we usually just get the illusion of one), I would definitely take the microwave option. In more likely scenarios, though, there are all kinds of other things to do.
Assume you have such evidence.
You got it, I think.
Well, if I do have such evidence, then it’s time for some Bayes. If I’ve got the right math, it’ll depend on information that I don’t have: what actually happens if I’m a Boltzmann brain and I try to save the kid anyway?
The unknown information seems to be outweighed by the clear +EV of saving the child, but I have a hard time quantifying such unknown unknowns even with WAGs (wild-ass guesses), and my mastery of continuous probability distributions isn’t up to par to properly calculate something like this anyway.
In this case my curiosity about what might happen is actually a +V, but even without that, I think I’d still try to save the child. My best guess is basically “my built-in function for evaluating this says save the child, this function apparently knows more about the problem than I do, and I have no valid math that says otherwise, so let’s go with that.”
If you are a Boltzmann brain, none of this is real and you will blink out of existence in the next second. If you think a happy thought, that’s a good thing. If you move to rescue the child, you will be under stress and no child will end up being rescued.
If you don’t like the Boltzmann brain gamble, substitute something else where you have it on good authority that nothing is real except your own happiness or whatever.
(My answer is that the tiny possibility that “none of this is real” is wrong is much more important, in the sense that more value is at stake, than the mainline possibility that none of this is real, so the mainline Boltzmann case more or less washes out in the noise and I act as if the things I see are real.)
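In rough numbers (all made up; the utilities are placeholders rather than anything I would defend), the comparison looks something like this, and the conclusion survives even when I am nearly certain I’m a Boltzmann brain:

```python
# Rough numbers, purely for illustration.
p_real = 1e-6            # my remaining credence that the child (and the world) is real
u_saved_child = 1e7      # placeholder value of actually saving a real child
u_happy_thought = 1.0    # placeholder value of one fleeting happy thought
u_stress = -1.0          # placeholder value of a fleeting moment of pointless stress

ev_save = p_real * u_saved_child + (1 - p_real) * u_stress
# Ignores the drowned child in the real-world branch; counting it would only widen the gap.
ev_happy = p_real * 0.0 + (1 - p_real) * u_happy_thought

print(ev_save, ev_happy)   # roughly 9.0 vs 1.0: the mainline Boltzmann case washes out
```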
EDIT: The curiosity thing is a fake justification: I find it suspicious that moving to save the child also happens to be the most interesting experiment you could run.
The injunction “I can’t be in such an epistemic state, so I will go with the autopilot” is a good solution that I hadn’t thought of. But then, in the case of pure morality, without epistemic concerns and whatnot, which is better: save the very unlikely child, or think a happy thought? (My answer is above, but I would still take the injunction in practice.)
Yes, I was aware the curiosity thing is not a valid reason, which is why I only qualify it as “+V”. There are other options which give much greater +V. It is not an optimum.
Regarding your description of the Vs, I guess I’m a bit skewed in that regard. I don’t perceive a happy thought and stress/sadness as clean-cut positive and negative utilities. Ceteris paribus, I find stress to be positive utility against the backdrop of “lack of anything”. I think there’s a Type 1 / Type 2 thing going on, with the “conscious” part assigning some value to what’s automatic or built-in, but I don’t remember the right vocabulary, and recreating a proper terminology from reductions would take a lot of time better spent studying up on the already-established conventions. Basically, I consciously value all feelings equivalently, with a built-in valuation of what my instincts / human built-in devices value too, such that many small forms of pain are actually more pleasant than not feeling anything in particular, but strong pain is less pleasant than a temporary lack of feeling.
Stuck in a two-branch decision-theoretic problem between “lifelong torture” and “lifelong lack of sensation or feeling”, my current conscious mind is edging towards the former, assuming the latter means I don’t get that rush from curiosity and figuring stuff out anymore. Of course, in practice I’m not quite so sure that none of the built-in mechanisms I have in my brain would get me to choose otherwise.
Anyway, I just wanted to chip in that the utilitarian math for the “if I’m a Boltzmann brain, I want a happy thought rather than a bit of stress” case isn’t quite so clear-cut for me personally, since the happy thought might not “succeed” in being produced or in being really happy, and the stress might be valued positively anyway and is probably more likely to “succeed”. This isn’t the real motivation for my choices (so it’s an excuse/rationalization if I decide based on it), but it’s an interesting bit of detail and trivia, IMO.
Interesting. Again, substitute a new example that does have the desired properties.
Well, if I have evidence that, instead of being a Boltzmann brain, I’m a special kind of telekinetic who can only move stuff with his mind while not physically moving (i.e. not sending signals to my own muscles), then unless I’m missing something I really do prefer staying immobile and saving the child with my thoughts instead of jumping in and wasting a lot of energy (assuming there are no long-term consequences, like other people seeing me save a child with my mind). But I’d still jump in anyway, because my mental machinery overrides the far-mode knowledge that I can almost certainly do it without moving.
It would take a lot of actual training to overcome this and start actually using the telekinesis. I think in such a situation an ideal rationalist would use telekinesis instead of jumping in the water, not to mention the practical advantages of saving the child faster and in a safer manner (also with no risk to yourself!), assuming you have that level of control over your telekinetic powers.
That’s a good one. I lean towards jumping in as well, but you’re right that the ideal agent says “use the force”.
It doesn’t fit the Pascal’s Wager pattern, though...
EDIT: It seems a reliable hypothesis that intuition will go with whatever is best in the near-mode case, never mind this “probability” and “utility” stuff.
Well, to make it fit the Pascal’s Wager pattern a bit more, assume you’re aware that telekinetics like you sometimes have a finite, very small amount of physical energy they can spend during their entire life, and once they’re out of it they die. You have unlimited “telekinetic energy”. Saving the child by jumping in is, if this is true, going to chop off a good 95% of your remaining lifespan and permanently sacrifice any possibility of becoming immortal.
Or is that the wrong way around? Hmm.
Boltzmann brains aren’t actually able to put themselves under stress, any more than they can rescue children or even think.
Aside from this, I’m not sure I accept the assumption that I should care about the emotional experiences of Boltzmann brains (or representations of there being such experiences). That is, I believe I reject:
If you are a Boltzmann brain, none of this is real and you will blink out of existence in the next second. If you think a happy thought, that’s a good thing.
For the purpose of choosing my decisions and decision-making strategy so as to optimize the universe towards a preferred state, I would weight influence over the freakishly low-entropy part of the universe (i.e. what we believe exists) more heavily than influence over the ridiculous amounts of noise that happen to include Boltzmann brains of every kind, even supposing my decisions had any influence over the latter at all.
There is a caveat: the above would be different if I were able to colonize and exploit the high-entropy parts of the universe somehow, but even then it wouldn’t be the noise-including-Boltzmann-brains that I valued, but whatever little negentropy remained to be harvested. If I happened to seek out copies of myself within the random fluctuations and preserve them, then I would consider what I was doing to be, roughly speaking, creating clones of myself via a rather eccentric and inefficient engineering process involving “search for a state matching the specification, then remove everything else” rather than “put stuff into a state matching the specification”.
You’re right, an actual Boltzmann brain would not have time to do either. It was just an illustrative example to get you to think of something like Pascal’s Wager with near-mode and far-mode inverted.
It was mainly the Boltzmann brain component that caught my attention, largely because yesterday I was considering how the concept of “Boltzmann’s Marbles” bears on when and whether there was a time that could make the statement “there was only one marble in the universe” true.
You still haven’t actually calculated the disutility of having a policy of giving the money, versus a policy of not giving the money. You’re just waving your hands. Saying “the EV clearly seems better” is no more helpful than your initial “obvious”.
The calculation I had in mind was basically this: if those policies really do have those effects, then which one is superior depends entirely on the ratio between (1) the difference in the likelihood of a large calamity when you pay versus when you don’t, and (2) the actual increase in the frequency of muggings.
The math I have, the way I understand it, removes the actual -EV of the mugging from the equation (keeping only the difference) and saves some disutility calculation. In my mind, you’d need some pretty crazy values for the above ratio for the policy of accepting Pascal’s Muggings to be worthwhile; my WAGs are 2% for the first quantity and about 1000% for the second, with a base rate of around 5 total muggings if you have a policy of denying them.
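For what it’s worth, here is the back-of-the-envelope version in code, with those WAGs plugged in. I’m reading the 2% as a relative reduction in the calamity probability, and the per-mugging calamity probability is a pure placeholder, since it (and the calamity’s disutility) cancels out of the comparison anyway:

```python
# Back-of-the-envelope policy comparison, using my WAGs.
n_deny = 5              # expected lifetime muggings under a denial policy
freq_increase = 10.0    # a 1000% increase: paying means ~11x as many muggings
n_pay = n_deny * (1 + freq_increase)

p_calamity = 1e-20      # per-mugging chance the threat is real and carried out (placeholder)
rel_reduction = 0.02    # paying reduces that chance by 2% (read as a relative reduction)

# Expected number of calamities under each policy. The $5 losses are noise by comparison,
# and the calamity's disutility multiplies both sides equally, so it cancels.
calamities_deny = n_deny * p_calamity
calamities_pay = n_pay * p_calamity * (1 - rel_reduction)

print(calamities_pay / calamities_deny)   # about 10.78: paying looks ~11x worse
# Paying only wins if (1 + freq_increase) * (1 - rel_reduction) < 1.
```

With these guesses, paying comes out roughly eleven times worse; it only wins if paying barely attracts extra muggings or reduces the calamity probability by a huge fraction.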
I have high confidence that the true values stay within the range that makes the denial policy favorable, and I find the values that would be required to favor the acceptance policy highly unlikely under my priors.
Apologies if it seemed like I was blowing air. I actually did some stuff on paper, but posting it seemed irrelevant when the vast majority of LW users appear to have far better mastery of mathematics and the ability to do such calculations far faster than I can. I thought I’d sufficiently restricted the space of possible calculations with my description in the grandparent.
I might still be completely wrong, though. For average math problems, my math has errors a full 25% of the time until I’ve actually programmed or tested it somehow.
Don’t worry, the chance of being wrong only costs you 3^^^3*.25 expected utilons, or so.
Hah, that made me chuckle. I ought to remind myself to bust out the python interpreter and test this thing when I get home.