Unlike my (present) traits, my future decisions don’t yet exist, and hence cannot leak anything or become entangled with anyone.
Your future decisions are entangled with your present traits, and thus can leak. If you picture a Bayesian network with the nodes “Current Brain”, “Future Decision”, and “Current Observation”, with arrows from Current Brain to the two other nodes, then knowing the value of Current Observation gives you information about Future Decision.
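For anyone who wants to poke at that claim directly, here is a minimal sketch of the network described above. The node names come from the paragraph; the states and probabilities are made-up illustrative values, not anything from the thought experiment:

```python
# Toy common-cause network: Current Brain -> Current Observation, Current Brain -> Future Decision.
# All numbers are invented purely to illustrate the structure.
p_brain = {"one-boxer-ish": 0.5, "two-boxer-ish": 0.5}

# P(observation | brain) and P(decision | brain); the predictor is treated as
# imperfect here, so none of these are 0 or 1.
p_obs = {
    ("looks-one", "one-boxer-ish"): 0.9, ("looks-two", "one-boxer-ish"): 0.1,
    ("looks-one", "two-boxer-ish"): 0.2, ("looks-two", "two-boxer-ish"): 0.8,
}
p_dec = {
    ("one-box", "one-boxer-ish"): 0.9, ("two-box", "one-boxer-ish"): 0.1,
    ("one-box", "two-boxer-ish"): 0.1, ("two-box", "two-boxer-ish"): 0.9,
}

def p_decision(decision, observation=None):
    """P(decision) or P(decision | observation), marginalising out the brain state."""
    num = den = 0.0
    for brain, pb in p_brain.items():
        w = pb if observation is None else pb * p_obs[(observation, brain)]
        den += w
        num += w * p_dec[(decision, brain)]
    return num / den

print(p_decision("one-box"))               # prior: 0.5
print(p_decision("one-box", "looks-one"))  # ~0.75: the observation carries information
```

There is no arrow from Current Observation to Future Decision, yet conditioning on the observation shifts the probability of the decision, because both share Current Brain as a parent.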
Obviously the alien is better than a human at running this game (though, note that a human would only have to be right a little more than 50% of the time to make one-boxing have the higher expected value—in fact, that could be an interesting test to run!). Perhaps it can observe your neurochemistry in detail and in real time. Perhaps it simulates you in this precise situation, and just sees whether you pick one or both boxes. Perhaps land-ape psychology turns out to be really simple if you’re an omnipotent thought-experiment enthusiast.
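To put a number on "a little more than 50%": assuming the canonical payoffs (the $1,000,000 mentioned later in the thread in the opaque box, plus the standard $1,000 in the transparent box, which is my assumption here rather than anything stated), a predictor with accuracy $p$ gives

$$
\begin{aligned}
\mathbb{E}[\text{one-box}] &= p \cdot 1{,}000{,}000,\\
\mathbb{E}[\text{two-box}] &= (1-p)\cdot 1{,}000{,}000 + 1{,}000,
\end{aligned}
$$

and the two are equal at $p = \frac{1{,}001{,}000}{2{,}000{,}000} = 0.5005$, so any accuracy above about 50.05% makes one-boxing the higher expected value.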
The reasoning wouldn’t be “this person is a one-boxer” but rather “this person will pick one box in this particular situation”. It’s very difficult to be the sort of person who would pick one box in the situation you are in without actually picking one box in the situation you are in.
One use of the thought experiment, other than the “non-causal effects” thing, is getting at this notion that the “rational” thing to do (as you suggest two-boxing is) might not be the best thing. If it’s worse, just do the other thing—isn’t that more “rational”?
knowing the value of Current Observation gives you information about Future Decision.
Here I’d just like to note that one must not assume all subsystems of Current Brain remain constant over time. And what if the brain is partly a chaotic system? (AND new information flows in all the time… Sorry, I cannot endorse this model as presented.)
Perhaps it can observe your neurochemistry in detail and in real time.
I already mentioned this possibility. Fallible models make the situation gameable. I’d get together with my friends, try to figure out when the model predicts correctly, calculate its accuracy, work out a plan for who picks what, and split the profits between ourselves. How’s that for rationality? To get around this, the alien needs to predict our plan and—do what? Our plan treats his mission like total garbage. Should he try to make us collectively lose out? But that would hamper his initial design.
(Whether it cares about such games or not, what input the alien takes, when, how, and what exactly it does with said input—everything counts in charting an optimal solution. You can’t just say it uses Method A and then replace it with Method B when convenient. THAT is the point: Predictive methods are NOT interchangeable in this context. (Reminder: Reading my brain AS I make the decision violates the original conditions.))
Perhaps land-ape psychology turns out to be really simple if you’re an omnipotent thought-experiment enthusiast.
We’re veering into uncertain territory again… (Which would be fine if it weren’t for the vagueness of mechanism inherent in magical algorithms.)
The reasoning wouldn’t be “this person is a one-boxer” but rather “this person will pick one box in this particular situation”.
Second note: An entity, alien or not, offering me a million dollars, or anything remotely analogous to this, would be a unique event in my life with no precedent whatever. My last post was written entirely under the assumption that the alien would be using simple heuristics based on similar decisions in the past. So yeah, if you’re tweaking the alien’s method, then disregard all that.
It’s very difficult to be the sort of person who would pick one box in the situation you are in without actually picking one box in the situation you are in.
From the alien’s point of view, this is epistemologically non-trivial if my box-picking nature is more complicated than a yes-no switch. Even if the final output must take the form of a yes or a no, the decision tree that generated that result can be as endlessly complex as I want, every step of which the alien must predict correctly (or be a Luck Elemental) to maintain its reputation of infallibility.
If it’s worse, just do the other thing—isn’t that more “rational”?
As long as I know nothing about the alien’s method, the choice is arbitrary. See my second note. This is why the alien’s ultimate goals, algorithms, etc., MATTER.
(If the alien reads my brain chemistry five minutes before The Task, his past history is one of infallibility, and no especially cunning plan comes to mind, then my bet regarding the nature of brain chemistry would be that anything other than taking just the one box is silly if I want the million dollars. I mean, he’ll read my intentions and place the money (or not) like five minutes before… (At least that’s what I’ll determine to do before the event. Who knows what I’ll end up doing once I actually get there. (Since even I am unsure of the strength of my determination to keep to this course of action once I’ve been scanned, the conscious minds of both me and the alien are freed from culpability. Whatever happens next, only the physical stance is appropriate for the emergent scenario. ((“At what point, then, does decision theory apply here?” is what I was getting at.) Anyway, enough navel-gazing and back to Timeless Decision Theory.))))
knowing the value of Current Observation gives you information about Future Decision.
Here I’d just like to note that one must not assume all subsystems of Current Brain remain constant over time. And what if the brain is partly a chaotic system? (AND new information flows in all the time… Sorry, I cannot endorse this model as presented.)
Well… okay, but the point I was making was milder and pretty uncontroversial. Are you familiar with Bayesian networks?
Perhaps it can observe your neurochemistry in detail and in real time.
I already mentioned this possibility. Fallible models make the situation gameable. I’d get together with my friends, try to figure out when the model predicts correctly, calculate its accuracy, work out a plan for who picks what, and split the profits between ourselves. How’s that for rationality? To get around this, the alien needs to predict our plan and—do what? Our plan treats his mission like total garbage. Should he try to make us collectively lose out? But that would hamper his initial design.
(Whether it cares about such games or not, what input the alien takes, when, how, and what exactly it does with said input—everything counts in charting an optimal solution. You can’t just say it uses Method A and then replace it with Method B when convenient. THAT is the point: Predictive methods are NOT interchangeable in this context. (Reminder: Reading my brain AS I make the decision violates the original conditions.))
I never said it used Method A? And what is all this about games? It predicts your choice.
You’re not engaging with the thought experiment. How about this—how would you change the thought experiment to make it work properly, in your estimation?
Perhaps land-ape psychology turns out to be really simple if you’re an omnipotent thought-experiment enthusiast.
We’re veering into uncertain territory again… (Which would be fine if it weren’t for the vagueness of mechanism inherent in magical algorithms.)
Well, yeah. We’re in uncertain territory as a premise.
The reasoning wouldn’t be “this person is a one-boxer” but rather “this person will pick one box in this particular situation”.
Second note: An entity, alien or not, offering me a million dollars, or anything remotely analogous to this, would be a unique event in my life with no precedent whatever. My last post was written entirely under the assumption that the alien would be using simple heuristics based on similar decisions in the past. So yeah, if you’re tweaking the alien’s method, then disregard all that.
I’m not tweaking the method. There is no given method. The closest to a canonical method that I’m aware of is simulation, which you elided in your reply.
It’s very difficult to be the sort of person who would pick one box in the situation you are in without actually picking one box in the situation you are in.
From the alien’s point of view, this is epistemologically non-trivial if my box-picking nature is more complicated than a yes-no switch. Even if the final output must take the form of a yes or a no, the decision tree that generated that result can be as endlessly complex as I want, every step of which the alien must predict correctly (or be a Luck Elemental) to maintain its reputation of infallibility.
What makes you think you’re so special—compared to the people who’ve been predicted ahead of you?
If it’s worse, just do the other thing—isn’t that more “rational”?
As long as I know nothing about the alien’s method, the choice is arbitrary. See my second note. This is why the alien’s ultimate goals, algorithms, etc., MATTER.
If you know nothing about the alien’s methods, there still is a better choice. You do not have the same expected value for each choice.
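As a sketch of why the expected values come apart, here is the break-even calculation from earlier written as code. The payoffs are the same assumed standard ones ($1,000,000 and $1,000), and p stands for your credence that the prediction will match whatever you choose:

```python
# Expected values given a credence p that the prediction matches my choice,
# with the assumed standard payoffs: $1,000,000 in the opaque box, $1,000 in the other.
def expected_values(p):
    one_box = p * 1_000_000
    two_box = (1 - p) * 1_000_000 + 1_000
    return one_box, two_box

for p in (0.5, 0.5005, 0.9, 0.99):
    one, two = expected_values(p)
    print(f"p={p}: one-box={one:,.0f}  two-box={two:,.0f}")
# The two columns coincide only at p = 0.5005; for any other credence
# the two choices have different expected values.
```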
(If the alien reads my brain chemistry five minutes before The Task, his past history is one of infallibility, and no especially cunning plan comes to mind, then my bet regarding the nature of brain chemistry would be that anything other than taking just the one box is silly if I want the million dollars. I mean, he’ll read my intentions and place the money (or not) like five minutes before… (At least that’s what I’ll determine to do before the event. Who knows what I’ll end up doing once I actually get there. (Since even I am unsure of the strength of my determination to keep to this course of action once I’ve been scanned, the conscious minds of both me and the alien are freed from culpability. Whatever happens next, only the physical stance is appropriate for the emergent scenario. ((“At what point, then, does decision theory apply here?” is what I was getting at.) Anyway, enough navel-gazing and back to Timeless Decision Theory.))))