I don’t understand why the Smoking Lesion is a problem for evidential decision theory. I would simply accept that in the scenario given, you shouldn’t smoke. And I don’t see why you assert that this doesn’t lessen your chances of getting cancer, except in the same sense that two-boxing doesn’t lessen your chances of getting the million.
I would just say: in the scenario given, you should not smoke, and this will improve your chances of not getting cancer.
If you doubt this, consider the case where the correlation is known to be 100%: every person who ever smoked up till now had the lesion and developed cancer, while every person who did not smoke did not have the lesion. This was true also of people who knew about the Lesion. Do you still say it’s a good idea to smoke?
If the correlation is 100%, it doesn’t mean that you can choose whether or not you’ll have cancer. It means that if you have the lesion, then some combination of logic, rationalisation or impulse will make you decide to smoke (and if you don’t, then similarly you’ll end up not smoking). You can then tell from your decision whether you’ll get cancer or not, but you couldn’t have made the other decision, no matter what.
(Either that, or you can be the first person to try using EDT for it, and that way you get to be the person who breaks the 100% correlation and gets cancer without smoking)
You can say the same thing about Newcomb’s problem. It doesn’t mean you can choose whether or not there will be a million in one of the boxes. It means that if there is a million in one of the boxes, then “some combination of logic, rationalisation or impulse will make you decide” to choose only one of the boxes (and if there’s no million, then similarly you’ll end up taking both boxes.) “You can then tell from your decision whether” you’ll get the million or not, “but you couldn’t have made the other decision, no matter what.”
Either that, or you can be the first to outguess Omega and get the million as well as the thousand...
Nope, this reasoning doesn’t work with Newcomb, and it doesn’t work with the Smoking Lesion. If you want to win, you one-box, and you don’t smoke.
One potentially-significant difference: in Newcomb, it is precisely the fact that you’re disposed to two-box that causes you to lose out. (Omega is detecting and responding to this very disposition.) In Smoking Lesion, the disposition to smoke is intrinsically harmless; it merely happens to be correlated (due to a common cause) with a disposition to get cancer.
(But if you’re right that the two cases are on a par, then that would significantly increase my confidence that two-boxing is rational. The smoking lesion case is by far the more obvious of the two.)
Responding to the supposed difference between the cases:
Omega puts the million in the box or not before the game has begun, depending on your former disposition to one-box or two-box.
Then the game begins. You are considering whether to one-box or two-box. Then the choice to one-box or two-box is intrinsically harmless; it merely happens to be correlated with your previous disposition and with Omega’s choice. Likewise, your present disposition to one-box or two-box is also intrinsically harmless. It is merely correlated with your previous disposition and with Omega’s choice.
You can no more change your previous disposition than you can change whether you have the lesion, so the two cases are equivalent.
And if people’s actions are deterministic, then in theory there could be an Omega that is 100% accurate. Nor would there be a need for simulation; as cousin_it has pointed out, it could “analyze your source code” and come up with a proof that you will one-box or two-box. In this case the 100% correlated smoking lesion and Newcomb would be precisely equivalent. The same is true if each has a 90% correlation, and so on.
Nor would there be a need for simulation; as cousin_it has pointed out, it could “analyze your source code” and come up with a proof that you will one-box or two-box.
If some subset of the information contained within you is sufficient to prove what you will do, simulating that subset is a relevant simulation of you.
I’m not sure what kind of proof you could do without going through the steps such that you essentially produced a simulation.
Could you give an example of the type of proof you’re proposing, so I can judge for myself whether it seems to involve running through the relevant steps?
See cousin_it’s post: http://lesswrong.com/lw/2ip/ai_cooperation_in_practice/
Many programs can be proven to have a certain result without any simulation, not even of a subset of the information. For example, think of a program that discovers the first 10,000 primes, increasing a counter by one for each prime it finds, and then stops. You can prove that the counter will equal 10,000 when it stops, without simulating this program.
See, to me that is a mental simulation of the relevant part of the program.
The counter will increase, point by point; it will remain an integer at each point and pass through every integer, and upon reaching 10,000 the predicted result will happen.
The fact that the relevant part of the program is as ridiculously simple as a counter just means that the simulation is easy.
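For concreteness, the counter program discussed above might look like the following (a minimal sketch; Python and the trial-division test are my own choices, since the original comment doesn’t specify a language). The point of contention is whether the invariant noted in the final comment can be established without stepping through the loop.

```python
def count_primes(target=10_000):
    """Find the first `target` primes, increasing a counter by one for each."""
    counter = 0
    candidate = 1
    while counter < target:
        candidate += 1
        # trial-division primality check
        if all(candidate % d != 0 for d in range(2, int(candidate ** 0.5) + 1)):
            counter += 1
    # The loop exits only when counter == target, and counter increases by at
    # most one per iteration, so at this point counter equals exactly target.
    return counter

print(count_primes())  # 10000
```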
So would you smoke even if the previous correlation were 100%, and included those who knew about the Lesion?
This could happen in reality, if everyone who smoked, smoked because he wanted to, and if everyone who sufficiently desired it did so, and if the sufficient desire for smoking was completely caused by the lesion. In other words, by choosing to smoke, you would be showing that you had sufficient desire, and therefore the lesion, and by choosing not to smoke, you would be showing that you did not have sufficient desire, and therefore not the lesion.
Under these circumstances, if you chose not to smoke, would you expect to get cancer, since you knew that you had some desire for smoking? (Presumably whether the desire was sufficient or not would not be evident to introspection, but only from whether or not you ended up smoking.) Or choosing to smoke, would you expect not to get cancer, since you say it doesn’t make any difference to whether you have the lesion?
For the correlation to be 100%, smoking would have to be ABSOLUTELY IRRESISTIBLE to people with the lesion.
Hence, if I had the lesion, I would smoke. I wouldn’t be able to resist doing so.
And of course smoking would have to be ABSOLUTELY UNTHINKABLE for people without the lesion.
Hence, if I didn’t have the lesion, I wouldn’t smoke; I wouldn’t even be able to try it.
I think that the “ABSOLUTELY IRRESISTIBLE” and “ABSOLUTELY UNTHINKABLE” language can be a bit misleading here. Yes, someone with the lesion is compelled to smoke, but his experience of this may be the experience of spending days deliberating about whether to smoke, even though, all along, he was just running along pre-prepared rails and the end result was inevitable.
If we assume determinism, however, we might say this about any decision. If someone makes a decision, it is because his brain was in such a state that it was compelled to make that decision, and any other decision was “UNTHINKABLE”. We don’t normally use language like that, even if we subscribe to such a view of decisions, because “UNTHINKABLE” implies a lot about the experience itself rather than just implying something about the certainty of a particular action or the compulsion towards it.
I could walk to the nearest bridge to jump off, and tell myself all along that, to someone whose brain was predisposed to jumping off the bridge, not doing it was unthinkable, so any attempt on my part to decide otherwise is meaningless. Acknowledging some kind of fatalism is one thing, but injecting it into the middle of our decision processes seems to me to be asking for trouble.
If we assume determinism, however, we might say this about any decision.
Not really. The lesion is a single aspect that completely determines a decision.
For most decisions, far more of the brain/mind than just one small, otherwise irrelevant, part can have some influence on the outcome.
But the lesion is clearly different, IF it has a 100% correlation.
Acknowledging some kind of fatalism is one thing, but injecting it into the middle of our decision processes seems to me to be asking for trouble.
When making a decision on something where I know my thought-process is irrelevant, why should I not be fatalistic?
There is no decision-making process in the 100%-lesion case, the decision is MADE, it’s right there in the lesion.
EDIT: Here’s something analogous to the 100% lesion: you have a light attached to your head. If it blinks red, it’ll make you feel happy, but it’ll blow up in an hour. It’s not linked to the rest of your brain at all. Should you try and make a decision about whether to have it blink red?
There is no decision-making process in the 100%-lesion case, the decision is MADE, it’s right there in the lesion.
There is no decision-making process anyway, every decision is made, it’s right there in the frontal/temporal/occipital/parietal lobe, right?
Here’s something analogous to the 100% lesion: you have a light attached to your head. If it blinks red, it’ll make you feel happy, but it’ll blow up in an hour. It’s not linked to the rest of your brain at all. Should you try and make a decision about whether to have it blink red?
The red light blinking doesn’t feel like a decision. According to the lesion scenario, the lesion-influenced decisions feel exactly like other decisions. It is an important difference. And I am not sure why you have included both the happy feeling and the explosion, by the way.
There is no decision-making process anyway, every decision is made, it’s right there in the frontal/temporal/occipital/parietal lobe, right?
If you can point to a specific part of my brain that has no purpose other than to make me have bacon for breakfast on Tuesday 24th of August, 2010? And that can’t be over-ruled by any other parts of my brain?
That decision involved more than just one spot in my brain. All the parts of my brain involved do more than one thing.
So, no, the real world isn’t like the lesion example.
The red light blinking doesn’t feel like a decision. According to the lesion scenario, the lesion-influenced decisions feel exactly like other decisions. It is an important difference.
Okay, let’s change it slightly: Instead of the happy feeling, you get a feeling of “I decided to do this” when the light blinks red.
Is that a better analogy for you?
Whether you think about it or not, you end up feeling like you made the decision. Just like in the lesion case.
If you can point to a specific part of my brain that has no purpose other than to make me have bacon for breakfast on Tuesday 24th of August, 2010? And that can’t be over-ruled by any other parts of my brain?
I can’t; however, it doesn’t imply that the decision about the breakfast is spread across the whole brain. Moreover, why is it so important to have it localised? What if the lesion is in fact only a slightly different concentration of chemicals spread across the whole brain, which I) leads to cancer, II) causes a desire for smoking, which is nevertheless substantiated as a global coordinated action of neurons in different parts of the brain?
Instead of the happy feeling, you get a feeling of “I decided to do this” when the light blinks red.
I can’t; however, it doesn’t imply that the decision about the breakfast is spread across the whole brain. Moreover, why is it so important to have it localised?
It’s not particularly. Replace “part” with “aspect”; I hadn’t actually thought about the option you propose.
What if the lesion is in fact only a slightly different concentration of chemicals spread across the whole brain, which I) leads to cancer, II) causes a desire for smoking, which is nevertheless substantiated as a global coordinated action of neurons in different parts of the brain?
Now we’re getting back to the “correlates with smoking” scenario; not the 100% scenario. If it just causes desire for smoking, some people with it won’t smoke. At which point it is a decision.
If this desire is irresistible, then you no more have a choice not to smoke than you have a choice not to sleep.
Do you have the option of not sleeping for the next year? (while still being alive)
No, I don’t. However, a feeling of irresistible temptation is not the same thing as 100% incidence within the respective population. (There are people who claim they don’t sleep.)
Imagine you lived in a lesion world where most of the smokers described their decision to start smoking as “free”. Still, there was a 100% correlation between smoking and cancer. Do you find it impossible?
No, it’s entirely possible.
It’s also entirely possible in the lightbulb world. In the lightbulb world I suspect you’d agree it isn’t a free decision, but it’s entirely possible that the people of that world might claim that it was.
Still, your original description of the scenario was
you have a light attached to your head. If it blinks red, it’ll make you feel happy, but it’ll blow up in an hour. It’s not linked to the rest of your brain at all.
Now you have changed the “happy” feeling into a “decided” feeling. So the bulb has to be connected somehow to the brain to stimulate the feeling. I am not sure what “rest” refers to here.
But in general, if somebody said they decided freely, I take it as given. I don’t know any better criterion for judging whether the decision was free, whatever it means.
I meant: it’s not connected to your brain at all except when making you happy/making you believe you decided.
i.e. it’s not taking any input from the brain at any point. Much like the lesion.
But in general, if somebody said they decided freely, I take it as given. I don’t know any better criterion for judging whether the decision was free, whatever it means.
In the specific case of the bulb-world, would you consider their decisions free, if they did?
If the bulb-apparatus physically took no input from the brain, if it was attached to the brain artificially (as opposed to being a native part of the human body, or growing spontaneously, so that it couldn’t be considered a part of the brain), if its action was direct enough (e.g. implanting the decision by some sequence of electric impulses in the course of seconds, as opposed to altering the brain only in a slight but predictable manner, a modification which would develop into the final decision after years of thought going on inside the brain), and if the decision made by the bulb could be disentangled from other processes in the brain, then I certainly would not call the decision free. If only some of the above conditions were satisfied, then it would be hard to decide whether to use the word free or not.
I suspect we have unknowingly changed the topic into investigation of the meaning of “free”.
For the correlation with Omega to be 100%, one-boxing would have to be ABSOLUTELY IRRESISTIBLE when there was a million in the box...
Hence, if there was a million, the person would one-box. He wouldn’t be able to resist doing so...
And of course taking only one box would have to be ABSOLUTELY UNTHINKABLE for people when the million wasn’t there.
And so on.
Well, yeah, which is why people resist the story about Omega, think it must be nonsense, and decide to two-box (although it would be better to explicitly reject the story). Or interpret it to imply backwards causality (in which case even CDT makes you one-box) or something else that violates the laws of physics as I know them.
This is one reason to stick with probabilistic versions of Newcomb’s Paradox.
In both cases (Newcomb’s Paradox and the Smoking Lesion), this seems to be another example of the difficulty with 0 and 1 as probabilities.
Nope. In the Newcombian situation the lines of causality are different.
What’s in the box is explicitly caused by what you will choose, whereas in the smoking lesion example they simply share a cause.
Different lines of causality, different scenario.
I find that the term “cause” or “causality” can be very misleading in this situation.
As a matter of terminology, I actually agree with you: in lay speech, I see nothing wrong with saying that “One-boxing causes the sealed box to be filled”, because this is exactly how we perceive causality in the world.
However, when speaking of these problems, theorists nail down their terminology as best they can. And in such problems, standard usage is such that the concept of causality only applies to cases where an event changes things solely in the future[1], not merely where it reveals you to be in a situation in which a past event has happened.
When speaking of decision-theoretic problems, it is important to stick to this definition of causality, counter-intuitive though it may be.
Another example of the distinction is in Drescher’s Good and Real. Consider this: if you raise your hand (in a deterministic universe), you are setting the universe’s state 1 billion years ago to be such that a chain of events will unfold in a way that, 1 billion years later, you will raise your hand. In a (lay) sense, raising your hand “caused” that state.
However, because that state is in the past, it violates decision-theoretic usage to say that you caused that state; instead, you should simply say that either:
a) there is an acausal relationship between your choice to raise your hand and that state of the universe, or b) by choosing to raise your hand, you have learned about a past state of universe. (Just as deciding whether to exit in the Absent-Minded Driver problem tells you something about which exit you are at.)
[1] or, in timeless formalisms, where the cause screens off that which it causes.
I think you’ve misunderstood me. “What you will choose” is a fact that exists before Omega fills the boxes.
This fact determines how the boxes are filled.
“What you will choose” (some people seem to refer to this, or something similar, as your “disposition”, but I find my terminology more immediately apparent) causes the future event “how the boxes are filled”
Actually, this is excellent. We could rewrite Newcomb’s problem like this:
Omega places in the box together with the million or non-million, a device that influences your brain, programming the device so that you are caused to take both if it does not place the million, and programming the device so that you are caused to one-box if it places the million. In other words, Omega decides in advance whether you are going to get the million or not, then sets up the situation so you will make the choice that gets you what it wanted you to get.
However, the influence on your brain is quite subtle; to you, it still feels like you are deciding in the normal way, using some decision theory or other.
Now, do you one-box or two-box? This is certainly exactly the same as the smoking lesion. Nor can you answer “I don’t have to decide because my actions are determined” because your actions might well be determined in real life anyway, and you still have to decide.
If you one-box here, you should not smoke in the lesion problem. If you don’t one-box here… well, too bad for you.
I flip a coin; if it’s heads, I give you a million dollars, else I give you a thousand dollars. How much money should you get from me? (And is this problem any different from the last one?)
At some point, these questions no longer help us make rational decisions. Even an AI with complete access to its source code can’t do anything to prepare itself for these situations.
No, you don’t, you don’t get to decide. The decision has been made.
You’re ignoring the fact that, normally, the thoughts going on in your brain are PART of how the decision is determined by the laws of physics. In your scenario, they’re irrelevant. Whatever you think, your action is determined by the machine.
You can argue all you like about what I should do, but what I will do is already decided, and isn’t influenced by my thoughts, my rationality, or anything else.
All the information needed to determine what I will do is in the lesion/machine.
Applying rationality to a scenario where the agent is by definition incapable of rationality is just plain silly.
In the real world the information that determines my action is contained within me. In order to determine the action, you would have to run “me” (or at least some reasonable part thereof).
In your version of Newcomb’s, the information that determines my action is contained within the machine.
Can you see why I consider that a significant difference?
You can substitute “the laws of physics” for “Omega” in your argument, and if it proves you will not decide rationally in the Omega situation, then it proves you will not decide—anything—rationally in real life.
Presumably (or at least hopefully) if you are a rational agent with a certain DT, then a long and accurate description of the ways that “the laws of physics” affect your decision-making process breaks down into:
The ways that the laws of physics affect the computer you’re running on
How the computer program, and specifically your DT, works when running on a reliable computer.
It’s not clear how a reduction like this could work in your example.
In my example, it is given that Omega decides what you are going to do, but that he causes you to do it in the same way you ordinarily do things, namely with some decision theory and by thinking some thoughts, etc.
If the fact that Omega causes it means that you are irrational, then the fact that the laws of physics cause your actions also means that you are irrational.
A rational entity can exist in the laws of physics.
A rational entity by definition has a determined decision, if there is a rational decision possible.
A rational entity cannot make an irrational decision.
You’re getting hung up on the determinism. That’s not the issue. Rational entities are by definition deterministic.
What they are not is deterministically irrational. Your scenario requires an irrational entity.
Your scenario requires that the entity be able to make an irrational decision, using its normal thought processes.
This requires that it be using irrational thought processes.
It seems you are simply assuming away the problem. Your assumptions:
Rational entities can exist.
The choice of either one-boxing or two-boxing in the above scenario is irrational.
Omega makes the subject one-box or two-box using its normal decision mechanisms.
A rational entity will never make an irrational decision.
Then, the described scenario is simply inconsistent, if Omega can use a rational entity as a subject. And so it comes down to which bullet you want to bite. Is it:
A. Rational entities can't exist.
B. Neither choice is irrational.
C. Omega cannot use the subject’s normal decision mechanisms to effect the choice.
D. Rational entities are allowed to make irrational decisions sometimes.
E. The thought experiment is simply inconsistent with reality.
I’m somewhat willing to grant A, B, or D, and less apt to grant C or E.
I’m not sure if you have an objection thus far that this does not encapsulate.
D doesn’t make sense to me. If they make their decisions rationally, that shouldn’t result in an irrational act at any point. If rational decision-making can result in irrational decisions we have a contradiction.
C. would not have to be true for all entities, just rational ones, which seems entirely possible.
But I still hold with something very similar to B.
There isn’t a real choice. What you will do has been decided from outside you, and no matter how much you think, you’re not going to change that.
I was simply attempting to show that it is irrelevant to talk about what you should, rationally, do in the scenario, because the scenario doesn’t allow rational choice. It doesn’t actually allow choice at all, but that’s harder to demonstrate than demonstrating that it doesn’t allow rational choice.
ETA: the relevance of the comment below is doubtful. I didn’t read upthread far enough before making it. Original comment was:
...the scenario doesn’t allow rational choice. It doesn’t actually allow choice at all...
What do you mean by “choice”?
Per Possibility and couldness (spoiler warning), if I run a deterministic chess-playing program, I’m willing to call its evaluation of the board and subsequent move a “choice”. How about you?
By choice, I mean my mind deciding what to do on the basis of its own thought processes, out of a set of possibilities that could be realised if my mind were different than it is.
That is what I mean by choice.
A chess-program can do that.
I, in this scenario, cannot. No matter how my mind was set up prior to the scenario, there is only one possible outcome.
EDIT—I had missed the full context as follows:
“In my example, it is given that Omega decides what you are going to do, but that he causes you to do it in the same way you ordinarily do things, namely with some decision theory and by thinking some thoughts, etc.”
for the comment below, so I accept Kingreaper’s reply here. BUT I will give another answer, below.
If the fact that Omega causes it means that you are irrational, then the fact that the laws of physics cause your actions also means that you are irrational.
You are being inconsistent here.
“I mean my mind deciding what to do on the basis of its own thought processes, out of a set of possibilities that could be realised if my mind were different than it is.”
so can we apply this to a chess program as you suggest? I’ll rewrite it as:
“I mean a chess program deciding what to do on the basis of its own algorithmic process, out of a set of possibilities that could be realised if its algorithm were different than it is.”
No problem there! So you didn’t say anything untrue about chess programs.
BUT
“I, in this scenario, cannot. No matter how my mind was set up prior to the scenario, there is only one possible outcome.”
This doesn’t make sense at all. The scenario requires your mind to be set up in a particular way. This does not mean that if your mind were set up in a different way you would still behave in the same way: If your mind were set up in a different way, either the outcome would be the same or your mind would be outside the scope of the scenario.
We can do exactly the same thing with a chess program.
Suppose I get a chess position (the state of play in a game) and present it to a chess program. The chess program replies with the move “Ngf3”. We now set the chess position up the same way again, and we predict that the program will move “Ngf3” (because we just saw it do that with this position). As far as we are concerned, the program can’t do anything else. As predicted, the program moves “Ngf3”. Now, the program was required by its own nature to make that move. It was forced to make that move by the way that the computer code in the program was organized, and by the chess position itself. We could say that even if the program had been different, it would still have made the same move, but this would be a fallacy, because if the program were different in such a way as to cause it to make a different move, it could never be the program about which we made that prediction. It would be a program about which a different prediction would be needed. Likewise, saying that your mind is compelled to act in a certain way, regardless of how it is set up, is also a fallacy, because the situation describes your mind as having been set up in a specific way, just like the program with the predicted chess move, and if it wasn’t it would be outside the scope of the prediction.
“I, in this scenario, cannot. No matter how my mind was set up prior to the scenario, there is only one possible outcome.”
This doesn’t make sense at all. The scenario requires your mind to be set up in a particular way. This does not mean that if your mind were set up in a different way you would still behave in the same way: If your mind were set up in a different way, either the outcome would be the same or your mind would be outside the scope of the scenario.
No matter how my mind is set up, Omega will change the scenario to produce the same outcome.
If you took a chess program and chose a move, then gave it precisely the scenario necessary for it to make that move, I wouldn’t consider that move its choice.
If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?
Okay, so I got the scenario wrong, but I will give another reply. Omega is going to force you to act in a certain way. However, you will still experience what seem, to you, to be cognitive processes, and anyone watching your behavior will see what looks like cognitive processes going on.
Suppose Omega wrote a computer program and he used it to work out how to control your behavior. Suppose he put this in a microchip and implanted it in your brain. You might say your brain is controlled by the chip, but you might also say that the chip and your brain form a composite entity which is still making decisions in the sense that any other mind is.
Now, suppose Omega keeps possession of the chip, but has it control you remotely. Again, you might still say that the chip and your brain form a composite system.
Finally, suppose Omega just does the computations in his own brain. You might say that your brain, together with Omega’s brain, form a composite system which is causing your behavior—and that this composite system makes decisions just like any other system.
“If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?”
We could look at your own brain in these terms and ask about removing parts of it.
In the Omega-composite scenario, the composite entity is clearly making the decisions.
In the chip-composite scenario, the chip-composite appears to be making decisions, and in the general case I would say probably is.
“If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?”
We could look at your own brain in these terms and ask about removing parts of it.
Indeed. Not all parts of my brain are involved in all decisions. But, in general, at least some part of me has an effect on what decision I make.
The point, here, is that in the scenario in which Omega is actively manipulating your brain “you” might mean something in a more extended sense and “some part of you” might mean “some part of Omega’s brain”.
Except that that’s not the person the question is being directed at. I’m not “amalgam-Kingreaper-and-Omega” at the moment. Asking what that person would do would garner completely different responses.
For example, amalgam-Kingreaper-and-Omega has a fondness for creating ridiculous scenarios and inflicting them on rationalists.
“Except that that’s not the person the question is being directed at.”
Does that mean that you accept that it might at least be conceivable that the scenario implies the existence of a compound being who is less constrained than the person being controlled by Omega?
Just that the scenario could really be considered as adding an extra component onto a being, one that has a lot of influence on his behavior.
Similarly, we might imagine surgically removing a piece of your brain, connecting the neurons at the edges of the removed piece to the ones left in your brain by radio control, and taking the removed piece to another location, from which it still plays a full part in your thought processes. We would probably still consider that composite system “you”.
What if you had a brain disorder and some electronics were implanted into your brain? Maybe a system to help with social cues for Asperger syndrome, or a system to help with dyslexia? What if we had a process to make extra neurons grow to repair damage? We might easily consider many things to be a “you which has been modified”.
When you say that the question is not directed at the compound entity, one answer could be that the scenario involved adding an extra component to you, that “you” has been extended, and that the compound entity is now “you”.
The scenario, as I understand it doesn’t really specify the limits of the entity involved. It talks about your brain, and what Omega is doing to it, but it doesn’t specifically disallow the idea that the “you” that it is about gets modified in the process.
Now, if you want to edit the scenario to specify exactly what the “you” is here...
There isn’t a real choice. What you will do has been decided from outside you, and no matter how much you think, you’re not going to change that.
And there’s the rub. My decision in Newcomb’s is also ultimately caused by things outside me; the conditions of the universe before I was born determined what my decision would be.
Whether we call something a ‘real choice’ in this kind of question depends upon whether it’s determined by things within the black box we call ‘our decision-making apparatus’ or something like that, or if the causal arrow bypasses it entirely. The black box screens off causes preceding it.
The scenario might go as follows:
Omega puts a million dollars in the box.
Omega scans your brain.
Omega deduces that if he shows you a picture of a fish at just the right time, it will influence your internal decision-making in some otherwise inscrutable way that causes you to one-box.
You see the fish, and decide (in whatever way you usually decide things) to one-box.
As far as I can tell, that is a ‘real choice’ to one-box. If you had happened upon that picture of a fish in regular Newcomb’s, without Omega being the one to put it there, it would equally be your ‘real choice’ to one-box, and I don’t see how Omega knowing that it will happen changes its realness or choiceness.
As you will see, it exists in the standard Newcomb, but not in this variant.
To directly address your fish example: If, in the standard Newcomb, my mind had been different, seeing the fish wouldn’t necessarily have caused me to make the same choice.
In the modified Newcomb, if my mind had been different I would have seen a different thing. The state of my mind had no impact on the outcome of events.
The fact that the causal arrows are rooted in some other being’s decision algorithm black box could reasonably be taken as the criterion for calling it that being’s choice. Still real, still choice, not my choice.
No, it proves I will not decide everything rationally if I don’t decide everything rationally.
Which is pretty tautologous.
The Omega example requires that I will not decide everything rationally.
The real world permits the possibility of a rational agent. Thus it makes sense to question what a rational agent would do.
Your scenario doesn’t permit a rational agent, thus it makes no sense to ask what a rational agent would do.
You’re missing the point Unknowns. In your scenario, my decision doesn’t depend on how I decide. It just depends on the setting of the box.
So I might as well just decide arbitrarily, and save effort.
In real life, your decision doesn’t depend on how you decide it. It just depends on the positions of your atoms and the laws of physics. So you might as well just decide arbitrarily, and save effort.
You left out some steps in your argument. It appears you were going for a disjunction elimination, but if so I’m not convinced of one premise. Let me lay out more explicitly what I think your argument is supposed to be, then I’ll show where I think it’s gone wrong.
A = “The rational decision is to two-box”
B = “Omega has set me to one-box”
C = “The rational decision is to one-box”
D = “Omega has set me to two-box”
E = “I must not be deciding rationally”
1. (A∧B)→E
2. (C∧D)→E
3. (A∧B)∨(C∧D)
4. ∴ E
I’ll grant #1 and #2. This is a valid argument, but the dubious proposition is #3. It is entirely possible that (A∧D) or that (C∧B). And in those cases, E is not guaranteed.
In short, you might decide rationally in cases where you’re set to one-box and it’s rational to one-box.
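For what it’s worth, this can be checked mechanically. Below is a minimal brute-force sketch (the boolean encoding, including the constraint that C = ¬A and D = ¬B, is my own); it confirms that premises 1 and 2 alone do not force E, because the assignments with A∧D or with C∧B satisfy both premises while E is false.

```python
from itertools import product

# A: "the rational decision is to two-box"   B: "Omega has set me to one-box"
# C: "the rational decision is to one-box"   D: "Omega has set me to two-box"
# E: "I must not be deciding rationally"

def implies(p, q):
    return (not p) or q

for A, B, E in product([False, True], repeat=3):
    C, D = not A, not B          # assumed encoding: exactly one of each pair holds
    premise1 = implies(A and B, E)
    premise2 = implies(C and D, E)
    if premise1 and premise2 and not E:
        print("premises 1 and 2 hold, E is false:", dict(A=A, B=B, C=C, D=D, E=E))
# Prints exactly the A∧D and C∧B cases: deciding rationally remains consistent there.
```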
It is possible that I will make the rational decision in one path of the scenario. But the scenario, by its very nature, contains both paths. In one of the two paths I must be deciding irrationally.
Given that, as was stated, I will use my normal thought-processes in both paths, my normal thought-processes must, in order for this scenario to be possible, be irrational.
Proposition 3 is only required to be possible, not to be true, and is supported by the existence of both paths of the scenario: the scenario requires that both A and B are possible.
It is possible that I will make the rational decision in one path of the scenario. But the scenario contains both paths. In one of the two paths I must be deciding irrationally.
Given that, as was stated, I will use my normal thought-processes in both paths, my normal thought-processes must, in order for this scenario to be possible, be irrational.
It is not the case that in order for this scenario to be possible, your normal thought-processes must be necessarily irrational. Rather, in order for this scenario to be possible, your normal thought-processes must be possibly irrational. And clearly that’s the case for normal non-supernatural decision-making.
If you did not know about the box, you’d experience your normal decision-making apparatus output a decision in the normal way. Either you’re the sort of person who generally decides rationally or not, and if you’re a particularly rational person the box might have to make you do some strange mental backflips to justify the decision in the case that it’s not rational to make the choice the box specifies.
It is isomorphic, in this sense, to the world determining your actions, except that you’ll get initial conditions that are very strange, in half the times you play this game (assuming a 50% chance of either outcome).
If you know about the box, then it becomes simpler, as you will indeed be able to use this reasoning and the box will probably just have to flip a bit here or there to get you to pick one or the other.
If you’re not the sort of person who usually decides rationally, then following your strategy should be easy. For me, I anticipate that I would decide rationally half the time, and go rather insane the other half (assuming there was a clear rational decision, as you implied above).
No. What is in the box is not caused by what you will choose. It is caused by Omega after analyzing your original disposition, before the game begins. After you start the game, your choice and the million share a cause, namely your original disposition. So the cases are the same—same lines of causality, same scenario.
You can no more change your original disposition (which causes the million), than you can change the lesion that causes cancer.
You can no more change your original disposition (which causes the million), than you can change the lesion that causes cancer.
You can control your original disposition in exactly the same way you usually control your decisions. Even normally when you consider a decision the outcome is already settled and the measure of all Everett branches involved already determined. Just because you consider the counterfactual of local miracles that result in a different decision when evaluating your preferences doesn’t mean any such local miracles actually happen. Your original disposition is caused by your preferences between the two “possible” actions, just like with any other decision. The lesion example is different because your preferences are at no point involved in the causal history of the cancer.
You can precommit to not smoking in the same way you can precommit to taking only one box. If you might later find smoking irresistible, you might later find taking both boxes irresistible.
Precommitting not to smoke also changes my disposition regarding smoking. I still might find it irresistible later. Likewise if I precommit to one box. That says nothing about how I will feel about it later, when the situation happens.
In fact, even in real life, I suspect many one-boxers would two-box in the end when they are standing there and thinking, “Either the million is there or it isn’t, and there’s nothing I can do about it.” In other words, they might very well find two-boxing irresistible, even if they had precommitted.
If they give in, they have not successfully precommitted.
Now, you could argue that successfully precommitting is impossible. But in the Newcombian problem, it doesn’t seem to be.
In the Lesion problem, the problem involves what essentially amounts to brain-damage, which gives a clear reason why precommitment is impossible.
Precommitting not to smoke also changes my disposition regarding smoking. I still might find it irresistible later.
This misses the point.
If precommitting changes your disposition, and your disposition decides the outcome, precommitting is worthwhile.
If precommitting changes your disposition, and a lesion decides the outcome, precommitting is irrelevant.
Actually, talking about precommitting is in any case a side issue, because it happens before the start of the game. We can just assume you’ve never thought about it before Omega comes up to you, says that it has predicted your decision, and tells you the rules. Now what do you do?
In this case the situation is clearly the same as the lesion, and the lines of causality are the same: both your present disposition, and the million in the box, have a common cause, namely your previous disposition, but you can do nothing about it.
If you should one-box here, then you should not smoke in the lesion case.
In fact, even in real life, I suspect many one-boxers would two-box in the end when they are standing there
My intuition says the opposite: I think many people who claimed they would two-box would one-box in the event. $1000 is so small compared to $1000000, after all; why take the chance that Omega will be wrong?
Every event has multiple causes, and which causes you point out is not as important as you seem to think. In Newcomb, Omega’s decision and your one-or-two-boxing are both ultimately consequences of the state of the world before the scenario has started.
The only difference between Newcomb and the lesion is that in the case of a 100% effective lesion, there will be no correlation between having read about EDT and smoking. And in a world where there was such a correlation, one should start believing in fate.
I think the difference is that your disposition to one-box or two-box is something you can decide to change. Whether you were born with a lesion is not.
When you are standing there, and there is either a million in the box or there isn’t, can you change whether or not there is a million in the box?
No, no more than whether you were born with a lesion or not. The argument, “I should smoke, because if I have the lesion I have it whether or not I smoke” is exactly the same as the argument “I should take two boxes, because if the million is there, it is there whether or not I take two boxes.”
I agree, insofar as I think “I should not smoke” is true as long as I’m also allowed to say “I should not have the lesion”.
The problem is I think running into the proper use of ‘should’. We’d need to draw very sharp lines around the things we pretend that we can or cannot control for purposes of that word.
Basically, you end up with a black-box concept containing some but not all of the machinery that led up to your decision such that words like ‘should’ and ‘control’ apply to the workings of the black box and not to anything else. And then we can decide whether it’s sensible to ask ‘should I smoke’ in Smoking Lesion and ‘should I one-box’ in Newcomb.
Right now I don’t have a good enough handle on this model to draw those lines, and so don’t have an answer to this puzzle.
You’re correct that if the correlation were known to be 100% then the only meaningful advice one could give would be not to smoke. However, it’s important to understand that “100% correlation” is a degenerate case of the Smoking Lesion problem, as I’ll try to explain:
Imagine a problem of the following form: Y is a variable under our control, which we can either set to k or -k for some k >= 0 (0 is not ruled out). X is an N(0, m^2) random variable which we do not observe, for some m >= 0 (again, 0 is not ruled out). Our payoff has the form (X + Y) − 1000(X + rY) for some constant r with 0 <= r <= 1. Working out the optimal strategy is rather trivial. But anyway, in the edge cases: If r = 0 we should put Y = k and if r = 1 we should put Y = -k.
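Spelling out the “rather trivial” calculation (this uses only the model as stated, with E[X] = 0): the expected payoff is

E[(X + Y) − 1000(X + rY)] = −999·E[X] + (1 − 1000r)·Y = (1 − 1000r)·Y,

so we should put Y = k whenever r < 1/1000 and Y = −k whenever r > 1/1000; in particular, Y = k when r = 0 and Y = −k when r = 1, as claimed.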
Now I want to say that the case r = 0 is analogous to the Smoking Lesion problem and the case r = 1 is analogous to Newcomb’s problem (with a flawless predictor):
Y corresponds to the part of a person’s will that they exercise conscious control over.
X corresponds to the part of a person’s will that just does whatever it does in spite of the person’s best intentions.
The ratio k : m measures “the extent to which we have free will.”
The (X + Y) term is the ‘temptation’, analogous to the extra $1000 in the second box or the pleasure gained from smoking.
The −1000(X + rY) is the ‘punishment’, analogous to the loss of the $1000000 in the first box, or getting cancer.
The constant r measures the extent to which even the ‘free part’ of our will is visible to whoever or whatever decides whether to punish us. (For simplicity, we take for granted that the ‘unfree part’ of our will is visible, but this needn’t always be the case.)
In the case of Newcomb’s Problem (with a perfect predictor) we have r = 1 and so the X term becomes irrelevant—we may as well simply treat the player as being ‘totally in control of their decision’.
In the case of the Smoking Lesion problem, the ‘Lesion’ is just some predetermined physiological property over which the player exercises no control. Whether the player smokes and whether they get cancer are conditionally independent given the presence or absence of the Lesion. This corresponds to r = 0 (and note that the problem wouldn’t even make sense unless m > 0). But then the only way to have 100% correlation between ‘temptation’ and ‘punishment’ is to put k = 0 so that the person’s will is ‘totally unfree’. But if the person’s will is ‘totally unfree’ then it doesn’t really make sense to treat them as a decision-making agent.
ETA: Perhaps this analogy can be developed into an analysis of the original problems. One way to do it would be to define random variables Z and W taking values 0, 1 such that log(P(Z = 1 | X and Y)) / log(P(Z = 0 | X and Y)) = a linear combination of X and Y (and likewise for W, but with a different linear combination), and then have Z be the “player’s decision” and W be “Omega’s decision / whether person gets cancer”. But I think the ratio of extra work to extra insight would be quite high.
I don’t understand why the Smoking Lesion is a problem for evidential decision theory. I would simply accept that in the scenario given, you shouldn’t smoke. And I don’t see why you assert that this doesn’t lessen your chances of getting cancer, except in the same sense that two-boxing doesn’t lessen your chances of getting the million.
I would just say: in the scenario give, you should not smoke, and this will improve your chances of not getting cancer.
If you doubt this, consider if the correlation were known to be 100%; every person who ever smoked up till now, had the lesion and developed cancer, while every person who did not smoke, did not have the lesion. This was true also of people who knew about the Lesion. Do you still say it’s a good idea to smoke?
If the correlation is 100%, it doesn’t mean that you can choose whether or not you’ll have cancer. It means that if you have the lesion, then some combination of logic, rationalisation or impulse will make you decide to smoke (and if you don’t, then similarly you’ll end up not smoking). You can then tell from your decision whether you’ll get cancer or not, but you couldn’t have made the other decision, no matter what.
(Either that, or you can be the first person to try using EDT for it, and that way you get to be the person who breaks the 100% correlation and gets cancer without smoking)
You can say the same thing about Newcomb’s problem. It doesn’t mean you can choose whether or not there will be a million in one of the boxes. It means that if there is a million in one of the boxes, then “some combination of logic, rationalisation or impulse will make you decide” to choose only one of the boxes (and if there’s no million, then similarly you’ll end up taking both boxes.) “You can then tell from your decision whether” you’ll get the million or not, “but you couldn’t have made the other decision, no matter what.”
Either that, or you can be the first to outguess Omega and get the million as well as the thousand...
Nope, this reasoning doesn’t work with Newcomb, and it doesn’t work with the Smoking Lesion. If you want to win, you one-box, and you don’t smoke.
One potentially-significant difference: in Newcomb, it is precisely the fact that you’re disposed to two-box that causes you to lose out. (Omega is detecting and responding to this very disposition.) In Smoking Lesion, the disposition to smoke is intrinsically harmless; it merely happens to be correlated (due to a common cause) with a disposition to get cancer.
(But if you’re right that the two cases are on a par, then that would significant increase my confidence that two-boxing is rational. The smoking lesion case is by far the more obvious of the two.)
Responding to the supposed difference between the cases:
Omega puts the million in the box or not before the game has begun, depending on your former disposition to one-box or two-box.
Then the game begins. You are considering whether to one-box or two-box. Then the choice to one-box or two-box is intrinsically harmless; it merely happens to be correlated with your previous disposition and with Omega’s choice. Likewise, your present disposition to one-box or two-box is also intrinsically harmless. It is merely correlated with your previous disposition and with Omega’s choice.
You can no more change your previous disposition than you can change whether you have the lesion, so the two cases are equivalent.
And if people’s actions are deterministic, then in theory there could be an Omega that is 100% accurate. Nor would there be a need for simulation; as cousin_it has pointed out, it could “analyze your source code” and come up with a proof that you will one-box or two-box. In this case the 100% correlated smoking lesion and Newcomb would be precisely equivalent. The same is true if each has a 90% correlation, and so on.
If some subset of the information contained within you is sufficient to prove what you will do, simulating that subset is a relevant simulation of you.
I’m not sure what kind of proof you could do without going through the steps such that you essentially produced a simulation.
Could you give an example of the type of proof you’re proposing, so I can judge for myself whether it seems to involve running through the relevant steps?
See cousin_it’s post: http://lesswrong.com/lw/2ip/ai_cooperation_in_practice/
Many programs can be proven to have a certain result without any simulation, not even of a subset of the information. For example, think of a program that discovers the first 10,000 primes, increasing a counter by one for each prime it finds, and then stops. You can prove that the counter will equal 10,000 when it stops, without simulating this program.
See, to me that is a mental simulation of the relevant part of the program.
The counter will increase, point by point, it will remain an integer at each point and pass through every integer, and upon reaching 10,000 this will happen.
The fact that the relevant part of the program is as ridiculously simple as a counter just means that the simulation is easy.
So would you smoke even if the previous correlation were 100%, and included those who knew about the Lesion?
This could happen in reality, if everyone who smoked, smoked because he wanted to, and if everyone who sufficiently desired it did so, and if the sufficient desire for smoking was completely caused by the lesion. In other words, by choosing to smoke, you would be showing that you had sufficient desire, and therefore the lesion, and by choosing not to smoke, you would be showing that you did not have sufficient desire, and therefore not the lesion.
Under these circumstances, if you chose not to smoke, would you expect to get cancer, since you knew that you had some desire for smoking? (Presumably whether the desire was sufficient or not would not be evident to introspection, but only from whether or not you ended up smoking.) Or choosing to smoke, would you expect not to get cancer, since you say it doesn’t make any difference to whether you have the lesion?
For the correlation to be 100%, smoking would have to be ABSOLUTELY IRRESISTIBLE to people with the lesion.
Hence, if I had the lesion, I would smoke. I wouldn’t be able to resist doing so.
And of course smoking would have to be ABSOLUTELY UNTHINKABLE for people without the lesion.
Hence, if I didn’t have the lesion, I wouldn’t smoke, I wouldn’t be able to even try it.
I think that the “ABSOLUTELY IRRESISTIBLE” and “ABSOLUTELY UNTHINKABLE” language can be a bit misleading here. Yes, someone with the lesion is compelled to smoke, but his experience of this may be experience of spending days deliberating about whether to smoke—even though, all along, he was just running along preprepared rails and the end-result was inevitable.
If we assume determinism, however, we might say this about any decision. If someone makes a decision, it is because his brain was in such a state that it was compelled to make that decision, and any other decision was “UNTHINKABLE”. We don’t normally use language like that, even if we subscribe to such a view of decisions, because “UNTHINKABLE” implies a lot about the experience itself rather than just implying something about the certainty of particular action or compulsion towards it.
I could walk to the nearest bridge to jump off, and tell myself all along that, to someone whose brain was predisposed to jumping off the bridge, not doing it was unthinkable, so any attempt on my part to decide otherwise is meaningless. Acknowledging some kind of fatalism is one thing, but injecting it into the middle of our decision processes seems to me to be asking for trouble.
Not really. The lesion is a single aspect that completely determines a decision.
For most decisions, far more of the brain/mind than just one small, otherwise irrelevant, part can have some influence on the outcome.
But the lesion is clearly different, IF it has a 100% correlation.
When making a decision on something where I know my thought-process is irrelevant, why should I not be fatalistic? There is no decision-making process in the 100%-lesion case, the decision is MADE, it’s right there in the lesion.
EDIT: Here’s something analogous to the 100% lesion: you have a light attached to your head. If it blinks red, it’ll make you feel happy, but it’ll blow up in an hour. It’s not linked to the rest of your brain at all. Should you try and make a decision about whether to have it blink red?
There is no decision-making process anyway, every decision is made, it’s right there in the frontal/temporal/occipital/parietal lobe, right?
The red light blinking doesn’t feel as a decision. According to the lesion scenario, the lesion-influenced decisions feel exactly like other decisions. It is an important difference. And I am not sure why you have included both happy feeling and explosion, by the way.
If you can point to a specific part of my brain that has no purpose other than to make me have bacon for breakfast on tuesday 24th of august, 2010? And that can’t be over-ruled by any other parts of my brain?
That decision involved more than just one spot in my brain. All the parts of my brain involved do more than one thing.
So, no, the real world isn’t like the lesion example.
Okay, let’s change it slightly: Instead of the happy feeling, you get a feeling of “I decided to do this” when the light blinks red.
Is that a better analogy for you? Whether you think about it or not, you end up feeling like you made the decision. Just like in the lesion case.
I can’t, however it doesn’t imply that the decision about the breakfast is spread across the whole brain. Moreover, why it is so important to have it localised? What if the lesion is in fact only a slightly different concentration of chemicals spread across the whole brain, which I) leads to cancer, II) causes desire for smoking, which is nevertheless substantiated as a global coordinated action of neurons in different parts of the brain?
It is indeed a better example.
It’s not particularly. Replace “part” with “aspect”; I hadn’t actually thought about the option you propose.
Now we’re getting back to the “correlates with smoking” scenario; not the 100% scenario. If it just causes desire for smoking, some people with it won’t smoke. At which point it is a decision.
If this desire is irresistible, then you no more have a choice not to smoke than you have a choice not to sleep.
Do you have the option of not sleeping for the next year? (while still being alive)
No, I don’t. However, feeling of irresistible temptation is not the same thing as 100% incidence within respective population. (There are people who claim they don’t sleep.)
Imagine you lived in a lesion world where most of the smokers described their decision to start smoking as “free”. Still, there was a 100% correlation between smoking and cancer. Do you find it impossible?
No, it’s entirely possible.
It’s also entirely possible in the lightbulb world. In the lightbulb world I suspect you’d agree it isn’t a free decision, but it’s entirely possible that the people of that world might claim that it was.
What is the lightbulb world?
The world I described, with the red, blinking, exploding, light; that makes you think you chose to have it blink.
For a second there I thought I’d somehow confused the conversations, but no, you are the one I’ve been discussing that with.
I’m sorry for being stupid.
Still, your original description of the scenario was: “It’s not linked to the rest of your brain at all.”
Now you have changed the “happy” feeling into a “decided” feeling. So the bulb has to be connected to the brain somehow to stimulate the feeling. I am not sure what “rest” refers to here.
But in general, if somebody said they decided freely, I take it as given. I don’t know any better criterion for judging whether a decision was free, whatever that means.
It’s my mistake.
I meant: it’s not connected to your brain at all except when making you happy/making you believe you decided.
I.e. it’s not taking any input from the brain at any point, much like the lesion.
In the specific case of the bulb-world, would you consider their decisions free, if they did?
If the bulb-apparatus physically took no input from the brain, if it was attached to the brain artificially (as opposed to being a native part of the human body, or growing spontaneously, so that it couldn’t be considered a part of the brain), if its action was direct enough (e.g. implanting the decision by some sequence of electric impulses over the course of seconds, as opposed to altering the brain only in a slight but predictable manner, a modification which would develop into the final decision after years of thought going on inside the brain) and if the decision made by the bulb could be disentangled from other processes in the brain, then I certainly would not call the decision free. If only some of the above conditions were satisfied, then it would be hard to decide whether to use the word free or not.
I suspect we have unknowingly changed the topic into investigation of the meaning of “free”.
For the correlation with Omega to be 100%, one-boxing would have to be ABSOLUTELY IRRESISTIBLE when there was a million in the box...
Hence, if there was a million, the person would one-box. He wouldn’t be able to resist doing so...
And of course taking only one box would have to be ABSOLUTELY UNTHINKABLE for people when the million wasn’t there.
And so on.
Well, yeah, which is why people resist the story about Omega, think it must be nonsense, and decide to two-box (although it would be better to explicitly reject the story). Or interpret it to imply backwards causality (in which case even CDT makes you one-box) or something else that violates the laws of physics as I know them.
This is one reason to stick with probabilistic versions of Newcomb’s Paradox.
In both cases (Newcomb’s Paradox and the Smoking Lesion), this seems to be another example of the difficulty with 0 and 1 as probabilities.
Nope. In the Newcombian situation the lines of causality are different.
What’s in the box is explicitly caused by what you will choose, whereas in the smoking lesion example they simply share a cause.
Different lines of causality, different scenario.
I find that the term “cause” or “causality” can be very misleading in this situation.
As a matter of terminology, I actually agree with you: in lay speech, I see nothing wrong with saying that “One-boxing causes the sealed box to be filled”, because this is exactly how we perceive causality in the world.
However, when speaking of these problems, theorists nail down their terminology as best they can. And in such problems, standard usage is such that the concept of causality only applies to cases where an event changes things solely in the future[1], not merely where it reveals you to be in a situation in which a past event has happened.
When speaking of decision-theoretic problems, it is important to stick to this definition of causality, counter-intuitive though it may be.
Another example of the distinction is in Drescher’s Good and Real. Consider this: if you raise your hand (in a deterministic universe), you are setting the universe’s state 1 billion years ago to be such that a chain of events will unfold in a way that, 1 billion years later, you will raise your hand. In a (lay) sense, raising your hand “caused” that state.
However, because that state is in the past, it violates decision-theoretic usage to say that you caused that state; instead, you should simply say that either:
a) there is an acausal relationship between your choice to raise your hand and that state of the universe, or
b) by choosing to raise your hand, you have learned about a past state of universe. (Just as deciding whether to exit in the Absent-Minded Driver problem tells you something about which exit you are at.)
[1] or, in timeless formalisms, where the cause screens off that which it causes.
I think you’ve misunderstood me. “What you will choose” is a fact that exists before omega fills the boxes.
This fact determines how the boxes are filled.
“What you will choose” (some people seem to refer to this, or something similar, as your “disposition”, but I find my terminology more immediately apparent) causes the future event “how the boxes are filled”
Oh, sorry. Some of this stuff is just tough to parse, but your points are correct.
I’ll leave up the previous post because it’s an important thing to keep in mind.
Indeed. I’ll try to be clearer in future.
That isn’t relevant. For all you know, Omega also created the universe, and so set it up in the situation that disposed you to choose the way you did.
When the game actually begins, you cannot change your disposition, and you cannot change the million dollars.
Someone should wrap it up with a problem where what you choose is determined by what’s in the box. Any ideas, anyone?
Actually, this is excellent. We could rewrite Newcomb’s problem like this:
Omega places in the box, together with the million or non-million, a device that influences your brain: the device is programmed so that you are caused to take both boxes if Omega does not place the million, and so that you are caused to one-box if it does. In other words, Omega decides in advance whether you are going to get the million or not, then sets up the situation so you will make the choice that gets you what it wanted you to get.
However, the influence on your brain is quite subtle; to you, it still feels like you are deciding in the normal way, using some decision theory or other.
Now, do you one-box or two-box? This is certainly exactly the same as the smoking lesion. Nor can you answer “I don’t have to decide because my actions are determined” because your actions might well be determined in real life anyway, and you still have to decide.
If you one-box here, you should not smoke in the lesion problem. If you don’t one-box here… well, too bad for you.
The obvious answer is ‘whatever Omega decided’. But I hope that I one-box.
You might as well say in general that you do “whatever the laws of physics determine.”
But you still have to decide, anyway. Hoping doesn’t help.
I flip a coin; if it’s heads, I give you a million dollars, else I give you a thousand dollars. How much money should you get from me? (And is this problem any different from the last one?)
At some point, these questions no longer help us make rational decisions. Even an AI with complete access to its source code can’t do anything to prepare itself for these situations.
No, you don’t, you don’t get to decide. The decision has been made.
You’re ignoring the fact that, normally, the thoughts going on in your brain are PART of how the decision is determined by the laws of physics. In your scenario, they’re irrelevant. Whatever you think, your action is determined by the machine.
EDIT: http://lesswrong.com/lw/2mc/the_smoking_lesion_a_problem_for_evidential/2hx7?c=1 You’ve claimed that you would one-box in this scenario. You’ve claimed, therefore, that you would one-box if programmed to two-box.
I.e. you’ve claimed you’re capable of logically impossible acts. Either that, or you don’t understand your own scenario.
The machine works only by getting you to think certain things, and these things cause your decision. So you decide in the same way you normally do.
I did not say I would one-box if I were programmed to two-box; I said I would one-box.
And if you were programmed to two-box, and unaware of that fact?
Your response is like responding to “what would you do if there was a 50% chance of you dying tomorrow?” with: “I’d survive”
It completely ignores the point of the situation, and assumes godlike agency.
I do whatever I’m being influenced into doing.
This is a fact.
You can argue all you like about what I should do, but what I will do is already decided, and isn’t influenced by my thoughts, my rationality, or anything else.
All the information needed to determine what I will do is in the lesion/machine.
Applying rationality to a scenario where the agent is by definition incapable of rationality is just plain silly.
Do you think that in real life you are exempt from the laws of physics?
If not, does that mean that “what you will do is already decided”? That you don’t have to make a decision? That you are “incapable of rationality”?
In the real world the information that determines my action is contained within me. In order to determine the action, you would have to run “me” (or at least some reasonable part thereof)
In your version of newcombs the information that determines my action is contained within the machine.
Can you see why I consider that a significant difference?
No. The machine determines your action only by determining what is in you, which determines your action in the normal way.
So you still have to decide what to do.
Do you see how this scenario rules out the possibility of me deciding rationally?
EDIT: In fact, let me explain now, before you answer, give me a sec and I’ll re-edit
EDIT2: If the rational decision is to two-box, and Omega has set me to one-box, then I must not be deciding rationally. Correct?
If the rational decision is to one-box, and Omega has set me to two-box, then I must not be deciding rationally. Correct?
Now, assuming I will not decide rationally, as I know I will not, I need waste no time thinking. I’ll do whichever I feel like.
You can substitute “the laws of physics” for “Omega” in your argument, and if it proves you will not decide rationally in the Omega situation, then it proves you will not decide—anything—rationally in real life.
Presumably (or at least hopefully) if you are a rational agent with a certain DT, then a long and accurate description of the ways that “the laws of physics” affect your decision-making process breaks down into:
The ways that the laws of physics affect the computer you’re running on
How the computer program, and specifically your DT, works when running on a reliable computer.
It’s not clear how a reduction like this could work in your example.
In my example, it is given that Omega decides what you are going to do, but that he causes you to do it in the same way you ordinarily do things, namely with some decision theory and by thinking some thoughts etc.
If the fact that Omega causes it means that you are irrational, then the fact that the laws of physics cause your actions also means that you are irrational.
A rational entity can exist in the laws of physics. A rational entity by definition has a determined decision, if there is a rational decision possible. A rational entity cannot make an irrational decision.
You’re getting hung up on the determinism. That’s not the issue. Rational entities are by definition deterministic.
What they are not is deterministically irrational. Your scenario requires an irrational entity.
Your scenario requires that the entity be able to make an irrational decision, using its normal thought processes. This requires that it be using irrational thought processes.
It seems you are simply assuming away the problem. Your assumptions:
Rational entities can exist.
The choice of either one-boxing or two-boxing in the above scenario is irrational
Omega makes the subject one-box or two-box using its normal decision mechanisms
A rational entity will never make an irrational decision
Then, the described scenario is simply inconsistent, if Omega can use a rational entity as a subject. And so it comes down to which bullet you want to bite. Is it:
I’m somewhat willing to grant A, B, or D, and less apt to grant C or E.
I’m not sure if you have an objection thus far that this does not encapsulate.
D doesn’t make sense to me. If they make their decisions rationally, that shouldn’t result in an irrational act at any point. If rational decision-making can result in irrational decisions we have a contradiction.
C. would not have to be true for all entities, just rational ones; which seems entirely possible.
But I still hold with something very similar to B.
There isn’t a real choice. What you will do has been decided from outside you, and no matter how much you think, you’re not going to change that.
I was simply attempting to show that it is irrelevant to talk about what you should, rationally, do in the scenario, because the scenario doesn’t allow rational choice. It doesn’t actually allow choice at all, but that’s harder to demonstrate than demonstrating that it doesn’t allow rational choice.
Apparently I’m not doing a very good job of it.
ETA: the relevance of the comment below is doubtful. I didn’t read upthread far enough before making it. Original comment was:
What do you mean by “choice”?
Per Possibility and couldness (spoiler warning), if I run a deterministic chess-playing program, I’m willing to call its evaluation of the board and subsequent move a “choice”. How about you?
By choice, I mean my mind deciding what to do on the basis of its own thought processes, out of a set of possibilities that could be realised if my mind were different than it is.
That is what I mean by choice.
A chess-program can do that.
I, in this scenario, cannot. No matter how my mind was setup prior to the scenario, there is only one possible outcome.
EDIT: I had missed the full context as follows: “In my example, it is given that Omega decides what you are going to do, but that he causes you to do it in the same way you ordinarily do things, namely with some decision theory and by thinking some thoughts etc.”
for the comment below, so I accept Kingreaper’s reply here. BUT I will give another answer, below.
If the fact that Omega causes it means that you are irrational, then the fact that the laws of physics cause your actions also means that you are irrational. You are being inconsistent here.
“I mean my mind deciding what to do on the basis of its own thought processes, out of a set of possibilities that could be realised if my mind were different than it is.”
so can we apply this to a chess program as you suggest? I’ll rewrite it as:
“I mean a chess program deciding what to do on the basis of its own algorithmic process, out of a set of possibilities that could be realised if its algorithm were different than it is.”
No problem there! So you didn’t say anything untrue about chess programs.
BUT
“I, in this scenario, cannot. No matter how my mind was setup prior to the scenario, there is only one possible outcome.”
This doesn’t make sense at all. The scenario requires your mind to be set up in a particular way. This does not mean that if your mind were set up in a different way you would still behave in the same way: If your mind were set up in a different way, either the outcome would be the same or your mind would be outside the scope of the scenario.
We can do exactly the same thing with a chess program.
Suppose I get a chess position (the state of play in a game) and present it to a chess program. The chess program replies with the move “Ngf3”. We now set the chess position up the same way again, and we predict that the program will move “Ngf3” (because we just saw it do that with this position). As far as we are concerned, the program can’t do anything else. As predicted, the program moves “Ngf3”. Now, the program was required by its own nature to make that move. It was forced to make that move by the way that the computer code in the program was organized, and by the chess position itself.
We could say that even if the program had been different, it would still have made the same move; but this would be a fallacy, because if the program were different in such a way as to cause it to make a different move, it could never be the program about which we made that prediction. It would be a program about which a different prediction would be needed. Likewise, saying that your mind is compelled to act in a certain way, regardless of how it is set up, is also a fallacy, because the situation describes your mind as having been set up in a specific way, just like the program with the predicted chess move, and if it wasn’t, it would be outside the scope of the prediction.
No matter how my mind is set up, Omega will change the scenario to produce the same outcome.
If you took a chess program and chose a move, then gave it precisely the scenario necessary for it to make that move, I wouldn’t consider that move its choice.
If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?
Okay, so I got the scenario wrong, but I will give another reply. Omega is going to force you to act in a certain way. However, you will still experience what seem, to you, to be cognitive processes, and anyone watching your behavior will see what looks like cognitive processes going on.
Suppose Omega wrote a computer program and he used it to work out how to control your behavior. Suppose he put this in a microchip and implanted it in your brain. You might say your brain is controlled by the chip, but you might also say that the chip and your brain form a composite entity which is still making decisions in the sense that any other mind is.
Now, suppose Omega keeps possession of the chip, but has it control you remotely. Again, you might still say that the chip and your brain form a composite system.
Finally, suppose Omega just does the computations in his own brain. You might say that your brain, together with Omega’s brain, form a composite system which is causing your behavior—and that this composite system makes decisions just like any other system.
“If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?”
We could look at your own brain in these terms and ask about removing parts of it.
In the Omega-composite scenario, the composite entity is clearly making the decisions.
In the chip-composite scenario, the chip-composite appears to be making the decisions, and in the general case I would say it probably is.
Indeed. Not all parts of my brain are involved in all decisions. But, in general, at least some part of me has an effect on what decision I make.
The point, here, is that in the scenario in which Omega is actively manipulating your brain “you” might mean something in a more extended sense and “some part of you” might mean “some part of Omega’s brain”.
Except that that’s not the person the question is being directed at. I’m not “amalgam-Kingreaper-and-Omega” at the moment. Asking what that person would do would garner completely different responses.
For example, amalgam-Kingreaper-and-Omega has a fondness for creating ridiculous scenarios and inflicting them on rationalists.
“Except that that’s not the person the question is being directed at.”
Does that mean that you accept that it might at least be conceivable that the scenario implies the existence of a compound being who is less constrained than the person being controlled by Omega?
Yes. Of course, the part of them that is unconstrained IS Omega.
I’m just not sure about the relevance of this?
Just that the scenario could really be considered as adding an extra component onto a being, one that has a lot of influence on his behavior.
Similarly, we might imagine surgically removing a piece of your brain, connecting the neurons at the edges of the removed piece to the ones left in your brain by radio control, and taking the removed piece to another location, from which it still plays a full part in your thought processes. We would probably still consider that composite system “you”.
What if you had a brain disorder and some electronics were implanted into your brain? Maybe a system to help with social cues for Asperger syndrome, or a system to help with dyslexia? What if we had a process to make extra neurons grow to repair damage? We might easily consider many things to be a “you which has been modified”.
When you say that the question is not directed at the compound entity, one answer could be that the scenario involved adding an extra component to you, that “you” has been extended, and that the compound entity is now “you”.
The scenario, as I understand it, doesn’t really specify the limits of the entity involved. It talks about your brain, and what Omega is doing to it, but it doesn’t specifically disallow the idea that the “you” that it is about gets modified in the process.
Now, if you want to edit the scenario to specify exactly what the “you” is here...
We do. But what if we had a better one?
Yeah, after reading far enough upthread to become aware of the scenario under discussion, I find I agree with your conclusion.
And there’s the rub. My decision in Newcomb’s is also ultimately caused by things outside me; the conditions of the universe before I was born determined what my decision would be.
Whether we call something a ‘real choice’ in this kind of question depends upon whether it’s determined by things within the black box we call ‘our decision-making apparatus’ or something like that, or if the causal arrow bypasses it entirely. The black box screens off causes preceding it.
The scenario might go as follows:
Omega puts a million dollars in the box.
Omega scans your brain.
Omega deduces that if he shows you a picture of a fish at just the right time, it will influence your internal decision-making in some otherwise inscrutable way that causes you to one-box.
You see the fish, and decide (in whatever way you usually decide things) to one-box.
As far as I can tell, that is a ‘real choice’ to one-box. If you had happened upon that picture of a fish in regular Newcomb’s, without Omega being the one to put it there, it would equally be your ‘real choice’ to one-box, and I don’t see how Omega knowing that it will happen changes its realness or choiceness.
My explanation of what I mean by choice is here: http://lesswrong.com/lw/2mc/the_smoking_lesion_a_problem_for_evidential/2hyu?c=1
As you will see, it exists in the standard newcomb, but not in this variant.
To directly address your fish example: If, in the standard newcomb, my mind had been different, seeing the fish wouldn’t necessarily have caused me to make the same choice.
In the modified newcomb, if my mind had been different I would have seen a different thing. The state of my mind had no impact on the outcome of events.
The fact that the causal arrows are rooted in some other being’s decision algorithm black box could reasonably be taken as the criterion for calling it that being’s choice. Still real, still choice, not my choice.
No, it proves I will not decide everything rationally if I don’t decide everything rationally. Which is pretty tautologous.
The Omega example requires that I will not decide everything rationally.
The real world permits the possibility of a rational agent. Thus it makes sense to question what a rational agent would do. Your scenario doesn’t permit a rational agent, thus it makes no sense to ask what a rational agent would do.
You’re missing the point, Unknowns. In your scenario, my decision doesn’t depend on how I decide. It just depends on the setting of the box. So I might as well just decide arbitrarily, and save effort.
What would you do in your own scenario?
In real life, your decision doesn’t depend on how you decide it. It just depends on the positions of your atoms and the laws of physics. So you might as well just decide arbitrarily, and save effort.
I would one-box.
So, if Omega programmed you to two-box, you would one-box?
That’s not exactly consistent. In fact, that’s logically impossible.
Essentially, you’re denying your own scenario.
You left out some steps in your argument. It appears you were going for a disjunction elimination, but if so I’m not convinced of one premise. Let me lay out more explicitly what I think your argument is supposed to be, then I’ll show where I think it’s gone wrong.
A = “The rational decision is to two-box”
B = “Omega has set me to one-box”
C = “The rational decision is to one-box”
D = “Omega has set me to two-box”
E = “I must not be deciding rationally”
1. (A∧B) → E
2. (C∧D) → E
3. (A∧B) ∨ (C∧D)
Therefore, E.
I’ll grant #1 and #2. This is a valid argument, but the dubious proposition is #3. It is entirely possible that (A∧D) or that (C∧B). And in those cases, E is not guaranteed.
In short, you might decide rationally in cases where you’re set to one-box and it’s rational to one-box.
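If it helps, here is a brute-force check of this reading of the argument (a quick sketch; the Boolean encoding and the variable names are mine, purely for illustration):

```python
from itertools import product

# Boolean encoding of the propositions defined above (my own, for illustration):
# A: the rational decision is to two-box    B: Omega has set me to one-box
# C: the rational decision is to one-box    D: Omega has set me to two-box
# E: I must not be deciding rationally

def implies(p, q):
    return (not p) or q

# With premise 3, no assignment makes all three premises true while E is false:
with_p3 = [
    (A, B, C, D, E)
    for A, B, C, D, E in product([False, True], repeat=5)
    if implies(A and B, E)         # premise 1: (A∧B) → E
    and implies(C and D, E)        # premise 2: (C∧D) → E
    and ((A and B) or (C and D))   # premise 3: (A∧B) ∨ (C∧D)
    and not E
]
print(with_p3)  # prints [], i.e. E follows

# Swap premise 3 for the combinations allowed above, (A∧D) or (C∧B):
without_p3 = [
    (A, B, C, D, E)
    for A, B, C, D, E in product([False, True], repeat=5)
    if implies(A and B, E) and implies(C and D, E)
    and ((A and D) or (C and B))
    and not E
]
print(without_p3)  # non-empty, i.e. E is no longer guaranteed
```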
It is possible that I will make the rational decision in one path of the scenario. But the scenario, by its very nature, contains both paths. In one of the two paths I must be deciding irrationally.
Given that it was stated that I will use my normal thought-processes in both paths, my normal thought-processes must, in order for this scenario to be possible, be irrational.
Proposition 3 is only required to be possible, not to be true, and is supported by the existence of both paths of the scenario: the scenario requires that both A and B are possible.
You’re mixing modes.
It is not the case that in order for this scenario to be possible, your normal thought-processes must be necessarily irrational. Rather, in order for this scenario to be possible, your normal thought-processes must be possibly irrational. And clearly that’s the case for normal non-supernatural decision-making.
ETA: Unknowns stated the conclusion better
Let’s try a different tack: is it rational to decide rationally in Unknowns’ scenario?
1. Thinking takes effort, and this effort is a disutility (-c).
2. If I don’t think, I will come to the answer the machine is set to (of utility X).
3. If I do think, I will come to the answer the machine is set to (of utility X).
My outcome if I don’t think is X. My outcome if I do think is X - c, which is less than X. So I shouldn’t waste my effort thinking this through.
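To put made-up numbers on it (the values below are purely illustrative):

```python
X = 100  # utility of whatever outcome the machine has already fixed (illustrative)
c = 1    # disutility of the effort spent deliberating (illustrative)

outcome_without_thinking = X      # point 2: I reach the machine's answer anyway
outcome_with_thinking = X - c     # point 3: same answer, minus the effort of thinking
print(outcome_with_thinking < outcome_without_thinking)  # True whenever c > 0
```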
If you did not know about the box, you’d experience your normal decision-making apparatus output a decision in the normal way. Either you’re the sort of person who generally decides rationally or not, and if you’re a particularly rational person the box might have to make you do some strange mental backflips to justify the decision in the case that it’s not rational to make the choice the box specifies.
It is isomorphic, in this sense, to the world determining your actions, except that you’ll get initial conditions that are very strange, in half the times you play this game (assuming a 50% chance of either outcome).
If you know about the box, then it becomes simpler, as you will indeed be able to use this reasoning and the box will probably just have to flip a bit here or there to get you to pick one or the other.
If you’re not the sort of person who usually decides rationally, then following your strategy should be easy. For me, I anticipate that I would decide rationally half the time, and go rather insane the other half (assuming there was a clear rational decision, as you implied above).
No. What is in the box is not caused by what you will choose. It is caused by Omega after analyzing your original disposition, before the game begins. After you start the game, your choice and the million share a cause, namely your original disposition. So the cases are the same—same lines of causality, same scenario.
You can no more change your original disposition (which causes the million), than you can change the lesion that causes cancer.
You can control your original disposition in exactly the same way you usually control your decisions. Even normally, when you consider a decision, the outcome is already settled and the measure of all the Everett branches involved is already determined. Just because you consider the counterfactual of local miracles that result in a different decision when evaluating your preferences doesn’t mean any such local miracles actually happen. Your original disposition is caused by your preferences between the two “possible” actions, just like with any other decision. The lesion example is different because your preferences are at no point involved in the causal history of the cancer.
Even going on that basis, which I disagree with (I disagree with the “lack of simulation” hypothesis; see the other thread of comments in a second)
Right now, I could precommit myself to winning in all Newcomb-like problems I encounter in future, and thus, right now, I can change my disposition.
I can’t precommit to not finding something irresistible due to brain damage/lesions/whatever.
That’s a pretty significant difference.
You can precommit to not smoking in the same way you can precommit to taking only one box. If you might later find smoking irresistible, you might later find taking both boxes irresistible.
Precommitting changes my disposition, making me not find two-boxing irresistible.
Precommitting CAN’T change whether I get the lesion or not.
In Newcomb’s scenario, precommitting changes the outcome. In the smoking lesion, it doesn’t.
Precommitting not to smoke also changes my disposition regarding smoking. I still might find it irresistible later. Likewise if I precommit to one-box. That says nothing about how I will feel about it later, when the situation happens.
In fact, even in real life, I suspect many one-boxers would two-box in the end when they are standing there and thinking, “Either the million is there or it isn’t, and there’s nothing I can do about it.” In other words, they might very well find two-boxing irresistible, even if they had precommitted.
If they give in, they have not successfully precommitted. Now, you could argue that successfully precommitting is impossible. But in the Newcombian problem, it doesn’t seem to be.
In the Lesion problem, the problem involves what essentially amounts to brain-damage, which gives a clear reason why precommitment is impossible.
This misses the point. If precommitting changes your disposition, and your disposition decides the outcome, precommitting is worthwhile.
If precommitting changes your disposition, and a lesion decides the outcome, precommitting is irrelevant.
Actually, talking about precommitting is in any case a side issue, because it happens before the start of the game. We can just assume you’ve never thought about it before Omega comes up to you, says that it has predicted your decision, and tells you the rules. Now what do you do?
In this case the situation is clearly the same as the lesion, and the lines of causality are the same: both your present disposition, and the million in the box, have a common cause, namely your previous disposition, but you can do nothing about it.
If you should one-box here, then you should not smoke in the lesion case.
My intuition says the opposite: I think many people who claimed they would two-box would one-box in the event. $1000 is so small compared to $1000000, after all; why take the chance that Omega will be wrong?
Every event has multiple causes, and which causes you point out is not as important as you seem to think. In Newcomb, Omega’s decision and your one-or-two-boxing are both ultimately consequences of the state of the world before the scenario started.
The only difference between Newcomb and the lesion is that in the case of a 100% effective lesion, there will be no correlation between having read about EDT and smoking. And in a world where there was such a correlation, one should start believing in fate.
I think the difference is that your disposition to one-box or two-box is something you can decide to change. Whether you were born with a lesion is not.
When you are standing there, and there is either a million in the box or there isn’t, can you change whether or not there is a million in the box?
No, no more than whether you were born with a lesion or not. The argument, “I should smoke, because if I have the lesion I have it whether or not I smoke” is exactly the same as the argument “I should take two boxes, because if the million is there, it is there whether or not I take two boxes.”
I agree, insofar as I think “I should not smoke” is true as long as I’m also allowed to say “I should not have the lesion”.
The problem, I think, is that we are running into the proper use of ‘should’. We’d need to draw very sharp lines around the things we pretend that we can or cannot control for purposes of that word.
Basically, you end up with a black-box concept containing some but not all of the machinery that led up to your decision such that words like ‘should’ and ‘control’ apply to the workings of the black box and not to anything else. And then we can decide whether it’s sensible to ask ‘should I smoke’ in Smoking Lesion and ‘should I one-box’ in Newcomb.
Right now I don’t have a good enough handle on this model to draw those lines, and so don’t have an answer to this puzzle.
Yes I can, right now.
You’re correct that if the correlation were known to be 100% then the only meaningful advice one could give would be not to smoke. However, it’s important to understand that “100% correlation” is a degenerate case of the Smoking Lesion problem, as I’ll try to explain:
Imagine a problem of the following form: Y is a variable under our control, which we can either set to k or -k for some k ≥ 0 (0 is not ruled out). X is an N(0, m^2) random variable which we do not observe, for some m ≥ 0 (again, 0 is not ruled out). Our payoff has the form (X + Y) − 1000(X + rY) for some constant r with 0 ≤ r ≤ 1. Working out the optimal strategy is rather trivial. But anyway, in the edge cases: if r = 0 we should put Y = k, and if r = 1 we should put Y = -k.
Now I want to say that the case r = 0 is analogous to the Smoking Lesion problem and the case r = 1 is analogous to Newcomb’s problem (with a flawless predictor):
Y corresponds to the part of a person’s will that they exercise conscious control over.
X corresponds to the part of a person’s will that just does whatever it does in spite of the person’s best intentions.
The ratio k : m measures “the extent to which we have free will.”
The (X + Y) term is the ‘temptation’, analogous to the extra $1000 in the second box or the pleasure gained from smoking.
The −1000(X + rY) is the ‘punishment’, analogous to the loss of the $1000000 in the first box, or getting cancer.
The constant r measures the extent to which even the ‘free part’ of our will is visible to whoever or whatever decides whether to punish us. (For simplicity, we take for granted that the ‘unfree part’ of our will is visible, but this needn’t always be the case.)
In the case of Newcomb’s Problem (with a perfect predictor) we have r = 1 and so the X term becomes irrelevant—we may as well simply treat the player as being ‘totally in control of their decision’.
In the case of the Smoking Lesion problem, the ‘Lesion’ is just some predetermined physiological property over which the player exercises no control. Whether the player smokes and whether they get cancer are conditionally independent given the presence or absence of the Lesion. This corresponds to r = 0 (and note that the problem wouldn’t even make sense unless m > 0). But then the only way to have 100% correlation between ‘temptation’ and ‘punishment’ is to put k = 0 so that the person’s will is ‘totally unfree’. But if the person’s will is ‘totally unfree’ then it doesn’t really make sense to treat them as a decision-making agent.
ETA: Perhaps this analogy can be developed into an analysis of the original problems. One way to do it would be to define random variables Z and W taking values 0, 1 such that log(P(Z = 1 | X and Y) / P(Z = 0 | X and Y)) = a linear combination of X and Y (and likewise for W, but with a different linear combination), and then have Z be the “player’s decision” and W be “Omega’s decision / whether the person gets cancer”. But I think the ratio of extra work to extra insight would be quite high.
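For concreteness, here is a small simulation sketch of the toy model above (the function name and the values k = m = 1 are just illustrative assumptions; the point is only that the optimal Y flips sign as r goes from 0 to 1):

```python
import random

def average_payoff(y, r, m=1.0, trials=100_000, seed=0):
    """Average of (X + Y) - 1000*(X + r*Y) over draws of X ~ N(0, m^2), with Y held fixed."""
    rng = random.Random(seed)  # same seed for both values of Y, so the comparison is like-for-like
    total = 0.0
    for _ in range(trials):
        x = rng.gauss(0.0, m)
        total += (x + y) - 1000.0 * (x + r * y)
    return total / trials

k = 1.0
for r in (0.0, 1.0):
    better = max((+k, -k), key=lambda y: average_payoff(y, r))
    print(f"r = {r}: Y = {'+k' if better > 0 else '-k'} gives the higher average payoff")
# Expected: r = 0 favours Y = +k (take the 'temptation'), r = 1 favours Y = -k.
```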