I do whatever I’m being influenced into doing.
This is a fact.
You can argue all you like about what I should do, but what I will do is already decided, and isn’t influenced by my thoughts, my rationality, or anything else.
All the information needed to determine what I will do is in the lesion/machine.
Applying rationality to a scenario where the agent is by definition incapable of rationality is just plain silly.
Do you think that in real life you are exempt from the laws of physics?
If not, does that mean that “what you will do is already decided”? That you don’t have to make a decision? That you are “incapable of rationality”?
In the real world the information that determines my action is contained within me. In order to determine the action, you would have to run “me” (or at least some reasonable part thereof).
In your version of Newcomb’s, the information that determines my action is contained within the machine.
Can you see why I consider that a significant difference?
No. The machine determines your action only by determining what is in you, which determines your action in the normal way.
So you still have to decide what to do.
Do you see how this scenario rules out the possibility of me deciding rationally?
EDIT: In fact, let me explain now, before you answer, give me a sec and I’ll re-edit
EDIT2: If the rational decision is to two-box, and Omega has set me to one-box, then I must not be deciding rationally. Correct?
If the rational decision is to one-box, and Omega has set me to two-box, then I must not be deciding rationally. Correct?
Now, assuming I will not decide rationally, as I know I will not, I need waste no time thinking. I’ll do whichever I feel like.
You can substitute “the laws of physics” for “Omega” in your argument, and if it proves you will not decide rationally in the Omega situation, then it proves you will not decide—anything—rationally in real life.
Presumably (or at least hopefully) if you are a rational agent with a certain DT, then a long and accurate description of the ways that “the laws of physics” affect your decision-making process breaks down into:
The ways that the laws of physics affect the computer you’re running on
How the computer program, and specifically your DT, works when running on a reliable computer.
It’s not clear how a reduction like this could work in your example.
In my example, it is given that Omega decides what you are going to do, but that he causes you to do it in the same way you ordinarily do things, namely with some decision theory and by thinking some thoughts etc.
If the fact that Omega causes it means that you are irrational, then the fact that the laws of physics cause your actions also means that you are irrational.
A rational entity can exist in the laws of physics. A rational entity by definition has a determined decision, if there is a rational decision possible. A rational entity cannot make an irrational decision.
You’re getting hung up on the determinism. That’s not the issue. Rational entities are by definition deterministic.
What they are not is deterministically irrational. Your scenario requires an irrational entity.
Your scenario requires that the entity be able to make an irrational decision, using its normal thought processes. This requires that it be using irrational thought processes.
It seems you are simply assuming away the problem. Your assumptions:
Rational entities can exist.
The choice of either one-boxing or two-boxing in the above scenario is irrational.
Omega makes the subject one-box or two-box using its normal decision mechanisms.
A rational entity will never make an irrational decision.
Then, the described scenario is simply inconsistent, if Omega can use a rational entity as a subject. And so it comes down to which bullet you want to bite. Is it:
A. Rational entities can’t exist.
B. Neither choice is irrational.
C. Omega cannot use the subject’s normal decision mechanisms to effect the choice.
D. Rational entities are allowed to make irrational decisions sometimes.
E. The thought experiment is simply inconsistent with reality.
I’m somewhat willing to grant A, B, or D, and less apt to grant C or E.
I’m not sure if you have an objection thus far that this does not encapsulate.
D doesn’t make sense to me. If they make their decisions rationally, that shouldn’t result in an irrational act at any point. If rational decision-making can result in irrational decisions, we have a contradiction.
C would not have to be true for all entities, just rational ones, which seems entirely possible.
But I still hold with something very similar to B.
There isn’t a real choice. What you will do has been decided from outside you, and no matter how much you think you’re not going to change that.
I was simply attempting to show that it is irrelevant to talk about what you should, rationally, do in the scenario, because the scenario doesn’t allow rational choice. It doesn’t actually allow choice at all, but that’s harder to demonstrate than demonstrating that it doesn’t allow rational choice.
Apparently I’m not doing a very good job of it.
ETA: the relevance of the comment below is doubtful. I didn’t read upthread far enough before making it. Original comment was:
...the scenario doesn’t allow rational choice. It doesn’t actually allow choice at all...
What do you mean by “choice”?
Per Possibility and couldness (spoiler warning), if I run a deterministic chess-playing program, I’m willing to call its evaluation of the board and subsequent move a “choice”. How about you?
By choice, I mean my mind deciding what to do on the basis of its own thought processes, out of a set of possibilities that could be realised if my mind were different than it is.
That is what I mean by choice.
A chess-program can do that.
I, in this scenario, cannot. No matter how my mind was set up prior to the scenario, there is only one possible outcome.
EDIT—I had missed the full context as follows: “In my example, it is given that Omega decides what you are going to do, but that he causes you to do it in the same way you ordinarily do things, namely with some decision theory and by thinking some thoughts etc.”
for the comment below, so I accept Kingreaper’s reply here. BUT I will give another answer, below.
If the fact that Omega causes it means that you are irrational, then the fact that the laws of physics cause your actions also means that you are irrational. You are being inconsistent here.
“I mean my mind deciding what to do on the basis of its own thought processes, out of a set of possibilities that could be realised if my mind were different than it is.”
So can we apply this to a chess program, as you suggest? I’ll rewrite it as:
“I mean a chess program deciding what to do on the basis of its own algorithmic process, out of a set of possibilities that could be realised if its algorithm were different than it is.”
No problem there! So you didn’t say anything untrue about chess programs.
BUT
“I, in this scenario, cannot. No matter how my mind was set up prior to the scenario, there is only one possible outcome.”
This doesn’t make sense at all. The scenario requires your mind to be set up in a particular way. This does not mean that if your mind were set up in a different way you would still behave in the same way: if your mind were set up in a different way, either the outcome would be the same or your mind would be outside the scope of the scenario.
We can do exactly the same thing with a chess program.
Suppose I get a chess position (the state of play in a game) and present it to a chess program. The chess program replies with the move “Ngf3”. We now set the chess position up the same way again, and we predict that the program will move “Ngf3” (because we just saw it do that with this position). As far as we are concerned, the program can’t do anything else. As predicted, the program moves “Ngf3”.

Now, the program was required by its own nature to make that move. It was forced to make that move by the way the computer code in the program was organized, and by the chess position itself. We could say that even if the program had been different, it would still have made the same move—but this would be a fallacy, because if the program were different in such a way as to cause it to make a different move, it could never be the program about which we made that prediction. It would be a program about which a different prediction would be needed.

Likewise, saying that your mind is compelled to act in a certain way, regardless of how it is set up, is also a fallacy, because the scenario describes your mind as having been set up in a specific way, just like the program with the predicted chess move; if it weren’t, it would be outside the scope of the prediction.
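To make the point concrete, here is a minimal sketch in Python (a hand-scored toy evaluator standing in for a real chess engine; the move names and scores are invented for illustration). The same program applied to the same position always yields the same move, and a program that would move differently is simply a different program, about which the prediction was never made.

```python
# Toy stand-in for a deterministic chess program: a fixed evaluation
# function plus a fixed position yields exactly one move.

def choose_move(evaluate, candidate_moves):
    """Deterministically pick the highest-scoring candidate move."""
    return max(candidate_moves, key=evaluate)

position = ["Ngf3", "d4", "Bc4"]  # hypothetical legal moves in some position

predicted_program = lambda m: {"Ngf3": 3, "d4": 2, "Bc4": 1}[m]
different_program = lambda m: {"Ngf3": 1, "d4": 3, "Bc4": 2}[m]

# Re-running the predicted program on the same position repeats the same "choice".
assert choose_move(predicted_program, position) == "Ngf3"
assert choose_move(predicted_program, position) == "Ngf3"

# A program that moves differently exists, but it is not the program the
# prediction was about -- it would need a different prediction.
assert choose_move(different_program, position) == "d4"
```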
No matter how my mind is set up, Omega will change the scenario to produce the same outcome.
If you took a chess program and chose a move, then gave it precisely the scenario necessary for it to make that move, I wouldn’t consider that move its choice.
If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?
Okay, so I got the scenario wrong, but I will give another reply. Omega is going to force you to act in a certain way. However, you will still experience what seem, to you, to be cognitive processes, and anyone watching your behavior will see what looks like cognitive processes going on.
Suppose Omega wrote a computer program and he used it to work out how to control your behavior. Suppose he put this in a microchip and implanted it in your brain. You might say your brain is controlled by the chip, but you might also say that the chip and your brain form a composite entity which is still making decisions in the sense that any other mind is.
Now, suppose Omega keeps possession of the chip, but has it control you remotely. Again, you might still say that the chip and your brain form a composite system.
Finally, suppose Omega just does the computations in his own brain. You might say that your brain, together with Omega’s brain, form a composite system which is causing your behavior—and that this composite system makes decisions just like any other system.
“If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?”
We could look at your own brain in these terms and ask about removing parts of it.
In the Omega-composite scenario, the composite entity is clearly making the decisions.
In the chip-composite scenario, the chip-composite appears to be making the decisions, and in the general case I would say it probably is.
Indeed. Not all parts of my brain are involved in all decisions. But, in general, at least some part of me has an effect on what decision I make.
The point, here, is that in the scenario in which Omega is actively manipulating your brain “you” might mean something in a more extended sense and “some part of you” might mean “some part of Omega’s brain”.
Except that that’s not the person the question is being directed at. I’m not “amalgam-Kingreaper-and-Omega” at the moment. Asking what that person would do would garner completely different responses.
For example, amalgam-Kingreaper-and-Omega has a fondness for creating ridiculous scenarios and inflicting them on rationalists.
“Except that that’s not the person the question is being directed at.”
Does that mean that you accept that it might at least be conceivable that the scenario implies the existence of a compound being who is less constrained than the person being controlled by Omega?
Yes. Of course, the part of them that is unconstrained IS Omega.
I’m just not sure about the relevance of this?
Just that the scenario could really be considered as adding an extra component onto a being—one that has a lot of influence on his behavior.
Similarly, we might imagine surgically removing a piece of your brain, connecting the neurons at the edges of the removed piece to the ones left in your brain by radio control, and taking the removed piece to another location, from which it still plays a full part in your thought processes. We would probably still consider that composite system “you”.
What if you had a brain disorder and some electronics were implanted into your brain? Maybe a system to help with social cues for Asperger syndrome, or a system to help with dyslexia? What if we had a process to make extra neurons grow to repair damage? We might easily consider many things to be a “you which has been modified”.
When you say that the question is not directed at the compound entity, one answer could be that the scenario involved adding an extra component to you, that “you” has been extended, and that the compound entity is now “you”.
The scenario, as I understand it, doesn’t really specify the limits of the entity involved. It talks about your brain, and what Omega is doing to it, but it doesn’t specifically disallow the idea that the “you” that it is about gets modified in the process.
Now, if you want to edit the scenario to specify exactly what the “you” is here...
We do. But what if we had a better one?
Yeah, after reading far enough upthread to become aware of the scenario under discussion, I find I agree with your conclusion.
“There isn’t a real choice. What you will do has been decided from outside you, and no matter how much you think you’re not going to change that.”
And there’s the rub. My decision in Newcomb’s is also ultimately caused by things outside me; the conditions of the universe before I was born determined what my decision would be.
Whether we call something a ‘real choice’ in this kind of question depends upon whether it’s determined by things within the black box we call ‘our decision-making apparatus’ or something like that, or if the causal arrow bypasses it entirely. The black box screens off causes preceding it.
The scenario might go as follows:
Omega puts a million dollars in the box.
Omega scans your brain.
Omega deduces that if he shows you a picture of a fish at just the right time, it will influence your internal decision-making in some otherwise inscrutable way that causes you to one-box.
You see the fish, and decide (in whatever way you usually decide things) to one-box.
As far as I can tell, that is a ‘real choice’ to one-box. If you had happened upon that picture of a fish in regular Newcomb’s, without Omega being the one to put it there, it would equally be your ‘real choice’ to one-box, and I don’t see how Omega knowing that it will happen changes its realness or choiceness.
My explanation of what I mean by choice is here: http://lesswrong.com/lw/2mc/the_smoking_lesion_a_problem_for_evidential/2hyu?c=1
As you will see, it exists in the standard Newcomb, but not in this variant.
To directly address your fish example: If, in the standard Newcomb, my mind had been different, seeing the fish wouldn’t necessarily have caused me to make the same choice.
In the modified Newcomb, if my mind had been different I would have seen a different thing. The state of my mind had no impact on the outcome of events.
The fact that the causal arrows are rooted in some other being’s decision algorithm black box could reasonably be taken as the criterion for calling it that being’s choice. Still real, still choice, not my choice.
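Here is a rough sketch of the contrast being drawn, with the agent’s mind caricatured as a single disposition label; the function names and values are illustrative assumptions, not anything specified in the thread. In the standard game the action counterfactually depends on the agent’s mind; in the modified game Omega tailors the stimulus so the action comes out the same whatever the mind is.

```python
# Caricature: "mind" is just a disposition label; Omega's manipulation is
# modelled as choosing whatever stimulus makes the outcome come out his way.

def standard_newcomb(mind):
    # The outcome is a function of the agent's own decision procedure:
    # a different mind can produce a different action.
    return "one-box" if mind == "disposed to one-box" else "two-box"

def modified_newcomb(mind, omega_target="one-box"):
    # Omega scans the mind and tailors the input (the fish, or something
    # else) so that the action is omega_target no matter what the mind is.
    return omega_target

for mind in ["disposed to one-box", "disposed to two-box"]:
    print(mind, "->", standard_newcomb(mind), "/", modified_newcomb(mind))

# Standard: the action varies with the mind (the counterfactual-dependence
# criterion for "choice"). Modified: the action is constant over all minds,
# so the variation lives entirely in Omega.
```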
No, it proves I will not decide everything rationally if I don’t decide everything rationally. Which is pretty tautologous.
The Omega example requires that I will not decide everything rationally.
The real world permits the possibility of a rational agent. Thus it makes sense to question what a rational agent would do. Your scenario doesn’t permit a rational agent, thus it makes no sense to ask what a rational agent would do.
You’re missing the point, Unknowns. In your scenario, my decision doesn’t depend on how I decide. It just depends on the setting of the box. So I might as well just decide arbitrarily, and save effort.
What would you do in your own scenario?
In real life, your decision doesn’t depend on how you decide it. It just depends on the positions of your atoms and the laws of physics. So you might as well just decide arbitrarily, and save effort.
I would one-box.
So, if Omega programmed you to two-box, you would one-box?
That’s not exactly consistent. In fact, that’s logically impossible.
Essentially, you’re denying your own scenario.
You left out some steps in your argument. It appears you were going for a disjunction elimination, but if so I’m not convinced of one premise. Let me lay out more explicitly what I think your argument is supposed to be, then I’ll show where I think it’s gone wrong.
A = “The rational decision is to two-box”
B = “Omega has set me to one-box”
C = “The rational decision is to one-box”
D = “Omega has set me to two-box”
E = “I must not be deciding rationally”
1. (A∧B)→E
2. (C∧D)→E
3. (A∧B)∨(C∧D)
4. ∴ E
I’ll grant #1 and #2. This is a valid argument, but the dubious proposition is #3. It is entirely possible that (A∧D) or that (C∧B). And in those cases, E is not guaranteed.
In short, you might decide rationally in cases where you’re set to one-box and it’s rational to one-box.
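A brute-force semantic check of the reconstruction above (ordinary propositional logic; nothing here is specific to the thread beyond the letter labels): premises 1–3 entail E, but premises 1–2 alone do not, with A∧D (or C∧B) plus ¬E as a countermodel.

```python
# Enumerate all truth assignments to A..E and test semantic entailment.
from itertools import product

def entails(premises, conclusion):
    """True iff every assignment satisfying all premises also satisfies the conclusion."""
    for values in product([False, True], repeat=5):
        v = dict(zip("ABCDE", values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

p1 = lambda v: (not (v["A"] and v["B"])) or v["E"]          # (A∧B)→E
p2 = lambda v: (not (v["C"] and v["D"])) or v["E"]          # (C∧D)→E
p3 = lambda v: (v["A"] and v["B"]) or (v["C"] and v["D"])   # (A∧B)∨(C∧D)
e  = lambda v: v["E"]

print(entails([p1, p2, p3], e))  # True: disjunction elimination goes through
print(entails([p1, p2], e))      # False: e.g. A and D true, E false is a countermodel
```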
It is possible that I will make the rational decision in one path of the scenario. But the scenario, by its very nature, contains both paths. In one of the two paths I must be deciding irrationally.
Given that, as was stated, I will use my normal thought-processes in both paths, my normal thought-processes must, in order for this scenario to be possible, be irrational.
Proposition 3 is only required to be possible, not to be true, and is supported by the existence of both paths of the scenario: the scenario requires that both A and B are possible.
It is possible that I will make the rational decision in one path of the scenario. But the scenario contains both paths. In one of the two paths I must be deciding irrationally.
Given that, as was stated, I will use my normal thought-processes in both paths, my normal thought-processes must, in order for this scenario to be possible, be irrational.
You’re mixing modes.
It is not the case that in order for this scenario to be possible, your normal thought-processes must be necessarily irrational. Rather, in order for this scenario to be possible, your normal thought-processes must be possibly irrational. And clearly that’s the case for normal non-supernatural decision-making.
ETA: Unknowns stated the conclusion better
Let’s try a different tack: Is it rational to decide rationally in Unknowns’ scenario?
1. Thinking takes effort, and this effort is a disutility (-c).
2. If I don’t think, I will come to the answer the machine is set to (of utility X).
3. If I do think, I will come to the answer the machine is set to (of utility X).
My outcome if I don’t think is X. My outcome if I do think is X - c, which is less than X. So I shouldn’t waste my effort thinking this through.
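Spelled out as a trivial calculation, under the scenario’s stipulation that deliberating cannot change the answer the machine has fixed (X and c are placeholder values):

```python
X = 1_000_000  # utility of the outcome the machine has already fixed (placeholder)
c = 1          # any positive cost of deliberating (placeholder)

utility_if_i_dont_think = X       # point 2: same outcome, no effort spent
utility_if_i_think = X - c        # point 3: same outcome, minus the effort

assert utility_if_i_think < utility_if_i_dont_think  # so don't bother thinking
```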
If you did not know about the box, you’d experience your normal decision-making apparatus output a decision in the normal way. Either you’re the sort of person who generally decides rationally or not, and if you’re a particularly rational person the box might have to make you do some strange mental backflips to justify the decision in the case that it’s not rational to make the choice the box specifies.
It is isomorphic, in this sense, to the world determining your actions, except that you’ll get initial conditions that are very strange, in half the times you play this game (assuming a 50% chance of either outcome).
If you know about the box, then it becomes simpler, as you will indeed be able to use this reasoning and the box will probably just have to flip a bit here or there to get you to pick one or the other.
If you’re not the sort of person who usually decides rationally, then following your strategy should be easy. For me, I anticipate that I would decide rationally half the time, and go rather insane the other half (assuming there was a clear rational decision, as you implied above).