Presumably (or at least hopefully) if you are a rational agent with a certain DT, then a long and accurate description of the ways that “the laws of physics” affect your decision-making process breaks down into:
The ways that the laws of physics affect the computer you’re running on
How the computer program, and specifically your DT, works when running on a reliable computer.
It’s not clear how a reduction like this could work in your example.
In my example, it is given that Omega decides what you are going to do, but that he causes you to do it in the same way you ordinarily do things, namely with some decision theory and by thinking some thoughts, etc.
If the fact that Omega causes it means that you are irrational, then the fact that the laws of physics cause your actions also means that you are irrational.
A rational entity can exist in the laws of physics.
A rational entity by definition has a determined decision, if there is a rational decision possible.
A rational entity cannot make an irrational decision.
You’re getting hung up on the determinism. That’s not the issue. Rational entities are by definition deterministic.
What they are not is deterministically irrational. Your scenario requires an irrational entity.
Your scenario requires that the entity be able to make an irrational decision, using its normal thought processes.
This requires that it be using irrational thought processes.
It seems you are simply assuming away the problem. Your assumptions:
Rational entities can exist.
The choice of either one-boxing or two-boxing in the above scenario is irrational.
Omega makes the subject one-box or two-box using its normal decision mechanisms.
A rational entity will never make an irrational decision.
Then, the described scenario is simply inconsistent, if Omega can use a rational entity as a subject. And so it comes down to which bullet you want to bite. Is it:
A. Rational entities can't exist.
B. Neither choice is irrational.
C. Omega cannot use the subject's normal decision mechanisms to effect the choice.
D. Rational entities are allowed to make irrational decisions sometimes.
E. The thought experiment is simply inconsistent with reality.
I’m somewhat willing to grant A, B, or D, and less apt to grant C or E.
I’m not sure if you have an objection thus far that this does not encapsulate.
D doesn’t make sense to me. If they make their decisions rationally, that shouldn’t result in an irrational act at any point. If rational decision-making can result in irrational decisions we have a contradiction.
C would not have to be true for all entities, just rational ones, which seems entirely possible.
But I still hold with something very similar to B.
There isn’t a real choice. What you will do has been decided from outside you, and no matter how much you think, you’re not going to change that.
I was simply attempting to show that it is irrelevant to talk about what you should, rationally, do in the scenario, because the scenario doesn’t allow rational choice. It doesn’t actually allow choice at all, but that’s harder to demonstrate than demonstrating that it doesn’t allow rational choice.
Apparently I’m not doing a very good job of it.
ETA: the relevance of the comment below is doubtful. I didn’t read upthread far enough before making it. Original comment was:
...the scenario doesn’t allow rational choice. It doesn’t actually allow choice at all...
What do you mean by “choice”?
Per Possibility and couldness (spoiler warning), if I run a deterministic chess-playing program, I’m willing to call its evaluation of the board and subsequent move a “choice”. How about you?
By choice, I mean my mind deciding what to do on the basis of its own thought processes, out of a set of possibilities that could be realised if my mind were different than it is.
That is what I mean by choice.
A chess-program can do that.
I, in this scenario, cannot. No matter how my mind was set up prior to the scenario, there is only one possible outcome.
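Kingreaper’s definition of choice can be made concrete with a toy sketch (deliberately not a real chess engine: the “program” here is just an evaluation function over a fixed move list, and all names are hypothetical). The move is fully determined, yet it still depends counterfactually on how the program is built:

```python
# Toy illustration of "choice" as defined above: a deterministic
# "program" is an evaluation function, and its choice is the argmax
# over the moves available in a position. The same program always
# picks the same move, but a differently-built program could pick a
# different one from the same set of possibilities.

def choose(evaluate, moves):
    """Deterministically pick the move the program scores highest."""
    return max(moves, key=evaluate)

moves = ["Ngf3", "e4", "d4"]

# Two programs/minds that differ only in what they value.
prefers_knights = lambda m: 2 if m.startswith("N") else 1
prefers_pawns   = lambda m: 1 if m.startswith("N") else 2

assert choose(prefers_knights, moves) == "Ngf3"  # same inputs, same move, every run
assert choose(prefers_pawns, moves) != "Ngf3"    # a different mind chooses differently
```

On this reading, determinism alone does not rule out choice: what matters is that the outcome would have been different had the chooser been different.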
EDIT—I had missed the full context as follows:
“In my example, it is given that Omega decides what you are going to do, but that he causes you to do it in the same way you ordinarily do things, namely with some decision theory and by thinking some thoughts, etc.”
for the comment below, so I accept Kingreaper’s reply here. BUT I will give another answer, below.
If the fact that Omega causes it means that you are irrational, then the fact that the laws of physics cause your actions also means that you are irrational.
You are being inconsistent here.
“I mean my mind deciding what to do on the basis of its own thought processes, out of a set of possibilities that could be realised if my mind were different than it is.”
So can we apply this to a chess program, as you suggest? I’ll rewrite it as:
“I mean a chess program deciding what to do on the basis of its own algorithmic process, out of a set of possibilities that could be realised if its algorithm were different than it is.”
No problem there! So you didn’t say anything untrue about chess programs.
BUT
“I, in this scenario, cannot. No matter how my mind was set up prior to the scenario, there is only one possible outcome.”
This doesn’t make sense at all. The scenario requires your mind to be set up in a particular way. This does not mean that if your mind were set up in a different way you would still behave in the same way: if your mind were set up in a different way, either the outcome would be the same or your mind would be outside the scope of the scenario.
We can do exactly the same thing with a chess program.
Suppose I get a chess position (the state of play in a game) and present it to a chess program. The chess program replies with the move “Ngf3”. We now set the chess position up the same way again, and we predict that the program will move “Ngf3” (because we just saw it do that with this position). As far as we are concerned, the program can’t do anything else. As predicted, the program moves “Ngf3”. Now, the program was required by its own nature to make that move. It was forced to make that move by the way that the computer code in the program was organized, and by the chess position itself. We could say that even if the program had been different, it would still have made the same move—but this would be a fallacy, because if the program were different in such a way as to cause it to make a different move, it could never be the program about which we made that prediction. It would be a program about which a different prediction would be needed. Likewise, saying that your mind is compelled to act in a certain way, regardless of how it is set up, is also a fallacy, because the situation describes your mind as having been set up in a specific way, just like the program with the predicted chess move, and if it wasn’t it would be outside the scope of the prediction.
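The prediction argument above can be sketched in code (a toy setup with hypothetical one-move “programs”, not a real engine): the prediction is made by observing or running the exact program, so it simply does not transfer to a differently-organized program:

```python
# Sketch of the prediction argument: we "predict" a deterministic
# program's move by seeing what it does with a position, so the
# prediction is tied to that exact program. A modified program is a
# different program, about which the old prediction says nothing.

def program_a(position):
    return "Ngf3"  # this program always answers this position with Ngf3

def program_b(position):
    return "e4"    # a differently-organized program

position = "some fixed state of play"

prediction_for_a = program_a(position)           # made by running A once
assert program_a(position) == prediction_for_a   # A cannot do otherwise

# The fallacy: carrying A's prediction over to a different program.
assert program_b(position) != prediction_for_a   # B needs its own prediction
```

Being perfectly predictable given how it is built is compatible with the move depending on how it is built.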
“I, in this scenario, cannot. No matter how my mind was set up prior to the scenario, there is only one possible outcome.”
This doesn’t make sense at all. The scenario requires your mind to be set up in a particular way. This does not mean that if your mind were set up in a different way you would still behave in the same way: if your mind were set up in a different way, either the outcome would be the same or your mind would be outside the scope of the scenario.
No matter how my mind is set up, Omega will change the scenario to produce the same outcome.
If you took a chess program and chose a move, then gave it precisely the scenario necessary for it to make that move, I wouldn’t consider that move its choice.
If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?
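The worry here can be sketched as follows (hypothetical toy agents; any deterministic mapping from scenario to action would do): if the controller gets to pick the scenario, then which agent is being controlled stops mattering to the outcome:

```python
# Sketch of the objection: Omega searches over scenarios until a given
# deterministic agent produces the move Omega wants. Two very different
# agents are thereby steered to the same "choice", so the agent's
# internals are irrelevant to the outcome. (Toy agents, toy scenarios.)

def omega_force(agent, target, scenarios):
    """Find a scenario in which `agent` outputs `target`."""
    for s in scenarios:
        if agent(s) == target:
            return s
    raise ValueError("no scenario steers this agent to the target")

scenarios = range(100)
agent_x = lambda s: "one-box" if s % 2 == 0 else "two-box"
agent_y = lambda s: "one-box" if s > 90 else "two-box"

# Different agents, same forced outcome: Omega just hands each agent
# the scenario that makes it one-box.
assert agent_x(omega_force(agent_x, "one-box", scenarios)) == "one-box"
assert agent_y(omega_force(agent_y, "one-box", scenarios)) == "one-box"
```

The causal dependence runs from the target outcome back through Omega's search to the scenario, rather than from the agent's makeup to the outcome.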
Okay, so I got the scenario wrong, but I will give another reply. Omega is going to force you to act in a certain way. However, you will still experience what seem, to you, to be cognitive processes, and anyone watching your behavior will see what looks like cognitive processes going on.
Suppose Omega wrote a computer program and he used it to work out how to control your behavior. Suppose he put this in a microchip and implanted it in your brain. You might say your brain is controlled by the chip, but you might also say that the chip and your brain form a composite entity which is still making decisions in the sense that any other mind is.
Now, suppose Omega keeps possession of the chip, but has it control you remotely. Again, you might still say that the chip and your brain form a composite system.
Finally, suppose Omega just does the computations in his own brain. You might say that your brain, together with Omega’s brain, form a composite system which is causing your behavior—and that this composite system makes decisions just like any other system.
“If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?”
We could look at your own brain in these terms and ask about removing parts of it.
In the Omega-composite scenario, the composite entity is clearly making the decisions.
In the chip-composite scenario, the chip-composite appears to be making decisions, and in the general case I would say it probably is.
“If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?”
We could look at your own brain in these terms and ask about removing parts of it.
Indeed. Not all parts of my brain are involved in all decisions. But, in general, at least some part of me has an effect on what decision I make.
The point here is that in the scenario in which Omega is actively manipulating your brain, “you” might mean something in a more extended sense and “some part of you” might mean “some part of Omega’s brain”.
Except that that’s not the person the question is being directed at. I’m not “amalgam-Kingreaper-and-Omega” at the moment. Asking what that person would do would garner completely different responses.
For example, amalgam-Kingreaper-and-Omega has a fondness for creating ridiculous scenarios and inflicting them on rationalists.
“Except that that’s not the person the question is being directed at.”
Does that mean that you accept that it might at least be conceivable that the scenario implies the existence of a compound being who is less constrained than the person being controlled by Omega?
Yes. Of course, the part of them that is unconstrained IS Omega.
I’m just not sure about the relevance of this?
Just that the scenario could really be considered as adding an extra component onto a being—one that has a lot of influence on his behavior.
Similarly, we might imagine surgically removing a piece of your brain, connecting the neurons at the edges of the removed piece to the ones left in your brain by radio control, and taking the removed piece to another location, from which it still plays a full part in your thought processes. We would probably still consider that composite system “you”.
What if you had a brain disorder and some electronics were implanted into your brain? Maybe a system to help with social cues for Asperger syndrome, or a system to help with dyslexia? What if we had a process to make extra neurons grow to repair damage? We might easily consider many things to be a “you which has been modified”.
When you say that the question is not directed at the compound entity, one answer could be that the scenario involved adding an extra component to you, that “you” has been extended, and that the compound entity is now “you”.
The scenario, as I understand it, doesn’t really specify the limits of the entity involved. It talks about your brain, and what Omega is doing to it, but it doesn’t specifically disallow the idea that the “you” that it is about gets modified in the process.
Now, if you want to edit the scenario to specify exactly what the “you” is here...
We do. But what if we had a better one?
Yeah, after reading far enough upthread to become aware of the scenario under discussion, I find I agree with your conclusion.
There isn’t a real choice. What you will do has been decided from outside you, and no matter how much you think, you’re not going to change that.
And there’s the rub. My decision in Newcomb’s is also ultimately caused by things outside me; the conditions of the universe before I was born determined what my decision would be.
Whether we call something a ‘real choice’ in this kind of question depends upon whether it’s determined by things within the black box we call ‘our decision-making apparatus’ or something like that, or if the causal arrow bypasses it entirely. The black box screens off causes preceding it.
The scenario might go as follows:
Omega puts a million dollars in the box.
Omega scans your brain.
Omega deduces that if he shows you a picture of a fish at just the right time, it will influence your internal decision-making in some otherwise inscrutable way that causes you to one-box.
You see the fish, and decide (in whatever way you usually decide things) to one-box.
As far as I can tell, that is a ‘real choice’ to one-box. If you had happened upon that picture of a fish in regular Newcomb’s, without Omega being the one to put it there, it would equally be your ‘real choice’ to one-box, and I don’t see how Omega knowing that it will happen changes its realness or choiceness.
My explanation of what I mean by choice is here: http://lesswrong.com/lw/2mc/the_smoking_lesion_a_problem_for_evidential/2hyu?c=1
As you will see, it exists in the standard Newcomb, but not in this variant.
To directly address your fish example: if, in the standard Newcomb, my mind had been different, seeing the fish wouldn’t necessarily have caused me to make the same choice.
In the modified Newcomb, if my mind had been different I would have seen a different thing. The state of my mind had no impact on the outcome of events.
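The distinction being drawn can be given a toy formalization (hypothetical functions, chosen only to illustrate counterfactual dependence, not anyone’s actual decision theory): in the standard case the outcome varies as the mind varies; in the modified case Omega tailors the stimulus to the mind, so every possible mind is steered to the same outcome:

```python
# Toy model of the standard vs. modified Newcomb distinction above.
# The agent's ordinary decision process is a function of both its mind
# and what it sees; the two cases differ only in how the stimulus is set.

def decide(mind, stimulus):
    # Some deterministic function of both mind and stimulus.
    return "one-box" if (mind + stimulus) % 2 == 0 else "two-box"

def standard_newcomb(mind):
    fixed_stimulus = 0          # the world is the same for every mind
    return decide(mind, fixed_stimulus)

def modified_newcomb(mind):
    stimulus = mind % 2         # Omega tailors what you see to your mind
    return decide(mind, stimulus)

# Standard: the state of the mind makes a difference to the outcome.
assert {standard_newcomb(m) for m in range(10)} == {"one-box", "two-box"}
# Modified: every possible mind produces the same outcome.
assert {modified_newcomb(m) for m in range(10)} == {"one-box"}
```

In the modified case the counterfactual "if my mind had been different, the outcome would have differed" fails, which is exactly the sense in which the state of the mind has no impact on events.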
The fact that the causal arrows are rooted in some other being’s decision algorithm black box could reasonably be taken as the criterion for calling it that being’s choice. Still real, still choice, not my choice.