It’s still more about magic and time-reversed causation than it is about deciding which box to take.
Particularly since it rewards the reflectively inconsistent agent that, at the time the money was placed, was going to one-box when it had the chance, but at the time the decision was made, two-boxed. (At time A, when Omega makes the prediction, the highest-performing decision model is one that will, at time B, select one box; at time B, the highest-performing model selects both boxes.)
You’re effectively calling the concept of determinism “magic”, arguing that merely being able to calculate the outcome of a decision process is “magical” or requires time-reversal.
Look, I have your source code. I can see what you’ll decide, because I have your source code and know how you decide. Where’s the magic in that? Start thinking like a programmer. There’s nothing magical when I look at the source of a method and say “this will always return ‘true’ under such-and-such conditions”.
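To make that concrete, here's a minimal sketch in Python (the function name and parameter are hypothetical, just for illustration): anyone holding the source of a deterministic decision procedure can simply run it ahead of time and know the result before the agent "makes" the decision.

```python
def will_cooperate(trust_level: int) -> bool:
    # A hypothetical, fully deterministic decision method.
    # Inspect the source and you can see it always returns True
    # whenever trust_level is positive -- no magic required.
    return trust_level > 0

# The "predictor" has the source, so it just evaluates the method
# in advance; the outcome is known before the decision is "made".
prediction = will_cooperate(trust_level=7)
assert prediction is True
```

Same input, same code, same output: predicting the decision is nothing more than evaluating the function early.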
What is the physical analogue of looking at the source code of physics? You, the programmer, can assume that no bit rot will occur during the period that the program is running, and that no other program will engage in memory access violations, but the computer cannot.
The compiler will (barring interference) always produce the same executable from the same source, but it can't use that fact to shortcut compiling the code; even if my decision is deterministic, there is no way, within the universe, to determine in general what outcome someone will reach before the universe does, without some loss of precision or accuracy. (Special case: people who have already considered the problem and already decided to use cached decisions.)
there is no way, within the universe, to determine in general what outcome someone will reach before the universe does, without some loss of precision or accuracy.
Sure. So what? If Omega is just 99.99999999999% of the time correct, how does that change in practice whether you should one-box or two-box?
“Special case: people who have already considered the problem and already decided to use cached decisions”
Why? Knowledge of the problem is just another sensory input. Pass it through the same deterministic brain in the same state and you get the same result. “I’ve just been informed of this type of problem; let me think about it right now” and “I’ve just been informed that I’m now involved in this type of problem, which I have already considered; I’ll use my predetermined decision” are both equally deterministic.
The latter seems more predictable to you because, as a human being, you’re accustomed to people making up their minds and following through with predetermined decisions. As a programmer, I’ll tell you it’s equally deterministic whether you multiply 3 × 5 every time, or multiply once, store the result in a variable, and return it whenever asked for the product of 3 and 5...
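A quick sketch of the two strategies in Python (hypothetical function names), showing they are indistinguishable from the outside:

```python
def multiply_every_time() -> int:
    # Strategy 1: recompute the product on every call
    # ("think about it right now").
    return 3 * 5

_cached_product = None

def multiply_cached() -> int:
    # Strategy 2: compute once, store it, and return the stored
    # result thereafter ("use my predetermined decision").
    global _cached_product
    if _cached_product is None:
        _cached_product = 3 * 5
    return _cached_product

# Both are equally deterministic: same answer, every single time.
assert multiply_every_time() == multiply_cached() == 15
```

A caller (or a predictor with the source) cannot tell which strategy was used; the output is fixed either way.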
Sure. So what? If Omega is just 99.99999999999% of the time correct, how does that change in practice whether you should one-box or two-box?
If I estimate a probability of less than ~99.9001% ($1,000,000/$1,001,000) that Omega will be correct in this specific instance, I one-box; otherwise I two-box. With a prior of 13 nines, getting down to three would require ten decades of evidence; if I shared any feature with one person who Omega wrongly identified as a one-boxer but not with the first 10^10 people who Omega correctly identified as a two-boxer, I think that would be strong enough evidence.
As a programmer, I’ll tell you it’s equally deterministic whether you multiply 3 × 5 every time, or multiply once, store the result in a variable, and return it whenever asked for the product of 3 and 5...
Unless you are doing the math on a Pentium processor...
“If I estimate a probability of less than ~99.9001% ($1,000,000/$1,001,000) that Omega will be correct in this specific instance, I one-box; ”
??? Your calculation seems to be trying to divide the wrong things. One boxing gives you $1,000,000 if Omega is right, gives you $0 if Omega is wrong. Two boxing gives you $1,001,000 if Omega is wrong, gives you $1,000 if Omega is right.
So, with Omega being right with probability X:
-the expected payoff for one-boxing is X × $1,000,000
-the expected payoff for two-boxing is (1 − X) × $1,001,000 + X × $1,000.
One-boxing is therefore superior (assuming linear utility of money) when X × $1,000,000 > (1 − X) × $1,001,000 + X × $1,000, i.e. when Omega’s likelihood of being right exceeds 50.05%.
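The expected-value comparison above can be checked with a few lines of Python (function names are just for illustration):

```python
def one_box_ev(p: float) -> float:
    # Expected payoff of one-boxing when Omega is right with probability p:
    # box contains $1,000,000 iff Omega correctly predicted one-boxing.
    return p * 1_000_000

def two_box_ev(p: float) -> float:
    # Expected payoff of two-boxing: $1,001,000 if Omega is wrong,
    # only the visible $1,000 if Omega is right.
    return (1 - p) * 1_001_000 + p * 1_000

# Indifference point: solve p * 1_000_000 == (1 - p) * 1_001_000 + p * 1_000
threshold = 1_001_000 / 2_000_000  # = 0.5005, i.e. 50.05%

assert one_box_ev(0.51) > two_box_ev(0.51)  # above threshold: one-box wins
assert one_box_ev(0.50) < two_box_ev(0.50)  # below threshold: two-box wins
```

At exactly p = 0.5005 the two expected payoffs coincide at $500,500, which is where the 50.05% figure comes from.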
Yeah, looking at it as a $500,000 bet on almost even money, odds of about 50% are right.