But if you are a TDT agent, you can’t always use less computing power, because that correlates with your opponent also deciding to use less computing power.
Substitute “move logically first” for “use less computing power”? Using less computing power seems like a red herring to me. TDT on simple problems (with the causal / logical structure already given) uses skeletally small amounts of computing power. “Who moves first” is a “battle”(?) over the causal / logical structure, not over who can manage to run out of computing power first. If you’re visualizing this as using lots of computing power for the core logic, rather than computing the 20th decimal place of some threshold or verifying large proofs, then we’ve got different visualizations.
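To gesture at what I mean in code (a toy sketch with made-up agents and moves, nothing canonical): “moving logically first” is a fact about whose output conditions on whose, and it is indifferent to how many cycles either side burns.

```python
# Toy illustration (made-up agents, not a canonical TDT construction):
# "who moves logically first" is the dependency structure of the outputs,
# not a race to use less compute.

def agent_a():
    # A's output does not condition on B in any way, so A occupies the
    # logically-first slot -- however many cycles A burns internally.
    sum(range(10**6))  # arbitrary busywork; changes nothing structural
    return "demand_high"

def agent_b(predict_a):
    # B's output conditions on a prediction of A, so B moves logically
    # second -- even though B is computationally far cheaper than A.
    return "accept" if predict_a() == "demand_high" else "demand_high"

print(agent_a(), agent_b(agent_a))  # -> demand_high accept
```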
The idea of “if you do this, the opponent does the same” might apply to trying to move logically first, but in my world this has nothing to do with computing power, so at this point I think it’d be pretty odd if the agents were competing to be stupider.
Besides, you don’t want to respond to most logical threats, because that gives your opponent an incentive to make logical threats; you only want to respond to logical offers that you want your opponent to have an incentive to make. This gets into the scary issues I was hinting at before, like determining in advance that if you see your opponent predetermine to destroy the universe in a mutual suicide unless you pay a ransom, you’ll call their bet and die with them, even if they’ve predetermined to ignore your decision, etcetera; but if they offer to trade you silver for gold at a Ricardian-advantageous rate, you’ll predetermine to cooperate, etc. The point, though, is that “If I do X, they’ll do Y” is not a blank check to decide that minds do X, because you could choose a different form of responsiveness.
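As a toy numerical sketch of that point (the payoffs and policy names here are arbitrary assumptions of mine, nothing more): an opponent who can predict your policy best-responds to the policy itself, so which policy you pick determines which moves the opponent has any incentive to make in the first place.

```python
# Toy model (illustrative numbers only): the opponent predicts our policy
# and best-responds to it, so choosing the policy chooses the opponent's
# incentives.

# (our payoff, their payoff) for each (opponent move, our response).
PAYOFFS = {
    ("threaten", "give_in"): (-20, 20),     # we pay the ransom
    ("threaten", "refuse"):  (-100, -100),  # we call the bet; mutual ruin
    ("trade",    "accept"):  (10, 10),      # Ricardian gains from trade
    ("trade",    "decline"): (0, 0),
}

def opponent_best_response(policy):
    # The opponent foresees our response to each move and picks the move
    # that maximizes THEIR payoff given that response.
    return max(("threaten", "trade"),
               key=lambda move: PAYOFFS[(move, policy[move])][1])

exploitable = {"threaten": "give_in", "trade": "accept"}
responsive  = {"threaten": "refuse",  "trade": "accept"}

for name, policy in (("exploitable", exploitable), ("responsive", responsive)):
    move = opponent_best_response(policy)
    ours = PAYOFFS[(move, policy[move])][0]
    print(f"{name}: opponent chooses {move}, our payoff {ours}")
# exploitable: opponent chooses threaten, our payoff -20
# responsive: opponent chooses trade, our payoff 10
```

Against the exploitable policy the opponent’s best response is to threaten; against the responsive one it is to trade, which is the whole point of choosing the responsive form.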
But anyway, I don’t see in the first place that agents should be having these sorts of contests over how little computing power to use. That doesn’t seem to me like a compelling advantage to reach for.
But if you simply don’t have that much computing power, then you seem to have the advantage of logically moving first.
If you’ve got that little computing power, then perhaps you can’t simulate your opponent’s skeletally small TDT decision, i.e., you can’t use TDT at all. If you can’t close the loop of “I simulate you simulating me”—which isn’t infinite, and actually terminates rather quickly in the simple cases I know how to analyze, because we perform counterfactual surgery inside the loop—then TDT simply isn’t available to you.
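To illustrate in the easiest case I can write down (a toy sketch assuming two agents known to run the same decision procedure in a one-shot Prisoner’s Dilemma; not a general TDT implementation):

```python
# Minimal sketch: naive "I simulate you simulating me" would recurse
# forever; the counterfactual surgery below terminates at once, because
# for each candidate output we stipulate the value of the shared logical
# node instead of re-entering the simulation.

PD = {  # (my payoff, opponent's payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tdt_decision():
    best_act, best_payoff = None, float("-inf")
    for my_act in ("C", "D"):
        # Surgery: my output and the opponent's output are the same
        # logical node, so setting mine to my_act sets theirs too --
        # no nested call to tdt_decision() is ever made.
        their_act = my_act
        my_payoff = PD[(my_act, their_act)][0]
        if my_payoff > best_payoff:
            best_act, best_payoff = my_act, my_payoff
    return best_act

print(tdt_decision())  # -> "C"
```

Cooperation falls out because, from this vantage, you are choosing between (C, C) and (D, D), not between the cells of the usual dominance argument; and the loop closes in one pass rather than recursing.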
Lack of computing power could be considered a form of “crazy reasoning”...
No, I mean much crazier than that. Like “This doesn’t follow, but I’m going to believe it anyway!” That’s what it takes to get “unusual reasons”—the sort of madness that only strictly naturally selected biological minds would find compelling in advance of a timeless decision to be crazy. Like “I’M GOING TO THROW THE STEERING WHEEL OUT THE WINDOW AND I DON’T CARE WHAT THE OPPONENT PREDETERMINES” crazy.
Why does TDT lead to the phenomenon of “stupid winners”?
It has not been established to my satisfaction that it does. It is a central philosophical intuition driving my decision theory that increased computing power, knowledge, or self-control should not harm a rational agent.