Yeah. TDT/UDT agents don’t choose their source code. They just choose their strategy.
They happen to do this in a way that respects logical connections between events, whereas CDT reasons about its options in a way that neglects acausal logical connections (such as in the mirror token trade above).
None of the algorithms are making different assumptions about the nature of “free will” or whether you get to “choose your own code”: they differ only in how they construct their counterfactuals, with TDT/UDT respecting acausal connections that CDT neglects.
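To make that concrete, here’s a toy sketch (my own construction in Python, not anything from the TDT/UDT papers) of how the two theories build their counterfactuals in a twin Prisoner’s Dilemma; the payoff table and move labels are just illustrative:

```python
# Toy payoff table: (my_move, their_move) -> my_utility
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_choice(predicted_opponent_move):
    # CDT's counterfactual: vary my move while holding the opponent's
    # move fixed. Defection dominates whatever the fixed prediction is.
    return max(["C", "D"],
               key=lambda my: PAYOFF[(my, predicted_opponent_move)])

def tdt_choice():
    # TDT/UDT-style counterfactual: the opponent runs my source code,
    # so varying "what my algorithm outputs" varies both moves together.
    return max(["C", "D"], key=lambda out: PAYOFF[(out, out)])

print(cdt_choice("C"))  # "D" -- CDT defects even against its twin
print(tdt_choice())     # "C" -- respecting the logical link yields (C, C)
```

Same options, same payoffs; the only difference is which counterfactuals each agent evaluates.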
I recognize that you understand this much better than me, and that I am almost certainly wrong. I am continuing this discussion only to try to figure out where my lingering sense of confusion comes from.
If an agent is playing a game of PD against another agent running the same source code, and chooses to “cooperate” because he believes that the other agent will necessarily make the same choice, how is that not equivalent to choosing your source code?
It isn’t equivalent. The agent recognizes that its already-existing source code is what causes it to either cooperate or defect, and so, because the other agent has the same source code, that agent must make the same decision.
As for how the actual decision happens: the agent doesn’t “choose its source code”; it simply runs the source code and outputs “cooperate” or “defect” according to the result of running it.
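One way to picture that (a toy model I’m making up, so take it loosely): both players literally are the same function, and “the decision” is just whatever happens when that function runs.

```python
def agent():
    # The source code itself reasons: "my opponent is this very
    # function, so whatever I return, it returns too."
    my_utility = {("C", "C"): 3, ("D", "D"): 1}
    return max(("C", "D"), key=lambda m: my_utility[(m, m)])

# Nothing "chooses" the code from outside; running it *is* the
# decision, and identical code necessarily yields identical output.
print(agent(), agent())  # C C
```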
This makes sense, but if it is true, I don’t understand in what sense a “choice” is made. It seems to me you have assumed away free will. Which is fine; it is probably true that free will does not exist. But if so, I don’t understand why there is any need for a decision theory, since no decisions are actually made.
Clearly you have a notion of what it means to “make a decision”. Doesn’t it make sense to associate this idea of “making a decision” with the notion of evaluating the outcomes from different (sometimes counterfactual) actions and then selecting one of those actions on the basis of those evaluations?
Surely if the notion of “choice” refers to anything coherent, that’s what it’s talking about? What matters is that the decision is determined directly through the “make a decision” process rather than independently of it.
Also, given that these “make a decision” processes (i.e. decision theories) are things that actually exist and are used, surely it also makes sense to compare different decision theories on the basis of how sensibly they behave?
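In code, the generic shape of such a “make a decision” process might look like this (a minimal sketch; decide, predict_outcome, and utility are hypothetical names I’m introducing, standing in for whatever model a given decision theory supplies):

```python
def decide(actions, predict_outcome, utility):
    # Evaluate the (possibly counterfactual) outcome of each available
    # action, then select one on the basis of those evaluations.
    # Decision theories differ in how predict_outcome constructs its
    # counterfactuals, not in whether a selection happens.
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# A trivial example of the procedure in use:
actions = ["take umbrella", "leave umbrella"]
predict_outcome = lambda a: "dry" if a == "take umbrella" else "wet"
utility = lambda outcome: 1 if outcome == "dry" else 0
print(decide(actions, predict_outcome, utility))  # "take umbrella"
```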
You are probably right that I have a faulty notion of what it means to make a decision. I’ll have to think about this for a few days to see if I can update...
This may help you. (Well, at least it helped me; YMMV.)
Basically, my point is that the “running the source code” part is where all of the interesting stuff actually happens, and that’s where the “choice” would actually happen.
It may be true that the agent “runs the source code and outputs the resulting output”, but in saying that I’ve neglected all of the cool stuff that happens when the source code actually gets run (e.g. comparing different options, etc.). In order to establish that source code A leads to output B, you would need to talk about how source code A leads to output B, and that’s the interesting part! That’s the part that I associate with the notion of “choice”.
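For instance, here’s the twin-PD agent again (still just my toy model), instrumented so the comparison of options that happens while the source code runs is visible:

```python
def agent_verbose():
    my_utility = {("C", "C"): 3, ("D", "D"): 1}
    best, best_utility = None, float("-inf")
    for move in ("C", "D"):
        u = my_utility[(move, move)]  # the twin mirrors my move
        print(f"considering {move}: joint outcome {(move, move)}, utility {u}")
        if u > best_utility:
            best, best_utility = move, u
    return best

print("output:", agent_verbose())
# considering C: joint outcome ('C', 'C'), utility 3
# considering D: joint outcome ('D', 'D'), utility 1
# output: C
```

The trace from the comparison to the output is exactly the “how source code A leads to output B” part.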