As for how the actual decision happens, the agent doesn’t “choose its source code”; it simply runs the source code and outputs “cooperate” or “defect” according to the result of running that code.
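To make this concrete, here is a minimal Python sketch of that picture (the observation string and the hard-coded rule are hypothetical, purely for illustration): the agent just is its source code, and its output is whatever running that code returns.

```python
def agent(observation: str) -> str:
    # The agent's entire behavior is this function body.
    # (The observation string and the rule are hypothetical.)
    if observation == "opponent can read my source code":
        return "cooperate"
    return "defect"

# There is no separate step where the agent "chooses" this code;
# its action is simply whatever running the code produces.
print(agent("opponent can read my source code"))  # -> cooperate
```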
This makes sense, but if it is true, I don’t understand in what sense a “choice” is made. It seems to me you have assumed away free will. Which is fine; it is probably true that free will does not exist. But then I don’t understand why there is any need for a decision theory, as no decisions are actually made.
Clearly you have a notion of what it means to “make a decision”. Doesn’t it make sense to associate this idea of “making a decision” with the notion of evaluating the outcomes from different (sometimes counterfactual) actions and then selecting one of those actions on the basis of those evaluations?
Surely if the notion of “choice” refers to anything coherent, that’s what it’s talking about? What matters is that the decision is determined directly through the “make a decision” process rather than independently of it.
Also, given that these “make a decision” processes (i.e. decision theories) are things that actually exist and are used, surely it also makes sense to compare different decision theories on the basis of how sensibly they behave?
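Here is a minimal sketch of that evaluate-then-select notion in Python. The outcome model and utilities are toy placeholders (assuming, for illustration, an opponent that mirrors the agent’s action); the point is just the shape of the procedure: score each (possibly counterfactual) action’s outcome, then pick the best-scoring action.

```python
from typing import Callable, Iterable

def decide(actions: Iterable[str],
           outcome_of: Callable[[str], str],
           utility: Callable[[str], float]) -> str:
    # Evaluate each action's (possibly counterfactual) outcome,
    # then select the action whose evaluated outcome scores highest.
    return max(actions, key=lambda a: utility(outcome_of(a)))

# Toy model (hypothetical): the opponent mirrors whatever the agent does.
outcomes = {"cooperate": "mutual cooperation", "defect": "mutual defection"}
utilities = {"mutual cooperation": 3.0, "mutual defection": 1.0}

print(decide(["cooperate", "defect"], outcomes.get, utilities.get))
# -> cooperate
```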
You are probably right that I have a faulty notion of what it means to make a decision. I’ll have to think about this for a few days to see if I can update...

This may help you. (Well, at least it helped me—YMMV.)
Basically, my point is that the “running the source code” part is where all of the interesting stuff happens, and that’s where the “choice” would actually be made.
It may be true that the agent “runs the source code and outputs the resulting output”, but in saying that I’ve neglected all of the cool stuff that happens when the source code actually gets run (e.g. comparing different options). In order to establish that source code A leads to output B, you would need to talk about how source code A leads to output B, and that’s the interesting part! That’s the part that I associate with the notion of “choice”.
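To illustrate, here is a toy trace in Python of that “how source code A leads to output B” step. The options and their evaluations are invented, but it shows where the comparing and selecting (the part being identified with “choice”) actually occurs during the run.

```python
def run_agent_verbosely() -> str:
    # Hypothetical evaluations of each option; the numbers are made up.
    scores = {"cooperate": 3, "defect": 1}
    best = None
    for action, score in scores.items():
        print(f"considering {action!r}: evaluated outcome = {score}")
        if best is None or score > scores[best]:
            best = action
    print(f"selected {best!r}")
    return best

run_agent_verbosely()
# printed:
#   considering 'cooperate': evaluated outcome = 3
#   considering 'defect': evaluated outcome = 1
#   selected 'cooperate'
```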