No, you don’t. You’ve pattern-matched it to the nearest wrong thing: “you’re using causal analysis, so you must secretly be using CDT!”
If I were using CDT, I would use Pearl’s causal analysis and common sense to derive a causal graph over my hypothetical actual situation, and pick the action with the highest interventional expected utility.
This is in fact something decision theorists do every day. The assumption that a dataset about applying HAART to certain patients has anything at all to say about applying a similar treatment to a similar patient is underpinned by a lot of commonsense causal reasoning: HAART works by affecting the biology of the human body (and therefore should work the same way in two humans), it is unaffected by the positions of the stars (which are not causally well connected to it), and so on.
> If I were using CDT, I would use Pearl’s causal analysis and common sense to derive a causal graph
When I read the philosophy literature, the way decision theory problems are presented is via examples: the Smoking Lesion is one such example, Newcomb’s problem is another. So when I ask what your decision algorithm is, I am asking for something that (a) you can write down and I can follow step by step, (b) takes these examples as input, and (c) produces an output action.
What is your preferred algorithm that satisfies (a), (b), and (c)? Can you write it down for me in a follow-up post? If (a) is false, it’s not really an algorithm; if (b) is false, it’s not engaging with the problems people in the literature are struggling with; and if (c) is false, it’s not answering the question! So, for instance, anything based on AIXI is a non-starter because you can’t write it down. Anything you have not formalized enough in your head to write down is a non-starter as well.
I have been talking with you for a long time, and in all this time you have never actually written down what you use to solve decision problems. I am not sure why; do you actually have something specific in mind or not? I can write down my algorithm, no problem.
Here is the standard causal graph for Newcomb’s problem (note that this is a graph of the agent’s actual situation, not a graph of related historical data):
Given that graph, my CDT solution is to return the action A with the highest

    sum_payoff U(payoff) * P(payoff | do(A), observations).

Given that graph (you don’t need a causal graph for this, of course), my EDT solution is to return the action A with the highest

    sum_payoff U(payoff) * P(payoff | A, observations).
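To make the two formulas concrete, here is a small sketch with hypothetical numbers (a 0.99-accurate predictor and $1M / $1K boxes; the graph in the comments is the standard Newcomb graph the post refers to, with the observations suppressed for brevity):

```python
# Sketch of the CDT vs. EDT expected-utility calculations, with
# hypothetical numbers: 0.99-accurate predictor, $1M and $1K boxes.
# Standard Newcomb graph of the agent's actual situation:
#   disposition -> action
#   disposition -> prediction -> opaque-box contents
#   action, contents -> payoff

ACCURACY = 0.99            # P(prediction matches the agent's disposition)
FULL, SMALL = 1_000_000, 1_000

def edt_eu(action):
    # EDT conditions on the action as evidence: P(box full | A) tracks
    # the predictor's accuracy via the disposition -> prediction path.
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_full * FULL + (SMALL if action == "two-box" else 0)

def cdt_eu(action, p_full):
    # CDT evaluates do(A): the intervention cuts disposition -> action,
    # so the probability the box is full is fixed by the prior, for any A.
    return p_full * FULL + (SMALL if action == "two-box" else 0)

edt_choice = max(["one-box", "two-box"], key=edt_eu)
# Under do(A), two-boxing dominates for every prior p_full; 0.5 is arbitrary.
cdt_choice = max(["one-box", "two-box"], key=lambda a: cdt_eu(a, 0.5))
print(edt_choice, cdt_choice)  # one-box two-box
```

The only difference between the two functions is where the probability of a full box comes from: conditioning on A versus intervening with do(A). That single difference is what makes EDT one-box and CDT two-box here.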
That’s the easy part. Are you asking me for an algorithm to turn a verbal description of Newcomb’s problem into that graph? You probably know better than I do how to do that.
Ok, thanks. I understand your position now.