That part is correct, but opting not to smoke for the purpose of avoiding this increase in probability is an error.
An error that an evidence-based decision theory need not make if it can process the evidence that causality works and that it is actually the pre-existing lesion that causes smoking, and control for the pre-existing lesion when comparing the outcomes of actions. (And if the agent is ignorant of the way the world works, then we shouldn’t benchmark it against an agent into which we coded the way our world works.)
That part is correct, but opting not to smoke for the purpose of avoiding this increase in probability is an error.
I still don’t see how it is an error. If the agent has no other information, all he knows is that if he decides to smoke, it is more likely that he has the lesion. His decision itself doesn’t influence whether he has the lesion, of course. But he desires not to have the lesion, and therefore should desire to decide not to smoke.
The way the lesion influences deciding to smoke will be through the utility function or the decision theory. With no other information, the agent can’t trust that his decision will outsmart the lesion.
Ahh, I guess we are talking about the same thing. My point is that given more information, and making more conclusions, EDT should smoke. CDT gets around the requirement for more information by cheating: we wrote some of that information implicitly into CDT; we thought CDT was a good idea because we know our world is causal. Whenever EDT can reason that CDT will work better (based on evidence in support of causality, the model of how lesions work, et cetera), EDT will act like CDT. And whenever CDT reasons that EDT will work better, CDT self-modifies to be EDT, except that CDT can’t do it on the spot and has to do it in advance. The advanced decision theories try to ‘hardcode’ more of our conclusions about the world into the decision theory. This is very silly.
If you test humans, I think it is pretty clear that humans work like EDT + evidence for causality. Take away evidence for causality, and people can believe that deciding to smoke retroactively introduces the lesion.
edit: ahh, wait, EDT is some pretty naive theory that cannot even process anything as complicated as evidence for causality working in our universe. Whatever then, a thoughtless approach leads to thoughtless results, end of story. The correct decision theory should be able to control for the pre-existing lesion when it makes sense to do so.
edit: ahh, wait, EDT is some pretty naive theory that cannot even process anything as complicated as evidence for causality working in our universe.
Can you explain this?
EDT is described as $V(A) = \sum_{j} P(O_j | A) U(O_j)$. If you have knowledge about the mechanisms behind how the lesion causes smoking, that would change $P(A | O_j)$ and therefore also $P(O_j | A)$.
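For concreteness, here is a minimal sketch of that computation on the smoking lesion setup; every number in it (the prior, the conditional probabilities, the utilities) is a made-up placeholder, not part of the problem statement:

```python
# Naive EDT value V(A) = sum_j P(O_j | A) U(O_j) on the smoking lesion.
# Every number below is an illustrative assumption, not part of the problem statement.

P_LESION = 0.1
P_SMOKE_GIVEN_LESION = 0.8        # the lesion makes smoking more likely
P_SMOKE_GIVEN_NO_LESION = 0.2
P_CANCER_GIVEN_LESION = 0.9       # cancer depends only on the lesion, not on smoking
P_CANCER_GIVEN_NO_LESION = 0.01
U_SMOKE = 10                      # smoking is mildly pleasant
U_CANCER = -1000                  # cancer is very bad

def p_lesion_given(smoke: bool) -> float:
    """Bayes: the decision to smoke is treated as *evidence* about the lesion."""
    p_a_lesion = P_SMOKE_GIVEN_LESION if smoke else 1 - P_SMOKE_GIVEN_LESION
    p_a_no_lesion = P_SMOKE_GIVEN_NO_LESION if smoke else 1 - P_SMOKE_GIVEN_NO_LESION
    joint_lesion = p_a_lesion * P_LESION
    return joint_lesion / (joint_lesion + p_a_no_lesion * (1 - P_LESION))

def edt_value(smoke: bool) -> float:
    """Expected utility conditioned on the action, exactly as the formula above says."""
    p_lesion = p_lesion_given(smoke)
    p_cancer = p_lesion * P_CANCER_GIVEN_LESION + (1 - p_lesion) * P_CANCER_GIVEN_NO_LESION
    return (U_SMOKE if smoke else 0) + p_cancer * U_CANCER

print(edt_value(True), edt_value(False))   # ≈ -273.8 vs ≈ -34.1: naive EDT refuses to smoke
```

The toy numbers only serve to show the move in question: conditioning on the action shifts P(lesion), and that shift is what drives the ‘don’t smoke’ answer.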
I don’t see how knowledge of how the lesion works would affect the probabilities when you don’t know whether you have the lesion, or the probability of having it.
You would still have priors for all of these things.
Even if you do, how is knowing that the lesion causes cancer going to change anything about P(smokes | gets cancer)? The issue is that you need two equations, one for the case where you do have the lesion and one for the case where you don’t. EDT just conflates the two.
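To spell out the two calculations (treating cancer as depending only on the lesion, which is how the problem is usually stated):

$E[U \mid \text{smoke}, \text{lesion}] = U(\text{smoke}) + P(\text{cancer} \mid \text{lesion})\,U(\text{cancer})$
$E[U \mid \neg\text{smoke}, \text{lesion}] = P(\text{cancer} \mid \text{lesion})\,U(\text{cancer})$

and the same pair with $\neg\text{lesion}$ in place of $\text{lesion}$. Within each pair, smoking comes out ahead by exactly $U(\text{smoke})$, because the cancer term is fixed once the lesion status is fixed; the comparison can only flip when the two cases are averaged together with action-dependent weights, which is what naive EDT does.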
The lesion could work in (at least) two ways:
1. it makes you more likely to use a decision theory that leads you to decide to smoke.
2. it only makes irrational people more likely to smoke.
3. it changes people’s utility of smoking.
In case 1, you should follow EDT, and use a decision theory that will make you not decide to smoke.
In case 2, you know that the lesion doesn’t apply to you, so go ahead and smoke.
In case 3, conditioned on your utility function (which you know), the probability of the lesion no longer depends on your decision. So, you can smoke.
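For case 3, the screening-off step can be written down. Assuming the lesion affects behaviour only by setting the utility function $U$, and the decision $D$ is computed from $U$ (plus the decision theory), then

$P(\text{lesion} \mid U, D) = P(\text{lesion} \mid U),$

so for an agent that knows its own $U$, conditioning on the decision adds no further evidence about the lesion; the cancer term is then identical for both actions, and the choice reduces to the sign of $U(\text{smoke})$.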
edit: ahh, wait, EDT is some pretty naive theory that cannot even process anything as complicated as evidence for causality working in our universe. Whatever then, a thoughtless approach leads to thoughtless results, end of story. The correct decision theory should be able to control for the pre-existing lesion when it makes sense to do so.
I think you’ve got it. Pure EDT and CDT really just are that stupid, and irredeemably so, because agents implementing them will not want to learn how to replace their decision strategy (beyond resolving themselves to their respective predetermined stable outcomes). Usually when people think either of them is a good idea, it is because they have been incidentally supplementing and subverting them with a whole lot of their own common sense!
Usually when people think either of them is a good idea, it is because they have been incidentally supplementing and subverting them with a whole lot of their own common sense!
As a person who (right now) thinks that EDT is a good idea, could you help enlighten me?
Wikipedia states that under EDT the action with the maximum value is chosen, where value is determined as $V(A) = \sum_{\text{outcomes } O} P(O | A) U(O)$. The agent can put knowledge about how the universe works into $P(O | A)$, right?
Now the smoking lesion problem. It can be formally written as something like this:
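(roughly; the exact numbers don’t matter, only the inequalities:)

$P(\text{smoking} \mid \text{lesion}) > P(\text{smoking} \mid \neg\text{lesion})$
$P(\text{cancer} \mid \text{lesion}, \text{smoking}) = P(\text{cancer} \mid \text{lesion}) \gg P(\text{cancer} \mid \neg\text{lesion})$
$U(\text{smoking}) > 0, \qquad U(\text{cancer}) \ll 0$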
I think the tricky part is P(smoking | lesion) > P(smoking | !lesion), because this puts a probability on something that the agent gets to decide. Since probabilities are about uncertainty, and the agent would be certain about its actions, this makes no sense.
Is that the main problem with EDT?
Actually the known fact is more like P(X smoking | X lesion), the probability of any agent with a lesion deciding to smoke. From this the agent will have to derive P(me smoking | me lesion). If the agent is an average human being, then they would be equal. But if the agent is special because he uses some specific decision theory or utility function, he should only look at a smaller reference class. I think in this way you get quite close to TDT/UDT.
I propose a nonstupid decision theory, then.
In the smoking lesion, I do two worlds: in one I have the lesion, in the other I don’t, weighted by p and 1-p. That’s just how I process uncertainties. Then I apply my predictions to both worlds, given my action, and I obtain the results, which I weight by p and 1-p (I have never seen the possible worlds interact). Then I can decide on an action, assuming 0 < p < 1. I don’t even need to know p, and updates to my estimate of p that result from my actions don’t change the decision.
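A minimal sketch of that bookkeeping, reusing the same made-up numbers as in the earlier snippet (assumptions for illustration, not part of the problem):

```python
# Two-worlds evaluation of the smoking lesion: condition on lesion status first,
# then compare actions inside each fixed world.  All numbers are illustrative assumptions.

P_CANCER_GIVEN_LESION = 0.9       # cancer depends only on the lesion
P_CANCER_GIVEN_NO_LESION = 0.01
U_SMOKE = 10
U_CANCER = -1000

def value(smoke: bool, lesion: bool) -> float:
    """Expected utility of an action inside one world, where the lesion status is fixed."""
    p_cancer = P_CANCER_GIVEN_LESION if lesion else P_CANCER_GIVEN_NO_LESION
    return (U_SMOKE if smoke else 0) + p_cancer * U_CANCER

def decide(p_lesion: float) -> str:
    """Weight the two worlds by p and 1-p; the action never changes that weighting."""
    v_smoke = p_lesion * value(True, True) + (1 - p_lesion) * value(True, False)
    v_abstain = p_lesion * value(False, True) + (1 - p_lesion) * value(False, False)
    return "smoke" if v_smoke > v_abstain else "abstain"

# Smoking beats abstaining by U_SMOKE inside *both* worlds, so the answer is the
# same for every 0 < p < 1:
print({p: decide(p) for p in (0.01, 0.5, 0.99)})   # {0.01: 'smoke', 0.5: 'smoke', 0.99: 'smoke'}
```

Smoking wins inside each world separately, so the weighting by p never matters; this is just the dominance argument with the lesion held fixed.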
In Newcomb’s problem, I’m inclined to do exactly the same thing: let p be the probability that one-boxing was predicted; then one-boxing loses to two-boxing, because 1000000·p + 0·(1-p) < 1001000·p + 1000·(1-p). And I am totally going to do this if I am being predicted based on a psychology test I took back in elementary school, or based on genetics. But I get told that the 1001000·p and the 0·(1-p) never happen, i.e. I get told that the equation is wrong, and if I assign high enough confidence to that, higher than to my equation, I can strike the 1001000·p and the 0·(1-p) out of the equation (and get some nonsense, which I fix by removing the probabilities altogether), deciding to one-box as the best effort I can make when I’m told that my equation won’t work and I don’t quite know why.
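Writing that comparison out (payoffs in dollars, with p the probability that one-boxing was predicted):

$E[\text{one-box}] = 1000000\,p + 0\,(1-p)$
$E[\text{two-box}] = 1001000\,p + 1000\,(1-p) = E[\text{one-box}] + 1000$

On this equation two-boxing wins by exactly 1000 for every p. Striking out the “impossible” terms leaves $1000000\,p$ against $1000\,(1-p)$, which still contains a p that is supposed to track the decision itself; removing the probabilities altogether, as above, reduces the comparison to 1000000 against 1000, i.e. one-boxing.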
(My world model being what it is, I’ll also have to come up with some explanation of how the predictor works before I assign high enough probability to the predictor working correctly for me. E.g. I could say that the predictor is predicting using a quantum coin flip and then cutting off the branches in MWI where it was wrong, or I could say the predictor is working via mind simulation, or even that my actions somehow go into the past.)
Of course it is bloody hard to formalize an agent that has a world model of some kind, and which can correct its equations if it is convinced by good enough evidence that the equation is somehow wrong (which is pretty much the premise of Newcomb’s paradox).