Yes, that is a strategy they can take. However, that sort of strategy is unnecessary in Newcomb’s problem, where you can just one-box and find the money there without having made any sort of precommitment.
I think that the translation to Newcomb’s was that committing ⇔ one-boxing and hedging ⇔ two-boxing.
This mapping does not work. Causal Decision Theory would commit (if available) in the marriage proposal problem, but would two-box in Newcomb’s problem. So the mapping does not preserve the relationship between the mapped elements.
This should be a sanity check for any scenario proposed to be equivalent to Newcomb’s problem. EDT/TDT/UDT should all do the equivalent of one-boxing, and CDT should do the equivalent of two-boxing.
CDT on Newcomb’s problem would, if possible, precommit to one-boxing as long as Omega’s prediction is based on observing the CDT agent after its commitment.
CDT in the marriage case would choose to leave once unhappy, absent specific precommitment.
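To put that sanity check in numbers, here is a minimal sketch (mine, not from the thread) of how EDT and CDT evaluate the standard Newcomb choice; the $1,000,000/$1,000 payoffs and the 0.99 predictor accuracy are illustrative assumptions:

```python
# Illustrative sanity check: EDT one-boxes, CDT two-boxes on standard Newcomb.
# Assumed (not from the thread): box B holds $1,000,000 iff the predictor
# expects one-boxing, box A always holds $1,000, predictor accuracy = 0.99.

ACCURACY = 0.99
BIG, SMALL = 1_000_000, 1_000

def edt_value(action):
    """EDT treats the chosen action as evidence about the prediction."""
    p_big = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_big * BIG + (SMALL if action == "two-box" else 0)

def cdt_value(action, p_big):
    """CDT holds the already-fixed box contents constant at probability p_big."""
    return p_big * BIG + (SMALL if action == "two-box" else 0)

for action in ("one-box", "two-box"):
    print(f"EDT {action}: ${edt_value(action):,.0f}")
# EDT: one-box $990,000 vs two-box $11,000 -> one-box.

for p_big in (0.0, 0.5, 1.0):
    diff = cdt_value("two-box", p_big) - cdt_value("one-box", p_big)
    print(f"CDT, P(big box full)={p_big}: two-boxing gains ${diff:,.0f}")
# CDT: two-boxing gains $1,000 regardless of p_big -> two-box.
```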
So that exact mapping doesn’t work, but the problem does seem Newcomblike to me (like the transparent-boxes version, actually, which I now realize is like Kavka’s toxin puzzle without the vagueness of “intent”). (ETA: assuming that Kate can reliably predict Joe, which I now see was the point under dispute to begin with.)
Would you care to share your reasoning? What is your mapping of strategies, and does it pass my sanity check? (EDT two-boxes on the transparent-boxes variation.)
one-box ⇔ stay in marriage when unhappy
two-box ⇔ leave marriage when unhappy
precommit to one-boxing ⇔ precommit to staying in marriage
In both this problem and transparent-boxes Newcomb:
you don’t take the action under discussion (take boxes, leave or not) until you know whether you’ve won
if, in the counterfactual where you’ve won, you would take the two-box/leave option, you’ll lose
TDT and UDT win
CDT either precommits and wins or doesn’t and loses, as described in my previous comment
(I’m assuming that Kate can reliably predict Joe. I didn’t initially realize your objection might have more to do with that than the structure of the problem.)
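A minimal sketch of that structure (my illustration, not from the thread): the predictor is taken to be perfect, which is exactly the disputed assumption, and the payoffs are made up:

```python
# Sketch of the transparent-boxes / proposal structure: Omega (or Kate, in the
# analogy) fills the big box / proposes only if it predicts you would cooperate
# *after* seeing that you have won. Perfect prediction is assumed, which is
# precisely the disputed assumption; the payoffs are made up.

BIG, SMALL = 1_000_000, 1_000

def play(policy_if_winning):
    """policy_if_winning: what you would do upon seeing the big box full
    (equivalently, upon receiving the proposal): 'one-box'/'stay' or
    'two-box'/'leave'."""
    # The prediction is made first, from the policy itself.
    big_box_full = policy_if_winning in ("one-box", "stay")
    if big_box_full:
        # You see the full box and, by hypothesis, follow your stated policy.
        # The 'grab both' branch is unreachable with a perfect predictor --
        # which is exactly the point of the problem.
        return BIG if policy_if_winning in ("one-box", "stay") else BIG + SMALL
    return SMALL  # predicted to two-box/leave: empty big box / no proposal

for policy in ("one-box", "two-box"):
    print(f"policy on winning = {policy}: payoff ${play(policy):,}")
# -> $1,000,000 vs $1,000: the agent whose counterfactual choice-on-winning is
#    two-box/leave never actually sees the win; TDT/UDT pick the winning policy
#    up front, while CDT has to precommit to get there.
```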
CDT either precommits and wins or doesn’t and loses, as described in my previous comment
If Joe and Kate were already married, it really would make no sense for Joe not to get a divorce just because Kate would never have married him had she suspected he would. CDT wins here. The fact that CDT tells Joe to precommit now doesn’t make it Newcomblike. Precommitting is a rational strategy in lots of games that aren’t Newcomblike. The whole point of Newcomb is that even if you haven’t precommitted, CDT tells you the wrong thing to do once Omega shows up.
As I said, I assumed that Kate = Omega.
Even if that assumption is fair (since it obviously isn’t true, I’m not sure why we would make it**), we’re still entering the scenario too early. It’s like being told Omega is going to offer you the boxes a year before he does. Joe now has the opportunity to precommit, but Omega doesn’t give you that chance.
** I’m sure glad my girlfriend isn’t a superintelligence that can predict my actions with perfect accuracy! Am I right guys?!
Point taken; the similarity is somewhat distant. (I made that assumption to show the problem’s broadly Newcomblike structure, since I wrongly read JGWeissman as saying that the problem never had Newcomblike structure. But as you say, there is another, more qualitative difference.)
(I’m assuming that Kate can reliably predict Joe. I didn’t initially realize your objection might have more to do with that than the structure of the problem.)
Yes, that is where my objection lies.
ETA: And in Newcomb’s problem there is no opportunity to precommit after learning about the problem; the predictions of your behavior have already been made. So allowing precommitment in the marriage proposal problem sidesteps the very feature that would be Newcomblike if Kate were a highly accurate predictor.
Causal decision theory precommits to one-boxing on Newcomb if it can and if Omega’s prediction is based on observation of the CDT agent after its opportunity to precommit.
Why is the parent comment being voted down, and its parent being voted up, when it correctly refutes the parent?
Why is the article itself being voted up, when it has been refuted? Are people so impressed by the idea of a real-life Newcomblike problem that they don’t notice, even when it is pointed out, that the described story is not in fact a Newcomblike problem?
Why is the article itself being voted up, when it has been refuted?
I voted it up because it is a good article. The claim “this situation is a problem of the class Newcomblike” has been refuted. If Academian had belligerently defended the ‘It’s Newcomblike’ claim in response to correction I would have reversed my upvote. As it stands, the discussion in both the original post and the comments is useful. I expect it has helped clarify how the situation as it is formalized here differs from Newcomb’s problem and what changes the scenario would need to actually be a Newcomblike problem. In fact, that is a follow-up post that I would like to see.
Are people so impressed by the idea of a real-life Newcomblike problem that they don’t notice, even when it is pointed out, that the described story is not in fact a Newcomblike problem?
Ease up. The “it’s not actually Newcomblike” comments are being upvoted. People get it. It’s just that sometimes correction is sufficient and a spiral of downvotes isn’t desirable.
It is an article in which poor thought leads to a wrong conclusion. I don’t consider that “good”.
If Academian had belligerently defended the ‘It’s Newcomblike’ claim in response to correction I would have reversed my upvote.
I wouldn’t say he was belligerent, but earlier in this thread he seemed to be Fighting a Rearguard Action Against the Truth, first saying, “it’s a big open problem if some humans can precommit or not”, and then saying the scenario still works if you replace certainties with high confidence levels, with those confidence levels also being unrealistic. I found “Self-modification is robust, pre-commitment is robust, its detection is robust… these phenomena really aren’t going anywhere” to be particularly arrogant. He seems to have dropped out after I refuted those points.
My standard for changing this article from bad to sort of OK would require an actual retraction of the wrong conclusion.
As it stands, the discussion in both the original post and the comments is useful.
As it stands, someone can be led astray by reading just the article and not the comments.
The “it’s not actually Newcomblike” comments are being upvoted. People get it.
Not as much as the article. And this comment, which refuted a wrong argument that the scenario really is Newcomb’s problem, was at −2 at the time I asked that question.
It’s just that sometimes correction is sufficient and a spiral of downvotes isn’t desirable.
I am not saying everyone should vote it down so Academian loses so much karma he can never post another article. I think a small negative score is enough to make the point. A small positive score would be appropriate if he made a proper retraction. +27 is too high. I don’t think articles should get over +5 without the main point actually being correct, and they should be incredibly thought-provoking to get that high.
I am also wary of making unsupportable claims that Newcomb’s problem happens in real life, which can overshadow other reasons we consider such problems, so these other reasons are forgotten when the unsupportable claim is knocked down.
I can empathise with your point of view here. Perhaps the fact that people (including me) still appreciate the post despite it getting the game theory discussion wrong is an indication that we would love to see more posts on ‘real life’ applications of decision theory!
Are people so impressed by the idea of a real-life Newcomblike problem that they don’t notice, even when it is pointed out, that the described story is not in fact a Newcomblike problem?
That depends entirely on what characteristics you consider to be most “Newcomb like”. From an emotional point of view, the situation is very “Newcomb like”, even if the mathematics is different.
This sounds like a fully general excuse to support any position. What is this emotional view? If the emotions disagree with the logical analysis, why aren’t the emotions wrong? Correct emotions should be reactions to the actual state of reality.
What is this emotional view? If the emotions disagree with the logical analysis, why aren’t the emotions wrong? Correct emotions should be reactions to the actual state of reality.
We seem to be having a language difficulty. By “emotional point of view”, I mean that there are similarities in the human emotional experience of deciding Newcomb’s problem and the marriage proposal problem.
(Agree) Evolution built (a vague approximation of) one-boxing into our emotional systems. Humans actually can commit, without changing external payoffs. It isn’t a bullet-proof commitment. Evolution will also try to create effective compartmentalization mechanisms, so that humans can maximise the signalling benefit relative to the actual cost of changing later.
On a timescale of decades, the commitment has hardly any strength at all.
Fortunately, it isn’t meant to be. In a crude sense the emotions are playing a signalling game on a timescale of months to a couple of years. That the emotions tell us they are talking about ‘forever’ is just part of their game.
Then you agree that Joe’s commitment is not a good indicator that he will stay in the marriage for decades, so Joe did not get what he wanted by allowing Kate to make an accurate prediction that he will do what she wants?
Why, when we are discussing a problem that requires commitment on the scale of many decades, did you bring up that humans can make commitments up to maybe a couple of years?
Then you agree that Joe’s commitment is not a good indicator that he will stay in the marriage for decades, so Joe did not get what he wanted by allowing Kate to make an accurate prediction that he will do what she wants?
I haven’t said any such thing. Joe and Kate are counterfactual in as much as organic emotional responses were simplified to Kate having a predictive superpower and Joe the ability to magically (and reliably) self modify. Two real people would be somewhat more complex and their words and beliefs less literally correlated with reality.
Why, when we are discussing a problem that requires commitment on the scale of many decades, did you bring up that humans can make commitments up to maybe a couple of years?
The basic mechanism of conversation requires that I follow the flow rather than making every comment as a reply to the original post. When I learned that from a book there was an analogy about tennis involved which I found helpful.
You did say “In a crude sense the emotions are playing a signaling game on a timescale of months to a couple of years.” And the scenario does involve predictions of events which take place over the time scale of decades. Do you disagree with my assessment of the time scales, or do you somehow disagree with the conclusion?
Joe and Kate are counterfactual in as much as organic emotional responses were simplified to Kate having a predictive superpower and Joe the ability to magically (and reliably) self modify. Two real people would be somewhat more complex and their words and beliefs less literally correlated with reality.
The scenario was presented as something that actually happened, to two real people, with Kate’s beliefs literally correlating with reality.
I am comfortable with leaving my previous statements as they stand.
I think we are taking a somewhat different approach to discussion here and remind myself that my way of thinking may not be strictly better than yours, merely different (P vs J).
There are similarities in the human emotional experience of deciding Newcomb’s problem and the marriage proposal problem.
If that is true (which I am not sure of, and it is hard to tell since your vague claim doesn’t specify the common emotional experience), isn’t that just an example of how our human emotions are misleading?
I hate to have to ask this sort of question again, but would the downvoters care to explain why it is wrong for me to ask why PJEby thinks we should draw conclusions about decision theory from human emotional reactions, or to ask what particular emotional reactions he is talking about?