I voted this comment down, and would like to explain why.
Omega can have various properties as needed to simplify various thought experiments
Right, we don’t want people distracted by whether Omega’s prediction could be incorrect in their case or whether the solution should involve tricking Omega, etc. We say that Omega is a perfect predictor not because it’s so very reasonable for him to be a perfect predictor, but so that people won’t get distracted in those directions.
If Omega were a perfect predictor, then the whole dilemma inherent in Newcomb-like problems would cease to exist, and that short-circuits the entire point of posing those types of problems.
We must disagree about what is the heart of the dilemma. How can it be all about whether Omega is wrong with some fractional probability? Rather it’s about whether logic (2-boxing seems logical) and winning are at odds. Or perhaps whether determinism and choice are at odds, if you are operating outside a deterministic world-view. Or perhaps a third thing, but nothing—in this problem—about what kinds of Omega powers are reasonable or possible. Omega is just a device being used to set up the dilemma.
My difficulty is in understanding why the concept of a perfect predictor is relevant to artificial intelligence.
Also, 2-boxing is indicated by inductive logic based on non-Omega situations. Given the special circumstances of Newcomb’s problem, it would seem unwise to rely on that. Deductive logic leads to 1-boxing.
You don’t need perfect prediction to develop an argument for one-boxing. If the predictor’s probability of correct prediction is p and the utility of the contents of the one-box is k times the utility of the contents of the two-box, then the expected utility of one-boxing is greater than that of two-boxing whenever p is greater than (k + 1) / (2k).
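As a quick sanity check of that algebra, here is a minimal sketch in Python (the payoff ratio k = 2 is an arbitrary illustration, not part of the problem):

    # Expected utilities under an imperfect predictor, following the parent:
    # u = utility of the small (always-taken) box, k*u = utility of the big
    # (predicted) box, p = probability that the prediction is correct.

    def eu_one_box(p, k, u=1.0):
        # The big box is full only if the predictor foresaw one-boxing.
        return p * k * u

    def eu_two_box(p, k, u=1.0):
        # You always get the small box; the big box is full only on a miss.
        return p * u + (1 - p) * (k + 1) * u

    k = 2
    threshold = (k + 1) / (2 * k)  # 0.75 when k = 2
    for p in (0.6, 0.75, 0.9):
        print(p, eu_one_box(p, k) > eu_two_box(p, k))
    # False, False (the two utilities are equal at the threshold), True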
Also, 2-boxing is indicated by inductive logic based on non-Omega situations. Given the special circumstances of Newcomb’s problem, it would seem unwise to rely on that. Deductive logic leads to 1-boxing.
I agree that in general this is how it works. It’s rather like POAT that way… some people see it as one kind of problem, and other people see it as another kind of problem, and neither side can make sense of the other’s position.
I’ve heard this sentiment expressed a fair bit, but I think I understand the argument for two-boxing perfectly, even though I’d one-box.
POAT?
Plane on a treadmill. (I’d pull out LMGTFY again, but I try to limit myself to one jerk-move per day.)
Er, did you actually Google it before saying that? For me it’s not even defined that way on the front page.
Yep. For me the first link (at work, second link now at home) is urbandictionary.com, and it’s the second definition.
I don’t think it counts as a matter for LMGTFY unless the answer pretty much screams at you on the results page before you even start clicking the links...
I personally ask for a link if two minutes of Googling and link-clicking gets me nothing; my standard for LMGTFY follows as a corollary.
Making the assumption that the person you’re responding to hasn’t invested those two minutes can be risky, as the present instance shows. Maybe they have, but got different results.
Another risky assumption is that the other person is using the same Google that you are using. By default the search bar in Firefox directs me to the French Google (I’ve even looked for a way to change that, without success).
So you could end up looking like an ass, rather than a jerk, when you pull a LMGTFY and the recipient still doesn’t see what you’re seeing. It only works as a status move if you’re confident that most search options and variations will still pull up the relevant result.
More importantly, this is yet another data point in favor of the 10x norm. Unless of course we want LW to be Yet Another Internet Forum (complete with avatars).
(ETA: yes, in the comment linked here the 10x norm is intended to apply to posts, not comments. I favor the stronger version that applies to comments as well: look at the length of this comment thread, infer the time spent writing these various messages and the time wasted by readers watching Recent Comments, and compare with how long it would have taken to spell it out.)
Making the assumption… Another risky assumption...
’Strue. Those occurred to me about five minutes after I first replied to ciphergoth, when the implications of the fact that the link position changed based on where I was when I Googled finally penetrated my cerebral cortex. I considered noting it in an ETA, but I didn’t expect the comment thread to continue as far as it has.
Oh, note also that Cyan’s first use of LMGTFY was I think legit—finding my blog through Google is pretty straightforward from my username.
I don’t think it’s fair to count the meta-discussion against Cyan when weighing this up. Anything can spark meta-discussion here.
If it takes two full minutes for my readership to find out what the terms mean, the onus is on me to provide the link; if that only takes me three minutes and it saves two readers Googling, then it’s worth it. The LMGTFY boundary is closer to ten seconds or less.
Another option would have been to spell it out—that way a lot of readers would have known without Googling, and those who didn’t would have got answers right away.
I don’t disagree with this. My “corollary” comment above was too facile—when I recall my own behavior, it’s my standard for peevishly thinking LMGTFY, not actually linking it.
First, thanks for explaining your down vote and thereby giving me an opportunity to respond.
We say that Omega is a perfect predictor not because it’s so very reasonable for him to be a perfect predictor, but so that people won’t get distracted in those directions.
The problem is that it is not a fair simplification; it disrupts the dilemma in such a way as to render it trivial. If you set the accuracy of the prediction to 100%, many of the other specific details of the problem become largely irrelevant. For example, you could then put $999,999.99 into box A and it would still be better to one-box.
It’s effectively the same thing as lowering the amount in box A to zero or raising the amount in box B to infinity. And one could break the problem in the other direction by lowering the accuracy of the prediction to 50% or equalizing the amounts in the two boxes.
We must disagree about what is the heart of the dilemma. How can it be all about whether Omega is wrong with some fractional probability?
It’s because the probability of a correct prediction must be between 50% and 100%, or it breaks the structure of the problem in the sense that it makes the answer trivial to work out.
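To make those two broken extremes concrete, a minimal sketch in Python (it assumes the conventional $1,000,000 in box B; the near-million box A figure is the one from my example above):

    # At p = 1.0 the prediction never misses, so one-boxing wins no matter
    # how much box A holds; at p = 0.5 the prediction is uninformative, so
    # dominance applies and two-boxing wins. Only strictly between the two
    # extremes is there a real trade-off.

    def eu_one_box(p, box_b):
        return p * box_b

    def eu_two_box(p, box_b, box_a):
        # Box A is always received; box B is full only when the predictor
        # wrongly expected one-boxing.
        return box_a + (1 - p) * box_b

    box_b, box_a = 1_000_000, 999_999.99
    for p in (1.0, 0.5):
        print(p, eu_one_box(p, box_b), eu_two_box(p, box_b, box_a))
    # p = 1.0: 1000000.0 vs 999999.99  -- one-boxing wins, if only by a cent
    # p = 0.5: 500000.0 vs 1499999.99  -- two-boxing wins trivially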
Rather it’s about whether logic (2-boxing seems logical) and winning are at odds.
I suppose it is true that some people have intuitions that persist in leading them astray even when the probability is set to 100%. In that sense it may still have some value if it helps to isolate and illuminate these biases.
Or perhaps whether determinism and choice are at odds, if you are operating outside a deterministic world-view. Or perhaps a third thing, but nothing—in this problem—about what kinds of Omega powers are reasonable or possible. Omega is just a device being used to set up the dilemma.
My objection here doesn’t have to do with whether it is reasonable for Omega to possess such powers but with the over-simplification of the dilemma to the point where it is trivial.
I see we really are talking about different Newcomb “problem”s. I took back my down vote. So one of our problems should have another name, or at least a qualifier.
I suppose it is true that some people have intuitions that persist in leading them astray even when the probability is set to 100%. In that sense it may still have some value if it helps to isolate and illuminate these biases.
I don’t think Newcomb’s problem (mine) is so trivial. And I wouldn’t call belief in the triangle inequality a bias.
Let a >= 0 be the contents of box 1 and b >= 0 be the contents of box 2.
2-boxing is the logical deduction that (a + b) >= a and (a + b) >= b.
I do 1-box, and do agree that this decision is a logical deduction. I find it odd, though, that this deduction works by repressing another logical deduction; I don’t think I’ve ever seen this before. I would want to argue that any and every logical path should work without contradiction.
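One way to see where the two deductions part company, as a toy sketch in Python (it assumes box 1 is the transparent box and box 2 is the one Omega fills, plus the conventional dollar amounts; neither is specified above): the dominance deduction holds a and b fixed, while the perfect-predictor premise makes b a function of the choice itself.

    # Dominance deduction: for any fixed a, b >= 0, a + b >= b, so
    # two-boxing never does worse. But with a perfect predictor, b is not
    # fixed -- it is determined by the choice, so the two rows being
    # compared can never both be realized.

    a = 1_000                      # box 1, transparent (assumed amount)

    def b(choice):                 # box 2, filled by a perfect predictor
        return 1_000_000 if choice == "one-box" else 0

    payoff = {
        "one-box": b("one-box"),         # 1,000,000
        "two-box": a + b("two-box"),     # 1,000
    }
    print(payoff)  # the deductions disagree because b varies with the choice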
I suppose it is true that some people have intuitions that persist in leading them astray even when the probability is set to 100%. In that sense it may still have some value if it helps to isolate and illuminate these biases.
My objection here doesn’t have to do with whether it is reasonable for Omega to possess such powers but with the over-simplification of the dilemma to the point where it is trivial.
Perhaps I can clarify: I specifically intended to simplify the dilemma to the point where it was trivial. There are a few reasons for this, but the primary reason is so I can take the trivial example expressed here, tweak it, and see what happens.
This is not intended to be a solution to any other scenario in which Omega is involved. It is intended to make sure that we all agree that this is correct.
I’m finding “correct” to be a loaded term here. It is correct in the sense that your conclusions follow from your premises, but in my view it bears only a superficial resemblance to Newcomb’s problem. Omega is not defined the way you defined it in Newcomb-like problems and the resulting difference is not trivial.
To really get at the core dilemma of Newcomb’s problem in detail, one needs to work out the equilibrium accuracy (that is, the level of accuracy required to make one-boxing and two-boxing have equal expected utility), not just arbitrarily set the accuracy to the upper limit, where it is easy to work out that one-boxing wins.
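Concretely, the equilibrium accuracy falls out of the same expected-utility algebra quoted earlier in the thread: setting EU(one-box) equal to EU(two-box) and solving for p gives p* = (k + 1) / (2k), where k is the ratio of the box B prize to the box A prize. A sketch in Python, assuming the conventional $1,000,000 / $1,000 payoffs:

    # Equilibrium accuracy: solve p * k = p + (1 - p) * (k + 1) for p,
    # which gives p* = (k + 1) / (2 * k).

    def equilibrium_accuracy(k):
        return (k + 1) / (2 * k)

    k = 1_000_000 / 1_000           # ratio of box B to box A
    print(equilibrium_accuracy(k))  # 0.5005 -- barely better than a coin flip

So with those payoffs, anything above 50.05% accuracy already favors one-boxing, which is why the interesting versions of the problem live strictly between chance and certainty.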
I’m finding “correct” to be a loaded term here. It is correct in the sense that your conclusions follow from your premises, but in my view it bears only a superficial resemblance to Newcomb’s problem.
I don’t care about Newcomb’s problem. This post doesn’t care about Newcomb’s problem. The next step in this line of questioning still doesn’t care about Newcomb’s problem.
So, please, forget about Newcomb’s problem. At some point, way down the line, Newcomb’s problem may show up again, but when it does this:
Omega is not defined the way you defined it in Newcomb-like problems and the resulting difference is not trivial.
Will certainly be taken into account. Namely, it is exactly because the difference is not trivial that I went looking for a trivial example.
The reason you find “correct” to be loaded is probably because you are expecting some hidden “Gotcha!” to pop out. There is no gotcha. I am not trying to trick you. I just want an answer to what I thought was a simple question.