LW.
Example: the preoccupation with Newcomb’s problem. You think it’s of any use in reality?
Newcomb’s problem is one possible way to explain the power of precommitments, and why “doing what seems most rational at this moment” doesn’t have to be the best strategy in general.
(But of course you can also explain precommitments without Newcomb’s problem.)
Rationality novices are sometimes prone to greedy reasoning, dismissing everything else as “irrational”. Newcomb’s problem may happen to be the koan that wakes them up.
In its literal meaning (i.e. not merely as a metaphor for something else), as MrMind said, it’s useful for people who do something with decision theory, like publish papers on it or try to build a decision-making machine. Otherwise, it’s just a curiosity.
You can’t precommit to something you have no idea will happen.
In the standard Newcomb’s problem the existence of Omega and his two boxes is a surprise to you. You did not train since childhood for the moment of meeting him.
It’s a counter-example to the then-prevailing theory of decision making, which is a foundational discipline in AI. So yes, it has a very important use in reality.
In which sense do you use the word “prevailing”?
I am also not quite sure how it is a counter-example.
Newcomb’s problem involves “choice”. If you are not going to discard causality (which I’m not willing to do), the only sensible interpretation is that your choice when you are in front of the two boxes doesn’t matter (or is predetermined, same thing). The choice that matters is the one you’ve made in the past when you picked your decision algorithm.
Given this, I come to the conclusion that you should pick your decision algorithm based on some improbable side-effect unknown to you at the time you were making the choice that matters.
If by “prevailing” we agree to mean “accepted as true by the majority of people who worked on the subject”, then it’s safe to say that causal decision theory (CDT) was the prevailing theory. CDT two-boxes, which is sub-optimal, so Newcomb’s problem is a counter-example.
That is exactly the crux of the matter: decision theory must face the problem of source-code stability and self-alignment.
Well, there’s a probabilistic Newcomb problem, and it’s relevant in strategic decision making, so it’s not very improbable. It’s like the Prisoner’s dilemma: once you know it, you start to see it everywhere.
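A minimal sketch of that probabilistic version, assuming the standard $1,000 / $1,000,000 payoffs and a predictor that is right with probability p (this is the evidential expected-value calculation, i.e. exactly the conditioning that CDT rejects):

# Expected-payoff sketch for a probabilistic Newcomb problem.
# Assumed, illustrative numbers: $1,000 in the transparent box,
# $1,000,000 in the opaque box, predictor accuracy p.

def expected_payoff(one_box, p):
    if one_box:
        # Predictor right with probability p: the opaque box is full.
        return p * 1_000_000
    # Predictor right (prob p): opaque box empty, you get only $1,000;
    # predictor wrong (prob 1 - p): you get both boxes.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.99, 0.75, 0.55, 0.5):
    print(p, expected_payoff(True, p), expected_payoff(False, p))

# One-boxing comes out ahead whenever p > 1_001_000 / 2_000_000 ≈ 0.5005,
# i.e. as soon as the predictor is even slightly better than chance.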
I don’t see it as sub-optimal (I two-box in case you haven’t guessed it already).
I don’t understand what that means. Can you ELI5?
OK. Throw out the word “improbable”. You are still left with “you should pick your decision algorithm based on some side-effect unknown to you at the time you were making the choice that matters.”
You haven’t made much progress.
Something earlier? That is, who regurgitated that question to you before you regurgitated it to me? Newcomb? Robert Nozick?
I think LW was actually the place where I first encountered Newcomb’s problem.
But if you’re looking for origins of intellectual masturbation, they go waaaaay back X-)
I had never encountered things like Newcomb’s problem before LW. And after years on this site, I still don’t understand their relevance, or why the more AI-x-risk-focused people here obsess over them. Such issues have very little practical value and are extremely far removed from applied rationality.
I agree with Lumifer. It’s hard to look at LW and not come away with a bad aftertaste of ivory tower philosophizing in the pejorative sense.
Doesn’t that bother you?
If the goal of applied rationalists is to improve upon and teach applied rationality to others, wouldn’t it behoove us to reframe the way we speak here and think about how our words can be interpreted in more elegant ways?
It doesn’t matter how good an idea somebody has: if they can’t communicate it palatably, it won’t reliably pass on, not to other people, not to the next generation, not to anybody.
It would be very surprising for an agent or community to have only one goal at this level of abstraction (if you prefer, say “only one term in their utility function”). There are multiple participants here, with somewhat varying interests in rationality and lifehackery.
Personally, I prefer exploring the edge cases and theoretical foundations of correct decision-making BEFORE I commit to heuristics or shortcuts that clearly can’t apply universally.
The fact that these explorations aren’t necessary or interesting to those who just want to learn some tricks to be stronger (probably, for some definitions) bothers me a bit, but more for them than for me. If you don’t see how an understanding of Newcomb’s problem lets you better evaluate the power and limits of a decision mechanism, that’s fine, but please don’t try to stop me discussing it.
I wouldn’t ask anybody to stop discussing Newcomb problems; my response was solely directed at the rhetoric behind the Newcomb discussion, not the merits (or lack thereof) of discussing it.
I’m not as concerned about what is being discussed as about how. When inferential distances and cognitive biases get in the way of understanding concepts, let alone making them palatable to read about, I’d hope people would spend more time making the topic more transparent. When I hear somebody claim to have gotten a “bad aftertaste” from coming to this site, I can’t help but think this is partially a failure of the site. Then again, perhaps my standards would be too high for the discussion board...