I’ve been noticing a trend lately, perhaps others have some evidence for this.
Perhaps during casual conversation, or perhaps as a means of guiding somebody, maybe an old friend or an inquisitive stranger, I’ll mention this site or rationality as a practice in general. Typically, I get what I believe is a cached response most people saw somewhere, one that goes something like this: “Rationalists are too high in the clouds to have useful ideas. Logic is impractical.”
Perhaps people heard it through casual conversation themselves, but at the end of the day, there’s a source out there somewhere that must have blown up like any other meme on the planet. Does anybody have a few sources in mind?
LW.
Example: preoccupation with Newcomb’s problem. You think it’s of any use in reality?
Newcomb’s problem is one possible way to explain the power of precommitments, and why “doing what seems most rational at this moment” doesn’t have to be the best strategy generally.
(But of course you can also explain precommitments without Newcomb’s problem.)
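To make the point concrete, here is a minimal sketch in Python (the $1,000 / $1,000,000 amounts are the standard ones from the problem; the agents and function names are made up for illustration). The greedy agent reasons correctly about the moment it is in, and still walks away with less:

```python
# Toy sketch of why the locally "rational" move loses in Newcomb's problem.
# Omega, a perfect predictor, fills the opaque box based on what the
# agent's decision procedure will output, before the agent acts.

SMALL, BIG = 1_000, 1_000_000

def committed_one_boxer(box_b):
    return "one-box"    # precommitted; ignores the temptation to grab both

def greedy_agent(box_b):
    return "two-box"    # the contents are already fixed, so at this
                        # moment taking both boxes always looks better

def run(agent):
    box_b = BIG if agent(None) == "one-box" else 0   # Omega's prediction
    action = agent(box_b)                            # the actual choice
    return box_b if action == "one-box" else box_b + SMALL

print(run(committed_one_boxer))  # 1000000
print(run(greedy_agent))         # 1000
```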
Sometimes rationality novices are prone to greedy reasoning, and dismiss everything else as “irrational”. Newcomb’s problem may happen to be the koan that wakes them up.
In its literal meaning (i.e. not merely as a metaphor for something else), as MrMind said, it’s useful for people who do something with decision theory, like publish papers on it, or try to build a decision-making machine. Otherwise, it’s just a curiosity.
Newcomb’s problem is one possible way to explain the power of precommitments
You can’t precommit to something you have no idea will happen.
In the standard Newcomb’s problem the existence of Omega and his two boxes is a surprise to you. You did not train since childhood for the moment of meeting him.
It’s a counter-example to the then-prevailing theory of decision making, which is a foundational discipline in AI. So yes, it has a very important use in reality.
In which sense do you use the word “prevailing”?
I am also not quite sure how it is a counter-example.
Newcomb’s problem involves “choice”. If you are not going to discard causality (which I’m not willing to do), the only sensible interpretation is that your choice when you are in front of the two boxes doesn’t matter (or is predetermined, same thing). The choice that matters is the one you’ve made in the past when you picked your decision algorithm.
Given this, I come to the conclusion that you should pick your decision algorithm based on some improbable side-effect unknown to you at the time you were making the choice that matters.
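One way to render that reading in code (a toy sketch, not a formal decision theory): for a deterministic agent, the predictor’s simulation and the live decision are calls to the same function, so the “choice” in front of the boxes can never diverge from the prediction; the only free variable is which decision algorithm you picked beforehand.

```python
# Predetermination in miniature: prediction and action come from the
# same deterministic function, so they cannot disagree.

policies = {"one-box": lambda: "one-box", "two-box": lambda: "two-box"}

for name, decide in policies.items():
    prediction = decide()        # what the predictor's simulation sees
    action = decide()            # what happens at the boxes
    assert prediction == action  # no room for a divergent "choice"
    box_b = 1_000_000 if prediction == "one-box" else 0
    payout = box_b if action == "one-box" else box_b + 1_000
    print(name, payout)          # one-box 1000000 / two-box 1000
```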
If by “prevailing” we agree to mean “accepted as true by the majority of people who worked on the subject”, then it’s safe to say that causal decision theory was the prevailing theory. CDT two-boxes, so it’s sub-optimal, and so Newcomb’s problem is a counter-example.
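For readers who haven’t seen the dominance argument spelled out, here is a small sketch (standard amounts, illustrative code) of why CDT two-boxes, and of why that recommendation loses against an accurate predictor:

```python
# Why CDT two-boxes: holding the contents of box B fixed (your choice
# can no longer causally affect them), taking both boxes is better in
# every possible state -- a strict dominance argument.

for box_b in (0, 1_000_000):        # the two possible fixed states
    one_box = box_b
    two_box = box_b + 1_000
    print(box_b, one_box, two_box)  # two-box wins by $1,000 in each row

# Yet against an accurate predictor, agents who reason this way face an
# empty box B, which is exactly why Newcomb's problem is offered as a
# counter-example to CDT.
```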
The choice that matters is the one you’ve made in the past when you picked your decision algorithm.
That is exactly the crux of the matter: decision theory must face the problem of source-code stability and self-alignment.
you should pick your decision algorithm based on some improbable side-effect unknown to you at the time you were making the choice that matters.
Well, there’s a probabilistic Newcomb problem and it’s relevant in strategic decision making, so it’s not very improbable. It’s like the Prisoner’s dilemma: once you know it, you start to see it everywhere.
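The probabilistic version is easy to work out explicitly. A short sketch (standard amounts; the predictor’s accuracy p is the free parameter) showing that one-boxing wins in expectation as soon as p exceeds 0.5005:

```python
# Expected payoffs in the probabilistic Newcomb problem with a
# predictor of accuracy p.

def ev_one_box(p):
    # Box B holds $1M iff the predictor correctly predicted one-boxing.
    return p * 1_000_000

def ev_two_box(p):
    # You always get the $1,000; box B holds $1M only if the predictor
    # was wrong about you (probability 1 - p).
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.5005, 0.51, 0.9, 1.0):
    print(p, ev_one_box(p), ev_two_box(p), ev_one_box(p) > ev_two_box(p))

# The two lines cross at p = 0.5005: the predictor barely has to beat a
# coin flip before the "irrational-looking" choice wins in expectation.
```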
I don’t see it as sub-optimal (I two-box in case you haven’t guessed it already).
I don’t understand what that means. Can you ELI5?
OK. Throw out the word “improbable”. You are still left with:
you should pick your decision algorithm based on some side-effect unknown to you at the time you were making the choice that matters.
You haven’t made much progress.
Something earlier? That is, who regurgitated that question to you before you regurgitated it to me? Newcomb? Robert Nozick?
I think LW was actually the place where I first encountered Newcomb’s problem.
But if you’re looking for origins of intellectual masturbation, they go waaaaay back X-)
I have never encountered things like Newcomb’s problem before LW. And after years on this site, I still don’t understand their relevance, or why the more AI-x-risk-minded people here obsess over them. Such issues have very little practical value and are extremely far removed from applied rationality.
I agree with Lumifer. It’s hard to look at LW and not come away with a bad aftertaste of ivory tower philosophizing in the pejorative sense.
Doesn’t that bother you?
If the goal of applied rationalists is to improve upon and teach applied rationality to others, wouldn’t it behoove us to reframe the way we speak here and think about how our words can be interpreted in more elegant ways?
It doesn’t matter how good an idea somebody has: if they can’t communicate it palatably, it won’t reliably pass on, not to other people, not to the next generation, not to anybody.
It would be very surprising for an agent or community to have only one goal, at this level of abstraction (if you prefer, say “to have only one term in their utility function”). There are multiple participants here, with somewhat variant interests in rationality and lifehackery.
Personally, I prefer exploring the edge cases and theoretical foundations of correct decision-making BEFORE I commit to heuristics or shortcuts that clearly can’t apply universally.
The fact that these explorations aren’t necessary or interesting to those who just want to learn some tricks to be stronger (probably, for some definitions) bothers me a bit, but more for them than for me. If you don’t see how an understanding of Newcomb’s problem lets you better evaluate the power and limits of a decision mechanism, that’s fine, but please don’t try to stop me discussing it.
I wouldn’t ask anybody to stop discussing Newcomb problems, my response was solely directed at the rhetoric behind Newcomb discussion, not the merits (or lack thereof) of discussing it.
I’m not as concerned about what is being discussed as about how. When inferential distances and cognitive biases get in the way of understanding concepts, much less make them palatable to read about, I’d hope people would spend more time optimizing the topic to appear more transparent. When I hear somebody claiming to have gotten a “bad aftertaste” from coming to this site, I can’t help but think this is partially a failure of the site. Then again, perhaps my standards would be too high for the discussion board...
Being high-class means that you can afford to spend your time on impractical issues. As a result, throughout history, high-class people have signaled their status by spending their time on impractical matters.
I’ll mention this site or rationality as a practice in general. Typically, I get what I believe is a cached response most people saw somewhere that goes something like this: “Rationalists are too high in the clouds to have useful ideas. Logic is impractical.”
Applied rationality doesn’t have that much to do with using logic. It doesn’t violate logic, but a lot of what we talk about is about different heuristics. It might be worthwhile to present the idea of applied rationality differently.
This seems like an issue of conflating logic with applied rationality, then. Chances are that I made this mistake in writing my post. I’ll be sure to check for conflation in the rhetoric I use; chances are that certain words will carry a connotation that signals to the listener a need to reply with a cached response.
I’m reminded of a tale retold by Plato, about the famous philosopher Thales, who was so intent on looking at the stars that he fell into a well. This ‘meme’ is actually as ancient as civilization itself (Thales is pre-Socratic; that is, this anecdote predates the very idea of rationality).
Other early tellers of tales about Thales point in quite a different direction: I think the single best-known story about him is of how one year he worked out that it was going to be a good year for olive oil, hired all the olive presses, and made an absolute killing when the autumn harvest came along. (Aristotle’s telling, at least, is explicitly aimed at indicating that philosophers are perfectly capable of turning their talents to practical ends, and that if they don’t it’s by choice.)