I had never encountered things like Newcomb’s problem before LW. And after years on this site, I still don’t understand their relevance, or why the more AI-x-risk-focused people here obsess over them. Such issues have very little practical value and are extremely far removed from applied rationality.
I agree with Lumifer. It’s hard to look at LW and not come away with a bad aftertaste of ivory tower philosophizing in the pejorative sense.
Doesn’t that bother you?
If the goal of applied rationalists is to improve upon and teach applied rationality to others, wouldn’t it behoove us to reframe the way we speak here and put more thought into how our words will be read?
It doesn’t matter how good an idea somebody has; if they can’t communicate it palatably, it won’t reliably be passed on: not to other people, not to the next generation, not to anybody.
It would be very surprising for an agent or community to have only one goal (at this level of abstraction; if you prefer, say “to have only one term in their utility function”). There are multiple participants here, with somewhat varying interests in rationality and lifehackery.
Personally, I prefer exploring the edge cases and theoretical foundations of correct decision-making BEFORE I commit to heuristics or shortcuts that clearly can’t apply universally.
The fact that these explorations aren’t necessary or interesting to those who just want to learn some tricks to be stronger (probably, for some definitions) bothers me a bit, but more for them than for me. If you don’t see how an understanding of Newcomb’s problem lets you better evaluate the power and limits of a decision mechanism, that’s fine, but please don’t try to stop me discussing it.
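For anyone who hasn’t run into the problem, here is a minimal sketch, assuming the usual textbook payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one, with a predictor that is right with probability p. The expected-value calculation favors one-boxing once p exceeds roughly 0.5005, while the dominance argument for two-boxing never consults p at all; which answer a decision procedure returns is exactly the kind of probe of its power and limits mentioned above.

```python
# Minimal Newcomb sketch, assuming the usual textbook payoffs:
# $1,000,000 in the opaque box (filled iff one-boxing was predicted)
# and $1,000 in the transparent box, with predictor accuracy p.

def expected_payoff(one_box: bool, p: float) -> float:
    """Expected dollar payoff given the predictor's accuracy p."""
    if one_box:
        # The opaque box is full only if the predictor foresaw one-boxing.
        return p * 1_000_000
    # A two-boxer always gets the $1,000, plus the million only when the
    # predictor wrongly expected one-boxing.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.5005, 0.6, 0.99):
    print(f"p={p}: one-box={expected_payoff(True, p):>11,.0f}"
          f"  two-box={expected_payoff(False, p):>11,.0f}")
```

The crossover at p ≈ 0.5005 is why the problem only bites when the predictor is credibly better than chance; below that threshold the expected-value and dominance arguments agree on two-boxing, and above it they come apart.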
I wouldn’t ask anybody to stop discussing Newcomb problems; my response was directed solely at the rhetoric surrounding the Newcomb discussion, not at the merits (or lack thereof) of discussing it.
I’m not as concerned about what is being discussed as about how. When inferential distances and cognitive biases get in the way of understanding concepts, let alone of making them palatable to read about, I’d hope people would spend more time making the topic more transparent. When I hear somebody claiming to have gotten a “bad aftertaste” from coming to this site, I can’t help but think that this is partly a failure of the site. Then again, perhaps my standards are too high for the discussion board...
Something earlier? That is, who regurgitated that question to you before you regurgitated it to me? Newcomb? Robert Nozick?
I think LW was actually the place where I first encountered Newcomb’s problem.
But if you’re looking for origins of intellectual masturbation, they go waaaaay back X-)