I thought the article was quite good.
Yes, it pokes fun at lesswrong. That’s to be expected. But it’s well written and clearly conveys all the concepts in an easy-to-understand manner. The author understands lesswrong and our goals and ideas on a technical level, even if he doesn’t agree with them. I was particularly impressed by how the author explained why TDT solves Newcomb’s problem. I could give that explanation to my grandma and she’d understand it.
I don’t generally believe that “any publicity is good publicity.” However, this publicity is good publicity. Most people who read the article will forget it and only remember lesswrong as that kinda weird place that’s really technical about decision stuff (which is frankly accurate). Those people who do want to learn more are exactly the people lesswrong wants to attract.
I’m not sure what people’s expectations are for free publicity, but this is, IMO, the best-case scenario.
From a technical standpoint, this bit:

Even if the alien jeers at you, saying, “The computer said you’d take both boxes, so I left Box B empty! Nyah nyah!” and then opens Box B and shows you that it’s empty, you should still only take Box B and get bupkis. … The rationale for this eludes easy summary, but the simplest argument is that you might be in the computer’s simulation. In order to make its prediction, the computer would have to simulate the universe itself.
Seems wrong. Omega wouldn’t necessarily have to simulate the universe, although that’s one option. If it did simulate the universe, showing sim-you an empty Box B doesn’t tell it much about whether real-you will take Box B when real-you hasn’t seen that it’s empty.
(Not an expert, and I haven’t read Good and Real, which this is supposedly from, but I do expect to understand this better than a Slate columnist.)
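To make that concrete, here’s a toy sketch in Python (my own framing plus the standard payoff numbers, nothing from Good and Real): treat a strategy as a function from what you observe about Box B to an action. The predictor only needs to probe the branch where Box B looks full; the empty-box branch never decides anything.

```python
# Toy Transparent Newcomb. A strategy maps the observation "does Box B
# look full?" to an action. The predictor fills Box B only if the agent
# would one-box upon seeing it full -- note it never needs to check what
# the agent does with a visibly empty box.

def one_boxer(b_looks_full: bool) -> str:
    return "one-box"    # take only Box B, whatever you see

def two_boxer(b_looks_full: bool) -> str:
    return "two-box"    # take both boxes, whatever you see

def payoff(strategy) -> int:
    fills_b = strategy(True) == "one-box"   # predictor probes the "full" branch
    box_a = 1_000
    box_b = 1_000_000 if fills_b else 0
    action = strategy(fills_b)              # the agent then sees the real state
    return box_b if action == "one-box" else box_a + box_b

for s in (one_boxer, two_boxer):
    print(s.__name__, payoff(s))
# one_boxer 1000000
# two_boxer 1000
```

Toy as it is, this is why showing sim-you an empty box tests the wrong branch: the one-boxer does walk away with bupkis in the empty-box branch, but a consistent predictor never puts it there.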
And I think the final two paragraphs go beyond “pokes fun at lesswrong”.
It is wrong in about the same way that high school chemistry is wrong. Not one of the statements is true, but the error seems to be one of not quite understanding the details rather than any overt misrepresentation. I.e., I’d cringe and say “more or less”, since that’s closer to getting Transparent Newcomb’s right than I could reasonably expect from most people.
Seems wrong. Omega wouldn’t necessarily have to simulate the universe, although that’s one option.

The other options work out the same as simulating the universe for the purpose of telling you how you should decide to behave, but “simulating the universe” makes it visceral and easy to imagine.
Yes, they’re a caution about reason as memetic immune disorder. The money quote for the whole article is:
Of course, mentioning the articles on ethical injunctions would be too boring.
Here comes the Straw Vulcan’s younger brother, the Straw LessWrongian. (Brought to you by RationalWiki.)
The signals LessWrong sends on some issues are troublingly ambiguous.
On the one hand LessWrong says that you should “shut up and multiply, to trust the math even when it feels wrong”. On the other hand Yudkowsky writes that he would sooner question his grasp of “rationality” than give five dollars to a Pascal’s Mugger because he thought it was “rational”.
On the one hand LessWrong says that whoever knowingly chooses to save one life, when they could have saved two—to say nothing of a thousand lives, or a world—they have damned themselves as thoroughly as any murderer. On the other hand Yudkowsky writes that ends don’t justify the means for humans.
On the one hand LessWrong stresses the importance of acknowledging a fundamental problem and saying “Oops”. On the other hand Yudkowsky tries to patch a framework that is obviously broken.
Anyway, I worry that the overall message LessWrong sends is that of naive consequentialism based on back-of-the-envelope calculations, rather than the meta-level consequentialism that contains itself when faced with too much uncertainty.
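The Pascal’s Mugger tension is easy to make concrete. Here’s a back-of-the-envelope sketch (every number below is made up purely for illustration): an agent that literally shuts up and multiplies pays the mugger, which is exactly the conclusion Yudkowsky says he’d sooner distrust.

```python
# Naive "shut up and multiply" applied to Pascal's Mugger.
# All quantities are illustrative stand-ins, not real credences.

p_threat_real = 1e-30           # tiny credence that the mugger's threat is real
lives_at_stake = 1e100          # stand-in for an astronomically large stake
utility_per_life = 1.0
utility_of_keeping_5_dollars = 1e-3

ev_pay = p_threat_real * lives_at_stake * utility_per_life
ev_refuse = utility_of_keeping_5_dollars

print(ev_pay > ev_refuse)       # True: the raw multiplication says pay up
```

The meta-level move is to distrust any calculation whose verdict is driven entirely by numbers this extreme, rather than to obey it; that’s the “contains itself” I mean above.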
Wow, these are very interesting examples!
Okay, for me the whole paradox breaks down to this:
I have limited brainpower and my hardware is corrupted. I am not able to solve all problems, and even where I believe I have a solution, I can’t trust myself. On the other hand, I should use all the intelligence I have, simply because there is no convincing argument why doing anything else would be better.
Using my reasoning to study my reasoning itself, and the biases thereof, I find some typical failure modes: those are the things I probably shouldn’t do even if they seem rational. Now I’m kinda meta-reasoning about where I should follow my reasoning and where I shouldn’t. And things are getting confusing, probably because I am getting closer to the limits of my rationality. Still, there is no better way for me to act.
From the outside, this may seem like having a dozen random excuses. But there are no better solutions. So the socially savvy solution is to shut up and pretend the whole topic doesn’t even exist. It doesn’t help solve the problem, but it helps save face: sweeping human irrationality under the rug instead of exposing it and then admitting that you, too, are only human.
Expounding at length on dust specks vs. torture, “shut up and multiply”, and “taking ideas seriously” is likely to make people look askance at you, even if you also add “… but don’t do anything weird, OK?” at the end.