Yes, they’re a caution about reason as memetic immune disorder. The money quote for the whole article is:
Of course, mentioning the articles on ethical injunctions would be too boring.
Here comes the Straw Vulcan’s younger brother, the Straw LessWrongian. (Brought to you by RationalWiki.)
It’s troubling how ambiguous the signals are that LessWrong sends on some issues.
On the one hand LessWrong says that you should “shut up and multiply, to trust the math even when it feels wrong”. On the other hand Yudkowsky writes that he would sooner question his grasp of “rationality” than give five dollars to a Pascal’s Mugger because he thought it was “rational”.
On the one hand LessWrong says that whoever knowingly chooses to save one life, when they could have saved two—to say nothing of a thousand lives, or a world—they have damned themselves as thoroughly as any murderer. On the other hand Yudkowsky writes that ends don’t justify the means for humans.
On the one hand LessWrong stresses the importance of acknowledging a fundamental problem and saying “Oops”. On the other hand Yudkowsky tries to patch a framework that is obviously broken.
Anyway, I worry that the overall message LessWrong sends is that of naive consequentialism based on back-of-the-envelope calculations, rather than the meta-level consequentialism that contains itself when faced with too much uncertainty.
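To make the first tension concrete, here is a minimal sketch in Python of the naive “shut up and multiply” rule, using entirely made-up probabilities and payoffs (none of these numbers come from the original posts); it shows how a Pascal’s Mugger exploits the rule simply by claiming an absurdly large payoff:

```python
# A toy "shut up and multiply" calculation with invented numbers,
# illustrating why a naive expected-value rule pays the Pascal's Mugger.

def expected_value(probability: float, payoff: float) -> float:
    """Naive expected value: just multiply, as the slogan says."""
    return probability * payoff

# An ordinary charitable bet: a 10% chance of saving 100 lives.
ordinary = expected_value(0.10, 100)    # 10 expected lives saved

# The mugger's offer: a vanishingly small probability of an
# astronomically large payoff (both numbers invented for illustration).
mugging = expected_value(1e-20, 1e30)   # 1e10 expected lives saved

# Trusting the multiplication alone says: hand over the five dollars.
print(ordinary < mugging)               # True
```

The disagreement is over what to do when the product comes out this way: trust the math, or treat the absurd conclusion as evidence that the probability estimate, the payoff estimate, or the decision rule itself is broken.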
Wow, these are very interesting examples!
Okay, for me the whole paradox breaks down to this:
I have limited brainpower and my hardware is corrupted. I am not able to solve all problems, and even where I believe I have a solution, I can’t trust myself. On the other hand, I should use all the intelligence I have, simply because there is no convincing argument that doing anything else would be better.
Using my reasoning to study my reasoning itself, and its biases, I can identify some typical failure modes: things I probably shouldn’t do even if they seem rational. Now I’m kinda meta-reasoning about where I should follow my reasoning and where I shouldn’t, and things are getting confusing, probably because I’m getting closer to the limits of my rationality. Still, there is no better way for me to act.
From the outside, this may look like a dozen random excuses. But there are no better solutions. So the socially savvy move is to shut up and pretend the whole topic doesn’t even exist. It doesn’t help solve the problem, but it helps save face: sweeping human irrationality under the rug instead of exposing it and then admitting that you, too, are only human.
Expounding at length on dust specks vs. torture, “shut up and multiply”, and “taking ideas seriously” is likely to make people look askance at you, even if you also add “… but don’t do anything weird, OK?” at the end.