There’s no real reason to avoid competition unless the competition happens to be threatening your life. Gambling, for instance, isn’t real competition. And simply not avoiding something isn’t the same as seeking it out: LessWrong doesn’t merely fail to compete properly, it actively avoids competition. Rationality isn’t fueled by neutrality, not in humans anyway. LessWrong is optimized to minimize conflict on the premise that conflict impedes rationality, ignoring the well-observed phenomenon of diverse viewpoints mutually benefiting each other precisely by trying to rip the other guy’s worldview apart. Being a good rationalist simply means that, at the end of the day, your opinion will make sense, whether you agree with anyone (your pre-debate self included) or not.
Rationality isn’t tested for quality in a laboratory by some competent rational agent; that makes no sense. It can only be tested in the field. It doesn’t matter how good your equations look on paper: if your device explodes and kills you, your device explodes and kills you. A good Bayesian takes this explosion hypothesis into account and isn’t swayed by delusions of having been right. Rationality is tested by combat with other rational agents on the world stage. There’s no law, conviction, or Bayesian measurement stating that Hawking or Einstein or Yudkowsky is qualified to conduct rationality measurements. If a child can find a flaw in your logic that none of your colleagues saw first, no matter how intelligent they are, then that child is thinking in a way that nobody on your team is thinking, and they have found a real flaw in your reasoning.
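To make the Bayesian point concrete (a minimal sketch with made-up numbers, purely for illustration): let $H$ be “my design reasoning was sound.” Suppose $P(H) = 0.9$ before the test, $P(\text{explosion} \mid H) = 0.01$, and $P(\text{explosion} \mid \neg H) = 0.5$. Then by Bayes’ theorem,

$$P(H \mid \text{explosion}) = \frac{P(\text{explosion} \mid H)\,P(H)}{P(\text{explosion} \mid H)\,P(H) + P(\text{explosion} \mid \neg H)\,P(\neg H)} = \frac{(0.01)(0.9)}{(0.01)(0.9) + (0.5)(0.1)} \approx 0.15.$$

The field result should crush the prior, however good the equations looked on paper; clinging to “but I was right on paper” is exactly the delusion a good Bayesian avoids.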
LessWrong isn’t fostering this process along; it actively prevents it. It is a community of like-minded people consulting each other in a “hive mind” sort of format, brainstorming, as it were. Anyone who is well upvoted is treated as part of a purely rational exo-thought process. This is about as useful as armchair reasoning all by your lonesome, just at the level of the group rather than the individual. It’s not that it’s unproductive; it’s simply a waste of what could be a finely tuned rationality-advancing system. Hope this makes sense to anyone else.
As far as losing sight of the goal goes, what kind of goalpost makes the question “How do you test rationality?” come to mind in the first place? Notice that this isn’t a question most people ever think to ask themselves.
I just love this. LessWrong is so blatantly self-conscious. I don’t get rebuttals when I make comments like this; I get downvotes, inferential silence, and the occasional request for clarification that just marks the inferential gap. If what I say is so obviously wrong (if I’m reading the inferential silence right), I have to wonder why nobody has really reached out to say so yet. It’s far easier to believe that a community of aspiring rationalists is self-conscious about its collective identity than that I’m simply failing to appreciate the beauty of the mechanism. I’d love to appreciate that beauty like everyone else, but I’m just too aware of the flaws.
Feel free to interpret this as a request to clear up the inferential silence, but it’s really not. LessWrong is self-conscious about its identity. I’m just stating my amusement.