Cue: Any time my brain goes into “explaining” mode rather than “thinking” (“discovering”) mode. These are rather distinct modes of mental activity and can be distinguished easily. “Explaining” is much more verbal and usually involves imagining a hypothetical audience, e.g. Anna Salamon or Less Wrong. When explaining I usually presume that my conclusion is correct and focus on optimizing the credibility and presentation of my arguments. “Actually thinking” is much more kinesthetic and “stressful” (in a not-particularly-negative sense of the word) and I feel a lot less certain about where I’m going. When in “explaining” mode (or, inversely, “skeptical” mode) my conceptual metaphors are also more visual: “I see where you’re going with that, but...” or “I don’t see how that is related to your earlier point about...”. Explaining produces rationalizations by default but this is usually okay as the “rationalizations” are cached results from previous periods of “actually thinking”; of course, oftentimes it’s introspectively unclear how much actual thought was put into reaching any given conclusion, and it’s easy to assume that any conclusion previously reached by my brain must be correct.
(Both of these are in some sense a form of “rationalization”: “thinking” being the rationalization of data primarily via forward propagation to any conclusionspace in a relatively wide set of possible conclusionspaces, “explaining” being the rationalization of narrow conclusionspaces primarily via backpropagation. But when people use the word “rationalization” they almost always mean the latter. I hope the processes actually are sufficiently distinct such that it’s not a bad idea to praise the former while demonizing the latter; I do have some trepidation over the whole “rationalization is bad” campaign.)
Yeah, that’s what it feels like to me, too.
Yo, people who categorically or near-categorically downvote my contributions: assuming you at least have good intentions, could you please exercise more context-sensitivity? I understand this would impose additional costs on your screening processes but I think the result would be fewer negative externalities in the form of subtly misinformed Less Wrong readers, e.g. especially in this case Anna Salamon, who is designing rationality practices and would benefit from relatively unbiased information to a greater extent than might be naively expected. Thanks for your consideration.
I don’t normally downvote your contributions (and indeed had just upvoted one), but I downvoted this one for whining about downvotes. (Especially as its parent is actually at +13 right now—maybe it was at −3 or something when you originally wrote the above, though.)
Anyone who categorically or near-categorically downvotes your contributions is unlikely to be swayed by a polite request for them not to do so.
Of course, but in this case it seemed deontologically necessary to request it anyway; I would feel guilty if I didn’t even make a token effort to keep people from being needlessly self-defeating. This happens to me all the time: “if it’s normally distributed then you should just straightforwardly optimize for the median outcome” versus “a heavy-tailed distribution is more accurate, or at least acts as a better proxy for accuracy, so we should optimize for rare but significant events in the tails”. I feel like the latter is often the case, but people systematically don’t see it and then predictably shoot their own feet off.
ETA: “To not forget scenarios consistent with the evidence, even at the cost of overweighting them; to prioritize low relative entropy over low expected absolute error, as a proxy for expected costs from error.”
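A minimal numerical sketch of that ETA (the four scenarios and their probabilities below are invented purely for illustration, not taken from the thread): relative entropy D(p||q) blows up when a model assigns near-zero probability to a scenario the true distribution still considers possible, whereas expected absolute error penalizes forgetting and overweighting roughly symmetrically, so the two criteria can rank the same pair of models in opposite orders.

```python
import numpy as np

# True distribution p over four scenarios; the last is rare but real.
# (All numbers are made up purely for illustration.)
p = np.array([0.70, 0.20, 0.09, 0.01])

# Model A effectively "forgets" the rare scenario (drives it to ~0);
# Model B overweights it instead.
q_forget     = np.array([0.710000, 0.200000, 0.089999, 0.000001])
q_overweight = np.array([0.65, 0.20, 0.09, 0.06])

def relative_entropy(p, q):
    """KL divergence D(p || q) in nats."""
    return float(np.sum(p * np.log(p / q)))

def expected_abs_error(p, q):
    """Total absolute difference between the two probability assignments."""
    return float(np.sum(np.abs(p - q)))

for label, q in [("forgets the tail", q_forget),
                 ("overweights the tail", q_overweight)]:
    print(f"{label:22s}  KL = {relative_entropy(p, q):.4f}   "
          f"abs error = {expected_abs_error(p, q):.4f}")

# The "forgetting" model wins on absolute error but loses on KL, because
# the term p_i * log(p_i / q_i) explodes as q_i -> 0 while p_i stays positive.
```

Under these made-up numbers, minimizing absolute error would prefer the model that forgot the rare scenario, while minimizing relative entropy prefers the one that merely overweighted it, which is the trade-off the quoted heuristic endorses.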
If you consider it important that certain contributions not be unfairly downvoted, and you consider it likely that making those contributions under your name will result in them being unfairly downvoted, it would seem to follow that you consider it important not to make those contributions under your name. No?
It does follow, but I might still take the lesser of two evils and post it anyway. It’s true that using a different name would have been strictly better; for some reason that idea hadn’t occurred to me. (Upvoted.) In retrospect I should have found the third option, but in practice when commenting on LW I normally already feel as if I’ve gone out of my way to take a third option, and if I kept on in that vein I would get paralyzed and super-stressed. Perhaps I should update once again towards thinking harder and more broadly, even at the cost of an even greater risk of paralysis.
Well, I endorse nonparalysis.
That said, sometimes thinking dilemmas through after I’ve made (and implemented) a decision and then, if I find a viable third option, noting it in my head so that it comes to mind more readily the next time I’m faced with a similar decision, can get me broad thinking and nonparalysis.
“When explaining I usually presume that my conclusion is correct and focus on optimizing the credibility and presentation of my arguments”
That’s because your sentences are badly formed.
As a debater (I know how much you guys hate debating and testing your theories), I have to do a lot of explaining, and if I include any flaws or fallacies from critical thinking (something else you don’t know about), then I know, and my opponent should know, that this is a mistake. So illuminating them helps to create a logical explanation. And that is our goal.
Too bad no one here seems to know diddly squat about critical thinking OR debating.
What brings you to Less Wrong, CriticalSteel2?