“Just being stupid” and “just doing the wrong thing” are rarely helpful views, because those errors are produced by specific bugs. Those bugs have pointers to how to fix them, whereas “just being stupid” doesn’t.
I think you should allow yourself in some situations to both believe “I should not smoke because it is bad for my health” and to continue smoking, because then you’ll flinch less.
I think this misses the point, and damages your “should” center. You want to get into a state where if you think “I should X,” then you do X. The set of beliefs that allows this is “Smoking is bad for my health,” “On net I think smoking is worth it,” and “I should do things that I think are on net worth doing.” (You can see how updating the first one from “Smoking isn’t that bad for my health” to its current state could flip the second belief, but that is determined by a trusted process instead of health getting an undeserved veto.)
“Just being stupid” and “just doing the wrong thing” are rarely helpful views, because those errors are produced by specific bugs. Those bugs have pointers to how to fix them, whereas “just being stupid” doesn’t.
I’m guessing you’re alluding to “Errors vs. Bugs and the End of Stupidity” here, which seems to have disappeared along with the rest of LiveJournal. Here’s the Google cached version, though.
“Just being stupid” and “just doing the wrong thing” are rarely helpful views
I agree. What I meant was something like:
If the OP describes a skill, then the first problem (the kid who wants to be a writer) is so easy to solve that I feel I’m not learning much about how that skill works. The second problem (Carol) seems too hard for me. I doubt it’s actually solvable using the described skill.
I think this misses the point, and damages your “should” center
Potentially, yes. I’m deliberately proposing something that might be a little dangerous. I feel my “should” center is already broken, and/or is doing more harm to me than I would be doing to it.
“Smoking is bad for my health,” “On net I think smoking is worth it,” and “I should do things that I think are on net worth doing.”
That’s definitely not good enough for me. I’ve never smoked in my life. I don’t think smoking is worth it. And if I were smoking, I don’t think I would stop just because I think it’s a net harm. And I do think that, because I wouldn’t want to think about the harm of smoking or the difficulty of quitting, I’d avoid learning about either of those.
ADDED: The first meaning of “I should-1 do X” is “a rational agent would do X”. The second meaning (idiosyncratic to me) of “I should-2 do X” is that “do X” is the advice I need to hear. Should-2 is based on my (mis)understanding of Consequentialist-Recommendation Consequentialism. The problem with should-1 is that I interpret “I should-1 do X” to mean that I should feel guilty if I don’t do X, which is definitely not helpful.
I’m guessing you’re alluding to “Errors vs. Bugs and the End of Stupidity” here … Here’s the Google cached version, though.
I was, and I couldn’t find it; thanks for doing that!