Interesting article. Here is the problem I have:
In the first example, “spelling ocean correctly” and “I’ll be a successful writer” clearly have nothing to do with each other, so they shouldn’t be in a bucket together and the kid is just being stupid. At least on first glance, that’s totally different from Carol’s situation. I’m tempted to say that “I should not try full force on the startup” and “there is a fatal flaw in the startup” should be in a bucket, because I believe “if there is a fatal flaw in the startup, I should not try it”. As long as I believe that, how can I separate these two and not flinch?
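As a toy sketch of the structure I’m worried about (assuming beliefs as boolean nodes, with the conditional I endorse wired in as a hard rewrite rule; the variable names are only illustrative):

```python
# Toy model: two belief nodes plus the one conditional I endorse.
beliefs = {"fatal_flaw_in_startup": None, "try_full_force": True}

def update(beliefs, node, value):
    beliefs[node] = value
    # The endorsed conditional: "if there is a fatal flaw in the startup,
    # I should not try it." Wired in as a hard rule, conceding the flaw
    # instantly revokes the plan -- which is why looking feels unsafe.
    if node == "fatal_flaw_in_startup" and value:
        beliefs["try_full_force"] = False

update(beliefs, "fatal_flaw_in_startup", True)
print(beliefs)  # {'fatal_flaw_in_startup': True, 'try_full_force': False}
```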
Do you think one should allow oneself to be less consistent in order to become more accurate? Suppose you are a smoker and you don’t want to look into the health risks of smoking, because you don’t want to quit. I think you should allow yourself in some situations to both believe “I should not smoke because it is bad for my health” and to continue smoking, because then you’ll flinch less. But I’m fuzzy on when. If you completely give up on having your actions be determined by your beliefs about what you should do, that seems obviously crazy, and there won’t be any reason to look into the health risks of smoking anyway.
Maybe you should model yourself as two people. One person is rationality. It’s responsible for determining what to believe and what to do. The other person is the one that queries rationality and acts on its recommendations. Since rationality is a consequentialist with integrity, it might not recommend quitting smoking, because then the other person would stop acting on its advice and stop giving it queries.
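A speculative sketch of that two-person setup (entirely my framing, with an invented threshold knob standing in for the advisor’s judgment):

```python
def rationality(recommendation, p_followed, threshold=0.5):
    # A consequentialist advisor with integrity: if the advice would just
    # be ignored (and cost it future queries), it stays silent instead.
    # `threshold` is an invented knob, not anything from the thread.
    if p_followed < threshold:
        return None  # no recommendation; the querying relationship survives
    return recommendation

# The other person queries rationality and acts on whatever comes back.
print(rationality("quit smoking", p_followed=0.1))   # None
print(rationality("exercise more", p_followed=0.9))  # 'exercise more'
```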
“Just being stupid” and “just doing the wrong thing” are rarely helpful views, because those errors are produced by specific bugs. Those bugs have pointers to how to fix them, whereas “just being stupid” doesn’t.
I think you should allow yourself in some situations to both believe “I should not smoke because it is bad for my health” and to continue smoking, because then you’ll flinch less.
I think this misses the point, and damages your “should” center. You want to get into a state where if you think “I should X,” then you do X. The set of beliefs that allows this is “Smoking is bad for my health,” “On net I think smoking is worth it,” and “I should do things that I think are on net worth doing.” (You can see how updating the first one from “Smoking isn’t that bad for my health” to its current state could flip the second belief, but that is determined by a trusted process instead of health getting an undeserved veto.)
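As a rough numeric sketch of how that flip works (all the numbers are invented just to show where the update lands; nothing in the thread pins them down):

```python
enjoyment = 5                  # perceived benefit of smoking (invented)
health_cost_old = 3            # under "smoking isn't that bad for my health"
health_cost_updated = 9        # after honestly looking at the evidence

def worth_it(enjoyment, health_cost):
    # "I should do things that I think are on net worth doing."
    return enjoyment - health_cost > 0

print(worth_it(enjoyment, health_cost_old))      # True  -> keep smoking
print(worth_it(enjoyment, health_cost_updated))  # False -> now "I should quit"
# The flip is produced inside the net evaluation (the trusted process);
# the health belief never gets a direct veto over the action.
```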
“Just being stupid” and “just doing the wrong thing” are rarely helpful views, because those errors are produced by specific bugs. Those bugs have pointers to how to fix them, whereas “just being stupid” doesn’t.
I’m guessing you’re alluding to “Errors vs. Bugs and the End of Stupidity” here, which seems to have disappeared along with the rest of LiveJournal. Here’s the Google cached version, though.
I was, and I couldn’t find it; thanks for doing that!
“Just being stupid” and “just doing the wrong thing” are rarely helpful views
I agree. What I meant was something like:
If the OP describes a skill, then the first problem (the kid who wants to be a writer) is so easy to solve that I feel I’m not learning much about how that skill works. The second problem (Carol) seems too hard for me. I doubt it’s actually solvable using the described skill.
I think this misses the point, and damages your “should” center
Potentially, yes. I’m deliberately proposing something that might be a little dangerous. I feel my should center is already broken and/or doing me more harm than good.
“Smoking is bad for my health,” “On net I think smoking is worth it,” and “I should do things that I think are on net worth doing.”
That’s definitely not good enough for me. I never smoked in my life. I don’t think smoking is worth it. And if I were smoking, I don’t think I would stop just because I think it’s a net harm. And I do think that, because I don’t want to think about the harm of smoking or the difficulty of quitting, I’d avoid learning about either of those two.
ADDED: The first meaning of “I should-1 do X” is “a rational agent would do X”. The second meaning (idiosyncratic to me) of “I should-2 do X” is that “do X” is the advice I need to hear. should-2 is based on my (mis)understanding of Consequentialist-Recommendation Consequentialism. The problem with should-1 is that I interpret “I should-1 do X” to mean that I should feel guilty if I don’t do X, which is definitely not helpful.
In the first example, “spelling ocean correctly” and “I’ll be a successful writer” clearly have nothing to do with each other,
If you think that successful writers are talented, and that talent means fewer misspellings, then misspelling things is evidence that you’re not going to be a successful writer. (No, I don’t think this is a very plausible model, but it’s one that I’d imagine could be plausible to a kid with a fixed mindset who didn’t yet know what really distinguishes good writers from bad ones.)
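Putting made-up numbers on that model, a quick Bayes sketch (every figure is invented purely for illustration):

```python
p_success = 0.10                 # prior that the kid becomes a successful writer
p_misspell_given_success = 0.20  # kid's model: talent means fewer misspellings
p_misspell_given_failure = 0.40

posterior = (p_misspell_given_success * p_success) / (
    p_misspell_given_success * p_success
    + p_misspell_given_failure * (1 - p_success)
)
print(round(posterior, 3))  # 0.053 -- one misspelling halves the odds, no more
# Even inside the kid's own (implausible) model the evidence only nudges;
# reading it as a final verdict on the career is the bucket error.
```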