I appreciate you pointing this out. I’m not sure if you’re already saying this or not, but IMO we on LW should work hard (on LW, at least) not to promote beliefs that are meant to be useful, as though they are meant to be true. Otherwise, we’ll get into a muddle where moralism / desire not to harm others makes it difficult to acquire and share true observations about the world.
E.g., maybe I’ll be afraid to say “my anonymous friend Bob seems to me to work exceedingly hard, and exceedingly effectively, while being very unhappy” lest I retraumatize people or make their antidotes ineffective.
A proposed fix to your “counterbalancing beliefs”: call them “heuristics” or “questions-to-oneself,” and phrase them as questions rather than truth-claims. E.g.:
1′. If it hurts, is there some way the specifics of the pain/tiredness can lead me to notice wasted effort / improvable form?
2′. Are there ways I can let go of some of the pain/tiredness? If I was really trying here, might I be happier?
I do personally get mileage from questions like 1′ and 2′. I think the thing you’re after with the antidotes (whose spirit I appreciate) is to make sure that we don’t preferentially look for ways to be more effective that cause pain (rather than ways to be more effective that relieve pain, or that are neutral on the pain dimension). So we can look for the search strategies directly.
(Also, thanks for the post! Some good discussion on a tricky and important topic, IMO.)
Thanks for opening this discussion! I think this conversation is hard because I was trying to talk at several levels at once, and not even consciously aware of it. Let me first explain with an analogy what was going on in my head when I wrote the previous reply.
Imagine I am writing a math paper, and the most important thing to me is that the main theorem is true. What I received from Richard was information that one of the supporting lemmas [1. if it hurts, you’re probably doing it wrong] in the paper is false or at least the proof was insufficient. I also received an implication that said lemma is not just a lemma but also one of the main theorems in the paper. My instinct in this position is to ditch the lemma and make sure the actual main theorem is on solid footing as the first course of action. In doing so, I argued that I only need a much weaker form of the lemma to prove the theorem, e.g. instead of a Central Limit Theorem I only needed to apply Markov’s Inequality.
I think where I went wrong and raised rationalist red flags is that the way I made this argument: (a) makes it seem like I don’t believe in the strong form of the lemma and am intentionally stating false observations for instrumental reasons, and (b) also looks like a conversation-stopper, as though I’m not willing to investigate certain truth claims on their own.
Neither of these is true. At least at the time of posting I was moderately confident about both lemmas as truth claims (up to poetic embellishment). After ChristianKL’s comment about soreness I don’t endorse the [1. if it hurts, you’re probably doing it wrong] statement any more; that seems like motivated blindness on my part. I will think about replacing it with the [1′] statement you proposed instead, although I feel some aversion to making deep edits of already published posts. I still essentially endorse [2. You’re not trying your best if you’re not happy.] and am very much open to discussing the truth value of this statement, once there is shared understanding that the main theorem does not depend on it.
I take some responsibility for my original point being misinterpreted, because it was phrased in an unnecessarily confrontational way. Sorry about that.
I think where I went wrong and raised rationalist red flags is that the way I made this argument: (a) makes it seem like I don’t believe in the strong form of the lemma and am intentionally stating false observations for instrumental reasons.
I think this falls on a spectrum of epistemic rigour. The good end involves treating instrumentally useful observations with the same level of scrutiny as instrumentally anti-useful observations (or even more, to counteract bias). The bad end involves intentionally saying things known to be false, because they are instrumentally useful. I interpret you as doing something in the middle, which I’d describe as: applying lower epistemic standards to instrumentally useful claims, and exaggerating them to make them more instrumentally useful.
To be clear, I don’t think it’s a particularly big deal, because I expect most people to have defensive filters that prevent them from taking these types of motivational sayings too seriously. However, this post has been very highly upvoted, which makes me a bit more concerned that people will start treating your two antidotes as received knowledge—especially given my background beliefs about this being a common mistake on LW. Hence why I pushed back on it.
Moving to the object-level claims: I accept that the main point you’re making doesn’t depend on the truth of the antidotes. I’ve already critiqued #1, but #2 also seems false to me. Consider someone who’s very depressed, and also trying very hard to become less depressed. Are they “not trying their best”? Or someone who is working a miserable minimum-wage job while putting themselves through university and looking after children? Is there always going to be a magic bullet that solves these problems and makes them happy, apart from gritting their teeth and getting through it?
I tentatively accept the applicability of this claim to the restricted domain of people who are physically/mentally healthy, economically/socially privileged, and focused on their long-term impact. Since I’m in that category, it may well be useful for me, so I’ll try to think about it more; thanks for raising the argument.
Thank you for extending more charity than you did previously. It was hard for me to respond fairly when you made arguments like “Well actually, you shouldn’t use the word ‘never’ because no probability is literally zero.”
I honestly don’t think point [2. You’re not trying your best if you’re not happy.] is specified well enough to be confused with an interesting truth claim, as opposed to a helpful heuristic. For example, one can easily make the case that no human is “trying their best,” and therefore the statement is trivially true, because a true statement is implied by anything. I think the most reasonable way to interpret the sentence is “nobody is trying their best, and happiness is a particularly high-ROI dimension along which to notice this.”
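To spell out the logical point (using hypothetical predicates H(x) for “x is happy” and T(x) for “x is trying their best,” which are my own shorthand, not notation from the post): claim [2] is the implication ¬H(x) → ¬T(x), and any implication whose consequent always holds is itself true, regardless of the antecedent:

```latex
% If nobody is trying their best, the consequent of [2] is always true,
% so the implication holds for every x no matter what H(x) is:
\forall x.\; \neg T(x) \;\models\; \forall x.\; \bigl(\neg H(x) \to \neg T(x)\bigr)
```

This is why, read literally, the statement carries no information beyond “nobody is trying their best”; the heuristic reading has to do the real work.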
I tentatively accept the applicability of this claim to the restricted domain of people who are physically/mentally healthy, economically/socially privileged, and focused on their long-term impact. Since I’m in that category, it may well be useful for me, so I’ll try to think about it more; thanks for raising the argument.
Given that you believe this part, I suspect that we pretty much agree on every claim about the territory. If you think so too, then I think there’s not much left to dispute other than word choice.
This is the second time you’ve (inaccurately) accused me of something, while simultaneously doing that thing yourself.
In the first case, I quoted a specific claim from your post and argued that it wasn’t well-supported and, interpreted as a statement of fact, was false. In response, you accused me of “rounding off a specific technical claim to the nearest idea they’ve heard before”, and then rounded off my criticism to a misunderstanding of the overall post.
Here, I asked “what justifies claims like [claim you made]?” The essence of my criticism was that you’d made a bold claim while providing approximately zero evidence for it. You accused me of being uncharitable because I highlighted the “never” part in particular, which you interpreted as me taking you totally literally. But this is itself rather uncharitable, because in fact I’m also uninterested in whether “the probability is literally zero”, and was just trying to highlight that you’d made a strong claim which demands correspondingly strong evidence. If you’d written “almost never” or “very rarely”, I would have responded in approximately the same way: “Almost never? Based on what?” In other words, I was happy to use “never” in whatever sense you intended it, but you then did exactly what you criticised me for, and jumped to a “literally zero” interpretation.
I would suggest being more restrained with such criticisms in the future.
In any case, it’s not unreasonable for you to make a substantive part of your post about “useful heuristics” (even though you do propose them as “beliefs”). It’s not the best, epistemically, but there’s plenty of space in an intellectual ecosystem for memorable, instrumentally useful blog posts. The main problem, from my point of view, is that Less Wrong still seems to think that insight porn is the unit of progress, as judged by engagement and upvotes. You get what you reward, and I wish our reward mechanism were more aligned. But this is a community-level issue, which means your post may be interpreted in ways that you didn’t necessarily intend, so it’s probably not too useful for me to continue criticising it (even though I think we do have further territory-level disagreements—e.g. I agree with your statement about happiness, but would also say “nobody is trying their best, and not feeling enough pain is a particularly high-ROI dimension along which to notice this”, which I expect you’d disagree with).