I think the replies to this and the previous post have surprised me in how much even LessWrong readers are capable of rounding off a specific technical claim to the nearest idea they’ve heard before. Let me attempt just once to state what the thesis is again.
I am not saying that effort should never be painful. I am also not denying that many useful interventions are painful. I am specifically saying that when you measure effort in units of pain this systematically leads to really bad places (and also that a lot of people are doing this). For example, you will discount forms of effort that are pleasant even if they are more effective.
“If it hurts, you’re probably doing it wrong.” This is just an assertion from an analogy with sports, where even the analogy is false—elite athletes put themselves through a ridiculous amount of pain.
Most amateur and intermediate athletes are doing something wrong if it hurts. “Most amateur and intermediate athletes” is a much larger piece of probability space than “elite athletes.”
I’m not quite sure I see how this is evidence for your point? Most entrepreneurs fail. It’s possible that they fail because they can’t handle enough pain. It’s also possible that they fail because they Goodharted on a terrible heuristic and lost all the free energy human brains need to innovate. As someone in mathematics research, which I thought would be filled with staring at a wall and banging your head against it until I actually tried it, my gut leans towards the latter.
I am specifically saying that when you measure effort in units of pain this systematically leads to really bad places.
I think this is probably a useful insight, and seems to have resonated with quite a few people.
I’m specifically disputing your further conclusion that people in general should believe: “if it hurts, you’re probably doing it wrong” (and also “You’re not trying your best if you’re not happy.”). In fact, these are quite different from the original claim, and also broader than it, which is why they seem like overstatements to me.
I’m reminded of Buck’s argument that it’s much easier to determine that other people are wrong, than to be right yourself. In this case, even though I buy the criticism of the existing heuristic, proposing new heuristics is a difficult endeavour. Yet you present them as if they follow directly from your original insight. I mean, what justifies claims like “in practice [trading off happiness for short bursts of productivity] is never worth it”? Like, never? Based on what?
I get that this is an occupational hazard of writing posts that are meant to be very motivational/call-to-arms type posts. But you can motivate people without making blanket assertions about how to think about all domains. This seems particularly important on Less Wrong, where there’s a common problem that content of the category “interesting speculation about psychology and society, where I have no way of knowing if it’s true” is interpreted as solid intellectual progress.
I’m specifically disputing your further conclusion that people in general should believe: “if it hurts, you’re probably doing it wrong” (and also “You’re not trying your best if you’re not happy.”). In fact, these are quite different from the original claim, and also broader than it, which is why they seem like overstatements to me.
You are correct that the further “conclusions” are definitely on weaker epistemic grounds than the original claim. They are more of “attempts to propose solutions” than “confident assertions based on models.” I tried to be clear about this in the text, but I probably wasn’t.
Actually, there’s something else here, and it might fall into the territory of Dark Arts. On reflection, the two proposed statements “if it hurts, you’re probably doing it wrong” and “You’re not trying your best if you’re not happy” are not primarily truth claims at all. They are primarily sequences of words that are supposed to trigger mental moves to pull you out of the damaging belief “pain is the unit of effort.” To the extent that these claims are literally true, that is only instrumentally helpful for the effectiveness of the mental move. But for the purposes of this post it’s only necessary that they feel true often enough to get people noticing confusion. They are called “antidotes” and “proposed counterbalancing beliefs.” It may not be helpful to take an antidote if you’re not poisoned in the first place.
Determining when people are wrong is already useful intellectual progress. Figuring out what the correct heuristic is is much harder and not something I seriously attempted in the body of the post. (If I had to take a guess right now, the correct answer is that effort as a concept is more trouble than it’s worth in creative domains. Tracking it leads to people doing superstitious things “to be a certain kind of person,” whereas most things should just be tracked by results.)
I appreciate you pointing this out. I’m not sure if you’re already saying this or not, but IMO we on LW should work hard (on LW, at least) not to promote beliefs that are meant to be useful, as though they are meant to be true. Otherwise, we’ll get into a muddle where moralism / desire not to harm others makes it difficult to acquire and share true observations about the world.
E.g., maybe I’ll be afraid to say “my anonymous friend Bob seems to me to work exceedingly hard, and exceedingly effectively, while being very unhappy” lest I retraumatize people or make their antidotes ineffective.
A proposed fix to your “counterbalancing beliefs”: call them “heuristics” or “questions-to-oneself,” and phrase them as questions rather than truth-claims. E.g.:
1′. If it hurts, is there some way the specifics of the pain/tiredness can lead me to notice wasted effort / improvable form?
2′. Are there ways I can let go of some of the pain/tiredness? If I was really trying here, might I be happier?
I do personally get mileage from questions like 1′ and 2′. I think the thing you’re after with the antidotes (whose spirit I appreciate) is to make sure that we don’t preferentially look for ways to be more effective that cause pain (rather than ways to be more effective that relieve pain, or that are neutral on the pain dimension). So we can look for the search strategies directly.
(Also, thanks for the post! Some good discussion on a tricky and important topic, IMO.)
Thanks for opening this discussion! I think this conversation is hard because I was trying to talk at several levels at once, and not even consciously aware of it. Let me first explain with an analogy what was going on in my head when I wrote the previous reply.
Imagine I am writing a math paper, and the most important thing to me is that the main theorem is true. What I received from Richard was information that one of the supporting lemmas [1. if it hurts, you’re probably doing it wrong] in the paper is false or at least the proof was insufficient. I also received an implication that said lemma is not just a lemma but also one of the main theorems in the paper. My instinct in this position is to ditch the lemma and make sure the actual main theorem is on solid footing as the first course of action. In doing so, I argued that I only need a much weaker form of the lemma to prove the theorem, e.g. instead of a Central Limit Theorem I only needed to apply Markov’s Inequality.
I think where I went wrong and raised rationalist red flags is that the way I make this argument: (a) makes it seem like I don’t believe in the strong form of the lemma and am intentionally stating false observations for instrumental reasons, and (b) also looks like a conversation-stopper that I’m not willing to investigate certain truth claims on their own.
Neither of these are true. At least at the time of posting I was moderately confident about both lemmas as truth claims (up to poetic embellishment). After ChristianKL’s comment about soreness I don’t endorse the [1. if it hurts, you’re probably doing it wrong] statement any more; that seems like motivated blindness on my part. I will think about replacing it with the [1′] statement you proposed instead, although I feel some aversion to making deep edits of already published posts. I still essentially endorse [2. You’re not trying your best if you’re not happy.] and am very much open to discussing the truth value of this statement, once there is shared understanding that the main theorem does not depend on it.
I take some responsibility for my original point being misinterpreted, because it was phrased in an unnecessarily confrontational way. Sorry about that.
I think where I went wrong and raised rationalist red flags is that the way I make this argument: (a) makes it seem like I don’t believe in the strong form of the lemma and am intentionally stating false observations for instrumental reasons.
I think this falls on a spectrum of epistemic rigour. The good end involves treating instrumentally useful observations with the same level of scrutiny as instrumentally anti-useful observations (or even more, to counteract bias). The bad end involves intentionally saying things known to be false, because they are instrumentally useful. I interpret you as doing something in the middle, which I’d describe as: applying lower epistemic standards to instrumentally useful claims, and exaggerating them to make them more instrumentally useful.
To be clear, I don’t think it’s a particularly big deal, because I expect most people to have defensive filters that prevent them from taking these types of motivational sayings too seriously. However, this post has been very highly upvoted, which makes me a bit more concerned that people will start treating your two antidotes as received knowledge—especially given my background beliefs about this being a common mistake on LW. Hence why I pushed back on it.
Moving to the object level claims: I accept that the main point you’re making doesn’t depend on the truth of the antidotes. I’ve already critiqued #1, but #2 also seems false to me. Consider someone who’s very depressed, and also trying very hard to become less depressed. Are they “not trying their best”? Or someone who is working a miserable minimum-wage job while putting themselves through university and looking after children? Is there always going to be a magic bullet that solves these problems and makes them happy, apart from gritting their teeth and getting through it?
I tentatively accept the applicability of this claim to the restricted domain of people who are physically/mentally healthy, economically/socially privileged and focused on their long-term impact. Since I’m in that category, it may well be useful for me actually, so I’ll try to think about it more; thanks for raising the argument.
Thank you for extending more charity than you did previously. It was hard for me to respond fairly when you made arguments like “Well actually, you shouldn’t use the word ‘never’ because no probability is literally zero.”
I honestly don’t think point [2. You’re not trying your best if you’re not happy.] is specified well enough to be confused with an interesting truth claim, as opposed to a helpful heuristic. For example, one can easily make the case that no human is “trying their best,” and therefore the statement is vacuously true because a true statement is implied by anything. I think the most reasonable way to interpret the sentence is “nobody is trying their best, and happiness is a particularly high ROI dimension along which to notice this.”
I tentatively accept the applicability of this claim to the restricted domain of people who are physically/mentally healthy, economically/socially privileged and focused on their long-term impact. Since I’m in that category, it may well be useful for me actually, so I’ll try to think about it more; thanks for raising the argument.
Given that you believe this part, I suspect that we pretty much agree on every claim about the territory. If you think so too, then I think there’s not much left to dispute other than word choice.
This is the second time you’ve (inaccurately) accused me of something, while simultaneously doing that thing yourself.
In the first case, I quoted a specific claim from your post and argued that it wasn’t well-supported and, interpreted as a statement of fact, was false. In response, you accused me of “rounding off a specific technical claim to the nearest idea they’ve heard before”, and then rounded off my criticism to a misunderstanding of the overall post.
Here, I asked “what justifies claims like [claim you made]”? The essence of my criticism was that you’d made a bold claim while providing approximately zero evidence for it. You accuse me of being uncharitable because I highlighted the “never” part in particular, which you interpreted as me taking you totally literally. But this is itself rather uncharitable, because in fact I’m also uninterested in whether “the probability is literally zero”, and was just trying to highlight that you’d made a strong claim which demands correspondingly strong evidence. If you’d written “almost never” or “very rarely”, I would have responded in approximately the same way: “Almost never? Based on what?” In other words, I was happy to use “never” in whatever sense you intended it, but you then did exactly what you criticised me for, and jumped to a “literally zero” interpretation.
I would suggest being more restrained with such criticisms in the future.
In any case, it’s not unreasonable for you to make a substantive part of your post about “useful heuristics” (even though you do propose them as “beliefs”). It’s not the best, epistemically, but there’s plenty of space in an intellectual ecosystem for memorable, instrumentally useful blog posts. The main problem, from my point of view, is that Less Wrong still seems to think that insight porn is the unit of progress, as judged by engagement and upvotes. You get what you reward, and I wish our reward mechanism were more aligned. But this is a community-level issue which means your post may be interpreted in ways that you didn’t necessarily intend, so it’s probably not too useful for me to continue criticising it (even though I think we do have further territory-level disagreements—e.g. I agree with your statement about happiness, but would also say “nobody is trying their best, and not feeling enough pain is a particularly high ROI dimension along which to notice this”, which I expect you’d disagree with).
Thank you for this comment; I was happy death spiralling around the original post to the point of starting to wonder if I should quit my job and this abruptly snapped me out of it.
I read the whole article and didn’t really get a click, but this version of it somehow clicked for me:
I am not saying that effort should never be painful. I am also not denying that many useful interventions are painful. I am specifically saying that when you measure effort in units of pain this systematically leads to really bad places (and also that a lot of people are doing this).
The specific thing I realized was that this might be almost THE SAME as the reasoning error where people think that “objects which cost large amounts” probably have “high quality and usability”.
There are various ways to make it so that all the really extra specially good objects in a class of objects tend to have the highest prices, and equivalently for the low end of the product range, leading to a pretty clean correlation between the price (which is a cost, which is bad) and the value (which is the usability, which is good).
This sort of correlation can arise in a reasonably efficient market with heterogeneously capitalized people so that rich ones have money to waste on durability or style or whatever. Or it could just be that there are people with variance in their instrumental utility for the objects (perhaps based on aiming at different goals of greater or less import)? I think the correlation always requires the pre-existence of agents engaged in means/ends reasoning who accept the cost because the value is worth it, and so each purchase is something such agents are reasonably sure will create consumer surplus for them.
There’s a related joke about economists knowing “the price of everything and the value of nothing”.
The click I had was that maybe in the same way that some people conflate “price” and “value” into something like “worth”, there might also be people who conflate “painful effort” and “creative success” into something like “trying”, and such people might get more “agentic surplus” by making very goddamn sure that they never optimize for pain directly, due to a motivational/conceptual conflation of “pain” with “trying that leads to success”.
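The price/value conflation can be made concrete with a toy simulation (the market model and every number here are invented purely for illustration): in a market where price loosely tracks value, an agent who maximizes consumer surplus directly does better than an agent who treats the price itself as the thing to maximize, even though both are exploiting the same price–value correlation.

```python
import random

random.seed(0)

def make_items(n=100):
    # Invented toy market: value drives price with noise, so price and
    # value end up positively correlated across the product range.
    items = []
    for _ in range(n):
        value = random.uniform(0, 10)
        price = 0.7 * value + random.uniform(0, 3)
        items.append((value, price))
    return items

def surplus(item):
    value, price = item
    return value - price  # what a means/ends reasoner actually cares about

def pick_by_surplus(items):
    return max(items, key=surplus)

def pick_by_price(items):
    # Conflates the cost with the value: "expensive, therefore good."
    return max(items, key=lambda it: it[1])

trials = 1000
total_direct = 0.0
total_proxy = 0.0
for _ in range(trials):
    items = make_items()
    total_direct += surplus(pick_by_surplus(items))
    total_proxy += surplus(pick_by_price(items))

mean_direct = total_direct / trials
mean_proxy = total_proxy / trials
print(f"optimizing surplus directly: {mean_direct:.2f}")
print(f"optimizing the price proxy:  {mean_proxy:.2f}")
```

Both agents ride the same correlation, but the proxy-optimizer bleeds surplus in exactly the cases where price and value come apart, which is the same shape as losing “agentic surplus” by optimizing for pain directly.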
This goes counter to the general NHS advice for exercise, “Why do I feel pain after exercise?”:

IME ‘soreness’ is pretty different to ‘pain’. [EDIT: although I don’t know if this is just different because my brain knows that DOMS is fine but knee pain during squats is not]
There are plenty of different sensations that people sometimes call pain. Being able to distinguish those sensations and treat them differently is useful.
Not counting the sensation that comes with soreness as pain is an option, but it’s not how everyone uses the word pain, and people without much experience of it especially will lump it into the general pain cluster when they encounter it.
Thank you for pointing this out.

My initial reaction was that “soreness” doesn’t count as pain within the context of the post because it’s not as immediate, but I couldn’t come up with a principled reason for doing this gerrymandering. I no longer endorse point 1 (If it hurts, you’re probably doing it wrong) in the form stated and will think about how to reflect that in the post.
Small correction: DOMS is a distinct phenomenon from pain during exercise, which usually means that you are doing something wrong and may be injuring yourself.
DOMS occurs 1-2 days after exercise, as mentioned in the NHS quote.