What’s the model whereby a LessWrong post ought to have a “takeaway message” or “call to action”?
If an argument/explanation elucidates the structure of reality in ways that are important for understanding a class of things of which the conclusion is a member, then we can’t summarize the value with the conclusion! If it’s not important for that sort of understanding, then it’s just a soldier.
It reads to me like you’re complaining that Zvi’s post is insufficiently mindkilled and therefore confusing. I’m perplexed by this; you’ve written a lot on LessWrong that’s been helpful and insightful without a clear “takeaway” or single specific action implied, e.g., on decision theory.
Zvi’s post seems like it’s in the analysis genre, where an existing commonly represented story about right action is critiqued. Pointing out common obvious mistakes, and trying to explain them and distinguish them from nearby unmistaken stories, is really important for deconfusion.
What’s the model whereby a LessWrong post ought to have a “takeaway message” or “call to action”?
I was trying to figure out what “Let us all be wise enough to aim higher.” was intended to mean. It seemed like it was either a “takeaway message” (e.g., suggestion or call to action), or an applause light (i.e., something that doesn’t mean anything but sounds good), hence my question.
Zvi’s post seems like it’s in the analysis genre, where an existing commonly represented story about right action is critiqued.
I guess the last sentence threw me, since it seems out of place in the analysis genre?
I also see, looking back on it now, that this was kind of supposed to be a call for literally any action whatsoever, as opposed to striving to take as little action as possible. Or at least, I can read it like that quite easily: one needs to not strive to be the ‘perfect’ person, in the sense of someone who never did anything actively wrong.
Which would then be the most call-to-action of all the calls-to-action, since it is literally a Call To Action.
So, yeah. There’s that. In terms of what I was thinking at the time, I’ll quote my comment above:
But this is also of the type of thing that I do when I’m analyzing my gameplay choices after a match of Magic, where I come up with all sorts of explanations and deep lines of possibility and consideration that were never in my conscious analysis at the time. At the time it was more something like: this needs a conclusion, I’ve shown the problems with this thing, this seems like a way to wrap things up and maybe get people to think about doing the thing less and spotting/discounting it more, which would be good.
Your reaction points out a way this could be bad. By taking a call-for-clarity piece, and finishing it with a sentence that implies one might want to take action of some kind, one potentially makes a reader classify the whole thing as a call-to-action. Which is natural, since the default is to assume calls-for-clarity are failed calls-for-action, because who would bother calling for clarity? Doesn’t seem worth one’s time.
Which means that such things might indeed be quite bad, and to be avoided. If people end up going ‘oh, I’m being asked to do less X’ and therefore forget about the model of X being presented, that’s a big loss.
The cost is twofold, then:
1. It becomes harder to form a good ending. You can’t just delete that line without substituting another ending.
2. If we can’t put an incidental call to implied action into an analysis piece as its payoff, then the concrete steps the analysis suggests won’t get taken. People might think ‘this is interesting’ but not know what to do with it, and thus discard the presented model as unworthy of their brain space.
Which means this gets pretty muddled and it’s not obvious which way this should go.
It becomes harder to form a good ending. You can’t just delete that line without substituting another ending.
My main complaint was that I just couldn’t tell what you were trying to say with the current ending. If you’re open to suggestions, I’d replace the last few lines with something like this instead:
If this analysis is correct, it suggests that we should avoid using asymmetric mental point systems except where structurally necessary. For example, the next time you’re in situation …, consider doing … instead of …
ETA:
If we can’t put an incidental call to implied action into an analysis piece as its payoff, then the concrete steps the analysis suggests won’t get taken. People might think ‘this is interesting’ but not know what to do with it, and thus discard the presented model as unworthy of their brain space.
It’s not clear to me whether you’re saying A) people ought to keep this model in their brain even if there were no practical implication, but in practice they won’t, so you had to give them one, or B) it’s reasonable to demand a practical implication before making space for a model in one’s brain, which is why you included one in the post. If the latter, it seems like the practical implication / call to action shouldn’t just be incidental; significant space should be devoted to spelling it out clearly and backing it up with analysis/argument, so that if it were wrong it could be critiqued (which would allow people to discard the model after all).
Good point—that rhetorical flourish implies a call to action when there isn’t one.