Sorry, I didn’t realize that you’d dislike that suggestion as well. I assumed that it was primarily the suggestion of shortening the post that you were unhappy with, since the introduction section already kind of says the same thing as the proposed paragraph and I was only suggesting saying it with slightly more emphasis.
“what if these suggestions were terrible? Like, what if Omega came down and told me ‘Duncan was right, your version is objectively and meaningfully worse, those changes caused problems’ … what model would I produce, as a result, trying to explain what was going on?”
I’m trying to think about it, but finding it hard to answer, since to me moving that paragraph to an earlier point seems like a very minor change. One thought that comes to mind is “it would change people’s first impression of the post” (after all, changing people’s first impression of the length of the post is what the change was intended to achieve)… presumably in a worse way somehow? Maybe make them take the post less seriously in some sense? But I suspect that’s not what you have in mind.
It would be helpful to get a hint of the kind of axis on which the post would become worse. Like, is it something that directly affects some property of the post itself, such as its persuasiveness or impact? Or is this about some more indirect effect, like giving in to some undesirable set of norms (that’s what your mention of the Twitter mob implies to me)?
It’s more the latter; I think that it further reinforces a sense of something like “people should have to put forth zero effort; whatever it takes to get reader buy-in, no matter how silly; if your post isn’t bending over backwards to smooth the transition from [haven’t read] to [read] it’s automatically unstrategic (as opposed to maybe those readers just aren’t part of the audience),” etc.
Literally the first paragraph of the post is like, “this is mostly about a short list.” The sort of reader who sees “43 min” on a LessWrong post and then is so deterred that they don’t even read the first paragraph feels already lost to me, and going further in the direction of accommodating them (I already weakened the post substantially on behalf of the tl;dr crowd; this is already WAY capitulating) seems bad not only for the specific post but also for, like, sending the implicit social signal that yes, your terrorism is working, please continue leaning on the incentive gradient that makes it hard to take [an audience who actually gives a crap and doesn’t need to be infinitely “sold” on every little thing] for granted.
Putting a soothing “don’t worry, this is actually short, you don’t have to read something big and scary if you don’t want to!” message as literally the first line of the post sends a strong message that I Do Not Want To Send; people should just not read it if they don’t want to and my reassurances and endorsements shouldn’t be necessary.
This is why the zeroth guideline is “expect to have to put in a little work some of the time;” in the future I’ll answer such questions by linking to it but it’s a bit circular in this case when people have already demonstrated that they’re loath to even read that far.
Thanks!
If I were to rephrase this in my own words, it’d be something like:
“There’s a kind of expectation/behavior on some people’s behalf, where they get unhappy with any content that requires them to put in effort in order to get value out of it. These people tend to push their demand to others, so that others need to contort to meet the demand and rewrite everything to require no effort on the reader’s behalf. This is harmful because optimizing one variable requires sacrifices with regard to other variables, so content that gives in to the demand is necessarily worse than content that’s not optimized for zero effort. (Also there’s quite a bit of content that just can’t be communicated at all if you insist that the reader needs to spend zero effort on it. Some ideas intrinsically require an investment of effort to understand in the first place.)
The more that posts are written in a way that gives in to these demands, the more it signals that these demands are justified and make sense. That then further strengthens those demands and makes it ever harder to resist them in other contexts.”
Ideally I’d pause here to check for your agreement with this summary, but if I were to pause here it’d be quite possible that I’d wander off and never get around to answering your earlier prompt. So I’ll just answer on the assumption that this is close enough.
So, if Omega came to me and told me that making the change would actually make things worse, what would my model be?
Well, I’d definitely be surprised. My own model doesn’t quite agree with the above paraphrase—for one, I was one of the people who didn’t read the introduction properly, and I don’t think that I’m demanding everyone rewrite their content so as to require zero effort to read.
That said, a steelman of the paraphrase probably shouldn’t assume that all such people require all content to require literally zero effort. There can still be an underlying tendency for people to wish that they were presented with content that required less effort in general. So even if I might correctly object “hey, I don’t actually expect all content to require literally zero effort from me”, it might still be the case that I’m more impatient than I would be in a counterfactual world where I wasn’t influenced by the social forces pushing for more impatience.
Now that I think of it, I’m pretty certain that that’s actually indeed the case.
Hmm.
Another objection I had to the paraphrased model was that the forces pushing in the direction of impatience are just too strong to make an impact on. But while that might be the case globally, it doesn’t need to be the case locally. Even if a social incentive wasn’t strong enough to take root in the world as a whole, it could take root in rationalist spaces. And in fact there are plenty of social incentives that hold in rationalist spaces while not holding in the world in general.
It’s also relevant that these kinds of norms select for people who are more likely to agree with them. So if we consistently enforce them, that has the effect of keeping out the kinds of people who wouldn’t agree with them, making local enforcement of them possible.
So maybe one model that I could have, given Omega saying that my proposed change would have a bad impact on the post, would be something like… “Making the change would reduce the amount of effort that people needed to expend to decide whether reading this post was worth it. That would increase social pressure on other people on LW to write posts that were readable with minimum effort. While the marginal impact of this post in particular wouldn’t be that big, it’d still make it somewhat more likely that another post would give in slightly more, making it somewhat more likely that yet another post would give in slightly more, and so on. As a result of that, more impatient users would join the site (as the posts were now failing to filter them out) while more patient users would be pushed out, and this would be bad for the site in general.”
So even if I might correctly object “hey, I don’t actually expect all content to require literally zero effort from me”, it might still be the case that I’m more impatient than I would be in a counterfactual world where I wasn’t influenced by the social forces pushing for more impatience.
Now that I think of it, I’m pretty certain that that’s actually indeed the case.
I’ll note that Logan’s writing historically gets “surprisingly” little engagement, and “it doesn’t fit well in a culture of impatience” is among my top guesses as to why.
Like, if LessWrong were 15% more patient (whatever that actually means), I suspect Logan’s writing in particular would get something like 30% more in the way of discussion and upvotes.
So my disagreement with this model is that it sounds like you’re modeling patience as a quantity that people have either more or less of, while I think of patience as a budget that you need to split between different things.
Like at one extreme, maybe I dedicate all of my patience budget to reading LW articles, and I might spend up to an hour reading an article even if its value seems unclear, with the expectation that I might get something valuable out of it if I persist enough. But then this means that I have no time/energy/patience left to read anything that’s not an LW article.
It seems to me that a significant difficulty with budgeting patience is that it’s not a thing where I know the worthwhile things in advance and just have to divide my patience between them. Rather, finding out what’s worthwhile requires an investment of patience by itself. As an alternative to spending 60% of my effort budget on one thing, I could say… take half of that and spend 5% on each of six things, sampling them to see which one seems the most valuable to read, and then invest 30% on diving into the one that does. And that might very well give me a better return.
On my model, it mostly (caveat in next paragraph) doesn’t make sense to criticize people for not having enough patience—since it’s not that they have less patience overall, it’s just that they’ve allocated more of it to other things. And under this model, trying to write articles so as to make their value maximally easy to sample is the prosocial thing, since it helps others make good decisions.
I get that there’s some social pressure to just make things easy-to-digest for its own sake, and that some people feel principled indignation if they’re forced to expend effort; both of these go beyond the “budget consideration” model. But compared to the budget consideration, this seems like a relatively minor force, in my experience. Sure, there are some people like that, but I don’t experience them being influential enough to be worth modeling. I think for most impatient people, the root cause of their impatience isn’t principled impatience but just having too many possible things that they could split their patience between.
it sounds like you’re modeling patience as a quantity that people have either more or less of, while I think of patience as a budget that you need to split between different things
… it’s pretty obviously both?
Like, each person is going to have a quantity, and some people will have more or less, and each person will need to budget the quantity that they have available.
And separately, one can build up one’s capacity to bring deliberate patience to bear on worthwhile endeavors, thus increasing one’s available quantity of patience, or one can not.
What I’m trying to say about Logan’s writing in particular is something like “it takes a certain degree of patience (or perhaps more aptly, a certain leisurely pace) to notice its value at all (at which point one will be motivated to keep mining it for more); that degree of patience is set higher than 85+% of LessWrongers know to even try offering to a given piece, as an experiment, if they haven’t already decided the author is worth their attention.”
Like, each person is going to have a quantity, and some people will have more or less, and each person will need to budget the quantity that they have available.
Ah, that makes sense. I like that framing as an elegant way of combining the two.
that degree of patience is set higher than 85+% of LessWrongers know to even try offering to a given piece, as an experiment, if they haven’t already decided the author is worth their attention.
Do you have a model of how to change that? Like, just have the site select for readers that can afford that leisurely pace, or something else?
Not really, alas.
Like, there are ideas along the lines of “reward people for practicing the skill of patience generally” and “disincentivize or at least do not reward the people practicing the skill of impatience/making impatient demands.”
But a) that’s not really a model and those aren’t really plans, and b) creating the capacity for patient engagement still doesn’t solve the problem of knowing when to be patient and when to move on, for a given piece of writing.
(Not sure I’ll be able to substantively respond but wanting to note that I’ve strong upvoted on both axes available to me; your summary up to the point where you noted you’d check in was great)
yeah for real Kaj, i’m pretty sure that was in form if not content among the best contributions to a comment thread i’ve ever seen

<3
FWIW I like this comment much more than some of the others you’ve written on this page, because it feels like it’s gotten past the communication difficulty and foregrounds your objection.
I am a little suspicious of the word ‘should’ in the parent comment. I think we have differing models of reader buy-in / how authors should manage it, where you’re expecting it to be more correlated with “how much you want them to read the post” than I am.
This line was also quite salient to me:
that makes it hard to take [an audience who actually gives a crap and doesn’t need to be infinitely “sold” on every little thing] for granted
There’s an ongoing tradeoff-fight of which things should be appropriate context (‘taken for granted’) in which posts. The ideal careful reader does have to be finitely sold on the appropriate details, and writing with them in mind helps make writing posts sharpen thinking. We have the distribution of potential readers that we actually have.
I want to simultaneously uphold and affirm (you writing for the audience and assumed context you want) and (that not obviously being the ‘rationalist’ position or ‘LessWrong position’). When ‘should’ comes out in a discussion like this, it typically seems to me like it’s failing to note the distinction or attempting to set the norm or obvious position (such that opposition naturally arises). [Most of the time you instead write about Duncan culture, where it seems appropriate.]
(I like and have upvoted the above)
writing with them in mind helps make writing posts sharpen thinking. We have the distribution of potential readers that we actually have.
For sure! This is part of why the post took me over a year; I did in fact work hard to strike what I felt was a workable compromise between me-and-the-culture-I’d-like-us-to-have and the-culture-we-currently-have.
Some of what’s going on with my apologetic frustration above is, like, “Gosh, I’ve already worked real hard to bridge the gap/come substantially toward a compromise position, but of course you all don’t know that/can’t be expected to have seen any of that work, and thus to me it feels like there’s a real meta question about ‘did you go far enough/do an effective-enough job’ and it’s hard to make visceral to myself that y’all’s stance on that question is (probably) different than it would be if you had total extrospective access to my brain.”
I do not at all mean to criticize you deeply. this is a great post. I just want to be able to use it in conversations on discord where people are new to the concept with somewhat less difficulty. I linked it somewhere and got the immediate response “it opens with a quote from that one lady, close”, and another who was approximately like “geez that’s long, can you summarize”. Yes, I know you’d wish that the sanity waterline was higher than that; and you did do a great job building this ladder to dip into the sanity so the sanity can climb the ladder. I just wanted to have a link that would clearly signal “you don’t have to read the rest if you decide the intro isn’t worth it”. It’s a small edit, and your existing work into making the thing doesn’t make it impossible to change it further. honestly when I first posted my comment I thought I was being constructive and friendly.
Having read to this point in the thread, part of me wants this post to be called “Basics Of Intermediate Rationalist Discourse”.

Just copy paste the bullet points.

reasonable.
I linked it somewhere and got the immediate response “it opens with a quote from that one lady, close”
Why in the world would we want to optimize for engagement with people like that…? Excluding those who react in such a way seems to me to be a good thing.
This is my sense as well, but this is in large part the core of the cultural disagreement, I think?
Like, back in the early 2000s, there was a parkour community centered around a web forum, NCparkour.com. And there was a constant back-and-forth tension between:
a) have a big tent; bring in as many people as possible; gradually infect them with knowledge and discipline and the proper way of doing things
b) have standards; have boundaries; make clear what we’re here to do and do not be particularly welcoming or tolerant of people whose default way of being undermines that mission
My sense is that, if you’re an a-er, the above mentality seems like a CLEAR mistake, à la “why would you drive away someone who’s a mere two or three insights away from being a good and productive member of our culture??”
And if you’re a b-er, the above mentality is like, yep, two or three insights away from good is a vast and oft-insurmountable distance, people generally don’t change and even if they do we’re not going to be able to change them from the outside. Let’s not dilute our subculture by allowing in a bunch of “voters” who don’t even understand what we’re trying to do, here (and will therefore ruin it).
My sense is that LessWrong has historically been closer to a than to b, though not so close to a that, as a b-er, I feel like it’s shooting itself in the foot. More like, just failing to be the shining city on the hill that it could be.
(Also, more of a side note, but: the quoted text is not from J.K. Rowling.)