We gravely apologize in the most grave form we can for this post getting eaten, and will be prioritizing the autosave feature as the very next thing we do. :/ :)
That’s cool and appreciated and awesome but do you have any comments about BAYES or BUCKETS
(<3)
Unfortunately I have had to block lesserwrong.com during most working hours so that I can’t get too distracted from coding lesserwrong.com, so I was only able to get 1/3 of the way through this post before my screen abruptly turned green and said “You are free from Lesserwrong.com!”
Featured on the frontpage. I really like it as an introduction to Bayes and bucket errors. I am not fully sure whether it covers enough ground in either to serve as a solid foundation on its own, but the combination might really create something concrete and vivid that future explanations can latch onto to provide the more solid intellectual foundation, and that’s maybe precisely what you want from an introductory post.
This is the kind of post I can imagine early on in a future featured collection that takes the place HPMOR currently holds, and that is community-driven/has multiple authors (or maybe written completely by you? Who knows, you’ve been doing a lot of great writing over the last month).
Wahoo! Gratitude and pride and happiness-to-have-contributed.
(Warning: far too long and slightly rambly comment incoming. I haven’t gotten quite enough sleep tonight.)
I have two levels of comments on this:
1. Pedagogical thoughts:
I really like the pedagogical presentation, but stumbled in a few places. When you asked

> Okay. Before I draw the through-line, it’s worth pausing to check: did any of these spark anything in your own memory or experience? Do you have a similar story rising to the surface already?
I don’t think I was fully ready yet to understand what “similar” meant here. A variety of stories came to mind, but the common pattern wasn’t clear enough for me to determine whether they fit what you were asking for. Maybe it would make sense to ask the reader to come up with a potential through-line themselves, and then ask them whether they have a story that fits that through-line? Not sure, but I did end up stumbling a bit here.
Another thing about the Bayes explanation: I think I would be even happier if you made some slight references to the terms that are commonly used for the relevant mental/mathematical actions when explaining Bayes in other places. E.g., you use the term “condition on” once in the explanation, but you don’t put it into context as the key action in a Bayesian update, or note how it might be used in a more technical explanation of the subject, such as Eliezer’s intro to Bayes. I generally think that making small cross-references to existing content or alternative explanations has a very large positive effect, since it causes the reader to connect different mental buckets that they are currently keeping separate.
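For readers who want that cross-reference spelled out, here is a minimal sketch of what “conditioning on” a piece of evidence does in a Bayesian update; the numbers are made up purely for illustration and don’t come from the post:

```python
# Minimal sketch of a Bayesian update (all numbers are made-up placeholders).
# "Conditioning on" the evidence E means restricting attention to the worlds
# where E actually happened and renormalizing the prior over those worlds.
prior_h = 0.3          # P(H): prior probability of the hypothesis
p_e_given_h = 0.8      # P(E | H): how likely the evidence is if H is true
p_e_given_not_h = 0.1  # P(E | not-H): how likely the evidence is if H is false

p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h  # P(E)
posterior_h = prior_h * p_e_given_h / p_e                      # P(H | E), Bayes' rule

print(round(posterior_h, 3))  # 0.774 -- the belief left after conditioning on E
```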
I ended up skimming over the 5-step process on my first read, and I’m not fully sure why. I am a bit more tired than usual today, but even so, I feel that a section break or something else to give the actual 5-step technique some more weight would have been good. Maybe another paragraph encouraging the reader to actually apply the process after having read through it.
The last thing is more of a bug in human visual processing, but we might still need a few decades to fix it, so I will leave the critique here until then. Humans seem to be really bad at comparing the sizes of areas, especially the areas of circles, and this causes people both to repeatedly misinterpret pie charts and to be confused by Venn diagrams. I am not really sure what to do about this, except to maybe err on the side of using square Venn diagrams and flow diagrams when possible.
2. Rationality thoughts:
I am not fully sure how much this post will actually help me get rid of bucket errors. I think the really hard question in the process is (2). Bucket errors usually aren’t tagged like bucket errors, and even if I realize that I am in a situation where I might be making a bucket error, the correct resolution of the error is often very hard.
I feel that a part of the process of fixing bucket errors has to be some kind of brainstorming process about alternative explanations which might carve reality differently than you currently do, or the creation of some kind of safe space where you allow yourself to realize that your reasoning might be flawed in this situation. Internal Double Crux goes a long way here (i.e. dialoging with your internal parts instead of punishing or threatening them), but even then I feel that something more in the space of the Fermi Modeling stuff I gave some talks on might be useful.
I am a bit conflicted about this and will think more. I don’t think I fully yet understand what the motion you are describing in step 5 is, and will see whether I can come to better grasp it after sleeping and thinking on it for a while.
In any case, great post!
I endorse basically all of your constructive critique, as well as your more vaguely voiced concerns. I went back through and addressed everything in the above comment at least in part, though for some things (like your final concern about the motion in step 5) “addressing” it looked like “admitting to and lampshading the handwaving.”
I do think that you pointing out the teachable moment on the phrase “conditioned on” was super high value, and I’m glad for the opportunity to edit that in.
> I am not fully sure how much this post will actually help me get rid of bucket errors. I think the really hard question in the process is (2). Bucket errors usually aren’t tagged like bucket errors, and even if I realize that I am in a situation where I might be making a bucket error, the correct resolution of the error is often very hard.

I had the same problem when I ran the algorithm. I made a too-hasty decision about what the “question” was, and “resolving” it was unsatisfying.
Perhaps adding a step for something like aversion factoring, in order to tease out which implication is the emotionally relevant one?
> I am not really sure what to do about this, except to maybe err on the side of using square Venn diagrams and flow diagrams when possible.
Strong support for square Venn diagrams.
Tried square Venn diagrams prior to habryka’s comment. Rejected them because they were worse pedagogically (despite being more accurate). Suspect rounded-corner quadrangles may actually be an optimal substitute.
I’ve never actually tried square Venn diagrams, so I would be interested in you unpacking the “they were worse pedagogically” part.
Essentially, I found that they implied something much more specific and meaningful in the overlap. With circles, you’re just pushing centers together/overlapping radii, and the mind parses the overlap as perfect/symmetrical and therefore containing no semantic content other than size/area.
With the square or rectangular Venn diagrams, the creator starts making choices—should the overlap be center-to-center, center-to-corner, corner-to-corner, etc.? Should the overlap be “designed” to have a square aspect ratio, or to be golden, or to be long and narrow? If you’ve got a clear grid such that every background square is 1u^2, should you force the blocks to adhere evenly to that grid, or have them be off? What if your area doesn’t easily break down into X by X, or X-minus-a-little by X-plus-a-little? If your squares are e.g. 4x4 and 6x6 but you want the overlap to be 7 squares, what do?
I found that there was no “natural” answer to these questions; no Schelling layout that seemed zero-content. Instead, every arrangement invited the reader to try to actively parse it for additional meaning that wasn’t there, chewing up attention and bandwidth.
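As a quick arithmetic check of the 4x4 / 6x6 example above (the framing here is mine, not from the original diagrams): an overlap of exactly 7 unit squares can never be a grid-aligned rectangle once both of its sides have to fit inside the smaller square, so whatever you draw reads as a deliberate choice.

```python
# The overlap of an axis-aligned 4x4 and 6x6 square is at most 4 units on a side.
target_area = 7
max_side = 4

# Grid-aligned rectangles with integer sides and area 7 that fit the constraint:
options = [(w, h) for w in range(1, max_side + 1)
           for h in range(1, max_side + 1) if w * h == target_area]
print(options)  # [] -- 7 is prime and 7 > 4, so no on-grid rectangle works

# So the creator is forced off-grid, e.g. a 2 x 3.5 overlap:
print(2 * 3.5)  # 7.0
```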
This was a great explanation, and now I want to try this out to see the same effect myself.
Alternative idea: Waterfall diagrams, which is what Eliezer uses sometimes in his latest Bayes Guide.
Here’s my take on a good Venn diagram layout that doesn’t try to convey extra information, and avoids the problems you and the parent mentioned: https://i.imgur.com/tKPzfLM.png. Make the rectangles full height, and give them rounded corners so it’s clear that these are subsets of a larger space and not just vertical bars (with square corners it’s unclear whether there are 2 overlapping sets or 3 adjacent ones). The only caveats are that this is not instantly recognizable like your standard Venn diagram, and it is only really usable for 2 subsets.
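For anyone who wants to play with this layout, here is a rough matplotlib sketch of the idea (full-height, rounded-corner rectangles inside a larger space); the proportions, colors, and rounding are arbitrary placeholders, not measurements taken from the linked image:

```python
# Rough sketch of a two-set "Venn" layout using full-height rounded rectangles.
import matplotlib.pyplot as plt
from matplotlib.patches import FancyBboxPatch, Rectangle

fig, ax = plt.subplots(figsize=(6, 3))

# The enclosing space (the universe of cases).
ax.add_patch(Rectangle((0, 0), 10, 4, fill=False, linewidth=1.5))

# Two full-height rounded rectangles; their horizontal overlap is the intersection.
style = "round,pad=0.05,rounding_size=0.4"
ax.add_patch(FancyBboxPatch((1.0, 0.3), 5, 3.4, boxstyle=style,
                            facecolor="tab:blue", alpha=0.4, edgecolor="black"))
ax.add_patch(FancyBboxPatch((4.0, 0.3), 5, 3.4, boxstyle=style,
                            facecolor="tab:orange", alpha=0.4, edgecolor="black"))

ax.set_xlim(-0.5, 10.5)
ax.set_ylim(-0.5, 4.5)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```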
Yeah. I hadn’t thought to go full column height, but this definitely resembles one of the options I thought was most promising. I think you may have identified the best option for square diagrams that match the use case in this post.
> Author’s note: I was five hours’ work and 5,000 words into this post last night when something in the LW2 UI blinked, and all of it vanished.

This is why you should produce offline, then upload it online. I don’t know why you haven’t been doing it, but perhaps you should take this opportunity to start the practice; there are several other benefits to the practice, including (but probably not limited to):
Ability to solicit feedback, revise and update the article, etc. It allows you to adopt an agile methodology of article writing. Maybe you are confident enough in your writing that you feel you don’t need a testing and development period.
Ability to develop the work over several time periods. I have around 4 mega-long articles (estimated 10,000+ words at completion) that I’ve been working on for a few weeks/days, one of which I plan to publish sometime this month.
Ability to use professional writing tools. Microsoft Word is really great for all but the most math-heavy articles (for which I turn to LaTeX), I have heard good things about Grammarly, and there are probably several other tools you can use to expedite development.
Ability to plan your article, and use cool development approaches like the snowflake method.
I view writing online as software development without testing, and basically developing on the spot. This may work for writing short scripts, but for any commercial software development, you want to go through the entire software development life cycle.
I hope you start writing offline and take advantage of the benefits, producing even more high-quality content.
I realise that this kind of development methodology may not appeal to you if you plan to write 1 post every day. However, like Gauss, I prefer quality to quantity.
Why did this comment get downvoted? I’m willing to update if I did something wrong, but I think I was being genuinely helpful.
I had a pretty negative initial reaction to this comment, and I am currently trying to decipher why. Here are my thoughts:
I felt some sense that you were personally threatening Conor, or trying to get him to admit a mistake. The first sentence was:

> This is why you should produce offline, then upload it online.

To me this was spoken in a condescending tone, with somewhat of an “I told you so” intonation. The beginning of the next sentence read similarly to me:

> I don’t know why you haven’t been doing it

Which to me was spoken in an ironic voice, meaning something more like “I don’t know how you missed such an obvious thing”.
And I think after those two sentences I had already settled on that being the tone of the comment, and that tone just felt really aggressive to me, so I downvoted it.
If that had been the tone in which you intended the comment to be read, then I think the downvotes would have been justified. Reading your response, I have a sense that you did not intend the comment to be read in the tone in which I ended up reading it.
I am not really sure what the best way to deal with this is. Subtext and tone are much harder to communicate in text-only communication, and so I think authors will inevitably have to put in a bunch of effort to ensure that text does not get read in the wrong tone. My reading of the comment in this more aggressive tone happened very quickly, at a highly subconscious level, and while I will try to calibrate myself better here, I am not sure how tractable an intervention on the level of “let’s just have everyone try to be more charitable when reading comments” actually is, or whether that makes sense.
For now, I think it is reasonable that this comment got downvoted, because I expect the majority of readers to read it in an aggressive tone, but I also think there is a very nearby comment, not requiring significant rewriting, that would get read in the correct tone.
I agree with both Elizabeth and Said (just saw all of this thread in the last ten minutes).
The advice was something like 85% obvious and stuff I had already belatedly realized, and so getting it in long form was a little bit like being lectured or being kicked while I was down. There are two status signals in a comment like that (the first being “I, DragonGod, know better” and the second being “you, Conor, are not clever enough to have already figured this out, nor will you in the future unless I tell you”) and for that reason my monkey brain wanted to downvote reflexively (I didn’t, in the end).
But a) it’s obvious from you asking this question that those status moves were not at all on your radar/were not intended, and b) I completely agree that it’s good to have those thoughts and heuristics up where others can see them and benefit from them. Your comment included models and details that make it concretely useful, and it did, in fact, include at least one point that I hadn’t considered.
Interestingly, as I tried to answer your question, I found myself skirting the edge of the exact same surface-level problem that I think brought on the downvotes. So as I went to try to tell you what I think was good and bad about your comment, I found myself wanting to couch my points with phrases like “you probably already thought of this, but for the sake of anyone who doesn’t” and “I could be wrong here, let me know if your models differ” and “I hope this doesn’t come across as lecturing/condescending; I just figured it was better to err on the side of caution and completeness.”
If I were in your position, that’s probably the update I’d take away from this (that the skin of the comment matters a lot, and there’s a large swath of people who don’t want to be treated like white belts). I’d probably still have the same “speak up and share advice” module, but I’d add a function to the front that injects some gentleness and some status-dynamic-defusing words and phrases.
> I’d probably still have the same “speak up and share advice” module, but I’d add a function to the front that injects some gentleness and some status-dynamic-defusing words and phrases.
In this case, I have to object to this advice. You can tie yourself in knots trying to figure out what the most gentle way to say something is, and end up being perceived as condescending etc. anyway for believing that X is “obvious” or that someone else “should have already thought of it” (as again, what is obvious to one person may not be obvious or salient to another). Better to just state the obvious.
First, I definitely agree that getting tied in knots is both a) bad and b) something at least some people are vulnerable to.
But 80/20 seems like a valuable principle to adhere to, especially if you’ve already been downvoted/punished for your normal style and then, on your own initiative, asked for clarification/updated heuristics. The claim I made about what I would do in DragonGod’s shoes wasn’t an imperative for literally everybody; it was simply … well, what I would do in their shoes.
I think I would be with you re: objecting if the line you quoted were broadcast as general advice that everyone needs more of. But in context, my brain is trying to round off your objection to “you can’t please everyone, so don’t even try.” I grant that I may have misunderstood you, but in fact signaling and status moves are real, and in fact (as evidenced by group opinion) DragonGod’s first comment was readily parseable as containing information that DragonGod didn’t intend to put there, and in fact a good patch for preventing that from happening in the future is adding something like 5-25 words of gentling and caveatting. That wasn’t a call to tie oneself in knots, nor a claim that this would please everybody.
“Add 5-25 extra words,” for someone who’s already gotten data that they’re not at the ideal point and was intrinsically motivated to investigate further, does not seem to me like a dangerous heuristic that’s likely to suck up lots of attention or effort.
And as for “stating the obvious,” well—I’d wager $50 at 5-1 odds that if DragonGod had prefaced the comment with “I may be stating the obvious here, but,” then the downvotes would not have happened (because that does away with the implicit claim that I or other readers don’t know that these things are low-hanging fruit and easily thinkable).
For what it’s worth, I’m strongly of the same opinion as Conor on this one. It’s just too easy to get annoyed by criticisms, and I think increasing the risk of tying oneself in knots is worth it for the sake of decreasing the risk that someone gets offended.
I thought it was an excellent comment, and upvoted it!
Edit: In particular (though not only)—I found the analogy to software development insightful, and new to me (has this comparison been made elsewhere? if so, I’ve not seen it); and I also didn’t know about the snowflake method, so that’s a new thing I’m reading about now, thanks!
And I definitely agree with the core point about writing offline, and all the reasons you list for why it’s a good idea.
I haven’t voted on this comment, but if I had to guess:
The author did not ask for advice. The loss was mentioned as part of context for a different decision.
It is too late to do anything about losing this particular post.
“Write offline” is an obvious solution to the problem, so suggesting it comes across as condescending and victim-blaming.
Not everything is obvious to everyone; and what is obvious, may grow less obvious with time, as new generations, and new waves of people who come into a community, come of age without ever hearing it said aloud (since it’s too obvious to say!). This is why even obvious things need to be said, periodically, lest another piece of collective knowledge be lost.
Then, too, things that are obvious when you think about them might not be salient, in the moment of doing; saying the obvious thing increases its salience, and thus benefits everyone.
Finally, a comment on a forum/blog post is not a one-to-one communication between OP and commenter; it’s a public speech act. The advice is condescending, useless to the OP? Perhaps, perhaps not. But that has little bearing on whether it is useful to others who read it!
It will, I think, lead to far more lastingly useful content on this site, if we treat comments not just, and not even primarily, as utterances in a conversation, but also and importantly, as public utterances, spoken before the whole forum, and addressed to the collective. For who might care about the details of some conversation that took place long ago? But what was said to a public audience—that will keep.
Step 1: Spark, insight, inspiration, genesis of idea.
Step 2: Devoted meditation and contemplation on the idea, ruminations, etc.
Step 3: Draw a plan for the structure of the article.
i = 1
research(topic) # function that represents undertaking study on a particular topic. After execution, research directs to step 4.
Step 4: Write the ith draft.
Step 5: Upload a PDF of the ith draft to rationalist/rat-adjacent Discords, /r/lesswrong, /r/slatestarcodex, PM cousin_it, ELO, and some others with said PDF.
Step 6: Receive criticism and feedback.
if criticism indicates as such, research()
else if criticism/feedback is suitable, return to step 2.
else if I disagree with the criticism / feel I have done my best / feel this is the best possible, proceed to step 7.
Step 7: Publish to LW.
end

I have four articles undergoing that process currently.
I haven’t published anything using this algorithm yet.
I’m on ith drafts on some papers.
I’m on “research()” on others.
An example of an article that exhibited this is: https://www.reddit.com/r/slatestarcodex/comments/73c8c6/pdf_the_probability_theoretic_formulation_of/
It’s on research(information theory).
I held my A button halfway pressed the entire time while reading this article.
Turns out, this is a complete coincidence and is not actually a reference.
I PM’d Conor and he had no idea what I was talking about when I went on about how I appreciated the SM64 speedrunning reference.