Lately I’ve noticed, both here and the wider LW-sphere, a trend towards rationalizing the status quo. For example, pointing out how seemingly irrational behavior might actually be rational when taking into account various factors. Has anyone else noticed the same?
At any rate, I’m not sure whether this represents an evolution in the discourse (taking more subtleties into account) or a regression (genuine change is too hard, so let’s rationalize).
“Again and again, I’ve undergone the humbling experience of first lamenting how badly something sucks, then only much later having the crucial insight that its not sucking wouldn’t have been a Nash equilibrium.”—Scott Aaronson
Damn, that is a lesson I forgot. Does anyone else experience this? Reading an article, agreeing that it’s an interesting insight, forgetting it, and then rediscovering it in a different context?
This happened to me all the time before I started putting valuable insights into Anki. I find that 1 card per outstanding article or lecture and 1-3 cards per excellent book is about right. (This is the only thing I use Anki for.)
I tried this but went about it wrong: I wrote a whole bunch of cards as if I were making comprehensive notes (around the level of detail of the MineZone book notes), and ended up getting frustrated by the chaff of disordered small notes that the system threw back at me. One card per article or book section seems like a good rule of thumb.
Do you have any conventions for turning insights that don’t necessarily go into a neat question/answer format into card halves? Just put the whole thing on the front of the card?
I’ve found that the process of creating the cards is helpful because it forces me to make the book’s major insight explicit. I usually use cloze tests to run through a book’s major points. For example, my card for The Lean Startup is:
“The Lean Startup process for continuous improvement is (1) {{c1::identify the hypothesis to test}}, (2) {{c2::determine metrics with which to evaluate the hypothesis}}, (3) {{c3::build a minimum viable product}}, (4) {{c4::use the product to get data and test the hypothesis}}.”
This isn’t especially helpful if you just remember what the four phrases are, so I use this as a cue to think briefly about each of those concepts.
Does this become a single card with four blanks to fill or four cards that have all but one blank visible?
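If I remember Anki’s behaviour correctly, it’s the latter: each distinct cN index becomes its own card, and reviewing card N hides only the {{cN::...}} deletions while showing the rest. Here is a minimal Python sketch of that expansion (an illustrative helper, not Anki’s actual code):

    import re

    CLOZE = re.compile(r"\{\{c(\d+)::(.*?)\}\}")

    def expand_cloze(text):
        """Sketch of Anki-style cloze expansion: one card per cloze index,
        hiding only that index's deletions and showing all the others."""
        indices = sorted({int(m.group(1)) for m in CLOZE.finditer(text)})
        cards = []
        for idx in indices:
            def render(m, idx=idx):
                return "[...]" if int(m.group(1)) == idx else m.group(2)
            cards.append(CLOZE.sub(render, text))
        return cards

    note = ("The Lean Startup process is (1) {{c1::identify the hypothesis}}, "
            "(2) {{c2::determine metrics}}, (3) {{c3::build a minimum viable product}}, "
            "(4) {{c4::test the hypothesis with real data}}.")

    for card in expand_cloze(note):
        print(card)  # four cards, each with a single [...] blank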
I’ll have to try that.
Indeed, I’ll have to try not to forget that.
What you are observing is part of the phenomenon of meta-contrarianism. Like everything Yvain writes, the aforementioned post is well worth a read.
I don’t know. Metacontrarianism, as I understand it, involves taking specific positions solely for the sake of differentiating oneself from others, whereas many of the status quo explanations (e.g. Yvain’s recent post on weak men) seem like they actually have definite intellectual merit as well.
My explanation would be more something like “LW was originally quite dominated by Eliezer’s ideas, but over time and as people have had the time to think about them more, people have started going off in their own directions and producing new kinds of thoughts that are the kind of synthesis that you get when you’ve assimilated the LW canon deeply enough and then start combining it with all the other influences and ideas that you run into and think about in your life”.
There does seem to be a pattern, though: first a commonly accepted idea, pattern, or behavior; then an LW-ish, sequence-related, “rational” (for some sense of the word) alternative; then an LW-ish justification of something similar to the original commonly accepted idea, pattern, or behavior. It’s similar to the metacontrarianism pattern even if it’s not caused by actual metacontrarianism.
There’s also the general “thesis, antithesis, synthesis” pattern that intellectual ideas tend to take.
I’ve read that one—what I was thinking of felt a little different, but maybe it’s really the same thing.
The status quo actually exists. The status quo is the result of the only experiment we’ve managed to observe.
Understanding this is far more important than any behavior that exists only in theory.
It reminded me of this article:
When a single experiment seems to show that subjects are guilty of some horrifying sinful bias (...) people may try to dismiss (not defy) the experimental data. Most commonly, by questioning whether the subjects interpreted the experimental instructions in some unexpected fashion (...) Experiments are not beyond questioning; on the other hand, there should always exist some mountain of evidence which suffices to convince you. It’s not impossible for researchers to make mistakes. It’s also not impossible for experimental subjects to be really genuinely and truly biased. It happens. On both sides, it happens. We’re all only human here. If you think to extend a hand of charity toward experimental subjects, casting them in a better light, you should also consider thinking charitably of scientists. They’re not stupid, you know. If you can see an alternative interpretation, they can see it too. This is especially important to keep in mind when you read about a bias and one or two illustrative experiments in a blog post. Yes, if the few experiments you saw were all the evidence, then indeed you might wonder. But you might also wonder if you’re seeing all the evidence that supports the standard interpretation. Especially if the experiments have dates on them like “1982” and are prefaced with adjectives like “famous” or “classic”.
The belief in the correctness of the status quo is very strong. Even if people don’t literally believe in a “just world”, they still want to believe that at least some parts of the world are the way they are for a reason. Maybe there is no god creating the balance, but couldn’t evolution or the economy produce the same outcome?
There is also the “Chesterton’s fence” concept, that if you don’t understand how X makes sense, that may be a fact about you, not a fact about X.
Perhaps it would be better to tackle each case separately. What is the evidence for the rationality of the given behavior? What is the evidence for its irrationality?
If you want to change the status quo, it’s very important to understand the reasons why it exists. If you don’t, you are unlikely to be able to change anything.
Sometimes, however, there is no big, deterministic reason for the status quo—it can be historical accident. A lot of intellectual effort can go into story-telling about why things are the way they are and be dead wrong.
If a status quo is stable then there are forces that keep it stable.
Of course that doesn’t mean that you can’t be wrong if you try to identify those forces.
They can be local minima, stable but arbitrary. People turn “for want of a nail” into “inconceivable it could have turned out otherwise”.
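To make “stable but arbitrary” concrete, here is a minimal sketch with made-up payoffs (a toy 2x2 coordination game, not anything from the thread): both conventions are Nash equilibria, even though one is strictly worse for everyone, so the worse one can persist as the status quo.

    # Toy coordination game with illustrative payoffs: coordinate on A for 2 each,
    # coordinate on B for 1 each, miscoordinate for 0. A profile is a Nash
    # equilibrium when neither player can gain by deviating alone.
    payoffs = {
        ("A", "A"): (2, 2),
        ("A", "B"): (0, 0),
        ("B", "A"): (0, 0),
        ("B", "B"): (1, 1),
    }
    strategies = ["A", "B"]

    def is_nash(row, col):
        r_pay, c_pay = payoffs[(row, col)]
        row_ok = all(payoffs[(alt, col)][0] <= r_pay for alt in strategies)
        col_ok = all(payoffs[(row, alt)][1] <= c_pay for alt in strategies)
        return row_ok and col_ok

    for r in strategies:
        for c in strategies:
            if is_nash(r, c):
                print((r, c), payoffs[(r, c)])
    # Prints ('A', 'A') (2, 2) and ('B', 'B') (1, 1): the worse convention is an
    # equilibrium too, which is why "not sucking" is not guaranteed.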
Can you point to a specific example of this? Or maybe give an example from what you’ve seen personally?
Scott Alexander’s “Weak men are superweapons” comes to mind.