Which ideas from LW would you most like to see spread?
My favorite is that people get credit for updating based on evidence.
The more common reaction is for people to get criticized (by themselves and others) for not having known the truth sooner.
That people notice when they are arguing about definitions, and stop doing that.
To other Internet forums:
I think evidence-updating is kinda common; what struck me as really new here is tabooing.
So that one. Not getting too hung up on terms / categories. I would be happy enough if the habit of doing a quick round of tabooing, whenever anyone feels a discussion is too attached to terminology, became widespread in places like Reddit.
For example, I kinda like economics. This is why I absolutely hate it when people use “capitalism” as a flag to rally for or against; quality then goes down the drain and it all becomes a playground fight. We all know (or, if we engage in these issues, should know) that it unpacks into two different and unrelated concepts: one is, broadly, a system based on voluntary transactions, and the other is a specific distribution of property in which most people don’t have any, so they need jobs offered by those who do. It would be so easy to not use this word and just use the appropriate unpacking.
There is something about this whole tabooing thing that reminds me of when I used to be a fairly active Buddhist. Similar things were done there.
Personal life, living better:
Tough question, but probably not trying to be “clever”. That is, not ranking solutions by their complexity or how sophisticated they look, but rather accepting, taking, or coming up with “boring advice”. There is a HUGE urge to show off your brain sparkles if you have an IQ over 120, and this can be highly counter-productive. It is really humiliating and enlightening to see how much more efficient people can be who are not trying to be too clever. I know a guy who is a textbook average mind: works in a warehouse, likes football, not much else. He and his girlfriend wanted to live in the UK and simply did it: he moved, got a job, and she followed a few months later. At some point he figured it was time to lose some weight, and since he was not much into cooking, he just filled a big tupperware every 2-3 days with sliced cucumbers and canned tuna, and that salad was the only thing he ate. It was, of course, very efficient: he had found two of the least calorie-dense foods that exist. And yet there are intelligent people who struggle with their weight for decades using the most complicated insulin-response-based diets. Committing hard to something simple is often best; the problem is just that it lacks glory, and hence motivation...
Eating tuna every day likely gives you more mercury than the European Food Safety Authority considers to be safe.
True, but it also depends on the type; I think the most popular type here is skipjack tuna, which is classified as having moderate mercury content.
Depends on where the fish comes from, but +1 for promoting awareness of mercury risks.
What kind of negative health consequences did the diet have?
I don’t know of any. It was only about two months anyway, dropping a good 10 kg; AFAIK micronutrient deficiencies usually take longer to build up.
My most pleasant experience here was that people listen to your arguments if you make the effort / do your homework, even if they are contrary to their ideological views.
When I wrote something that (knowing the statistics about the demographics who frequent this site) opposed the ideological views of the overwhelming majority of the target audience, and even involved religion in a limited way, the reaction was not what you find on most other sites. Elsewhere they denounce you as the Enemy, cherry-pick the arguments of yours they can counter while pretending not to notice the ones they can’t, and even if they find something in your arguments they agree with, they never admit it, because that is impossible: you are the evil Enemy and everything you say must be wrong by default. Here, instead, most people actually listened; they stated the parts they disagreed with, but they also admitted the parts they agreed with, and even admitted that they had to update at least part of what they knew or believed about the topic.
A lot of online communities pay lip service to the idea that their experiences aren’t universal, but Less Wrong seems to be one of the few places that takes that idea seriously.
This is actually one of the things I am both positive and negative about. I think people are far more similar than is sometimes assumed here. The evidence is pop culture: how could something be as popular as Star Wars or Metallica if not by calling out to very similar underlying emotional hardware in people? In a No Mind Is Typical universe there would be no universal mind products: no hugely successful bands or movies, just thousands of mini subcultures with mini stars.
One of my most surreal moments of How We All Are Really Alike was reading a documentary novel by an Afghan woman writer. I don’t really remember the title or her name, but I can find the book later if anyone is interested. Anyway, she wrote that Titanic was a huge underground success there. Underground because it was banned, like most Western stuff; it was the early Taliban era, if I remember right, but still pretty much every woman or young man saw it. It is a romantic movie that has just about NO reference whatsoever to how love or marriage is done in their culture, and yet it “clicked”, because it talks to the same universal human hardware. (Never mind the book; I found another source anyway.)
I feel like you two are talking about a different type of variance. It may be that Titanic is well-loved in every culture all over the world, ever. But there are still individuals who didn’t enjoy it. I think sixes means that LW is unusually good at alieving that not everybody enjoyed Titanic.
I endorse this interpretation.
The important thing is not whether you agree in the abstract that people are or aren’t alike. The important part is actually noticing when you hit a topic where people aren’t alike.
The difference between “I don’t think that’s true” and “I disapprove of the policy implications of that”.
Steelmanning, and tabooing words.
But neither of these were invented here.
I’m not concerned about where the ideas were invented, just taking a look at what might be worth spreading.
A few years later, I feel like tabooing is pretty much obsoleted by steelmanning. When someone says a vague word like “emergence”, you can ask them to taboo it or try to steelman it, and the latter is usually much more interesting.
I think tabooing (known by its conventional name, “unpacking”) is done in a cooperative argumentation context, often in philosophy papers.
So in that sense it’s useful if you can count on your interlocutor to be a colleague rather than an adversary. In adversarial contexts presumably the goal is to grind the opponent into dust as efficiently as possible.
I feel these are a bit different.
I understand “unpacking” as basically going into more detail, descending one level of abstraction. It’s a good answer to the question “But what do you mean by this?”
Tabooing to me is more like shifting frameworks or changing the point of view. Concepts tend to exist in sets, and I would understand a request to taboo as a call to swap out one whole set of concepts for another.
Yeah that’s exactly what I’m seeing on LW :-(
In hindsight it seems that steelmanning is the best idea in the LW-sphere. It was invented by Steven Kaas (whose whole blog is a great read) and popularized by Scott, who went on to use it as a writing exercise and then grew it into a hugely successful blog. I’m no Scott, but pretty much all my best posts and comments came from trying to steelman various ideas, and my mistakes often come from not steelmanning enough.
Understanding much communication as glorified signalling, and specifically Applause Lights.
EA.
Working habits for making unusually low levels of dishonesty work outside relationships and small select groups.
Scholarship: How to Do It Efficiently by lukeprog
Immortality: A Practical Guide by G0W51
JonahSinick’s many many many many posts on career advice
More knowledge about bias, which would particularly undermine the unfortunately common and well-regarded stance “I only believe what I see”. People rely too much on their direct feelings/intuitions without assessing them.
The idea that in order to have an accurate representation of reality, one must have background knowledge in science. Add in a little philosophy (recent philosophy, like Popper).
Also praising the ones who admit their mistakes—that happens too little.
The final idea would be like yours, more Bayesian thinking.
I’m probably too optimistic.
What do you mean by that? Especially while you try to treat Popper as an important thinker?
Updating priors with evidence. Standing by your beliefs seems to be praised—at least where I live.
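For concreteness, here is a minimal worked example of updating a prior with evidence; the numbers are invented purely for illustration:

```latex
% Bayes' rule with invented numbers: prior P(H) = 0.1,
% likelihoods P(E \mid H) = 0.8 and P(E \mid \neg H) = 0.2.
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
            = \frac{0.8 \times 0.1}{0.8 \times 0.1 + 0.2 \times 0.9} \approx 0.31
```

A single piece of moderately diagnostic evidence moves the hypothesis from 10% to about 31%; standing by the original 10% after seeing E is exactly the failure mode that gets praised as steadfastness.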
I consider Popper an important thinker; falsification is quite important right now, for example. Why do you seem to think he’s unimportant?
The funniest thing about Popper is that I don’t get the impression that he, or most of the people reading him, seek to falsify his theories. Often the attitude seems to be that since it’s philosophy, the rules of falsification don’t matter. Popper didn’t try to study scientists and how they come up with new scientific findings in order to try to falsify his hypothesis.
From a more LW perspective Lukeprog writes:
Popper isn’t really a recent thinker; he wrote 50 years ago. Jaynes wrote his book in the 1990s, and Kahneman’s work wasn’t widely known before that time.
We have modern tools for dealing with uncertainty, like credence calibration. We have found that in cases with low costs for false positives, trusting intuition is highly useful, and that most experts make a lot of their decisions based on intuition rather than analytical reasoning.
I may be mad, but I actually think of Popper more or less in the same breath as Bayesianism—modus tollens and reductio (the main methods of Popperian “critical rationalism”—CR basically says that the reductio is the model of all successful empirical reasoning) just seem to me to be special cases of Bayesianism. The idea with both (as I see it) is that we start where we are and get to the truth by shaving away untruths, by testing our ideas to destruction and going with what’s left standing because we’ve got nothing better left standing—that seems to me the basic gist of both philosophies.
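One way to make that precise (a standard observation, not original to me): if a hypothesis H entails a prediction E, i.e. P(E | H) = 1, then observing not-E drives the posterior of H to zero, so Popperian falsification drops out of Bayes’ theorem as a limiting case:

```latex
% Falsification as the limiting case of a Bayesian update,
% assuming H entails E, i.e. P(E \mid H) = 1 and so P(\neg E \mid H) = 0:
P(H \mid \neg E) = \frac{P(\neg E \mid H)\,P(H)}{P(\neg E)}
                 = \frac{0 \cdot P(H)}{P(\neg E)} = 0
```

Evidence short of outright refutation then just shaves probability off H by degrees rather than all at once.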
I’m also fond of the idea that knowledge is always conjecture, and that belief has nothing to do with knowledge (and knowledge can occasionally be accidental). Knowledge is just the “aperiodic crystals” of language in its manifest forms (ink on paper, sounds out of a mouth, coding, or whatever), which, by convention (“language games”), represent or model reality either accurately or not, regardless of psychological state of belief.
Furthermore, while I’m on my high horse: Bayesianism is conjectural deductive reasoning; neither “subjective” nor “objective” approaches have anything to do with it. It doesn’t “update beliefs”; it updates, modifies, or discards conjectures.
IOW, you take a punt, a bet, a conjecture (none of which have anything to do with belief) at how things are, objectively. The punt is itself in the form of a “language crystal”, objectively out there in reality in some embodied form: something embedded in reality that conventionally models reality, as above. Again, nothing to do with belief.
In this context, truth and objectivity (in another sense) are ideals—things we’re aiming for. It may be the case that there is no true proposition, but when we say we have a probably true proposition, what that means is that we have a ranking of conjectures against each other, in a ratio, and the most probable is the most provable (the one that can be best corroborated—in the Popperian sense—by evidence). That’s all.
I agree with gurugeorge’s response and see Popper the same way.
That said, I do think that although Pearl’s work is great, the key phrase is “in principle”: the methods rely on a number of assumptions that you can’t test (like independence), and he also says that experiment is the only guaranteed way to establish causation (in his talk “The Art and Science of Cause and Effect”). I may also be wrong; the talk was given in 1996, and he might have changed his mind since.
Moreover, your “trust your intuitions sometimes” is misleading: it is still not simply trusting your intuitions, it is trusting them only in cases where there is data suggesting that intuition gives better results in similar situations. There is data behind it; the intuition is not taken for granted.
As Popper wrote, sensory data comes through organs that aren’t ‘perfect’ sensors. Our brain is also not a ‘perfect’ thinker. We know all that thanks to our knowledge of evolution, and that is the starting point of Popper. Popper didn’t have Kahneman’s or Pearl’s work, but he still encouraged critical examination of hypotheses while not treating intuitions as given (only as hypotheses, and only if they were falsifiable), and falsification is still the basis of science at this moment.
You can test independence. There is a ton of frequentist literature on hypothesis testing, and Bayesian methods too, of course. Did you mean something else?
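For example, here is a minimal sketch of a frequentist independence test using scipy; the contingency table is invented purely for illustration:

```python
# Minimal sketch: chi-squared test of independence between two
# categorical variables. The counts below are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: group A vs. group B; columns: outcome present vs. absent.
table = [[30, 10],
         [20, 20]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
# A small p-value is evidence against independence of the two variables.
```

Conditional independence (given other variables) is harder, which may be what you had in mind, but unconditional independence between observed variables is routinely testable.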
I wasn’t very clear, and probably misleading. Although I’m not an expert, I “read” Pearl’s book a few years ago (Causality: Models, Reasoning, and Inference; it’s available as a PDF), and it really seemed to me that some independence assumptions were hard to test and were sometimes simply assumed, given the system. It’s also true that I haven’t re-read it in more depth now that I have a bit more knowledge, and I lack the time to do so.
If you have more insight about that, I would love to read it.
Extremely difficult to get into normal people’s heads: that striving to become more rational has nothing to do with becoming “less emotional.”
Part of that is because LW redefined rationality to mean something different from what it means out there.
Last weekend I was at a birthday party with a lot of LW people, where we did lightning talks. One speaker made the point that he programs better when he shuts off his emotions, because his computer doesn’t care whether he’s angry at it. Even when software tries to give him pleasure by being pretty, that draws him out of his analytic programming mode.
There are people for whom striving to become more rational involves focusing on the analytic mode. Thinking about the tradeoffs is useful. Having words for them is as well.
Do you think “think more clearly” has connotations of “become less emotional”?
Thinking is often seen as opposed to feeling, which is as if eating were opposed to drinking.
I like what Khalil Gibran says: “Your reason and your passion are the rudder and the sails of your seafaring soul. If either your sails or your rudder be broken, you can but toss and drift, or else be held at a standstill in mid-seas.”
Not to me, but laypeople tend to think in terms of a brain-vs.-heart divide.
One of the key abstract memetic components of LW is something along the lines of “use recent research findings about cognition from fields like psychology, artificial intelligence, probability, and economics to improve your own thinking.” That seems like an idea worth spreading. People could use that idea even if they didn’t accept or understand specific individual techniques like value-of-information calculations.
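To give a taste of one such technique, here is a minimal sketch of a value-of-information calculation (expected value of perfect information); the actions, states, probabilities, and payoffs are all invented for illustration:

```python
# Minimal sketch of an expected-value-of-perfect-information (EVPI)
# calculation. All numbers are invented for illustration.

# Two possible world states with prior probabilities.
priors = {"good_market": 0.4, "bad_market": 0.6}

# Payoffs of each action in each state.
payoffs = {
    "launch":      {"good_market": 100, "bad_market": -50},
    "dont_launch": {"good_market": 0,   "bad_market": 0},
}

def expected_value(action):
    # Expected payoff of an action under the prior over states.
    return sum(priors[s] * payoffs[action][s] for s in priors)

# Best we can do deciding *now*, without further information.
ev_without_info = max(expected_value(a) for a in payoffs)

# Best we can do if we could learn the true state *before* acting:
# in each state pick the best action, weighted by the state's prior.
ev_with_info = sum(
    priors[s] * max(payoffs[a][s] for a in payoffs) for s in priors
)

evpi = ev_with_info - ev_without_info
print(f"EV without info: {ev_without_info:.1f}")   # 10.0
print(f"EV with perfect info: {ev_with_info:.1f}") # 40.0
print(f"Value of perfect information: {evpi:.1f}") # 30.0
```

The point is the shape of the calculation, not the numbers: information is worth at most the gap between deciding now and deciding after you know the truth.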
A little knowledge is a dangerous thing.
On the other hand keep in mind that many research findings in medicine (and especially psychology) fail to replicate.
That is not even the biggest issue. It is more that they run an experiment with 28 upper-middle-class white American psychology students and deduce that people in general are made happier by experiences than by possessions.
The idea that I find least entangled but still very potentially beneficial is that politics is the mind-killer. I realize it’s an old sequence, and it doesn’t have much traction here (since LW is ostensibly un-killed minds).
It is a powerful slogan, but it could be unpacked into people having different goals. Sometimes the goal is to find truth. Sometimes it is to find the policies with the best outcomes. Sometimes it is to enjoy the thrill of tribe fighting against tribe. That is actually a cool hobby when it happens on a football field or basketball court, but when the teams are called Team Life and Team Choice and the ball is abortion, it is more problematic: all three goals get mixed into one. It is usually better to keep these activities separate, and I think that is the core lesson.
The difference being that on a football field or basketball court, there is a settled outcome of competition, and no sincere value attached to certain outcomes. An average person might prefer that their chosen sports team wins, but I think they would acknowledge that it does not make the world a better place. In politics, however, the preference that a chosen team wins is very closely tied to the view that the win is beneficial for everybody.
An idea that I think would be very helpful to people—and relatively simple to grasp—is the idea of tribalism, and how much it really motivates us, even to this day. Not just that politics is the mindkiller, but why. I think if more people were able to take a step back every once in a while and think, “Hey, I don’t even care about or like this idea...why am I defending it? Because it’s an idea that I think a group I consider myself a part of holds, and by attacking one idea of my tribe, it seems like you’re attacking every idea of my tribe? Does this make sense?” then the world would be a much more friendly place, at least.
Understanding the distinction between the map and the territory. And understanding that there are different levels of maps.
If you go to CFAR’s webpage and (I think) look at one of Michael Smith’s interviews, he says that’s the one thing he wants people to take away from CFAR.