This post is primarily targeted towards people trying to develop rationality, either as a personal skill or as an overall field/artform.
Could you clarify whether you disagree with the claims I explicitly make (or don’t make) in the appendix?
Why do you exclude one of the most important cognitive algorithms, “sifting out the good from the bad”, from “the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions”? If you are not good at critical thinking, how do you know that LW is not complete bullshit?
fyi I explicitly included this, I just warned that it wouldn’t necessarily pay off in time to help
The word ‘rational’ is properly used to talk about cognitive algorithms which systematically promote map-territory correspondences or goal achievement.
I disagree with the definition “systematically promote map-territory correspondences” because for me it is “maps all the way down”: we never perceive the territory directly, we perceive and manipulate the world via models (maps). Finding models that work (that enable goal achievement/winning) is the essence of intelligence. “All models are wrong, some are useful.” Even if we get to the actually elemental parts of reality and can essentially equate our most granular map with the territory out there, in practice we still mostly won’t care about this perfect map, because it is computationally intractable. Take Newtonian Mechanics and General Relativity, for example. We know that General Relativity is “truer”, but we don’t use it for calculating pendulum dynamics at the Earth’s surface; the differences it models are just irrelevant compared to other, more relevant effects.
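To make the pendulum point concrete, here is a minimal back-of-the-envelope sketch in Python (the pendulum length, bob speed, and target comparison are my own illustrative assumptions, not numbers from the discussion):

```python
import math

# Newtonian small-angle pendulum period: T = 2*pi*sqrt(L/g)
L = 1.0        # pendulum length, m (illustrative assumption)
g = 9.81       # gravitational acceleration at the Earth's surface, m/s^2
c = 2.998e8    # speed of light, m/s

T_newton = 2 * math.pi * math.sqrt(L / g)

# Order-of-magnitude relativistic corrections to timekeeping:
v_max = 1.0                      # typical bob speed, m/s (illustrative assumption)
special = 0.5 * v_max**2 / c**2  # time dilation from motion, ~6e-18
general = g * L / c**2           # gravitational time dilation over height L, ~1e-16

print(f"Newtonian period: {T_newton:.3f} s")
print(f"fractional correction from motion:  ~{special:.0e}")
print(f"fractional correction from gravity: ~{general:.0e}")
```

Both corrections come out around 10⁻¹⁶ or below, which is exactly the “irrelevant compared to other more relevant stuff” point: air drag, string elasticity, and plain measurement error dominate by many orders of magnitude.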
Second: I’m mostly making an empirical claim as to what seems to happen to individual people (and more noticeably to groups-of-people) if they focus on the slogan “rationality is winning.”
This is the core claim I think!
The feedback loops are long/slow/noisy, which makes it hard to learn if what you’re trying is working.
Definitely! If the feedback loops are long, slow, and noisy, then learning is long, slow, and noisy. That’s why I give examples of areas where the feedback loops are short, fast, and have very little noise. These are examples that worked for me with astonishing efficiency; I would not be the person I am otherwise. And I chose these areas explicitly for this reason.
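A minimal sketch of why this holds even in the best case (my illustrative numbers, not from the post): the standard error of a noisy estimate shrinks only as 1/√n, so the number of feedback cycles you need grows with the square of the noise, and each slow cycle multiplies the total calendar time.

```python
def cycles_needed(noise_sd: float, target_se: float = 0.05) -> float:
    # Standard error after n independent feedback cycles is noise_sd / sqrt(n),
    # so reaching target_se requires n ~ (noise_sd / target_se)^2.
    return (noise_sd / target_se) ** 2

for sd in (0.1, 1.0, 10.0):
    n = cycles_needed(sd)
    print(f"noise sd {sd:5.1f} -> ~{n:8,.0f} cycles; "
          f"at one cycle per week that's ~{n / 52:,.0f} years")
```

With clean, fast feedback you get thousands of cycles a year; with noisy feedback arriving on a yearly cadence, a lifetime may simply not contain enough cycles to learn from.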
If you set out to systematically win, many people end up pursuing a lot of strategies that are pretty random. And maybe they’re good strategies! But bucketing all of them under “rationality” starts to deflate the meaning of the word.
“pretty random” sounds to me like the exact opposite of rational and winning)
People repeatedly ask “but, isn’t it rationality to believe false things?”
Here I make an extremely strong claim: it is never rational to believe false things. Personal integrity is the cornerstone of rationality and winning. This is a blogpost-scope topic, so I won’t go into it further right here.
Similarly and more specifically: a lot of things-that-win in some respects are wooy, and while I think there’s in fact good stuff in some woo, the first generation of rationalists exploring that woo were rationalists with a solid epistemic foundation. Subsequent generations came more for the woo than for the rationality (see Salvage Epistemology).
“Woo” is stuff that doesn’t fit into your clear, self-consistent world model. There is a lot of useful stuff out there that you guys ignore! The Copenhagen interpretation, the humanities, biology, religion, etc. If you don’t understand why it makes sense, you don’t understand it, full stop. I believe that mining woo for useful stuff is exactly how you do original research. It worked wonders for me! But integrity goes first! You shouldn’t just replace your model with the foreign one or do “model averaging”; you should grok what those guys get that you are missing and incorporate it into your model. Integrity and good epistemology are a must; if you don’t have those yet, don’t touch woo! This is power, aka dark arts, and it will corrupt you.
In both the previous two bullets, the slogan “rationality is winning” is really fuzzy and makes it harder to discern “okay, which stuff here is relevant?”. Whereas “rationality is the study of cognitive algorithms that systematically arrive at truth and succeed at your goals” at least somewhat…
I go for “rationality is cognitive algorithms that systematically succeed at your goals”.
Third: The valley of bad rationality means that the study of systemized winning is not guaranteed to actually lead to winning, even on net over the course of your entire lifetime.
In my experience there is a valley of bad X for every theory X. This is what you have to overcome. I agree that many perish in it. But for those who pass, the success is well worth it. I think we should add more “here be dragons” and “most of you will perish” and “like, seriously, 90% will do worse off by trying this”. It’s not for everybody; you need to have character.
Fourth: Honestly, while I think LessWrong culture is good at epistemics, addressing motivated cognition, and some similar things… I don’t have a strong reason to believe that we are particularly good at systematically winning across domains (except in domains where epistemics are particularly relevant).
I am really sorry to say this. I love LW, I took a lot from it, and I deeply respect a lot of people here, I mean like genius-level, but yep, LW sucks at winning, and you are not even good at epistemics in the areas that matter most for you. Let’s do something about it, let’s win?)
So, fifth: to answer df fd’s challenge here:
I got into Rationality for a purpose; if it is not the best way to get me to that purpose [i.e. not winning], then Rationality should be cast down and the alternative embraced.
A lot of my answer here is “sure, that might be fine!” I highly recommend you focus on winning, and use whatever tools are appropriate, which sometimes will be “study/practice cognitive algorithms shaped” and sometimes will have other shapes.
I do agree there is a meta-level skill of figuring out what tools to use, and I do think that meta-level skill is still pretty central to what I call rationality (which includes “applying cognitive algorithms to make good decisions”). But it’s not necessarily the case that studying that skill will pay off.
Linguistically, I think it’s correct to say “the rational move is the one that resulted in you winning (given your starting resources, including knowledge)”, but, “that was the rational move” doesn’t necessarily equal “‘rationality’ as a practice was helpful.”
Hope that helps explain where I’m coming from.
This one I just agree with.
fyi I explicitly included this, I just warned that it wouldn’t necessarily pay off in time to help
I see from the 5th point that you explicitly included it; sorry for missing it. I tend to get really stuck on writing good, deliberate replies, so I explicitly decided to contribute whatever I realistically can.
I still stand by the position that this one (I call it critical thinking) should come first. It’s true that there is no guarantee it will pay off in time for everybody. But if you miss it, how do you distinguish between woo and rationality? I think you are just doomed in that case. Here be dragons; most of you will perish on the way.