Personally, I am strongly against this:
I got into Rationality for a purpose; if it is not the best way to get me to that purpose [i.e. not winning], then Rationality should be cast down and the alternative embraced.
On the other hand, I suspect we mostly agree, and our disagreement is over the definition of the word “winning”.
I could have failed reading comprehension, but I did not see “winning” defined anywhere in the post.
Okay, I’ve updated that I should be a bit more clear about which claims I’m specifically making (and not making) in this post.
First, I want to note that my definition of rationality here is not new: it’s basically how Eliezer described it in 2012, and I’m pretty confident it’s what he meant when writing most of the sequences. “Eliezer said so” isn’t an argument, but it looks like some people might feel like I’m shifting goalposts and changing definitions, and I claim I am not doing that. From Rationality: Appreciating Cognitive Algorithms:
The word ‘rational’ is properly used to talk about cognitive algorithms which systematically promote map-territory correspondences or goal achievement.
He notes that in the sentence “It’s rational to believe the sky is blue”, the word rational isn’t really doing any useful work. The sky is blue. But by contrast, the sentence “It’s (epistemically) rational to believe more in hypotheses that make successful experimental predictions” is saying something specifically about how to generate beliefs, and if you removed “rational” you’d have to fill in more words to replace it.
I think there’s something similar going on with “it’s rational to [do an object-level-strategy that wins].” If it just wins, just call it winning, not rationality.
...
Second: I’m mostly making an empirical claim as to what seems to happen to individual people (and more noticeably to groups-of-people) if they focus on the slogan “rationality is winning.”
It’s hypothetically possible for it to be true that “rationality is systemized winning”, and for it nonetheless to be subtly warping to focus on that fact. The specific failure modes I’m worried about are:
The feedback loops are long/slow/noisy, which makes it hard to learn whether what you’re trying is working.
People who set out to systematically win often end up pursuing a lot of strategies that are pretty random. And maybe they’re good strategies! But bucketing all of them under “rationality” starts to deflate the meaning of the word.
People repeatedly ask “but, isn’t it rational to believe false things?”. And my answer is “maybe, for some people? I think you should be really wary of doing that, but there’s certainly no law of the universe saying it’s false.” But this gets particularly bad as a way to orient as a group. The first generation of people who came for the epistemics maybe has decent judgment on when it’s okay to ignore epistemics. The second generation, who comes for “systemized winning, including maybe ignoring epistemics?”, has less ability to figure out whether they’re actually winning, because they can’t reason as clearly.
Similarly and more specifically: a lot of things-that-win in some respects are woo-y, and while I think there’s in fact good stuff in some woo, the first generation of rationalists exploring that woo were rationalists with a solid epistemic foundation. Subsequent generations came more for the woo than for the rationality (see Salvage Epistemology).
In both of the previous two bullets, the slogan “rationality is winning” is really fuzzy and makes it harder to discern “okay, which stuff here is relevant?”. Whereas “rationality is the study of cognitive algorithms that systematically arrive at truth and succeed at your goals” at least somewhat narrows down which stuff is relevant.
...
Third: The valley of bad rationality means that the study of systemized winning is not guaranteed to actually lead to winning, even on net over the course of your entire lifetime.
Maybe civilization, or your local culture, just has too many missing pieces for the deliberate study of systematic winning to be net-positive. Or maybe you can make some incremental progress, but hit a local optimum, and the only way to improve further is to invest in skills that will take too long to pay off.
...
Fourth: Honestly, while I think LessWrong culture is good at epistemics, addressing motivated cognition, and some similar things… I don’t have a strong reason to believe that we are particularly good at systematically winning across domains (except in domains where epistemics are particularly relevant)
So while it might be true that “The True Spirit of Rationality” is systemized winning, and epistemics is merely subservient to that… it’s nonetheless true that if you’re showing up on LessWrong or in other rationalist spaces, I think you’ll be kind of disappointed if you’re hoping to learn skills that will help you win at life in a generalized sense.
I do still think “more is possible”. And I think there is “alpha” in epistemics, such that if you invest a lot in epistemics you will find a set of tools that the rest of the world is less likely to find. But I don’t have a belief that this’ll pay off that hard for any specific person.
(side note: I think we have maybe specialized reasonably in “helping autistic nerds recover their weak spots”, which means learning from our practices may give you an initial growth spurt, but then the gains level off)
...
So, fifth: Regarding your claim here:
I got into Rationality for a purpose; if it is not the best way to get me to that purpose [i.e. not winning], then Rationality should be cast down and the alternative embraced.
A lot of my answer here is “sure, that might be fine!” I highly recommend you focus on winning, and use whatever tools are appropriate, which sometimes will be “study/practice cognitive algorithms”-shaped and sometimes will have other shapes.
I do agree there is a meta-level skill of figuring out what tools to use, and I do think that meta-level skill is still pretty central to what I call rationality (which includes “applying cognitive algorithms to make good decisions”). But it’s not necessarily the case that studying that skill will pay off. And it’s not necessarily the case that focusing on cultivating that skill as a community will pay off harder than alternatives like “specialize in a particular sub-domain”.
Linguistically, I think it’s correct to say “the rational move is the one that resulted in you winning (given your starting resources, including knowledge)”, but, “that was the rational move” doesn’t necessarily equal “‘rationality’ as a practice was helpful.”
Hope that helps explain where I’m coming from.
I figure this is as good a place as any to flag a jargon-y nuance in the post title.
The post title is “Rationality !== Winning”, not “Rationality != Winning”. Programming languages differ in the details, but typically “!=” means “X is not equal to Y”, while “!==” means “X is not exactly equal to Y”: the stricter check, for when there are edge cases in what exactly counts as ‘equal’.
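For readers who haven’t run into the distinction, here is a minimal sketch of how it plays out under JavaScript/TypeScript semantics; the values are my own, chosen purely for illustration:

```typescript
// Loose vs. strict (in)equality, JavaScript/TypeScript semantics.
// `any` is used only so the compiler permits the cross-type comparison.
const a: any = 1;
const b: any = "1";

console.log(a == b);  // true  -- loose equality coerces the string "1" to the number 1
console.log(a != b);  // false -- so in the loose sense, they are not unequal
console.log(a === b); // false -- strict equality also requires the types to match
console.log(a !== b); // true  -- "not *exactly* equal", the sense the post title leans on
```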
I think there is some sense in which Rationality is Winning, but I don’t think it’s exactly equal to winning, and the difference has some implications.
Actually, I missed this one. I agree with you.
I would edit this into the main post. I am a programmer, but I missed it.
Raemon, I’ve had a long time to think about this and I wanted to break down a few points. I hope you will respond and help me clarify where I am confused.
By expected value, don’t you mean it in the mathematical sense? For example, take a case where, in a casino game, you have a slight edge in EV. (This happens when the house gives custom rules to high rollers, on roulette with computer assistance, and in blackjack.)
This doesn’t mean an individual playing with positive EV will accumulate money until they are banned from playing. They can absolutely have a string of bad luck and go broke.
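Here is a minimal sketch of that point, assuming a 51% win probability, a $100 bankroll, and repeated $1 even-money bets (numbers chosen purely for illustration): every bet has positive EV, yet a small but real fraction of bettors still go broke.

```typescript
// Gambler's ruin with a positive edge: each bet wins $1 with probability 0.51
// and loses $1 otherwise (EV = +$0.02 per bet), yet a bettor starting with
// $100 can still hit zero before the edge has time to pay off.
function wentBroke(winProb = 0.51, startingBankroll = 100, maxBets = 10_000): boolean {
  let bankroll = startingBankroll;
  for (let i = 0; i < maxBets && bankroll > 0; i++) {
    bankroll += Math.random() < winProb ? 1 : -1;
  }
  return bankroll <= 0;
}

const trials = 10_000;
let ruined = 0;
for (let t = 0; t < trials; t++) {
  if (wentBroke()) ruined++;
}
// Typically prints a small but nonzero percentage (on the order of a couple percent).
console.log(`Went broke in ${((100 * ruined) / trials).toFixed(2)}% of trials`);
```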
Similarly a person using rationality in their life can have bad luck and receive a bad outcome.
Some of the obvious ones: if cryonics has a 15 percent chance of working, then in 85 percent of futures the money was wasted. The current drugs that extend lifespan in rats and other organisms, which the medical-legal establishment is slow-walking in human studies, may not work, or they may work but one of the side effects kills an individual rationalist.
With that said there’s another issue here.
There are the assumptions behind rationality, and then there are the heuristics and algorithms this particular group tries to use.
Assumptions:
World is causal.
You can compute from past events general patterns that can be reused.
Individual humans, no matter their trappings of authority, must have a mechanism by which they could know what they claim to know.
Knowing more of the information relevant to a decision when making that decision improves your odds; it’s not all luck.
Rules that society wants you to follow, but that are not written as criminal law, may not be to your benefit to obey. Example: “go to college first”.
It’s just us. Many human-made things are just made up and have no information content whatsoever; they can be ignored. Examples are the idea of “generations” and of course all religion.
Gears-level models. How does A cause B? If there is no connection, it is possible someone is mistaken.
Reason with numbers. It is possible to describe and implement any effective decision-making process as numbers and written rules, reasoning in the open. You can always beat a human “going with their gut”, assuming sufficient compute.
I have others, but this seems like a start.
Algorithms:
Try to apply Bayes’ theorem (a short worked example appears after this list).
Prediction markets; expressing opinions as probabilities.
What do they claim to know, and how do they know it? This is specific to humans. It lets you dismiss the advice of whole classes of people when they have no empirical support or are paid to work against you.
Examples: psychologists with their unvalidated and ineffective “talk therapy”; psychiatrists, in many cases, with their crude methods of manipulating entire classes of receptors and their lack of empirical tools to monitor attempts at treatment; real estate agents; stock brokers pushing specific securities; and all religion employees.
Note that I would say each of the above is mostly not helpful, but there are edge cases. Meaning: I would trust a psychologist that was an AI system validated against a million patients’ outcomes, I would trust a psychiatrist using fMRI or internal brain electrodes, I would trust a real estate agent who is not incentivized for me to make an immediate purchase, I would trust a stock advice system with open source code, and I would trust a religion employee who can show the communication device they use to contact a deity or their supernatural powers.
Sorry for the long paragraph, but these are heuristics. A truly rational ASI is going to simulate it all out; we humans can at best check whether someone is misleading us by looking for outright impossibilities.
Is someone we are debating even responding to our arguments? For example, authority figures simply don’t engage with questions on cryonics or existential AI risk, or they give meaningless platitudes that do not respond to the question asked. Someone doing this is potentially wrong in their opinion.
Is an authority figure with a deeply held belief that may be wrong even updating that belief as evidence becomes available that invalidates it? Does any authority figure at medical research establishments even know that 21CM recently revived a working kidney after cryopreservation? Would it alter their opinion if they were told?
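To make the Bayes’ theorem item above concrete, here is a minimal worked sketch; the 1% prior, 90% sensitivity, and 5% false-positive rate are made-up numbers, chosen only to show the mechanics of updating on evidence.

```typescript
// Bayes' theorem: P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
const prior = 0.01;           // P(condition): assumed base rate
const pPosGivenCond = 0.9;    // P(positive | condition): assumed sensitivity
const pPosGivenNoCond = 0.05; // P(positive | no condition): assumed false-positive rate

const pPos = pPosGivenCond * prior + pPosGivenNoCond * (1 - prior);
const posterior = (pPosGivenCond * prior) / pPos;

// Prints ~0.154: even a "90% accurate" test leaves you well below 90% confidence
// when the base rate is low.
console.log(posterior.toFixed(3));
```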
If the assumptions are true, and you pick the best algorithm available, you will win relative to other humans in expected value. Rationality is winning.
That doesn’t mean that, as an individual, you can’t die of a heart attack despite the correct diet, while AI stocks are in a winter, so that you never see the financial benefits. (A gears-level model would say that A, AI company capital, can lead to B, goods and services from AI, which then also feeds back into A, and thus owning shares is a share of infinity.)
I’m not sure I understood the point you’re making.
A point which might be related: I’m not just saying “systemized winning still involves luck of the dice” (i.e. just because it’s positive EV doesn’t mean you’ll win). I’m saying “studying systemized winning might be negative EV (for a given person at a given point in history).”
Illustrative example: an aspiring doctor from the distant past might have looked at a superstitious shaman and thought “man, this guy’s arguments make no sense. Shamanism seems obviously irrational”. And the aspiring doctor goes to reason about medicine from first principles… and invents leeching/bloodletting. He might have some methods/mindsets that are “an improvement” over the shaman’s mindset, but the shaman might have generations of accumulated cultural tips/tricks that tend to work even if his arguments for them are really bad. See Book Review: The Secret Of Our Success, although also the counterpoint Reason isn’t magic.
Yes. This is what I was looking for. It makes way more sense now. I broadly agree with everything said here. Thank you for clarifying.
By the way, I think you should consider rewriting the side note re: autistic nerds. I am still a bit confused after reading it.
FWIW, I found the comment crystal clear.
CFAR’s very first workshops had a section on fashion. LukeProg gave a presentation on why fashion was worth caring about, and then folk were taken to go shopping for upgrades to their wardrobe. Part of the point was to create a visible & tangible upgrade in “awesomeness”.
At some point — maybe in those first workshops, I don’t quite recall — there was a lot of focus on practicing rejection therapy. Folk were taken out to a place with strangers and given the task of getting rejected for something. This later morphed into Comfort Zone Expansion (CoZE) and, finally, into Comfort Zone Exploration. The point here was to help folk cultivate courage.
By the June 2012 workshop I’d introduced Againstness, which amounted to my martial arts derived reinvention of applied polyvagal theory. Part of my intent at the time was to help people get more into their bodies and to notice that yes, your physiological responses actually very much do matter for your thinking.
Each of these interventions, and many many others, were aimed specifically at helping fill in the autistic blindspots that we kept seeing with people in the social scene of rationalists. We weren’t particular about supporting people with autism per se. It was just clear that autistic traits tended to synergize in the community, and that this led to points of systematic incompetence that mattered for thinking about stuff like AI. Things on par with not noticing how “In theory, theory and practice are the same” is a joke.
CFAR was responsible for quite a lot of people moving to the Bay Area. And by around 2016 it was perfectly normal for folk to show up at a CFAR workshop not having read the Sequences. HPMOR was more common — and at the time HPMOR encouraged people toward CFAR more than the Sequences IIRC.
So I think the “smart person self-help” tone ended up defining a lot of rationalist culture at least for Berkeley/SF/etc.
…which in turn I think kind of gave the impression that rationality is smart person self-help.
I think we did meaningfully help a lot of people this way. I got a lot of private feedback on Againstness, for instance, from participants months later saying that it had changed their lives (turning around depression, resolving burnout, etc.). Rejection therapy was a game-changer for some folk. I think these things were mostly net good.
But I’m with Raemon on this: For good rationality, it’s super important to move past that paradigm to something deeper. Living a better life is great. But lots of stuff can do that. Not as many places have the vision of rationality.
That’s because it isn’t defined; insofar as rationality is systematically winning, it is meant to be true for arbitrary definitions of winning.