what clues do I tend to notice when my rationality level is going up, relative to other people?
How do you distinguish your rationality going up from you becoming ossified in your beliefs with the increased conviction that other people are wrong and stupid?
This I expect to be pretty universal, so if you think about how you do it you’ll have a good idea. I’m still going to answer though. Briefly, it seems to be a combination of:
monitoring effectiveness, increase in ability to solve actual problems and make predictions,
intuition, sense of elegance, feeling that the theory “clicks”,
checking against other people, both by listening to them and penalizing any solution that breaks existing rules/trends.
The problem is, if I go solely by internal perceptions/feelings I can’t reliably distinguish the cases where I’m a beacon of light and reason and where I’m an arrogant self-deluding idiot. What I need is real-life testing.
So yes, I agree with the “effectiveness” point, but at least in my case I have doubts about elegance and “clicks”. To figure out whether something “clicks” is easy for me, so that’s an early threshold an idea/theory/explanation has to pass. And “checking against other people” is not terribly useful because if I’m right then they are doing it wrong so the check will only confirm that we see things differently.
All of this is true. Though in many cases when people “are doing it wrong” you find not that they have opinions opposed to you, you find that they don’t have any consistent opinion at all. Which makes it OK to stick with your version until you find something better.
I’d mention that in many cases the best thing to do might be to lay off the topic for some time, work on other problems, improve your overall thinking, check facts known from respectable science, wait for your feelings of attachment to die, and revisit the original topic with a fresh perspective much later.
This can be repeated many times, and I guess it’s actually the core of my description of caring about “pastures”. This is a kind of a meta-technique that seems to be central to not becoming “stuck” in stupidity.
Well, they might not be expressing any consistent opinion, but if they are doing the same thing over and over, then there is a clear implied position (similar to revealed preferences).
Might be—unless you need to make a decision in the near future. If the topic is something you can ponder for a long time without needing to come to any conclusions, well, the question that comes to my mind is “Are you sure it’s important?” :-/ (yes, I know that’s not applicable to science)
That’s also frequently happening with people adopting wrong beliefs.
Yes. I’m not claiming to be infallible, but I also suppose that having done a lot of abstract math helps me to know good thinking when I see it. Especially in cases when I can go deep enough and follow the whole thing from “first principles”.
Being convinced that a single theory derived from first principles explains everything about a complex domain seems to me like having a hedgehog perspective on the domain.
That means you are unlikely to be very good at predicting over the domain, according to Tetlock’s findings.
You are jumping to assumptions about what I do, and how I think.
Well, thanks for the warning anyway. It’s good to keep it in mind.
That’s part of trying to understand what somebody else thinks. It’s good to make assumptions to prevent a statement from being too vague to be wrong. If you think I made incorrect assumptions, feel free to say so and correct them.
I’m not SquirrelInHell, but I’ll point out what looks to me like one substantial misunderstanding.
SIH said that s/he finds that mathematical training gives a good sense of good versus bad thinking in cases of the “rigorous reasoning from first principles” kind. You responded as if SIH were claiming to be explaining everything about a complex domain using such reasoning, but s/he made no such claim.
Perhaps this analogy will help. Suppose I write something about improving my abilities in graphic design, and am asked how I distinguish genuine improvements from (say) mere increases in arrogance. I list a number of criteria for distinguishing one from the other, and one of them is something like “When the design has a strong short-term commercial focus, like an advertisement or a page on a merchant’s website, we can measure actual sales or conversions and see whether I’ve successfully increased them”. And then you object that it’s wrong to reduce everything to counting money. So it is, but that doesn’t mean that when something is about money and it can be counted you shouldn’t do so.
The situation here is just the same. Not everything is about careful logical reasoning from first principles, but when things are, a good sense of whether they’re correct is helpful. And yes, mathematicians are good at this. (I don’t know how much of that is selection and how much is training.)
That’s not the only claim. If you look at the post, there’s the claim that there’s polarization: that becoming more rational makes him see fewer shades of gray, that “two sensible sounding ideas become one great idea and one stupid idea”.
For that to happen he has to call ideas that are in line with his theory derived from first principles great, and ideas that are not in line with it stupid. Let us take an example. An aspiring rationalist finds that status is important for social interactions. He then rethinks all of his thinking about social interactions based on the first principle of status. That person will see the signs that SquirrelInHell described in the OP as signs of increased rationality about the domain.
Or take one of those libertarians who try to boil down all of politics to being about violence. That produces those signs that SquirrelInHell describes but has nothing to do with real rationality.
It’s the one I thought you were responding to.
My interpretation was that all those signs are potentially separate; in a given place, some will apply and some won’t. The situation you describe applies, at most, to those cases that (a) SquirrelInHell thinks are resolvable from first principles and (b) SquirrelInHell now feels more polarized about.
So let’s suppose we’re only talking about those cases—but note, first, that there’s no reason to think that they’re very common. (If SquirrelInHell finds that most cases are like that, then I agree that may be a bad sign.)
In that case, I agree that it is possible to go wrong by leaping into some oversimple crackpot theory. But so what? SIH listed intuition/elegance/”clicking” as just one of several signs to distinguish real from fake improvements. Any one of them may lead you astray sometimes. (All of them collectively may lead you astray sometimes. Sometimes the world just screws you over.) The question is not “can I think of counterexamples?”—of course you can—but “will this heuristic, overall, make you more or less accurate?”.
I don’t know whether SquirrelInHell has watched to see whether that sense of elegance does actually correlate with correctness (either globally or in some particular cases—heuristics can work better in some situations than others). For that matter, I don’t know whether you have (but SIH’s sense of elegance might differ from yours).
Suppose, as per your first example, someone runs across the notion of social status and completely reframes his thinking about social interactions in terms of status. They may, as you say, feel that “everything makes sense now”, even though in fact their thinking about social interactions may have become less effective. So let’s look at the other signs SquirrelInHell lists. Does our hypothetical would-be-rationalist become more effective in interacting with others after this status epiphany? (If so, I would take that as evidence that “it’s all status” is a better theory than whatever s/he was working with before. Wouldn’t you?) Does discussion with other people throw up obvious problems with it—especially obvious problems that the previous theory didn’t have? (If so, then again I would take that as evidence in favour; wouldn’t you?)
Note that for “it’s all status” to be an improvement in rationality it’s not necessary for “it’s all status” to be correct. Only that it be more correct than whatever our hypothetical would-be-rationalist thought before. (Kepler noticed that planets seem to move in approximately elliptical orbits with certain nice properties. This was wrong—because of the gravitational effects of other bodies besides the sun and the planet, and because Newtonian physics is wrong—but it was a lot better than what had come before.)
Thank you for arguing calmly and patiently. I don’t trust myself to do this, seeing how I have already failed once to keep my composure in my line of discussion with ChristianKl.
If it helps, I can imagine how it feels.
It looks to me that you tried to answer a question that is really complex and subjective. Of course you don’t have a simple equation where you could just put numbers and say “well, if the result x is positive, it means my rationality has increased; if it is zero, it stayed the same; and if it is negative, it has actually decreased”. Instead you looked into your mind and noticed a few patterns that frequently appear in situations where you believe you have become more rational. And then you put it on paper.
In return, instead of discussion like “wow, it feels the same to me, I am so surprised, I thought I was the only person who feels like this” or “for me it is completely different; I usually don’t notice anything immediately, but later other people start telling me that I have become smarter, or the smart people whom I respect a lot suddenly become interested in meeting me and talking with me”… in other words, instead of repaying your introspection and sharing with other people’s introspection and sharing… you got hit by a full-speed Vulcan train. “Your evidence is not 100% reliable, and we are going to assume that you are an idiot unaware of this.” You exposed your sensitive belly, and you got kicked there. (It’s not a coincidence that the critics have carefully avoided saying anything about how improving rationality feels to them, and only focused on dissecting you. That’s how one plays it safe.)
Yeah, it sucks.
EDIT: And then it’s funny to scroll the page down and see a comment saying it’s “ordinary and uncontroversial”.
Wow. You are good at empathy.
I’m responding to a mental model of his position based on what he wrote. No single statement is responsible for the full model.
I don’t think the concern is simply about crackpot theories. It’s about trying to explain everything with one theory. You can do that successfully in physics, but in many contexts you can’t explain everything with one theory.
Yes. I think the heuristic of following the Superforecasting principles is better. That means developing more shades of gray and thinking like a fox instead of like a hedgehog.
The status-hedgehog might be better at a few interactions at the cost of not being able to have genuine connections with others anymore. He would be more effective if he were foxy and said: “Status is important, but there are also other important factors.”
I don’t think that looking for positive real-world effects, or looking at whether discussion with other people throws up obvious problems, are filters that successfully protect against hedgehog thinking.
There’s nothing wrong with using first-principles thinking. If, however, you use it to come up with a view and then call all ideas that align with that view great and all that don’t align stupid, you are making a mistake. You are using a bad heuristic.
No, it isn’t. I traded precision for vividness. Sorry if that caused confusion.
I agree. I see no sign that SIH is any less aware of this, but you’re writing as if you’re confident s/he is.
These are heuristics that apply in different situations, and not alternatives to one another. Perhaps we’re at cross purposes. The heuristic I have in mind is “in situations where first-principles deductive reasoning seems appropriate, trust my sense of good reasoning that’s been trained by doing mathematics”, and not anything like “in general, expect to find good deductive first-principles models that explain everything”. The latter would be a terrible heuristic; but, again, I see no reason to think that SquirrelInHell is either using or advocating it.
In any case, I think you are making the same mistake as before. SIH says “here are some signs of improving rationality”, and you object that you could exhibit those signs while shifting to a position that’s suboptimal. But a position can be both suboptimal and better than what came before it.
Sure. And it looks to me as if you are taking SquirrelInHell to be either advocating that heuristic or admitting to using it regularly, and that just doesn’t seem to me to be true.
Actually, I’m going to qualify that “sure” a bit. I use first-principles thinking to determine that there is no integer whose square ends in 2 when written in decimal notation. If someone thinks otherwise then I call them wrong (I’m usually too polite to use words like “stupid”, but I might think them). There is nothing wrong with this.
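(For concreteness, the first-principles argument behind that claim is short: the last digit of n^2 depends only on the last digit of n, so checking the ten possible last digits settles it. A minimal sketch of that check, added purely as an illustration; the variable names are mine, not part of the thread:)

    # Illustrative check: the last digit of n*n depends only on n % 10,
    # so enumerating the ten cases shows which digits a square can end in.
    last_digits_of_squares = sorted({(d * d) % 10 for d in range(10)})
    print(last_digits_of_squares)  # [0, 1, 4, 5, 6, 9]; 2 never appears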
SIH writes about himself getting polarized, starting to judge ideas as either great or stupid, and then feeling the itch to preach to people about how wrong they are.
That’s usually what happens with someone who focuses on one theory. It’s a sign that that’s what he’s doing. It’s not useful to see either of those two factors as signs of increased rationality, because that means you orient yourself toward becoming a hedgehog in more domains.
At the moment neither he nor you has provided a justification for why the heuristic of seeing those things as a sign of increased rationality is useful. Instead he tries to dodge having a real discussion in various creative ways.
If you read what I wrote, you’ll see I consciously added the word “complex” to indicate that I don’t object to that usage.
I think that’s a fair criticism. But you’re making it deep in a subthread that started with an entirely different and much less fair criticism of a different part of what he said.
From the outside, it looks to me as if you’re looking more for a status-fight than for a real discussion with SIH. I find it unsurprising if he responds defensively.
(My perception could be dead wrong, of course. The possibility of such errors is, I take it, one reason why the conventions of polite discussion in many societies include measures to make things look less like status-fights.)
Being more explicit might have helped. I, and I’m guessing also SquirrelInHell, took you to be saying not “This may work well in some relatively simple and clear-cut domains, but in more complex ones it can cause trouble” (with which I guess SIH would have agreed) but something more like “Obviously you’re using this heuristic in complex domains where it doesn’t belong; how silly of you”.
As for its application to my comment: your insertion of the word “complex” was 8 comments upthread and a major theme of the intervening discussion has been the possibility that you assumed SIH was intending to apply the “feels simple and elegant” heuristic to a wide range of complex human situations when in fact he was intending to apply it only to simpler situations more amenable to first-principles analysis. So I really don’t think it is reasonable for you to suggest that when you now say (without any qualification as to complexity) “if you do X you are making a mistake and using a bad heuristic”, I should just assume you are only referring to the class of situations in which, so far as I can tell, we all agree that doing X is likely to be a bad idea.
I agree 100% that I’m not giving a “justification why the heuristic of seeing those things as a sign of increased rationality is useful”.
My answer is that I never intended for what I’m writing to be useful in this way.
I think it becomes anti-useful if you use it as a set of pointers about what is more “rational”.
I indicated this much in my “notes”, as clearly as I could.
If you look at this thread you’ll see that the first post I wrote explicitly thanked him and was far from a status-fight. As his attempts to dodge debate increased, I used stronger language.
If that’s the case then SIH should be criticized for not making it clear in his OP that he is talking about simple situations. For me, treating the OP as being about complex situations and noting that explicitly is completely reasonable.
If he writes a vague post that doesn’t make it clear whether he means complex or simple domains, it’s very reasonable for me to say: “I’m assuming you mean complex domains, and here’s what follows from that...” That brings him into a discussion to clarify what he means if the assumption doesn’t apply. I’m bringing the discussion forward by making that assumption. In this case he instead tried to dodge the debate.
[Before I say anything else, an entirely separate thing: I have consistently been typing your name as “ChristianKI” when in fact it’s “ChristianKl” (the two are pixel-for-pixel identical on my screen, but others using different fonts may see the difference—the wrong one has a capital eye at the end, the right one a lowercase ell). My apologies for getting your name wrong.]
OK, I agree. I’d either not read that one, or forgotten it (it was in a separate thread starting from a different top-level comment on SIH’s post).
Maybe I’m missing something, but this doesn’t look like an accurate description. The actual sequence appears to be (times as displayed to me by LW—I’m not sure what it does about timezones):
18th, 11:43: friendly comment from CK (which gets a friendly response from SIH; no further discussion there; everything else is in a different thread).
18th, 15:52: challenge from Lumifer (increased rationality versus increased ossification).
18th, 22:24: SIH replies to L listing indications (observed better effectiveness, sense-of-elegance, consonance with others’ opinions).
19th, 11:41: CK picks out one of SIH’s indications (sense-of-elegance) and says “That is also frequently happening with people adopting wrong beliefs”.
19th, 12:20: SIH replies (not claiming infallibility; mathematical experience hones one’s sense of elegance, especially in first-principles cases).
So far, nothing is notably either hostile or evasive, at least to my eyes.
19th, 12:39: CK replies (“seems like having a hedgehog perspective”, “you are unlikely to be very good at predicting”).
This is where I first get the impression of status-fighting. You seem to leap to the assumption that SIH wants to use first-principles reasoning where it doesn’t belong, with (so I still think) no actual justification; you express your criticisms personally (“you are unlikely …”).
19th, 13:12: SIH says CK is jumping to conclusions, and thanks you for the warning.
Doesn’t seem to me either hostile or evasive (though I think it would have been better if he’d said what wrong conclusions he thought you were jumping to).
19th, 13:46: CK defends conclusion-jumping and invites SIH to say what wrong conclusions.
FWIW I tend to disagree with the idea that conclusion-jumping is a good way to find out what someone means, but I don’t see anything either hostile or evasive here.
19th, 21:31: SIH says CK is making a fully general counterargument and challenges CK to argue against his own position.
That’s a weird move, and SIH himself has said (see his edit to that comment) that it was a mistake.
From this point I think the prospects of useful discussion were very poor because both parties were trying to win rather than to understand and arrive jointly at truth.
“Seems” is a word to make the statement less strong.
The statement provides two productive ways for the discussion to continue:
a) He says that I’m mistaken in thinking he advocates hedgehog-style thinking. b) He defends hedgehog-style thinking as good.
Both of those alternatives lead the discussion to a more substantive place that’s less vague. Not wanting to take either of those positions but instead criticizing the fact that there’s an assumption is evasive.
You are certainly missing the direct messages started by SIH.
Obviously I can’t comment on any private messages between the two of you.
Has anyone noticed that ChristianKl is explaining everything with one theory that says it’s bad to explain everything with one theory? ;)
There you are wrong. I’m not drawing on a single theory in this discussion. It’s the lesson from BPS debating that smart people can find good arguments for any position. It’s Tetlock’s theory of superforecasting. It’s Eliezer’s “Policy Debates Should Not Appear One-Sided”. It’s the general case for scientific pluralism as made by Kuhn and other HPS people.
That’s four theories that I’m thinking about actively, and there would likely be more if I spent more time digging.
Lastly, this thread isn’t “everything”. I write a lot. It’s a mistake for you to assume that the tiny bit of my writing that you have read is everything.
Now you have made a general point that can be easily argued both ways.
Tell me the strongest counter-arguments you can think of against what you just said.
(I predict you will agonize over this, produce strawmen, and have a strong impulse to dodge my request. Am I wrong?)
Edit: This was a bad way to handle this on my part, and I regret it. The flip side to ChristianKl’s statement is probably obvious to anyone reading this (confirmed with a neutral third party), and I wanted to somehow make ChristianKl see it too. I don’t know a good way to do this, but what I wrote here was certainly not it.
Why do you think that would be helpful?
It seems to me like you don’t want to engage in discussion. As a result it doesn’t make sense for me to try to find counter-arguments against what I’m saying.
Notice how I made a successful prediction that you will try to dodge my request.
It would be helpful to you, if you want to improve your rationality, as opposed to feeling good.
Edit: I retract this, since it is not a helpful way to advance the discussion.
That happens to be false. You predicted something related but different. But predicting that people won’t go along with unreasonable requests doesn’t require much skill.
It’s also interesting that you call it dodging when I ask you to provide reasons for why you think what you recommend is good.
I don’t see how going along with people who are evasive generally increases my rationality. In general, the Sequences also recommend against playing devil’s advocate and don’t see it as raising rationality.