PSA: I have realized very recently, after extensive interactive online discussion with rationalists, that they are exceptionally good at arguing. Too good. Probably there’s some inadvertent pre- or post-selection for skill at debating high-concept stuff going on.
Wait a bit before acceding to their position in a live discussion where you start by disagreeing strongly, perhaps for intuitive reasons, and then suddenly find the ground shifting beneath your feet. It took me repeated interactions, where I only later realized I’d been hoodwinked by faulty reasoning, to notice the pattern.
I think in general believing something before you have intuition around it is unreliable or vulnerable to manipulation, even if there seems to be a good System 2 reason to do so. Such intuition is specialized common sense, and stepping outside common sense is stepping outside your Goodhart scope, where the ability to reason reliably might break down.
So it doesn’t matter who you are arguing with, don’t believe something unless you understand it intuitively. Usually believing things is unnecessary regardless, it’s sufficient to understand them to make conclusions and learn more without committing to belief. And certainly it’s often useful to make decisions without committing to believe the premises on which the decisions rest, because some decisions don’t wait on the ratchet of epistemic rationality.
I’m on board with this. It’s a common failure of reasoning in this community, and humanity in general imo: people believing each other too early because of confident-sounding reasoning. I’ve learned to tell people I’ll get back to them after a few nights’ sleep when someone asks me what my update is on a heavily philosophical topic.
“people believing each other too early because of confident sounding reasoning”
That’s a tricky thing: the method advocated in the Sequences is lightness of belief, which helps in changing your mind but also dismantles the immune system against nonsense, betting that with sufficient overall rationality training this gives a better equilibrium.
I think aiming for a single equilibrium is still an inefficient use of the capabilities and limitations of the human mind, and it’s better to instead develop multiple segregated worldviews (something the Sequences explicitly argue against). Multiple worldviews are useful precisely because they make the virtue of lightness harmless, encouraging swift change in the details of a relevant worldview, or formation of a new worldview if none account for new evidence. In the capacity of paradigms, some worldviews might even fail to recognize some forms of evidence as meaningful.
This gives worldviews opportunity to grow, to develop their own voice with full support of intuitive understanding expected in a zealot, without giving them any influence over your decisions or beliefs. Then, stepping back, some of them turn out to have a point, even if the original equilibrium of belief would’ve laughed their premises out of consideration before they had a chance of conveying their more nuanced non-strawman nature.
I feel like “what other people are telling me” is a very special type of evidence that needs to be handled with extra care. It is something that was generated by a potentially adversarial intelligence, so I need to check for some possible angles of attack first. This generally doesn’t need to be done with evidence that is just randomly thrown at me by the universe, or which I get as a result of my experiments. The difference is, basically, that the universe is only giving me the data, but a human is simultaneously giving me the data (potentially filtered or falsified) and also some advice on how to think about the data (potentially epistemically wrong).
Furthermore, there is a difference between “what I know” and “what I am aware of at this very moment”. There may be some problem with what the other person is telling me, but I may not necessarily notice it immediately. Especially when the person is drawing my attention away from that on purpose. So even if I do not see any problem with what that person said right now, I might notice a few problems after I sleep on it.
My own mind has all kinds of biases; how I evaluate someone’s words is colored by their perceived status, whether I feel threatened by them, etc. That is a reason to rethink the issue later when the person is not here.
In other words, if someone tells me a complex argument “A, therefore B, therefore C, therefore D, therefore you should give me all your money; in the name of Yudkowsky be a good rationalist and update immediately”, I am pretty sure that the rational reaction is to ignore them and take as much time as I need to rethink the issue alone or maybe with other people whom I trust.
By worldviews I mean more than specialized expertise where you don’t yet have the tools to get your head around how something unfamiliar works (like how someone new manipulates you, or how to anticipate and counter this particular way of filtering evidence). These could instead be unusual and currently unmotivated ways of looking at something familiar (how an old friend or your favorite trustworthy media source or historical truths you’ve known since childhood might be manipulating you; how a “crazy” person has a point).
The advantage is in removing the false dichotomy between keeping your current worldview and changing it towards a different worldview. By developing them separately, you take your time becoming competent in both, and don’t need to hesitate in being serious about engaging with a strange worldview on its own terms just because you don’t agree with it. But sure, getting more intuitively comfortable with something currently unfamiliar (and potentially dangerous) is a special case.
while I definitely see your argument, something about this seems weird to me and doesn’t feel likely to work properly. my intuition is that you’d just end up with one mashed worldview with inconsistent edges; while that’s not necessarily terrible or anything, and keeping multiple possible worldviews in mind is probably good, my sense is that “full support [as] expected in a zealot” is unhealthy for anyone. some kind of overoptimization?
I do agree that having multiple worldviews in discussion with each other is an important part of improving the sanity waterline.
It is weird in the sense that there is no widespread practice of it. The zealot thing is about taking beliefs-within-a-worldview (that are not your beliefs) seriously, biting the bullet, which is important for naturally developing any given worldview the way a believer in it would: not ignoring System 2 implications that challenge and refine preexisting intuition, and making inferences according to its own principles rather than yours. Clearly, even if you try, you’ll fail badly at this, but you’ll fail even worse if you don’t try. With practice in a given worldview this gets easier: an alien worldview obtains its own peculiar internal common sense, a necessary aspect of human understanding.
The framing of named/distinct large worldviews is an oversimplification, mainly because it’s good to allow any strange claim or framing a chance of spinning up a new worldview around itself if none would take it as their own, and to merge worldviews as they develop enough machinery to become mutually intelligible. The simplification is sufficient to illustrate points such as the possibility of holding contradictory “beliefs” about the same claim, of claims not being meaningful/relevant in some worldviews when they are in others, of taking seriously claims that would be clearly dangerous or silly to accept, or of learning claims whose very meaning, and not just veracity, is extremely unclear.
Studying math looks like another important example, with understanding of different topics corresponding to worldviews where (and while) they remain sparsely connected, perhaps in want of an application to formulating something that is not yet math and might potentially admit many kinds of useful models. There is less risk of wasting attention on nonsense, but quite a risk of wasting attention on topics that would never find a relevant application, were playing with math and building the capacity to imagine more kinds of ideas not a goal in itself.
Note also that they may be taking positions which are selected for being easy to argue: they’re the ones they were convinced by, of course. Whether you think that correlates with truth is up to you. I think it does, but the correlation isn’t strong enough to settle the matter on its own.
I don’t know exactly what you mean by “acceding” to a position in a discussion—if you find the arguments strong, you should probably acknowledge that—this isn’t a battle, it’s a discussion. If you don’t find yourself actually convinced, you should state that too, even if your points of disagreement are somewhat illegible to yourself (intuition). And, of course, if you later figure out why you disagree, you can re-open the discussion next time it’s appropriate.