Somewhat replying to both Romeo and Bendini elsethread:
Bendini:
Disagreements aren’t always trivial to resolve, but if you’ve been actively debating an issue for a month and zero progress has been made, then either the resolution process is broken or someone is doing something besides putting maximum effort into resolving the disagreement.
Romeo:
I propose an alternative model. People don’t resolve disagreements because there are no incentives to resolve them. In fact the incentives often cut the other way.
I definitely have a sense that rationalists by default aren’t that great at disagreeing for all the usual reasons (not incentivized to, don’t actually practice the mental moves necessary to do so productively), and kinda the whole point of this sequence is to go “Yo, guys, it seems like we should actually be able to be good at this?”
And part of the problem is that this requires multiple actors – my sense is that a single person trying their best to listen/learn/update can only get halfway there, or less.[1]
[1] The exact nature of “what you can accomplish with only one person trying to productively disagree” depends on the situation. It may be that that particular person can come to the truest-nearby-beliefs reasonably well, but if you need agreement, or if the other person is the final decision maker, “one-person-coming-to-correct-beliefs” may not solve the problem.
Coming to Correct Beliefs vs Political Debate
I think one of the things going on is that it takes a bit of vulnerability to switch from “adversarial political mode” (a common default) to “actually be open to changing your mind.” There is a risk that if you try earnestly to look at the evidence and change your mind, but your partner is just pushing their agenda, and you don’t have some skills re: “resilience to social pressure”, then you may be sort of just ceding ground in a political fight without even successfully improving truthseeking.
(sometimes, this is a legitimate fear, and sometimes it’s not but it feels like it is, and noticing that in the moment is an important skill)
I’ve been on both sides of this, I think. Sometimes I’ve found myself feeling really frustrated that my discussion partner doesn’t seem to be listening or willing to update, and I find myself sort of leaning into an aggressive voice to try to force them to listen to me. And then they’re like, “Dude, you don’t sound like you’re actually willing to listen to me or update,” and I’m sheepishly like… “oh, yeah, you’re right.”
It seems like having some kind of mutually-trustable procedure for mutual “disarmament” would be helpful.
I do still think there’s a lot of legitimately hard stuff here. In the past year, in some debates with Habryka and with Benquo, I found that a major component of my own updating had to do with giving their perspectives time to mull around in my brain, as well as some kind of aesthetic component. (e.g. if one person says “this UI looks good” and another person says “this UI looks bad”, there’s an aspect of that that doesn’t lend itself well to “debate”. I’ve spent the past 1.5 years thinking a lot about Aesthetic Doublecrux, which much of this sequence was laying the groundwork for.)
(Site meta: it would be useful if there was a way to get a notification for this kind of mention)
Some thoughts about specific points:
the whole point of this sequence is to go “Yo, guys, it seems like we should actually be able to be good at this?”
This is true for the sequence overall, but this post and some others you’ve written elsewhere follow the pattern of “we don’t seem to be able to do the thing, therefore this thing is really hard and we shouldn’t beat ourselves up about not being able to do it”, which seems to come from a hard-coded mindset rather than a balanced evaluation of how much change is possible, how things could be changed, and whether the change would be worth the effort.
I think the mindset of “things are hard, everyone is doing the best they can” can be very damaging, as it reduces our collective agency by passively addressing the desire for change in a way that takes the wind out of its sails.
There is a risk that if you try earnestly to look at the evidence and change your mind, but your partner is just pushing their agenda, and you don’t have some skills re: “resilience to social pressure”, then you may be sort of just ceding ground in a political fight without even successfully improving truthseeking.
Resilience to social pressure is part of it, but there also seem to be a lot of people who lack the skill to evaluate evidence in a way that doesn’t bottom out at “my friends think this is true” or “the prestigious in-group people say this is true”.
It seems like having some kind of mutually-trustable procedure for mutual “disarmament” would be helpful.
A good starting point for this would be listing out both positions as separate claims ranked by importance, and sorting the evidence for each claim into 1) externally verifiable, 2) circumstantial, 3) non-verifiable personal experience, and 4) intuition.
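To make that slightly more concrete, here is a rough sketch of the structure I have in mind, written out as data. The holder, claim, and evidence notes below are placeholders I made up for illustration, not a summary of anyone’s actual position:

```python
# Hypothetical sketch of a "both positions, claims ranked, evidence sorted" listing.
from dataclasses import dataclass, field
from enum import Enum

class EvidenceKind(Enum):
    EXTERNALLY_VERIFIABLE = 1
    CIRCUMSTANTIAL = 2
    PERSONAL_EXPERIENCE = 3  # non-verifiable
    INTUITION = 4

@dataclass
class Claim:
    statement: str
    importance: int  # 1 = most important to the position
    evidence: dict = field(default_factory=dict)  # EvidenceKind -> list of notes

@dataclass
class Position:
    holder: str
    claims: list  # kept sorted by importance

# Placeholder content only:
alice = Position(
    holder="Alice",
    claims=[
        Claim(
            statement="The new UI makes the site easier to read",
            importance=1,
            evidence={
                EvidenceKind.EXTERNALLY_VERIFIABLE: ["readability metrics, if we ran them"],
                EvidenceKind.INTUITION: ["it just looks cleaner to me"],
            },
        ),
    ],
)
```

The point is the separation: once each claim is tagged by which kind of evidence supports it, it becomes obvious which parts of the disagreement could in principle be settled by checking something, and which are running purely on intuition.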
if one person says “this UI looks good” and another person says “this UI looks bad”, there’s an aspect of that that doesn’t lend itself well to “debate”
I’ve had design arguments like this (some of them even about LW), but my takeaway from them was not that this can’t be debated, but that:
1) People usually believe that design is almost completely subjective
2) Being able to crux on design requires solving 1 first
3) Attempts to solve 1 are seen as the thin end of the wedge
4) If you figure out how to test something they assumed couldn’t be tested, they feel threatened by it rather than see it as a chance to prove they were right.
5) The question “which design is better” contains at least 10 cruxable components which need to be unpacked.
6) If the other person doesn’t know how to unpack the question, they will see your attempts as a less funny version of proving that 1 = 2.
7) People seem to think they can bury their heads in the sand and the debate will magically go away.
Arguments about design have a lot of overlap with debates about religion: if you try to debate “does God exist?” at face value, rather than questions like “given the scientific facts we can personally verify, what is the probability that God exists?” and “regardless of God’s existence, which religious teachings should we follow anyway?”, you are unlikely to make progress.