I was trying to write up a post about getting stuck in arguments, where you think something is obvious and say something like “A, therefore B.” And your partner says “but X, therefore Y!”
And from your perspective, A is obviously true, and B obviously follows from A. X and Y just don’t seem relevant to you. But somehow, to your conversation partner, A and B either aren’t obvious at all, or just seem… irrelevant. To them, X and Y are the important things.
I feel like this happens to me often, but I didn’t carefully keep track of the instances at the time. I’ll try checking in with people I’ve disagreed with in the past to see if they can help jog my memory. But I’m curious: has anyone else run into this phenomenon?
A somewhat extreme but concrete example might be a consequentialist and a deontologist arguing about ethics, where Connie the Consequentialist says “We should throw the switch in the trolley problem to save lives” and Denny the Deontologist says “But you’re murdering the guy on the tracks!” and both of them are coming at the problem from such different perspectives that they can’t make any headway.
I’d ideally like examples that each involve positions that are fairly common on LessWrong.
(The important bit isn’t precisely the “A therefore B”, “X therefore Y” pattern, so much as the general phenomenon of feeling like the two participants were somehow managing to completely miss each other. Ideally, examples where the people eventually ended up on the same page, hopefully identifying the moment when the argument became “unstuck.”)
Simplified examples from my own experience of participating in or witnessing this kind of disagreement:
Poverty reduction: Alice says “extreme poverty is rapidly falling” and Bob replies “$2/day is not enough to live on!” Alice and Bob talked past each other for a while until realizing that these statements are not in conflict; the conflict concerns the significance of making enough money to no longer be considered in “extreme poverty.” The resolution came from recognizing that extreme poverty reduction is important, but that even 0% extreme poverty does not imply that we have solved starvation, homelessness, etc. That is, Alice thought Bob was denying how fast and impressively extreme poverty is being reduced, which he was not, and Bob thought Alice believed approaching 0% extreme poverty was sufficient, when she in fact did not.
Medical progress: Alice says “we don’t understand depression” and Bob replies “yes we do, look at all the anti-depression medications out there.” Alice and Bob talked past each other for a while, with the discussion getting increasingly angry, until they realized that Alice’s position was “you don’t fully understand a problem until you can reliably fix it” and Bob’s position was “you partially understand a problem when you can sometimes fix it.” These are entirely compatible positions, and Alice and Bob didn’t actually disagree on the facts at all!
Free markets: Alice says “free markets are an essential part of our economy” and Bob replies “no they’re not because there are very few free markets in our economy and none of the important industries can be considered to exist within one.” The resolution to this one is sort of embarrassing because it’s so simple and yet took so long to arrive at: Alice’s implicit definition of a free market was “a market free from government interference” while Bob’s implicit definition was “a market with symmetric information and minimal barriers to entry.” Again, while it sounds blindingly obvious why Alice and Bob were talking past each other when phrased like this, it took at least half an hour of discussion among ~6 people to come to this realization.
Folk beliefs vs science: Alice says “the average modern-day Westerner does not have a more scientific understanding of the world than the average modern-day non-Westerner who harbors ‘traditional’/‘folk’/‘pseudoscientific’ beliefs” and Bob replies “how can you argue that germ theory is no more scientific than the theory that you’re sick because a demon has inhabited you?” After much confusing back and forth, it turns out Alice is using the term ‘scientific’ to denote the practices associated with science while Bob is using the term to denote the knowledge associated with science. The average person inculcated in Western society indeed has more practical knowledge about how diseases work and spread than the average person inculcated in their local, traditional beliefs, but both people are almost entirely ignorant of why they believe what they believe and could not reproduce the knowledge if needed; e.g., the average person does not know the biological differences between a virus and a bacterium even though they are aware that antibiotics work on bacteria but not viruses. Once the distinction was made between “science as a process” and “science as the fruits of that process,” Alice and Bob realized they actually agreed.
I think the above are somewhat “trivial” or “basic” examples in that the resolution came down to clearly defining terms: once Alice and Bob understood what each was claiming, the disagreement dissolved. Some less trivial ones for which the resolution was not just the result of clarifying nebulous/ambiguous terms:
AI rights: Alice says “An AGI should be given the same rights as any human” and Bob replies “computer programs are not sentient.” After much deliberation, it turns out Alice’s ethics are based on reducing suffering, where the particular identity and context surrounding the suffering don’t really matter, while Bob’s are based on protecting human-like life, with the moral value of entities rapidly decreasing as an inverse function of human-like-ness. Digging deeper, for Alice, any complex system might be sentient, and the possibility of a sentient being suffering is particularly concerning when that being is traditionally not considered to have any moral value worth protecting. For Bob, sentience can’t possibly exist outside of a biological organism, so efforts to ensure computer programs aren’t deleted while running are a distraction that seems orthogonal to ethics. So while the ultimate question of “should we give rights to sentient programs?” was not resolved, a great amount of confusion was reduced when Alice and Bob realized they disagree about a matter of fact—can digital computers create sentience?—and not so much about how to ethically address suffering once the matter of who is suffering has been agreed on. (Actually, it isn’t so much a “matter of fact,” since further discussion revealed substantial metaphysical disagreements between Alice and Bob, but at least the source of the disagreements was discovered.)
Government regulation: Alice says “the rise of the internet makes it insane to not abolish the FDA” and Bob replies “A lack of drug regulation would result in countless deaths.” Alice and Bob angrily, vociferously disagree with each other, unfortunately ending the discussion with a screaming match. Later discussion reveals that Alice believes drug companies can and will regulate themselves in the absence of the FDA, because 1) for decades now, essentially no major corporation has deliberately hurt their customers to make more profit, and 2) the constant communication enabled by the internet will educate customers on which of the few bad apples to avoid. Bob believes drug companies cannot and will not regulate themselves in the absence of the FDA, because 1) there is a long history of corporations hurting their customers to make more profit, and 2) the internet will promote just as much misinformation as information and will thus not alleviate this problem. Again, the object-level disagreement—should we abolish the FDA given the internet?—was not resolved, but the reason for that became utterly obvious: Alice and Bob have *very* different sets of facts about corporate behavior and the nature of the internet.
How to do science: Alice says “you should publish as many papers as possible during your PhD” and Bob replies “paper count is not a good metric for a scientist’s impact.” It turns out that Alice was giving career advice to Carol in her particular situation while Bob was speaking about things in general. In Carol’s particular case, it may have been true that she needed to publish as many papers as possible during her PhD in order to have a successful career, even though Alice was aware this would create a tragedy-of-the-commons scenario if everyone were to take this advice. Bob didn’t realize Alice was giving career advice rather than stating her prescriptive opinion on the matter (which is what Bob was doing).
Role playing: Alice says “I’m going to play this DnD campaign as a species of creature that can’t communicate with most other species” and Bob replies “but then you won’t be able to chat with your fellow party members or share information with them or strategize together effectively.” Some awkwardness ensued until it became clear that Alice *wanted* to be unable to communicate with the rest of the party, for anxiety-related reasons. Realizing this didn’t really reduce the awkwardness, since it was an awkward situation anyway, but Alice and Bob definitely talked past each other until the difference in assumptions was revealed. Had Bob realized what Alice’s concerns were to begin with, he probably would not have initiated the conversation: he didn’t have a problem with a silent character, he simply wanted to ensure Alice understood the consequences, which the discussion revealed she did.
Language prescriptivism: Alice says “that’s grammatically incorrect” and Bob replies “there is no ‘correct’ or ‘incorrect’ grammar—language is socially constructed!” Alice and Bob proceed to have an extraordinarily unproductive discussion until Alice points out that while she doesn’t know exactly how it’s decided what is correct and incorrect in English, there *must* be some authority that decides, and that’s the authority she follows. While Alice and Bob did not come to an agreement per se, it became clear that what they really disagreed about was whether or not the English language has a definitive authority, not whether or not one should follow the authority assuming it exists.
I’m going to stop here so the post isn’t too long, but I very much enjoyed thinking about these circumstances and identifying the “A->B” vs “X->Y” pattern. So much time and emotional energy was wasted that could have been saved had Alice and Bob first established exactly what they were talking about.
Realism about rationality is an ongoing one for me that hasn’t yet gotten unstuck. See in particular Vanessa and ricraz:
(Quoted back-and-forth between Vanessa Kosoy and ricraz.)
I fall pretty strongly in ricraz’s camp, and I feel the same way, especially about the sentence “I’m finding it very difficult to figure out how to phrase a counterargument beyond simply saying that although intelligence does allow us to understand physics, it doesn’t seem to me that this implies it’s simple or fundamental.”
It seems to me that the crux of that argument is the assumption that there is only one direction or axis in which something can be considered more fundamental than something else.
Rationality can be epistemologically basic, in that you need to be rational to arrive at physics, and physics can be ontologically basic in that rational agents are made of matter.
Sometimes the solution is to drop the framing that the two alternatives are actually rivalrous.
I don’t think that’s it. The inference I most disagree with is “rationality must have a simple core”, or “Occam’s razor works on rationality”. I’m sure there’s some meaning of “fundamental” or “epistemologically basic” such that I’d agree that rationality has that property, but that doesn’t entail “rationality has a simple core”.
This happens to me a lot less since I put effort into noticing it and actually trying to find the crux rather than just looking for different ways to state the obvious which my correspondent is obviously not getting.
So far, it almost always turns out that we’re talking about totally different things, using confusing language. Second-most common is fundamentally different simplifications of our models. Both are most common when discussing things that have zero near-mode consequences and don’t actually need resolution.
“Sam Harris and the Is–Ought Gap” might be one example.
In what way did that get unstuck? Did I miss some news?
Well, it didn’t get unstuck even slightly afaik, but it was still a very well-articulated version of the particular problem I was looking at.
Ah, that is indeed an excellent example