Nuance is the cost of precision and the bane of clarity. I think it’s an error to feel positively about nuance (or something more specific, like degrees of uncertainty), when it’s a serious problem clogging up productive discourse, one that should be burned with fire whenever it’s not absolutely vital and impossible to avoid.
Uh. I want to make a nuanced response here, distinguishing between “feeling positively about nuance when it’s net positive and negatively when its costs exceed its benefits, and trying to distinguish between the net-positive case and the net-negative case, and addressing the dynamics driving each” and so forth, but your comment above makes me hesitate.
(I also think this.)
EDIT: to clarify/sort-of-summarize, for those who don’t want to click through: I think there’s a compelling argument to be made that much or even the majority of intellectual progress lies in the cumulative ability to make ever-finer distinctions, i.e. increasing our capacity for nuance. I think being opposed to nuance is startling, and in my current estimation it’s approximately “being opposed to the project of LessWrong.” Since I don’t believe that Vladimir is opposed to the project of LessWrong, I declare myself confused.
The benefits of nuance are not themselves nuance. Nuance is extremely useful, but not good in itself, and the bleed-through of its usefulness into positive affect is detrimental to clarity of thought and communication.
Capacity for nuance abstracts away this problem, so it might be good in itself. (It’s a capacity, something instrumentally convergent. Though things useful for agents can be dangerous for humans.)
I agree with Vladimir, FWIW.
In fact, this is one of the major problems I have with—forgive me for saying so!—your own posts. They are very nuanced! But this makes them difficult, sometimes almost impossible, to understand (not to mention very long); “bane of clarity” seems exactly right to me. (Indeed, I have noticed this tendency in the writing of several members of the LW team as well, and a few others.)
You say:
I think there’s a compelling argument to be made that much or even the majority of intellectual progress lies in the cumulative ability to make ever-finer distinctions, i.e. increasing our capacity for nuance.
There is certainly something to this view. But the counterpoint is that as you make ever-finer distinctions, two trends emerge:
1. The distinctions come to matter less and less—and yet, they impose at least constant, and often increasing, cognitive costs. But this is surely perverse! Cognitive resource expenditures should be proportional to importance/impact, otherwise you end up wasting said resources—talking, and thinking, more and more, about things that matter less and less…
2. The likelihood that the distinctions you are making, and the patterns you are seeing, are perceived inaccurately, or even are entirely imaginary, increases dramatically. We might analogize this to attempting to observe increasingly tiny (or more distant) physical objects—there comes a point where the noise inherent in our means of observation (our instruments, etc.) dominates our observations.
I think that both of these trends may be seen in discussions taking place on Less Wrong, and that they are responsible for a good share of the epistemic degradation we can see.
I disagree with 1 entirely (both parts), and while 2 is sort of logically necessary, that doesn’t mean the effect is as large as you imply with “increases dramatically,” nor that it can’t be overcome (cf. “it’s not what it looks like”).
(Reply more curt than usual for brevity’s sake. =P)
I think of robustness/redundancy as the opposite of nuance for the purposes of this thread. It’s not the kind of redundancy where you set up a lot of context to gesture at an idea from different sides, specify the leg/trunk/tail to hopefully indicate the elephant. It’s the kind of redundancy where saying this once in the first sentence should already be enough, the second sentence makes it inevitable, and the third sentence preempts an unreasonable misinterpretation that’s probably logically impossible.
(But then maybe you add a second paragraph, and later write a fictional dialogue where characters discuss the same idea, and record a lecture where you present this yet again on a whiteboard. There’s a lot of nuance, it adds depth by incising the grooves in the same pattern, and none of it is essential. Perhaps there are multiple levels of detail, but then there must be levels with little detail that make sense out of context, on their own, and the levels with a lot of detail must decompose into smaller self-contained points. I don’t think I’m saying anything that’s not tiresomely banal.)
Note that the linked content is inaccessible for those without a Facebook account.
...false? I just opened it in an incognito window and it worked fine. All my posts are public.
But anyway here’s the text:
Nate on Twitter, h/t Logan (transcribed for the non-avians):
Thread about a particular way in which jargon is great:
In my experience, conceptual clarity is often attained by a large number of minor viewpoint shifts.
(A compliment I once got from a research partner went something like “you just keep reframing the problem ever-so-slightly until the solution seems obvious”.
<3)
Sometimes a bunch of small shifts leave people talking a bit differently, b/c now they’re thinking a bit differently. The old phrasings don’t feel quite right—maybe they conflate distinct concepts, or rely implicitly on some bad assumption, etc.
(Coarse examples: folks who think in probabilities might become awkward around definite statements of fact; people who get into NVC sometimes shift their language about thoughts and feelings. I claim more subtle linguistic shifts regularly come hand-in-hand w/ good thinking.)
I suspect this phenomenon is one cause of jargon. Eg, when a rationalist says “my model of Alice wouldn’t like that” instead of “I don’t think Alice would like that”, the non-standard phraseology tracks a non-standard way they’re thinking about Alice.
(Or, at least, I think this is true of me and of many of the folks I interact with daily. I suspect phraseology is contagious and that bystanders may pick up the alt manner of speaking w/out picking up the alt manner of thinking, etc.)
Of course, there are various other causes of jargon—eg, it can arise from naturally-occurring shorthand in some specific context where that shorthand was useful, and then morph into a tribal signal, etc. etc.
As such, I’m ambivalent about jargon. On the one hand, I prefer my communities to be newcomer-friendly and inclusive. On the other hand, I often hear accusations of jargon as a kind of thought-policing.
“Stop using phrases that meticulously track uncommon distinctions you’ve made; we already have perfectly good phrases that ignore those distinctions, and your audience won’t be able to tell the difference!”
No.
My internal language has a bunch of cool features that English lacks. I like these features, and speaking in a way that reflects them is part of the process of transmitting them.
Example: according to me, “my model of Alice wants chocolate” leaves Alice more space to disagree than “I think Alice wants chocolate”, in part b/c the denial is “your model is wrong”, rather than the more confrontational “you are wrong”.
In fact, “you are wrong” is a type error in my internal tongue. My English-to-internal-tongue translator chokes when I try to run it on “you’re wrong”, and suggests (eg) “I disagree” or perhaps “you’re wrong about whether I want chocolate”.
“But everyone knows that ‘you’re wrong’ has a silent ‘(about X)’ parenthetical!”, my straw conversational partner protests. I disagree. English makes it all too easy to represent confused thoughts like “maybe I’m bad”.
If I were designing a language, I would not render it easy to assign properties like “correct” to a whole person—as opposed to, say, that person’s map of some particular region of the territory.
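(As an aside, not part of Nate’s thread: a minimal TypeScript sketch of the “type error” framing. All type and function names here are hypothetical, chosen only for illustration; the point is that “correct” can be asked of someone’s belief about a particular question, checked against the territory, while asking it of a whole person does not even type-check.)
```typescript
// Illustrative sketch: "wrong"/"correct" applies to a map (belief), not to a person.

interface Belief<T> {
  holder: string; // whose map this is
  claim: T;       // what the map says about one region of the territory
}

interface Person {
  name: string;
  beliefs: Belief<unknown>[]; // a person *has* maps, but is not one
}

// "Correct" is only defined for a belief, checked against the territory.
function isCorrect<T>(belief: Belief<T>, actual: T): boolean {
  return belief.claim === actual;
}

const alice: Person = { name: "Alice", beliefs: [] };
const myModelOfAlice: Belief<boolean> = { holder: "me", claim: true };

// isCorrect(alice, true);                    // type error: a Person is not a Belief
console.log(isCorrect(myModelOfAlice, true)); // fine: my *map* of Alice can be wrong
```
In this toy grammar, “you are wrong” is inexpressible, while “your map of X is wrong” remains available, which is the distinction the tweet is pointing at.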
The “my model of Alice”-style phrasing is part of a more general program of distinguishing people from their maps. I don’t claim to do this perfectly, but I’m trying, and I appreciate others who are trying.
And, this is a cool program! If you’ve tweaked your thoughts so that it’s harder to confuse someone’s correctness about a specific fact with their overall goodness, that’s rad, and I’d love you to leak some of your techniques to me via a niche phraseology.
There are lots of analogous language improvements to be made, and every so often a community has built some into their weird phraseology, and it’s *wonderful*. I would love to encounter a lot more jargon, in this sense.
(I sometimes marvel at the growth in expressive power of languages over time, and I suspect that that growth is often spurred by jargon in this sense. Ex: the etymology of “category”.)
Another part of why I flinch at jargon-policing is a suspicion that if someone regularly renders thoughts that track a distinction into words that don’t, it erodes the distinction in their own head. Maintaining distinctions that your spoken language lacks is difficult!
(This is a worry that arises in me when I imagine, eg, dropping my rationalist dialect.)
In sum, my internal dialect has drifted away from American English, and that suits me just fine, tyvm. I’ll do my best to be newcomer-friendly and inclusive, but I’m unwilling to drop distinctions from my words just to avoid an odd turn of phrase.
Thank you for coming to my TED talk. Maybe one day I’ll learn to cram an idea into a tweet, but not today.
In a regular window (Firefox): https://dl.dropboxusercontent.com/s/jyxf86t5hah9lbc/Screen%20Shot%202021-11-08%20at%203.12.59%20AM.png?dl=0
In a private window (Firefox): https://dl.dropboxusercontent.com/s/33i7ben66877zaz/Screen%20Shot%202021-11-08%20at%203.13.46%20AM.png?dl=0
In a regular window (Opera): https://dl.dropboxusercontent.com/s/bd4uu7iu0rctizl/Screen%20Shot%202021-11-08%20at%203.14.32%20AM.png?dl=0
In a private window (Opera): https://dl.dropboxusercontent.com/s/5i3oi2przg85jbr/Screen%20Shot%202021-11-08%20at%203.15.14%20AM.png?dl=0
Firefox 78.5.0esr (Mac); Opera 80.0.4170.63 (Mac).
EDIT: Tested also with Firefox 94.0.1 (Windows) and Chrome 95.0.4638.69 (Windows), with identical results.
Your posts are not accessible without a Facebook account.
Huh, I see the post plus a big “log in” bar at the bottom on Safari 15612.1.29.41.4 (Mac), and the same without the bar in an incognito tab Chrome 94.0.4606.71 (Mac). These don’t overlap with any of the things you tried, but it’s strange to me that our results are consistently different.
I can no longer see it when not logged in, even though I did before. Maybe we triggered a DDoS mitigation thingie?
Edit: Removed incorrect claim about how this worked (before seeing Said’s response).
No, this is not correct. All of my tests were conducted on a desktop (1080p) display, at maximum window width.
Yes, sorry, I got too excited about the absurd hypothesis supported by two data points, posted too soon, then tried to reproduce, and it no longer worked at all. Before it stopped working, I did see the page in a Firefox incognito window on the same system where I’m logged in, and in a normal Firefox window under a different Linux username that has never had Facebook logged in.
Edit: Just now it worked again twice, and after that it no longer did. Bottom line: public Facebook posts are not really public, at least today; they are only public intermittently.