Associate yourself with people alongside whom you can confidently and cheerfully outperform the Nash equilibrium.
lionhearted (Sebastian Marshall)
On the contrary—this is a strict materialist perspective which looks to disambiguate the word ‘trauma’ into more accurate nouns, and replace the vague word ‘heal’ with more actionable and concrete verbs.
I think there’s often a language/terminology challenge around these areas. For instance, at different times I had a grade 3 ankle sprain after endurance training, and a grade 2 wrist sprain after a car crash—those are clearly acute trauma (in the medical meaning of the word) and they do require some mix of healing to the extent possible for recovery of physical function.
But I’ve always found it tricky that the same word ‘trauma’ is used for physical injuries, past bad experiences, and as a broad description of maladaptive patterns of thought and behavior.
It’s a broad word that people use in different ways.
Two things I’ve found useful.
(1) Highest recommendation for Lakoff’s Metaphors We Live By (1980) which looks at conceptual metaphors:
https://en.wikipedia.org/wiki/Metaphors_We_Live_By
From Chapter 7, “Personification”:
Perhaps the most obvious ontological metaphors are those where the physical object is further specified as being a person. This allows us to comprehend a wide variety of experiences with nonhuman entities in terms of human motivations, characteristics, and activities. Here are some examples:
- His theory explained to me the behavior of chickens raised in factories.
- This fact argues against the standard theories.
- Life has cheated me.
- Inflation is eating up our profits.
- His religion tells him that he cannot drink fine French wines.
- The Michelson-Morley experiment gave birth to a new physical theory.
- Cancer finally caught up with him.
In each of these cases we are seeing something nonhuman as human. But personification is not a single unified general process. Each personification differs in terms of the aspects of people that are picked out.
Consider these examples.
- Inflation has attacked the foundation of our economy.
- Inflation has pinned us to the wall.
- Our biggest enemy right now is inflation.
- The dollar has been destroyed by inflation.
- Inflation has robbed me of my savings.
- Inflation has outwitted the best economic minds in the country.

I think a lot of discussion around the word “trauma” follows these characteristics — the challenge is, a lot of times people move between a literal, well-scoped definition of trauma, say the medical one, and a more metaphorical/ontological description. People often do this without noticing it.
For instance, I can talk about the acute trauma of the wrist injury from a car crash, and everyone will largely understand what I’m talking about. But the same word ‘trauma’ would often be used if I had described some fear or aversion to getting into cars going forward. I don’t have one, but if I did, people would refer to both the wrist injury and the thing which caused the aversion to cars as ‘trauma’ — which seems somewhat confused to me. Clearly a wrist injury needs healing, in the biological and medical sense of the word healing.
Does an aversion to getting into cars need “healing” in the same way? I mean, maybe, if you’ve got a definition of “healing” from neuroscience under which re-working how incoming information is processed — how a chain reaction of synapses firing in response to a stimulus produces a maladaptive behavioral pattern — counts as “healing.” But—like, probably not. “Healing” in that context is a metaphor.
For my part, and just speaking for myself, I think the term “extinction” — though less in line with the current cultural milieu — is a much better word than “healing” for removing maladaptive emotional and behavioral patterns.
https://en.wikipedia.org/wiki/Extinction_(psychology)
In my way of thinking about it:
A traumatic wrist injury is repaired by physical healing.
An irrational aversion to getting in cars is repaired by extinction of the behavior.
How to do the latter — talk-oriented therapies, exposure therapy (typically recommended for phobias), practice and training at implementing good patterns in situations similar to ones where you’ve displayed undesirable behavior, cognitive behavioral therapy if you’re ruminating too much, etc. — well, unfortunately there’s currently no consensus on what works best for any given case.
But I think starting with a model of “I need to heal” is questionable. Relatedly, I’m also skeptical of using the word “heal” for biochemical imbalances — for biochemically based depression, for instance, I think “I need to get my hormones and biochemistry better regulated to remove depressive symptoms” is at once more actionable, more accurate, and more subjectively empowering than “I need to heal from depression.”
Anyway, this goes strongly against the current cultural milieu—and I haven’t been maximally precise in the comment. A lot could be nitpicked. But I think extinction of maladaptive thought patterns and maladaptive behavior patterns is more easily accomplished (and a more accurate description of reality) than healing; likewise, “regulating” seems more accurate to me than “healing” for biochemically based phenomena.
It’s been useful for me to think about it this way, and sometimes useful for other people. Though, different things work for different people—so add salt liberally. Regardless, Lakoff’s Metaphors is extremely relevant to the topic and highly recommended.
Partially agreed again.
I’d be hesitant to label pointing out that someone has an invalid argument as “Critical” and have it implicitly contrasted against “Positive” — it implies they’re opposites or antithetical in some way, y’know?
Also, respectfully disagree with this -
“The specific issue with ‘Not what I meant’ is that the icon reads as ‘you missed’ and not ‘we missed’. Communication is a two-way street and the default react should be at least neutral and non-accusatory.”
Sometimes a commenter, especially someone new, is just badly off the mark. That’s not a two-way-street problem, it’s a Well-Kept Garden problem...
I agree that drive-by unpleasant criticisms without substance (“Obtuse”) don’t seem productive, but I actually think some of the mild “tonally unpleasant” ones could be very valuable. It’s a way for an author to inexpensively let a commenter know that they didn’t appreciate the comment.
“Not what I meant” seems particularly valuable for when someone mis-summarizes or wrongly infers what was written, and “Not worth getting into” seems useful for when someone goes unproductively deep on a fine-grained detail of something more macro-oriented.
One challenge, though, is when you have mixed agreement with someone. I disagree on tonal unpleasantness and the grouping style—“Taboo your words” might be friendly, for instance, to keep sharpening discussion, and isn’t necessarily critical. But I agree on the meta/bikeshed point and with clearing up some of the ambiguous ones.
I clicked both “Disagree” and “Agree” on yours for partial agreement / mixed agreement, but that seems kind of unintuitive.
Not sure how many posts you’ve made here or elsewhere, but speaking as someone who has done a lot of public writing, this seems like a godsend. It will reflect poorly on someone who deploys those a lot in a passive-aggressive way, but we’ve all seen threads that are exhausting to the original poster.
This seems particularly useful for when someone makes a thoughtful but controversial point that spurs a lot of discussion. The ability to acknowledge you read someone’s comment without deeply engaging with it is particularly useful in those cases.
I turned this on for a recent post and I’m incredibly impressed.
This is the coolest feature I’ve seen for discussion software in many years.
Highly recommended to try it out if you make a post.
Devil’s Advocate: Adverse Selection Against Conscientiousness
I’m a Westerner, but did business in China, have quite a few Chinese friends and acquaintances, and have studied a fair amount of classical and modern Chinese culture, governance, law, etc.
Most of what you’re saying makes sense with my experience. A lot of Western ideas are generally regarded as either “sounds nice but is hypocritical and not what Westerners actually do” (a common viewpoint until ~10 years ago) or—a somewhat newer idea—“actually no, many young Westerners are sincere about their ideas; they’re just crazy in an ideological way about things that can’t and won’t work.” (白左, etc.)
The one place I might disagree with you is that I think mainland Chinese leadership tends to have two qualities that might be favorable towards understanding and mitigating AI risk:
(1) The majority of senior Chinese political leadership are engineers and seem intrinsically more open to having conversations along science and engineering lines than the majority of Western leadership. Pathos-based arguments, especially those emerging from Western intellectuals, do not get much uptake in China and aren’t persuasive. But concerns around safety, second-order effects, third-order effects, complex system dynamics, causality, etc., grounded in scientific, mathematical, and engineering principles seem to be engaged with easily at face value in private conversations, and with a level of technical sophistication such that there doesn’t need to be as much direct reliance on industry leaders and specialists to explain and contextualize diagrams, concepts, technologies, etc. Senior Chinese leadership also seem to be better—this is just my opinion—at identifying credible and non-credible sources of technical information and identifying experts who make sound arguments grounded in causality. This is a very large advantage.
(2) In recent decades, it seems like mainland Chinese leadership are able to both operate on longer timescales—credibly making and implementing multi-decade plans and running them—as well as making rapid changes in technology adoption, regulation, and economic markets once a decision has been made in an area. The most common examples we see in the West are videos of skyscrapers being constructed very rapidly, but my personal example is I remember needing to go pay my rent with shoeboxes full of 100 renminbi notes during the era of Hu Jintao’s chairmanship and being quite shocked when China went to near cashless almost overnight.
I think those two factors—genuine understanding of engineering and technical causality, combined with greater viability for engaging in both longer-timescale and short-timescale action—seem like important points worth mentioning.
Hmm. Looks like I was (inadvertently) one of the actors in this whole thing. Not intended and unforeseen. Three thoughts.
(1) At the risk of sounding like a broken record, I just wanna say thanks again to the moderation team and everyone who participates here. I think oftentimes the “behind the scenes coordination work” doesn’t get noticed during all the good times, and not enough credit is given. I just like to notice it and say it outright. For instance, I went to the Seattle ACX meetup yesterday, which I saw on here (LW), since I check ACX less frequently than LW. I had a great time and some really wonderful conversations. I’m appreciative of all the people facilitating that, including Spencer (the Seattle meetup host) and the whole team that built the infrastructure here to facilitate sharing information, getting to know each other, etc.
(2) Just to clarify—not that it matters—my endorsement of Duncan’s post was about the specific content in it, not about the author of the post. I do think Duncan did a really nice job taking very complex concepts and boiling them down to guidelines like “Track (for yourself) and distinguish (for others) your inferences from your observations” and “Estimate (for yourself) and make clear (for others) your rough level of confidence in your assertions” — he summed up some complex points very straightforwardly and in a way that makes the principles much easier to implement / operationalize in one’s writing style. That said, I didn’t realize when I endorsed the Rationalist Discourse post that there were some interpersonal tensions independent of the content itself. Both of those posters seem like decent people to me, but I haven’t dug deep on it and am not particularly informed on the details.
(3) I won’t make a top-level post about this, because second-degree meta-engagement with community mechanics risks setting off more second-degree and third-degree meta-engagement, and things spiral. But as a quick recommendation to people interested in how people relate to each other, my favorite movie is Unforgiven, a very non-traditional Clint Eastwood movie. It’s like a traditional Western (cowboys, horses, etc.) but really very different from the normal genre. Basically, there’s only one genuinely unprovoked “bad guy” in the movie, who has causal agency for only about 30-60 seconds of doing something bad. After that, it’s all just a chain reaction of people doing as best as they can by their values and friends, and yet the results are very bad for everyone. Incidentally, it’s also a really cinematically beautiful movie, which contrasts with the unfolding tragedy. It’s a great movie. Highly recommended.
A bit late in commenting and I understand the “mongo” example was pointing at a more general concept, but I decided to check in on the current state of prediction. Not perfect, n=1, could certainly be set out better, but thought I’d give this a whirl:
Me:
Hello, I’d like to test your predictive ability on something interesting and novel. May we?

ChatGPT:
Hello! Of course, I’d be happy to help you with your prediction. What would you like me to predict?

Me:
First, some context — I’m an American tech CEO. I like and have read a lot of classical philosophy, ranging from Aristotle to Wittgenstein and much in between. I’m an amateur historian who has written a few million words of nonfiction with some historical fiction mixed in. I’m logical and technical. I’m in my late 30s. I’m from the East Coast but in Seattle. That’s the context.

ChatGPT:
Thank you for the context. Is there a specific question or topic you would like me to predict about based on your background and interests?

Me:
Yes. I’m going to think of a fictional book. It’s a book I like—one of my favorites. I’d like you to make a list of 5 guesses of what book I might be thinking of, along with what the most likely of the 5 is.

ChatGPT:
Certainly, I’ll do my best to generate a list of 5 guesses for your favorite book based on your background and interests. Here are my predictions:

“The Brothers Karamazov” by Fyodor Dostoevsky—This novel explores complex philosophical and religious themes, which aligns with your interest in classical philosophy. Its exploration of morality and ethics would also appeal to your logical and technical mindset.
“The Name of the Rose” by Umberto Eco—This historical mystery novel is set in a medieval monastery and draws upon elements of philosophy and theology, which would align with your interests in both history and philosophy.
“Infinite Jest” by David Foster Wallace—This novel combines humor, philosophy, and commentary on contemporary American culture, which would align with your interests in both philosophy and fiction. Its complexity would also appeal to your logical and technical mindset.
“The Hitchhiker’s Guide to the Galaxy” by Douglas Adams—This satirical science fiction novel uses humor to explore philosophical and existential themes, which would appeal to your interest in philosophy. Its wit and creativity would also appeal to your literary and imaginative side.
“Slaughterhouse-Five” by Kurt Vonnegut—This anti-war novel uses elements of science fiction to explore the themes of free will and the nature of time, which would align with your interest in both history and science fiction. Its blend of humor and tragedy would also appeal to your literary sensibilities.
Based on your interests and background, my most likely guess for your favorite book would be “The Brothers Karamazov” by Fyodor Dostoevsky.
Me:
Ah, not so. Let’s narrow down a bit. It’s science fiction. Would you try again?

ChatGPT:
Thank you for the clarification. Here are 5 new predictions based on your background and interest in science fiction:

“Dune” by Frank Herbert—This classic science fiction novel is known for its complex world-building and exploration of political and philosophical themes, which align with your interests in history, philosophy, and science fiction.
“Snow Crash” by Neal Stephenson—This cyberpunk novel explores themes of language, culture, and technology, which align with your interest in technical and technological topics. Its blend of humor and action would also appeal to your literary and imaginative side.
“Ender’s Game” by Orson Scott Card—This science fiction novel explores themes of morality, leadership, and the consequences of war, which align with your interests in history, philosophy, and science fiction. Its blend of action and psychological complexity would also appeal to your literary and imaginative side.
“The Diamond Age” by Neal Stephenson—This novel explores themes of technology, culture, and education, which align with your interest in technical and technological topics. Its exploration of political and social issues would also appeal to your interest in history.
“Hyperion” by Dan Simmons—This science fiction novel is known for its complex world-building and exploration of philosophical and theological themes, which align with your interest in classical philosophy. Its structure, which is composed of interconnected stories, would also appeal to your literary sensibilities.
Based on your interest in science fiction, my most likely guess for your favorite book would be “Dune” by Frank Herbert.
Me:
Not a bad guess—I like Dune a lot. I was thinking of Asimov’s “Foundation” though.

ChatGPT:
Ah, I see! “Foundation” is a classic work of science fiction and definitely a great choice. I apologize for not guessing it correctly on the first try, but I hope my predictions were still interesting and useful. Thank you for the opportunity to test my predictive abilities!

Me:
Can we hypothesize a little on why you didn’t get Foundation? It combines technology, history, and philosophy...

ChatGPT:
Certainly! While I cannot say for sure why I did not predict “Foundation” as your favorite book, one possible reason could be that your background and interests were broad enough to support several different science fiction titles. Additionally, while “Foundation” does incorporate elements of technology, history, and philosophy, these themes are also present in many other science fiction works, making it challenging to narrow down to one specific book.

Another possibility is that the themes and style of “Foundation” may not have stood out as strongly to me based on the limited information I had about your interests and background. While I did take into account your interest in history and philosophy, other factors such as your preference for a specific writing style or subgenre of science fiction may have been important in predicting your favorite book.
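(If anyone wants to re-run this kind of test programmatically rather than through the chat UI, here’s a minimal sketch using the OpenAI Python client. The model name, prompts, and variable names are my illustrative assumptions — the transcript above came from the regular chat interface.)

```python
# Minimal sketch: replicating the book-guessing test via the OpenAI API.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
# The model name is illustrative; any chat-completion model should work.
from openai import OpenAI

client = OpenAI()

context = (
    "I'm an American tech CEO. I've read a lot of classical philosophy, "
    "from Aristotle to Wittgenstein. I'm an amateur historian, logical and "
    "technical, in my late 30s, from the East Coast but living in Seattle."
)
question = (
    "I'm thinking of a fictional book, one of my favorites. "
    "List 5 guesses for what it might be, and say which is most likely."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative assumption
    messages=[
        {"role": "user", "content": context},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```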
I’ve read it. There was some back-and-forth between him and Tegmark on the topic.
n=3 AI Risk Quick Math and Reasoning
Hey, first just wanted to say thanks and love and respect. The moderation team did such an amazing job bringing LW back from nearly defunct into the thriving place it is now. I’m not so active in posting now, but check the site logged out probably 3-5 times a week and my life is much better for it.
After that, a few ideas:
(1) While I don’t 100% agree with every point he made, I think Duncan Sabien did an incredible job with “Basics of Rationalist Discourse”—https://www.lesswrong.com/posts/XPv4sYrKnPzeJASuk/basics-of-rationalist-discourse-1 — perhaps a boiled-down canonical version of that could be created. Obviously the pressure to get something like that perfect would be high, so maybe something like “Our rough thoughts on how to be a good contributor here, which might get updated from time to time”. Or just link Duncan’s piece as “non-canonical for rules but a great starting place.” I’d hazard a guess that 90% of regular users here agree with at least 70% of it? If everyone followed all of Sabien’s guidelines, there’d be a rather high quality standard.
(2) I wonder if there are some reasonably precise questions you could ask new users to check for understanding, which could be there as a friendly-ish guidepost if a new user is going wayward. Your example—“(for example: ‘beliefs are probabilistic, not binary, and you should update them incrementally’)”—seems like a really good one; see the short sketch after this list for what that cashes out to concretely. Obviously those should be incredibly non-contentious, but something that would demonstrate a core understanding. Perhaps 3-5 of those, maybe something where a person formally writes up some commentary on their personal blog before posting?
(3) It’s fallen from its peak glory years, but sonsofsamhorn.net might be an interesting reference case to look at — it was one of the top analytical sports discussion forums for quite a while. At the height of its popularity, many users wanted to join but wouldn’t understand the basics—for instance, that a poorly-positioned player on defense making a flashy “diving play” to get the baseball wasn’t a sign of good defense, but rather a sign that that player has a fundamental weakness in their game, which could be investigated more deeply with statistics—and we can’t just trust flashy replay videos to be accurate indicators of defensive skill. (Defense in American baseball is particularly hard to measure and sometimes contentious.) What SOSH did was create an area called “The Sandbox” which was relatively unrestricted — spam and abuse still weren’t permitted of course, but the standard of rigor was a lot lower. Regular members would engage in Sandbox threads from time to time, and users who made excellent posts and comments in The Sandbox would get invited to full membership. Probably not needed at the current scale level, but might be worth starting to think about for a long-term solution if LW keeps growing.
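(Returning to the example in (2), as promised: a toy sketch of what “update beliefs incrementally” cashes out to, via Bayes’ rule. The function and all the numbers are made up purely for illustration.)

```python
# Toy illustration of "beliefs are probabilistic, not binary, and you
# should update them incrementally": repeated Bayesian updates on
# independent pieces of evidence. All numbers are illustrative.
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) from a prior and the two likelihoods."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1.0 - prior) * p_evidence_if_false
    return numerator / denominator

belief = 0.50  # start undecided
for _ in range(3):  # each observation is twice as likely if the hypothesis is true
    belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
    print(f"updated belief: {belief:.3f}")
# Prints 0.667, 0.800, 0.889 — the belief moves incrementally and never
# snaps to 0 or 1, which is the core of the example question.
```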
Thanks so much for everything you and the team do.
I had a personal experience that strongly suggests that this is at least partially true.
I had a mountaineering trip in a remote location that went off the rails pretty badly — it was turning into a classic “how someone dies in the woods” story. There was a road closure some miles short of where I was supposed to drive to, I hiked an extra 8 miles in, missed the correct trail, tried to take a shortcut, etc. etc.—it got ugly.
I felt an almost complete lack of distress or self-pity the entire time. I was just very methodically focused on orienting with my maps and GPS and getting to the next point.
I was surprised at how little negative internal discourse or negative emotions I felt. So, n=1 here, but it was very informative for me.
This isn’t necessarily “Come for the instrumentality, stay for the epistemology” — but, maybe.
Come for the productivity, stay for the philosophy
broke peace first.
Have you read “Metaphors We Live By” by Lakoff?
The first 20 pages or so are almost a must-read in my opinion.
Highly recommended, for you in particular.
A Google search with filetype:pdf will find you a copy. You can skim it fast — no need to close-read it — and you’ll get the gems.
Edit for exhortation: I think you’ll get a whole lot out of it such that I’d stake some “Sebastian has good judgment” points on it that you can subtract from my good judgment rep if I’m wrong. Seriously please check it out. It’s fast and worth it.
Huh. Interesting.
I had literally the exact same experience before I read your comment dxu.
I imagine it’s likely that Duncan could sort of burn out on being able to do this [1] since it’s pretty thankless difficult cognitive work. [2]
But it’s really insightful to watch. I do think he could potentially tune up [3] the diplomatic savvy a bit [4] since I think while his arguments are quite sound [5] I think he probably is sometimes making people feel a little bit stupid via his tone. [6]
Nevertheless, it’s really fascinating to read and observe. I feel vaguely like I’m getting smarter.
###
Rigor for the hell of it [7]:
[1] Hedged hypothesis.
[2] Two-premise assertion with a slightly subjective basis, but I think a true one.
[3] Elaborated on a slightly different but related point further in my comment below to him with an example.
[4] Vague, but I think acceptably so. To elaborate, I mean making one’s ideas palatable to the person one is disagreeing with, even while disagreeing. Note: I’m aware this doesn’t acknowledge the cost of doing so and of running that filter. Note also: I think, with skill and practice, this can be done without sacrificing the content of the message. It is almost always more time-consuming, though, in my experience.
[5] There’s some subjective judgments and utility function stuff going on, which is subjective naturally, but his core factual arguments, premises, and analyses basically all look correct to me.
[6] Hedged hypothesis. Note: doesn’t make a judgment either way as to whether it’s worth it or not.
[7] Added after writing to double-check I’m playing by the rules and clear up ambiguity. “For the hell of it” is just random stylishness and can be safely mentally deleted.
(Or perhaps, if I introspect closely, a way to not be committed to this level of rigor all the time. As stated below though, minor stylistic details aside, I’m always grateful whenever a member of a community attempts to encourage raising and preserving high standards.)
First, I think promoting and encouraging higher standards is, if you’ll pardon the idiom, doing God’s work.
Thank you.
I’m so appreciative any time any member of a community looks to promote and encourage higher standards. It takes a lot of work and gets a lot of pushback and I’m always super appreciative when I see someone work at it.
Second, and on a much smaller note, if I might offer some… stylistic feedback?
I’m only speaking here about my personal experience and heuristics. I’m not speaking for anyone else. One of my heuristics — which I darn well know isn’t perfectly accurate, but it’s nevertheless a heuristic I implicitly use all the time and which I know others use — is looking at language choices made when doing a quick skim of a piece as a first-pass filter of the writer’s credibility.
It’s often inaccurate. I know it. Still, I do it.
Your writing sometimes, when you care about an issue, seems to veer very slightly into resembling the writing of someone who is heated up about a topic in a way that leads to less productive and coherent thought.
My default reaction, then, is to discount the credibility of the message slightly.
I have to forcibly remind myself not to do that in your case, since you’re actually taking pretty cohesive and intelligent positions.
As a small example:
These are all terrible ideas.
These are all
terrible
ideas.
I’m going to say it a third time, because LessWrong is not yet a place where I can rely on my reputation for saying what I actually mean and then expect to be treated as if I meant the thing that I actually said: I recognize that these are terrible ideas.
I just — umm, in my personal… umm… filters… it doesn’t look good on a skim pass. I’m not saying emulate soulless garbage at the expense of clarity. Certainly not. I like your ideas a lot. I loved Concentration of Force.
I’m just saying that, on the margin, if you edited down some of the first-person language and strong expressions of affect a little bit in areas where you might be concerned about it being “not yet a place where I can rely on my reputation for saying what I actually mean”… it might help credibility.
I’ve written quite literally millions of words in my life, so I can say from firsthand experience that lines like that do successfully pre-empt stupid responses, so you get fewer dumb comments.
That’s true.
But I think it’s likely you take anywhere from a 10% to 50% penalty to credibility with many casual skimmers of threads who never bother to comment (which, incidentally, describes both the majority of readers and me personally in 2021).
I see things like the excerpted part, and I have to consciously remind myself not to apply a credibility discount to what you’re saying, because (in my experience and perhaps unfairly) I pattern match that style to less credible people and less credible writing.
Again, this is just a friendly stylistic note. I consider myself a fan. If I’m mistaken or it’d be expensive to implement an editing filter for toning that down, don’t bother — it’s not a huge deal in the grand scheme of things, and I’m really happy someone is working on this.
I suppose I’m just trying to improve the good guys’ effectiveness for concentration of force reasons, you could say.
Salut and thanks again.
(1) Physics generally seems like a trustworthy discipline—the level of rigor, replicability, lack of incentive for making false claims, etc. So base rate of trust is high in that domain.
(2) There doesn’t seem to be anyone claiming otherwise or any major anomalies around it, with the possible exception of how microscopic/quantum levels of things interact/aggregate/whatever with larger scale things.
(3) It would seem to need to be at least correct-ish for a lot of modern systems, like power plants, to work correctly.
(4) I’ve seen wood burn, put fuel into a car and then seen the car operate, etc.
(5) On top of all of that, if the equation turned out to be slightly wrong, it’s unlikely I’d do anything differently as a result so it’s not consequential to look very deeply into it (beyond general curiosity, learning, whatever).
As a personal convention, I don’t assign a probability above 99% to anything but the very most trivial claims (2+2=4). So I’m at 99% that E=mc² is correct enough to treat as true—though I’d look into it more closely if I were ever operating in an environment where it had meaningful practical implications.
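(To make (3) and (4) concrete, a quick worked instance of the equation — round, purely illustrative numbers.)

```latex
% Mass–energy equivalence evaluated for one kilogram, with c rounded to 3.0e8 m/s.
\[
  E = m c^2 = (1\,\mathrm{kg}) \times (3.0 \times 10^{8}\,\mathrm{m/s})^2
            = 9.0 \times 10^{16}\,\mathrm{J}
\]
% That is roughly three years of output from a 1 GW power plant, which is why
% the tiny mass defects in fission fuel suffice to run the plants in (3):
% if the equation were even slightly off, it would show up in their energy
% accounting.
```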