Affective Death Spirals
Many, many, many are the flaws in human reasoning which lead us to overestimate how well our beloved theory explains the facts. The phlogiston theory of chemistry could explain just about anything, so long as it didn’t have to predict it in advance. And the more phenomena you use your favored theory to explain, the truer your favored theory seems—has it not been confirmed by these many observations? As the theory seems truer, you will be more likely to question evidence that conflicts with it. As the favored theory seems more general, you will seek to use it in more explanations.
If you know anyone who believes that Belgium secretly controls the US banking system, or that they can use an invisible blue spirit force to detect available parking spaces, that’s probably how they got started.
(Just keep an eye out, and you’ll observe much that seems to confirm this theory . . .)
This positive feedback cycle of credulity and confirmation is indeed fearsome, and responsible for much error, both in science and in everyday life.
But it’s nothing compared to the death spiral that begins with a charge of positive affect—a thought that feels really good.
A new political system that can save the world. A great leader, strong and noble and wise. An amazing tonic that can cure upset stomachs and cancer.
Heck, why not go for all three? A great cause needs a great leader. A great leader should be able to brew up a magical tonic or two.
The halo effect is that any perceived positive characteristic (such as attractiveness or strength) increases perception of any other positive characteristic (such as intelligence or courage). Even when it makes no sense, or less than no sense.
Positive characteristics enhance perception of every other positive characteristic? That sounds a lot like how a fissioning uranium atom sends out neutrons that fission other uranium atoms.
Weak positive affect is subcritical; it doesn’t spiral out of control. An attractive person seems more honest, which, perhaps, makes them seem more attractive; but the effective neutron multiplication factor is less than one. Metaphorically speaking. The resonance confuses things a little, but then dies out.
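The subcritical/supercritical distinction can be sketched numerically. A minimal toy model, with made-up numbers (the multiplication factor `k` and the starting amount of affect are illustrative assumptions, not anything from the essay):

```python
# Toy model of the "neutron multiplication factor" analogy: each
# unit of positive affect generates k new units in the next round
# of interpretation.

def total_affect(k, initial=1.0, rounds=50):
    """Sum the affect generated over `rounds` of feedback."""
    total, current = 0.0, initial
    for _ in range(rounds):
        total += current
        current *= k  # each round's affect seeds the next round
    return total

# Subcritical (k < 1): the resonance dies out; the total converges
# toward initial / (1 - k), like a geometric series.
print(total_affect(k=0.5))   # ~2.0

# Supercritical (k > 1): each confirmation breeds more than one
# new confirmation, and affect grows without bound.
print(total_affect(k=1.5))
```

Metaphorically speaking, of course: the interesting claim is only that weak halo effects behave like the first call and a Great Thingy behaves like the second.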
With intense positive affect attached to the Great Thingy, the resonance touches everywhere. A believing Communist sees the wisdom of Marx in every hamburger bought at McDonald’s; in every promotion they’re denied that would have gone to them in a true worker’s paradise; in every election that doesn’t go to their taste; in every newspaper article “slanted in the wrong direction.” Every time they use the Great Idea to interpret another event, the Great Idea is confirmed all the more. It feels better—positive reinforcement—and of course, when something feels good, that, alas, makes us want to believe it all the more.
When the Great Thingy feels good enough to make you seek out new opportunities to feel even better about the Great Thingy, applying it to interpret new events every day, the resonance of positive affect is like a chamber full of mousetraps loaded with ping-pong balls.
You could call it a “happy attractor,” “overly positive feedback,” a “praise locked loop,” or “funpaper.” Personally I prefer the term “affective death spiral.”
Coming up next: How to resist an affective death spiral.[1]

[1] Hint: It’s not by refusing to ever admire anything again, nor by keeping the things you admire in safe little restricted magisteria.
Please define “magisteria” in the follow-up post. I tried three dictionaries without finding its definition.
Phlogiston was the cause of fire. It’s a reification error, is all. Like “power” in political discourse, which is supposed to be a thing you can acquire, or lose, or contest. Whole analyses depend on it.
magisteria (plural) – Realms of belief, for example, the realm of religious belief taken together with the realm of scientific belief. The absence of legitimate conflict between these realms was termed non-overlapping magisteria by Stephen Jay Gould.
I hope everyone was paying attention to that bit :-)
Please include judging how much to resist what may partly be a due to the spiral, so as not to overcompensate. Sometimes a “Great Thingy” is genuinely great.
Affective death spiral sounds like something to do with depression, praise locked loop may give a more accurate impression of the idea.
“Affective death spiral” sounds like the process by which I became a militant evangelical Bayesian. But I got better: now I’m only a fundamentalist Bayesian, and my faith does not require me to witness the Bayesian Gospel to those who aren’t interested.
I’ve always thought it was silly to call great football players “heroes.” But in fact, people can be heroes (in the sense of role models) in one area of life and not in others. You can admire and be inspired by a role model’s athleticism, intellectual honesty, kindness, etc. even though these are not usually found all together in one person.
This reminds me of a Karl Popper excerpt that I read several years ago. Popper levels similar charges against Marxism and Freudianism:
http://www.stephenjaygould.org/ctrl/popper_falsification.html
Thanks
Death spiral comes from airplanes and pilot disorientation leading to corrective action making a descending turn progressively worse. Without the disorientation, it doesn’t happen.
Flying blind without instruments leads to disorientation very fast, if you’re doing the flying. If you’re just a passenger, you reorient from what the pilot does; but it’s fatal if the pilot does that, without instruments to reorient himself from.
Disorientation is the key to take away.
What would you think of “Happy Death Spiral”?
I would probably avoid taunting it.
“Coming tomorrow: How to resist an affective death spiral. (Hint: It’s not by refusing to ever admire anything again, nor by keeping the things you admire in safe little restricted magisteria.)”
Hmmm… maybe you could consider scenarios in which the Great Thingy gets you killed or seriously injured? Or by extrapolating it out until you reach predictions that are obviously absurd (eg, my boss is part of a government anti-Marxist conspiracy)?
That works just fine until your boss actually is part of a government anti-Marxist conspiracy...
Coming tomorrow: How to resist an affective death spiral.
Listening to some really good satire or mockery of the thing admired would help—it would dampen your emotional commitment to it, while leaving your rational commitment intact.
Trying to picture a world—sci-fi if needed—where your pet theory is not true may help, as long as you can create a reasonable functioning world, not a caricature...
But I’m feeling it’s something far more cunning coming along...
On a more serious note: cut up your Great Thingy into smaller independent ideas, and treat them as independent.
For instance, a Marxist would cut up Marx’s Great Thingy into a labour theory of value, a theory of the political relations between classes, a theory of wages, and a theory of the ultimate political state of mankind. Then each of them should be assessed independently, and the truth or falsity of one should not halo on the others. If we can do that, we should be safe from the spiral, as each theory is too narrow to start a spiral on its own.
Same thing for every other Great Thingy out there.
But some Great Thingies might not be readily splittable. For instance, consider the whole edifice of theoretical physics, which is a pretty good candidate for a genuinely great Thingy (though not of quite the same type as most of the Great Thingies under discussion here). Each bit makes most sense in the context of the whole structure, and you can only appreciate why a given piece of evidence is evidence for one bit if you have all the other bits available to do the calculations with.
Of course, all this could just indicate that the whole edifice of theoretical physics (if taken as anything more than a black box for predicting observations) is a self-referential self-supporting delusion, and in a manner of speaking it’s not unlikely that that’s so—i.e., the next major advance in theoretical physics could well overturn all the fundamentals while leaving the empirical consequences almost exactly the same. Be that as it may, much of the value of theoretical physics comes from the fact that it is a Great Thingy and not just a collection of Little Thingies, and it seems like it would be a shame to adopt a mode of thinking that prevents us appreciating it as such.
Notably, regarding theoretical physics, there are at least nine models for modern theoretical physics, all of which can perfectly explain the empirical observations, and all of which are completely and totally contradictory to one another. (Okay, almost all of which. There are a few compatibilities scattered amongst them. Neorealism can work fine with the multiverse model, and there are a small handful of models which are derived from Bohr’s interpretations and are semicompatible with one another.)
I think “completely and totally contradictory” is putting it too strongly, since they do in fact all agree about all observations we have ever been able to make or ever anticipate being able to make. Extreme verificationists would argue that the bits they disagree about are meaningless :-).
They agree about observations—but we already have the observations, so that doesn’t mean much. Any theory worth thinking about isn’t going to disagree about those observations, which, after all, they are created to explain. They disagree in every way it is meaningful that they, theories about the reason why, MAY disagree—in the reasons why. And extreme verificationists can go take a leap off a logical cliff when it comes to discussing differences in the reasons why something may be.
“they do in fact all agree about all observations we have ever been able to make or ever anticipate being able to make.”
Not entirely true.
Nick: Oh, sorry, I forgot that there are still people who take the Copenhagen interpretation seriously. Though actually I suspect that they might just decree that observation by a reversible conscious observer doesn’t count. That would hardly be less plausible than the Copenhagen interpretation itself. :-)
(I also have some doubt as to whether sufficiently faithful reversibility is feasible. It’s not enough for the system to be restored to its prior state as far as macroscopic observations go; the reversal needs to be able to undo decoherence, so to speak. That seems like a very tall order.)
Adirian: the fact that their agreement-about-observations was predictable in advance doesn’t make it any less an agreement. (And if you’re talking only about the parts of those theories that are “theories about the reasons why”, bracketing all the agreements about what’s observed and how to calculate it, then I don’t think you are entitled to call the things that disagree completely “models for modern theoretical physics”.)
Nick—that proof works fine for any of the neorealist models, in which Everett’s model is, variably, placed. The problem is in interpretation. Remember that there is great disagreement in the Copenhagen models about where, exactly, waveform collapse happens—after all, if one treats the quantum measurement device itself as being in a quantum state, then 100% correlation may be acceptable. (Because the waveform state of the computer wasn’t collapsed until the first and third measurements were examined together.)
The real problem here is that the Copenhagen models are effectively unscientific, since it is fundamentally impossible to disprove the concept that anything that is unmeasured is in an uncertain/undefined state. It’s an intellectual parlour trick, and shouldn’t be taken seriously.
At the same time though, not calculating a value until something actually needs it is exactly the kind of efficiency hack one would really want to implement if they were going to simulate an entire universe...
So if we are in some level of sub-reality, that would make it much more likely that the model is correct, even if there’s no way for us to actually test it...
So from a practical point of view, it comes down entirely to which model lets us most effectively predict things. Since that’s what we actually care about. I’ll take a collection of “parlour tricks” that can tell me things about the future with high confidence over a provably self-consistent system that is wrong more often.
Upvoted because, while I don’t know the details of the Copenhagen models, if it is true they rely on “the concept that anything that is unmeasured is in an uncertain/undefined state”, then until some method of testing this state is devised the theories are effectively pseudo-science.
The Popper essay, originally mentioned above, describes the problem nicely.
It doesn’t speak to the truth or untruth of the theory, just to its scientific status, or lack thereof. In a nutshell, if it’s not testable, it’s not scientific, whether it is true or not. This is why it should not be taken too seriously, at least not until it becomes testable.
the fact that their agreement-about-observations was predictable in advance doesn’t make it any less an agreement. (And if you’re talking only about the parts of those theories that are “theories about the reasons why”, bracketing all the agreements about what’s observed and how to calculate it, then I don’t think you are entitled to call the things that disagree completely “models for modern theoretical physics”.)
It renders that agreement meaningless. If you curve-fit seven points, and come up with a thousand different formulas, the fact that each of these thousand formulas includes each of those seven points produces exactly no additional information. The fact of the matter is that we have discarded every formula which DIDN’T describe those points—that the remaining formulas do describe them tells us absolutely nothing about either the points or the relative value of the formulas with respect to one another.
At best, out of N formulas, each has a 1/N chance of being correct. (At worst, none of the formulas is correct.)
Technical note: Occam factors (and prior probabilities generally) can cause these chances to deviate from 1/N.
I didn’t mean specifically, I meant on average. My apologies for the poor phrasing. Yes, any individual formula’s odds of being correct can vary. (To deny this would be to deny Bayesian reasoning, and I think I might get mugged here if I tried that.)
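The point about priors can be made concrete with a toy Bayesian update. This sketch assumes three hypothetical models that fit the data equally well; the prior values are invented for illustration:

```python
likelihoods = [1.0, 1.0, 1.0]        # all three models fit the data equally
uniform_prior = [1/3, 1/3, 1/3]
occam_prior = [0.6, 0.3, 0.1]        # simpler models favored up front

def posterior(prior, likelihood):
    """Bayes' rule: posterior is prior times likelihood, normalized."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# With a uniform prior, equally good fits give each model 1/N.
print(posterior(uniform_prior, likelihoods))

# With unequal priors (Occam factors), the chances deviate from 1/N
# even though every model explains the observations equally well.
print(posterior(occam_prior, likelihoods))
```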
Hey there. I won’t ever be returning here again. Time never began. Time will never end. We will never exist. Thank you for your time.
Don’t know if that’ll solve matters, just trying. This does seem very Popperian—in a bad way, in that it’s an oversimplified approach to theory-formation. What do you think about Kuhn, who finds this kind of reinforcement in normal, productive science—but still allows a distinction between evidence-based science and entirely circular nonscience? What about the idea that we have ‘rings’ of beliefs, and will sacrifice any number of ‘outer-ring’ theory detail to preserve our core beliefs?
Yeah, the ‘help’ was a futile attempt to close the open italics tag. Didn’t work, obviously.
Adirian (sorry for not noticing your response sooner), the situation is more like: we have a million data points and several models that all fit those points very precisely and all agree very precisely on how to interpolate between those points—but if we try to use them to extrapolate wildly, into regions where in fact we have no way of getting any real data points, they diverge. It also turns out that within the region where we can actually get data—where the models agree—they don’t agree merely by coincidence, but turn out to be mathematically equivalent to one another.
You are welcome to describe this situation by saying that the models “completely and totally contradictory”, but I think that would be pretty eccentric.
(This is of course merely an analogy. I think the reality is even less favourable to your case.)
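The curve-fitting analogy in this exchange can be sketched directly: any number of models can agree exactly at the observed points and still diverge under extrapolation. The data points and models below are invented for illustration:

```python
xs = [0, 1, 2, 3, 4, 5, 6]       # seven observed inputs

def base_model(x):
    """One model that fits all the observed data."""
    return 2 * x + 1

def rival_model(x, c=1.0):
    """A rival that agrees with base_model at every observed point."""
    # Add a term that is zero at each observed x, so the rival
    # cannot be distinguished from base_model by the data alone.
    bump = 1.0
    for xi in xs:
        bump *= (x - xi)
    return base_model(x) + c * bump

# Perfect agreement at every data point:
assert all(base_model(x) == rival_model(x) for x in xs)

# ...but extrapolate to x = 10 and the two "theories" diverge wildly:
print(base_model(10), rival_model(10))
```

Varying `c` gives infinitely many such rivals, which is the sense in which agreement on already-collected observations carries little information by itself.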
ADS may be observed, most tragically, in the history of “facilitated communication.”
http://www.cqc.state.ny.us/hottopics/fcwheel.htm
I personally prefer The Law of Fives: “ALL THINGS HAPPEN IN FIVES, OR ARE DIVISIBLE BY OR ARE MULTIPLES OF FIVE, OR ARE SOMEHOW DIRECTLY OR INDIRECTLY APPROPRIATE TO 5.”
With the corollary: “I find the Law of Fives to be more and more manifest the harder I look.”
cf. Foucault’s Pendulum; the entire novel.
Ever since reading Hitchhikers Guide to the Galaxy I’ve seen the number 42 pop up at an alarming rate… Though I guess people use that number more than average for that very same reason. (I know I do!)
Good article, but why do you only talk about positive “death” spirals? It would be just the same with negative thoughts; with added insecurity it would be even harder to break out.
In the 20th century, Richard Feynman pointed out that there may be some problem with how we patch our physics by cutting out the neighbourhoods. Nowadays we are patching General Relativity with dark matter (it wasn’t predicted, really) and even dark energy. It looks like we’ll have to patch some “too-fast neutrino in matter” phenomenon as well.
I am not claiming this “patching” business is something intrinsically right and beautiful. Never. We’ll have to propose some new theories. But... before we have some better theory, patching General Relativity seems just the thing to do. Maybe the only thing to do, sorry.
An average scientist (if there is such a thing) isn’t expected to propose something better than General Relativity. Not really. So, even as we teach scientists, most of ’em wouldn’t need to remember that “patching” old theories isn’t the right thing to do in the long run, as they may do nothing about it. Those with Nobel Prize ambitions would be wise to remember it, though.
Yeah, “dark matter” really bothers me. Which seems more likely?
That there are massive quantities of invisible matter in the universe that only interacts via gravitation? And happens to be spread around in about the same density distribution as all the regular matter?
Or that our estimate for the value of the universal gravitational constant is either off a little bit or not quite as constant as we think?
The former sounds a little too much like an invisible dragon to me. Which doesn’t make it impossible, but exotic, nigh-undetectable forms of matter just don’t seem as plausible as observation error to me.
Your second sentence is a pretty straightforward consequence of your first.
That is a reasonable possibility, although if it only interacts with normal matter via gravitation, which is relatively weak, then I’d expect to see its dispersal lag significantly behind, say, a supernova. And that lag would seem likely to result in such events skewing the distribution over time.
Unless we’re also going to postulate that dark matter has its own energy, chemistry, and physics which resemble those of normal matter so closely that such things happen in both realms at the same time...
Measurement error and/or gravity having some kind of propagation properties we haven’t worked out yet still seems like a contender for the explanation, unless they have, indeed found pockets in the universe with differing amounts of excess gravitation that match what one would expect in the wake of fast-moving objects. I haven’t seen any reports about that myself, but I can’t say I’m an insider on the latest research or anything.
The whole point of dark matter is to hold galaxies together through gravity. And it is posited as having exotic properties apart from gravity.
Sigh. He doesn’t know critical thinking...
Is there such a thing as negative-feeling death spirals, where, say, fear or mental illness keeps pushing you towards a terrible thought, concept, or idea?
I think so. It’s a positive feedback loop either way.
Applying this to my own beliefs, I seem to be trapped in an affective death spiral around science and rationality. In fact, just as you described, this spiral has led me to seek out new opportunities to apply and engage with science and rationality, shaping not just my career but the entirety of my life in the process. I have a feeling most folks around here can relate to these statements.
So I wonder, are affective death spirals always a bad thing? More specifically, should they always be avoided? Do seemingly positive affective death spirals carry risk of negative externalities?
One place where my own obsession with science and rationality seems to get in the way of things is in highly emotional interactions with other people. Often, my attempts to apply science and rationality to statements made during a heated argument simply make matters worse. Same goes when consoling a friend or partner about something sad; few in such situations are actually interested in applying the scientific method.
Then again, I also used science and rationality to get out of this pattern. I noticed my default approach wasn’t working, came up with new approaches, and tested them in different situations as they arose. After evaluating the results, admittedly with little in the way of statistical analysis, I landed on a robust system for dealing with highly emotional interpersonal encounters. (The biggest hurdle has been actually remembering to use it rather than defaulting to what feels right according to the affective death spiral around science and rationality which rules my life.)
Edit: I continued onto the next article in this series. I now feel surprisingly prescient and a little silly.
I also found it a good practice to generate your own answers to how you would escape the happy death spiral, before reading the next article.
My answer:
Remember that powerful theories are the ones that eliminate many options, not ones that explain everything.
I think it is a reasonably good answer, as it somewhat contains 3⁄5 of the points:
Thinking about the specifics of the causal chain instead of the good or bad feelings;
Not rehearsing evidence; and
Not adding happiness from claims that “you can’t prove are wrong”;
This effect is really noticeable when you’re manic.