Affective Death Spirals
Many, many, many are the flaws in human reasoning which lead us to overestimate how well our beloved theory explains the facts. The phlogiston theory of chemistry could explain just about anything, so long as it didn’t have to predict it in advance. And the more phenomena you use your favored theory to explain, the truer your favored theory seems—has it not been confirmed by these many observations? As the theory seems truer, you will be more likely to question evidence that conflicts with it. As the favored theory seems more general, you will seek to use it in more explanations.
If you know anyone who believes that Belgium secretly controls the US banking system, or that they can use an invisible blue spirit force to detect available parking spaces, that’s probably how they got started.
(Just keep an eye out, and you’ll observe much that seems to confirm this theory . . .)
This positive feedback cycle of credulity and confirmation is indeed fearsome, and responsible for much error, both in science and in everyday life.
But it’s nothing compared to the death spiral that begins with a charge of positive affect—a thought that feels really good.
A new political system that can save the world. A great leader, strong and noble and wise. An amazing tonic that can cure upset stomachs and cancer.
Heck, why not go for all three? A great cause needs a great leader. A great leader should be able to brew up a magical tonic or two.
The halo effect is that any perceived positive characteristic (such as attractiveness or strength) increases perception of any other positive characteristic (such as intelligence or courage). Even when it makes no sense, or less than no sense.
Positive characteristics enhance perception of every other positive characteristic? That sounds a lot like how a fissioning uranium atom sends out neutrons that fission other uranium atoms.
Weak positive affect is subcritical; it doesn’t spiral out of control. An attractive person seems more honest, which, perhaps, makes them seem more attractive; but the effective neutron multiplication factor is less than one. Metaphorically speaking. The resonance confuses things a little, but then dies out.
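The chain-reaction analogy can be made concrete with a toy simulation. Everything below is invented for illustration (the function name, the seed value, the idea of measuring affect in "units"); it only shows the mathematical point that a multiplication factor below one dies out while one above one runs away:

```python
# Toy model of the "resonance" analogy: each unit of positive affect
# generates k new units on the next round, where k plays the role of
# the metaphorical "effective neutron multiplication factor."

def total_affect(k: float, rounds: int, seed: float = 1.0) -> float:
    """Sum the affect generated over `rounds` iterations of the loop."""
    total, current = 0.0, seed
    for _ in range(rounds):
        total += current   # accumulate this round's affect
        current *= k       # each unit spawns k units next round
    return total

# Subcritical (k < 1): the resonance dies out; the total converges.
# Supercritical (k > 1): each round amplifies the last; the total explodes.
```

With k = 0.5 the total never exceeds twice the seed, no matter how many rounds pass; with k = 1.5 it grows without bound — the difference between an attractive person seeming a bit more honest, and a Great Thingy confirming itself everywhere you look.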
With intense positive affect attached to the Great Thingy, the resonance touches everywhere. A believing Communist sees the wisdom of Marx in every hamburger bought at McDonald’s; in every promotion they’re denied that would have gone to them in a true worker’s paradise; in every election that doesn’t go to their taste; in every newspaper article “slanted in the wrong direction.” Every time they use the Great Idea to interpret another event, the Great Idea is confirmed all the more. It feels better—positive reinforcement—and of course, when something feels good, that, alas, makes us want to believe it all the more.
When the Great Thingy feels good enough to make you seek out new opportunities to feel even better about the Great Thingy, applying it to interpret new events every day, the resonance of positive affect is like a chamber full of mousetraps loaded with ping-pong balls.
You could call it a “happy attractor,” “overly positive feedback,” a “praise locked loop,” or “funpaper.” Personally I prefer the term “affective death spiral.”
Coming up next: How to resist an affective death spiral.¹

¹ Hint: It’s not by refusing to ever admire anything again, nor by keeping the things you admire in safe little restricted magisteria.
Please define “magisteria” in the follow-up post. I tried three dictionaries without finding its definition.
Phlogiston was the cause of fire. It’s a reification error, is all. Like “power” in political discourse, which is supposed to be a thing you can acquire, or lose, or contest. Whole analyses depend on it.
magisteria (plural) – Realms of belief; for example, the realm of religious belief taken together with the realm of scientific belief. The absence of legitimate conflict between these realms was termed non-overlapping magisteria by Stephen Jay Gould.
I hope everyone was paying attention to that bit :-)
Please include judging how much to resist what may partly be due to the spiral, so as not to overcompensate. Sometimes a “Great Thingy” is genuinely great.
“Affective death spiral” sounds like something to do with depression; “praise locked loop” may give a more accurate impression of the idea.
“Affective death spiral” sounds like the process by which I became a militant evangelical Bayesian. But I got better: now I’m only a fundamentalist Bayesian, and my faith does not require me to witness the Bayesian Gospel to those who aren’t interested.
I’ve always thought it was silly to call great football players “heroes.” But in fact, people can be heroes (in the sense of role models) in one area of life and not in others. You can admire and be inspired by a role model’s athleticism, intellectual honesty, kindness, etc. even though these are not usually found all together in one person.
This reminds me of a Karl Popper excerpt that I read several years ago. Popper levels similar charges against Marxism and Freudianism:
http://www.stephenjaygould.org/ctrl/popper_falsification.html
Thanks
Death spiral comes from airplanes and pilot disorientation leading to corrective action making a descending turn progressively worse. Without the disorientation, it doesn’t happen.
Flying blind without instruments leads to disorientation very fast, if you’re doing the flying. If you’re just a passenger, you reorient from what the pilot does; but it’s fatal if the pilot does that, without instruments to reorient himself from.
Disorientation is the key to take away.
What would you think of “Happy Death Spiral”?
I would probably avoid taunting it.
“Coming tomorrow: How to resist an affective death spiral. (Hint: It’s not by refusing to ever admire anything again, nor by keeping the things you admire in safe little restricted magisteria.)”
Hmmm… maybe you could consider scenarios in which the Great Thingy gets you killed or seriously injured? Or extrapolate it out until you reach predictions that are obviously absurd (e.g., my boss is part of a government anti-Marxist conspiracy)?
That works just fine until your boss actually is part of a government anti-Marxist conspiracy...
Coming tomorrow: How to resist an affective death spiral.
Listening to some really good satire or mockery of the thing admired would help—it would dampen your emotional commitment to it, while leaving your rational commitment intact.
Trying to picture a world—sci-fi if needed—where your pet theory is not true may help, as long as you can create a reasonable functioning world, not a caricature...
But I’m feeling it’s something far more cunning coming along...
On a more serious note: cut up your Great Thingy into smaller independent ideas, and treat them as independent.
For instance, a Marxist would cut up Marx’s Great Thingy into a labour theory of value, a theory of the political relations between classes, a theory of wages, and a theory on the ultimate political state of mankind. Then each of them should be assessed independently, and the truth or falsity of one should not halo on the others. If we can do that, we should be safe from the spiral, as each theory is too narrow to start a spiral on its own.
Same thing for every other Great Thingy out there.
But some Great Thingies might not be readily splittable. For instance, consider the whole edifice of theoretical physics, which is a pretty good candidate for a genuinely great Thingy (though not of quite the same type as most of the Great Thingies under discussion here). Each bit makes most sense in the context of the whole structure, and you can only appreciate why a given piece of evidence is evidence for one bit if you have all the other bits available to do the calculations with.
Of course, all this could just indicate that the whole edifice of theoretical physics (if taken as anything more than a black box for predicting observations) is a self-referential self-supporting delusion, and in a manner of speaking it’s not unlikely that that’s so—i.e., the next major advance in theoretical physics could well overturn all the fundamentals while leaving the empirical consequences almost exactly the same. Be that as it may, much of the value of theoretical physics comes from the fact that it is a Great Thingy and not just a collection of Little Thingies, and it seems like it would be a shame to adopt a mode of thinking that prevents us appreciating it as such.
Notably, regarding theoretical physics, there are at least nine models for modern theoretical physics, all of which can perfectly explain the empirical observations, and all of which are completely and totally contradictory to one another. (Okay, almost all of which. There are a few compatibilities scattered amongst them. Neorealism can work fine with the multiverse model, and there are a small handful of models which are derived from Bohr’s interpretations and are semicompatible with one another.)
I think “completely and totally contradictory” is putting it too strongly, since they do in fact all agree about all observations we have ever been able to make or ever anticipate being able to make. Extreme verificationists would argue that the bits they disagree about are meaningless :-).
They agree about observations—but we already have the observations, so that doesn’t mean much. Any theory worth thinking about isn’t going to disagree about those observations, which, after all, they are created to explain. They disagree in every way it is meaningful that they, theories about the reason why, MAY disagree—in the reasons why. And extreme verificationists can go take a leap off a logical cliff when it comes to discussing differences in the reasons why something may be.
“they do in fact all agree about all observations we have ever been able to make or ever anticipate being able to make.”
Not entirely true.
Nick: Oh, sorry, I forgot that there are still people who take the Copenhagen interpretation seriously. Though actually I suspect that they might just decree that observation by a reversible conscious observer doesn’t count. That would hardly be less plausible than the Copenhagen interpretation itself. :-)
(I also have some doubt as to whether sufficiently faithful reversibility is feasible. It’s not enough for the system to be restored to its prior state as far as macroscopic observations go; the reversal needs to be able to undo decoherence, so to speak. That seems like a very tall order.)
Adirian: the fact that their agreement-about-observations was predictable in advance doesn’t make it any less an agreement. (And if you’re talking only about the parts of those theories that are “theories about the reasons why”, bracketing all the agreements about what’s observed and how to calculate it, then I don’t think you are entitled to call the things that disagree completely “models for modern theoretical physics”.)
Nick—that proof works fine for any of the neorealist models, in which Everett’s model is, variably, placed. The problem is in interpretation. Remember that there is great disagreement in the Copenhagen models about where, exactly, waveform collapse happens—after all, if one treats the quantum measurement device itself as being in a quantum state, then 100% correlation may be acceptable. (Because the waveform state of the computer wasn’t collapsed until the first and third measurements were examined together.)
The real problem here is that the Copenhagen models are effectively unscientific, since it is fundamentally impossible to disprove the concept that anything that is unmeasured is in an uncertain/undefined state. It’s an intellectual parlour trick, and shouldn’t be taken seriously.
At the same time though, not calculating a value until something actually needs it is exactly the kind of efficiency hack one would really want to implement if they were going to simulate an entire universe...
So if we are in some level of sub-reality that would make it much more likely that the model is correct, even if there’s no way for us to actually test it...
So from a practical point of view, it comes down entirely to which model lets us most effectively predict things. Since that’s what we actually care about. I’ll take a collection of “parlour tricks” that can tell me things about the future with high confidence over a provably self-consistent system that is wrong more often.
Upvoted because, while I don’t know the details of the Copenhagen models, if it is true they rely on “the concept that anything that is unmeasured is in an uncertain/undefined state”, then until some method of testing this state is devised the theories are effectively pseudo-science.
The Popper essay, originally mentioned above, describes the problem nicely.
It doesn’t speak to the truth or untruth of the theory, just to its scientific status, or lack thereof. In a nutshell, if it’s not testable, it’s not scientific, whether it is true or not. This is why it should not be taken too seriously, at least not until it becomes testable.
the fact that their agreement-about-observations was predictable in advance doesn’t make it any less an agreement. (And if you’re talking only about the parts of those theories that are “theories about the reasons why”, bracketing all the agreements about what’s observed and how to calculate it, then I don’t think you are entitled to call the things that disagree completely “models for modern theoretical physics”.)
It renders that agreement meaningless. If you curve-fit seven points, and come up with a thousand different formulas, the fact that each of these thousand formulas includes each of those seven points produces exactly no additional information. The fact of the matter is that we have discarded every formula which DIDN’T describe those points—that the remaining formulas do describe them tells us absolutely nothing about either the points or the relative value of the formulas with respect to one another.
At best, out of N formulas, each has a 1/N chance of being correct. (At worst, none of the formulas is correct.)
Technical note: Occam factors (and prior probabilities generally) can cause these chances to deviate from 1/N.
I didn’t mean specifically, I meant on average. My apologies for the poor phrasing. Yes, any individual formula’s odds of being correct can vary. (To deny this would be to deny Bayesian reasoning, and I think I might get mugged here if I tried that.)
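The curve-fitting argument can be sketched concretely. The construction below is a standard illustration, not anything specific to this thread: starting from one formula that fits seven data points, adding any multiple of a polynomial that vanishes at every data point yields a rival formula that agrees perfectly on the data yet diverges arbitrarily far away from it. All the particular numbers and function names are invented for the example.

```python
xs = [0, 1, 2, 3, 4, 5, 6]          # seven sample points

def vanishing(x: float) -> float:
    """Product (x - x_i): zero at every one of the seven data points."""
    result = 1.0
    for xi in xs:
        result *= (x - xi)
    return result

def f(x: float) -> float:
    """One candidate 'law' that fits the data."""
    return 2 * x + 1

def g(x: float) -> float:
    """A rival law: fits the same seven points exactly, by construction."""
    return f(x) + 0.1 * vanishing(x)

# Agreement on the data points tells us nothing about which law is right:
agree = all(abs(f(x) - g(x)) < 1e-9 for x in xs)
diverge = abs(f(10) - g(10))   # large disagreement off the data
```

Here `agree` is true, yet at x = 10 the two formulas differ by tens of thousands — exactly the sense in which models that merely agree on existing observations supply no additional information about extrapolations beyond them.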
Hey there. I won’t ever be returning here again. Time never began. Time will never end. We will never exist. Thank you for your time.
Don’t know if that’ll solve matters, just trying. This does seem very Popperian—in a bad way, in that it’s an oversimplified approach to theory-formation. What do you think about Kuhn, who finds this kind of reinforcement in normal, productive science—but still allows a distinction between evidence-based science and entirely circular nonscience? What about the idea that we have ‘rings’ of beliefs, and will sacrifice any number of ‘outer-ring’ theory detail to preserve our core beliefs?
Yeah, the ‘help’ was a futile attempt to close the open italics tag. Didn’t work, obviously.
Adirian (sorry for not noticing your response sooner), the situation is more like: we have a million data points and several models that all fit those points very precisely and all agree very precisely on how to interpolate between those points—but if we try to use them to extrapolate wildly, into regions where in fact we have no way of getting any real data points, they diverge. It also turns out that within the region where we can actually get data—where the models agree—they don’t agree merely by coincidence, but turn out to be mathematically equivalent to one another.
You are welcome to describe this situation by saying that the models are “completely and totally contradictory,” but I think that would be pretty eccentric.
(This is of course merely an analogy. I think the reality is even less favourable to your case.)
ADS may be observed, most tragically, in the history of “facilitated communication.”
http://www.cqc.state.ny.us/hottopics/fcwheel.htm
I personally prefer The Law of Fives: “ALL THINGS HAPPEN IN FIVES, OR ARE DIVISIBLE BY OR ARE MULTIPLES OF FIVE, OR ARE SOMEHOW DIRECTLY OR INDIRECTLY APPROPRIATE TO 5.”
With the corollary: “I find the Law of Fives to be more and more manifest the harder I look.”
cf. Foucault’s Pendulum; the entire novel.
Ever since reading Hitchhiker’s Guide to the Galaxy, I’ve seen the number 42 pop up at an alarming rate… Though I guess people use that number more than average for that very same reason. (I know I do!)
Good article, but why do you only talk about positive “death” spirals? It would be just the same with negative thoughts; with added insecurity it would be even harder to break out.
In the 20th century, Richard Feynman pointed out that there may be some problem with how we patch our physics by cutting out the neighbourhoods. Nowadays we are patching General Relativity with dark matter (it wasn’t predicted, really) and even dark energy. It looks like we’ll have to patch some “too-fast neutrino in matter” phenomenon as well.
I am not claiming this “patching” business is something intrinsically right and beautiful. Never. We’ll have to propose some new theories. But… before we have some better theory, patching General Relativity seems just the thing to do. Maybe the only thing to do, sorry.
An average scientist (if there is such a thing) isn’t expected to propose something better than General Relativity. Not really. So, even as we teach scientists, most of them wouldn’t need to remember that “patching” old theories isn’t the right thing to do in the long run, as they can do nothing about it. Those with Nobel Prize ambitions would be wise to remember it, though.
Yeah, “dark matter” really bothers me. Which seems more likely?
That there are massive quantities of invisible matter in the universe that only interacts via gravitation? And happens to be spread around in about the same density distribution as all the regular matter?
Or that our estimate for the value of the universal gravitational constant is either off a little bit or not quite as constant as we think?
The former sounds a little too much like an invisible dragon to me. Which doesn’t make it impossible, but exotic, nigh-undetectable forms of matter just doesn’t seem as plausible as observation error to me.
Your second sentence is a pretty straightforward consequence of your first.
That is a reasonable possibility, although if it only interacts with normal matter via gravitation, which is relatively weak, then I’d expect to see its dispersal lag significantly behind, say, a supernova. And that lag would seem likely to result in such events skewing the distribution over time.
Unless we’re also going to postulate that dark matter has its own energy, chemistry, and physics which resemble those of normal matter so closely that such things happen in both realms at the same time...
Measurement error and/or gravity having some kind of propagation properties we haven’t worked out yet still seems like a contender for the explanation, unless they have, indeed found pockets in the universe with differing amounts of excess gravitation that match what one would expect in the wake of fast-moving objects. I haven’t seen any reports about that myself, but I can’t say I’m an insider on the latest research or anything.
The whole point of dark matter is to hold galaxies together through gravity. And it is posited as having exotic properties apart from gravity.
Sigh. He doesn’t know critical thinking...
Is there such a thing as negative-feeling death spirals, where, say, fear or mental illness keeps pushing you towards a terrible thought, concept, or idea?
I think so. It’s a positive feedback loop either way.
Applying this to my own beliefs, I seem to be trapped in an affective death spiral around science and rationality. In fact, just as you described, this spiral has led me to seek out new opportunities to apply and engage with science and rationality, shaping not just my career but the entirety of my life in the process. I have a feeling most folks around here can relate to these statements.
So I wonder, are affective death spirals always a bad thing? More specifically, should they always be avoided? Do seemingly positive affective death spirals carry risk of negative externalities?
One place where my own obsession with science and rationality seems to get in the way of things is in highly emotional interactions with other people. Often, my attempts to apply science and rationality to statements made during a heated argument simply make matters worse. Same goes when consoling a friend or partner about something sad; few in such situations are actually interested in applying the scientific method.
Then again, I also used science and rationality to get out of this pattern. I noticed my default approach wasn’t working, came up with new approaches, and tested them in different situations as they arose. After evaluating the results, admittedly with little in the way of statistical analysis, I landed on a robust system for dealing with highly emotional interpersonal encounters. (The biggest hurdle has been actually remembering to use it rather than defaulting to what feels right according to the affective death spiral around science and rationality which rules my life.)
Edit: I continued onto the next article in this series. I now feel surprisingly prescient and a little silly.
I also found it a good practice to generate your own answers to how you would escape the happy death spiral, before reading the next article.
My answer:
Remember that powerful theories are the ones that eliminate many options, not ones that explain everything.
I think it is a reasonably good answer, as it somewhat contains 3⁄5 of the points:
Thinking about the specifics of the causal chain instead of the good or bad feelings;
Not rehearsing evidence; and
Not adding happiness from claims that “you can’t prove are wrong”;
This effect is really noticeable when you’re manic.