Resist the Happy Death Spiral
Once upon a time, there was a man who was convinced that he possessed a Great Idea. Indeed, as the man thought upon the Great Idea more and more, he realized that it was not just a great idea, but the most wonderful idea ever. The Great Idea would unravel the mysteries of the universe, supersede the authority of the corrupt and error-ridden Establishment, confer nigh-magical powers upon its wielders, feed the hungry, heal the sick, make the whole world a better place, etc., etc., etc.
The man was Francis Bacon, his Great Idea was the scientific method, and he was the only crackpot in all history to claim that level of benefit to humanity and turn out to be completely right.1
That’s the problem with deciding that you’ll never admire anything that much: Some ideas really are that good. Though no one has fulfilled claims more audacious than Bacon’s; at least, not yet.
But then how can we resist the happy death spiral with respect to Science itself? The happy death spiral starts when you believe something is so wonderful that the halo effect leads you to find more and more nice things to say about it, making you see it as even more wonderful, and so on, spiraling up into the abyss. What if Science is in fact so beneficial that we cannot acknowledge its true glory and retain our sanity? Sounds like a nice thing to say, doesn’t it? Oh no it’s starting ruuunnnnn . . .
If you retrieve the standard cached deep wisdom for don’t go overboard on admiring science, you will find thoughts like “Science gave us air conditioning, but it also made the hydrogen bomb” or “Science can tell us about stars and biology, but it can never prove or disprove the dragon in my garage.” But the people who originated such thoughts were not trying to resist a happy death spiral. They weren’t worrying about their own admiration of science spinning out of control. Probably they didn’t like something science had to say about their pet beliefs, and sought ways to undermine its authority.
The standard negative things to say about science aren’t likely to appeal to someone who genuinely feels the exultation of science—that’s not the intended audience. So we’ll have to search for other negative things to say instead.
But if you look selectively for something negative to say about science—even in an attempt to resist a happy death spiral—do you not automatically convict yourself of rationalization? Why would you pay attention to your own thoughts, if you knew you were trying to manipulate yourself?
I am generally skeptical of people who claim that one bias can be used to counteract another. It sounds to me like an automobile mechanic who says that the motor is broken on your right windshield wiper, but instead of fixing it, they’ll just break your left windshield wiper to balance things out. This is the sort of cleverness that leads to shooting yourself in the foot. Whatever the solution, it ought to involve believing true things, rather than believing you believe things that you believe are false.
Can you prevent the happy death spiral by restricting your admiration of Science to a narrow domain? Part of the happy death spiral is seeing the Great Idea everywhere—thinking about how Communism could cure cancer if it were only given a chance. Probably the single most reliable sign of a cult guru is that the guru claims expertise, not in one area, not even in a cluster of related areas, but in everything. The guru knows what cult members should eat, wear, do for a living; who they should have sex with; which art they should look at; which music they should listen to . . .
Unfortunately for this plan, most people fail miserably when they try to describe the neat little box that science has to stay inside. The usual trick, “Hey, science won’t cure cancer,” isn’t going to fly. “Science has nothing to say about a parent’s love for their child”—sorry, that’s simply false. If you try to sever science from e.g. parental love, you aren’t just denying cognitive science and evolutionary psychology. You’re also denying Martine Rothblatt’s founding of United Therapeutics to seek a cure for her daughter’s pulmonary hypertension.2 Science is legitimately related, one way or another, to just about every important facet of human existence.
All right, so what’s an example of a false nice claim you could make about science?
One false claim, in my humble opinion, is that science is so wonderful that scientists shouldn’t even try to take ethical responsibility for their work—it will turn out well in the end regardless. It appears to me that this misunderstands the process whereby science benefits humanity. Scientists are human; they have prosocial concerns just like most other people, and this is at least part of why science ends up doing more good than evil.
But that point is, evidently, not beyond dispute. So here’s a simpler false nice claim: “A cancer patient can be cured just through the publishing of enough journal papers.” Or: “Sociopaths could become fully normal, if they just committed themselves to never believing anything without replicated experimental evidence with p < 0.05.”
The way to avoid believing such statements isn’t an affective cap, deciding that science is only slightly nice. Nor searching for reasons to believe that publishing journal articles causes cancer. Nor believing that science has nothing to say about cancer one way or the other.
Rather, if you know with enough specificity how science works, then you know that while it may be possible for “science to cure cancer,” a cancer patient writing journal papers isn’t going to experience a miraculous remission. That specific proposed chain of cause and effect is not going to work out.
The happy death spiral is only an emotional problem because of a perceptual problem, the halo effect, that makes us more likely to accept future positive claims once we’ve accepted an initial positive claim. We can’t get rid of this effect just by wishing; it will probably always influence us a little. But we can manage to slow down, stop, consider each additional nice claim as an additional burdensome detail, and focus on the specific points of the claim apart from its positiveness.
What if a specific nice claim “can’t be disproven” but there are arguments “both for and against” it? Actually these are words to be wary of in general, because often this is what people say when they’re rehearsing the evidence or avoiding the real weak points. Given the danger of the happy death spiral, it makes sense to try to avoid being happy about unsettled claims—to avoid making them into a source of yet more positive affect about something you liked already.
The happy death spiral is only a big emotional problem because of the overly positive feedback, the ability for the process to go critical. You may not be able to eliminate the halo effect entirely, but you can apply enough critical reasoning to keep the halos subcritical—make sure that the resonance dies out rather than exploding.
You might even say that the whole problem starts with people not bothering to critically examine every additional burdensome detail—demanding sufficient evidence to compensate for complexity, searching for flaws as well as support, invoking curiosity—once they’ve accepted some core premise. Without the conjunction fallacy, there might still be a halo effect, but there wouldn’t be a happy death spiral.3
Even on the nicest Nice Thingies in the known universe, a perfect rationalist who demanded exactly the necessary evidence for every additional (positive) claim would experience no affective resonance. You can’t do this, but you can stay close enough to rational to keep your happiness from spiraling out of control.4
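The arithmetic behind treating every additional claim as a burdensome detail can be sketched in a few lines of Python. The claims and probabilities below are made-up illustrative assumptions, not data; the structural point is that a conjunction can never be more probable than its least probable conjunct, so each nice-sounding detail must pay for itself with evidence.

```python
# Each additional positive claim about a Great Idea forms a conjunction.
# Even if every claim looks individually plausible, the joint probability
# falls fast. All numbers here are illustrative assumptions, not data.

claims = {
    "science can, in principle, cure cancer": 0.9,
    "publishing journal papers is the step that does the curing": 0.2,
    "this patient's own publishing will cure their cancer": 0.05,
}

joint = 1.0
for claim, p in claims.items():
    joint *= p
    print(f"P = {joint:.4f}  after adding: {claim}")

# A conjunction is never more probable than its least probable part.
assert joint <= min(claims.values())
```

Run as-is, the joint probability drops from 0.9 to 0.009 over three claims—the happy death spiral in miniature: each claim felt nice on its own, and together they are almost certainly false.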
Stuart Armstrong gives closely related advice:5
Cut up your Great Thingy into smaller independent ideas, and treat them as independent.
For instance a marxist would cut up Marx’s Great Thingy into a theory of value of labour, a theory of the political relations between classes, a theory of wages, a theory on the ultimate political state of mankind. Then each of them should be assessed independently, and the truth or falsity of one should not halo on the others. If we can do that, we should be safe from the spiral, as each theory is too narrow to start a spiral on its own.
This, metaphorically, is like keeping subcritical masses of plutonium from coming together. Three Great Ideas are far less likely to drive you mad than one Great Idea. Armstrong’s advice also helps promote specificity: As soon as someone says, “Publishing enough papers can cure your cancer,” you ask, “Is that a benefit of the experimental method, and if so, at which stage of the experimental process is the cancer cured? Or is it a benefit of science as a social process, and if so, does it rely on individual scientists wanting to cure cancer, or can they be self-interested?” Hopefully this leads you away from the good or bad feeling, and toward noticing the confusion and lack of support.
To summarize, you do avoid a Happy Death Spiral by:
Splitting the Great Idea into parts;
Treating every additional detail as burdensome;
Thinking about the specifics of the causal chain instead of the good or bad feelings;
Not rehearsing evidence; and
Not adding happiness from claims that “you can’t prove are wrong”;
but not by:
Refusing to admire anything too much;
Conducting a biased search for negative points until you feel unhappy again; or
Forcibly shoving an idea into a safe box.
1Bacon didn’t singlehandedly invent science, of course, but he did contribute, and may have been the first to realize the power.
2Successfully, I might add.
3For more background, see “Burdensome Details,” “How Much Evidence Does it Take?”, and “Occam’s Razor” in the previous volume, Map and Territory.
4The really dangerous cases are the ones where any criticism of any positive claim about the Great Thingy feels bad or is socially unacceptable. Arguments are soldiers; any positive claim is a soldier on our side; stabbing your soldiers in the back is treason. Then the chain reaction goes supercritical. More on this later.
5Source: http://lesswrong.com/lw/lm/affective_death_spirals/gp5.
So it turns out that all you have to do is overcome bias? I confess I was hoping for something a little more specific than that.
I think it’s time to take a step back and re-evaluate the purpose of this bully pulpit, because it’s not fulfilling its stated goal, and I don’t think it’s even fulfilling its unspoken and more-pragmatic goals.
Pointing out situations where bias should be eliminated is all well and good, but that’s not difficult. It’s not needed, because we can do it ourselves easily. What we need are elegant instructions on HOW to avoid types of bias, and this site doesn’t seem to have had anything substantial to say on that topic.
See Addendum 2.
I think “cancer paper writing journal papers isn’t going to experience a miraculous remission” should read patient instead of the first paper. Although, I would think if we had the kind of sophisticated AI that would allow papers to write papers we would probably be well on the way to curing cancer...
Hah, funny.
Sorry, just had to add that.
Maksym, fixed.
The rationalist checked his gun—a repeated, almost compulsive gesture before the coming battle, even though he already knew the weapon better than himself. He was running out of Burdensome detail bullets, but still had enough Causal Chain Specifics to pepper the Big Thingy with. He gingerly fingered the Non-Rehearsed Evidence hanging from his belt—he hoped he wouldn’t have to use them, they were dangerous and exploded all over the place.
Maybe this wouldn’t be so tough after all… There was no reason to suspect this Big Thingy would be a strong one, was there? Happiness mounting from this unprovable claim, he quickly swallowed a rational combat pill to keep it at bay. Reason returned, and he chanted the mantra against unreason: “I will not fear, I will not doubt, but I will not refuse to admire. When the refusal to admire is gone there will be nothing; only I will remain.” He readied his weapons...
And he hoped, above all else, that this Big Thingy had already been cut into manageable pieces. Because if it hadn’t, if it was huge and whole, then there was nothing for it: he’d have to deploy illegal BNPS (biased negative points searches), or even call down the big safe box...
Ha, that was awesome!
Thanks! :-)
IIRC, the core of Marxism is historical materialism. But that doesn’t mean anything to me. Is it more than the sum of Armstrong’s independent parts?
Should really ask crooked timber or somewhere. Very good post.
I’m afraid Francis Bacon cribbed essentially all of his scientific method from an Iraqi usually called “Ibn al Haytham” (or “Alhacen”, or “Alhazen”, in different contexts).
Al Haytham invented modern science as an adjunct to studying (i.e., creating the field of) optics, about a thousand years ago. Appealingly, instead of simply advocating the method, he demonstrated using it to investigate natural phenomena, and explained, alongside his results, how the method offered the reader both confidence in his results and a means to correct his errors.
Bacon deserves some credit for bringing al Haytham’s insights to the English-speaking public, centuries later. He didn’t pretend to originality, but the English at the time weren’t very interested in what an Iraqi had done 500 years earlier.
Feyerabend: as Greg Egan put it, “the basis of science is just systematic honesty, and there’s nothing we can’t be honest about”. Of what use are “alternative approaches”, I wonder!
But personally, I doubt life has any meaning in the context of infinite universe, where every possible history is guaranteed to exist or even already exists in some way, as I understand it.
Stuart, very cute. :)
Thanks :-)
I felt Eliezer’s project has reached a point where some drama would be required.
Artyom, that is a predictable non-response. What is it about science that grants it a monopoly on systematic honesty? Why is systematic honesty the relevant procedural virtue with regard to this question? Why do you seem so sure that only science is capable of producing worthy answers to such questions?
This blog is the most cringe-inducing example of Plato’s Cave I have seen in a long, long time.
Nobody here (but you) has claimed that science had a monopoly on systematic honesty.
Systematic honesty is relevant to science for reasons that should be fairly obvious. The point of science is that dishonesty can’t be hidden for long. Repeatability shows the way to the truth, and there’s no hiding from it. The benefit of systematic honesty is that we approach the truth iteratively.
You asked “of what use is science”. Artyom seemed to be trying to point out that science is of great use—if you are seeking the truth.
He then questioned the benefit of your “alternative approaches”… which you never actually specified: a) what they are, or b) what use they are, by comparison with the scientific method of seeking.
You seemed to imply that science was of no benefit to seeking meaning… but gave no evidence for that claim, nor any benefits of your alternatives.
He probably isn’t sure. Just as I am not… However, my own experience with science, and with many alternative methods, gives me the background to state that science, with its systematic honesty, tends toward better solutions than any other method I’ve so far been able to find. Also, you can, in fact, combine science with almost any other useful method.
Take (as a random example) “following your heart” as an “alternative method”. It is my experience that “following your heart” is generally undertaken as a random decision-making procedure… but there is no reason why, at any one decision, you can’t write down what your heart predicts to be a good answer… then check whether it worked out to be the best option when the dust settles… and use that as input into your next decision.
Hey presto… science.
Hmm, ad hominem, and no actual evidence of why you consider this blog to be related to Plato’s Cave… sadly, my heart tells me that you are likely just a troll…
This comment is going on a decade old, and if you still access this account, I would be curious about your stance on your above statements now.
The value of a mode of inquiry lies as much in the value of the questions it generates as in the answers. Science sets a high threshold for answers, but a good question can be worth much more than any answer.
What is important in life? Meaning. Life still seems important to me though I think it quite possible it’s all absurd. Reactions may vary.
“Why do you seem so sure that only science is capable of producing worthy answers to such questions?” Science works. It gives us cool things like rocketships. What have your alternative approaches ever given us?
For those unfamiliar with Feyerabend’s namesake, treat yourself to some David Stove.
You have failed your attempt at reading comprehension. Further attempts at conversing with you will not be fruitful.
“If you try to sever science from e.g. parental love, you aren’t just denying cognitive science and evolutionary psychology. You’re also denying Martine Rothblatt’s founding of United Therapeutics to seek a cure for her daughter’s pulmonary hypertension. (Successfully, I might add.) Science is legitimately related, one way or another, to just about every important facet of human existence.”
Well, no. No one in their right mind makes the argument that “scientists can’t love their children” or “a scientific enterprise cannot be motivated by love.”
The phrase “Science has nothing to say about a parent’s love for their child” means only that there is no “scientific explanation” for a parent’s love. This may or may not be entirely true, but right or wrong, it has nothing to do with (as you rather confusingly put it) “denying” Martine Rothblatt seeking a cure for her daughter’s pulmonary hypertension. That search is an act of love, motivated by love, not an explanation for why the love is there in the first place.
Not yet, anyway, at least with regards to the specific mental mechanisms that create the feeling. If you take a “ten thousand foot high” view of the subject, evolution explains love perfectly—love is what drives humans to be monogamous (though not perfectly, for various reasons) and it also drives us to protect our young. This is beneficial for the survival of the species, and it is one of the reasons humans are arguably the most successful creatures on the planet in terms of survival. Nearly every mammal exhibits similar behavior, with variations depending on their specific adaptations, so it is quite reasonable to say they likely experience a feeling very much like what we call love.
That’s the point. There is nothing that science is not involved with, and there are researchers right now attempting to find why we love (and there has been a lot of progress in the area—I’ve seen some really cool documentaries on the subject).
jsabotta didn’t make claims about whether a scientific explanation of parental love exists; he stated, correctly I think, that your beliefs about the existence of such an explanation have no bearing on whether or not you deny Martine Rothblatt’s founding of United Therapeutics to seek a cure for her daughter’s pulmonary hypertension.
“You’re also denying Martine Rothblatt’s founding of United Therapeutics to seek a cure for her daughter’s pulmonary hypertension.” I’m not sure what Eliezer means by this statement. Is he talking about denying that Martine Rothblatt founded United Therapeutics? Is he talking about denying that she founded it to seek a cure for her daughter’s pulmonary hypertension? I think that there must be some other interpretation because I don’t see how denying either of those things would result from denying that science can explain parental love.
bigjeff, I don’t see how you can claim that love “is one of the reasons humans are arguably the most successful creatures on the planet in terms of survival,” if “Nearly every mammal exhibits similar behavior.” How can our position as the “most” successful species be a result of a characteristic that we share with so many other animals? The reason we are most successful needs to be something that distinguishes us from all other species, our intelligence, for instance.
I’m not sure whether this is redundant, but keep an eye out for Goodhart’s Law. Does a particular thing which is claimed to be wonderful actually share the virtues of some wonderful thing that it resembles?
This post made me think of this article: http://www.nytimes.com/2011/08/29/opinion/republicans-against-science.html
Okay, was the voting down because I posted this, or in response to the article? Just curious since no one explained...
I, for one, liked the article. The author might benefit from reading Politics is the Mind-Killer, and so on, but he has valid points, and linking to it does not, in my opinion, warrant a downvote.
A few things.
1) The linked-to article gives some examples of thinking distorted by politics but does not explain the relationship between belief and political convenience. If anything, he implies there is little relationship.
In the first example, Perry already truly believes what would be convenient, and is castigated for believing the unlikely and not changing his mind, but the original reasons for his beliefs and his reasons for not changing them aren’t really explored.
In the second example, in the author’s opinion, Romney has not had his beliefs corrupted by what would be convenient to believe, but is merely hiding his inconvenient beliefs with ambiguous statements.
2) That guy writing about politics corrupting clear thought has clearly had his own mind badly damaged by politics (or is writing as if he had, as part of his job). It is impossible to tell how much the linked-to piece was motivated by the truth of its contents and how much by the attractiveness of making accusations against political enemies, true or not.
My own cure for the Singularitarian happiness death spiral: Science falls on the just and the unjust alike.
--Tom Lehrer, “Wernher von Braun”
Once you know about affective death spirals, you can use them in tricky ways. Consider for example, that you got into an affective death spiral about capital “R” Rationality which caused you to start entertaining false delusions (like that teaching Rationality to your evil stepmother would finally make her love you, or whatever). If you know that this is an affective death spiral, you can do an “affective death spiral transfer” that helps you avoid the negative outcome without needing to go to war with your own positive feelings: in this case, realise that it’s incredibly awesome that Rationality is so cool that it can even help you correct an affective death spiral about itself. Of course, you have to be careful to become actually good at this (but you get Rationality points for realising this, and triple Rationality points for actually achieving it. Awesome!!!). You also get huge Rationality boosts for realising failure modes in your pursuit of Rationality in general (because that’s totally Rational too! See how that works?).
Affective death spirals are like anti-akrasia engines, so getting rid of them entirely might be substantially less advantageous than applying some clever munchkinry to them.
This post blew my mind. What a great idea! This great idea idea explains so much. Now I can see why I’ve naively been a communist my entire life. It makes me feel good to be part of a savagely oppressed minority. Now I see that all of my opinions have this dangerous feedback loop of me believing them. I think god is a myth, because it makes me feel good. I think cheesy bread is unhealthy, because it makes me feel good. Oh no! The great idea is too powerful. It makes me feel so superior to compare other people’s political philosophies to cults. The great idea idea must be a ‘great idea’ itself! I’m in the death spiral helpppppppppppppp................
This post is ridiculous. Any ‘general’ theory is far more complicated than presented here. Any real ‘general’ idea is far more complex both logically and emotionally than presented here. While I’m not against simplification or reduction, this is a strawman of a strawman. The real way to avoid being drawn in to ideological complacency is continued re-examination, reflection and growth of your ideas. In short, to keep thinking and expanding them.
It may be useful to the cause of avoiding one’s own potential happy death spirals (HDSs) to actively attempt to subvert the “my ideas are my children” trope. Perceived ownership of an idea or mental tool may be a prime contributor to HDS thinkery, giving rise to the kind of protectiveness we humans tend to provide our offspring whether or not they deserve it. The fact that our child started the fight with another child doesn’t prevent us from stepping in on OUR child’s side; the fact that our child is demonstrably average doesn’t prevent us from telling complete strangers how intelligent, sweet, talented, beautiful, etc. OUR child is, was, and shall always be, forever and ever, amen.
So too it seems to be with the ideas we feel we own, particularly the ones we ourselves have generated. This impulse is entirely understandable within the context of a species whose primary survival trait is intelligence, with opposable thumbs taking a distant second. Yet to feel ownership of an idea to the point that we feel protective of it seems rationally contraindicated: an idea—anyone’s—should only be valued insofar as it can stand on its own in the uncaring realm of reality… in a making beliefs pay rent kind of way.
So perhaps a good solution to the “How?” of resisting HDSs would be to try to view ideas and mental tools as being both fundamentally borrowed and potentially disposable upon breaking. It’s a nice way of avoiding even the temptation to indulge in ad hominem, as well.
Dead link to “scientists shouldn’t even try to take ethical responsibility for their work” link is now here
fixed
“You’re also denying Martine Rothblatt’s founding of United Therapeutics to seek a cure for her daughter’s pulmonary hypertension”—if I were defending a mind/science division I would say you are conflating the possibilities of science with the motivations to do science. She may have been moved by her parental (ahem, maternal… funny how parental and paternal consist of the same letters...) love to seek Science’s help, but that tells us no more about the possibilities of science than seeking a shaman’s help would tell us about the possibilities of shaman rituals.
I just want to call these two paragraphs out as truly exceptional.
The Happy Death Spiral is a very real thing, still, 14 years later. In the past I’ve heard similar behaviors called “True Believer Syndrome.” But I can spot many times in my past when an initial positive feeling about something made me too eager to believe other claims in its orbit.
I wouldn’t say Bacon’s scientific method is the only great idea that both promised and delivered on being massively beneficial to all mankind.
There are certain social principles that crop up again and again as well. For example, the idea that free people making their own decisions and setting their own goals are, in the long run, vastly more efficient at practically everything than top-down, centralized control.
It works surprisingly well wherever it’s tried, consistently out-performs the predictions of the centralizers, and, at this point, we’re even starting to understand the logical and mathematical basis for why it works.
And yet, somehow, most of its historical proponents are seen as crackpots or religious nuts.
What is the mathematical basis for people doing stuff of their own “free will”? I would appreciate some keywords or links.
I’m afraid I haven’t collected a definite list. I just notice when it pops up in the wide variety of materials I tend to read. For example, traffic studies showing better flow rates and safety when drivers are allowed more individual discretion. You’ll probably also find some stuff in Austrian economics with regard to how more freedom of choice allows for better optimization by making fuller use of the processing capability of each individual. And there have been a few references to it in business management studies about why micromanaging your employees almost invariably leads to worse productivity.
“Network Effects” is probably a good keyword if you want to go looking for such examples specifically. It seems to be a common phrase.
See also the critical brain hypothesis; attempting to summarize my current understanding in English: it seems like systems work together better if every node in the system is given enough information that no other part of the system can predict that node’s response, but every part of the system can trust that every other part is well informed. Collective action works better when every node can contribute to what the collective action actually is; local free will is probably “useful chaos”. At least, that’s my current read of things. See https://www.youtube.com/watch?v=vwLb3XlPCB4 - an interesting implication of this is that it’s possible to have less free will when you’re trying to be more disciplined and ignore parts of your own brain’s input; not a lot less, but less. See also https://pubmed.ncbi.nlm.nih.gov/35145021/
From the economics side of things, individual nodes having massive amounts of locally useful information, but it being very difficult to determine exactly which pieces of that information are globally relevant and it being completely impractical to ship and process every piece of that information at the global level is the fundamental problem that most “command economies” tend to run into.
indeed. since it came up—hence the need to move out of a centrally planned economy and into one where workers own their own planning ;) though of course most of the issue is the high extraction ratio of stocks compared to bounded forms of debt such as bounded loans. interest bearing loans are a significant fraction of the problem. this is all probably irrelevant to ai long term, but short term I really like the capped returns model as a starting point.
but re information availability—kademlia style information routing in latent space seems likely to suffice to me.
There are quite a few ways it can go wrong other than just central planning. Ultimately most of them come back to some special interest group attempting to forcibly subvert the economy to favor their own preferences.
High extraction ratios aren’t inherently problematic economically speaking since it’s not like the extracted resources simply vanish, and market forces tend to bring the extraction ratio down over time until it reaches the lowest level anyone’s willing to do the job for. But, high extraction ratios do make a tempting target for non-economic actions designed to preserve the lucrative ratio against the actions of the market.
I find splitting the Great Idea a very useful tool for quantifying its relevance (“For how many of its parts do I feel they are true?”) and thereby for applying falsification, for which I find “If your idea weren’t true, how would you find out? Because if you don’t ask, it might not be true and you’d never find out.” to be the most logical and intuitive description. With splitting, you can say: “I would admit my idea was incorrect if not all twelve of its parts felt correct separately. Easy.”
Among early critics of science making the hydrogen bomb were Einstein, Oppenheimer, Leo Szilard, and Bertrand Russell. They didn’t like the risk of mass death, civilizational collapse, and possible human extinction. They weren’t trying to undermine the authority of science.
If someone is so deep into a “happy death spiral” about science that nuclear weapons don’t make them blink, that is a severe case. I think it can be an effective argument for milder cases. Certainly my love for science was held in check by reading about AI extinction risk in 1997.
More generally I think that noticing the skulls of your Great Idea is often a cure. If someone is getting a happy death spiral about the USA, it helps if they notice the slavery. If industrialization, notice the global warming. If Christianity, notice the Inquisition. And so on.