The Uses of Fun (Theory)
“But is there anyone who actually wants to live in a Wellsian Utopia? On the contrary, not to live in a world like that, not to wake up in a hygienic garden suburb infested by naked schoolmarms, has actually become a conscious political motive. A book like Brave New World is an expression of the actual fear that modern man feels of the rationalised hedonistic society which it is within his power to create.”
—George Orwell, Why Socialists Don’t Believe in Fun
There are three reasons I’m talking about Fun Theory, some more important than others:
If every picture ever drawn of the Future looks like a terrible place to actually live, it might tend to drain off the motivation to create the future. It takes hope to sign up for cryonics.
People who leave their religions, but don’t familiarize themselves with the deep, foundational, fully general arguments against theism, are at risk of backsliding. Fun Theory lets you look at our present world, and see that it is not optimized even for considerations like personal responsibility or self-reliance. It is the fully general reply to theodicy.
Going into the details of Fun Theory helps you see that eudaimonia is actually complicated—that there are a lot of properties necessary for a mind to lead a worthwhile existence. Which helps you appreciate just how worthless a galaxy would end up looking (with extremely high probability) if it was optimized by something with a utility function rolled up at random.
To amplify on these points in order:
(1) You’ve got folks like Leon Kass and the other members of Bush’s “President’s Council on Bioethics” running around talking about what a terrible, terrible thing it would be if people lived longer than threescore and ten. While some philosophers have pointed out the flaws in their arguments, it’s one thing to point out a flaw and another to provide a counterexample. “Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon,” said Susan Ertz, and that argument will sound plausible for as long as you can’t imagine what to do on a rainy Sunday afternoon, and it seems unlikely that anyone could imagine it.
It’s not exactly the fault of Hans Moravec that his world, in which humans are kept by superintelligences as pets, doesn’t sound quite Utopian. Utopias are just really hard to construct, for reasons I’ll talk about in more detail later—but this observation has already been made by many, including George Orwell.
Building the Future is part of the ethos of secular humanism, our common project. If you have nothing to look forward to—if there’s no image of the Future that can inspire real enthusiasm—then you won’t be able to scrape up enthusiasm for that common project. And if the project is, in fact, a worthwhile one, the expected utility of the future will suffer accordingly from that nonparticipation. So that’s one side of the coin, just as the other side is living so exclusively in a fantasy of the Future that you can’t bring yourself to go on in the Present.
I recommend thinking vaguely of the Future’s hopes, thinking specifically of the Past’s horrors, and spending most of your time in the Present. This strategy has certain epistemic virtues beyond its use in cheering yourself up.
But it helps to have legitimate reason to vaguely hope—to minimize the leaps of abstract optimism involved in thinking that, yes, you can live and obtain happiness in the Future.
(2) Rationality is our goal, and atheism is just a side effect—the judgment that happens to be produced. But atheism is an important side effect. John C. Wright, who wrote the heavily transhumanist The Golden Age, had some kind of temporal lobe epileptic fit and became a Christian. There’s a once-helpful soul, now lost to us.
But it is possible to do better, even if your brain malfunctions on you. I know a transhumanist who has strong religious visions, which she once attributed to future minds reaching back in time and talking to her… but then she reasoned it out, asking why future superminds would grant only her the solace of conversation, and why they could offer vaguely reassuring arguments but not tell her winning lottery numbers or the 900th digit of pi. So now she still has strong religious experiences, but she is not religious. That’s the difference between weak rationality and strong rationality, and it has to do with the depth and generality of the epistemic rules that you know and apply.
Fun Theory is part of the fully general reply to religion; in particular, it is the fully general reply to theodicy. If you can’t say how God could have better created the world without sliding into an antiseptic Wellsian Utopia, you can’t carry Epicurus’s argument. If, on the other hand, you have some idea of how you could build a world that was not only more pleasant but also a better medium for self-reliance, then you can see that permanently losing both your legs in a car accident, when someone else crashes into you, doesn’t seem very eudaimonic.
If we can imagine what the world might look like if it had been designed by anything remotely like a benevolently inclined superagent, we can look at the world around us, and see that this isn’t it. This doesn’t require that we correctly forecast the full optimization of a superagent—just that we can envision strict improvements on the present world, even if they prove not to be maximal.
(3) There’s a severe problem in which people, due to anthropomorphic optimism and the lack of specific reflective knowledge about their invisible background framework and many other biases which I have discussed, think of a “nonhuman future” and just subtract off a few aspects of humanity that are salient, like enjoying the taste of peanut butter or something. While still envisioning a future filled with minds that have aesthetic sensibilities, experience happiness on fulfilling a task, get bored with doing the same thing repeatedly, etcetera. These things seem universal, rather than specifically human—to a human, that is. They don’t involve having ten fingers or two eyes, so they must be universal, right?
And if you’re still in this frame of mind—where “real values” are the ones that persuade every possible mind, and the rest is just some extra specifically human stuff—then Friendly AI will seem unnecessary to you, because, in its absence, you expect the universe to be valuable but not human.
It turns out, though, that once you start talking about what specifically is and isn’t valuable, even if you try to keep yourself sounding as “non-human” as possible—then you still end up with a big complicated computation that is only instantiated physically in human brains and nowhere else in the universe. Complex challenges? Novelty? Individualism? Self-awareness? Experienced happiness? A paperclip maximizer cares not about these things.
It is a long project to crack people’s brains loose of thinking that things will turn out regardless—that they can subtract off a few specifically human-seeming things, and then end up with plenty of other things they care about that are universal and will appeal to arbitrarily constructed AIs. And of this I have said a very great deal already. But it does not seem to be enough. So Fun Theory is one more step—taking the curtains off some of the invisible background of our values, and revealing some of the complex criteria that go into a life worth living.
Something with a utility function “rolled at random” typically does not “optimise the universe”. Rather it dies out. Of those agents with utility functions that do actually spread themselves throughout the universe, it is not remotely obvious that most of them are “worthless” or “uninteresting”—unless you choose to define the term “worth” so that this is true, for some reason.
Indeed, rather the opposite—since such agents would construct galactic-scale civilisations, they would probably be highly interesting and valuable instances of living systems in the universal community.
Sure it would: as proximate goals. Animals are expected gene-fitness maximisers. Expected gene-fitness is not somehow intrinsically more humane than expected paperclip number. Both have about the same chance of leading to the things you mentioned being proximate goals.
Novelty-seeking and self-awareness are things you get out of any sufficiently-powerful optimisation process—just as such processes all develop fusion, space travel, nanotechnology, and so on.
The paper-clipper is a straw man that is only relevant if some well-meaning person tries to replace evolution with their own optimization or control system. (It may also be relevant in the case of a singleton; but it would be non-trivial to demonstrate that.)
All of Tim Tyler’s points have been addressed in previous posts. Likewise the idea that evolution would have more shaping influence than a simple binary filter on utility functions. Don’t particularly feel like going over these points again; other commenters are welcome to do so.
A random utility function will do fine, iff the agent has perfect knowledge.
Imagine, if you will, a stabber: something that wants to turn the world into things that have been stabbed. If it knows that stabbing itself will kill it, it will know to stab itself last. If it doesn’t know that stabbing itself will leave it unable to stab anything further, then it may sabotage its own stabbing goal by stabbing itself too early.
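To make the point concrete, here is a minimal toy sketch in Python (the setup and names are my own illustration, not anything proposed in this thread): a “stabber” whose utility is the number of things stabbed does fine when its world-model includes the fact that stabbing itself ends its run, and badly when it lacks that knowledge.

```python
# Toy illustration (hypothetical): a "stabber" agent that wants to maximize
# the number of things stabbed. Whether it achieves its goal depends on
# whether its world-model includes the fact that stabbing itself is fatal.

def run_stabber(targets, knows_self_stab_is_fatal):
    """Return how many targets actually get stabbed.

    targets: list of labels; stabbing 'self' ends the agent's run.
    knows_self_stab_is_fatal: whether the agent models that consequence.
    """
    if knows_self_stab_is_fatal:
        # An accurate self-model lets the agent schedule stabbing itself last.
        plan = [t for t in targets if t != "self"] + [t for t in targets if t == "self"]
    else:
        # Without that knowledge, it simply takes targets in the order given.
        plan = list(targets)

    stabbed = 0
    for target in plan:
        stabbed += 1
        if target == "self":
            break  # the agent can no longer act
    return stabbed

targets = ["self", "rock", "tree", "fence"]
print(run_stabber(targets, knows_self_stab_is_fatal=True))   # 4: everything gets stabbed
print(run_stabber(targets, knows_self_stab_is_fatal=False))  # 1: it stabs itself first and stops
```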
Well, that is so vague as to hardly be worth the trouble of responding to—but I will say that I do hope you were not thinking of referring me here.
However, I should perhaps add that I overspoke. I did not literally mean “any sufficiently-powerful optimisation process”, only that such things are natural tendencies, which tend to be produced unless you actively wire something into the utility function to prevent their manifestation.
My guess is that it’s a representation of my position on sexual selection and cultural evolution. I may still be banned from discussing this subject—and anyway, it seems off-topic on this thread, so I won’t go into details.
If this hypothesis about the comment is correct, the main link that I can see would be: things that Eliezer and Tim disagree about.
The society of Brave New World actually seemed like quite an improvement to me.
“John C. Wright, who wrote the heavily transhumanist The Golden Age, had some kind of temporal lobe epileptic fit and became a Christian. There’s a once-helpful soul, now lost to us.”
This seems needlessly harsh. As you’ve pointed out in the past, the world’s biggest idiot/liar saying the sun is shining does not necessarily mean it’s dark out. The fictional evidence fallacy notwithstanding, if Mr. Wright’s novels have useful things to say about transhumanism or the future in general, they should be appreciated for that. The fact that the author is born-again shouldn’t mean we throw his work on the bonfire.
TGGP,
The world of Brave New World was exceedingly stable and not improving. Our current society has some chance of becoming much better.
My own complaints regarding Brave New World consist mainly of noting that Huxley’s dystopia specialized in making people fit the needs of society. And if that meant whittling down a square peg so it would fit into a round hole, so be it.
Embryos were intentionally damaged (primarily through exposure to alcohol) so that they would be unlikely to have capabilities beyond what society needed them to have.
This is completely incompatible with my beliefs about the necessity of self-regulating feedback loops, and developing order from the bottom upwards.
Mr. Tyler:
I admire your persistence; however, you should be reminded that preaching to the deaf is not a particularly worthwhile activity.
Does this person genuinely have schizophrenia? I’ve occasionally wondered what would happen if a schizophrenic were taught rationality, or a rationalist developed schizophrenia. I didn’t think such a thing had happened already, though.
I recall a neurologist who suffered a stroke, was able to reason out that she was having a stroke, and managed to use the phone to call for help while severely impaired. It doubled as a religious experience for her.
I also recall a story about a woman trained in medicine who developed schizophrenia and turned her intellect to coping with her delusions, and rationalizing them, and poking holes in her rationalizations. Unfortunately I can’t find the story, but I remember that she was convinced that rats were running around in her brain chewing on her nerves, but that she could electrocute them by thinking really hard. She realized that real rats couldn’t possibly be running around in her brain, but had some rationalization for that.
That sounds fascinating; I wish I could read it.
Of late, during my discussions with others about rational politics and eudaimonia, I’ve encountered a strangely large proportion of people (particularly the religious) asking me—with no irony—“What would you even DO with immortality?” My favored response: “Anything. And everything. In that order.” LessWrong and HP:MoR have played no small part in that answer, and in much of the further discussion that generally ensues.
So… thanks, everyone!