Complex Novelty
From Greg Egan’s Permutation City:
The workshop abutted a warehouse full of table legs—one hundred and sixty-two thousand, three hundred and twenty-nine, so far. Peer could imagine nothing more satisfying than reaching the two hundred thousand mark—although he knew it was likely that he’d change his mind and abandon the workshop before that happened; new vocations were imposed by his exoself at random intervals, but statistically, the next one was overdue. Immediately before taking up woodwork, he’d passionately devoured all the higher mathematics texts in the central library, run all the tutorial software, and then personally contributed several important new results to group theory—untroubled by the fact that none of the Elysian mathematicians would ever be aware of his work. Before that, he’d written over three hundred comic operas, with librettos in Italian, French and English—and staged most of them, with puppet performers and audience. Before that, he’d patiently studied the structure and biochemistry of the human brain for sixty-seven years; towards the end he had fully grasped, to his own satisfaction, the nature of the process of consciousness. Every one of these pursuits had been utterly engrossing, and satisfying, at the time. He’d even been interested in the Elysians, once.
No longer. He preferred to think about table legs.
Among science fiction authors, (early) Greg Egan is my favorite; of early-Greg-Egan’s books, Permutation City is my favorite; and this particular passage in Permutation City, more than any of the others, I find utterly horrifying.
If this were all the hope the future held, I don’t know if I could bring myself to try. Small wonder that people don’t sign up for cryonics, if even SF writers think this is the best we can do.
You could think of this whole series on Fun Theory as my reply to Greg Egan—a list of the ways that his human-level uploaded civilizations Fail At Fun. (And yes, this series will also explain what’s wrong with the Culture and how to fix it.)
We won’t get to all of Peer’s problems today—but really. Table legs?
I could see myself carving one table leg, maybe, if there was something non-obvious to learn from the experience. But not 162,329.
In Permutation City, Peer modified himself to find table-leg-carving fascinating and worthwhile and pleasurable. But really, at that point, you might as well modify yourself to get pleasure from playing Tic-Tac-Toe, or lie motionless on a pillow as a limbless eyeless blob having fantastic orgasms. It’s not a worthy use of a human-level intelligence.
Worse, carving the 162,329th table leg doesn’t teach you anything that you didn’t already know from carving 162,328 previous table legs. A mind that changes so little in life’s course is scarcely experiencing time.
But apparently, once you do a little group theory, write a few operas, and solve the mystery of consciousness, there isn’t much else worth doing in life: you’ve exhausted the entirety of Fun Space down to the level of table legs.
Is this plausible? How large is Fun Space?
Let’s say you were a human-level intelligence who’d never seen a Rubik’s Cube, or anything remotely like it. As Hofstadter describes in two whole chapters of Metamagical Themas, there’s a lot that intelligent human novices can learn from the Cube—like the whole notion of an “operator” or “macro”, a sequence of moves that accomplishes a limited swap with few side effects. Parity, search, impossibility—
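The "operator with few side effects" idea can be sketched in bare permutation terms, without modeling an actual cube. The toy sketch below (the names and the choice of moves are mine, purely illustrative) builds the classic commutator macro, do A, do B, undo A, undo B, out of two overlapping swaps, and checks that it disturbs only three positions out of eight:

```python
def apply_seq(state, seq):
    """Apply a sequence of moves; each move is a permutation of slot indices."""
    for perm in seq:
        state = [state[i] for i in perm]
    return state

def inverse(perm):
    """Invert a permutation given as a list of images."""
    inv = [0] * len(perm)
    for i, image in enumerate(perm):
        inv[image] = i
    return inv

n = 8                               # a tiny stand-in for the cube's stickers
A = [1, 0] + list(range(2, n))      # move A: swap slots 0 and 1
B = [0, 2, 1] + list(range(3, n))   # move B: swap slots 1 and 2

# The commutator macro: A, then B, then undo A, then undo B
macro = [A, B, inverse(A), inverse(B)]
result = apply_seq(list(range(n)), macro)
moved = [i for i in range(n) if result[i] != i]
print(moved)  # [0, 1, 2] -- the other five slots are untouched
```

On a real cube the same trick is how a novice's macros cycle three corners while leaving the rest of the puzzle fixed: the "side effects" of A and B cancel everywhere the two moves don't overlap.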
So you learn these things in the long, difficult course of solving the first scrambled Rubik’s Cube you encounter. The second scrambled Cube—solving it might still be difficult, still be enough fun to be worth doing. But you won’t have quite the same pleasurable shock of encountering something as new, and strange, and interesting as the first Cube was unto you.
Even if you encounter a variant of the Rubik’s Cube—like a 4x4x4 Cube instead of a 3x3x3 Cube—or even a Rubik’s Tesseract (a 3x3x3x3 Cube in four dimensions)—it still won’t contain quite as much fun as the first Cube you ever saw. I haven’t tried mastering the Rubik’s Tesseract myself, so I don’t know if there are added secrets in four dimensions—but it doesn’t seem likely to teach me anything as fundamental as “operators”, “side effects”, or “parity”.
(I was quite young when I encountered a Rubik’s Cube in a toy cache, and so that actually is where I discovered such concepts. I tried that Cube on and off for months, without solving it. Finally I took out a book from the library on Cubes, applied the macros there, and discovered that this particular Cube was unsolvable—it had been disassembled and reassembled into an impossible position. I think I was faintly annoyed.)
Learning is fun, but it uses up fun: you can’t have the same stroke of genius twice. Insight is insight because it makes future problems less difficult, and “deep” because it applies to many such problems.
And the smarter you are, the faster you learn—so the smarter you are, the less total fun you can have. Chimpanzees can occupy themselves for a lifetime at tasks that would bore you or me to tears. Clearly, the solution to Peer’s difficulty is to become stupid enough that carving table legs is difficult again—and so lousy at generalizing that every table leg is a new and exciting challenge—
Well, but hold on: If you’re a chimpanzee, you can’t understand the Rubik’s Cube at all. At least I’m willing to bet against anyone training a chimpanzee to solve one—let alone a chimpanzee solving it spontaneously—let alone a chimpanzee understanding the deep concepts like “operators”, “side effects”, and “parity”.
I could be wrong here, but it seems to me, on the whole, that when you look at the number of ways that chimpanzees have fun, and the number of ways that humans have fun, that Human Fun Space is larger than Chimpanzee Fun Space.
And not in a way that increases just linearly with brain size, either.
The space of problems that are Fun to a given brain, will definitely be smaller than the exponentially increasing space of all possible problems that brain can represent. We are interested only in the borderland between triviality and impossibility—problems difficult enough to worthily occupy our minds, yet tractable enough to be worth challenging. (What looks “impossible” is not always impossible, but the border is still somewhere even if we can’t see it at a glance—there are some problems so difficult you can’t even learn much from failing.)
An even stronger constraint is that if you do something many times, you ought to learn from the experience and get better—many problems of the same difficulty will have the same “learnable lessons” embedded in them, so that doing one consumes some of the fun of others.
As you learn new things, and your skills improve, problems will get easier. Some will move off the border of the possible and the impossible, and become too easy to be interesting.
But others will move from the territory of impossibility into the borderlands of mere extreme difficulty. It’s easier to invent group theory if you’ve solved the Rubik’s Cube first. There are insights you can’t have without prerequisite insights.
If you get smarter over time (larger brains, improved mind designs) that’s a still higher octave of the same phenomenon. (As best I can grasp the Law, there are insights you can’t understand at all without having a brain of sufficient size and sufficient design. Humans are not maximal in this sense, and I don’t think there should be any maximum—but that’s a rather deep topic, which I shall not explore further in this blog post. Note that Greg Egan seems to explicitly believe the reverse—that humans can understand anything understandable—which explains a lot.)
One suspects that in a better-designed existence, the eudaimonic rate of intelligence increase would be bounded below by the need to integrate the loot of your adventures—to incorporate new knowledge and new skills efficiently, without swamping your mind in a sea of disconnected memories and associations—to manipulate larger, more powerful concepts that generalize more of your accumulated life-knowledge at once.
And one also suspects that part of the poignancy of transhuman existence will be having to move on from your current level—get smarter, leaving old challenges behind—before you’ve explored more than an infinitesimal fraction of the Fun Space for a mind of your level. If, like me, you play through computer games trying to slay every single monster so you can collect every single experience point, this is as much tragedy as an improved existence could possibly need.
Fun Space can increase much more slowly than the space of representable problems, and still overwhelmingly swamp the amount of time you could bear to spend as a mind of a fixed level. Even if Fun Space grows at some ridiculously tiny rate like N-squared—bearing in mind that the actual raw space of representable problems goes as 2^N—we’re still talking about “way more fun than you can handle”.
If you consider the loot of every human adventure—everything that was ever learned about science, and everything that was ever learned about people, and all the original stories ever told, and all the original games ever invented, and all the plots and conspiracies that were ever launched, and all the personal relationships ever raveled, and all the ways of existing that were ever tried, and all the glorious epiphanies of wisdom that were ever minted—
—and you deleted all the duplicates, keeping only one of every lesson that had the same moral—
—how long would you have to stay human, to collect every gold coin in the dungeons of history?
Would it all fit into a single human brain, without that mind completely disintegrating under the weight of unrelated associations? And even then, would you have come close to exhausting the space of human possibility, which we’ve surely not finished exploring?
This is all sounding like suspiciously good news. So let’s turn it around. Is there any way that Fun Space could fail to grow, and instead collapse?
Suppose there’s only so many deep insights you can have on the order of “parity”, and that you collect them all, and then math is never again as exciting as it was in the beginning. And that you then exhaust the shallower insights, and the trivial insights, until finally you’re left with the delightful shock of “Gosh wowie gee willickers, the product of 845 and 109 is 92105, I didn’t know that logical truth before.”
Well—obviously, if you sit around and catalogue all the deep insights known to you to exist, you’re going to end up with a bounded list. And equally obviously, if you declared, “This is all there is, and all that will ever be,” you’d be taking an unjustified step. (Though I fully expect some people out there to step up and say how it seems to them that they’ve already started to run out of available insights that are as deep as the ones they remember from their childhood. And I fully expect that—compared to the sort of person who makes such a pronouncement—I personally will have collected more additional insights than they believe exist in the whole remaining realm of possibility.)
Can we say anything more on this subject of fun insights that might exist, but that we haven’t yet found?
The obvious thing to do is start appealing to Gödel, but Gödelian arguments are dangerous tools to employ in debate. It does seem to me that Gödelian arguments weigh in the general direction of “inexhaustible deep insights”, but inconclusively and only by loose analogies.
For example, the Busy-Beaver(N) problem asks for the longest running time of a halting Turing machine with no more than N states. The Busy Beaver problem is uncomputable—there is no fixed Turing machine that computes it for all N—because if you knew all the Busy Beaver numbers, you would have an infallible way of telling whether a Turing machine halts: just run it for as long as the longest-running halting Turing machine of that size, and if it hasn’t halted by then, it never will.
The human species has managed to figure out and prove the Busy Beaver numbers up to 4, and they are:
BB(1): 1
BB(2): 6
BB(3): 21
BB(4): 107
Busy-Beaver 5 is believed to be 47,176,870.
The current lower bound on Busy-Beaver(6) is ~2.5 × 10^2879.
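For very small N, the numbers above can actually be checked by brute force. The sketch below is mine, not from the post; the transition-table conventions and the step cap are assumptions. It enumerates every n-state, 2-symbol machine, runs each from a blank tape, and keeps the longest halting run; with a cap comfortably above the true value it reproduces BB(1) = 1 and BB(2) = 6:

```python
from itertools import product

def busy_beaver(n_states, step_cap):
    """Brute-force the n-state, 2-symbol Busy Beaver by enumerating
    every transition table and keeping the longest halting run.
    Machines still running at step_cap are treated as non-halting,
    which is only safe because the cap exceeds the true answer here."""
    HALT = -1
    # One transition: (symbol to write, head move, next state or HALT)
    choices = [(w, m, s)
               for w in (0, 1)
               for m in (-1, 1)
               for s in list(range(n_states)) + [HALT]]
    best = 0
    # A machine assigns one transition to each (state, symbol-read) pair
    for table in product(choices, repeat=2 * n_states):
        tape, head, state, steps = {}, 0, 0, 0
        while state != HALT and steps < step_cap:
            write, move, nxt = table[2 * state + tape.get(head, 0)]
            tape[head] = write
            head += move
            state = nxt
            steps += 1
        if state == HALT:
            best = max(best, steps)
    return best

print(busy_beaver(1, 10), busy_beaver(2, 20))  # 1 6
```

The same brute force becomes hopeless almost immediately: three states already means millions of candidate tables, and at five states some machines run for tens of millions of steps before halting—exactly the faster-than-any-computable-function behavior the argument turns on.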
This function provably grows faster than any computable function. Which would seem to argue that each new Turing machine is exhibiting a new and interesting kind of behavior. Given infinite time, you would even be able to notice this behavior. You won’t ever know for certain that you’ve discovered the Busy-Beaver champion for any given N, after finite time; but conversely, you will notice the Busy Beaver champion for any N after some finite time.
Yes, this is an unimaginably long time—one of the few occasions where the word “unimaginable” is literally correct. We can’t actually do this unless reality works the way it does in Greg Egan novels. But the point is that in the limit of infinite time we can point to something sorta like “an infinite sequence of learnable deep insights not reducible to any of their predecessors or to any learnable abstract summary”. It’s not conclusive, but it’s at least suggestive.
Now you could still look at that and say, “I don’t think my life would be an adventure of neverending excitement if I spent until the end of time trying to figure out the weird behaviors of slightly larger Turing machines.”
Well—as I said before, Peer is doing more than one thing wrong. Here I’ve dealt with only one sort of dimension of Fun Space—the dimension of how much novelty we can expect to find available to introduce into our fun.
But even on the arguments given so far… I don’t call it conclusive, but it seems like sufficient reason to hope and expect that our descendants and future selves won’t exhaust Fun Space to the point that there is literally nothing left to do but carve the 162,329th table leg.
Figuring out how to do the cube was the highlight of 7th grade. Didn’t have to use a cheater book, but my method wasn’t going to win any speed contests.
My wife got me a picture cube a year or so back and it was fun playing around with it again. It came back really quickly, though I hadn’t touched one in a couple decades. But I couldn’t always solve it. Sometimes I’d get one center square 180 degrees off and the only way I could fix it was by totally scrambling the cube and re-solving. Sometimes it takes 3 or 4 rescrambles.
So do I know how to solve a picture cube?
Hmm. The Busy Beaver functions only deal with the case of a TM with a blank tape. With an arbitrary starting configuration on the tape, TMs can run for much longer.
The halting problem normally deals with arbitrary input configurations: “Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist”.
Tim: given a Turing machine T and an input X, you can make another Turing Machine which, regardless of its input, first writes X to the tape and then executes T.
One day we’ll discover the means to quickly communicate insights from one individual to another, say by directly copying and integrating the relevant neural circuitry. Then, in order for an insight to be Fun, it will have to be novel to transhumanity, not just the person learning or discovering it. Learning something the fast efficient way will not be Fun because there’s not true effort. Pretending that the new way doesn’t exist, and learning the old-fashioned way, will not be Fun because there’s not true victory.
I’m not sure there are enough natural problems in the universe to supply the whole of transhumanity with an adequate quantity of potential insights. “Natural” meaning not invented for the sole purpose of providing an artificial challenge. Personally, I can’t see how solving the n-th random irrelevant mathematical problem is any better than lathing the n-th table leg.
You’re underestimating the amount of fun that can be had carving 162,329 table legs. Suppose you have a theory of table legs, that you believe can describe the physics of table legs completely, and suddenly, on the 89,402nd table leg, something unexpected happens. Maybe the piece of wood has icky things crawling in it. Your theory of table legs says nothing about icky things or crawling. You have to rethink everything you know about table legs, trying to make the icky things fit in with the table leg theory; your new theory will make predictions, and those predictions will have to be tested. Maybe there’s only a 1 in 162,329 chance, as far as you know, of deriving a result that’s at all useful, but at least that’s a chance.
“Pretending that the new way doesn’t exist, and learning the old-fashioned way, will not be Fun because there’s not true victory.”
What’s a true victory, then? Making something that’s kind of like a mixture of yourself and someone else? I’m not sure reproduction is more of a “natural” challenge than making 200,000 table legs; it just happens to be the one that evolution selects for.
This fun theory seems to be based on equivocation. Sure, insights might be fun, but that doesn’t mean they literally are the same thing. The point of studying the brain is to cure neurological disorders and to move forward AI. The point of playing chess is to prove your worth. So is the (relatively) insight-less task of becoming world champion at track and field. What UTILITY does solving BB(254) have?
I think a human can only have so much fun if he knows that even shooting himself in the head wouldn’t kill him, because There Is Now A God. And altering your brain might be the only solution. And I don’t see why it’s so abhorrent.
You keep mentioning “orgasmium” like it’s supposed to horrify me. Well, it doesn’t. I’m more horrified by the prospect of spending eternity proving theorems that don’t make my life one bit easier, like Sisyphus.
Fun comes from overcoming adversity. If we are so advanced that the natural world no longer offers a significant challenge, obviously the only fun available will be conflict with other intelligences at a similar level of power, so we will probably see a world of continual violent strife.
Strife, yes, I’d expect that. Violent strife, no. At least, no more violent than two people sitting down on opposite sides of a chess board are. You don’t have to fight to the death for there to be enough of a competition to enjoy.
Sure, but that is not really a “Turing machine of that size”—it’s a substantially bigger one, since it has to encode the input, plus extra machinery to decode it and write it out.
This fact does not cripple the proof. No binary Turing decider of size x could run longer on an input of binary length n than x · 2^n · BB(x), which is still an upper bound that can be used to solve the halting problem. (This is a tighter upper bound than that given by anon19 below, since BB(x) grows at a superexponential rate.)
“If this were all the hope the future held, I don’t know if I could bring myself to try. Small wonder that people don’t sign up for cryonics, if even SF writers think this is the best we can do.”
Well, I think that the points missed is that you are not FORCED to carve those legs. If you find something else interesting, do it.
“I could see myself carving one table leg, maybe, if there was something non-obvious to learn from the experience.”
Not even four, so that you could then make a table? As long as we’re still within the human-ish range of mind design space, have at least some respect for the sheer human pleasure of doing things without having to justify the educational value of every action. Orgasmium is not the only alternative to purism.
Well, let’s see. There are friendly AIs and automated technology carrying out all the needs of life, so that human beings do not need to work, and anything which damages human cells can be fixed, so we are immortal if we wish to be.
For me, pleasure comes from achievement. But in this world, there is nothing which I can achieve which the AIs cannot achieve better. Or, if it is entertaining other people, perhaps a few manage this, and the rest fail to create any interest at all in their peers. If achievement is possible, failure is possible. If people decide to pass the time in conflict or competition, there is only one winner of chess or sprinting, and only a few people who are close enough to that winner to keep training.
So I become an eyeless limbless blob having endless orgasms.
So to avoid this, the AIs, being friendly, set challenges for the humans to overcome, so that the humans can be the best each can be. So that the species evolves. So the AIs go away and do other things, taking as much interest in the humans as the God of the Deists does.
Or- I fulfil myself by making my brain, body and Wisdom as good as they can be. I find, reading, say, the Tao Te Ching, or some sayings of Jesus (!) that I only understand them when I have learned the lesson elsewhere. I enjoy associating with other people who are at higher stages of wisdom, so that I can learn, and lower stages of wisdom, so I can teach, and am not always certain which is which. I take delight in what this world of utter comfort and endless possibility can bring.
Addicts recover. If you find the thought of having endless orgasms repulsive, might not the person who had, er, sunk so low, also find his state repulsive, eventually? I hope the humans and AIs together are creative enough to bring fulfilment for each human being in this paradise.
RDV: Fun comes from overcoming adversity.
Got anything to back that up ? I expect most game designers (I’m one) will tell you that fun is about learning new skills. You may not have noticed, but most games don’t involve any actual threat to the player. (simulated) “adversity” is something you see often in games, but that’s because it’s a “rich” kind of challenge that can give rise to a lot of complex decisions. And also because before video games, the best way to make complex challenges in games was to have another player.
But apparently, once you do a little group theory, write a few operas, and solve the mystery of consciousness, there isn’t much else worth doing in life: you’ve exhausted the entirety of Fun Space down to the level of table legs.
Hmm. I didn’t read the book that way: I never got the impression that he’d be doing table legs because he’d exhausted all the challenging stuff. Instead I interpreted it as his exoself just randomly picking different kinds of tasks, with the table leg bit just happening by chance to follow a sequence of higher-complexity tasks.
Abigail: “”“If you find the thought of having endless orgasms repulsive, might not the person who had, er, sunk so low, also find his state repulsive, eventually?”””
I, for one, cannot imagine one who has, er, ascended so high voluntarily reducing his own utility.
I cannot see why I shouldn’t want to become orgasmium. It would certainly be disgusting to look at someone else turning into something like that—it is too similar to people who are horribly maimed. But It’s What’s Inside That Counts.
The reason that drug addiction is bad is that it has deleterious health effects. But although orgasmium is defenseless, it is guarded by a benevolent god. Nothing in the world could destroy it.
Solving problem X is interesting. So you solve all problems of the class that X is in. And then you start on other classes. And then you eventually see that not only do all problems boil down to classes of problems, but that all of those classes are part of the superclass of “problems”, at which point you might decide that solving problem X162329 is as dull as making chair leg 162,329.
Solving a problem not being any more a “good” activity than having an orgasm, eating a cake, or making a chair leg is.
Tim:
That’s beside the point, which was that if you could somehow find BB(n) for n equal to the size of a (modified to run on an empty string) Turing machine then the halting problem is solved for that machine.
Looks like everybody is having comment problems. Maybe someone should post a big sign saying
DON’T REPOST IF YOU GET “The requested URL could not be retrieved” OR YOU’LL POST A DUPLICATE!!!
(I’m using Firefox 3.0.5 on Windows XP, if anyone’s trying to troubleshoot this)
Even if Fun Space were exhaustible, it wouldn’t worry me. I could always just remove some of my memories and jump into e.g. an ancestor simulation of the 21st century, that would be new and exciting all over again just like a life currently can be.
Though I would perhaps like even more to set up a futuristic version of Civilization the computer game. Be born to the year 4000 B.C. as an immortal, with a bunch of similar immortals around the world ruling their own civilizations (and some peers on the same team with me as close companions, perhaps), and see who manages to colonise most of the universe this time around.
This particular idea of fun (solving increasingly complex problems) seems to reflect the audience’s intellectual mindset. I wonder what people who hate mathematics would have to say. If you are a professional golf player, what would your perfect world look like? Would it be an ever-increasing golf game where you had to hit a small hole over astronomical distances?
Roland: theoreticians of game design (Well—some guys, Raph Koster, Dan Cook …) often reduce “fun” in games to learning—and those aren’t just games that would appeal to this audience. The space of mathematical problems to solve seems like a close enough approximation of the space of complex systems in which one can learn, and, by a happy coincidence, there’s plenty of theoretical work on the complexity of mathematical problems and how large that space might be.
Roland: to put it differently, I don’t think that those who hate mathematics don’t enjoy the “aha” of insight (and might greatly enjoy it in video games) - but mathematics in school (especially in the early grades) doesn’t have much to do with insight.
Games are challenges presented in a really inviting and “casual” context. They’re naturally inviting.
Maths is challenges presented in a somewhat dry and “serious” context, being able to enjoy them mostly has to do with being able to understand the language.
(I wonder if there’s any formal study of the kind of games enjoyed by people who like or don’t like maths … it’d be interesting)
I think some of the above posts have a point—what’s the difference between “fun” and wireheading if what you’re doing for fun has no impact on the external world (because you’ve already set up your physical situation more-or-less as well as it can be given the laws of physics)?
Maybe if we can somehow reach other universes or some sort of (physical) “higher planes of existence” then there will always be something that “needs to be done”, but otherwise it seems that there will come a point where there is nothing to do but await the heat-death of the universe.
Honestly, reading these articles might be having the opposite of the intended effect on me, making me more nihilistic. You keep talking about “fun” and I keep wondering what the point is when the desire for fun isn’t really different from hunger or other evolution-driven desires that would presumably be (optionally?) eliminated in a friendly singularity scenario.
If you never get bored, do you care about or need to have fun?
ShardPhoenix, someone once asked me if I thought our universe was a “computation”, and I said, “I see no difference between the concept ‘computation’ and the concept ‘universe’ except that a universe doesn’t have anything outside it.”
If you keep looking for something external to affect, what happens when you run into the borders of reality?
If we cannot take joy in the merely internal, our lives shall be empty indeed.
Kitty Pryde: “Science, magic, politics… is there anything you can’t do?” Doctor Doom: “Knit. I find it repetitive.”
What’s more internal than wireheading?
The problem with wireheading isn’t that it’s “internal” but that it lacks numerous other fun-features that have been discussed and will be discussed, such as challenge and novelty—and emotion, and interpersonality...
Keep questioning, ShardPhoenix. And note that Eliezer never answered your question, namely, if you can modify yourself so that you never get bored, do you care about or need to have fun?
Sure, everyone living now has to attend to their own internal experience, to make sure that they do not get too bored, too sad or too stuck in another negative emotional state—just like everyone living now has to make sure they have enough income—and that need for income occupies the majority of the waking hours of a large fraction of current humanity.
But why would negative emotional states press any harder on an individual than poverty, staying disease-free, or avoiding being defrauded by another individual, once we are able to design a general intelligence from scratch and to redesign our own minds? What is so special about the need for fun postsingularity that makes it worth a series of posts on this blog, whereas, e.g., avoiding being defrauded postsingularity is not worth a series of posts?
As a biologist, I’m pretty wrapped up in all the awesome weirdness this planet has to offer, and I know for a fact there is no way I could ever become bored with it.
It seems to me, if you are bored with life you are doing it wrong, no matter how old you are. Ennui is caused by an unwillingness to step outside of your comfortable boundaries, and not by a lack of interesting things to see, do, or ponder. It is also an unwillingness to simply take pleasure in the genuine present moment. Someone once told me “Do not depend on others to entertain you, boredom is a sign of immaturity.”
~Kai
What about competition?
Competition against other people makes a lot of things which are ‘not fun’ into ‘fun’. Sports are a great example. I find just running boring, and running around hitting a little yellow ball is boring too, even when there are lines and targets. But put an opponent of similar skill on the other side of the net, and tennis can entertain me for as long as my muscles hold out (and, indeed, stretching that limit is part of the fun).
It seems like competing against another person is qualitatively different than solving a given problem.
I know a professor of mathematics who makes bows as a hobby (the kind that shoots arrows). He has made a LOT of them so far. Apparently, he still finds it fun. Eliezer, have you ever actually had a hobby like that?
What about competition?
It seems to me that Eliezer’s main point is that we probably won’t run out of fun things to do. The fact that we enjoy competition seems to also be an argument for that.
I’m looking forward to the next posts on this—I expect the “wirehead problem” to feature in a big place. Are some kinds of fun “ok”, and others not? Where do we draw the line?
@Eliezer: If you are going to solve The Culture, then you’re my genuine hero. I know it’s odd to pick this one thing, but the implications of that series have been quietly bugging me for ages.
@Richard Hollerith: every argument for wireheading of the form “but then I wouldn’t care about that stuff any more” can also be applied to shooting yourself in the head. You’d get zero utils, but you wouldn’t care about that anymore.
An ethical dilemma around your future self can be rotated off the T axis into one relating to a separate person. (If anything, the case relating to another person is weaker, you don’t reasonably expect to become them yourself.) In both cases, the one that judges and the one that experiences are separate. Thus if you should be against another person shooting themself, you should also be against your future self doing so, and for (no fewer than) the same reasons.
Julian, I agree: becoming a wirehead who will never again have an external effect, aside from being a recipient of support or maintenance, is no better than just shooting yourself under my system of valuing things.
And note that Eliezer never answered your question, namely, if you can modify yourself so that you never get bored, do you care about or need to have fun?
Richard, probably you wouldn’t care or need to have fun. But why would you do that? Modifying yourself that way would just demonstrate that you value the means of fun more than the ends. Even if you could make that modification, would you?
You’re all wrong. We can’t run out of real-world goals. When we find ourselves boxed in, the next frontier will be to get out, ad infinitum. Is there a logical mistake in my reasoning?
No matter how many boxes we get out of, when do we start having fun? What’s the point of trying to escape if there’s nothing to do when you’re free?
Anytime! If you want exploration, you’ll see the next frontier of escape after the Singularity. If you want family life, artistic achievement or wireheading, you can have it now.
on table legs:
your core assumptions about the nature of ‘fun’ - i.e., as quantifiable information influx—follow a particularly post-Enlightenment Western-scientific model.
Consider the pursuit of perfection as an alternate model.
We can assume that the achievement of perfection is impossible (as is implicit in many philosophical systems that treat perfection as a goal—zen, Aquinas, etc.). It’s possible then to find a vocation—any vocation—and pursue it infinitely while still being challenged by the imperfections of the physical world.
One might argue that eventually you would craft a ‘perfect’ table leg. I suspect that this ‘perfection’ would be the result of low standards on the part of the judge (an inability to perceive microscopic flaws in the interior grains of the wood, etc.), but let’s say that it is possible to do so. Then of course the quest is to achieve a perfect form of craftsmanship, in which every table leg is totally perfect.
Even assuming always-perfect wood, your craftsmanship can be perfect only until proven otherwise, so one would need to continue in perfect craftsmanship to approach certainty of perfect craftsmanship. Since perfection can never reach p=1, you would need to create perfect table legs infinitely, constantly failing to disprove your perfect-craftsmanship hypothesis.
Perfection is time-based in that it only exists in the present (zen again), not with statistical certainty. The ‘fun space’ is already infinite in the crafting of table legs—it just depends on your definition of ‘fun.’
Isn’t everything we do aimed toward achieving perfection? Perfect control of our environment and ourselves through perfect understanding.
Eliezer: This post is an example of how all your goals and everything you’re doing is affected by your existing preferences and biases.
For some reason, you see Peer’s existence as described by Greg Egan as horrible. You propose an insight-driven alternative, but this seems no more convincing to me than Peer’s leg carving. I think Peer’s existence is totally acceptable, and might even be delightful. If Peer wires himself to get ultimate satisfaction from leg carving, then by definition, he is getting ultimate satisfaction from leg carving. There’s nothing wrong with that.
More importantly—no alternative you might propose is more meaningful!
There’s also nothing wrong with being a blob lying down on a pillow having a permanent fantastic orgasm.
The one argument I do have against these preoccupations is that they provide no progress towards avoiding threats to one’s existence. In this respect, the most sensible preoccupation to wire yourself for would be something that involves preserving life, and other creatures’ lives as well, if you care for that as the designer.
Satisfying that, the options are open. What’s really wrong with leg carving?
Yes, Ben Jones, I sincerely would. (I also value the means of friendship, love, sex, pleasure, health, wealth, security, justice, fairness, my survival and the survival of my friends and loved ones more than the ends. I have a very compact system of terminal values. I.e., very few ultimate ends.)
I am fully aware that my saying that I value friendship as a means to an end rather than an end in itself handicaps me in the eyes of prospective friends. Ditto love and prospective lovers. But I am not here to make friends or find a lover.
People have a bias for people with many terminal values. Take for example a person who refuses to eat meat because doing so would participate in the exploitation of farm animals. My hypothesis is that this position helps the person win friends and lovers because prospective friends and lovers think that if the person is that scrupulous towards a chicken he has never met, then he is more likely than the average person to treat his human friends scrupulously and non-exploitatively. A person with many terminal values is trusted more than a person with fewer, and is rarely called on to explain the contradictions in his system of terminal values.
There are commercials for cars in which the employees of the car company are portrayed as holding reliable cars with zero defects as a terminal value. Or great-tasting beer as a terminal value. And of course advertisers tend to keep using a pitch only if it helps sell more cars or beer. It is my hope that some of the readers of these words realize that there is something wrong with an agent of general intelligence (a human in this case, or an organization composed of humans) holding great-tasting beer as a terminal value.
I invite the reader to believe with me that Occam’s Razor—that everything else being equal, a simple system of beliefs is to be preferred over a complex system—applies to normative beliefs as well as positive beliefs. Moreover, since there is nothing that counts as evidence for or against a normative belief, a system of normative beliefs should not grow in complexity as the agent gathers evidence from its environment the way a system of positive beliefs does.
Finally, if Vladimir Slepnev has written up his ethical beliefs, I ask him to send them to me.
Richard Hollerith, thanks for your interest, but you’ll be disappointed: I have no religion to offer. The highlights of every person’s ethical system depend on the specific wrongs they have perceived in life. My own life has taught me to bear fruit into tomorrow, but also to never manipulate others with normative/religious cheap talk.
Also, Occam’s Razor can only apply to those terminal beliefs that are weaker held than the razor itself. Fortunately, most people’s values aren’t so weak, even if yours are. :-)
Even if Fun Space were exhaustible, it wouldn’t worry me. I could always just remove some of my memories and jump into e.g. an ancestor simulation of the 21st century, that would be new and exciting all over again just like a life currently can be.
Though I would perhaps like even more to set up a futuristic version of Civilization the computer game. Be born to the year 4000 B.C. as an immortal, with a bunch of similar immortals around the world ruling their own civilizations (and some peers on the same team with me as close companions, perhaps), and see who manages to colonise most of the universe this time around.
SPOILER WARNING!
This comment contains spoilers for Permutation City.
I agree that Peer’s strategy, as described in Permutation City, is a very suboptimal strategy for maximizing fun, given both finite resources and finite time.
But Peer had an infinite amount of processing time, and (spoiler!) until the final chapter, believed that he had an infinite amount of computing resources as well. (In the final chapter, Peer found that he only had a finite, but large amount of computing resources—equivalent to a planet of computronium? a solar system? a galaxy? the story didn’t say.)
Also, Peer had one strategically relevant belief which you may have overlooked: Peer believed that an experience has literally no value if someone has had exactly the same experience before—i.e., if the digital representation of a mind ever enters exactly the same state more than once.
(I personally disagree with this. A pleasure is still real, and still has positive value, the second time you experience it. A pain is still real, and still has negative value, the second time you experience it.)
Given this belief, Peer’s strategy is optimal: randomly alternating between any experience that could even remotely be considered fun.
Given infinite time, this will eventually cover the entire volume of Fun Space. Or rather, all the parts of fun space that can be accessed by a mind running on the computing resources available to Peer.
If a mind really does have access to infinite computing resources, then that mind’s Fun Space is truly infinite. (trivial proof, and a degenerate example: spend a year contemplating the number 1, then spend a year contemplating the number 2, then 3, then 4...)
All Peer cares about is that as much of Fun Space is covered as possible, and this strategy achieves that.
(another spoiler) In the final chapter, Peer decides that he doesn’t even care who experiences the pleasure, and splits himself into a whole Solipsist Nation of minds that are each happy for their own wildly arbitrary reasons.
Other than a couple of major differences, Peer’s philosophy matches mine quite well. Well enough for me to name myself after him. (After all, if my life is at all worthwhile, then Peer must have experienced the good parts of it at some point during his random explorations of Fun Space.)
related plugs:
a wiki page I wrote examining Peer’s philosophy, and my own philosophy, in more detail: http://transhumanistwiki.com/wiki/Peer_Infinity/Quotes_from_Permutation_City
a dream I had, where I was Peer, that seemed almost good enough to write a short story out of, but not surprisingly ended up kinda lame: http://transhumanistwiki.com/wiki/Peer_Infinity/Short_Story_1
If there are hidden variables and random noise, you can still be learning after repeating an experience an arbitrary number of times. Consider the probability of observed x calculated after reestimating the distribution on hidden variable t. We calculate this by integrating the probability of x given t, p(x|t), over all possible t weighted by the probability of t given x, p(t|x). We have
∫ p(x|t) p(t|x) dt = ∫ p(x|t) · p(x|t) p(t) / p(x) dt = E[p(x|t)²] / p(x) = (E[p(x|t)]² + Var[p(x|t)]) / p(x) ≥ E[p(x|t)]² / p(x) = p(x).
Here the expectation is over the prior distribution of t, and the last equality uses E[p(x|t)] = ∫ p(x|t) p(t) dt = p(x). Note that we have equality iff the variance of p(x|t), according to our prior distribution on t, is zero—which is to say that p(x|t) is constant almost everywhere the prior distribution on t is positive. If this variance is not zero, then p(x) in this calculation changes (increases), which means that we are revising our distribution on t, and changing our minds.
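A quick way to see this argument concretely is a toy hidden-variable model (not from the thread; the coin biases and prior here are illustrative choices): a coin’s bias t is hidden, and we repeatedly observe the same outcome x = heads. As long as Var[p(x|t)] > 0, each “repeat” of the experience still moves the posterior, and the predictive probability p(x) keeps increasing:

```python
# Toy check of the derivation above: hidden bias t in {0.2, 0.8},
# uniform prior. Repeatedly observing the SAME outcome (heads)
# still changes our distribution on t, and p(x) strictly increases
# until the variance of p(x|t) under the posterior shrinks away.

def predictive_heads(prior):
    """p(heads) = sum over t of p(heads|t) * p(t)."""
    return sum(t * p for t, p in prior.items())

def update(prior, observed_heads=True):
    """One Bayes update of the distribution on t after a coin flip."""
    posterior = {}
    for t, p in prior.items():
        likelihood = t if observed_heads else (1 - t)
        posterior[t] = likelihood * p
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

prior = {0.2: 0.5, 0.8: 0.5}
history = [predictive_heads(prior)]   # 0.5 before any evidence
for _ in range(5):
    prior = update(prior, observed_heads=True)
    history.append(predictive_heads(prior))

# The "same" experience carries information every time it repeats.
assert all(a < b for a, b in zip(history, history[1:]))
```

After one observation the predictive probability jumps from 0.5 to 0.68, and it keeps climbing toward 0.8—equality in the inequality above would hold only if both hypotheses assigned x the same likelihood.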
Right, but even with a digital brain, if you only have a finite number of bits to store the floating point number representing the probabilities, eventually you will run out of bits. What you just described gets you a whole lot of new experience, but not a literally infinite amount.
hehehe what’s BB(3^^^3)
Hold on, let me create a new universe to hold all the matter needed to compute and write out its value.
This is almost certainly not computable in any finite amount of time, since the Busy Beaver function is not computable in general, and is probably not determinable even for much, much smaller arguments (say, on the order of 100, or possibly much less). So if one is working in computable universes, this is simply not computable.
It’s an integer, of course it’s computable!
I think you’re going to need more than one of those...
Hold on, let me create a new universe to hold all the matter needed to compute how big and how many universes I’ll need to compute BB(3^^^3).
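To make the “3^^^3” in this exchange concrete, here is a minimal sketch of Knuth’s up-arrow notation (the function name `hyper` is my own; none of this is from the thread). Even the small cases explode: 3↑↑3 already has thirteen digits, 3↑↑4 has over three trillion digits, and 3↑↑↑3 (= 3^^^3) is a tower of 3↑↑3 threes—all before asking for its Busy Beaver value:

```python
# Knuth up-arrow notation: hyper(a, n, b) computes a ↑^n b.
# n = 1 is ordinary exponentiation; higher n iterates the level below.

def hyper(a, n, b):
    """a ↑^n b, defined by the standard recursion."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return hyper(a, n - 1, hyper(a, n, b - 1))

assert hyper(3, 1, 3) == 27                # 3^3
assert hyper(3, 2, 3) == 7625597484987     # 3^^3 = 3^(3^3)
assert hyper(2, 3, 3) == 65536             # 2^^^3 = 2^^4 = 2^2^2^2
# 3^^^3 = 3^^(3^^3): a power tower of 7,625,597,484,987 threes.
# Don't call hyper(3, 3, 3)—no universe of matter suffices.
```

Python’s arbitrary-precision integers handle the small cases exactly; anything past them blows through memory and recursion depth long before any “new universe” could help.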
“Clearly, the solution to Peer’s difficulty is to become stupid enough that carving table legs is difficult again—and so lousy at generalizing that every table leg is a new and exciting challenge—”
I’m in-house counsel at a leading renewable energy company (in other words, an average corporate slave who ends up doing many tasks repeatedly) who has to deal with many different large-scale projects, but the truth is that a lot of the work I end up doing involves many similar terms and regulations, so there is some amount of repetition to my work. I am sure that this is the case with a great many jobs in the world—not just desk-clerk jobs, but even jobs that are more cognitively demanding. I’m quite sure that even researchers in nanotech or space travel have routine tasks that they must repeatedly perform in order to get to the more exciting aspects of their work.
This form of thinking (I’m now wondering whether it can be classified under the dark arts section) where you enjoy mechanically performing certain functions can in fact be tamed.
Another thought that occurs is that the same function—i.e., carving a wooden leg—can be viewed as two completely separate activities. If you went and asked Aldous Huxley, he would probably say that this is possible. If I remember correctly, he mentions this sort of fresh perspective on mundane day-to-day facts a lot in The Doors of Perception (I may need to refresh my memory, I read it years ago). Timothy Leary (although I’ve only read the Wikipedia page on this one, haven’t read one of his books yet) also seems to suggest that this can happen through a variety of different methods.
Does anyone think any kind of effective compartmentalisation is possible, wherein we combine the maximum fun that can be obtained while also retaining the ability to think cognitively/rationally when we really need to?
EY, I’m not sure I’m with you about needing to get smarter to integrate all new experiences. If we want to stay and slay every monster, couldn’t we instead allow ourselves to forget some experiences, and to not learn at maximum capacity?
It does seem wrong to willfully not learn, but maybe as a compromise, I could learn all that my ordinary brain allows, then allow that to act as a cap and not augment my intelligence until that level of challenges fully bored me. I could maybe even learn new things while forgetting others to make space.
Or am I merely misunderstanding something about how brains work?
My motivation for taking this tack is that I find the fun of making art and of telling stories more compelling than the fun of learning; therefore, I’m not inclined to learn as fast as possible, if it means skipping over other fun; I’m also disinclined to become so competent that I’m alienated from the hardships/imperfections that give my life a story / allow me to enjoy stories.
Yes, I think he recognizes this in this post. He also writes about this (from a slightly different perspective) in high challenge.
A world without complex novelty would be lacking. But so would a world without some simple pleasures. There are people who really do enjoy woodworking. I can’t picture a utopia where no one ever whittles. And a few of them will fancy it enough to get really, really good at it, for pretty much the same reason that there are a handful of devoted enthusiasts. Even without Olympic competitions and marathons, I’d bet there would still be plenty of runners who did so purely for its own sake, rather than to get better, or to compete, or for novelty. Given an infinite amount of time, everyone is likely to spend a great deal of time on such non-novel things. So, what’s most disturbing about carving 162,329 table legs is that he altered his utility function to want to do it.
Perhaps I’m missing something, but it seems to me that any mind capable of designing a Turing-complete computer can, in principle, understand any class of problem. I say “class of problem”, because I doubt we can even wrap our brains around a 10x10x10x10 Rubik’s Cube. But we are aware of simpler puzzles of that class. (And honestly, I’m just using an operational definition of “classes of problem”, and haven’t fleshed out the notion.) There will always be harder logic puzzles, riddles, and games. But I’m not sure there exist entirely new classes of problems, waiting to be discovered. So we may well start running out of novelty of that type after a couple million years, or even just a couple thousand years.
That really expresses something I’ve been mulling over to myself for a while: that failed utopias in fiction, or at least a large class of such, only appear to work because they lack certain types of people. The Culture, ironically, has no transhumanists, people who look at the Minds and say, “I want to be one of those.” Certain agrarian return-to-nature fantasies lack people like me, who couldn’t psychologically survive outside of a city and who derive literally no pleasure from so-called ‘beautiful dioramas’. And of course, any utopia I would try to write probably would fall into the same trap, most likely because I wouldn’t include people who want to whittle.
Good point. It seems like we 1) value an incredibly diverse assortment of things, and 2) value our freedom to fixate on any particular one of those things. So, any future which lacks some option we now have will be lacking. Because at some point we have to choose one future over another, perhaps we will always have a tiny bit of nostalgia. (Assuming that the notion of removing that nostalgia from our minds is also abhorrent.)
I’ll also note that after a bit more contemplation, I’ve shifted my views from what I expressed in the second paragraph of my comment above. It seems plausible that certain classes of problems tickle a certain part of our brain. Visual stimuli excite our visual cortex, so maybe Rubik’s Cubes excite the parts of our brain involved in spatial reasoning. It seems plausible, then, that we could add entire new modules to our minds for solving entire new classes of problems. Perhaps neuroplasticity allows us to already do this to a degree, but it also seems likely that a digital mind would be much less restricted in this regard.