Dragon Ball’s Hyperbolic Time Chamber
A time dilation tool from an anime is discussed for its practical uses on Earth; there seem to be surprisingly few, and none that would change the world, because of the severe penalties humans would incur while using it and because basic constraints like Amdahl’s law limit its scientific uses. A comparison with the position of an Artificial Intelligence such as an emulated human brain seems fair, except that most of the time dilation disadvantages do not apply or can be ameliorated, so any speedups could be exploited quite effectively. I suggest that skeptics of the idea that speedups confer advantages are implicitly working off the crippled time dilation tool and not making allowance for the disanalogies.
Master version on gwern.net
A year in a day is not very impressive, but 365 years (of research) in a year is. Depending on how many people can be placed in these boxes at any given time, this may amount to three centuries’ worth of progress (at least in software and mathematics).
yo dawg we heard you like hyperbolic time chambers
Note: the above is actually a highly insightful comment if you stop and think about it for a second.
Any comment can seem insightful if you’re allowed to supply details until it makes sense.
I don’t think the details you have to supply here (you have to know what the meme is, I suppose?) are particularly difficult or unreasonable.
afsd;ljkurjzvn,x
The above comment would be insightful if it were a counterexample. This means it is not a counterexample. That means that it is not insightful. That means it is a counterexample. It’s like the least-interesting-number paradox, but for nonsense strings of letters.
Regardless, I might recognize the technical accuracy of your point, but your point is only superficially useful. I liked the original comment and thought that it was both funny and insightful. Yes, some of that insight is mine as well (rocks can’t sing or dance or use logic), but that doesn’t mean that the initial comment isn’t also interesting.
This does not follow. You’re treating the first premise like a double implication, but it’s certainly not true that the comment would be insightful if and only if it were a counterexample.
Clearly the comment “afsd;ljkurjzvn,x” was just a typo for “afsd:ljkurjzvn.x”, which I read as agreement with my point, making clever reference to complexity theory and Aaronson’s refutation of the waterfall argument.
So it isn’t rot13?
Indeed. Put a hyperbolic time chamber inside another hyperbolic time chamber, and you get a speedup factor of 365 squared.
I think the characters in Primer may have done something like this, getting around a limitation of their time machine by putting a second time machine inside of the first. Then again, the movie isn’t always clear as to what’s happening, so it’s hard to tell...
Yeah, it would be interesting, but it’s not doable in either the original DBZ scenario or in upload scenarios: you can’t emulate an emulator and get a speedup like that—the buck-passing doesn’t work; the computations still have to be done somewhere.
(Any optimization you could apply to emulating an emulation, like some sort of Futamura projection collapsing the emulated program and the emulated hardware, could be done at the original emulation level, so all it leaves you with is possible programming convenience and constant factors of inefficiency and indirection.)
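To make the “no free speedup from nesting” point concrete, here is a toy Python sketch (my own illustration, not anything from the post): the same loop run natively and then through one hand-rolled interpretation layer. Each layer of interpretation only multiplies the host work per useful step; it can never make the outermost host finish sooner.

```python
# A minimal sketch of why nesting emulators cannot buy speed: each layer of
# interpretation multiplies the host work per useful step, so the inner
# computation always has to be paid for at the outermost level.

import time

def count_native(n):
    """The 'real' computation: just count to n."""
    total = 0
    for _ in range(n):
        total += 1
    return total

def count_interpreted(n):
    """The same loop expressed as a tiny made-up 'bytecode' program and run
    through a hand-rolled dispatch loop, standing in for one emulation layer."""
    program = [("INC",), ("DEC_N_JNZ", 0)]   # increment total; loop while n > 0
    total, pc = 0, 0
    while True:
        op = program[pc]
        if op[0] == "INC":
            total += 1
            pc += 1
        else:  # DEC_N_JNZ
            n -= 1
            pc = op[1] if n > 0 else pc + 1
            if pc >= len(program):
                break
    return total

if __name__ == "__main__":
    N = 1_000_000
    for name, fn in [("native loop", count_native),
                     ("one interpretation layer", count_interpreted)]:
        t0 = time.perf_counter()
        fn(N)
        print(f"{name}: {time.perf_counter() - t0:.2f}s of host time")
    # A second layer (an interpreter written in this bytecode, interpreting the
    # same loop) would multiply the overhead again, never cancel it.
```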
You can have an arbitrarily deep sequence of speeding-up optimized nested emulations, with each subsequent emulation running faster by its container’s clock than the otherwise identical container would run by itself (by its own clock).
(The catch is that n obk gung’f ehaavat na rzhyngvba znl or fybjre ol vgf pbagnvare’f pybpx guna vs vg vfa’g.)
I think you’d have to get a pretty large team in there to see any substantial results. One person, working alone without feedback, contact with other scientists, or any chance to do experiments, won’t do very much more in 365 years than they would in one.
The consequences if it could be tuned for smaller time chunks would be interesting—for example, people who’d like to have a time pocket for work and/or sleep, freeing up their out-in-the-world time for family, friends, or hobbies that can’t be done in a time box.
To some extent, this capability is already offered by modafinil. Modafinil is great and all, but its use is neither universal nor particularly revolutionary. The world without modafinil looks pretty much like the world with it.
I really liked William Sleator’s YA novel Singularity, which is about teenage identical twins who find a time anomaly with a similar speedup factor.
I enjoyed that book when I was a teenager as well. It’s kind of absurd because it starts out in the kids-discover-a-mysterious-artifact/phenomenon genre, and then suddenly turns into a irefvba bs Zl Fvqr Bs Gur Zbhagnva jurer gur xvq whfg fvgf va n furq sbe n lrne. V’z yvxr, “Ubj qvq gur nhgube xabj V’q rawbl ernqvat gung?”
I see that it’s on libgen.info, so I’ll give it a look.
It’s not a super deep book*, but it is very gripping, and more character-oriented than you might expect given the premise. The viewpoint character is a convincing 16-year-old. For me, the book is one of the most memorable fictional depictions of grit I’ve seen, right up there with Gattaca and The Shawshank Redemption**. (Disclaimer: I’ve read the book several times, but the most recent time was five or ten years ago.)
* But much deeper than Dragon Ball Z from what I’ve seen. :-)
Edit: Here’s Orson Scott Card giving a glowing review to Singularity and some other Sleator books. This contains a spoiler for Singularity! -- although vg’f n cybg cbvag lbh pbhyq cebonoyl thrff tvira gung jr’ir nyernql gnyxrq nobhg gur gjva cnenqbk.
** Edit 2: The Count of Monte Cristo deserves a place on this list too.
I just finished it. The Count of Monte Cristo came to my mind too during the ‘prison’ sequence, which was fairly good.
As far as the HTC scenario goes, it illustrates both the upside and downside: the ability to focus on something for a long interval, but also the massive reduction in quality of life. (It also mentions in passing the aging problem: the uncle is 40 but ‘looks 60’ and dies early, in the middle of his research—which he might have been able to finish if he had spent more time in realtime so he could await future replies from researchers & new textbooks or results.)
I guess it is a short book, but man! You don’t kid around.
As shown in the book, the aging thing can in very narrow circumstances be a feature not a bug, but when I daydream about using a secret HTC to amaze everyone with my productivity and learning speed, the daydream includes some means of not aging faster than everyone else. (This makes me think of some sort of SF Dorian Gray.)
Gwern probably has a better version of the HTC that allows you to spend 365 hours for each hour of realtime.
That sounds like an interesting book, thanks for the recommendation. It’s going on my to-read list.
One important aspect of uploads is that they can (presumably) be easily copied—this is enough for them to have a huge economic impact even if they only run at normal human speed. If you have one upload that is willing and able to do a job, suddenly you have as many as you need, and can displace all human workers in that job (at least once the hardware gets cheap enough).
This assumes that uploaded people would agree to being copied, or that the world has turned so dystopian that no one would ask their permission. I for one wouldn’t want several instances of me running around.
Can you say more about why not?
Not that you’re obligated to; preferences are preferences. But this particular preference is sufficiently alien to me that I’d like to understand it better.
I like my personal identity and creating several causally interacting copies of me would feel like diluting it. I could anticipate a future in which there would be several instances of me with different life experiences since the moment of splitting. All of those experiences would be ‘mine’ in a way, yet ‘I’ wouldn’t own most of them.
This would be less of a problem if there was a way to merge back into a single person after doing whatever needed doing but that’s not a given. Copying is straightforward in an uploading scenario while merging requires progress in conceptual understanding of the mind. And if we knew how to merge back together, we might also know how to make it an ongoing process, so instead of splitting into separate personalities we could launch several communicating threads of attention that would still match my intuition of being a single person, which I would find preferable.
(And yes, I know about many worlds. That’s different because the world splits with me.)
Cool; thanks.
Agreed that merging different-but-similar minds is a vastly different (and more complicated) problem from creating copies. And agreed that ongoing synchronization among minds is yet a third problem, and that both of those would be awesome.
Think of any real-world problem that takes one day on modern hardware. When computers were 365 times slower (that is, 12–13 years ago), solving such a problem would have taken one year.
Don’t you think there is any real-world problem that would benefit from 365x faster hardware, even if you can interact with it only once a day?
(Even if individual tasks take less than one year to complete, you can pool several of them and run them serially on the 365x computer.)
The first thing that comes to mind is evolutionary algorithms, which eat up tons of RAM; keeping them running at a decent pace while tracking dozens of variables is an enormous engineering challenge.
Doesn’t circuit design (and therefore computer processor design) require fairly large computational resources (for mathematical modelling)? Thus faster hardware now can be used to create even faster hardware, faster.
Yes, but how much of the work that goes into the next generation is just layout? It doesn’t solve all of your chemical or quantum-mechanical issues, or fix your photomasks for the next shrunken generation, etc. If layout were a major factor, we should expect to hear of ‘layout farms’ or supercomputers or datacenters devoted to the task. I, at least, haven’t. (I’m sure Intel has a datacenter or two, but so do many >billion tech multinationals.)
And if layout is just a fraction of the effort, like 10%, then Amdahl’s law especially applies.
It doesn’t give many actual current details, but http://en.wikipedia.org/wiki/Computational_lithography implies that as of 2006 designing the photomask for a given chip required ~100 CPU-years of processing, and presumably that has only gone up.
Etching a 22nm line with 193nm light is a hard problem, and a lot of the techniques used certainly appear to require huge amounts of processing. It’s close to impossible to say how much of a bottleneck this particular step in the process is, but given how much simulation it takes to really understand what is going on in even simple mechanical design, I would expect every step in chip design to have similar simulation requirements.
There is some positive feedback in circuit design (although sublinear, I think), but hardware serial speed is essentially limited by the size of the surface features on the IC, which is in turn limited by the manufacturing process and ultimately by the physical limits of CMOS technology.
Most of the examples people would come up with of extremely compute-intensive tasks are parallel algorithms, and those would be cheaper to run in the real world on server farms, which do not need self-contained power sources that fit inside an HTC, other special setups, or attendants paid handsomely to spend a year isolated in the prison of the HTC. There’s simply no reason to take a highly parallel task and run it in an HTC when you can get the same result at less cost and less latency by running it on a perfectly normal server farm or cloud computing platform.
(The genetic algorithm will spit out similar answers whether you run it on 1 CPU in an HTC at >$1/day for 365 subjective days (>$365 total), or on 365 CPUs on Amazon EC2 at $1/day for 1 real day ($365 total).)
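Spelling that parenthetical out as arithmetic (the $1/CPU-day figure is just the illustrative price above, not a real EC2 quote):

```python
# Back-of-the-envelope version of the parenthetical above: for an
# embarrassingly parallel job, the HTC buys latency you could buy anyway.

cpu_day_cost = 1.00        # assumed $/CPU-day, per the example above
job_cpu_days = 365         # total work in the genetic-algorithm example

# Option A: 1 CPU inside the HTC for 365 subjective days (1 real day elapses)
htc_cost = 1 * job_cpu_days * cpu_day_cost

# Option B: 365 ordinary CPUs in a datacenter for 1 real day
farm_cost = 365 * 1 * cpu_day_cost

print(f"HTC:  ${htc_cost:.0f}, done in 1 real day (plus whatever the HTC itself costs)")
print(f"Farm: ${farm_cost:.0f}, done in 1 real day, no exotic hardware needed")
```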
Remember, electricity is already the dominant cost of running a server these days!
Are there serial tasks which people do not run because they would take a fraction of a year, but which would justify a year’s worth of premium power bills if they could be run overnight? I’m sure there are some, and the introduction of an HTC would conjure up uses which no one had bothered to work out because doing so was obviously pointless, but I can’t help thinking that the absence of obvious candidates means the candidates wouldn’t be fantastically useful.
Even so-called embarrassingly parallel problems, those whose theoretical performance scales almost linearly with the number of CPUs, in practice scale sublinearly in the amount of work done per dollar: massive parallelization comes with all kinds of overheads, from synchronization to cache contention to network communication costs to distributed storage issues. More trivially, large data centers have significant heat dissipation issues: they all need active cooling, and many are housed in high-tech buildings specifically designed to address this issue. Many companies even place data centers in northern countries to take advantage of the colder climate, instead of putting them in, say, China, India or Brazil where labor costs much less.
Problems that are not embarrassingly parallel are limited by Amdahl’s law: as you increase the number of CPUs, the performance quickly reaches an asymptote where the sequential parts of the algorithm dominate.
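For reference, since Amdahl’s law keeps coming up, here is a quick sketch of the bound it imposes (the standard formula; the parallel fractions are my own toy numbers). A 365x-faster serial machine side-steps the bound entirely, because a constant-factor serial speedup applies to the sequential parts too.

```python
# Amdahl's law: if a fraction p of a job parallelizes perfectly and the rest
# is serial, the best possible speedup on N CPUs is 1 / ((1 - p) + p / N),
# which flattens out long before N gets large.

def amdahl_speedup(p, n_cpus):
    """Upper bound on speedup for parallel fraction p on n_cpus processors."""
    return 1.0 / ((1.0 - p) + p / n_cpus)

if __name__ == "__main__":
    for p in (0.50, 0.90, 0.99):
        for n in (4, 64, 1024, 10**6):
            print(f"p={p:.2f}, N={n:>7}: speedup <= {amdahl_speedup(p, n):6.1f}")
    # Even with 99% parallel code, a million CPUs cannot beat ~100x,
    # whereas a 365x-faster serial machine speeds up *everything* 365x.
```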
Take P-complete problems, for instance. These are problems which are efficient (polynomial time) on a sequential computer, but are conjectured to be inherently difficult to parallelize (the NC != P conjecture). This class contains problems of practical interest, notably linear programming and various problems in model checking. Being able to run these tasks overnight instead of in one year would be a significant advantage.
An HTC would come with serious overhead costs too; the cooling is just the flip side of the electricity—an HTC isn’t in Iceland, and the obvious interpretation of an HTC as a very small pocket universe means that you have serious cooling issues as well (a year’s worth of heat production to eject at each opening).
I’m not sure how much of an advantage that would be: there are pretty good approximations for some (most/all?) problems like linear programming (remember Grötschel’s report citing a 43 million times speedup of a benchmark linear programming problem since 1988) and such stuff tends to asymptote. How much of an advantage is running for a year rather than the otherwise available days/weeks? Is it large enough to pay for a year of premium HTC computing power?
Of course given that the HTC is a fictional device you can always imagine arbitrary issues that make it uneconomical. I was considering the HTC just as a computer that had 365x the serial speed of present day computers, and considering whether there would be economically interesting batch (~1 day long) computations to run on it.
These problems have polynomial time complexity; they don’t asymptote. Linear programming, for instance, has quadratic worst-case time complexity in the size of the problem instance (and O(n^3.5) time complexity in the number of variables). For problems related to model checking (circuit value problem, Horn-satisfiability, type inference), approximate solutions don’t seem particularly useful.
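Taking those complexity figures at face value (the exponents are the ones cited above, not something I have checked against the LP literature), a constant-factor speedup doesn’t buy much extra problem size, though it does buy the overnight-instead-of-a-year latency:

```python
# If solving an instance scales roughly as O(n^k), a machine that is 365x
# faster serially can, in the same wall-clock budget, handle instances only
# about 365**(1/k) times larger -- but it finishes any fixed instance 365x
# sooner, which is the overnight-vs-one-year advantage discussed above.

speedup = 365
for k in (2.0, 3.5):   # quadratic, and the O(n^3.5) bound cited above
    print(f"O(n^{k}): ~{speedup ** (1.0 / k):.1f}x larger instances in the same time")
```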
Hm, I wasn’t, except in the shift to the upload scenario where the speedup is not from executing regular algorithms (presumably anything capable of executing emulated brains at 365x realtime will have much better serial performance than current CPUs). As an ordinary computer there are still heat considerations—how is it taking care of putting out 365x a regular computer’s heat even if it’s doing 365x the work? And as a pocket universe as specified, heat is an issue—in fact, now that I think about it, Stephen Baxter invented a space-faring alien race in his Ring hard-SF universe that lives inside tiny pocket universes as the ultimate in heat insulation.
I was referring to the quality of the solution produced by the approximating algorithms.
Quickly googling, there seems to be plenty of work on approximate solutions and approaches in model checking; for example http://cs5824.userapi.com/u11728334/docs/77a8b8880f48/Bernhard_Steffen_Verification_Model_Checking_an.pdf includes a paper:
I’ll admit I don’t know much about the model checking field, though.
The reason I expect uploads to have a big impact is not particularly accelerated time—it’s the ability to copy, observe and modify the uploads.
If I were an upload with access to my own source code (or access to my data and the source of the software on which I’m running, etc.), I might want to try to run modified versions of myself to see what changes, to see if I can have better short-term memory, or if I can “outsource” any maths calculation to more optimized software, or have introspective access to my emotional reactions or the reasons I believe things, or have better/different senses (directly perceive and modify code and data?), or decrease my learning time, etc.
What stops you from making a change that is addictive or self-amplifying? For example, suppose a subtle tweak makes you less averse to making another subtle tweak in the same direction. A few thousand iterations later and your network is trashed. http://lesswrong.com/lw/ase/schelling_fences_on_slippery_slopes/
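Here’s a throwaway simulation of that worry (entirely my own toy model; none of the numbers mean anything): if each tweak erodes the aversion that would have blocked the next one, the edits compound instead of averaging out.

```python
# Toy model of the "self-amplifying tweak" failure mode: each tweak slightly
# erodes the aversion that would have stopped the next one, so drift from the
# original mind compounds rather than staying small.

import random

def run(iterations=5000, erosion=0.02, seed=1):
    rng = random.Random(seed)
    aversion = 1.0     # resistance to further tweaks (floored at 0.05)
    drift = 0.0        # cumulative change away from the original values
    for i in range(iterations):
        # tweaks in the "same direction", scaled up as aversion weakens
        step = abs(rng.gauss(0, 0.01)) / max(aversion, 0.05)
        drift += step
        aversion = max(aversion - erosion * step, 0.05)
        if i % 1000 == 0:
            print(f"iter {i:>5}: aversion={aversion:.2f}, total drift={drift:.2f}")
    return drift

if __name__ == "__main__":
    run()
```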
It seems to me that the only safe way to do this would be to only permit other uploaded entities to make the edits, working in teams, with careful observation and testing of results. Older versions of yourself might be team members.
Also, the hardware design would need to be extremely well thought out, so that it is not possible for someone to Blue Pill attack you without your knowledge, or directly overwrite your neural structures with someone else’s patterns. The hardware would have to be designed with security permissions inherently baked in: here’s a blog post where Drexler discusses this:
http://metamodern.com/2011/08/03/quiz-question-what-is-wrong-with-this-model-of-computation/
Greg Egan just posted an article about his new books, which seem to deal with a somewhat similar setup: The Orthogonal Universe (spoilers for the basic worldbuilding of Egan’s Orthogonal trilogy)
There’s also a similar concept in Accel World (dealing with a VR world with a speedup of 10,000) and the later novels of Sword Art Online (Alicization, specifically), which is set earlier on the same timeline.
It’s not entirely obvious why there are no apparent consequences to society as a whole, but it definitely affects the characters who get trapped there, who end up variously between a decade and several hundred years older than their apparent age.
That universe also makes uploading much easier than ours. They may appear human, but their brains don’t work on quite the same principles as ours.
That’s true; The Clockwork Rocket is a clear example of exploiting time acceleration for practical use, but note that it does this by either biting bullets or working around them: everyone on the rocket expects to die of aging or worse, and the research is only plausible because a good chunk of the world’s researchers are apparently on board.
Plenty of people would benefit from an opportunity to adjust their life’s timeline a bit relative to the rest of the world. To stick with the training premise, you might take a year to recover from an injury and get back up to form in time for the Olympics.
I recall an episode of Star Trek TNG in which the crew, when entering a time anomaly, affixed a device to their arms in order to anchor them to their own time. I suppose that with a similar anchor to the time outside of the HTC a person whilst inside the chamber wouldn’t age.
I don’t think there’s any particular reason to suppose either exists in the context of the other fictional setting though.
Reminds me a little of Anathem—it’s like a really fast Math.
I found Anathem really unbelievable with the deepest Maths like the Millennials—human social structures & network effects do not work that way! Make 100 Millennials, and if you’re lucky, they’ll all just be utterly stagnant, having gotten lost in one cul-de-sac. (If you’re unlucky, they’ll all be dead due to cold or bad social dynamics or something. If you’re really unlucky, they’ll turn into a warrior cult that slaughters everyone near them.)
I’d gotten about a third of the way through Anathem when the person who lent it to me asked me how I was liking it. I replied that it was fun to read, but I was having serious trouble accepting his world as actually populated by human beings, even for long enough to suspend my disbelief… humans just don’t work that way.
He smiled and said “keep reading”.
Later, I was amused.
Some specific comments about uploads:

a. First, a more reasonable estimate of the speedup is on the order of 10^6 to 10^8. This depends on a few factors: if we assume 5 GHz switching speeds and dedicated discrete circuits for every single simulated neuron, that would in theory be a speedup of 25 million times versus human switching speeds on the order of 200 Hz. Some neurons are faster than this, however, and you need enough discrete timesteps to account for the small differences in arrival times that are a major factor in signaling. The upper end of the range is likely possible if you use much faster nano-scale components and, again, dedicated hardware circuits for every component. These kinds of speeds are also only practical if the neuroscience understanding is good enough that fine cellular details need not be simulated, merely neural-network nodes with a large number of parameters.
If general-purpose CPUs inside a supercomputer are used instead, even achieving simulation speeds close to realtime is expected to be very difficult.
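For what it’s worth, here is the arithmetic behind the 25-million figure, using the assumed rates above (these are the commenter’s assumptions, not established numbers):

```python
# Reproducing the switching-speed ratio from the assumptions above.

neuron_rate_hz = 200          # assumed peak biological switching rate
circuit_rate_hz = 5e9         # assumed 5 GHz dedicated hardware

raw_ratio = circuit_rate_hz / neuron_rate_hz
print(f"raw switching-speed ratio: {raw_ratio:.2e}")   # 2.5e7, i.e. 25 million

# The claimed practical range then brackets this figure, depending on how
# much per-neuron detail actually has to be simulated:
low, high = 1e6, 1e8
print(f"estimated speedup range: {low:.0e} to {high:.0e}")
```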
b. Working from the assumption that the speed advantage is 10^6, predicting superhuman capabilities from the uploads seems reasonable. A single uploaded entity wouldn’t get just 12 PhDs—they could get every PhD. Presumably, they’d enroll in every PhD program in the world simultaneously, and send some kind of robotic avatar to class. By rapidly switching between many robot avatars, queuing up commands for each one, they could presumably do many slow tasks at once.
c. These kinds of speedups allow levels of micromanagement that do not exist today. For example, suppose that the uploaded entity has a factory that produces some kind of assembly robot. As the factory runs, the entity observes every last motion by the machines in the factory and can adjust settings and motions for optimal performance. So, now the assembly robots are coming off the production line and going to work. The entity might notice a design flaw in the first batch and make dozens of changes to correct the flaw, so 5 minutes later version 1.1 of the bot is the one coming off the line. But 1.1 has other drawbacks, so 5 minutes after that, it’s version 1.2.
Or, even more detailed: the entity understands the current task in such fine-grained detail that EVERY robot coming off the assembly line is slightly customized so that it uses available resources as efficiently as possible.
I do acknowledge the author’s basic point. An entity that can think 1 million times faster would not be able to advance technology at 1 million times current speed. R&D cannot be done purely in simulations: physical experiments must be performed, physical prototypes must be built and tested against the real world. However, the speedups would still be large enough that a world with high-speed uploaded entities would soon look very different from a world without them.
The Hyperbolic Time Chamber only allows two people at a time, except for that one time the plot demanded three people be in there, and the non-canon filler scene where four people were in there.
Cool essay otherwise, gwern.
Deadlines? Imagine you have an important exam/audition in two days, but haven’t had the time to study/rehearse properly. Then, tomorrow you lock yourself in the chamber with your books/instrument and leave it when you’re totally awesome. OTOH, since (IIRC) you are only allowed into the chamber twice in your life (and even if this weren’t the case, you don’t want to be 35 subjective years old 25 calendar years after the date on your birth certificate), you only do that for things you really care about.
Deadlines like that are zero-sum games, and so the impact is limited—it shifts around who wins the exams/auditions at a substantial cost (whatever it takes to use the HTC over just living in the real world). On the macro scale, I’m not sure how much it matters at the margin: if someone needs an HTC just to study for an exam...
So is poker, but it doesn’t mean there’s no point in playing it. (Also, they are only zero-sum if the number of candidates who will pass is fixed in advance.)
Well, I think a lot of people play poker who shouldn’t...
As for not being zero-sum—that may be true, and I assume your argument is that the additional return justifies the use of the HTC. But if you’re not using the speedup aspect but the pocket-universe/precommitment aspect, why not just run cheaper facilities in realtime which approximate prisons for students? They need a week’s practice, they enter the prison a week before the audition...
This has the tremendous advantage that we could do it already, right now, in the real world. Yet I’ve never heard of such a thing.
I think you may have misunderstood my idea. I was thinking that candidates would choose to go into the HTC, not that they would be required to. (And when I said “things you really care about” by “you” I meant candidates, not examiners.) And I wasn’t assuming it would only work if you cannot leave the room before the 24 clock hours/12 subjective months—indeed, it would work better if you could decide to stay as little or as long as you want.
I suspect that if I am sufficiently unmotivated to study/rehearse something that I haven’t used the available time to do so, I will probably end up not using the time in the chamber to study/rehearse terribly efficiently, either.
But if you have a year to do it, you don’t have to do it efficiently.
If you are going to blow an entire year of your all-too-limited life in the HTC, you might as well not go in at all and enjoy a higher quality of life while doing said inefficient studying.
If humans don’t age, it would be a great way to get some research done.
Downvoted; too reliant on personal anecdote (if we can call such speculation that).
It would be great if you could articulate your dislike of the essay a little more.