Suffice it to say that I think the above is a positive move ^.^
“I hope you others feel that the character was primarily a victim way back when, instead of a dirtbag.”
Of course not. The victim was the girl he murdered.
That’s the point of the chapter title—he had something to atone for. It’s what tvtropes.org calls a Heel Face Turn.
A Type II supernova emits most of its energy in the form of neutrinos; these interact with the extremely dense inner layers that didn’t quite manage to accrete onto the neutron star, depositing energy that creates a shockwave that blows off the rest of the material. I’ve seen it claimed that the neutrino flux would be lethal out to a few AU, though I suspect you wouldn’t get the chance to actually die of radiation poisoning.
A planet the size and distance of Earth would intercept enough photons and plasma to exceed its gravitational binding energy, though I’m skeptical about whether it would actually vaporize; my guess, for what it’s worth, is that most of the energy would be radiated away again. Wouldn’t make any difference to anyone on the planet at the time, of course.
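For anyone who wants to check that, here’s a rough back-of-envelope in Python; the supernova output figure and the binding-energy estimate are round textbook numbers I’m assuming, not anything from the original discussion:

```python
import math

# Assumed round figures: a core-collapse supernova releases ~1e46 J,
# of which roughly 1% (~1e44 J) comes out as photons plus ejecta;
# the rest is neutrinos.
E_blast = 1e44      # J, photons + kinetic energy of ejecta

R_earth = 6.371e6   # m
d = 1.496e11        # m, 1 AU

# Fraction of the blast intercepted by an Earth-sized disc at 1 AU
fraction = math.pi * R_earth**2 / (4 * math.pi * d**2)
E_intercepted = E_blast * fraction

# Gravitational binding energy of a uniform sphere: 3GM^2 / 5R
G, M = 6.674e-11, 5.972e24
E_binding = 3 * G * M**2 / (5 * R_earth)

print(f"fraction intercepted: {fraction:.1e}")         # ~4.5e-10
print(f"intercepted energy:   {E_intercepted:.1e} J")  # ~4.5e34 J
print(f"binding energy:       {E_binding:.1e} J")      # ~2.2e32 J
print(f"ratio: {E_intercepted / E_binding:.0f}x")      # ~200x
```

Under these assumptions the intercepted energy beats the binding energy by a couple of orders of magnitude, consistent with the claim, and also with my guess that much of it gets re-radiated rather than neatly unbinding the planet.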
Well-chosen chapter title, and good wrapup!
“The point is that the Normal Ending is the most probable one.”
Historically, humans have not typically surrendered to genocidal conquerors without an attempt to fight back, even when resistance is hopeless, let alone when (as here) there is hope. No, I think this is the true ending.
Nitpick: eight hours to evacuate a planet? I think not, no matter how many ships you can call. Of course the point is to illustrate a “shut up and multiply” dilemma; I’m inclined to think both horns of the dilemma are sharper if you change it to eight days.
But overall a good ending to a good story, and a rare case where a plot is wrapped up by the characters showing the spark of intelligence. Nicely done!
You guys are very trusting of super-advanced species who already showed a strong willingness to manipulate humanity with superstimulus and pornographic advertising.
I’m not planning to trust anyone. My suggestion was based on the assumption that it is possible to watch what the Superhappies actually do and detonate the star if they start heading for the wrong portal. If that is not the case (which depends on the mechanics of the Alderson drive), then either detonate the local star immediately, or the star one hop back.
Hmm. The three networks are otherwise disconnected from each other? And the Babyeaters are the first target?
Wait a week for a Superhappy fleet to make the jump into Babyeater space, then set off the bomb.
(Otherwise, yes, I would set off the bomb immediately.)
“Either way though, there would seem to be a prisoner’s dilemma of sorts with regards to that. I’m not sure about this, but let’s say we could do unto the Babyeaters without them being able to do unto us, with regards to altering them (even against their will) for the sake of our values. Wouldn’t that sort of be a form of Prisoner’s Dilemma with regards to, say, other species with different values than us and more powerful than us that could do the same to us? Wouldn’t the same metarationality results hold? I’m not entirely sure about this, but...”
I’m inclined to think so, which is one reason I wasn’t in favor of going to war on the Babyeaters. What if the next species that doesn’t share our values is stronger than us? How would I have them deal with us? What sort of universe do we want to live in?
(Another reason being that I’m highly skeptical of victory in anything other than a bloody war of total extermination. Consider analogous situations in real life where atrocities are being committed in other countries, e.g. female circumcision in Africa; we typically don’t go to war over them, and for good reason.)
Good story! It’s not often you see aliens who aren’t just humans in silly makeup. I particularly liked the exchange between the Confessor and the Kiritsugu.
Specifically, the point of utility theory is the attempt to predict the actions of complex agents by dividing them into two layers:
1. A simple list of values
2. Complex machinery for attaining those values
The idea being that if you can’t know the details of the machinery, successful prediction might be possible by plugging the values into your own equivalent machinery.
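To make that concrete, here’s a toy sketch in Python. It’s entirely my own illustration with made-up numbers: the predictor knows only the agent’s value list and substitutes its own crude machinery, a one-step lookahead over a known set of outcomes.

```python
# Layer 1: a simple list of values, as (feature, weight) pairs.
values = {"material": 1.0, "king_safety": 3.0, "mobility": 0.5}

# A stand-in world model: each candidate action maps to the feature
# scores of the state it leads to (all numbers invented).
outcomes = {
    "trade_queens": {"material": 0.0,  "king_safety": 1.0,  "mobility": -0.2},
    "attack_king":  {"material": -1.0, "king_safety": -0.5, "mobility": 1.0},
    "quiet_move":   {"material": 0.0,  "king_safety": 0.2,  "mobility": 0.3},
}

def utility(features: dict) -> float:
    """Layer 2 stand-in: score a state by the agent's values using
    our own machinery. The real agent's machinery may be far deeper."""
    return sum(values[f] * x for f, x in features.items())

# Prediction: the agent takes whichever action its values rank highest.
predicted = max(outcomes, key=lambda a: utility(outcomes[a]))
print(predicted)  # -> "trade_queens" with these made-up numbers
```

On the chessboard this sort of substitution works tolerably well; the question below is what happens once the context is no longer simple and narrow.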
Does this work in real life? In practice it works well for simple agents, or complex agents in simple/narrow contexts. It works well for Deep Blue, or for Kasparov on the chessboard. It doesn’t work for Kasparov in life. If you try to predict Kasparov’s actions away from the chessboard using utility theory, it ends up as epicycles; every time you see him taking a new action you can write a corresponding clause in your model of his utility function, but the model has no particular predictive power.
In hindsight we shouldn’t really have expected otherwise; simple models in general have predictive power only in simple/narrow contexts.
“But if not—if this world indeed ranks lower in my preference ordering, just because I have better scenarios to compare it to—then what happens if I write the Successful Utopia story?”
Try it and see! It would be interesting and constructive, and if people still disagree with your assessment, well then there will be something meaningful to argue about.
An amusing if implausible story, Eliezer, but I have to ask, since you claimed to be writing some of these posts with the admirable goal of giving people hope in a transhumanist future:
Do you not understand that the message actually conveyed by these posts, if one were to take them seriously, is “transhumanism offers nothing of value; shun it and embrace ignorance and death, and hope that God exists, for He is our only hope”?
If existential angst comes from having at least one deep problem in your life that you aren’t thinking about explicitly, so that the pain which comes from it seems like a natural permanent feature—then the very first question I’d ask, to identify a possible source of that problem, would be, “Do you expect your life to improve in the near or mid-term future?”
Saved in quotes file.
The way stories work is not as simple as Orson Scott Card’s view. I can’t do justice to it in a blog comment, but read ‘The Seven Basic Plots’ by Christopher Booker for the first accurate, comprehensive theory of the subject.
“I’d like to see a study confirming that. The Internet is more addictive than television and I highly suspect it drains more life-force.”
If you think that, why haven’t you canceled your Internet access yet? :P I think anyone who finds it drains more than it gives back is using it wrong. (Admittedly spending eight hours a day playing World of Warcraft does count as using it wrong.)
“But the media relentlessly bombards you with stories about the interesting people who are much richer than you or much more attractive, as if they actually constituted a large fraction of the world.”
This seems to be at least part of the explanation why television is the most important lifestyle factor. Studies of factors influencing both happiness and evolutionary fitness have found television is the one thing that really stands out above the noise—the less of it you watch, the better off you are in every way.
The Internet is a much better way to interact with the world, both because it lets you choose a community of reasonable size to be involved with, and because it’s active rather than passive—you can do something to improve your status on a mailing list, whereas you can’t do anything to improve your status relative to Angelina Jolie (the learned helplessness effect again).
“The increase in accidents for 2002 sure looks like a blip to me”
Looks like a sustained, significant increase to me. Let’s add up the numbers. From the linked page, total fatalities 1997 to 2000 were 167,176. Total fatalities 2002 to 2005 were 172,168. The difference (by the end of 2005, already nearly three years ago) is about 5,000, more than the total deaths in the 9/11 attacks.
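The arithmetic, for anyone who wants to check it; the 9/11 figure below is the commonly cited toll, which I’m supplying here:

```python
fatalities_1997_2000 = 167_176   # road deaths in the four years before
fatalities_2002_2005 = 172_168   # road deaths in the four years after

excess = fatalities_2002_2005 - fatalities_1997_2000
print(excess)                    # 4992, i.e. about 5,000

sept11_toll = 2_977              # commonly cited 9/11 death toll (my figure)
print(excess > sept11_toll)      # True
```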
Eliezer,
I was thinking in terms of Dyson spheres—fusion reactor complete with fuel supply and confinement system already provided, just build collectors. But if you propose dismantling stars and building electromagnetically confined fusion reactors instead, it doesn’t matter; if you want stellar power output, you need square AUs of heat radiators, which will collectively be just as luminous in infrared as the original star was in visible.
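The “square AUs of heat radiators” claim is easy to sanity-check with the Stefan-Boltzmann law. A back-of-envelope in Python, with room-temperature radiators as my assumed operating point:

```python
L_sun = 3.828e26   # W, solar luminosity
sigma = 5.670e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
T = 300.0          # K, assumed radiator temperature (my choice)
AU = 1.496e11      # m

# Radiating area needed to dump one solar luminosity at temperature T
area = L_sun / (sigma * T**4)

print(f"{area:.1e} m^2")           # ~8.3e23 m^2
print(f"{area / AU**2:.0f} AU^2")  # ~37 square AU
```

Hotter radiators shrink the area as the fourth power of temperature, but the total radiated power is one solar luminosity regardless, which is the point about infrared luminosity.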
Eliezer,
It turns out that there are ways to smear a laser beam across the frequency spectrum while maintaining high intensity and collimation, though I am curious as to how you propose to “pull a Maxwell’s Demon” in the face of beam intensity such that all condensed matter instantly vaporizes. (No, mirrors don’t work. Neither do lenses.)
As for scattering your parts unpredictably so that most of the attack misses—then so does most of the sunlight you were supposedly using for your energy supply.
Finally, “trust but verify” is not a new idea; a healthy society can produce verifiable accounting of roughly what its resources are being used for. Though you casually pile implausibility on top of implausibility; now we are supposed to imagine that Hannibal Lecter created his fully populated torture chamber solar system all by himself, with no subcontractors or anything else that might leave a trace.
Carl,
If “singleton” is to be defined that broadly, then we are already in a singleton, and I don’t think anyone will object to keeping that feature of today’s world.
Note that altruistic punishment of the type I describe may actually be beneficial, when done as part of a social consensus (the punishers get to seize at least some of the miscreant’s resources).
Also note that there may be no such thing as evolved hardscrabble replicators; the number of generations to full colonization of our future light cone may be too small for much evolution to take place. (The log to base 2 of the number of stars in our Hubble volume is quite small, after all.)
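To put a number on that parenthetical (the star count is my rough figure; estimates for the observable universe run somewhere around 10^22 to 10^24):

```python
import math

stars = 1e23             # rough star count for our Hubble volume
print(math.log2(stars))  # ~76: at most ~76 doubling generations
```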
I have tended to focus on meta-level issues in this sort of context, because I know from experience how untrustworthy our object-level thoughts are.
For example, there’s a really obvious non-singleton solution to the “serial killer somehow creates his own fully populated solar system torture chamber” problem: a hundred concerned neighbors point Nicoll-Dyson lasers at him and make him an offer he can’t refuse. It’s a simple enough solution for a reasonably bright five-year-old to figure out in 10 seconds; the fact that I didn’t figure it out for months makes it clear exactly how much to trust my thinking here.
The reason for this untrustworthiness is itself not too hard to figure out: our Cro-Magnon brains are hardwired to think about interpersonal interactions in ways that were appropriate for our ancestral environment at the cost of performing worse than random chance in sufficiently different environments.
But fear is not harmless. Where was the largest group of Americans killed by the 9/11 attacks? In the Twin Towers? No: on the roads, in the excess road accident toll caused by people driving for fear of airline terrorism.
If the smartest thinkers in the world can’t get together without descending into a spiral of paranoid fantasy, is there hope for the future of intelligent life in the universe? If we can avoid that descent, then it is time to begin doing so.
Well, I like the 2006 version better. For all that it’s more polemical in style—and if I recall correctly, I was one of the people against whom the polemic was directed—it’s got more punch. After all, this is the kind of topic where there’s no point in even pretending to be emotionless. The 2006 version alloys logic and emotion more seamlessly.