Singletons Rule OK
Reply to: Total Tech Wars
How does one end up with a persistent disagreement between two rationalist-wannabes who are both aware of Aumann’s Agreement Theorem and its implications?
Such a case is likely to turn around two axes: object-level incredulity (“no matter what AAT says, proposition X can’t really be true”) and meta-level distrust (“they’re trying to be rational despite their emotional commitment, but are they really capable of that?”).
So far, Robin and I have focused on the object level in trying to hash out our disagreement. Technically, I can’t speak for Robin; but at least in my own case, I’ve acted thus because I anticipate that a meta-level argument about trustworthiness wouldn’t lead anywhere interesting. Behind the scenes, I’m doing what I can to make sure my brain is actually capable of updating, and presumably Robin is doing the same.
(The linchpin of my own current effort in this area is to tell myself that I ought to be learning something while having this conversation, and that I shouldn’t miss any scrap of original thought in it—the Incremental Update technique. Because I can genuinely believe that a conversation like this should produce new thoughts, I can turn that feeling into genuine attentiveness.)
Yesterday, Robin inveighed hard against what he called “total tech wars”, and what I call “winner-take-all” scenarios:
Robin: “If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury.”
Robin and I both have emotional commitments and we both acknowledge the danger of that. There’s nothing irrational about feeling, per se; only failure to update is blameworthy. But Robin seems to be very strongly against winner-take-all technological scenarios, and I don’t understand why.
Among other things, I would like to ask if Robin has a Line of Retreat set up here—if, regardless of how he estimates the probabilities, he can visualize what he would do if a winner-take-all scenario were true.
Yesterday Robin wrote:
“Eliezer, if everything is at stake then ‘winner take all’ is ‘total war’; it doesn’t really matter if they shoot you or just starve you to death.”
We both have our emotional commitments, but I don’t quite understand this reaction.
First, to me it’s obvious that a “winner-take-all” technology should be defined as one in which, ceteris paribus, a local entity tends to end up with the option of becoming one kind of Bostromian singleton—the decisionmaker of a global order in which there is a single decision-making entity at the highest level. (A superintelligence with unshared nanotech would count as a singleton; a federated world government with its own military would be a different kind of singleton; or you can imagine something like a galactic operating system with a root account controllable by 80% majority vote of the populace, etcetera.)
The winner-take-all option is created by properties of the technology landscape; that is a descriptive claim, not a moral stance. Nothing is said about an agent with that option actually becoming a singleton. Nor about using that power to shoot people, or reuse their atoms for something else, or grab all resources and let them starve (though “all resources” should include their atoms anyway).
Nothing is yet said about various patches that could try to avert a technological scenario that contains upward cliffs of progress—e.g. binding agreements enforced by source code examination or continuous monitoring, in advance of the event. (Or, if you think that rational agents cooperate on the Prisoner’s Dilemma, perhaps not much work is required to coordinate.)
Superintelligent agents not in a humanish moral reference frame—AIs that are just maximizing paperclips or sorting pebbles—who happen on the option of becoming a Bostromian singleton, and who have not previously executed any somehow-binding treaty, will ceteris paribus choose to grab all resources in service of their utility function, including the atoms now composing humanity. I don’t see how you could reasonably deny this! It’s a straightforward decision-theoretic choice between payoff 10 and payoff 1000!
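To make that comparison concrete, here is a minimal sketch; the payoffs 10 and 1000 are just the illustrative numbers from the paragraph above, not outputs of any actual model of such an agent.

```python
# Toy decision-theoretic comparison for an unaligned maximizer that gains the
# singleton option.  The payoff numbers are purely illustrative placeholders.
payoffs = {
    "leave_humanity_its_atoms": 10,    # fewer paperclips / sorted pebbles
    "grab_all_resources": 1000,        # ceteris paribus, far more of its goal
}

choice = max(payoffs, key=payoffs.get)
print(choice)  # -> grab_all_resources
```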
But conversely, there are possible agents in mind design space who, given the option of becoming a singleton, will not kill you, starve you, reprogram you, tell you how to live your life, or even meddle in your destiny unseen. See Bostrom’s (short) paper on the possibility of good and bad singletons of various types.
If Robin thinks it’s impossible to have a Friendly AI or maybe even any sort of benevolent superintelligence at all, even the descendants of human uploads—if Robin is assuming that superintelligent agents will act according to roughly selfish motives, and that only economies of trade are necessary and sufficient to prevent holocaust—then Robin may have no Line of Retreat open, as I try to argue that AI has an upward cliff built in.
And in this case, it might be time well spent, to first address the question of whether Friendly AI is a reasonable thing to try to accomplish, so as to create that line of retreat. Robin and I are both trying hard to be rational despite emotional commitments; but there’s no particular reason to needlessly place oneself in the position of trying to persuade, or trying to accept, that everything of value in the universe is certainly doomed.
For me, it’s particularly hard to understand Robin’s position in this, because for me the non-singleton future is the one that is obviously abhorrent.
If you have lots of entities with root permissions on matter, any of whom has the physical capability to attack any other, then you have entities spending huge amounts of precious negentropy on defense and deterrence. If there’s no centralized system of property rights in place for selling off the universe to the highest bidder, then you have a race to burn the cosmic commons, and the degeneration of the vast majority of all agents into rapacious hardscrapple frontier replicators.
To me this is a vision of futility—one in which a future light cone that could have been full of happy, safe agents having complex fun, is mostly wasted by agents trying to seize resources and defend them so they can send out seeds to seize more resources.
And it should also be mentioned that any future in which slavery or child abuse is successfully prohibited, is a world that has some way of preventing agents from doing certain things with their computing power. There are vastly worse possibilities than slavery or child abuse opened up by future technologies, which I flinch from referring to even as much as I did in the previous sentence. There are things I don’t want to happen to anyone—including a population of a septillion captive minds running on a star-powered Matrioshka Brain that is owned, and defended against all rescuers, by the mind-descendant of Lawrence Bittaker (serial killer, aka “Pliers”). I want to win against the horrors that exist in this world and the horrors that could exist in tomorrow’s world—to have them never happen ever again, or, for the really awful stuff, never happen in the first place. And that victory requires the Future to have certain global properties.
But there are other ways to get singletons besides falling up a technological cliff. So that would be my Line of Retreat: If minds can’t self-improve quickly enough to take over, then try for the path of uploads setting up a centralized Constitutional operating system with a root account controlled by majority vote, or something like that, to prevent their descendants from having to burn the cosmic commons.
So for me, any satisfactory outcome seems to necessarily involve, if not a singleton, the existence of certain stable global properties upon the future—sufficient to prevent burning the cosmic commons, prevent life’s degeneration into rapacious hardscrapple frontier replication, and prevent supersadists torturing septillions of helpless dolls in private, obscure star systems.
Robin has written about burning the cosmic commons and rapacious hardscrapple frontier existences. This doesn’t imply that Robin approves of these outcomes. But Robin’s strong rejection even of winner-take-all language and concepts, seems to suggest that our emotional commitments are something like 180 degrees opposed. Robin seems to feel the same way about singletons as I feel about ¬singletons.
But why? I don’t think our real values are that strongly opposed—though we may have verbally-described and attention-prioritized those values in different ways.
You and Robin seem to be focused on different time periods. Robin is claiming that after ems are created one group probably won’t get a dominant position. You are saying that post-singularity (or at least post one day before the singularity) there will be either one dominant group or a high likelihood of total war. You are not in conflict if there is a large time gap between when we first have ems and when there is a singularity.
I wrote in this post that such a gap is likely: http://www.overcomingbias.com/2008/11/billion-dollar.html
“One day before the singularity”?!? Implied unbelievably rapid takeoff scenario alert!
I’m not trying to speak for Robin; the following are my views. One of my deepest fears—perhaps my only phobia—is fear of government. And any government with absolute power terrifies me absolutely. However the singleton is controlled, it’s an absolute power. If there’s a single entity in charge, it is subject to Lord Acton’s dictum. If control is vested in a group, then struggles for control of that become paramount. Even the suggestion that it might be controlled democratically doesn’t help me to rest easy. Democracies can be rushed off a cliff, too. And someone has to set up the initial constitution; why would we trust them to be as good as George Washington and turn down the opportunity to be king?
I also understand your admonition to prepare a line of retreat. But I don’t see a path to learn to stop worrying and love the Singleton. If anyone has suggestions, I’ll listen to them.
In the meantime, I prefer outcomes with contending powers and lots of incidental casualties over any case I can think of with a singleton in charge of the root account and sufficient security to keep out the hackers. At least in the former case there’s a chance that there will be periods with polycentric control. In the latter case, eventually there will be a tyrant who manages to wrest control, and with complete control over physical space, AGI, and presumably nanotech, there’s little hope for a future revival of freedom.
Chris, so that one’s pretty easy. Acton is for humans. It really is that simple of a problem if you’re going to configure a Friendly AI. The part of it where you trust the humans to set it up is not simple. The part where you work out how to create the Friendly AI to be configured is not simple. But the part where the AI doesn’t start exhibiting the specific human tendencies of corruptibility is as easy as not growing pizza on a palm tree.
I think Chris is righter than you. I agree that Acton’s quote is not necessarily appropriate for AI; it is not always appropriate for humans either, but it is still a risk. As far as your 80% majority or whatever, you might try reading Hoppe’s “Democracy: The God that Failed”. I’m not finished with it; it’s rather repetitive, which makes it a slowish read, but he basically argues that Churchill was wrong—democracy is the worst possible system (at least that’s my take on his arguments). (Also, as a note to Robin and you—the word is hard-scrabble, not scrapple.)
At least this keeps the defenses of the agents well primed. If agents are in control of their own evolution, there’s a danger that they will get lazy and put their feet up—and then when they finally run into aliens or face a rebellion, the lazy civilization will simply be munched through as though it were made of marshmallow—destroying their seed forever.
Three points that seem salient to this disagreement:
Robin seems to object to the idea of happy slaves who prefer to serve, calling them ‘zombies.’ From a preference utilitarian point of view, it’s not clear why the preferences of obsessive servants take priority over obsessive replicators, but I do get the sense that Robin favors the latter, ceteris paribus.
Robin seems to take a more total utilitarian view, compared to Eliezer’s average utilitarianism. If you think that average population over time will be higher in the competitive scenario, even after burning the cosmic commons and the like, while singletons enable population restrictions (and are likely to enact them in a way that reduces preference satisfaction, i.e. doesn’t simply use resources to let fewer beings live longer and think faster), then a total utilitarian will tend to prefer the competitive outcome.
Robin seems more committed to a specific current ethical view (preference utilitarianism), whereas Eliezer seems to expect with higher probability that he would change his moral views if he knew more, thought about it longer, etc. Uncertainty about morality supports a singleton: for many moralities, good outcomes will depend on a singleton, and a singleton can dissolve itself, while a colonization wave of replicators cannot be recalled.
I have tended to focus on meta level issues in this sort of context, because I know from experience how untrustworthy our object level thoughts are.
For example, there’s a really obvious non-singleton solution to the “serial killer somehow creates his own fully populated solar system torture chamber” problem: a hundred concerned neighbors point Nicoll-Dyson lasers at him and make him an offer he can’t refuse. It’s a simple enough solution for a reasonably bright five-year-old to figure out in 10 seconds; the fact that I didn’t figure it out for months makes it clear exactly how much to trust my thinking here.
The reason for this untrustworthiness is itself not too hard to figure out: our Cro-Magnon brains are hardwired to think about interpersonal interactions in ways that were appropriate for our ancestral environment at the cost of performing worse than random chance in sufficiently different environments.
But fear is not harmless. Where was the largest group of Americans killed by the 9/11 attacks? In the Twin Towers? No: on the roads, in the excess road accident toll caused by people driving for fear of airline terrorism.
If the smartest thinkers in the world can’t get together without descending into a spiral of paranoid fantasy, is there hope for the future of intelligent life in the universe? If we can avoid that descent, then it is time to begin doing so.
Russell,
A broadly shared moral code with provisions for its propagation and defense is one of Bostrom’s examples of a singleton. If altruistic punishment of the type you describe is costly, then evolved hardscrapple replicators won’t reduce their expected reproductive fitness by punishing those who abuse the helpless. We can empathize with and help the helpless for the same reason that we can take contraceptives: evolution hasn’t yet been able to stop us without outweighing disadvantages.
Carl,
If “singleton” is to be defined that broadly, then we are already in a singleton, and I don’t think anyone will object to keeping that feature of today’s world.
Note that altruistic punishment of the type I describe may actually be beneficial, when done as part of a social consensus (the punishers get to seize at least some of the miscreant’s resources).
Also note that there may be no such thing as evolved hardscrabble replicators; the number of generations to full colonization of our future light cone may be too small for much evolution to take place. (The log to base 2 of the number of stars in our Hubble volume is quite small, after all.)
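For a sense of scale, here is a rough sketch; the 10^22 to 10^24 star count is an assumed order-of-magnitude figure, not taken from the comment above.

```python
import math

# If colonization proceeds by repeated doubling of colonized systems, the number
# of doubling generations needed to reach every star is only log2(N).
for stars in (1e22, 1e23, 1e24):   # assumed range for stars in the Hubble volume
    print(f"{stars:.0e} stars -> about {math.log2(stars):.0f} doublings")
# Roughly 73-80 generations: not many rounds for selection among replicators.
```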
Recently, Steve Omohundro and I were having a conversation about superintelligent attack and defense.
Omohundro was talking about using hard-to-invert problems (in the P vs. NP sense) to ensure that each of your parts knows where your other parts are, while an attacker doesn’t know where to aim without expending lots of energy to track you. (It goes without saying that an attacker has to attack in an unpredictable way; otherwise you can pull a Maxwell’s Demon / Szilard Engine on the incoming attack to extract energy from it.)
And I was replying that if each local piece of yourself had to know where its local neighbors were, that implied a degree of regularity that wasn’t usual in strong cryptography; and also that this kind of separation between pieces of yourself would make it difficult to do large quantum computations.
Some innocent unsophisticated bystander said, “I don’t understand—why not just shoot lasers?”
“Lasers?” I said. “You’d use anything except lasers.”
“Yes,” Steve Omohundro said, “it’s coherent light, each photon the same as every other photon—if you see one photon, you know what all of them look like—so basically, it’s just a gift of free energy.”
More to the point, how do you know what Hannibal Lecter is doing in there? Tortured souls are not detectable from surface albedo.
Sorry, you’re going to have to think about it for a few more months.
You’re relying too much on time-reversibility here when you pooh-pooh lasers. The second law of thermodynamics strongly favors the aggressor.
Eliezer,
It turns out that there are ways to smear a laser beam across the frequency spectrum while maintaining high intensity and collimation, though I am curious as to how you propose to “pull a Maxwell’s Demon” in the face of beam intensity such that all condensed matter instantly vaporizes. (No, mirrors don’t work. Neither do lenses.)
As for scattering your parts unpredictably so that most of the attack misses—then so does most of the sunlight you were supposedly using for your energy supply.
Finally, “trust but verify” is not a new idea; a healthy society can produce verifiable accounting of roughly what its resources are being used for. Though you casually pile implausibility on top of implausibility; now we are supposed to imagine that Hannibal Lecter created his fully populated torture chamber solar system all by himself, with no subcontractors or anything else that might leave a trace.
EY “If you have lots of entities with root permissions on matter, any of whom has the physical capability to attack any other, then you have entities spending huge amounts of precious negentropy on defense and deterrence. If there’s no centralized system of property rights in place for selling off the universe to the highest bidder, then you have a race to burn the cosmic commons, and the degeneration of the vast majority of all agents into rapacious hardscrapple frontier replicators.”
Yes, I agree with you exactly on this point.
I think that maybe the disagreement between you and Robin is based upon the fact that Robin lives intellectually in the world of economics.
Economics is, if you think about it, a subject in an odd position. The assumptions of classical economics say that people should be perfect selfish bastards with certain utility functions. In reality, we are not, but the theory works to a useful extent, and we use it to run the world.
Perhaps economists have made an emotional association between real “nice” human behavior in competitive environments and the theory of rational utility maximizers which they mistakenly use to describe that behavior.
When you start screaming about how competition between rational bastards will lead to futility, an economist emotionally associates this scenario with competition between quasi-rational human businessmen (or states) whose appetite for utter world domination at absolutely any cost is almost always tempered by their having a reasonably normal set of human emotions, families, social connections and reputations, etc. (or, in the case of states, it is tempered by irrational leaders and irrational, changing public opinion). The economist then uses whatever argument he can find to backward-rationalize the emotionally motivated conclusion that competition leads to good outcomes.
Robin seems to have chosen a particularly bizarre way of rationalizing the goodness of competition: redefining ethics (or even denying the usefulness of ethics at all) so that outcomes where almost everyone is a slave are actually OK. Less intelligent fans of the market take the more obviously false route of claiming that competition between self-modifying agents will lead to “traditionally good” outcomes.
We need to somehow unlearn the association between (free market/independent state) competition and the effects that that system has when the agents concerned happen to be human. If we cannot do this, we are doomed to find out the hard way.
Many commenters in this post have been making a similar mistake by thinking that a friendly singleton would be corrupted by power. People are having trouble stepping out of the human frame of reference.
I was speaking metaphorically when I talked about Matrioshka brains, which I suppose I shouldn’t have done. In real life, of course, you don’t let stars just go on shining—it’s ridiculous to have that much waste heat and no computation.
Did you read the paper?: “A hardscrapple life is one that is tough and absent of luxuries. It refers to the dish scrapple, made from whatever’s left of the pig after the ham and sausage are made, the feet pickled, and the snouts soused.”
I suspect the paper may have the etymology incorrect, on the grounds that the word “scrapple” dates back to 1855 [link], whereas the first recorded usage of “hardscrabble” dates back to 1804 [link].
Eliezer,
I was thinking in terms of Dyson spheres—fusion reactor complete with fuel supply and confinement system already provided, just build collectors. But if you propose dismantling stars and building electromagnetically confined fusion reactors instead, it doesn’t matter; if you want stellar power output, you need square AUs of heat radiators, which will collectively be just as luminous in infrared as the original star was in visible.
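A back-of-the-envelope check of the “square AUs of heat radiators” claim; a sketch assuming roughly solar luminosity and room-temperature radiators, with both figures chosen for illustration rather than taken from the comment.

```python
# Stefan-Boltzmann estimate of the radiator area needed to dump a star's output.
SIGMA = 5.67e-8      # W / (m^2 K^4), Stefan-Boltzmann constant
L_STAR = 3.8e26      # W, roughly solar luminosity (assumed)
T_RAD = 300.0        # K, assumed radiator temperature
AU = 1.496e11        # m

area_m2 = L_STAR / (SIGMA * T_RAD**4)
print(f"about {area_m2 / AU**2:.0f} square AU of radiator")   # ~37 AU^2
# Whatever the radiator temperature, total emitted power equals the star's,
# so the waste heat is collectively as luminous (in infrared) as the star was.
```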
Re: But why?: Capitalist economists seem to like the idea of competition. It is the primary object of their study—if there were no competition they would have to do some serious retraining.
Robin Hanson seems keener than most. If there’s a problem, he will often propose a solution involving getting agents to compete over resources tied into alternative proposals.
“Capitalist economists seem to like the idea of competition. It is the primary object of their study—if there were no competition they would have to do some serious retraining.”
Ditto.
Yes I read the paper, I almost commented then, but wasn’t going to waste time on a minor nitpick like that. In fact I like scrapple and have made it from scratch myself, but it has nothing to do with the term “hard-scrabble” which at least is in the American Heritage Dictionary, which “hard-scrapple” is not.
Eliezer, sometimes in a conversation one needs a rapid back and forth, often to clarify what exactly people mean by things they say. In such a situation a format like the one we are using, long daily blog posts, can work particularly badly. In my last post I was trying in part to get you to become clearer about what you meant by what you now call a “winner take all” tech, especially to place it on a continuum with other familiar techs. (And once we are clear on what it means, then I want arguments suggesting that an AI transition would be such a thing.) I suggested talking about outcome variance induced by a transition. If you now want to use that phrase to denote “a local entity tends to end up with the option of becoming one kind of Bostromian singleton”, then we need new terms to refer to the “properties of the technology landscape,” that might lead to such an option.
I am certainly not assuming it is impossible to be “friendly” though I can’t be sure without knowing better what that means. I agree that it is not obvious that we would not want a singleton, if we could choose the sort we wanted. But I am, as you note, quite wary of the sort of total war that might be required to create a singleton. But before we can choose among options we need to get clearer on what the options are.
Carl and Roko, I really wasn’t trying to lay out a moral position, though I was expressing mild horror at encouraging total war, a horror I expected (incorrectly it seems) would be widely shared.
There are people who don’t want to have a total war because of what it would cost them, and then there are people who do want to have a total war so that they can go out and win it.
“Eliezer, sometimes in a conversation one needs a rapid back and forth, often to clarify what exactly people mean by things they say. In such a situation a format like the one we are using, long daily blog posts, can work particularly badly.” Why not have an online chat, and post the transcript?
“Carl and Roko, I really wasn’t trying to lay out a moral position,” I was noting apparent value differences between you and Eliezer that might be relevant to his pondering of ‘Lines of Retreat.’
“though I was expressing mild horror at encouraging total war, a horror I expected (incorrectly it seems) would be widely shared.” It is shared, but there are offsetting benefits of accurate discussion.
Where was the largest group of Americans killed by the 9/11 attacks? In the Twin Towers? No: on the roads, in the excess road accident toll caused by people driving for fear of airline terrorism. -- Russell Wallace
There’s a rationalist’s failure in this, something to do with finding a surprising piece of data more compelling than an unsurprising one. Here’s a source for accidents per year over the last 10 years. The increase in accidents for 2002 sure looks like a blip to me, especially when you consider how many more miles were driven. The driving-is-extremely-dangerous myth you’re perpetuating is based on an agglomeration of accident statistics—most accidents happen at night or in particular weather conditions. Driving at night or in particular weather conditions is dangerous. You can’t step from that into an argument that more people driving in replacement of flying means more accidents, unless you can justify the belief that when people drive instead of flying, they drive at night or in particular weather conditions.
There seem to be fundamental differences between what Eliezer and Robin accept as important to even think about, but I can’t quite bring whatever it is into mental focus, which is why I’m posting this; maybe this will trigger something someone else has noticed, the way Carl’s comment triggered this one.
Also, they seem to be arguing back and forth about two views of possible futures—but those views are nowhere near excluding other possible futures.
Oh, to answer Eliezer’s direct question directly, if I know that I am in a total war, I fight. I fight to make myself, or if that is impossible those who most share my values, win.
Jeff, driving as a replacement for flying means more long trips rather than more short ones. If the distribution of night and bad weather didn’t change (and I’m sure we didn’t get fewer hours of darkness), then more of those hours were almost certainly in darkness or bad weather than would have been the case otherwise.
“The increase in accidents for 2002 sure looks like a blip to me”
Looks like a sustained, significant increase to me. Let’s add up the numbers. From the linked page, total fatalities 1997 to 2000 were 167176. Total fatalities 2002 to 2005 were 172168. The difference (by the end of 2005, already nearly 3 years ago) is about 5000, more than the total deaths in the 9/11 attacks.
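Re-running that arithmetic from the quoted totals (no new data, just the subtraction):

```python
fatalities_1997_2000 = 167176
fatalities_2002_2005 = 172168
print(fatalities_2002_2005 - fatalities_1997_2000)   # 4992, roughly 5000
```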
Russell, population increased, and demographics with higher rates of accidents expanded their population share during that period.
This source looks more authoritative to me. Moreover, it contains figures relevant to what I think is the key figure: miles per person. That generally trends up, from 9k in 1994 to a peak of 10k in 2005. I don’t see any abrupt change in the trend. I’m rather surprised.
http://www.youtube.com/watch?v=_AiHlbimLZI
Anna:)
billswift: I’ll accept that more driving means more hours driving in those dangerous situations. In that case, it seems that 9/11 actually increased the safety of driving, if measured per mile. At least, the number of accidents only went mildly up, but the number of miles went up dramatically.
Eliezer asks why one might be emotionally opposed to the idea of a singleton. One reason might be that Friendly AI is impossible. Life on the rapacious hardscrapple frontier may be bleak, but it sure beats being a paperclip.
Robin:
Yeah, unfortunately I’m sort of in the middle of resetting my sleep cycle at the moment so I’m out of sync with you for purposes of conducting rapidfire comments. Should be fixed in a few days.
Suppose that a pack of socialists is having a discussion, and a libertarian (who happens to be a friend of theirs) wanders by. After listening for a few moments, the libertarian says, “I’m shocked! You want babies to starve! Doesn’t even discussing that make it more socially acceptable, and thereby increase the probability of it happening?”
“Eh?” say the socialists. “No one here said a single word about starving babies except you.”
Now I’ve set up this example so that the libertarian occupies the sympathetic position. Nonetheless, it seems to me that if these parties wish to conduct a Disagreement, then they should have some sort of “Aha!” moment at this point where they realize that they’re working from rather different assumptions, and that it is important to bring out and explicate those assumptions.
Robin, I did not say anything about total war, you did. I think you should realize at this point that on my worldview I am not doing anything to encourage war, or setting humanity on a course for total war. You can say that I am wrong, of course; the libertarian can try to explain to the socialists why capitalism is the basic force that feeds babies. But the libertarian should also have the empathy to realize that the socialists do not believe this particular fact at the start of the conversation.
There are clear differences of worldview clashing here, which have nothing to do with the speed of an AI takeoff per se, but rather have something to do with what kind of technological progress parameters imply what sort of consequences. I was talking about large localized jumps in capability; you made a leap to total war. I can guess at some of your beliefs behind this, but it would only be a guess.
The libertarian and the socialists are unlikely to have much luck conducting their Disagreement about whether babies should starve, but they might find it fruitful to try and figure out why they think that babies will starve or not starve under different circumstances. I think they would be unwise to ignore this minor-seeming point and pass directly to the main part of their discussion about whether an economy needs regulation.
That’s not much of a Line of Retreat. It would be like my saying, “Well, if a hard takeoff is impossible, I guess I’ll try to make sure we have as much fun as we can in our short lives.” If I actually believed an AI hard takeoff were impossible, I wouldn’t pass directly to the worst-case scenario and give up on all other hopes. I would pursue the path of human intelligence enhancement, or uploading, or non-takeoff AI, and promote cryonics more heavily.
If you actually came to believe in large localized capability jumps, I do not think you would say, “Oh, well, guess I’m inevitably in a total war, now I need to fight a zero-sum game and damage all who are not my allies as much as possible.” I think you would say, “Okay, so, how do we avoid a total war in this kind of situation?” If you can work out in advance what you would do then, that’s your line of retreat.
I’m sorry for this metaphor, but it just seems like a very useful and standard one if one can strip away the connotations: suppose I asked a theist to set up a Line of Retreat if there is no God, and they replied, “Then I’ll just go through my existence trying to ignore the gaping existential void in my heart”. That’s not a line of retreat—that’s a reinvocation of the same forces holding the original belief in place. I have the same problem with my asking “Can you set up a line of retreat for yourself if there is a large localized capability jump?” and your replying “Then I guess I would do my best to win the total war.”
If you can make the implication explicit, and really look for loopholes, and fail to find them, then there is no line of retreat; but to me, at least, it looks like a line of retreat really should exist here.
PS: As the above was a long comment and Robin’s time is limited: if he does not reply to every line, no one should take that as evidence that no good reply exists. We also don’t want to create a motive for people to try to win conversations by exhaustion.
Still, I’d like to hear a better line of retreat, even if it’s one line like, I don’t know, “Then I’d advocate regulations to slow down AI in favor of human enhancement” or something. Not that I’m saying this is a good idea, just something, anything, to break the link between AI hard takeoff and total moral catastrophe.
Eliezer, I’m very sorry if my language offends. If you tell the world you are building an AI and plan that post-foom it will take over the world, well then that sounds to me like a declaration of total war on the rest of the world. Now you might reasonably seek as large a coalition as possible to join you in your effort, and you might plan for the AI to not prefer you or your coalition in the acts it chooses. And you might reasonably see your hand as forced because other AI projects exist that would take over the world if you do not. But still, that take over the world step sure sounds like total war to me.
Oh, and on your “line of retreat”, I might well join your coalition, given these assumptions. I tried to be clear about that in my Stuck In Throat post as well.
If you’re fighting a total war, then at some point, somewhere along the line, you should at least stab someone in the throat. If you don’t do even that much, it’s very hard for me to see it as a total war.
You described a total war as follows: “If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury.”
How is writing my computer program declaring “total war” on the world? Do I believe that “the world” is totally committed to total victory over me? Do I believe that surrender to “the world” is unacceptable—well, yes, I do. Do I believe that all interactions with “the world” are zero-sum? Hell no. Do I believe that I should never cooperate with “the world”? I do that every time I shop at a supermarket. Not tolerate internal dissent or luxury—both internal dissent and luxury sound good to me, I’ll take both. All resources must be devoted to growing more resources and to fighting “the world” in every possible way? Mm… nah.
So you thus described a total war, and inveighed against it;
But then you applied the same term to the Friendly AI project, which has yet to stab a single person in the throat; and this, sir, I do not think is a fair description.
It is not a matter of indelicate language to be dealt with by substituting an appropriate euphemism. If I am to treat your words as consistently defined, then they are not, in this case, true.
Eliezer, I’m not very interested in arguing about which English words best describe the situation under consideration, at least if we are still unclear on the situation itself. Such words are just never that precise. Would you call a human stepping on an ant “total war”, even if he wasn’t trying very hard? From an aware ant’s point of view it might seem total war, but perhaps you wouldn’t say so if the human wasn’t trying hard. But the key point is that the human could be in for a world of hurt if he displayed an intention to squash the ant and greatly underestimated the ant’s ability to respond. So in a world where new AIs cannot in fact easily take over the world, AI projects that say they plan to have their AI take over the world could induce serious and harmful conflict.
The correct response to a guy scheming to take over the world someday in the future, in a pleasant and friendly way—time permitting, between grocery shopping and cocktail parties—is a bemused smile.
Robin: in this case “surrender” is acceptable—if the AI doesn’t distinguish between people in its creator’s coalition and other people, there is no difference between a coalition consisting of one person and the whole world being in the coalition, no point in joining that coalition, and no way to interpret not being in the coalition as war.
So, is this better or worse than the eternal struggle you propose? Superintelligent agents nuking it out on the planet in a struggle for the future may not be fun—and yet your proposals seem to promote and prolong that stage, rather than getting it over with as quickly as possible. It seems as though your proposal comes off looking much worse in some respects—e.g. if you compare the total number of casualties. Are you sure that is any better? If so, what makes you think that?
That is what we have today. Perhaps strangely—if you walk through a tropical rainforest, you don’t see that much fighting—it all seems rather peaceful most of the time. Nature really likes cooperation.
The frontier folk would be very cool! They would have amazing technology—and would travel near the speed of light. That’s a funny sort of “degeneration”.
It might look peaceful on the Discovery Channel, but have you ever walked in a real tropical rainforest? Everything has spikes, everything is poisonous, almost anything that moves will fight or flee at first sight—everything but the very largest and most dangerous things. It’s not the organized warfare of humans, but rather a cosmic, bloody, free-for-all, with survival and replication the only criterion for success. Evolution is not only a Blind Idiot God, but also a very harsh and unsympathetic mistress.
Are you joking, or are you really that naive? Maybe you should stop talking about evolutionary biology until you’ve studied it a bit more. Do you have any idea how much of a rainforest is war? That when you look at a tree, the bark is armor against insects that want to eat the tree, the trunk is there to raise it up above other trees that are competing for the same sunlight? If there’s anything in a rainforest that isn’t at war, it would be a rock.
You apparently have a different idea about what the term “fighting” means from me in this context. Not all conflicts of interests qualify as “fights”—since the term “fight” implies physical violence.
Trees competing for light are not “fighting”. Two people picking apples from the same tree are not “fighting” either. Indeed, the idea that—in a purely cooperative world—trees would not have tall trunks is probably a fallacy. A deep canopy of foliage actually captures more energy from sunlight than a field of grass does. Rainforest has an albedo of about 10%, versus about 25% for grassland, because light reflected within the deep, dark canopy gets trapped and reabsorbed above the forest floor instead of escaping. A deep canopy is functional—a reflection of biological efficiency.