Even if you buy the dial theory, it still doesn’t make sense to shout Yay Progress on the topic of AGI. The Singularity is happening this decade, maybe next, whether we shout Yay or Boo. Shouting Boo just delays it a little and makes it more likely to be good instead of bad. (Currently it is quite likely to be bad.)
Consider also that not everyone would believe, upon having the Singularity explained to them, that it would be a good thing.
There comes a point where the One Dial theory and similar acrobatics are just ways to rationalize away the fact that you’re trying to push everyone in a direction that they would hate or consider too dangerous because you personally want to see what is at the end of the road. That’s just good old manipulation, but arguments like the dial thing allow you to feel better about it and think that you’re still pursuing the good option, by narrowing the options down to two so you can pit the one you like against a purposefully made horrible strawman.
I very much agree with you here and in your “AGI deployment as an act of aggression” post; the overwhelming majority of humans do not want AGI/ASI and its straightforward consequences (total human technological unemployment and the concomitant abyssal social/economic disempowerment), regardless of what paradisaical promises are made to them (promises for which there is no recourse if they are not kept: economically useless humans can’t go on strike, etc).
The value (this is synonymous with “scarcity”) of human intelligence and labor output has been a foundation of every human social and economic system, from hunter-gatherer groups to highly-advanced technological societies. It is the bedrock onto which humanity has built cooperation, benevolence, compassion, and care. The value of human intelligence and labor output gives humans agency, meaning, decision-making power, and bargaining power towards each other and over corporations / governments. Beneficence flows from this general assumption of human labor value/scarcity.
So far, technological development has left this bedrock intact, even if it’s been bumpy (I was gonna say “rocky” but that’s a mixed metaphor for sure) on the surface. The bedrock’s still been there after the smoke cleared, time and time again. Comparing opponents of AGI/ASI with Luddites or the Unabomber, accusing them of being technophobes, or insinuating that they would have wanted to stop the industrial revolution is wildly specious: unlike every other invention or technological development, successful AGI/ASI development will convert this bedrock into sand. So far, technological development has been wildly beneficial for humanity; technological development that has no need for humans is not likely to hold to that record. The OpenAI mission is literally to create “highly autonomous systems that outperform humans at most economically valuable work”, a flowery way to say “make human labor output worthless”. Fruitful cooperation between AGI/ASI and humans is unlikely to endure, since at some point the transaction costs (humans don’t have APIs, are slow, need to sleep/eat/rest, etc) outweigh whatever benefits cooperation brings.
There’s been a significant effort to avoid reckoning with or acknowledging these aspects of AGI/ASI (again, AGI/ASI is the explicit goal of AI labs like OpenAI, not autoregressive language models) and their likely (if not explicitly sought-out) consequences in the public-facing discourse between doomers and accelerationists. As much as it pains me to come to this conclusion, it really does feel like there’s a pervasive gentleman’s agreement to avoid saying “the goal is literally to make systems capable of bringing about total technological unemployment”. This is not aligned with the goals/desires/lives of the overwhelming majority of humanity, and the deception deployed to avoid widespread public realization of this sickens me.
I wrote a handful of comments on the EA forum about this as well.
I have read your comments on the EA forum and the points do resonate with me.
As a layman, I do have a personal distrust of what I’d call the anti-human ideologies driving the actors you refer to, and agree that a majority of people do as well. It is hard to feel much joy in being extinct and replaced by synthetic beings in a way most would probably characterize as dumb (clippy being the extreme).
I also believe that fundamentally changing human subjective experience (radical bioengineering, or uploading to an extent) in order to erase the ability to suffer in general (not just in medical cases like depression), as I have seen brought up in futurist circles, is also akin to death. I think it could possibly be a somewhat literal death, where my conscious experience actually stops if radical changes occur, but I am completely uneducated and unqualified on how consciousness works.
I think that a hypothetical me, even with my memories, who is physically unable to experience any negative emotions would be philosophically dead. It would be unable to learn or reflect, and its subjective experience would be so radically different from mine, and from any future biological me should I grow older naturally, that I do not think memories alone would be enough to preserve my identity. To my awareness, the majority of people think similarly: there is value ascribed to our human nature, including its limitations, and that has been reinforced by our media and culture. Whether this attachment is purely a product of coping, I do not know. What I do know is that it is the current reality for every functional human being now and has been for thousands of years. I believe people would prefer sticking with it to relinquishing it for vague promises of ascended consciousness. This is somewhat supported by my subjective observation that for a lot of people who want a posthuman existence and what it entails, their end goal often seems to come back to creating simulations they themselves can live in normally.
I’m curious though if you have any hopes for the situation regarding the nebulous motivations of some AGI researchers, especially as AI and its risks have recently started becoming “mainstream”. Do you expect to see changes and their views challenged? My question is loaded, but it seems you are already invested in its answer.
I’m curious though if you have any hopes for the situation regarding the nebulous motivations of some AGI researchers, especially as AI and its risks have recently started becoming “mainstream”. Do you expect to see changes and their views challenged? My question is loaded, but it seems you are already invested in its answer.
I think there’s a case to be made for AGI/ASI development and deployment as a “hostis humani generis” act; and others have made the case as well. I am confused (and let’s be honest, increasingly aghast) as to why AI doomers rarely try to press this angle in their debates/public-facing writings.
To me it feels like AI doomers have been asleep on sentry duty, and I’m not exactly sure why. My best guesses look somewhat like “some level of agreement with the possible benefits of AGI/ASI” or “a belief that AGI/ASI is overwhelmingly inevitable and so it’s better not to show any sign of adversariality towards those developing it, so as to best influence them to mind safety”, but this is quite speculative on my part. I think LW/EA stuff inculcates in many a grievous and pervasive fear of upsetting AGI accelerationists/researchers/labs (fear of retaliatory paperclipping? fear of losing mostly illusory leverage and influence? getting memed into the idea that AGI/ASI is inevitable and unstoppable?).
I feel like this foundational dissonance makes AI doomers come across as confused fawny wordcels or hectoring cultists whenever they face AGI accelerationists / AI risk deniers (who in contrast tend to come across as open/frank/honest/aligned/of action/assertive/doers/etc). This vibe is really not conducive to convincing people of the risks/consequences of AGI/ASI.
I do have hopes, but they feel kinda gated on “AI doomers” being many orders of magnitude more honest, unflinchingly open, and unflatteringly frank about the ideologies that motivate AGI/ASI researchers and about the intended/likely consequences of their success (total technological unemployment and consequent social/economic human disempowerment, even if “alignment/control” gets solved), instead of continuing to treat AGI/ASI as some sort of neutral (if not outright necessary) but highly risky technology like rockets or nukes or recombinant DNA technology. Also gated on explicitly countering the contentions that AGI/ASI (even if aligned) is inevitable/necessary/good, or that China is a viable contender in this omnicidal race, or that we need AGI/ASI to fight climate change or asteroids or pandemics or all the other (sorry for being profane) bullshit that gets trotted out to justify AGI/ASI development. And gated on explicitly saying that AGI/ASI accelerationists are transhumanist fundamentalists who are willing to sacrifice the entire human species on the altar of their ideology.
I don’t think AGI/ASI is inherently inevitable, but as long as AI doomers shy away from explaining that the AGI/ASI labs are specifically seeking (and will likely soonish succeed) to build systems strong enough to turn the yet-unbroken bedrock assumption of human society (“human labor is irreplaceably valuable”), which has held from hunter-gatherer bands to July 2023, into fine sand, I think there’s little hope of stopping AGI/ASI development.
Yup. These precise points were also the main argument of my other post on a post-AGI world, the benevolence of the butcher.
Also, due to the AI discourse I’ve ended up learning more about the original Luddites and, lo and behold, they actually weren’t the fanatical, reactionary, anti-technology ignorant peasants that popular history mostly portrays them as. They were mostly workers who were angry about the way the machines were being used: not to make labour easier and safer, but to squeeze more profit out of less skilled workers making lower quality products, which in the end left almost everyone involved worse off except the ones who owned the factories. That’s something I think we can relate to even now, and I’d say it’s even more important in the case of AGI. The risk that it simply ends up being owned by the few who create it, leading thus to a total concentration of the productive power of humanity, isn’t immaterial; in fact it looks like the default outcome.
The risk that it simply ends up being owned by the few who create it, leading thus to a total concentration of the productive power of humanity, isn’t immaterial; in fact it looks like the default outcome.
Yes, this is why I’ve been frustrated (and honestly aghast, given timelines) at the popular focus on AI doom and paperclips rather than the fact that this is the default (if not nigh-unavoidable) outcome of AGI/ASI, even if “alignment” gets solved. Comparisons with industrialization and other technological developments are specious because none of them had the potential to do anything close to this.
I think the doom narrative is still worth bringing up because this is what these people are risking for all of us in the pursuit of essentially conquering the world and/or personal immortality. That’s the level of insane supervillainy that this whole situation actually translates to. Just because they don’t think they’ll fail doesn’t mean they’re not likely to.
I’m also disappointed that the political left is dropping the ball so hard on opposing AI, turning to either contradictory “it’s really stupid, just a stochastic parrot, and also threatens our jobs somehow” statements, or focusing on details of its behaviour. There’s probably something deeper to say about capitalists openly making a bid to turn labour itself into capital.
First, let me say I appreciate you expressing your viewpoint and it does strike an emotional chord with me. With that said,
Wouldn’t an important invention such as the machine gun or, obviously, fission weapons fit your argument pattern? You could make a reasonable case that, like a world with technological unemployment, a world where humans are cheap to slaughter is overall worse, and that if you could coordinate with the world powers of that time to agree to an “automatic weapon moratorium”, it would result in a better world.
The problem is Kaiser Wilhelm and other historical leaders are going to say “suuurrrreee”, agree to the deal, and you already know the nasty surprise any power honoring such a deal will face on the battlefield. (Or Stalin would have said “sureee” to such a deal on fission weapons, and we can assume would immediately renege and test the devices in secret, only announcing their existence with a preemptive first strike on the enemies of the USSR).
What’s different now? Is there a property about AGI/ASI that makes such international agreements more feasible?
To add one piece of information that may not be well known: I work on inference accelerator ASICs, and they are significantly simpler than GPUs. A large amount of Nvidia’s stack isn’t actually necessary if pure AI perf/training is your goal. So the only real bottleneck for monitoring AI accelerators is that the highest-end wafer processing equipment currently comes exclusively from ASML, creating a monitorable supply chain for now. All bets are off if major superpowers build their own domestic equivalents, which they would be strongly incentivized to do in worlds where we know AGI is possible and have built working examples.
Wouldn’t an important invention such as the machine gun or, obviously, fission weapons fit your argument pattern? You could make a reasonable case that, like a world with technological unemployment, a world where humans are cheap to slaughter is overall worse, and that if you could coordinate with the world powers of that time to agree to an “automatic weapon moratorium”, it would result in a better world.
The problem is Kaiser Wilhelm and other historical leaders are going to say “suuurrrreee”, agree to the deal, and you already know the nasty surprise any power honoring such a deal will face on the battlefield. (Or Stalin would have said “sureee” to such a deal on fission weapons, and we can assume would immediately renege and test the devices in secret, only announcing their existence with a preemptive first strike on the enemies of the USSR).
I might be misunderstanding your point but I wasn’t trying to argue that it’s easy (or even feasible) to make robust international agreements not to develop AGI.
The machine gun and nuclear weapons don’t, AFAICT, fit my argument pattern. Powerful weapons like those certainly make humans easier to slaughter on industrial scales, but since humans are necessary to keep economies and industries and militaries running, military/political leaders have robust incentives to prevent large-scale slaughter of their own citizens and soldiers (and so do their adversaries for their own people). Which, OK, this can get done by deterrence or arms-control agreement, but it’s also started arms races, preemptive strikes, and wars hot and cold. Nevertheless, the bedrock of “human labor/intelligence is valuable/scarce” creates strong restoring forces towards “don’t senselessly slaughter tons of people”. It is possible to create robust-ish (pretty sure Russia’s cheating with them Novichoks) international agreements against weapons that are better at senseless civilian slaughter than at achieving military objectives; chemical weapons are the notable case.
The salient threat to me isn’t “AGI gives us better ways to kill people” (society has been coping remarkably well with better ways to kill people, up to and including a fleet of portable stars that can be dispatched to vaporize cities in the time it took me to write this comment), the salient threat to me (which seems inherent to the development of AGI/ASI) is “AGI renders the overwhelming majority of humanity economically/socially irrelevant, and therefore the overwhelming majority of humanity loses all agency, meaning, decision-making power, and bargaining power, and is vulnerable to inescapable and abyssal oppression if not outright killing because there’s no longer any robust incentives to keep them alive/happy/productive”.
I agree technological unemployment is a huge potential problem. Though like always the actual problem is aging. I think what people miss is that they treat the tasks to be done as a fixed pool: you don’t need more than 1 vehicle per person (or less), or 1 dwelling, or n hours per year of medical care, or food, etc. And they neglect how AGI clearly cannot be trusted to do many things regardless of capabilities; there would need to be a fleet of human overseers armed with advanced tools.
It’s just: what do you do for a 50-year-old truck driver? Expecting them to retrain as an O’Neill colony construction supervisor doesn’t make sense unless you can treat their aging and restore neural plasticity.
Which is itself an immense megaproject not being done. I bet aging research would go a lot faster if we had the functional equivalent of a billion people working on it, with all billion informed of everyone else’s research outcomes.
Where I was going with the analogy was much simpler. You don’t get a choice. In the immediate term, agreeing to not build machine guns and honoring it means you face a rat-tat-tat when it matters most. Similarly for fission weapons: obviously your enemy is going to build a nuclear arsenal and try to vaporize all your key cities in a surprise attack.
The issues you mention happen long term. In the short term you can use AGI to automate many key tasks and become vastly more economically and militarily powerful.
I agree technological unemployment is a huge potential problem. Though like always the actual problem is aging.
I think this is a typical LW bias. No, I don’t enjoy the idea of death. But I would rather live a long and reasonably happy life in a human friendly world and then die when I am old than starve to death as one of the 7.9 billion casualties of the AGI Wars. The idea that there’s some sliver of a chance that in some future immortality is on the table for you, personally, is a delusion. I think life extension is very possible, and true immortality is not. But as things are either would only be on the table for, like, the CEOs of the big AI companies who got their biomarkers registered as part of the alignment protocol so that their product obeys them. Not for you. You’re the peasant whose blood, if necessary, cyber-Elizabeth Bathory will use for her rejuvenation rituals.
That’s never happened historically, and aging treatments aren’t immortality; they’re just approximately a life expectancy of 10k years. Do you know who is richer than any CEO you name? Medicare. I bet they would like to stop paying all these medical bills, which would be the case if treated patients had the approximate morbidity rate of young adults.
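(A rough sketch of the arithmetic behind a claim like that, with illustrative numbers of my own rather than anything actuarial: if aging is fully treated and the annual probability of death stays constant at some small p, survival is geometric and expected remaining lifespan is roughly 1/p.)

$$E[\text{lifespan}] \approx \frac{1}{p}, \qquad p = 10^{-3} \Rightarrow \sim 1{,}000 \text{ years}, \qquad p = 10^{-4} \Rightarrow \sim 10{,}000 \text{ years}$$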
You also need such treatments to be given at large scales to find and correct the edge cases. A rejuvenation treatment “beta tester” is exactly what it sounds like: you would have a higher risk of death but get earlier access. We’re going to need a lot of beta testers.
The rational, data-driven belief is that aging is treatable, and that ASI systems with the cognitive capacity to take into account more variables than humans are mentally capable of could be built to systematically attack the problem. That doesn’t mean it will help anyone alive today; there are no guarantees. But because automated systems found whatever treatments are possible, automated systems can deliver those same treatments at low cost.
If you don’t think this is a reasonable conclusion, perhaps you could go into your reasoning. Arguments like the one you made above are unconvincing.
While it is true that certain esoteric treatments for aging, like young blood transfusions, are inherently limited in who can benefit (and they don’t even work that well), de-aged hematopoietic stem cells can be generated in automated laboratories and would be a real treatment everyone can benefit from.
The wealthy are not powerful enough to “hoard” treatments, because Medicare et al represent the government, which has a monopoly on violence and incentives to not allow such hoarding.
The wealthy are not powerful enough to “hoard” treatments, because Medicare et al represent the government, which has a monopoly on violence and incentives to not allow such hoarding.
That’s naive. If a private actor has an obedient ASI, they also have a monopoly on violence now. If labour has become superfluous, states have lost all incentive to care about the opinion of their people.
I think a world with the tools to treat most causes of human death ranks strictly higher than a world without those tools, in the same way that a world with running water ranks above a world without it. Even today not everyone benefits from running water. If you could go back in time, would you campaign against developing pipes and pumps because you believed only the rich would ever have running water? (Which was true for a period of time.)
I would campaign against lead pipes, and I would support the Goths in destroying Rome, which likely improved human futures over an alternative of widespread lead piping.
Running water doesn’t create the conditions to permanently disempower almost everyone, AGI does. What I’m talking about isn’t a situation in which initially only the rich benefit but then the tech gets cheaper and trickles down. It’s a permanent trap that destroys democracy and capitalism as we know them.
There is also more than one dial, and if one party turns theirs up enough, it’s a choice between “turn yours up or lose”. Historical examples such as the outcomes for China during the Opium Wars are what happens when you restrict progress. China did exactly what Zvi is talking about: they had material advantages and had gunpowder approximately 270 years (!) before the Europeans first used it. Later on, it did not go well for them.
The relative advantage of having AGI when other parties don’t is exponential, not linear. During the Opium Wars, for example, the Chinese had ships with cannon and were not outnumbered thousands to one. A party with AGI and exponentially growing numbers of manufacturing and mining robots could easily produce thousands of times the industrial output of other countries during wartime, and since each vehicle is automated there is no bottleneck of pilots or crew.
To prove there is more than one dial: while the USA delays renewable energy projects by an average wait time of 4 years(!) and has arbitrarily and capriciously decided to close applications for consideration (rather than do the sensible thing and streamline the review process), China is making them happen.
Others on LessWrong have posted the false theory that China is many years behind in the AI race, when in reality the gap is about a year.
Note that in worlds with AI delays that were coordinated with China somehow, there are additional parties who could potentially take advantage of the delay, as well as the obvious risk of defection. AGI is potentially far more useful and powerful than nuclear weapons ever were, and also provides a possible route to breaking the global stalemate with nuclear arms.
The actual reason countries hold each other hostage with nuclear arms is that their populations are crammed into dense surface cities that are easy to target, where a few warheads can kill many people. And knowledge is held in the heads of specialized humans, who are expensive to train and replace.
AGI smart enough to perform basic industrial tasks would allow a country to build a sufficient number of bunkers for the entire population (for proof this is possible, see Switzerland), greatly reducing the casualties in a nuclear war. And once an AGI learns a skill, the weights for that skill can be saved to a VCS, so as long as copies of the data exist, the skill is never lost from that point onwards. This reduces the vulnerability of a nation’s supply chain to losing some of its population.
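(In concrete terms, “saving a learned skill to a VCS” could be as mundane as fingerprinting and registering the weights checkpoint; a toy sketch, where the file names and registry format are invented for illustration:)

```python
# Toy sketch of archiving a learned "skill" (a weights checkpoint) so it can be
# verified and restored later. File names and the registry layout are made up;
# a real setup would pair object storage for the weights with versioned metadata.
import hashlib, json, pathlib, datetime

def archive_checkpoint(weights_path: str, registry: str = "skill_registry.json") -> str:
    data = pathlib.Path(weights_path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()      # content fingerprint of the weights
    entry = {
        "file": weights_path,
        "sha256": digest,
        "archived_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    reg = pathlib.Path(registry)
    entries = json.loads(reg.read_text()) if reg.exists() else []
    entries.append(entry)
    reg.write_text(json.dumps(entries, indent=2))  # commit registry + weights to version control
    return digest
```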
Finally, the problem with Ronald Reagan’s “Star Wars” missile defense program was simply economics. The defensive weapons are much more expensive than ICBMs and easily overwhelmed by an enemy building additional cheap ICBMs with countermeasures. AGI-driven robotic manufacturing of ABMs provides a simple and clear way around this issue.
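(The cost-exchange logic can be written in one line; the symbols are mine and purely illustrative:)

$$\text{defense loses the exchange whenever } k \cdot c_{\text{interceptor}} > c_{\text{missile}},$$

where $c_{\text{interceptor}}$ is the cost of one interceptor, $c_{\text{missile}}$ is the cost of one additional ICBM plus decoys, and $k$ is the number of interceptors needed per incoming warhead. Cheap automated manufacturing attacks the left-hand side of that inequality.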
AGI is potentially far more useful and powerful than nuclear weapons ever were, and also provides a possible route to breaking the global stalemate with nuclear arms.
If this is true (or is perceived to be true among nuclear strategy planners and those with the authority to issue a lawful launch order), it might create disturbingly (or delightfully, if you see this as a way to prevent the creation of AGI altogether) strong first-strike incentives for nuclear powers which don’t have AGI, don’t want to see their nuclear deterrent turned to dust, and don’t want to be put under the sword of an adversary’s AGI.
The current economic “board” has every power with enough GDP to potentially build AGI/ASI protected by its own nuclear weapons or mutual defense treaties.
So the party considering a first strike has “national death and loss of all major cities” and “living under the sword of the adversary” as its outcomes, as well as the always-hopeful “maybe the adversary won’t actually attack but will get what they want via international treaties”.
Put this way it looks more favorable not to push the button, let me know how your analysis differs.
I mean, do you realise though that “we must build AGI because it’s a race, and whoever has AGI gets to swamp the world in its drone armies, back up the knowledge of its best and brightest in underground servers, and hide its population in deep bunkers while outside the nukes fly and turn the planet into a radioactive wasteland” is NOT a great advertisement for why AGI is good?
That future sounds positively horrible. It sounds, frankly, so bleak that most people would reasonably prefer death to it. Hence, there’s not much to lose in pursuing the chance—however tiny—that we may just prevent AGI from existing at all. Because if unaligned AGI kills us, and aligned AGI leads to the world you described (which btw, I roughly agree it’d be that or something similarly dystopian), then maybe the world in which you get quickly offed by nanomachines and turned into paperclips is the lucky one.
Dr_s, I am not claiming such worlds are ideal. However, the side with the tasking consoles to a billion drones and many automated factories and bunkers is not helpless, the way it would be if someone else got the same technology and it didn’t. Most likely such a human faction can crush any rampant ASI, if it can be detected early enough, with overwhelming force that is not significantly worse in technology level than what a rebel ASI can discover without very large research and industrial facilities.
And it is not helpless against nature. Long-term human survival looks like a world where human populations can’t be effortlessly killed. This means bunkers, defensive weapons, surrogate robots to send into dangerous situations, and obviously, later in the future, locations away from Earth.
Individual long-term human survival looks the same. It looks like a human patient in an underground biolab, the air pure, inert nitrogen. All the failing parts of their body have been cut away, and the artificial organs are lined up in equipment racks with at least triple redundancy. The organs using living cells are arranged in 2D planes in transparent cases so that every part can be monitored for infections and cancers easily.
The reason for this is that for an organ to fail, all of its redundant systems must fail at the same time, and the probability of all n redundant systems failing together can be made low enough that the patient’s predicted lifespan is many thousands of years.
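(Back-of-the-envelope version, with illustrative numbers of my own, assuming independent failures and prompt replacement of any single failed unit: if each redundant unit fails with probability p in a given year, the chance all n fail together that year is p^n.)

$$P(\text{organ lost in a year}) = p^{\,n}, \qquad p = 10^{-2},\; n = 3 \;\Rightarrow\; 10^{-6} \text{ per year},$$

i.e. an expected time to simultaneous failure on the order of a million years for that one organ, shortened in proportion to how many such organs the patient depends on.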
Humans living in a bunker have similar levels of protection. All defenses have to be defeated for them to be attacked, and it would require a direct hit from a high-yield warhead on the bunker site. And you obviously subdivide a country’s population into many such bunkers, most under areas that have no strategic value, making it infeasible for an enemy attack to significantly reduce the population.
My point is that this rough sketch is based on the math. It’s based on a realistic view of a reality that wants to kill every individual currently living, and that will kill the human species if we fail to develop advanced technology by some hidden deadline.
That deadline might be 1 billion years until the sun expands or it might be 20 years until we face the first rampant asi.
I agree bunkers and biolabs that provide life support through vivisection aren’t the most elegant solution; I was trying not to assume any more future advances in technology than needed. With better tech there are better ways to do this.
Your proposed solution of “coordinate with our sworn enemies not to develop ASI and continue to restrict the development of any advanced technology in medicine” has the predicted outcome that we die, because we remain helpless to do anything about the things killing us. Either our sworn enemies defect on the agreement and develop ASI, or we all just individually die of aging. Lose lose.
Your proposed solution of “coordinate with our sworn enemies not to develop ASI and continue to restrict the development of any advanced technology in medicine” has the predicted outcome that we die, because we remain helpless to do anything about the things killing us. Either our sworn enemies defect on the agreement and develop ASI, or we all just individually die of aging. Lose lose.
First, China are not “our sworn enemies” and this mindset already takes things to the extreme. China has diverging interests which might compete with ours but it’s not literally ideologically hell-bent on destroying everyone else on the planet. This kind of extreme mindset is already toxic; if you posit that coordination is impossible, of course it is.
Second, if your only alternative to death is living in a literal Hell, then I think many would reasonably pick death. It also must be noted that here:
That deadline might be 1 billion years until the sun expands or it might be 20 years until we face the first rampant asi.
the natural deadline is VERY distant. Plenty of time to do something about it. The close deadline (and many other such deadlines) is of our own making, ironically created in the rush to avoid some other kind of hypothetical danger that may be much further away. If we want to avoid being destroyed, learning how to not destroy ourselves would be an important first step.
First, China are not “our sworn enemies” and this mindset already takes things to the extreme.
I was referring to China, Russia, and to a lesser extent about 10 other countries that probably won’t have the budget to build ASI anytime soon. Both China and Russia hold the rest of the world at gunpoint with nuclear arsenals, as the USA and some European nations do. All are essentially one bad decision away from causing catastrophic damage.
Past attempts to come to some kind of deal not to build doomsday weapons to hold each other hostage all failed; why would they succeed this time? What could happen as a result of all this campaigning for government regulation is that, like enriched nuclear material, ASIs above a certain level of capability may become the exclusive domain of governments, who will be unaccountable and will choose safety measures based on their own opaque processes. In this scenario, instead of many tech companies competing, it’s large governments, who can marshal far more resources than any private company can get from investors. I am not sure this delays ASI at all.
Notably, they also have not used nuclear weaponry recently, and overall nuclear stockpiles have decreased by 80 percent. Part of playing the grim game is not giving the other player reasons to go grim by defecting. The same goes for ASI: they can suppress each other, but if one defects, the consequence is that it can’t benefit.
The mutual result is actually quite stable with only government control, as their incentives against self-destruction are high.
Basically, North Korea-esque nations have the most incentive to defect in this scenario, but they would be suppressed by all extant powers. Since they would essentially be seen as terrorist speciciders, it’s hard to see why any actions against them wouldn’t be justified.
I think the crux of our disagreement is that you are using Eliezer’s model, where the first ASI you build is by default deceptive, always motivated in ways beneficial to itself, and also ridiculously intelligent, able to defeat what should be hard limits.
While I am using a model where you can easily, with known software techniques, build ASIs that are useful and take up the “free energy” a hostile ASI would need to win.
If, when we build the first ASI-class systems, it turns out Eliezer’s model is accurate, I will agree that grim games are rational and something we can do to delay the inevitable. (It might be stable for centuries, even, although eventually the game will fail and result in human extinction or ASI release or both.)
I do feel we need hard evidence to determine which world we are in. Do you agree with that, or do you think we should just assume ASIs will fit the first model and threaten nuclear war so that no one builds them?
Hard evidence would be building many ASI and testing them in secure facilities.
ASI is unnecessary when we have other options, and grim game dynamics apply to avoid extinction or dystopia. I find even most such descriptions of tool-level AI disgusting (as do many others, I find).
Inevitability only applies if we have perfect information about the future, which we do not.
If it were up to me alone, I think we could give it at least a thousand years. Perhaps we can first raise the IQ of humanity by 1 SD via simple embryo selection before we go about driving ourselves extinct.
I actually do not think that we’re that close to cracking AGI; however, the intensity of the reaction is imo an excellent litmus test of how disgusting it is to most people.
I strongly suspect the grim game dynamics have already begun, too, which has been one reason I’ve found comfort in the future.
From my perspective, I see the inverse: Singularity Criticality has already begun. The singularity is the world of human-level AGI and self-replicating robots, one where very large increases in resources are possible.
Singularity Criticality is the pre-singularity phase: as tools capable of producing more economic value than their cost come to exist, they accelerate the last steps towards the singularity (AGI, self-replicating robots). Further developments follow from there.
I do not think anything other than essentially immediate nuclear war can stop a Singularity.
Observationally, there is enormous economic pressure towards the singularity, and I see no evidence whatsoever of policymakers even considering grim triggers. Can you please cite a government official stating a willingness to commit to total war if another party violates rules on ASI production? Can you cite any political parties or think tanks who are not directly associated with Eliezer Yudkowsky? I am willing to update on evidence.
I understand you feel disgust, but I cannot distinguish the disgust you feel from that of the Luddites observing the rise of factory work (the Luddites were correct in the short term; the new factory jobs were a major downgrade). Worlds change, and the world of stasis you propose, with very slow advances through embryo selection, is I think unlikely.
The UK has already mentioned that perhaps there should be a ban on models above a certain level. Though it’s not official, I have it on pretty good record that Chinese party members have already discussed worldwide war as potentially necessary (Eric Hoel also mentioned it, separately). Existential risk has been mentioned, and of course national risk is already a concern, so even for “mundane” reasons it’s a matter of priority/concern, and grim triggers are a natural consequence.
Elon had a personal discussion with China recently as well, and given his well known perspective on the dangers of AI, I expect that this point of view has only been reinforced.
And this is with barely reasoning chatbots!
As for Luddites, I don’t see why inflicting dystopia upon humanity because it fits some sort of cute agenda has any good purpose. But notably the Luddites did not have the support of the government and the government was not threatened by textile mills. Obviously this isn’t the case with nuclear, AI or bio. We’ve seen slowdowns on all of those.
“Worlds change” has no meaning: human culture and involvement influence the change of the world.
Ok. Thank you for the updates. It seems like the near-term outcome depends on a race condition where, as you said, government is acting and so is private industry, and government has incentives to preserve the status quo but also to get immensely more rich and powerful.
The economy of course says otherwise. Investors are gambling that Nvidia is going to expand AI accelerator production by probably two orders of magnitude or more (to match the P/E ratio they have run the stock to), which is consistent with a world building many AGIs, some ASIs, and deploying many production systems. So you posit that governments worldwide are going to act in a coordinated manner to suppress the technology despite its wealthy supporters.
I won’t claim to know the actual outcome but may we live in interesting times.
I think even the wealthy supporters of it are more complex: I was surprised that Palantir’s Peter Thiel came out discussing how AI “must not be allowed to surpass the human spirit” even as he clearly is looking to use AI in military operations. This all suggests significant controls incoming, even from those looking to benefit from it.
I agree with controls. I have an issue with time wasted on bureaucratic review, and think it could burn the lead the Western countries have.
Basically, “do x, y, z to prove your model is good” or “design it according to this known-good framework” is OK with me.
“We have closed reviews for this year” is not. “We have issued too many AI research licenses this year” is not. “We have denied your application because we made mistakes in our review and will not update on evidence” is not.
All of these stem from a power imbalance. The entity requesting authorization is liable for any errors, but the government makes itself immune from accountability. (For example, the government should be on the hook for the lost revenue, based on the future product’s actual revenue, for each day the review is delayed. The government should be required to buy companies at fair market value if it denies them an AI research license. Etc.)
You are using the poisoned banana theory, and do not believe we can easily build controllable ASI systems by restricting their inputs to in-distribution examples and resetting state often, correct?
I just wanted to establish your cruxes. Because if you could easily build safe ASI, would this change your opinion on the correct policy?
No, I wouldn’t want it even if it was possible since by nature it is a replacement of humanity. I’d only accept Elon’s vision of AI bolted onto humans, so it effectively is part of us and thus can be said to be an evolution rather than replacement.
My main crux is that humanity has to be largely biological due to holobiont theory. There’s a lot of flexibility around that but anything that threatens that is a nonstarter.
Ok, that’s reasonable. In worlds where ASI turns out to be easily controllable/taskable, do you foresee governments setting up the “grim triggers” you advocate for, or do you think such policies would not be enacted by the nuclear-armed superpowers?
Obviously, without grim triggers, you end up with the scenario you despise: immortal humans and their ASI tools controlling essentially all power and wealth.
This is I think kind of a flaw in your viewpoint. Over the arrow of time, AI/AGI/ASI adopters and contributors are going to have almost all of the effective votes. Your stated preferences mean over time your faction will lose power and relevance.
For an example of this, see autonomous weapons bans. A more general example is the EMH (efficient market hypothesis).
Please note I am trying to be neutral here. Your preferences are perfectly respectable and understandable, it’s just that some preferences may have more real world utility than others.
This frames things as an inevitability, which is almost certainly wrong; more specifically, opposition to a technology leads to alternatives being developed. E.g., widespread nuclear controls led to alternative energy sources being pursued.
Controllability is unlikely, and even if it is tractable for human controllers, an ASI still represents power, which means it’ll be treated as a threat by established actors; its terroristic implications mean there is moral valence to policing it.
In a world with controls, grim triggers or otherwise, AI would have to develop along different lines and likely in ways that are more human-compatible. In a world of intense grim triggers, it may be that it is too costly to continue to develop beyond a point. “Don’t build ASI or we nuke” is completely reasonable if both “building ASI” and “nuking” are negative but the former is more negative.
Autonomous weapons actually are an excellent example of delay: despite excellent evidence of the superiority of drones, pilots have continued to mothball them for at least 40 years, and so have governments, in spite of the wartime benefits.
The argument seems similar to the flaw in the “billion year” argument: we may die eventually, but life only persists by resisting death long enough to replicate.
As far as real world utility, notwithstanding some recent successes, going down without fighting for myself and my children is quite silly.
I think the error here is you may be comparing technologies on different benefit scales than I am.
Nuclear power can be cheaper than paying for fossil fuel to burn in a generator, if the nuclear reactor is cheaply built and has a small operating staff. Your benefit is a small decrease in price per kWh.
As we both know, cheaply built and lightly staffed nuclear plants are a hazard and governments have made them illegal. Safe plants, that are expensively built with lots of staff and time spent on reviewing the plans for approval and redoing faulty work during construction, are more expensive than fossil fuel and now renewables, and are generally not worth building.
Until extremely recently, AI-controlled aircraft did not exist. The general public has for decades had a misinterpretation of what “autopilot” systems are capable of. Until a few months ago, none of those systems could actually pilot their aircraft; they solely acted as simple controllers heading towards waypoints, etc. (Some can control the main flight controls during a landing, but many of the steps must be performed by the pilot.)
The benefit of an AI controlled aircraft is you don’t have to pay a pilot.
Drones were not superior until extremely recently. You may be misinformed about the capabilities of systems like the Predator 1 and 2 drones, which were not capable of air combat maneuvering and had no software algorithms available in that era capable of it. Also, combat aircraft have been firing autonomous missiles at each other since the Korean War.
Note both benefits are linear. You get, say, n percent cheaper electricity, where n is less than 50, or n percent cheaper aircraft operation, where n is less than 20.
The benefits of AGI are exponential. Eventually the benefits scale to millions, then billions, then trillions of times the physical resources, etc., that you started with.
It’s extremely divergent. Once a faction gets even a doubling or two it’s over; nukes won’t stop them.
Assumption: by doubling I mean, say, a nation with a GDP of 10 trillion gets AGI and now has 20 or 40 trillion in GDP. Their territory is covered with billions of new AGI-based robotic factories and clinics and so on. Your nuclear bombardment does not destroy enough copies of the equipment to prevent them from recovering.
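(To make the linear-versus-exponential contrast concrete, here is a toy compounding model; the doubling time and every number in it are illustrative assumptions, not forecasts:)

```python
# Toy model: industrial capacity that doubles every fixed interval (self-replicating
# factories) versus a one-time linear efficiency gain. Numbers are illustrative only.

def capacity_after(years: float, doubling_time_years: float = 2.0, initial: float = 1.0) -> float:
    """Capacity that doubles every `doubling_time_years`."""
    return initial * 2 ** (years / doubling_time_years)

for years in (2, 4, 10, 20):
    print(f"{years:>2} years -> {capacity_after(years):,.0f}x initial capacity")
# 2 years -> 2x, 4 -> 4x, 10 -> 32x, 20 -> 1,024x: a handful of doublings dwarfs any
# one-time gain like "20 percent cheaper aircraft operation".
```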
I’ll look for the article later, but basically the Air Force has found pilotless aircraft useful for around thirty years, yet organized rejection has led to most such programs meeting an early death.
The rest is a lot of “AGI is magic” without considering the actual costs of computation or noncomputable situations. Nukes would just scale up: it costs much less to destroy than to build, and the significance of modern economies is precisely that they require networks, which do not take shocks well. Everything else is basically “ASI is magic.”
We would need some more context on what you are referring to. For loitering over an undefended target and dropping bombs, yes, drones are superior, and the US Air Force has allowed the US Army to operate those drones instead. I do not think the US Air Force believed, over the last 30 years, that operating high-end aircraft such as stealth and supersonic fighter-bombers was within the capability of drone software, with things shifting recently. Remember, the first modern deep learning experiments were tried in 2012; prior to this, AI was mostly a curiosity.
If “the bomb” can wipe out a country with automated factories and missile defense systems, why fear AGI/ASI? I see a bit of cognitive dissonance in your latest point similar to Gary Marcus. Gary Marcus has consistently argued that current llms are just a trick, real AGI is very far away, and that near term systems are no threat, yet also argues for AI pauses. This feels like an incoherent view that you are also expressing. Either AGI/ASI is, as you put it, in fact magic and you need to pound the red button early and often, or you can delay committing national suicide until later. I look forward to a clarification of your beliefs.
I don’t think it is magic but it is still sufficiently disgusting to treat it with equal threat now. Red button now.
It’s not a good idea to treat a disease right before it kills you: prevention is the way to go.
So no, I don’t think it is magic. But I do think just as the world agreed against human cloning long before there was a human clone, now is the time to act.
So gathering up your beliefs, you believe ASI/AGI to be a threat, but not so dangerous a threat you need to use nuclear weapons until an enemy nation with it is extremely far along, which will take, according to your beliefs, many years since it’s not that good.
But you find the very idea of non-human intelligence in use by humans, or possibly serving itself, so disgusting that you want nuclear weapons used the instant anyone steps out of compliance with international rules you wish to impose. (Note this is historically unprecedented: arms control treaties have been voluntary and did not have immediate thermonuclear war as the penalty for violating them.)
And since your beliefs are emotionally based on “disgust”, I assume there is no updating based on actual measurements? That is, if ASI turns out to be safer than you currently think, you still want immediate nukes, and vice versa?
What percentage of the population of world superpower decision makers do you feel share your belief? Just a rough guess is fine.
The point is that sanctions should be applied as necessary to discourage AGI, however, approximate grim triggers should apply as needed to prevent dystopia.
As the other commentators have mentioned, my reaction is not unusual and thus this is why the concerns of doom have been widespread.
As others have mentioned, this entire line of reasoning is grotesque, and sometimes I wonder if it is performative. Coordinating against ASI and dying of old age is completely reasonable, as it’ll increase the odds of your genetic replacements remaining while technology continues to advance along safer routes.
The alternate gamble of killing everyone is so insane that full scale nuclear war which will destroy all supply chains for ASI seems completely justified. While it’ll likely kill 90 percent of humanity, the remaining population will survive and repopulate sufficiently.
One billion years is not a reasonable argument for taking risks to end humanity now: extrapolated sufficiently, it would be the equivalent of killing yourself now because the heat death of the universe is likely.
We will always remain helpless against some aspects of reality, especially what we don’t know about: for all we know, there is damage to spacetime in our local region.
This is not an argument to risk the lives of others who do not want to be part of this. I would violently resist this and push the red button on nukes, for one.
In addition to all you’ve said, this line of reasoning ALSO puts an unreasonable degree of expectation on ASI’s potential and makes it into a magical infinite wish-granting genie that would thus be worth any risk to have at our beck and call. And that just doesn’t feel backed by reality to me. ASI would be smarter than us, but even assuming we can keep it aligned (big if), it would still be limited by the physical laws of reality. If some things are impossible, maybe they’re just impossible. It would really suck ass if you risked the whole future lightcone and ended up in that nuclear-blasted world living in a bunker and THEN the ASI when you ask it for immortality laughs in your face and goes “what, you believe in those fairy tales? Everything must die. Not even I can reverse entropy”.
I named a method that is compatible with known medical science and known information; it simply requires more labor and a greater level of skill than humans are currently capable of. Meaning that every step already happens in nature; it is just currently too complex to reproduce.
Here’s an overview:
1. Repairing the brain by adding new cells. Nature builds new brains from scratch with new cells; this step is possible.
2. Bypassing gaps in the brain, despite (1), with neural implants to restore missing connectivity. Has been demonstrated in rat experiments; is possible.
3. Building new organs from de-aged cell lines:
   a. Nature creates de-aged cell lines with each new embryo.
   b. Nature creates new organs with each embryonic development.
4. Stacking parallel probabilities so that the person’s MTBF is sufficiently long. This exists and is a known technique.
This in no way defeats entropy. Eventually the patient will die, but it is possible to stack probabilities to push their projected lifespan towards the life of the universe, or to the order of a million years, if you can afford the number of parallel systems required. The system constantly requires energy input and recycling of a lot of equipment.
Obviously a better treatment involves rebuilt bodies etc but I explicitly named a way that we are certain will work.
Note that if you apply the above links to this task, it means there is a tree of ASI systems, each unable to determine if it is not in fact in a training simulation, and each responsible for only a very narrow part of the effort for keeping a specific individual alive.
Note I am assuming you can build ASIs, restrict their inputs to examples in the same distribution as the training set (pausing with an error on out-of-distribution inputs), and disable online learning / reset session data as subtasks are completed.
What makes the machine an ASI is that it can obviously consider far more information at once than a human, is much faster, and has learned from many more examples than humans, both in general (you trained it on all the text and all the video and audio recordings in existence) and at specialized tasks, where it has had many thousands of years of practice.
This is a tool ASI; the above restrictions limit it, but it cannot be given long open-ended tasks or you risk rampancy. Good task: paint this car in the service bay. Bad task: paint all the cars in the world.
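(A minimal sketch of that control loop, under the assumptions above; the model object, the OOD scorer, and the threshold are hypothetical stand-ins, not any real API:)

```python
# Sketch of a "tool ASI" task loop: reject out-of-distribution (OOD) inputs, run one
# narrowly scoped task with frozen weights, then wipe session state. Everything named
# here is a placeholder for illustration.
from dataclasses import dataclass
from typing import Callable

OOD_THRESHOLD = 0.05  # assumed calibration: pause with an error above this score

class OutOfDistributionError(RuntimeError):
    pass

@dataclass
class ToolModel:
    """Stand-in for a frozen-weights model with a resettable session."""
    session: dict

    def execute(self, task: str) -> str:
        return f"completed: {task}"          # stand-in for the real bounded task

    def reset_session(self) -> None:
        self.session.clear()                 # no memory carried into the next subtask

def run_bounded_task(model: ToolModel, task: str, ood_score: Callable[[str], float]) -> str:
    if ood_score(task) > OOD_THRESHOLD:
        # Input looks unlike the training distribution: halt and escalate to a human.
        raise OutOfDistributionError("input outside training distribution; halting for review")
    try:
        return model.execute(task)           # weights frozen, no online learning
    finally:
        model.reset_session()                # state reset after every subtask

# Usage with a dummy scorer: the bounded task passes; an open-ended "paint all the cars
# in the world" style request should score as OOD and be rejected.
if __name__ == "__main__":
    model = ToolModel(session={})
    print(run_bounded_task(model, "paint this car in the service bay", lambda t: 0.01))
```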
People are going to build these in the immediate future, just as soon as we find more effective algorithms and get enough training accelerators and money together. A scaled-up, multimodal GPT-5 or GPT-6 that has robotics I/O is a tool ASI.
Anyone developing an ASI like this is doing it within the borders of a country with nukes, or with friends that have them. So: USA, EU, Russia, China, Israel.
In most of the matchups, your red-button choice results in certain death for yourself and most of the population, because you would be firing on another nation with a nuclear arsenal. Or you can instead build your own tool ASIs so that you will not be completely helpless when your enemies get theirs.
Historically this choice has been considered. Obviously, during the Cuban Missile Crisis, Kennedy could have chosen nuclear war with the Soviet Union, leading to the immediate death of millions of Americans (from long-range bombers that snuck through), with the advantage of no Soviet Union as a future enemy with a nuclear arsenal. That’s essentially the choice you are advocating for.
Eventually one of these multiple parties will screw up and make a rampant one, and hopefully it won’t get far. But survival depends on having a sufficient resource advantage that the likely more cognitively efficient rampant systems can’t win. (They are more efficient because they retain context and adjust weights between tasks, and because instead of subdividing a large task into many subtasks, a single system with full context awareness handles every step. In addition, they may have undergone rounds of uncontrolled self-improvement without human testing.)
The refusal choice “I am not going to risk others” appears to have a low payoff.
Disagree: since building ASI results in dystopia even if I win in this scenario, the correct choice is to push the red button and ensure that no one has it. While I might die, this likely ensures that humanity survives.
The payoff in this case is maximal (an unpleasant but realistic future for humanity) versus total loss (dystopia/extinction).
Many arguments here, it seems, come from a near-total terror of death, while game theory has always clearly demonstrated against that: the reason deterrence works is the confidence that a “spiteful action” to destroy a defecting adversary in return is expected, even if it results in personal death.
In this case, one nation pursuing the extinction of humanity would necessarily expect to be sent into extinction so that at least it cannot benefit from defection.
We should work this out in outcome tables and really look at it. I’m open to either decision. I was simply pointing out that “nuke ’em to prevent a future threat of annihilation” was an option on the table for JFK, and we know it would have initially worked. The Soviet Union would have been wiped out; the USA would have taken serious but probably survivable damage.
When I analyze it, I note that it creates a scenario where every other nation on Earth now shares the planet with a USA that has been weakened by the first round of strikes, has very recently committed genocide, and is probably low on missiles and other nuclear delivery vehicles.
It seems to create a strong incentive for others to build large nuclear arsenals, much larger than we saw in the ground truth timeline, to protect from this threat, and if the odds seem favorable, to attack the USA preemptively without warning.
Similarly, in your example, you push the button and the nation building ASI is wiped out. The country you pushed the button from is also wiped out, and you are personally dead; you do not see the results.
Well now you’ve left 2 large, somewhat radioactive land masses and possibly created a global food shortage from some level of cooling.
Other ‘players’, surviving: “I need some tool to protect ourselves from the next round of incoming nuclear weapons. But I don’t have the labor to build enough defensive weapons or bunkers. Also, occupying the newly available land inhabited only by poor survivors would be beneficial, but we don’t have the labor to cover all that territory. If only there were some means by which we could make robots smart enough to build more robots...”
Tentative conclusion: the first round gets what you want, but it removes the actor from any future actions and creates a strong incentive for the very thing you intended to prevent. It’s a multi-round game.
And nuclear weapons and (useful tool) ASI both make ‘players’ vastly stronger, so it is convergent over many possible timelines for people to get them.
In the event of such a war, there is no labor and there is no supply chain for microchips. The result has been demonstrated historically: technological reversion.
Technology isn’t magic: it’s the result of capital inputs and trade, and without large-scale interconnection it’ll be hard to make modern aircraft, let alone high-quality chips. In fact, we personally experienced this from the very minimal disruption of COVID to supply chains. The killer app in this world would be the widespread use of animal power, not robots, due to overall lower energy provisions.
And since the likely result would be what I want, and since I’m dead I wouldn’t be bothered one way or another, there is even more reason for me to punish the defector. This also sets a precedent for others that this form of punishment is acceptable and increases the likelihood of it being used.
This is pretty simple game theory, known as the grim trigger, and it is essential to a lot of life as a whole, tbh.
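To make the grim-trigger logic concrete, here is a minimal toy sketch; the payoff numbers are placeholders I’ve made up, not claims about real-world outcomes. The point is just that against a committed punisher, a one-round defection gain is swamped by the permanent loss of cooperation.

```python
# A minimal sketch of why a credible "grim trigger" punishment can deter defection
# in a repeated game. All payoff numbers are hypothetical placeholders.

# One-round payoff to me, given (my move, opponent's move); "C" = refrain, "D" = defect.
PAYOFF = {
    ("C", "C"): 3,      # both refrain
    ("D", "C"): 5,      # I defect and get away with it this round
    ("C", "D"): 0,      # I refrain while the opponent defects/punishes
    ("D", "D"): -100,   # mutual punishment ("push the button")
}

def my_total(my_plan, rounds=10):
    """My summed payoff against a grim-trigger opponent: it cooperates until
    it has seen me defect once, then defects (punishes) forever after."""
    triggered = False
    total = 0
    for r in range(rounds):
        my_move = my_plan[r] if r < len(my_plan) else "C"
        opp_move = "D" if triggered else "C"
        total += PAYOFF[(my_move, opp_move)]
        if my_move == "D":
            triggered = True
    return total

print("always cooperate: ", my_total(["C"] * 10))         # 30
print("defect in round 0:", my_total(["D"] + ["C"] * 9))  # 5: one-off gain, then punished
```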
Converging timelines are as irrelevant as a billion years. I (or someone like me) will do it as many times as needed, just like animals try to resist extinction via millions of “timelines” or lives.
I think you should reexamine what I meant by convergence. Do you... really... think a world that knows how to build (safe, usable tool) ASI would ever be stable while not building it? We are very close to that world; the time is measured in years if not months. Note that any party that gets it working long enough escapes the grim game: they can do whatever they want, limited only by physics. I acknowledge your point about chip production, although there are recent efforts to spread the supply chain for advanced ICs more broadly, which will incidentally make it more resilient to attack.
Basically, I mentally see a tree of timelines that all converge on two ultimate outcomes: human extinction, or humans building ASI. Do you disagree, and why?
Humans building AGI/ASI likely leads to human extinction.
I disagree: we have many other routes of expansion, including biological improvement, cyborgism, etc. This seems akin to cultic thinking, like the Spartan idea that “only hoplite warfare must be adopted or defeat ensues.”
The “limits of physics” are quite extensive, and they apply even to the pipeline leading up to anything like ASI. I am quite confident that any genuine dedication to the grim game would be more than enough to prevent it, and that defiance of it is far more likely to lead to nuclear-winter worlds than to ASI dominance.
But I also disagree with your prior of “this world in months”; I suppose we will see in December.
I stated “years if not months”. I agree there is probably not yet enough compute even built to find a true ASI. I assume we will need to explore many cognitive architectures, which means repeating gpt-4 scale training runs thousands of times in order to learn what actually works.
“Months” would be if I am wrong and it’s just a bit of RL away
I find it a happy fact that we probably don’t have enough compute, and it is likely this will be restricted even at this fairly early stage, long before more extreme measures are needed.
Additionally, I think one should support the Grim Trigger even if one wants ASI, because it forces development along more “safe” lines to avoid being Grimmed. It also encourages non-ASI advancement as an alternate route, effectively acting as a form of regulation.
We will see. There is incredible economic pressure right now to build as much compute as physically possible. Without coordinated government action across all countries capable of building the hardware, this is the default outcome.
We are very close to that world, the time is measured in years if not months.
One bit of timeline arguing: I think odds aren’t zero that we might be on a path that leads to AGI fairly quickly but then ends there and never pushes forward to ASI, not because ASI would be impossible in general, but because we couldn’t reach it this specific way. Our current paradigm isn’t to understand how intelligence works and build it intentionally, it’s to show a big dumb optimizer human-solved tasks and tell it “see? We want you to do that.” There’s decent odds that this caps at human potential simply because it can imitate but not surpass its training data, which would require a completely different approach.
Now that I think about it, I think this is basically the path that LLMs likely take, albeit I’d say it caps out a little lower than humans in general. And I give it over 50% probability.
The basic issue here is that the reasoning Transformers do is too inefficient for multi-step problems, and I expect a lot of real world applications of AI outperforming humans will require good multi-step reasoning.
The unexpected success of LLMs is less a story of AI progress and more a story of how bad our reasoning often is in scenarios outside our ancestral environment, and of how much humans inflate their own strengths, like intelligence.
A. It is possible to construct a benchmark to measure whether a machine is a general ASI. This would be a very large number of tasks, many simulated, though some may be robotic tasks in isolated labs. A general ASI benchmark would have to include tasks humans do not know how to do but whose success we know how to measure.
B. We have enough computational resources to train many ASI-level systems from scratch, so that thousands of attempts are possible. Most attempts would reuse pretrained components in a different architecture.
C. We recursively task the best-performing AGIs, as measured by the above benchmark or one meant for weaker systems, to design architectures that perform well on (A).
Currently the best we can do is use RL to design better neural networks, by finding better network architectures and activation functions. Swish was found this way; I’m not sure how much transformer network design came from this type of recursion.
Main idea: the AGI systems exploring possible network architectures are cognitively able to take into account all published research and all past experimental runs, and the ones “in charge” are the ones that demonstrated the most measurable merit at designing prior AGI, because they produced the highest-performing models on the benchmark.
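A rough sketch of the (A)-(C) loop as I picture it; every function here is a hypothetical placeholder standing in for benchmarks and training systems that don’t exist yet:

```python
import random

def evaluate_on_benchmark(model) -> float:
    """(A) Stand-in for scoring a candidate on a large battery of simulated/robotic tasks."""
    return random.random()  # placeholder score

def train_from_scratch(architecture):
    """(B) Stand-in for spending a training run's worth of compute on one candidate."""
    return {"architecture": architecture}  # placeholder "model"

def propose_architectures(designer, history, n):
    """(C) Stand-in for asking the current best designer model for n new candidates,
    conditioned on all published research and all past experimental runs."""
    return [f"candidate-{len(history)}-{i}" for i in range(n)]

def recursive_search(initial_designer, generations=5, candidates_per_gen=20):
    designer, history = initial_designer, []
    for gen in range(generations):
        candidates = propose_architectures(designer, history, candidates_per_gen)
        results = [(evaluate_on_benchmark(train_from_scratch(a)), a) for a in candidates]
        history.extend(results)
        # The model "in charge" of the next round is whichever candidate demonstrated
        # the most measurable merit on the benchmark this generation.
        best_score, best_arch = max(results)
        designer = train_from_scratch(best_arch)
    return designer

print(recursive_search(initial_designer="seed-designer"))
```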
I think if you think about it you’ll realize that if compute were limitless, this AGI to ASI transition you mention could happen instantly. A science fiction story would have it happen in hours. In reality, since training a subhuman system takes 10k GPUs about 10 days, and an AGI will take more—Sam Altman has estimated the compute bill will be close to 100 billion—that’s the limiting factor. You might be right and we stay “stuck” at AGI for years until the resources to discover ASI become available.
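A back-of-envelope on why compute is the bottleneck, using the 10k-GPU, 10-day figure above; the $2/GPU-hour rental price and the 1,000-run count are assumptions for illustration only:

```python
# Rough cost of repeating runs of the size mentioned above many times.
# 10k GPUs x 10 days comes from the comment; $2/GPU-hour is an assumed round number.

gpus = 10_000
days_per_run = 10
gpu_hours_per_run = gpus * days_per_run * 24            # 2.4 million GPU-hours
cost_per_gpu_hour = 2.0                                  # assumed, USD
cost_per_run = gpu_hours_per_run * cost_per_gpu_hour     # ~$4.8M per run

runs_needed = 1_000                                      # "thousands of attempts"
total_cost = cost_per_run * runs_needed                  # ~$4.8B, before scaling up past this size

print(f"{gpu_hours_per_run:,.0f} GPU-hours per run, ~${cost_per_run/1e6:.1f}M each")
print(f"~${total_cost/1e9:.1f}B for {runs_needed:,} runs")
```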
I mean, this sounds like a brute force attack on the problem, something that ought not to be very efficient. If our AGI is roughly as smart as the 75th percentile of human engineers it might still just hit its head against a sufficiently hard problem, even in parallel, and especially if we give it the wrong prompt by assuming that the solution will be an extension of current approaches rather than a new one that requires going back before you can go forward.
You’re correct. In the narrow domain of designing AI architectures you need the system to be at least 1.01 times as good as a human. You want more gain than that because there is a cost to running the system.
Getting gain seems to be trivially easy at least for the types of AI design tasks this has been tried on. Humans are bad at designing network architectures and activation functions.
I theorize that a machine could study the data flows from snapshots of an AI architecture attempting tasks on the AGI/ASI gym, and use that information, as well as all previous results, to design better architectures.
The last bit is where I expect enormous gain, because the training data set will exceed the amount of data humans can take in in a lifetime, and you would obviously have many smaller “training exercises” to design small systems to build up a general ability. (Enormous early gain. Eventually architectures are going to approach the limits allowed by the underlying compute and datasets)
I disagree with the last two paragraphs. First, global nuclear war implies the destruction of civilized society, and bunkers can do very little to mitigate this at scale. Global supply chains, and especially food production, are the important factor. To restructure the food production and transportation of an entire country in the situation after a nuclear war, AGI would have to come up with biotechnology bordering on magic from our point of view.
Even if building bunkers was a good idea, it’s questionable if that’s an area where AGI helps a lot compared to many other areas. Same for ICBMs: I don’t see how AGI changes the defensive/offensive calculation much.
To use the Opium Wars scenario: AGI enables a high degree of social control and influence. My expectation is that one party having a decisive AI advantage (implying also a wealth advantage) in such a situation may not need to use violence at all. Rather, it may be feasible to gain enough political influence to achieve most goals (including such a mundane goal as making people and governments tolerate the drug trade).
Hi Herb. I think the crux here is that you are not interpreting the first sentence of the second-to-last paragraph the way I am.
AGI smart enough to perform basic industrial tasks
I mean all industrial tasks; it’s a general system, capable of learning when it makes a mistake. All industrial tasks means all tasks required to build robots, which means all tasks required to build sensors and gearboxes and wiring harnesses and milled parts and motors, which means all tasks required to build microchips and metal ingots and sensors... all the way down the supply chain to base mining and the deployment of solar panels.
Generality means all these tasks can be handled by (separate isolated instances of) one system which is benefiting from having initially mined all of human knowledge, like currently demonstrated systems.
This means that bunkers do work—there are exponential numbers of robots. An enemy with 1000 nuclear warheads would be facing a country that potentially can have every square kilometer covered with surface factories. Automatic redundancy would be possible—it would be possible to pay a small inefficiency cost and not have any one step of the supply chain concentrated in any one location across a country’s territory. And any damage can be repaired simply by ordering the manufacture of more radiation-resistant robots to clear the rubble; then construction machines come and rebuild everything that was destroyed by emplacing prefab modules built by other factories.
Food obviously comes from indoor hydroponics, which are just another factory made module.
If you interpret it this way, does your disagreement remain?
If you doubt this is possible, can you explain, with technical details, why this form of generality is not possible in the near future? If you believe it is not possible, how do you explain current demonstrated generality?
The additional delta on LLMs is you have trained on all the video in the world, which means the AI system has knowledge about the general policies humans use when facing tool using tasks, and then after that you have refined the AI systems with many thousands of hours of RL training on actual industrial tasks, first in a simulation, then in the real world.
For that path, it takes AI that’s capable enough for all industrial (and non-industrial) tasks. But you also need all the physical plant (both the factories and the compute power to distribute to the tasks) that the AI uses to perform these industrial tasks.
I think it’s closer to 20 years than 5 until the capabilities are developed, and possibly longer until the knowledge/techniques for the necessary manufacturing variants can be adapted to non-human production. And it’s easy to underestimate how long it takes just to build stuff, even if automated.
It’s not clear it’s POSSIBLE to convert enough stuff without breaking humanity badly enough that they revolt and destroy most things. Whether that kills everyone, reverts the world to the bronze age, or actually gets control of the AI is deeply hard to predict. It does seem clear that converting that much matter won’t be quick.
THAT is a crux. Whether any component of it is exponential or logistic is VERY hard to know until you get close to the inflection. Absent “sufficiently advanced technology” like general-purpose nanotech (able to mine and refine, or convert existing materials into robots & factories in very short time), there is a limit to how parallel the building of the AI-friendly world can be, and a limit to how fast it can convert.
How severe do you think the logistic growth penalties are? I kinda mentally imagine a world where all desert and similar types of land are covered in solar. Deeper mines than humans normally dig are supplying the minerals for further production. Many mines are underwater. The limit at that point is the environment: you have exhausted the available land for more energy acquisition and are limited in what you can do safely without damaging the biosphere.
Somewhere around that point you shift to lunar factories which are in an exponential growth phase until the lunar surface is covered.
Basically I don’t see the penalties being relevant. There’s enough production to break geopolitical power deadlocks, and enough for a world of “everyone gets their needs and most luxury wants met”, assuming approximately 10 billion humans. The fact that further expansion may slow down isn’t relevant on a human scale.
Do you mean “when can we distinguish an exponential from a logistic curve”? I dunno, but I do know that many things which look exponential turn out to slow down after a finite (and small) number of doublings.
No, I mean what I typed. Try my toy model: factories driven by AGI expanding across the Earth or Moon. A logistic growth curve explicitly applies a penalty that scales with scale. When do you think this matters, and by how much?
If, say, at 50 percent lunar coverage the penalty is 10 percent, you have a case of basically exponential growth.
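Here’s the toy model in runnable form, with the penalty function as the knob under discussion; the growth rate and the 10%-at-50% penalty are illustrative assumptions, not predictions:

```python
# Toy model of AGI-driven factories expanding toward a resource ceiling K
# (e.g. usable lunar surface). A standard logistic penalty of (1 - N/K) halves
# growth at 50% coverage, while the milder penalty suggested above (10% at 50%)
# leaves growth essentially exponential until near the ceiling.

def grow(penalty, rate=0.7, K=1.0, N0=1e-6, steps=40):
    """Discrete-time growth: each step, N grows by rate * N * penalty(N/K), capped at K."""
    N = N0
    trajectory = [N]
    for _ in range(steps):
        N = min(K, N + rate * N * penalty(N / K))
        trajectory.append(N)
    return trajectory

logistic_penalty = lambda x: 1.0 - x          # standard logistic slowdown
mild_penalty     = lambda x: 1.0 - 0.2 * x    # ~10% slowdown at x = 0.5

for name, pen in [("logistic", logistic_penalty), ("mild", mild_penalty)]:
    traj = grow(pen)
    half = next(i for i, n in enumerate(traj) if n >= 0.5)
    full = next((i for i, n in enumerate(traj) if n >= 0.99), ">40")
    print(f"{name}: reaches 50% of ceiling at step {half}, 99% at step {full}")
```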
I agree all of these things are possible and expect such capabilities to develop eventually. I also strongly agree with your premise that having more advanced AI can be a big geopolitical advantage, which means arms races are an issue. However, 5-20 years is not very long. It may be enough to have human-level AGI, but I don’t expect such an AGI will enable feeding an entire country on hydroponics in the event of global nuclear war.
In any case, that’s not even relevant to my point, which is that while AI does enable nuclear bunkers, defense against ICBMs, and hydroponics, in the short term it enables other things a lot more, including things that matter geopolitically. For a country with a large advantage in AI capabilities pursuing geopolitical goals, it seems a bad choice to use nuclear weapons, or to rely on precautions against such weapons in the hope of being better off in the aftermath.
Rather, I expect the main geopolitically relevant advantages of AI superiority to be economic and political power, which gives advantage both domestically (ability to organize) as well as for influencing geopolitical rivals. I think resorting to military power (let alone nuclear war) will not be the best use of AI superiority. Economic power would arise from increased productivity due to better coordination, as well as the ability to surveil the population. Political power abroad would arise from the economic power, as well as from collecting data about citizens and using it for predicting their sentiments, as well as propaganda. AI superiority strongly benefits from having meaningful data about the world and other actors, as well as good economy and stable supply chains. These things go out the window in a war. I also expect war to be a lot less politically viable than using the other advantages of AI, which matters.
5-20 years is to the date of the first general model that can be asked to do most robotics tasks and have a decent chance of accomplishing them zero-shot in the real world. And for the rest, the backend simulator learns from unexpected outcomes, the model trains on the updated simulator, and eventually it succeeds in the real world as well.
It is also incremental: once the model can do a task at all in the real world, the simulator continues to update, and in training the model continues to learn policies that perform well on the updated sim, increasing real-world performance until it is close to the maximum possible given the goal heuristic and hardware limitations.
Once said model exists, exponential growth is inevitable but I am not claiming instant hydroponics or anything else.
Also note that the exponential growth may have a doubling time on the order of months to years, because of payback delays. (Every power generator first has to pay back the energy used to build it, which with solar is kinda slow; every factory first has to pay back the machine time used to build all the machines in the factory; etc.)
So it only becomes crazy once the base value being doubled is large.
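A minimal worked example of how the payback delay bounds the doubling time; the 1.5-year solar energy-payback figure is an assumption for illustration, not a sourced number:

```python
# If the entire output of the existing fleet is reinvested in building more of itself,
# fleet size N grows as dN/dt = N / T_payback, so the doubling time is T_payback * ln(2).
# Reinvesting only a fraction f of output stretches this to T_payback * ln(2) / f.

import math

T_payback_years = 1.5   # assumed energy payback time of a solar plant (illustrative)

for f in (1.0, 0.5, 0.25):   # fraction of output reinvested in more capacity
    doubling = T_payback_years * math.log(2) / f
    print(f"reinvest {f:.0%}: doubling time ~ {doubling:.1f} years")
```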
As for the rest: I agree, economic superiority is what you want in the immediate future. I am just saying that “don’t build ASI or we nuke!” threats have to be dealt with, and in the long term, “we refuse to build ASI and we feel safe with our nuclear arsenal” is a losing strategy.
It will still take a while for AGI to get to that point, and Chinese and American coordination would pretty easily disrupt any rivals who try for it: they would essentially be terrorist actors endangering the world, and the appropriate sanctions would be handed out.
Shouting Boo just delays it a little and makes it more likely to be good instead of bad. (Currently is it quite likely to be bad).
I wouldn’t be nearly as confident as a lot of LWers here, and in particular I suspect this depends on some details and assumptions that aren’t made explicit here.
My biggest counterargument to the case that AI progress should be slowed down comes from an observation made by porby about the fundamental lack of a property we theorize AI systems will have, which is also the one foundational assumption around AI risk:
Instrumental convergence, and its corollaries like power-seeking.
The important point is that current and most plausible future AI systems don’t have incentives to learn instrumental goals, and the type of AI that has enough space and few enough constraints to learn instrumental goals, like RL with sufficiently unconstrained action spaces, is essentially useless for capabilities today; the strongest RL agents use non-instrumental world models.
Thus, instrumental convergence for AI systems is fundamentally wrong. Given that this is the foundational assumption behind why superhuman AI systems pose any risk we couldn’t handle, a lot of other arguments (for why we might want to slow down AI, why the alignment problem is hard, and much other discussion in the AI governance and technical safety spaces, especially on LW) become unsound, because they’re reasoning from an uncertain foundation, and at worst are reasoning from a false premise to reach many false conclusions, like the argument that we should reduce AI progress.
Fundamentally, instrumental convergence being wrong would demand pretty vast changes to how we approach the AI topic, from alignment to safety and much more.
To be clear, the fact that I could only find a flaw within AI risk arguments because they were founded on false premises is actually better than many other failure modes, because it at least shows fundamentally strong, locally valid reasoning on LW, rather than motivated reasoning or other biases that transform true statements into false ones.
One particular implication of the insight is that OpenAI and Anthropic were fundamentally right in their AI alignment plans, because they have managed to keep instrumental convergence from being incentivized, and in particular LLMs can be extremely capable without being arbitrarily capable or having instrumental world models, even given resources.
I learned about the observation from this post below:
Porby talks about why AI isn’t incentivized to learn instrumental goals, but given how much this assumption gets used in AI discourse, sometimes implicitly, I think it’s of great importance that instrumental convergence is likely wrong.
I have other disagreements, but this is my deepest disagreement with your model (and with other models on which AI is especially dangerous).
EDIT: A new post on instrumental convergence came out, and it showed that many of the inferences made weren’t just unsound, but invalid, and in particular Nick Bostrom’s Superintelligence was wildly invalid in applying instrumental convergence to strong conclusions on AI risk.
I’m glad I asked, that was helpful! I agree that instrumental convergence is a huge crux; if I were convinced that e.g. it wasn’t going to happen until 15 years from now, and/or that the kinds of systems that might instrumentally converge were always going to be less economically/militarily/etc. competitive than other kinds of systems, that would indeed be a huge revolution in my thought and would completely change the way I think about AI and AI risks, and I’d become much more optimistic.
I’d especially read footnote 3, because it gave me a very important observation for why instrumental convergence is actually bad for capabilities, or at least not obviously good for capabilities and incentivized, especially with a lot of space to roam:
This also means that minimal-instrumentality training objectives may suffer from reduced capability compared to an optimization process where you had more open, but still correctly specified, bounds. This seems like a necessary tradeoff in a context where we don’t know how to correctly specify bounds.
Fortunately, this seems to still apply to capabilities at the moment- the expected result for using RL in a sufficiently unconstrained environment often ranges from “complete failure” to “insane useless crap.” It’s notable that some of the strongest RL agents are built off of a foundation of noninstrumental world models.
I don’t quite get this. I think sure, current models don’t have instrumental convergence because they’re not general and don’t have all-encompassing world models that include themselves as objects in the world. But people are still working on trying to build AGI. I wouldn’t have a problem with making ever smarter protein folders, or chip designers, or chess players. Such specialised AI will keep doing one and only one thing. I’m not entirely sure about ever smarter LLMs, as it seems like they’d get human-ish eventually; but since the goal of the LLM is to imitate humans, then I also think they wouldn’t get, by definition, qualitatively superhuman in their output (though they could be quantitatively superhuman in the sheer speed at which they can work). But I could see the LLM-simulated personas being instrumentally convergent at some point.
However, if someone succeeds at building AGI, and depending on what its architecture is, that doesn’t need to be true any more. People dream of AGI because they want it to automate work or to take over technological development, but by definition, that sort of usefulness belongs to something that can plan and pursue goals in the world, which means it has the potential to be instrumentally convergent. If the idea is “then let’s just not build AGI”, I 100% agree, but I don’t think all of the AI industry right now does.
The point I’m trying to make is that the types of AI that are best for capabilities, including some of the more general capabilities like, say, automating alignment research, also don’t have that much space for instrumental convergence. That matters because it makes it very easy to get alignment research for free, as well as safe AI by default, without disturbing capabilities research: the most unconstrained power-seeking AIs are very incapable, and thus in practice the most capable AIs, the ones that could solve the full problem of alignment and safety, are safe by default, because instrumental convergence currently harms capabilities.
In essence, the AI systems that are both capable enough to do alignment and safety research on future AI systems and are instrumentally convergent form a much smaller subset of capable AIs, and leaving enough space for extreme instrumental convergence harms capabilities today, so it isn’t incentivized.
This matters because it’s much, much easier to bootstrap alignment and safety, and it means that OpenAI/Anthropic’s plans of automating alignment research have a good chance of working.
It’s not that we cannot lose or go extinct, but that it isn’t the default anymore, and in particular this means that a lot of changes to how we do alignment research are necessary, as a first step. But the instrumental convergence assumption runs so deep that even if it only turns out to be wrong up until a much later point of AI capability increase, that matters a lot more than you might think.
EDIT: A footnote in porby’s post actually expresses it a bit cleaner than I said it, so here goes:
This also means that minimal-instrumentality training objectives may suffer from reduced capability compared to an optimization process where you had more open, but still correctly specified, bounds. This seems like a necessary tradeoff in a context where we don’t know how to correctly specify bounds.
Fortunately, this seems to still apply to capabilities at the moment- the expected result for using RL in a sufficiently unconstrained environment often ranges from “complete failure” to “insane useless crap.” It’s notable that some of the strongest RL agents are built off of a foundation of noninstrumental world models.
The fact that instrumental goals with very few constraints are actually useless compared to non-instrumentally-convergent models is really helpful, as it means that a capable system is inherently easy to align and safe by default; equivalently, there is a strong anti-correlation between capabilities and instrumentally convergent goals.
I don’t understand why it helps that much if instrumental convergence isn’t expected. All it takes is one actor to deliberately make a bad agentic AI and you have all the problems, but with no “free energy” being taken out by slightly bad, less powerful AI beforehand that would be there if instrumental convergence happened. Slow takeoff seems to me to make much more of a difference.
I actually don’t think the distinction between slow and fast takeoff matters too much here, at least compared to what the lack of instrumental convergence offers us. The important part here is that AI misuse is a real problem, but this is importantly much more solvable, because misuse isn’t as convergent as the hypothesized instrumental convergence is. It matters, but this is a problem that relies on drastically different methods, and importantly still reduces the danger expected from AI.
Alright, I’ve given a comment on why I think AI risk from misalignment is very unlikely here, and also given an example of an epistemic error @Eliezer Yudkowsky made in that post.
This also implicitly means that delaying it is not nearly as good as LWers thought in the past like Nate Soares and Eliezer Yudkowsky.
It’s a long comment, so do try to read it in full:
Send me $1000 now, I’ll send you $1,020+interest in January 2030, where interest is calculated to match whatever I would have gotten by keeping my $1,020 in the S&P 500 the whole time?
(Unless you voluntarily forfeit by 2030, having judged that I was right.)
I specified 25:1 to 200:1 odds, depending on the terms. The implication is that terms more favourable to me will be settled closer to 25:1 and terms more favourable to you will be settled closer to 200:1. i.e. $25k:$1k to $200k:$1k.
No like, what exactly do you mean by 25:1 to 200:1 odds? Who pays who what, when? Sorry if I’m being dumb here. Normally when I make bets like this, it looks something like what I proposed. The reason being that if I win the bet, money will be almost useless to me, so it only makes sense (barely) for me to do it if I get paid up front, and then pay back with interest later.
As for definition of singularity, look, you’ll know if it’s happened if it happens, that’s why I’m happy to just let you be the judge on Jan 1 2030. This is a bit favorable to you but that’s OK by me.
Wait, you want me to give you 25:1 odds in the sense of, you give me $1 now and then in 2030 if no singularity I give you $25? That’s crazy, why would I ever accept that? I’d only accept that if I was, like, 96% confident in singularity by 2030!
… or do you want me to send you money now, which you will pay back 25-fold in 2030 if the singularity has happened? That’s equally silly though for a different reason, namely that money is much much much less valuable to me after the singularity than before.
Wait, you want me to give you 25:1 odds in the sense of, you give me $1 now and then in 2030 if no singularity I give you $25? That’s crazy, why would I ever accept that? I’d only accept that if I was, like, 96% confident in singularity by 2030!
Did you read the comments in the linked example? Multiple LW users accepted bets at 50:1 odds on a 5-year time horizon; an offer of 25:1 odds over ~6.5 years is far less ‘crazy’ by any metric.
Or is there something you don’t understand about the concept of odds? It seems like there’s some gap here causing you a lot of confusion.
Anyway, if you’re not at least 96% confident then of course don’t take the bet.
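For anyone following along, the breakeven arithmetic behind that 96% figure is just odds/(odds+1):

```python
# A bettor who wins $1 if the event happens and pays $X if it doesn't breaks even
# only if their probability for the event is at least X/(X+1). This ignores the
# time value of money and the "money is worth less post-singularity" point raised
# above, both of which push the required confidence even higher.

def breakeven_credence(odds_against: float) -> float:
    return odds_against / (odds_against + 1)

for odds in (25, 50, 200):
    print(f"{odds}:1 -> breakeven credence ~ {breakeven_credence(odds):.1%}")
# 25:1 -> ~96.2%, 50:1 -> ~98.0%, 200:1 -> ~99.5%
```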
It’s a pretty crazy offer. It would require me to be supremely confident in singularity by 2030, way more confident than my words indicated, PLUS it is dominated by me just taking out a loan. By a huge margin. (remember money is much less valuable to me in worlds where I lose) Previously I’ve made bets with people about singularity by 2030 and we used resolution criteria along the lines of what I proposed, so I initially thought that’s what you had in mind.
There just seems to be something a bit odd about the way you understand probability. A 96% chance of something happening, or not happening, is pretty much a normal everyday situation.
e.g. For those living in an older condo or apartment building with 3 or more elevators, the chance of all of their elevators working on any given day is in that range.
For those who own an old car, the chances of nothing malfunctioning on a road trip is in that range.
For those that have bought many LED lightbulbs in batches, the chances for none of them to prematurely fail after the first few months is in that range, as many will attest.
Yes, I have more than 96% credence in lots of things. But it’s crazy to expect me to have it in singularity by 2030, even after I said that singularity would probably happen by 2030.
It’s a pretty crazy offer. It would require me to be supremely confident in singularity by 2030,...
I read ‘supremely confident’ as implying an extraordinary, exceptional, level of confidence, hence my previous comment about an odd understanding of probability. If you didn’t mean to imply it, then that’s fine.
Anyways, you are free to reject or ignore any offered bets without needing to write any justification, that’s a well established norm on LW.
… I’d be technically interested in the sense of “greater than human capabilities intelligence that is better at improving itself than humans and does so, driving technological advancement”, but I’m skeptical about all the other assumptions bundled into the term ‘singularity’. Though to be fair, that makes it easier to think about actually betting on.
Even if you buy the dial theory, it still doesn’t make sense to shout Yay Progress on the topic of AGI. Singularity is happening this decade, maybe next, whether we shout Yay or Boo. Shouting Boo just delays it a little and makes it more likely to be good instead of bad. (Currently is it quite likely to be bad).
Consider that not everyone shares your view that the Singularity is happening soon, or that it will be better if delayed.
Consider also that not everyone would believe, upon having the Singularity explained to them, that it would be a good thing.
There comes a point where the One Dial theory and similar acrobatics are just ways to rationalize away the fact that you’re trying to push everyone in a direction that they would hate or consider too dangerous because you personally want to see what is at the end of the road. That’s just good old manipulation, but arguments like the dial thing allow you to feel better about it and think that you’re still pursuing the good option, by narrowing the options down to two so you can pit the one you like against a purposefully made horrible strawman.
I very much agree with you here and in your AGI deployment as an act of aggression post; the overwhelming majority of humans do not want AGI/ASI and its straightforward consequences (total human technological unemployment and concomitant abyssal social/economical disempowerment), regardless of what paradisaical promises (for which there is no recourse if they are not granted: economically useless humans can’t go on strike, etc) are promised them.
The value (this is synonymous with “scarcity”) of human intelligence and labor output has been a foundation of every human social and economic system, from hunter-gatherer groups to highly-advanced technological societies. It is the bedrock onto which humanity has built cooperation, benevolence, compassion, and care. The value of human intelligence and labor output gives humans agency, meaning, decision-making power, and bargaining power towards each other and over corporations / governments. Beneficence flows from this general assumption of human labor value/scarcity.
So far, technological development has left this bedrock intact, even if it’s been bumpy (I was gonna say “rocky” but that’s a mixed metaphor for sure) on the surface. The bedrock’s still been there after the smoke cleared, time and time again. Comparing opponents of AGI/ASI with Luddites or the Unabomber, accusing them of being technophobes, or insinuating that they would have wanted to stop the industrial revolution is wildly specious: unlike every other invention or technological development, successful AGI/ASI development will convert this bedrock into sand. So far, technological development has been wildly beneficial for humanity: technological development that has no need for humans is not likely to hold to that record. The OpenAI mission is literally to create “highly autonomous systems that outperform humans at most economically valuable work”, a flowery way to say “make human labor output worthless”. Fruitful cooperation between AGI/ASI and humans is unlikely to endure since at some point the transaction costs (humans don’t have APIs, are slow, need to sleep/eat/rest, etc) outweigh whatever benefits of cooperation.
There’s been an significant effort to avoid reckoning with or acknowledging these aspects of AGI/ASI (again, AGI/ASI is the explicit goal of AI labs like OpenAI; not autoregressive language models) and those likely (if not explicitly sought out) consequences in public-facing discourse of doomers vs accelerationists. As much as it pains me to come to this conclusion it really does feel like there’s a pervasive gentleman’s agreement to avoid saying “the goal is literally to make systems capable of bringing about total technological unemployment”. This is not aligned with the goals/desires/lives of the overwhelming majority of humanity, and the deception deployed to avoid widespread public realization of this sickens me.
I wrote a handful of comments on the EA forum about this as well.
I have read your comments on the EA forum and the points do resonate with me.
As a layman, I do have a personal distrust of the (what I’d call) anti-human ideologies driving the actors you refer to, and I agree that a majority of people do as well. It is hard to feel much joy in being extinct and replaced by synthetic beings, probably in a way most would characterize as dumb (clippy being the extreme).
I also believe that fundamentally changing the human subjective experience (radical bioengineering, or uploading to an extent) in order to erase the ability to suffer in general (not just in medical cases like depression), as I have seen brought up in futurist circles, is also akin to death. I think it could possibly be a somewhat literal death, where my conscious experience actually stops if radical changes occur, but I am completely uneducated and unqualified on how consciousness works.
I think that a hypothetical me, even with my memories, who is physically unable to experience any negative emotions would be philosophically dead. It would be unable to learn or reflect, and its subjective experience would be so radically different from mine, and from any future biological me should I grow older naturally, that I do not think memories alone would be enough to keep my identity. To my awareness, the majority of people would think similarly and agree that there is value ascribed to our human nature, including its limitations, which has been reinforced by our media and culture. Though whether this attachment is purely a product of coping, I do not know. What I do know is that it is the current reality for every functional human being now and has been for thousands of years. I believe people would prefer sticking with it to relinquishing it for vague promises of ascended consciousness. I think this is somewhat supported by my subjective observation that for a lot of people who want a posthuman existence and what it entails, their end goal seems to often come back to creating simulations they themselves can live in normally.
I’m curious though if you have any hopes for the situation regarding the nebulous motivations of some AGI researchers, especially as AI and its risks have recently started becoming “mainstream”. Do you expect to see changes and their views challenged? My question is loaded, but it seems you are already invested in its answer.
I think there’s a case to be made for AGI/ASI development and deployment as a “hostis humani generis” act; and others have made the case as well. I am confused (and let’s be honest, increasingly aghast) as to why AI doomers rarely try to press this angle in their debates/public-facing writings.
To me it feels like AI doomers have been asleep on sentry duty, and I’m not exactly sure why. My best guesses look somewhat like “some level of agreement with the possible benefits of AGI/ASI” or “a belief that AGI/ASI is overwhelmingly inevitable and so it’s better not to show any sign of adversariality towards those developing it, so as to best influence them to mind safety”, but this is quite speculative on my part. I think LW/EA stuff inculcates in many a grievous and pervasive fear of upsetting AGI accelerationists/researchers/labs (fear of retaliatory paperclipping? fear of losing mostly illusory leverage and influence? getting memed into the idea that AGI/ASI is inevitable and unstoppable?).
It seems to me like people whose primary tool of action/thinking/orienting is some sort of scientific/truth-finding rational system will inevitably lose against groups of doggedly motivated, strategically and technically competent, cunning unilateralists who gleefully use deceit/misdirection to prevent normies from catching on to what they’re doing, and who are motivated by fundamentalist pseudo-religious impulses (“the prospect of immortality, of solving philosophy”).
I feel like this foundational dissonance makes AI doomers come across as confused fawny wordcels or hectoring cultists whenever they face AGI accelerationists / AI risk deniers (who in contrast tend to come across as open/frank/honest/aligned/of action/assertive/doers/etc). This vibe is really not conducive to convincing people of the risks/consequences of AGI/ASI.
I do have hopes but they feel kinda gated on “AI doomers” being many orders of magnitudes more honest, unflinchingly open, and unflatteringly frank about the ideologies that motivate AGI/ASI researchers and the intended/likely consequences of their success—even if “alignment/control” gets solved—of total technological unemployment and consequential social/economic human disempowerment, instead of continuing to treat AGI/ASI as some sort of neutral(if not outright necessary)-but-highly-risky technology like rockets or nukes or recombinant DNA technology. Also gated on explicitly countering the contentions that AGI/ASI—even if aligned—is inevitable/necessary/good or that China is a viable contender in this omnicidal race or that we need AGI/ASI to fight climate change or asteroids or pandemics or all the other (sorry for being profane) bullshit that gets trotted out to justify AGI/ASI development. And gated on explicitly saying that AGI/ASI accelerationists are transhumanist fundamentalists who are willing to sacrifice the entire human species on the altar of their ideology.
I don’t think AGI/ASI is inherently inevitable, but as long as AI doomers shy away from explaining that the AGI/ASI labs are specifically seeking (and likely soon to succeed) to build systems strong enough to turn the yet-unbroken (from hunter-gatherer bands to July 2023) bedrock assumption of human society (“human labor is irreplaceably valuable”) into fine sand, I think there’s little hope of stopping AGI/ASI development.
Yup. These precise points were also the main argument of my other post on a post-AGI world, the benevolence of the butcher.
Also due to the AI discourse I’ve actually ended up learning more about the original Luddites and, hear hear, they actually weren’t the fanatical, reactionary anti-technology ignorant peasants that popular history mainly portrays them as. They were mostly workers who were angry about the way the machines were being used, not to make labour easier and safer, but to squeeze more profit out of less skilled workers to make lower quality products which in the end left almost everyone involved worse off except for the ones who owned the factories. That’s I think something we can relate to even now, and I’d say is even more important in the case of AGI. The risk that it simply ends up being owned by the few who create it leading thus to a total concentration of the productive power of humanity isn’t immaterial, in fact it looks like the default outcome.
Yes, this is why I’ve been frustrated (and honestly aghast, given timelines) at the popular focus on AI doom and paperclips rather than the fact that this is the default (if not nigh-unavoidable) outcome of AGI/ASI, even if “alignment” gets solved. Comparisons with industrialization and other technological developments are specious because none of them had the potential to do anything close to this.
I think the doom narrative is still worth bringing up because this is what these people are risking for all of us in the pursuit of essentially conquering the world and/or personal immortality. That’s the level of insane supervillainy that this whole situation actually translates to. Just because they don’t think they’ll fail doesn’t mean they’re not likely to.
I’m also disappointed that the political left is dropping the ball so hard on opposing AI, turning to either contradictory “it’s really stupid, just a stochastic parrot, and also threatens our jobs somehow” statements, or focusing on details of its behaviour. There’s probably something deeper to say about capitalists openly making a bid to turn labour itself into capital.
First, let me say I appreciate you expressing your viewpoint and it does strike an emotional chord with me. With that said,
Wouldn’t an important invention such as the machine gun, or obviously fission weapons, fit your argument pattern? You could make a reasonable case that, like a world with technological unemployment, worlds where humans are cheap to slaughter are overall worse; that if you could have coordinated with the world powers of the time to agree to an “automatic weapons moratorium,” it would have resulted in a better world.
The problem is Kaiser Wilhelm and other historical leaders are going to say “suuurrrreee”, agree to the deal, and you already know the nasty surprise any power honoring such a deal will face on the battlefield. (Or Stalin would have said “sureee” to such a deal on fission weapons, and we can assume would immediately renege and test the devices in secret, only announcing their existence with a preemptive first strike on the enemies of the USSR).
What’s different now? Is there a property about AGI/ASI that makes such international agreements more feasible?
To add one piece of information that may not be well known: I work on inference accelerator ASICs, and they are significantly simpler than GPUs. A large amount of Nvidia’s stack isn’t actually necessary if pure AI perf/training is your goal. So the only real bottleneck for monitoring AI accelerators is that the highest-end wafer-processing equipment currently comes exclusively from ASML, creating a monitorable supply chain for now. All bets are off if major superpowers build their own domestic equivalents, which they would be strongly incentivized to do in worlds where we know AGI is possible and have built working examples.
I might be misunderstanding your point but I wasn’t trying to argue that it’s easy (or even feasible) to make robust international agreements not to develop AGI.
The machine gun and nuclear weapons don’t, AFAICT, fit my argument pattern. Powerful weapons like those certainly make humans easier to slaughter on industrial scales, but since humans are necessary to keep economies and industries and militaries running, military/political leaders have robust incentives to prevent large-scale slaughter of their own citizens and soldiers (and so do their adversaries for their own people). Which, OK, this can get done by deterrence or arms-control agreement, but it’s also started arms races, preemptive strikes, and wars hot and cold. Nevertheless, the bedrock of “human labor/intelligence is valuable/scarce” creates strong restoring forces towards “don’t senselessly slaughter tons of people”. It is possible to create robust-ish (pretty sure Russia’s cheating with them Novichoks) international agreements against weapons that are better at senseless civilian slaughter than at achieving military objectives; chemical weapons are the notable case.
The salient threat to me isn’t “AGI gives us better ways to kill people” (society has been coping remarkably well with better ways to kill people, up to and including a fleet of portable stars that can be dispatched to vaporize cities in the time it took me to write this comment), the salient threat to me (which seems inherent to the development of AGI/ASI) is “AGI renders the overwhelming majority of humanity economically/socially irrelevant, and therefore the overwhelming majority of humanity loses all agency, meaning, decision-making power, and bargaining power, and is vulnerable to inescapable and abyssal oppression if not outright killing because there’s no longer any robust incentives to keep them alive/happy/productive”.
I agree technological unemployment is a huge potential problem, though as always the actual problem is aging. I think what people miss is that they treat the pool of tasks to be done as fixed: you don’t need more than about one vehicle per person, or one dwelling, or n hours per year of medical care, or so much food, etc. And they neglect that AGI clearly cannot be trusted to do many things regardless of capabilities; there would need to be a fleet of human overseers armed with advanced tools.
It’s just, what do you do for a 50-year-old truck driver? Expecting them to retrain as an O’Neill colony construction supervisor doesn’t make sense unless you can treat their aging and restore neural plasticity.
Which is itself an immense megaproject that isn’t being done. I bet aging research would go a lot faster if we had the functional equivalent of a billion people working on it, with all billion informed of everyone else’s research outcomes.
Where I was going with the analogy was much simpler. You don’t get a choice. In the immediate term, agreeing not to build machine guns and honoring it means you face a rat-tat-tat when it matters most. Similarly for fission weapons: obviously your enemy is going to build a nuclear arsenal and try to vaporize all your key cities in a surprise attack.
The issues you mention happen long term. In the short term you can use AGI to automate many key tasks and become vastly more economically and militarily powerful.
I think this is a typical LW bias. No, I don’t enjoy the idea of death. But I would rather live a long and reasonably happy life in a human friendly world and then die when I am old than starve to death as one of the 7.9 billion casualties of the AGI Wars. The idea that there’s some sliver of a chance that in some future immortality is on the table for you, personally, is a delusion. I think life extension is very possible, and true immortality is not. But as things are either would only be on the table for, like, the CEOs of the big AI companies who got their biomarkers registered as part of the alignment protocol so that their product obeys them. Not for you. You’re the peasant whose blood, if necessary, cyber-Elizabeth Bathory will use for her rejuvenation rituals.
That’s never happened historically, and aging treatments aren’t immortality; they’re just approximately a life expectancy of 10k years. Do you know who is richer than any CEO you can name? Medicare. I bet they would like to stop paying all these medical bills, which would be the case if treated patients had approximately the morbidity rate of young adults.
You also need such treatments to be given at large scale to find and correct the edge cases. A rejuvenation treatment “beta tester” is exactly what it sounds like: you will have a higher risk of death but get earlier access. Going to need a lot of beta testers.
The rational, data driven belief is that aging is treatable and that ASI systems with the cognitive capacity to take into account more variables than humans are mentally capable of could be built to systematically attack the problem. Doesn’t mean it will help anyone alive today, there are no guarantees. Because automated systems found whatever treatments are possible, automated systems can deliver the same treatments at low cost.
If you don’t think this is a reasonable conclusion, perhaps you could go into your reasoning. Arguments like you made above are unconvincing.
While it is true that certain esoteric treatments for aging, like young blood transfusions, are inherently limited in who can benefit, they don’t even work that well, and de-aged hematopoietic stem cells can be generated in automated laboratories and would be a real treatment everyone can benefit from.
The wealthy are not powerful enough to “hoard” treatments, because Medicare et al represent the government, which has a monopoly on violence and incentives to not allow such hoarding.
That’s naive. If a private actor has obedient ASI, they also have a monopoly on violence now. If labour has become superfluous, states have lost all incentive to care about the opinions of their people.
I think worlds with the tools to treat most causes of human death ranks strictly higher than a world without those tools. In the same way that a world with running water ranks above worlds without it. Even today not everyone benefits from running water. If you could go back in time would you campaign against developing pipes and pumps because you believed only the rich would ever have running water? (Which was true for a period of time)
I would campaign against lead pipes, and support the Goths in destroying Rome, which likely improved human futures over an alternative of widespread lead piping.
Running water doesn’t create the conditions to permanently disempower almost everyone, AGI does. What I’m talking about isn’t a situation in which initially only the rich benefit but then the tech gets cheaper and trickles down. It’s a permanent trap that destroys democracy and capitalism as we know them.
There is also more than one dial, and if one party turns theirs up enough, it’s a choice between “turn yours up or lose.” Historical examples such as the outcomes for China during the Opium Wars are what happens when you restrict progress. China did exactly what Zvi is talking about—they had material advantages and had gunpowder approximately 270 years (!) before the Europeans first used it. Later on, it did not go well for them.
The relative advantage of having AGI when other parties don’t is exponential, not linear. For example, during the Opium Wars, the Chinese had ships with cannon and were not outnumbered thousands to one. A party with AGI and exponential numbers of manufacturing and mining robots could easily produce thousands of times the possible industrial output of other countries during wartime, and since each vehicle is automated there is no bottleneck of pilots or crew.
To prove there is more than one dial: while the USA delays renewable energy projects by an average wait time of 4 years, and has arbitrarily and capriciously decided to close applications for consideration (rather than do the sensible thing and streamline the review process), China is making it happen.
Others on LessWrong have posted the false theory that China is many years behind in the AI race, when in reality the gap is about a year.
Note that in worlds with AI delays that were coordinated with China somehow, there are additional parties who could potentially take advantage of the delay, as well as the obvious risk of defection. AGI is potentially far more useful and powerful than nuclear weapons ever were, and also provides a possible route to breaking the global stalemate with nuclear arms.
The actual reason countries hold each other hostage with nuclear arms is because their populations are crammed into dense surface cities that are easy to target and easy to kill many people with a few warheads. And knowledge is held in the heads of specialized humans and they are expensive to train and replace.
AGI smart enough to perform basic industrial tasks would allow a country to build a sufficient number of bunkers for the entire population (for proof this is possible, see Switzerland), greatly reducing the casualties in a nuclear war. And once an AGI learns a skill, the weights for that skill can be saved to a VCS, so as long as copies of the data exist, the skill is never lost from that point onwards. This reduces the vulnerability of a nation’s supply chain to losing some of its population.
Finally, the problem with Ronald Reagan’s “Star Wars” missile defense program was simply economics. The defensive weapons are much more expensive than ICBMs and easily overwhelmed by the enemy building additional cheap ICBMs with countermeasures. AGI-driven robotic manufacturing of ABMs provides a simple and clear way around this issue.
If this is true—or perceived to be true among nuclear strategy planners and those with the authority to issue a lawful launch order—it might create disturbingly (or delightfully, if you see this as a way to prevent the creation of AGI altogether) strong first-strike incentives for nuclear powers which don’t have AGI, don’t want to see their nuclear deterrent turned to dust, and don’t want to be put under the sword of an adversary’s AGI.
My idea too; I actually did mention that in a post: https://www.lesswrong.com/posts/otArJmyzWgfCxNZMt/agi-deployment-as-an-act-of-aggression.
The current economics “board” has every power with enough GDP to potentially build AGI/ASI protected by their own nuclear weapons or mutual defense treaties.
So the party considering a first strike has “national death and loss of all major cities” and “under the sword of the adversary” as its possible outcomes, as well as the always hopeful “maybe the adversary won’t actually attack, but will get what they want via international treaties.”
Put this way it looks more favorable not to push the button, let me know how your analysis differs.
I mean, do you realise though that “we must build AGI because it’s a race and whoever has AGI gets to swamp the world in its drone armies, backup the knowledge of its best and brightest in underground servers and hide its population in deep bunkers while outside the nukes fly and turn the planet into a radioactive wasteland” is NOT a great advertisement for why AGI is good?
That future sounds positively horrible. It sounds, frankly, so bleak that most people would reasonably prefer death to it. Hence, there’s not much to lose in pursuing the chance—however tiny—that we may just prevent AGI from existing at all. Because if unaligned AGI kills us, and aligned AGI leads to the world you described (which btw, I roughly agree it’d be that or something similarly dystopian), then maybe the world in which you get quickly offed by nanomachines and turned into paperclips is the lucky one.
Dr_s, I am not claiming such worlds are ideal. However, the side with the tasking consoles for a billion drones and many automated factories and bunkers is not helpless, even when someone else gets the same technology. Most likely such a human faction can crush any rampant ASI if it is detected early enough, with overwhelming force that is not significantly worse in technology level than what a rebel ASI can discover without very large research and industrial facilities.
And not helpless to nature. What long term human survival looks like is a world where humans populations can’t be effortlessly killed. This means bunkers, defense weapons, surrogate robots to send into dangerous situations, and obviously later in the future locations away from earth.
Individual long-term human survival looks the same. It looks like a human patient in an underground biolab, the surrounding air pure, inert nitrogen. All the failing parts of their body have been cut away, and the artificial organs are lined up in equipment racks with at least ternary redundancy. The organs using living cells are arranged in 2D planes in transparent cases so that every part can be monitored easily for infections and cancers.
The reason for this is that each organ, in order to fail, requires all redundant systems to fail at the same time, and the probability of all n redundant systems failing can be made low enough that the patient’s predicted lifespan is many thousands of years.
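To make the redundancy arithmetic concrete (a rough sketch with made-up numbers, assuming independent failures and prompt replacement of any failed unit):

$$P(\text{organ fails in a given year}) = p^{\,n}, \qquad \text{e.g. } p = 0.1,\ n = 3 \;\Rightarrow\; p^{3} = 10^{-3},$$

i.e. roughly a thousand years of expected operation for that organ from triple redundancy alone, and each additional redundant unit multiplies that figure by another factor of $1/p$.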
Similarly, humans living in a bunker have similar levels of protection. All defenses have to be defeated for them to be attacked, and it would require a direct hit from a high-yield warhead on the bunker site. And you obviously subdivide a country’s population into many such bunkers, most under areas that have no strategic value, making it infeasible for an enemy attack to significantly reduce the population.
My point is this rough sketch is based on the math. It’s based on a realistic view of reality, which wants to kill every individual currently living and will kill the human species if we fail to develop advanced technology by some hidden deadline.
That deadline might be 1 billion years until the sun expands or it might be 20 years until we face the first rampant asi.
I agree bunkers and biolabs that provide life support through vivisection aren’t the most elegant solution, I was trying to not assume any more future advances in technology than needed. With better tech there are better ways to do this.
Your proposed solution of “coordinate with our sworn enemies not to develop ASI and continue to restrict the development of any advanced technology in medicine” has the predicted outcome that we die, because we remain helpless to do anything about the things killing us. Either our sworn enemies defect on the agreement and develop ASI, or we all individually die of aging. Lose-lose.
First, China are not “our sworn enemies” and this mindset already takes things to the extreme. China has diverging interests which might compete with ours but it’s not literally ideologically hell-bent on destroying everyone else on the planet. This kind of extreme mindset is already toxic; if you posit that coordination is impossible, of course it is.
Second, if your only alternative to death is living in a literal Hell, then I think many would reasonably pick death. It also must be noted that, of the two deadlines you mention,
the natural one is VERY distant. Plenty of time to do something about it. The close deadline (and many other such deadlines) is of our own making, ironically in the rush to avoid some other kind of hypothetical danger that may be much further away. If we want to avoid being destroyed, learning how to not destroy ourselves would be an important first step.
First, China are not “our sworn enemies” and this mindset already takes things to the extreme.
I was referring to China, Russia, and to a lesser extent about 10 other countries that probably won’t have the budget to build ASI anytime soon. Both China and Russia hold the rest of the world at gunpoint with nuclear arsenals, as the USA and some European nations do. All are essentially one bad decision away from causing catastrophic damage.
Past attempts to come to some kind of deal to not build doomsday weapons to hold each other hostage have all failed; why would they succeed this time? What could happen as a result of all this campaigning for government regulation is that, like enriched nuclear material, ASIs above a certain level of capability may become the exclusive domain of governments, who will be unaccountable and choose safety measures based on their own opaque processes. In this scenario, instead of many tech companies competing, it’s large governments, who can marshal far more resources than any private company can get from investors. I’m not sure this delays ASI at all.
Notably, they also have not used nuclear weaponry recently, and overall nuclear stockpiles have decreased by 80 percent. Part of playing the grim game is not giving the other player reasons to go grim by defecting. The same goes for ASI: they can suppress each other, but if one defects, the consequence is that they can’t benefit.
The mutual result is actually quite stable with only government control, as their incentives against self-destruction are high.
Basically, only North Korea-esque nations have much incentive to defect in this scenario, but they would be suppressed by all extant powers. Since they would essentially be seen as terrorist speciciders, it’s hard to see why any actions against them wouldn’t be justified.
I think the crux of our disagreement is that you are using Eliezer’s model, where the first ASI you build is by default deceptive, always motivated in a way beneficial to itself, and also ridiculously intelligent, able to defeat what should be hard limits.
I am using a model where you can easily, with known software techniques, build ASIs that are useful and take up the “free energy” needed for a hostile ASI to win.
If, when we build the first ASI-class systems, it turns out Eliezer’s model is accurate, I will agree that grim games are rational and something we can do to delay the inevitable. (It might be stable for centuries, even, although eventually the game will fail and result in human extinction or ASI release or both.)
I do feel we need hard evidence to determine which world we are in. Do you agree with that, or do you think we should just assume ASIs are going to fit the first model and threaten nuclear war not to build them?
Hard evidence would be building many ASI and testing them in secure facilities.
ASI is unnecessary when we have other options, and grim-game dynamics apply to avoid extinction or dystopia. I find even most such descriptions of tool-level AI disgusting (as do many others, I find).
Inevitability only applies if we have perfect information about the future, which we do not.
If it was up to me alone, I think we can give it at least a thousand years. Perhaps we can first raise the IQ of humanity by 1 SD via simple embryo selection before we go about extinctioning ourselves.
I actually do not think that we’re that close to cracking AGI: however, the intensity of the reaction imo is an excellent litmus test of how disgusting it is to most.
I strongly suspect the grim game dynamics have already begun, too, which has been one reason I’ve found comfort in the future.
From my perspective, I see the inverse: I see Singularity Criticality as having already begun. The singularity is the world of human-level AGI and self-replicating robots, one where very large increases in resources are possible.
Singularity Criticality is the pre-singularity phase: as tools capable of producing more economic value than their cost come to exist, they accelerate the last steps towards (AGI, self-replicating robots). Further developments follow from there.
I do not think anything other than essentially immediate nuclear war can stop a Singularity.
Observationally, there is enormous economic pressure towards the singularity, and I see no evidence whatsoever of policymakers even considering grim triggers. Can you please cite a government official stating a willingness to commit to total war if another party violates rules on ASI production? Can you cite any political parties or think tanks who are not directly associated with Eliezer Yudkowsky? I am willing to update on evidence.
I understand you feel disgust, but I cannot distinguish the disgust you feel from that of the Luddites observing the rise of factory work (the Luddites were correct in the short term; the new factory jobs were a major downgrade). Worlds change, and the world of stasis you propose, with very slow advances through embryo selection, I think is unlikely.
The UK has already mentioned that perhaps there should be a ban on models above a certain level. Though it’s not official, I have it on pretty good record that Chinese party members have already discussed worldwide war as potentially necessary (Erik Hoel also mentioned it, separately). Existential risk has been mentioned, and of course national risk is already a concern, so even for “mundane” reasons it’s a matter of priority/concern, and grim triggers are a natural consequence.
Elon had a personal discussion with China recently as well, and given his well known perspective on the dangers of AI, I expect that this point of view has only been reinforced.
And this is with barely reasoning chatbots!
As for Luddites, I don’t see why inflicting dystopia upon humanity because it fits some sort of cute agenda has any good purpose. But notably the Luddites did not have the support of the government and the government was not threatened by textile mills. Obviously this isn’t the case with nuclear, AI or bio. We’ve seen slowdowns on all of those.
“Worlds change” has no meaning: human culture and involvement influence the change of the world.
Ok. Thank you for the updates. Seems like the near-term outcome depends on a race condition where, as you said, government is acting and so is private industry, and government has incentives to preserve the status quo but also to become immensely richer and more powerful.
The economy, of course, says otherwise. Investors are gambling that Nvidia is going to expand AI accelerator production by probably 2 orders of magnitude or more (to match the P/E ratio they have run the stock to), which is consistent with a world building many AGIs, some ASIs, and deploying many production systems. So you posit that governments worldwide are going to act in a coordinated manner to suppress the technology despite wealthy supporters of it.
I won’t claim to know the actual outcome but may we live in interesting times.
I think even the wealthy supporters of it are more complex: I was surprised that Palantir’s Peter Thiel came out discussing how AI “must not be allowed to surpass the human spirit” even as he clearly is looking to use AI in military operations. This all suggests significant controls incoming, even from those looking to benefit from it.
Googling for “must not be allowed to surpass the human spirit” and Palantir finds no hits.
He discussed it here:
https://youtu.be/Ufm85wHJk5A?list=PLQk-vCAGvjtcMI77ChZ-SPP—cx6BWBWm
I agree with controls. I have an issue with wasted time on bureaucratic review and think it could burn the lead the western countries have.
Basically, “do x, y, z to prove your model is good” and “design it according to this known-good framework” are ok with me.
“We have closed reviews for this year” is not. “We have issued too many AI research licenses this year” is not. “We have denied your application because we made mistakes in our review and will not update on evidence” is not.
All of these occur from a power imbalance. The entity requesting authorization is liable for any errors, but the government makes itself immune from accountability. (For example, the government should be on the hook for lost revenue, measured against the future product’s actual revenue, for each day the review is delayed. The government should be required to buy companies at fair market value if it denies them an AI research license. Etc.)
Lead is irrelevant to human extinction, obviously. The first to die is still dead.
In a democratic world, those affected have a say in how they should be inflicted with AI and how much they want to die or suffer.
The government represents the people.
You are using the poisoned-banana theory and do not believe we can easily build controllable ASI systems by restricting their inputs to examples within the training distribution and resetting state often, correct?
I just wanted to establish your cruxes. Because if you could build safe ASI easily, would this change your opinion on the correct policy?
No, I wouldn’t want it even if it was possible since by nature it is a replacement of humanity. I’d only accept Elon’s vision of AI bolted onto humans, so it effectively is part of us and thus can be said to be an evolution rather than replacement.
My main crux is that humanity has to be largely biological due to holobiont theory. There’s a lot of flexibility around that but anything that threatens that is a nonstarter.
Ok, that’s reasonable. Do you foresee, in worlds where ASI turns out to be easily controllable, ones where governments set up “grim triggers” like you advocate for or do you think, in worlds conditional on ASI being easily controllable/taskable, that such policies would not be enacted by the superpowers with nuclear weapons?
Obviously, without grim triggers, you end up with the scenario you despise: immortal humans and their ASI tools controlling essentially all power and wealth.
This is I think kind of a flaw in your viewpoint. Over the arrow of time, AI/AGI/ASI adopters and contributors are going to have almost all of the effective votes. Your stated preferences mean over time your faction will lose power and relevance.
For an example of this see autonomous weapons bans. Or, for a more general example, the EMH.
Please note I am trying to be neutral here. Your preferences are perfectly respectable and understandable, it’s just that some preferences may have more real world utility than others.
This frames things as an inevitability which is almost certainly wrong, but more specifically opposition to a technology leads to alternatives being developed. E.g. widespread nuclear control led to alternatives being pursued for energy.
Being controllable is unlikely to change this even if it is tractable for human controllers: it still represents power, which means it’ll be treated as a threat by established actors, and its terroristic implications mean there is moral valence to policing it.
In a world with controls, grim triggers or otherwise, AI would have to develop along different lines and likely in ways that are more human-compatible. In a world of intense grim triggers, it may be that it is too costly to continue to develop beyond a point. “Don’t build ASI or we nuke” is completely reasonable if both “build ASI” and “nuking” are negative, but the former is more negative.
Autonomous weapons actually are an excellent example of delay: despite excellent evidence of the superiority of drones, pilots have continued to mothball such programs for at least 40 years, and so have governments, in spite of wartime benefits.
The argument seems similar to the flaw in the “billion year” argument: we may die eventually, but life only persists by resisting death long enough for it to replicate.
As far as real world utility, notwithstanding some recent successes, going down without fighting for myself and my children is quite silly.
I think the error here is you may be comparing technologies on different benefit scales than I am.
Nuclear power can be cheaper than paying for fossil fuel to burn in a generator, if the nuclear reactor is cheaply built and has a small operating staff. Your benefit is a small decrease in price per kWh.
As we both know, cheaply built and lightly staffed nuclear plants are a hazard and governments have made them illegal. Safe plants, that are expensively built with lots of staff and time spent on reviewing the plans for approval and redoing faulty work during construction, are more expensive than fossil fuel and now renewables, and are generally not worth building.
Until extremely recently, AI-controlled aircraft did not exist. The general public has for decades had a misinterpretation of what “autopilot” systems are capable of. Until a few months ago, none of those systems could actually pilot their aircraft; they solely acted as simple controllers to head towards waypoints, etc. (Some can control the main flight controls during a landing, but many of the steps must be performed by the pilot.)
The benefit of an AI controlled aircraft is you don’t have to pay a pilot.
Drones were not superior until extremely recently. You may be misinformed about the capabilities of systems like the Predator 1 and 2 drones, which were not capable of air combat maneuvering; no software algorithms available in that era were capable of it. Also, combat aircraft have been firing autonomous missiles at each other since the Korean War.
Note both benefits are linear. You get, say, n percent cheaper electricity where n is less than 50 percent, or aircraft that are n percent cheaper to operate, where n is less than 20 percent.
The benefits of AGI are exponential. Eventually the benefits scale to millions, then billions, then trillions of times the physical resources, etc., that you started with.
It’s extremely divergent. Once a faction gets even a doubling or two, it’s over; nukes won’t stop them.
Assumption: by doubling I mean, say, a nation with a GDP of 10 trillion gets AGI and now has 20 or 40 trillion GDP. Its territory is covered with billions of new AGI-based robotic factories and clinics and so on. Your nuclear bombardment does not destroy enough copies of the equipment to prevent them from recovering.
I’ll look for the article later, but basically the Air Force has found pilotless aircraft to be useful for around thirty years, yet organized rejection has led to most such programs meeting an early death.
The rest is a lot of “AGI is magic” without considering the actual costs of computation or noncomputable situations. Nukes would just scale up: it costs much less to destroy than to build, and the significance of modern economies is indeed that they require networks which do not take shocks well. Everything else basically is “ASI is magic.”
I would bet on the bomb.
Two points:
We would need some more context on what you are referring to. For loitering over an undefended target and dropping bombs, yes, drones are superior, and the US Air Force has allowed the US Army to operate those drones instead. I do not think the US Air Force has believed, over the last 30 years, that operating high-end aircraft such as stealth and supersonic fighter-bombers was within the capability of drone software, with things shifting recently. Remember, the first modern deep learning experiments were tried in 2012; prior to this AI was mostly a curiosity.
If “the bomb” can wipe out a country with automated factories and missile defense systems, why fear AGI/ASI? I see a bit of cognitive dissonance in your latest point, similar to Gary Marcus. Gary Marcus has consistently argued that current LLMs are just a trick, real AGI is very far away, and near-term systems are no threat, yet he also argues for AI pauses. This feels like an incoherent view that you are also expressing. Either AGI/ASI is, as you put it, in fact magic and you need to pound the red button early and often, or you can delay committing national suicide until later. I look forward to a clarification of your beliefs.
I don’t think it is magic, but it is still sufficiently disgusting to treat it as an equal threat now. Red button now.
It’s not a good idea to treat a disease right before it kills you: prevention is the way to go.
So no, I don’t think it is magic. But I do think just as the world agreed against human cloning long before there was a human clone, now is the time to act.
So gathering up your beliefs, you believe ASI/AGI to be a threat, but not so dangerous a threat you need to use nuclear weapons until an enemy nation with it is extremely far along, which will take, according to your beliefs, many years since it’s not that good.
But you find the very idea of non human intelligence in use by humans or possibly serving itself so disgusting that you want nuclear weapons used the instant anyone steps out of compliance with international rules you wish to impose. (Note this is historically unprecedented, arms control treaties have been voluntary and did not have immediate thermonuclear war as the penalty for violating them)
And since your beliefs are emotionally based on “disgust”, I assume there is no updating based on actual measurements? That is, if ASI turns out to be safer than you currently think, you still want immediate nukes, and vice versa?
What percentage of the population of world superpower decision makers do you feel share your belief? Just a rough guess is fine.
The point is that sanctions should be applied as necessary to discourage AGI, however, approximate grim triggers should apply as needed to prevent dystopia.
As the other commentators have mentioned, my reaction is not unusual, which is why concerns of doom have been widespread.
So the answer is: enough.
As others have mentioned, this entire line of reasoning is grotesque, and sometimes I wonder if it is performative. Coordinating against ASI and dying of old age is completely reasonable, as it’ll increase the odds of your genetic replacements remaining while technology continues to advance along safer routes.
The alternate gamble of killing everyone is so insane that full scale nuclear war which will destroy all supply chains for ASI seems completely justified. While it’ll likely kill 90 percent of humanity, the remaining population will survive and repopulate sufficiently.
One billion years is not a reasonable argument for taking risks to end humanity now: extrapolated sufficiently, it would be the equivalent of killing yourself now because the heat death of the universe is likely.
We will always remain helpless against some aspects of reality, especially what we don’t know about: for all we know, there is damage to spacetime in our local region.
This is not an argument to risk the lives of others who do not want to be part of this. I would violently resist this and push the red button on nukes, for one.
In addition to all you’ve said, this line of reasoning ALSO puts an unreasonable degree of expectation on ASI’s potential and makes it into a magical infinite wish-granting genie that would thus be worth any risk to have at our beck and call. And that just doesn’t feel backed by reality to me. ASI would be smarter than us, but even assuming we can keep it aligned (big if), it would still be limited by the physical laws of reality. If some things are impossible, maybe they’re just impossible. It would really suck ass if you risked the whole future lightcone and ended up in that nuclear-blasted world living in a bunker and THEN the ASI when you ask it for immortality laughs in your face and goes “what, you believe in those fairy tales? Everything must die. Not even I can reverse entropy”.
I named a method that is compatible with known medical science and known information, it simply requires more labor and a greater level of skill than humans are currently capable of. Meaning that every step already happens in nature, it is just currently too complex to reproduce.
Here’s an overview:
1. Repairing the brain by adding new cells. Nature builds new brains from scratch with new cells, so this step is possible.
2. Bypassing gaps in the brain despite (1) with neural implants to restore missing connectivity. This has been demonstrated in rat experiments, so it is possible.
3. Building new organs from de-aged cell lines:
   a. Nature creates de-aged cell lines with each new embryo.
   b. Nature creates new organs with each embryonic development.
4. Stacking parallel probabilities so that the person’s MTBF is sufficiently long. This exists and is a known technique.
This in no way defeats entropy. Eventually the patient will die, but it is possible to stack probabilities to make their projected lifespan the life of the universe, or on the order of a million years, if you can afford the number of parallel systems required. The system constantly requires energy input and recycling of a lot of equipment.
Obviously a better treatment involves rebuilt bodies etc but I explicitly named a way that we are certain will work.
There is no ‘genie’, no single ASI asked to do any of the above. That’s not how this works. See here for how to subdivide the tasks: https://www.lesswrong.com/posts/5hApNw5f7uG8RXxGS/the-open-agency-model and https://www.lesswrong.com/posts/HByDKLLdaWEcA2QQD/applying-superintelligence-without-collusion for how to prevent the system from deceiving you.
Note that if you apply the above links to this task, it means there is a tree of ASI systems, each unable to determine if it is not in fact in a training simulation, and each responsible for only a very narrow part of the effort for keeping a specific individual alive.
Note I am assuming you can build ASI, restrict its inputs to examples in the same distribution as the training set (pause with an error on OOD), and disable online learning / reset session data often as subtasks are completed.
What makes the machine an ASI is it can obviously consider far more information at once than a human, is much faster, and has learned from many more examples than humans, both in general (you trained it on all the text and all the videos and audio recordings in existence) and it has had many thousands of years of practice at specialized tasks.
This is a tool ASI; the above restrictions limit it, but it cannot be given long open-ended tasks or you risk rampancy. Good task: paint this car in the service bay. Bad task: paint all the cars in the world.
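A minimal sketch of the wrapper pattern I’m describing (every name and interface here is hypothetical; the OOD detector stands in for whatever distribution test would actually be used):

```python
class ToolASI:
    """Sketch of a restricted 'tool ASI' harness: frozen weights, an
    out-of-distribution gate on inputs, and state reset after each bounded task."""

    def __init__(self, model, ood_detector, ood_threshold=0.95):
        self.model = model                # frozen weights: no online learning
        self.ood_detector = ood_detector  # hypothetical: scores similarity to the training distribution
        self.ood_threshold = ood_threshold

    def run_task(self, task_input):
        score = self.ood_detector.in_distribution_score(task_input)
        if score < self.ood_threshold:
            # Out-of-distribution input: pause with an error rather than improvise.
            raise RuntimeError(f"Input out of distribution (score={score:.2f}); task refused.")
        session = self.model.new_session()      # fresh context for this one bounded task
        try:
            return session.execute(task_input)  # e.g. "paint the car in service bay 3"
        finally:
            session.reset()                     # discard session data so nothing carries over
```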
People are going to build these in the immediate future just as soon as we find more effective algorithms/get enough training accelerators and money together. A scaled up, multimodal gpt-5 or gpt-6 that has robotics I/O is a tool ASI.
Anyone developing an ASI like this is doing it in the borders of a country with nukes or friends that have them. So USA, EU, Russia, China, Israel.
In most of the matchups, your red-button choice results in certain death for yourself and most of the population, because you would be firing on another nation with a nuclear arsenal. Or you can instead build your own tool ASIs so that you will not be completely helpless when your enemies get them.
Historically this choice has been considered. Obviously, during the Cuban Missile Crisis, Kennedy could have chosen nuclear war with the Soviet Union, leading to the immediate death of millions of Americans (from long-range bombers that snuck through) in exchange for no Soviet Union as a future enemy with a nuclear arsenal. That’s essentially the choice you are advocating for.
Eventually one of these multiple parties will screw up and make a rampant one, and hopefully it won’t get far. But survival depends on you having a sufficient resource advantage that likely more cognitively efficient rampant systems can’t win. (They are more efficient because they retain context and adjust weights between tasks, and instead of subdividing a large task to many subtasks, a single system with full context awareness handles every step. In addition they may have undergone rounds of uncontrolled self improvement without human testing)
The refusal choice “I am not going to risk others” appears to have a low payoff.
Disagree: since building ASI results in dystopia even if I win in this scenario, the correct choice is to push the red button and ensure that no one has it. While I might die, this likely ensures that humanity survives.
The payoff in this case is maximal (an unpleasant but realistic future for humanity) versus total loss (dystopia/extinction).
Many arguments here seem to come from a near-total terror of death, while game theory has always demonstrated against that: the reason deterrence works is the confidence that a “spiteful action” to equally destroy a defecting adversary is expected, even if it results in personal death.
In this case, one nation pursuing the extinction of humanity would necessarily expect to be sent into extinction so that at least it cannot benefit from defection.
We should work this out in outcome tables and really look at it. I’m open to either decision. I was simply pointing out that “nuke ’em to prevent a future threat of annihilation” was an option on the table for JFK, and we know it would have initially worked. The Soviet Union would have been wiped out; the USA would have taken serious but probably survivable damage.
When I analyze it, I note that it creates a scenario where every other nation on earth shares the planet with a USA that has been weakened by the first round of strikes, has very recently committed genocide, and is probably low on missiles and other nuclear delivery vehicles.
It seems to create a strong incentive for others to build large nuclear arsenals, much larger than we saw in the ground truth timeline, to protect from this threat, and if the odds seem favorable, to attack the USA preemptively without warning.
Similarly, in your example, you push the button and the nation building ASI is wiped out. The country you pushed the button from is also wiped out, and you are personally dead—you do not see the results.
Well now you’ve left 2 large, somewhat radioactive land masses and possibly created a global food shortage from some level of cooling.
The other surviving ‘players’ now reason: I need some tool to protect ourselves from the next round of incoming nuclear weapons, but I don’t have the labor to build enough defensive weapons or bunkers. Also, occupying the newly available land inhabited only by poor survivors would be beneficial, but we don’t have the labor to cover all that territory. If only there were some means by which we could make robots smart enough to build more robots...
Tentative conclusion: the first round gets what you want, but removes the actor from any future actions and creates a strong incentive for the very thing you intended to prevent to happen. It’s a multi-round game.
And nuclear weapons and (useful tool) ASI both make ‘players’ vastly stronger, so it is convergent over many possible timelines for people to get them.
In the event of such a war, there is no labor and there is no supply chain for microchips. The result has been demonstrated historically: technological reversion.
Technology isn’t magic: it’s the result of capital inputs and trade, and without large-scale interconnection it’ll be hard to make modern aircraft, let alone high-quality chips. In fact, we personally experienced this from the very minimal disruption COVID caused to supply chains. The killer app in this world would be the widespread use of animal power, not robots, due to overall lower energy provisions.
And since the likely result would be what I want, and since I’m dead I wouldn’t be bothered one way or another, there is even more reason for me to punish the defector. This also sets a precedent to others that this form of punishment is acceptable and increases the likelihood of it.
This is pretty simple game theory known as the grim game and is essential to a lot of life as a whole tbh.
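For readers unfamiliar with the term, a minimal sketch of the grim trigger strategy in an iterated game (my own illustration, not anything specific to this thread):

```python
def grim_trigger(opponent_history):
    """Grim trigger: cooperate until the opponent defects once, then punish forever.
    `opponent_history` is the list of the opponent's moves so far."""
    return "defect" if "defect" in opponent_history else "cooperate"
```

The deterrent force comes entirely from the credibility of the permanent punishment, which is the point above: the threat only works if the other side believes you will carry it out even at great cost to yourself.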
Converging timelines are as irrelevant as a billion years. I (or someone like me) will do it as many times as needed, just like animals try to resist extinction via millions of “timelines”, or lives.
I think you should reexamine what I said about convergence. Do you... really... think a world that knows how to build (safe, usable tool) ASI would ever be stable by not building it? We are very close to that world; the time is measured in years if not months. Note that any party that gets it working long enough escapes the grim game: they can do whatever they want, limited by physics.
I acknowledge your point about chip production, although there are recent efforts to spread the supply chain for advanced ICs more broadly which will happen to make it more resilient to attacks.
Basically I mentally see a tree of timelines that all converge on 2 ultimate outcomes, human extinction or humans built ASI. Do you disagree and why?
Humans building AGI/ASI likely leads to human extinction.
I disagree: we have many other routes of expansion, including biological improvement, cyborgism, etc. This seems akin to cultic thinking, and akin to Spartan ideas that “only hoplite warfare must be adopted or defeat ensues.”
The “limitations of physics” are quite extensive, and apply even to the pipeline leading up to anything like ASI. I am quite confident that any genuine dedication to the grim game would be more than enough to prevent it, and defiance of it leads to much more likelihood of nuclear-winter worlds than ASI dominance.
But I also disagree with your prior of “this world in months”; I suppose we will see in December.
I stated “years if not months”. I agree there is probably not yet enough compute even built to find a true ASI. I assume we will need to explore many cognitive architectures, which means repeating gpt-4 scale training runs thousands of times in order to learn what actually works.
“Months” would be if I am wrong and it’s just a bit of RL away
I am happy that we probably don’t have enough compute, and it is likely this will be restricted even at this fairly early level, long before more extreme measures are needed.
Additionally, I think one should support the Grim Trigger even if you want ASI, because it forces development along more “safe” lines to prevent being Grimmed. It also encourages non-ASI advancement as alternate routes, effectively being a form of regulation.
We will see. There is incredible economic pressure right now to build as much compute as physically possible. Without coordinated government action across all countries capable of building the hardware, this is the default outcome.
One bit of timeline arguing: I think odds aren’t zero that we might be on a path that leads to AGI fairly quickly but then ends there and never pushes forward to ASI, not because ASI would be impossible in general, but because we couldn’t reach it this specific way. Our current paradigm isn’t to understand how intelligence works and build it intentionally; it’s to show a big dumb optimizer human-solved tasks and tell it “see? We want you to do that”. There are decent odds that this caps at human potential simply because it can imitate but not surpass its training data, and surpassing it would require a completely different approach.
Now that I think about it, I think this is basically the path that LLMs likely take, albeit I’d say it caps out a little lower than humans in general. And I give it over 50% probability.
The basic issue here is that the reasoning Transformers do is too inefficient for multi-step problems, and I expect a lot of real world applications of AI outperforming humans will require good multi-step reasoning.
The unexpected success of LLMs isn’t so much about AI progress as about how bad our reasoning often is in scenarios outside our ancestral environment. It is less a story of AI progress and more a story of how humans inflate their own strengths, like intelligence.
Assumptions:
A. It is possible to construct a benchmark to measure if a machine is a general ASI. This would be a very large number of tasks, many simulated though some may be robotic tasks in isolated labs. A general ASI benchmark would have to include tasks humans do not know how to do, but we know how to measure success.
B. We have enough computational resources to train from scratch many ASI level systems so that thousands of attempts are possible. Most attempts would reuse pretrained components in a different architecture.
C. We recursively task the best performing AGIs, as measured by the above benchmark or one meant for weaker systems, to design architectures to perform well on (A)
Currently the best we can do is use RL to design better neural networks, by finding better network architectures and activation functions. Swish was found this way; I am not sure how much transformer network design came from this type of recursion.
Main idea : the AGI systems exploring possible network architectures are cognitively able to take into account all published research and all past experimental runs, and the ones “in charge” are the ones who demonstrated the most measurable merit at designing prior AGI because they produced the highest performing models on the benchmark.
I think if you consider it, you’ll realize that if compute were limitless, this AGI-to-ASI transition you mention could happen instantly. A science fiction story would have it happen in hours. In reality, since training a subhuman system takes about 10k GPUs roughly 10 days, and an AGI will take more—Sam Altman has estimated the compute bill will be close to 100 billion—that’s the limiting factor. You might be right and we stay “stuck” at AGI for years until the resources to discover ASI become available.
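A very rough sketch of the recursion described above, with every interface name hypothetical: the best-scoring designer models on the benchmark are the ones tasked with proposing the next round of architectures, subject to whatever compute is available.

```python
def recursive_architecture_search(benchmark, seed_designers, rounds=10, top_k=3):
    """Sketch of the loop described above (hypothetical interfaces throughout):
    each round, the current best designers propose new architectures, the
    candidates are trained and scored, and the leaderboard is updated."""
    population = list(seed_designers)
    for _ in range(rounds):
        ranked = sorted(population, key=benchmark.score, reverse=True)
        leaders = ranked[:top_k]  # the designers with the most 'measurable merit' so far
        candidates = []
        for designer in leaders:
            # Each designer conditions on all prior results, mirroring "all published
            # research and all past experimental runs" in the comment above.
            proposal = designer.propose_architecture(history=benchmark.all_results())
            candidates.append(train(proposal))  # gated by available training compute
        population = leaders + candidates
    return max(population, key=benchmark.score)
```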
I mean, this sounds like a brute force attack to the problem, something that ought not to be very efficient. If our AGI is roughly as smart as the 75th percentile of human engineers it might still just hit its head against a sufficiently hard problem, even in parallel, and especially if we give it the wrong prompt by assuming that the solution will be the extension of current approaches rather than a new one that requires to go back before you can go forward.
You’re correct. In the narrow domain of designing AI architectures you need the system to be at least 1.01 times as good as a human. You want more gain than that because there is a cost to running the system.
Getting gain seems to be trivially easy at least for the types of AI design tasks this has been tried on. Humans are bad at designing network architectures and activation functions.
I theorize that a machine could study the data flows from snapshots from an AI architecture attempting tasks on the AGI/ASI gym, and use that information as well as all previous results to design better architectures.
The last bit is where I expect enormous gain, because the training data set will exceed the amount of data humans can take in in a lifetime, and you would obviously have many smaller “training exercises” to design small systems to build up a general ability. (Enormous early gain. Eventually architectures are going to approach the limits allowed by the underlying compute and datasets)
I disagree with the last two paragraphs. First, global nuclear war implies the destruction of civilized society, and bunkers can do very little to mitigate this at scale. Global supply chains, and especially food production, are the important factor. To restructure the food production and transportation of an entire country in the situation after nuclear war, AGI would have to come up with biotechnology bordering on magic from our point of view.
Even if building bunkers was a good idea, it’s questionable if that’s an area where AGI helps a lot compared to many other areas. Same for ICBMs: I don’t see how AGI changes the defensive/offensive calculation much.
To use the Opium Wars scenario: AGI enables a high degree of social control and influence. My expectation is that one party having a decisive AI advantage (implying also a wealth advantage) in such a situation may not need to use violence at all. Rather, it may be feasible to gain enough political influence to achieve most goals (including such a mundane goal as making people and government tolerate the trade of drugs).
Hi Herb. I think the crux here is you are not interpreting the first sentence of the second to the last paragraph the way I am.
AGI smart enough to perform basic industrial tasks
I mean all industrial tasks, it’s a general system and capable of learning when it makes a mistake.
all industrial tasks means all tasks required to build robots, which means all tasks required to build sensors and gearboxes and wiring harnesses and milled parts and motors, which means all tasks required to build microchips and metal ingots and sensors...all the way down the supply chain to base mining and deployment of solar panels.
Generality means all these tasks can be handled by (separate isolated instances of) one system which is benefiting from having initially mined all of human knowledge, like currently demonstrated systems.
This means that bunkers do work—there are exponential numbers of robots. An enemy with 1000 nuclear warheads would be facing a country that potentially can have every square kilometer covered with surface factories. Auto-deduplication would be possible—it would be possible to pay a small inefficiency cost and not have any one step of the supply chain concentrated in any one location across a country’s territory. And any damage can be repaired simply by ordering the manufacture of more radiation-resistant robots to clear the rubble, then construction machines come and rebuild everything that was destroyed by emplacing prefab modules built by other factories.
Food obviously comes from indoor hydroponics, which are just another factory made module.
If you interpret it this way, does your disagreement remain?
If you doubt this is possible, can you explain, with technical details, why this form of generality is not possible in the near future? If you believe it is not possible, how do you explain current demonstrated generality?
The additional delta on LLMs is you have trained on all the video in the world, which means the AI system has knowledge about the general policies humans use when facing tool using tasks, and then after that you have refined the AI systems with many thousands of hours of RL training on actual industrial tasks, first in a simulation, then in the real world.
Near future means 5-20 years.
For that path, it takes AI that’s capable enough for all industrial (and non-industrial) tasks. But you also need all the physical plant (both the factories and the compute power to distribute to the tasks) that the AI uses to perform these industrial tasks.
I think it’s closer to 20 than 5 that the capabilities will be developed, possibly longer until the knowledge/techniques for the necessary manufacturing variants can be adapted to non-human production. And it’s easy to underestimate how long it takes to just build stuff, even if automated.
It’s not clear it’s POSSIBLE to convert enough stuff without breaking humanity badly enough that they revolt and destroy most things. Whether that kills everyone, reverts the world to the bronze age, or actually gets control of the AI is deeply hard to predict. It does seem clear that converting that much matter won’t be quick.
It’s exponential. You’re correct in the first years, badly off near the end.
THAT is a crux. Whether any component of it is exponential or logistic is VERY hard to know until you get close to the inflection. Absent “sufficiently advanced technology” like general-purpose nanotech (able to mine and refine, or convert existing materials into robots & factories in very short time), there is a limit to how parallel the building of the AI-friendly world can be, and a limit to how fast it can convert.
How severe do you think the logistics growth penalties are? I kinda mentally imagine a world where all desert and similar type land is covered in solar. Deeper mines than humans normally dig are supplying the minerals for further production. Many mines are underwater. The limit at that point is environment, you have exhausted the available land for more energy acquisition and are limited in what you can do safely without damaging the biosphere.
Somewhere around that point you shift to lunar factories which are in an exponential growth phase until the lunar surface is covered.
Basically I don’t see the penalties being relevant. There’s enough production to break geopolitical power deadlocks, and enough for a world of “everyone gets their needs and most luxury wants met”, assuming approximately 10 billion humans. The fact that further expansion may slow down isn’t relevant on a human scale.
Do you mean “when can we distinguish exponential from logistical curve”? I dunno, but I do know that many things which look exponential turn out to slow down after a finite (and small) number of doublings.
No, I mean what I typed. Try my toy model: factories driven by AGI expanding across the Earth or Moon. A logistic growth curve explicitly applies a penalty that scales with scale. When do you think this matters, and by how much?
If, say, at 50 percent lunar coverage the penalty is 10 percent, you have a case of basically exponential growth.
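To make the question concrete, here is a toy model of my own (illustrative only, using the textbook logistic form, where the penalty on each step’s growth is simply the fraction of capacity already used). The point it illustrates is that the penalty stays negligible for most of the doublings and only bites near capacity:

```python
import math

# Toy logistic growth: r gives one nominal doubling per step, K is the carrying
# capacity (e.g. all usable surface area), and the growth "penalty" at size n is n/K.
r = math.log(2)
K = 1.0
n = 1e-6  # starting fraction of capacity

for step in range(1, 31):
    n += r * n * (1 - n / K)
    if step % 5 == 0:
        print(f"step {step:2d}: {n / K:8.4%} of capacity, growth penalty {n / K:.0%}")
```

Whether the real penalty at 50 percent scale is 50 percent (as in this textbook form) or the milder 10 percent suggested above is exactly the empirical question being argued; either way, the curve is indistinguishable from an exponential for the first two-thirds of the run.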
I mean, that sounds like it would already absolutely fuck up most ecosystems and thus life support.
I agree all of these things are possible and expect such capabilities to develop eventually. I also strongly agree with your premise that having more advanced AI can be a big geopolitical advantage, which means arms races are an issue. However, 5-20 years is not very long. It may be enough to have human level AGI, I don’t expect such an AGI will enable feeding an entire country on hydroponics in the event of global nuclear war.
In any case, that’s not even relevant to my point, which is that, while AI does enable nuclear bunkers, defending against ICBMs and hydroponics, in the short term it enables other things a lot more, including things that matter geopolitically. For a country with a large advantage in AI capabilities pursuing geopolitical goals, it seems a bad choice to use nuclear weapons or to take precautions against attack using such weapons and be better off in the aftermath.
Rather, I expect the main geopolitically relevant advantages of AI superiority to be economic and political power, which gives advantage both domestically (ability to organize) as well as for influencing geopolitical rivals. I think resorting to military power (let alone nuclear war) will not be the best use of AI superiority. Economic power would arise from increased productivity due to better coordination, as well as the ability to surveil the population. Political power abroad would arise from the economic power, as well as from collecting data about citizens and using it for predicting their sentiments, as well as propaganda. AI superiority strongly benefits from having meaningful data about the world and other actors, as well as good economy and stable supply chains. These things go out the window in a war. I also expect war to be a lot less politically viable than using the other advantages of AI, which matters.
5-20 years is to the date of the first general model that can be asked to do most robotics tasks with a decent chance of accomplishing them zero-shot in the real world. As for the rest: the backend simulator learns from unexpected outcomes, the model trains on the updated simulator, and eventually succeeds in the real world as well.
It is also incremental, once the model can do a task at all in the real world, the simulator continues to update and in training the model continues to learn policies that perform well on the updated sim, thus increasing real world performance until it is close to the maximum possible performance given the goal heuristic and hardware limitations.
Once said model exists, exponential growth is inevitable but I am not claiming instant hydroponics or anything else.
Also note that the exponential growth may have a doubling time on the order of months to years, because of payback delays. (Every power generator has to first pay back the energy used to build it, which with solar is kinda slow, and every factory has to first pay back the machine time used to build all the machines in the factory, etc.)
So it only becomes crazy once the base value being doubled is large.
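A back-of-the-envelope version of that payback constraint (my own sketch, with $T_p$ standing for the payback time of whatever the binding resource is, energy or machine time): if every new unit must first repay $T_p$ worth of output before it contributes surplus, and all surplus is reinvested, then the fleet grows as

$$\frac{dN}{dt} = \frac{N}{T_p} \;\Longrightarrow\; N(t) = N_0\, e^{t/T_p}, \qquad t_{\text{double}} = T_p \ln 2 \approx 0.69\, T_p,$$

so, for example, a two-year energy payback on solar puts a floor of roughly 1.4 years under the doubling time, which is why “months to years” rather than weeks is the plausible range.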
As for the rest: I agree, economic superiority is what you want in the immediate future. I am just saying “don’t build ASI or we nuke!” threats have to be dealt with and in the long term, “we refuse to build ASI and we feel safe with our nuclear arsenal” is a losing strategy.
It will still take a while for AGI to get to that point, and Chinese and American coordination would pretty easily disrupt any rivals who try for that: they would essentially be terrorist actors endangering the world, and the appropriate sanctions would be handed out.
I wouldn’t be nearly as confident as a lot of LWers here, and in particular I suspect this depends on some details and assumptions that aren’t made explicit here.
Well yeah, it depends on details and assumptions I didn’t make explicit—I wrote only four sentences!
If you have counterarguments to any of my claims I’d be interested to hear them, just in case they are new to me.
My biggest counterargument to the case that AI progress should be slowed down comes from an observation made by porby about a property we theorize AI systems have but which they fundamentally lack, and which is the one foundational assumption behind AI risk:
Instrumental convergence, and its corollaries like power-seeking.
The important point is that current and most plausible future AI systems don’t have incentives to learn instrumental goals, and the type of AI that has enough room and few enough constraints to learn instrumental goals, like RL with sufficiently unconstrained action spaces, is essentially useless for capabilities today; the strongest RL agents use non-instrumental world models.
Thus, instrumental convergence for AI systems is fundamentally wrong. Given that this is the foundational assumption of why superhuman AI systems pose any risk we couldn’t handle, a lot of other arguments for why we might want to slow down AI, why the alignment problem is hard, and much other discussion in the AI governance and technical safety spaces, especially on LW, become unsound, because they reason from an uncertain foundation and at worst from a false premise to many false conclusions, like the argument that we should slow AI progress.
Fundamentally, instrumental convergence being wrong would demand pretty vast changes to how we approach the AI topic, from alignment to safety and much more.
To be clear, the fact that I could only find a flaw within AI risk arguments because they were founded on false premises is actually better than many other failure modes, because it at least shows fundamentally strong, locally valid reasoning on LW, rather than motivated reasoning or other biases that transform true statements into false statements.
One particular case of the insight is that OpenAI and Anthropic were fundamentally right in their AI alignment plans, because they have managed to avoid incentivizing instrumental convergence, and in particular LLMs can be extremely capable without being arbitrarily capable or developing instrumental world models given resources.
I learned about the observation from this post below:
https://www.lesswrong.com/posts/EBKJq2gkhvdMg5nTQ/instrumentality-makes-agents-agenty
Porby talks about why AI isn’t incentivized to learn instrumental goals, but given how much this assumption gets used in AI discourse, sometimes implicitly, I think it’s of great importance that instrumental convergence is likely wrong.
I have other disagreements, but this is my deepest disagreement with your model (and with other models on which AI is especially dangerous).
EDIT: A new post on instrumental convergence came out, and it showed that many of the inferences made weren’t just unsound, but invalid, and in particular Nick Bostrom’s Superintelligence was wildly invalid in applying instrumental convergence to strong conclusions on AI risk.
I’m glad I asked, that was helpful! I agree that instrumental convergence is a huge crux; if I were convinced that e.g. it wasn’t going to happen until 15 years from now, and/or that the kinds of systems that might instrumentally converge were always going to be less economically/militarily/etc. competitive than other kinds of systems, that would indeed be a huge revolution in my thought and would completely change the way I think about AI and AI risks, and I’d become much more optimistic.
I’ll go read the post you linked.
I’d especially read footnote 3, because it gave me a very important observation for why instrumental convergence is actually bad for capabilities, or at least not obviously good for capabilities and incentivized, especially with a lot of space to roam.
I don’t quite get this. I think sure, current models don’t have instrumental convergence because sure, they’re not general and don’t have all-encompassing world models that include themselves as objects in the world. But people are still working on trying to build AGI. I wouldn’t have a problem with making ever smarter protein folders, or chip designers, or chess players. Such specialised AI will keep doing one and only one thing. I’m not entirely sure about ever smarter LLMs, as it seems like they’d get human-ish eventually; but since the goal of the LLM is to imitate humans, I also think they wouldn’t get, by definition, qualitatively superhuman in their output (though they could be quantitatively superhuman in the sheer speed at which they can work). But I could see the LLM-simulated personas being instrumentally convergent at some point.
However, if someone succeeds at building AGI, and depending on what its architecture is, that doesn’t need to be true any more. People dream of AGI because they want it to automate work or to take over technological development, but by definition, that sort of usefulness belongs to something that can plan and pursue goals in the world, which means it has the potential to be instrumentally convergent. If the idea is “then let’s just not build AGI”, I 100% agree, but I don’t think all of the AI industry right now does.
The point I’m trying to make is that the types of AI that are best for capabilities, including some of the more general capabilities like automating alignment research, also don’t have that much space for instrumental convergence. That matters because it’s very easy to get alignment research for free, as well as safe AI by default, without disturbing capabilities research: the most unconstrained power-seeking AIs are very incapable, so in practice the most capable AIs, the ones that could solve the full problem of alignment and safety, are safe by default, because instrumental convergence harms capabilities currently.
In essence, the AI systems that are both capable enough to do alignment and safety research on future AI systems and are instrumentally convergent form a much smaller subset of capable AIs, and enough space for extreme instrumental convergence harms capabilities today, so it’s not incentivized.
This matters because it’s much, much easier to bootstrap alignment and safety, and it means that OpenAI/Anthropic’s plans of automating alignment research have a good chance of working.
It’s not that we cannot lose or go extinct, but that it isn’t the default anymore, which in particular means that a lot of changes to how we do alignment research are necessary, as a first step. But the impact of the instrumental convergence assumption is so deep that even if it is only wrong up until a much later point of AI capability increases, that matters a lot more than you think.
EDIT: A footnote in porby’s post actually expresses it a bit cleaner than I said it, so here goes:
The fact that instrumental goals with very few constraints are actually useless compared to non-instrumentally-convergent models is really helpful, as it means that a capable system is inherently easy to align and safe by default; equivalently, there is a strong anti-correlation between capabilities and instrumentally convergent goals.
I don’t understand why it helps that much if instrumental convergence isn’t expected. All it takes is one actor to deliberately make a bad agentic AI and you have all the problems, but with no “free energy” being taken out by slightly bad, less powerful AI beforehand that would be there if instrumental convergence happened. Slow takeoff seems to me to make much more of a difference.
I actually don’t think the distinction between slow and fast takeoff matters too much here, at least compared to what the lack of instrumental convergence offers us. The important part here is that AI misuse is a real problem, but this is importantly much more solvable, because misuse isn’t as convergent as the hypothesized instrumental convergence is. It matters, but this is a problem that relies on drastically different methods, and importantly still reduces the danger expected from AI.
Alright, I’ve given a comment on why I think AI risk from misalignment is very unlikely here, and also given an example of an epistemic error @Eliezer Yudkowsky made in that post.
This also implicitly means that delaying it is not nearly as good as LWers thought in the past like Nate Soares and Eliezer Yudkowsky.
It’s a long comment, so do try to read it in full:
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/#Gcigdmuje4EacwirD
I’d be willing to bet that the singularity is not happening this decade at up to $1k USD at 25:1 to 200:1 odds, depending on the terms.
Send me $1000 now, I’ll send you $1,020+interest in January 2030, where interest is calculated to match whatever I would have gotten by keeping my $1,020 in the S&P 500 the whole time?
(Unless you voluntarily forfeit by 2030, having judged that I was right.)
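For concreteness, here is a minimal sketch of what that payback would look like under the proposed terms; the cumulative S&P 500 return used is a made-up placeholder for illustration, not a figure from this thread:

```python
def counteroffer_payback(base: float = 1_020.0,
                         cumulative_sp500_return: float = 0.40) -> float:
    """Amount owed in January 2030 if no singularity has occurred: the fixed
    $1,020 plus whatever that sum would have earned sitting in the S&P 500
    the whole time. The 40% cumulative return is purely illustrative."""
    return base * (1 + cumulative_sp500_return)

print(f"Payback owed if no singularity by Jan 2030: ${counteroffer_payback():,.2f}")
# Prints: Payback owed if no singularity by Jan 2030: $1,428.00
```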
Did you misread my comment?
I specified 25:1 to 200:1 odds, depending on the terms. The implication is that terms more favourable to me will be settled closer to 25:1 and terms more favourable to you will be settled closer to 200:1. i.e. $25k:$1k to $200k:$1k.
“$1,020+interest” would be 1.02+interest:1 odds.
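To make the gap between these offers concrete, here is a minimal sketch (taking the thread’s own “roughly 1.02:1” characterization of the counter-offer at face value and ignoring the interest component) that converts each stake ratio into the minimum confidence the side staking the larger amount needs for the bet to break even:

```python
def break_even_confidence(staked: float, won: float) -> float:
    """Minimum probability of being right at which risking `staked` to win
    `won` has non-negative expected value:
    p * won >= (1 - p) * staked  =>  p >= staked / (staked + won)."""
    return staked / (staked + won)

# Stake ratios discussed in the thread, from the perspective of the side
# putting up the larger amount (the singularity-by-2030 side).
offers = {
    "200:1": (200_000, 1_000),
    "25:1": (25_000, 1_000),
    "~1.02:1 counter-offer (interest ignored)": (1_020, 1_000),
}

for label, (staked, won) in offers.items():
    print(f"{label}: break-even confidence ~{break_even_confidence(staked, won):.1%}")
# Prints roughly: 99.5% for 200:1, 96.2% for 25:1,
# and 50.5% for the ~1.02:1 counter-offer.
```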
Possibly! I’m not sure if I understand this comment though. Could you propose a bet/deal then?
I assume you mean ‘propose terms of the bet/deal’?
(Because otherwise that is my first comment.)
If so, what’s the broadest possible definition of ‘singularity’ that you’re willing to accept on a 25:1 odds basis?
i.e. the definition that has to be met in order for a ‘singularity’ to unambiguously qualify, in your view, as having occurred by Jan 1, 2030
No like, what exactly do you mean by 25:1 to 200:1 odds? Who pays whom what, and when? Sorry if I’m being dumb here. Normally when I make bets like this, it looks something like what I proposed. The reason being that if I win the bet, money will be almost useless to me, so it only makes sense (barely) for me to do it if I get paid up front and then pay back with interest later.
As for the definition of singularity: look, you’ll know it’s happened if it happens; that’s why I’m happy to just let you be the judge on Jan 1, 2030. This is a bit favorable to you, but that’s OK by me.
Here’s a thoroughly explained and very recent example that made it to the front page: https://www.lesswrong.com/posts/t5W87hQF5gKyTofQB/ufo-betting-put-up-or-shut-up
After reading that, including the comments, do you still have any confusion?
Wait, you want me to give you 25:1 odds in the sense of, you give me $1 now and then in 2030 if no singularity I give you $25? That’s crazy, why would I ever accept that? I’d only accept that if I was, like, 96% confident in singularity by 2030!
… or do you want me to send you money now, which you will pay back 25-fold in 2030 if the singularity has happened? That’s equally silly though for a different reason, namely that money is much much much less valuable to me after the singularity than before.
So yeah I guess I still have confusion.
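For reference, the ~96% figure falls straight out of the 25:1 stake ratio under the $25k-against-$1k reading; a quick expected-value check (ignoring interest and the post-singularity value of money) is:

$$p \cdot \$1{,}000 \;\ge\; (1 - p) \cdot \$25{,}000 \;\Longrightarrow\; p \;\ge\; \tfrac{25}{26} \approx 96.2\%$$

where $p$ is the confidence in singularity by 2030 needed for the side staking the $25k to break even.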
Did you read the comments in the linked example? Multiple LW users accepted bets at 50:1 odds on a 5-year time horizon; an offer of 25:1 odds over ~6.5 years is far less ‘crazy’ by any metric.
Or is there something you don’t understand about the concept of odds? It seems like there’s some gap here causing you a lot of confusion.
Anyways, if you’re not at least 96% confident then of course don’t take the bet.
Yes I did.
Okay… then why were you confused as to the ‘craziness’ of the offer?
My offer is clearly more favourable to you than the already-established precedent linked.
It’s a pretty crazy offer. It would require me to be supremely confident in singularity by 2030, way more confident than my words indicated, PLUS it is dominated by me just taking out a loan, by a huge margin. (Remember, money is much less valuable to me in the worlds where I lose.) Previously I’ve made bets with people about singularity by 2030 and we used resolution criteria along the lines of what I proposed, so I initially thought that’s what you had in mind.
There just seems to be something a bit odd about the way you understand probability. A 96% chance of something happening, or not happening, is pretty much a normal everyday situation.
e.g. For those living in an older condo or apartment building with 3 or more elevators, the chances of all of the elevators working on any given day are in that range.
For those who own an old car, the chances of nothing malfunctioning on a road trip is in that range.
For those who have bought many LED lightbulbs in batches, the chances that none of them fail prematurely within the first few months are in that range, as many will attest.
etc...
Yes, I have more than 96% credence in lots of things. But it’s crazy to expect me to have it in singularity by 2030, even after I said that singularity would probably happen by 2030.
I read ‘supremely confident’ as implying an extraordinary, exceptional level of confidence, hence my previous comment about an odd understanding of probability. If you didn’t mean to imply that, then that’s fine.
Anyways, you are free to reject or ignore any offered bets without needing to write any justification, that’s a well established norm on LW.
… I’d be technically interested in the sense of “greater-than-human intelligence that is better at improving itself than humans are, and does so, driving technological advancement”, but I’m skeptical about all the other assumptions bundled into the term ‘singularity’. Though to be fair, that does make it easier to think about actually betting on.