Someone who is interested in learning and doing good.
My Twitter: https://twitter.com/MatthewJBar
My Substack: https://matthewbarnett.substack.com/
For what it’s worth, while my credence in human extinction from AI in the 21st century is 10-20%, I think the chance of human extinction in the next 5 years is much lower. I’d put that at around 1%. The main way I think AI could cause human extinction is by just generally accelerating technology and making the world a scarier and more dangerous place to live. I don’t really buy the model in which an AI will soon foom until it becomes a ~god.
I’m confused about why your <20% isn’t sufficient for you to want to shut down AI research. Is it because the benefits outweigh the risks, or because we’ll gain evidence about potential danger and can shut down later if necessary?
I think the expected benefits outweigh the risks, given that I care about the existing generation of humans (to a large, though not overwhelming, degree). The expected benefits here likely include (in my opinion) a large reduction in global mortality, a very large increase in the quality of life, a huge expansion in material well-being, and more generally a larger and more vibrant world earlier in time. Without AGI, I think most existing people would probably die and get replaced by the next generation of humans, in a much poorer world (compared to the alternative).
I also think the absolute level of risk from AI barely decreases if we globally pause. My best guess is that pausing would mainly just delay adoption without significantly impacting safety. Under my model of AI, the primary risks are long-term, and will arise substantially after humans have already gradually “handed control” over to the AIs and retired their labor on a large scale. Most of these problems—such as cultural drift and evolution—do not seem to be the type of issue that can be satisfactorily solved in advance, during a pause (especially by working out a mathematical theory of AI, or something like that).
On the level of analogy, I think of AI development as more similar to “handing off control to our children” than “developing a technology that disempowers all humans at a discrete moment in time”. In general, I think the transition period to AI will be more diffuse and incremental than MIRI seems to imagine, and there won’t be a sharp distinction between “human values” and “AI values” either during, or after the period.
(I also think AIs will probably be conscious in a way that’s morally important, in case that matters to you.)
In fact, I think it’s quite plausible the absolute level of AI risk would increase under a global pause, rather than going down, given the high level of centralization of power required to achieve a global pause, and the perverse institutions and cultural values that would likely arise under such a regime of strict controls. As a result, even if I weren’t concerned at all about the current generation of humans, and their welfare, I’d still be pretty hesitant to push pause on the entire technology.
(I think of technology as itself being pretty risky, but worth it. To me, pushing pause on AI is like pushing pause on technology itself, in the sense that they’re both generically risky yet simultaneously seem great on average. Yes, there are dangers ahead. But I think we can be careful and cautious without completely ripping up all the value for ourselves.)
Chemists would give an example of chemical reactions, where final thermodynamically stable states are easy to predict, while unstable intermediate states are very hard to even observe.
I agree there are examples where the end state is easier to predict than the intermediate states. Here, it’s because we have strong empirical and theoretical reasons to think that chemicals will settle into some equilibrium after a reaction. With AGI, I have yet to see a compelling argument for why we should expect a specific easy-to-predict equilibrium state after it’s developed, one which somehow depends very little on how the technology is developed.
It’s also important to note that, even if we know that there will be an equilibrium state after AGI, more evidence is generally needed to establish that the end equilibrium state will specifically be one in which all humans die.
And why don’t you accept the classic MIRI example that even if it’s impossible for a human to predict the moves of Stockfish 16, you can be certain that Stockfish will win?
I don’t accept this argument as a good reason to think doom is highly predictable partly because I think the argument is dramatically underspecified without shoehorning in assumptions about what AGI will look like to make the argument more comprehensible. I generally classify arguments like this under the category of “analogies that are hard to interpret because the assumptions are so unclear”.
To help explain my frustration at the argument’s ambiguity, I’ll just give a small yet certainly non-exhaustive set of questions I have about this argument:
Are we imagining that creating an AGI implies that we play a zero-sum game against it? Why?
Why is it a simple human vs. AGI game anyway? Does that mean we’re lumping together all the humans into a single agent, and all the AGIs into another agent, and then they play off against each other like a chess match? What is the justification for believing the battle will be binary like this?
Are we assuming the AGI wants to win? Maybe it’s not an agent at all. Or maybe it’s an agent but not the type of agent that wants this particular type of outcome.
What does “win” mean in the general case here? Does it mean the AGI merely gets more resources than us, or does it mean the AGI kills everyone? These seem like different yet legitimate ways that one can “win” in life, with dramatically different implications for the losing parties.
There’s a lot more I can say here, but the basic point I want to make is that once you start fleshing this argument out, and giving it details, I think it starts to look a lot weaker than the general heuristic that Stockfish 16 will reliably beat humans in chess, even if we can’t predict its exact moves.
There’s a pretty big difference between statements like “superintelligence is physically possible”, “superintelligence could be dangerous” and statements like “doom is >80% likely in the 21st century unless we globally pause”. I agree with (and am not objecting to) the former claims, but I don’t agree with the latter claim.
I also agree that it’s sometimes true that endpoints are easier to predict than intermediate points. I haven’t seen Eliezer give a reasonable defense of this thesis as it applies to his doom model. If all he means here is that superintelligence is possible, it will one day be developed, and we should be cautious when developing it, then I don’t disagree. But I think he’s saying a lot more than that.
I think it’s more similar to saying that the climate in 2040 is less predictable than the climate in 2100, or that the weather 3 days from now is less predictable than the weather 10 days from now—neither of which is true. By contrast, the weather vs. climate distinction is more a difference between predicting point estimates and predicting averages.
I unfortunately am busy right now but would love to give a fuller response someday, especially if you are genuinely interested to hear what I have to say (which I doubt, given your attitude towards MIRI).
I’m a bit surprised you suspect I wouldn’t be interested in hearing what you have to say?
I think the amount of time I’ve spent engaging with MIRI perspectives over the years provides strong evidence that I’m interested in hearing opposing perspectives on this issue. I’d guess I’ve engaged with MIRI perspectives vastly more than almost everyone on Earth who explicitly disagrees with them as strongly as I do (although obviously some people like Paul Christiano and other AI safety researchers have engaged with them even more than me).
(I might not reply to you, but that’s definitely not because I wouldn’t be interested in what you have to say. I read virtually every comment-reply to me carefully, even if I don’t end up replying.)
I appreciate the straightforward and honest nature of this communication strategy, in the sense of “telling it like it is” and not hiding behind obscure or vague language. In that same spirit, I’ll provide my brief, yet similarly straightforward reaction to this announcement:
I think MIRI is incorrect in their assessment of the likelihood of human extinction from AI. As per their messaging, several people at MIRI seem to believe that doom is >80% likely in the 21st century (conditional on no global pause) whereas I think it’s more like <20%.
MIRI’s arguments for doom are often difficult to pin down, given their informal nature and their heavy reliance on analogies, metaphors, and vague supporting claims instead of concrete, empirically verifiable models. Consequently, I find it challenging to respond to MIRI’s arguments precisely. The fact that they want to essentially shut down the field of AI based on these largely informal arguments seems premature to me.
MIRI researchers rarely provide any novel predictions about what will happen before AI doom, making their theories of doom appear unfalsifiable. This frustrates me. Given a low prior probability of doom as apparent from the empirical track record of technological progress, I think we should generally be skeptical of purely theoretical arguments for doom, especially if they are vague and make no novel, verifiable predictions prior to doom.
Separately from the previous two points, MIRI’s current most prominent arguments for doom seem very weak to me. Their broad model of doom appears to be something like the following (although they would almost certainly object to the minutiae of how I have written it here):
(1) At some point in the future, a powerful AGI will be created. This AGI will be qualitatively distinct from previous, more narrow AIs. Unlike concepts such as “the economy”, “GPT-4”, or “Microsoft”, this AGI is not a mere collection of entities or tools integrated into broader society that can automate labor, share knowledge, and collaborate on a wide scale. This AGI is instead conceived of as a unified and coherent decision agent, with its own long-term values that it acquired during training. As a result, it can do things like lie about all of its fundamental values and conjure up plans of world domination, by itself, without any risk of this information being exposed to the wider world.
(2) This AGI, via some process such as recursive self-improvement, will rapidly “foom” until it becomes essentially an immortal god, at which point it will be able to do almost anything physically attainable, including taking over the world at almost no cost or risk to itself. While recursive self-improvement is the easiest mechanism to imagine here, it is not the only way this could happen.
(3) The long-term values of this AGI will bear almost no relation to the values that we tried to instill through explicit training, because of difficulties in inner alignment (i.e., a specific version of the general phenomenon of models failing to generalize correctly from training data). This implies that the AGI will care almost literally 0% about the welfare of humans (despite potentially being initially trained from the ground up on human data, and carefully inspected and tested by humans for signs of misalignment, in diverse situations and environments). Instead, this AGI will pursue a completely meaningless goal until the heat death of the universe.
(4) Therefore, the AGI will kill literally everyone after fooming and taking over the world.
It is difficult to explain in a brief comment why I think the argument just given is very weak. Instead of going into the various subclaims here in detail, for now I want to simply say, “If your model of reality has the power to make these sweeping claims with high confidence, then you should almost certainly be able to use your model of reality to make novel predictions about the state of the world prior to AI doom that would help others determine if your model is correct.”
The fact that MIRI has yet to produce (to my knowledge) any major empirically validated predictions or important practical insights into the nature of AI, or AI progress, in the last 20 years, undermines the idea that they have the type of special insight into AI that would allow them to express high confidence in a doom model like the one outlined above.
Eliezer’s response to claims about unfalsifiability, namely that “predicting endpoints is easier than predicting intermediate points”, seems like a cop-out to me, since this would seem to reverse the usual pattern in forecasting and prediction, without good reason.
Since I think AI will most likely be a very good thing for currently existing people, I am much more hesitant to “shut everything down” compared to MIRI. I perceive MIRI researchers as broadly well-intentioned, thoughtful, yet ultimately fundamentally wrong in their worldview on the central questions that they research, and therefore likely to do harm to the world. This admittedly makes me sad to think about.
He did talk about enforcing a global treaty backed by the threat of force (because all law is ultimately backed by violence, don’t pretend otherwise)
Most international treaties are not backed by military force, such as the threat of airstrikes. They’re typically backed by more informal pressures, such as diplomatic isolation, conditional aid, sanctions, asset freezing, damage to credibility and reputation, and threats of mutual defection (i.e., “if you don’t follow the treaty, then I won’t either”). It seems bad to me that Eliezer’s article incidentally amplified the idea that most international treaties are backed by straightforward threats of war, because that idea is not true.
I also expect AIs to be constrained by social norms, laws, and societal values. But I think there’s a distinction between how AIs will be constrained and how AIs will try to help humans. Although it often censors certain topics, Google still usually delivers the results the user wants, rather than serving some broader social agenda upon each query. Likewise, ChatGPT is constrained by social mores, but it’s still better described as a user assistant, not as an engine for social change or as a benevolent agent that acts on behalf of humanity.
No arbitrarily powerful AI could succeed at taking over the world
This is closest to what I am saying. The current world appears to be in a state of inter-agent competition. Even as technology has gotten more advanced, and as agents have gotten more powerful over time, no single unified agent has been able to obtain control over everything and win the entire pie, defeating all the other agents. I think we should expect this state of affairs to continue even after AGI is invented and technology continues to get more powerful.
(One plausible exception to the idea that “no single agent has ever won the competition over the world” is the human species itself, which dominates over other animal species. But I don’t think the human species is well-described as a unified agent, and I think our power comes mostly from accumulated technological abilities, rather than raw intelligence by itself. This distinction is important because the effects of technological innovation generally diffuse across society rather than giving highly concentrated powers to the people who invent stuff. This generally makes the situation with humans vs. animals disanalogous to a hypothetical AGI foom in several important ways.)
Separately, I also think that even if an AGI agent could violently take over the world, it would likely not be rational for it to try, due to the fact that compromising with the rest of the world would be a less risky and more efficient way of achieving its goals. I’ve written about these ideas in a shortform thread here.
It sounds like you’re thinking mostly of AI and not AGI that can self-improve at some point
I think you can simply have an economy of arbitrarily powerful AGI services, some of which contribute to R&D in a way that feeds into the entire development process recursively. There’s nothing here about my picture that rejects general intelligence, or R&D feedback loops.
My guess is that the actual disagreement here is that you think that at some point a unified AGI will foom and take over the world, becoming a centralized authority that is able to exert its will on everything else without constraint. I don’t think that’s likely to happen. Instead, I think we’ll see inter-agent competition and decentralization indefinitely (albeit with increasing economies of scale, prompting larger bureaucratic organizations, in the age of AGI).
Here’s something I wrote that seems vaguely relevant, and might give you a sense as to what I’m imagining,
Given that we are already seeing market forces shaping the values of existing commercialized AIs, it is confusing to me why an EA would assume this fact will at some point no longer be true. To explain this, my best guess is that many EAs have roughly the following model of AI development:
There is “narrow AI”, which will be commercialized, and its values will be determined by market forces, regulation, and to a limited degree, the values of AI developers. In this category we find GPT-4 from OpenAI, Gemini from Google, and presumably at least a few future iterations of these products.
Then there is “general AI”, which will at some point arrive, and is qualitatively different from narrow AI. Its values will be determined almost solely by the intentions of the first team to develop AGI, assuming they solve the technical problems of value alignment.
My advice is that we should probably just drop the second step, and think of future AI as simply continuing from the first step indefinitely, albeit with AIs becoming incrementally more general and more capable over time.
Yes, but I don’t consider this outcome very pessimistic because this is already what the current world looks like. How commonly do businesses work for the common good of all humanity, rather than for the sake of their shareholders? The world is not a utopia, but I guess that’s something I’ve already gotten used to.
I think we probably disagree substantially on the difficulty of alignment and the relationship between “resources invested in alignment technology” and “what fraction aligned those AIs are” (by fraction aligned, I mean what fraction of resources they take as a cut).
That’s plausible. If you think that we can likely solve the problem of ensuring that our AIs stay perfectly obedient and aligned to our wishes perpetually, then you are indeed more optimistic than I am. Ironically, by virtue of my pessimism, I’m more happy to roll the dice and hasten the arrival of imperfect AI, because I don’t think it’s worth trying very hard and waiting a long time to try to come up with a perfect solution that likely doesn’t exist.
I also think that something like a basin of corrigibility is plausible and maybe important: if you have mostly aligned AIs, you can use such AIs to further improve alignment, potentially rapidly.
I mostly see corrigible AI as a short-term solution (although a lot depends on how you define this term). I thought the idea of a corrigible AI is that you’re trying to build something that isn’t itself independent and agentic, but will help you in your goals regardless. In this sense, GPT-4 is corrigible, because it’s not an independent entity that tries to pursue long-term goals, but it will try to help you.
But purely corrigible AIs seem pretty obviously uncompetitive with more agentic AIs in the long-run, for almost any large-scale goal that you have in mind. Ideally, you eventually want to hire something that doesn’t require much oversight and operates relatively independently from you. It’s a bit like how, when hiring an employee, at first you want to teach them everything you can and monitor their work, but eventually, you want them to take charge and run things themselves as best they can, without much oversight.
And I’m not convinced you could use corrigible AIs to help you come up with the perfect solution to AI alignment, as I’m not convinced that something like that exists. So, ultimately I think we’re probably just going to deploy autonomous slightly misaligned AI agents (and again, I’m pretty happy to do that, because I don’t think it would be catastrophic except maybe over the very long-run).
I think various governments will find it unacceptable to construct massively powerful agents extremely quickly which aren’t under the control of their citizens or leaders.
I think people will justifiably freak out if AIs clearly have long run preferences and are powerful and this isn’t currently how people are thinking about the situation.
For what it’s worth, I’m not sure which part of my scenario you are referring to here, because these are both statements I agree with.
In fact, this consideration is a major part of my general aversion to pushing for an AI pause, because, as you say, governments will already be quite skeptical of quickly deploying massively powerful agents that we can’t fully control. By default, I think people will probably freak out and try to slow down advanced AI, even without any intervention from current effective altruists and rationalists. By contrast, I’m a lot more ready to roll out the autonomous AI agents that we can’t fully control compared to the median person, simply because I see a lot of value in hastening the arrival of such agents (i.e., I don’t find that outcome as scary as most other people seem to imagine).
At the same time, I don’t think people will pause forever. I expect people to go more slowly than what I’d prefer, but I don’t expect people to pause AI for centuries either. And in due course, so long as at least some non-negligible misalignment “slips through the cracks”, then AIs will become more and more independent (both behaviorally and legally), their values will slowly drift, and humans will gradually lose control—not overnight, or all at once, but eventually.
Naively, it seems like it should undercut their wages to subsistence levels (just paying for the compute they run on). Even putting aside the potential for alignment, it seems like there will generally be a strong pressure toward AIs operating at subsistence given low costs of copying.
I largely agree. However, I’m having trouble seeing how this idea challenges what I am trying to say. I agree that people will try to undercut unaligned AIs by making new AIs that do more of what they want instead. However, unless all the new AIs perfectly share the humans’ values, you just get the same issue as before, but perhaps slightly less severe (i.e., the new AIs will gradually drift away from humans too).
I think what’s crucial here is that I think perfect alignment is very likely unattainable. If that’s true, then we’ll get some form of “value drift” in almost any realistic scenario. Over long periods, the world will start to look alien and inhuman. Here, the difficulty of alignment mostly sets how quickly this drift will occur, rather than determining whether the drift occurs at all.
A thing I always feel like I’m missing in your stories of how the future goes is “if it is obvious that the AIs are exerting substantial influence and acquiring money/power, why don’t people train competitor AIs which don’t take a cut?”
People could try to do that. In fact, I expect them to do that, at first. However, people generally don’t have unlimited patience, and they aren’t perfectionists. If people don’t think that a perfectly robustly aligned AI is attainable (and I strongly doubt this type of entity is attainable), then they may be happy to compromise by adopting imperfect (and slightly power-seeking) AI as an alternative. Eventually people will think we’ve done “enough” alignment work, even if it doesn’t guarantee full control over everything the AIs ever do, and simply deploy the AIs that we can actually build.
This story makes sense to me because I think even imperfect AIs will be a great deal for humanity. In my story, the loss of control will be gradual enough that probably most people will tolerate it, given the massive near-term benefits of quick AI adoption. To the extent people don’t want things to change quickly, they can (and probably will) pass regulations to slow things down. But I don’t expect people to support total stasis. It’s more likely that people will permit some continuous loss of control, implicitly, in exchange for hastening the upside benefits of adopting AI.
Even a very gradual loss of control, continuously compounded, eventually means that humans won’t fully be in charge anymore.
In the medium to long-term, when AIs become legal persons, “replacing them” won’t be an option—as that would violate their rights. And creating a new AI to compete with them wouldn’t eliminate them entirely. It would just reduce their power somewhat by undercutting their wages or bargaining power.
Most of my “doom” scenarios are largely about what happens long after AIs have established a footing in the legal and social sphere, rather than the initial transition period when we’re first starting to automate labor. When AIs have established themselves as autonomous entities in their own right, they can push the world in directions that biological humans don’t like, for much the same reasons that young people can currently push the world in directions that old people don’t like.
Everything seems to be going great, the AI systems vasten, growth accelerates, etc, but there is mysteriously little progress in uploading or life extension, the decline in fertility accelerates, and in a few decades most of the economy and wealth is controlled entirely by de novo AI; bio humans are left behind and marginalized.
I agree with the first part of your AI doom scenario (the part about us adopting AI technologies broadly and incrementally), but this part of the picture seems unrealistic to me. When AIs start to influence culture, it probably won’t be a big conspiracy. It won’t really be “mysterious” if things start trending away from what most humans want. It will likely just look like how cultural drift generally always looks: scary because it’s out of your individual control, but nonetheless largely decentralized, transparent, and driven by pretty banal motives.
AIs probably won’t be “out to get us”, even if they’re unaligned. For example, I don’t anticipate them blocking funding for uploading and life extension, although maybe that could happen. I think human influence could simply decline in relative terms even without these dramatic components to the story. We’ll simply become “old” and obsolete, and our power will wane as AIs become increasingly autonomous, legally independent, and more adapted to the modern environment than we are.
Staying in permanent control of the future seems like a long, hard battle. And it’s not clear to me that this is a battle we should even try to fight in the long run. Gradually, humans may eventually lose control—not because of a sudden coup or because of coordinated scheming against the human species—but simply because humans won’t be the only relevant minds in the world anymore.
I think the main reason why we won’t align AGIs to some abstract conception of “human values” is because users won’t want to rent or purchase AI services that are aligned to such a broad, altruistic target. Imagine a version of GPT-4 that, instead of helping you, used its time and compute resources to do whatever was optimal for humanity as a whole. Even if that were a great thing for GPT-4 to do from a moral perspective, most users aren’t looking for charity when they sign up for ChatGPT, and they wouldn’t be interested in signing up for such a service. They’re just looking for an AI that helps them do whatever they personally want.
In the future I expect this fact will remain true. Broadly speaking, people will spend their resources on AI services to achieve their own goals, not the goals of humanity-as-a-whole. This will likely look a lot more like “an economy of AIs who (primarily) serve humans” rather than “a monolithic AGI that does stuff for the world (for good or ill)”. The first picture just seems like a default extrapolation of current trends. The second picture, by contrast, seems like a naive conception of the future that (perhaps uncharitably) the LessWrong community generally seems way too anchored on, for historical reasons.
I’m not sure if you’d categorize this under “scaling actually hitting a wall” but the main possibility that feels relevant in my mind is that progress simply is incremental in this case, as a fact about the world, rather than being a strategic choice on behalf of OpenAI. When underlying progress is itself incremental, it makes sense to release frequent small updates. This is common in the software industry, and would not at all be surprising if what’s often true for most software development holds for OpenAI as well.
(Though I also expect GPT-5 to be a medium-sized jump, once it comes out.)
Yes, I expect AI labs will run extensive safety tests in the future on their systems before deployment. Mostly this is because I think people will care a lot more about safety as the systems get more powerful, especially as they become more economically significant and the government starts regulating the technology. I think regulatory forces will likely be quite strong at the moment AIs are becoming slightly smarter than humans. Intuitively I anticipate the 5 FTE-year threshold to be well-exceeded before such a model release.
You’ve made detailed predictions about what you expect in the next several years, on numerous occasions, and made several good-faith attempts to elucidate your models of AI concretely. There are many ways we disagree, and many ways I could characterize your views, but “unfalsifiable” is not a label I would tend to use for your opinions on AI. I do not mentally lump you together with MIRI in any strong sense.