List #1: Why stopping the development of AGI is hard but doable
A friend in technical AI Safety shared a list of cruxes for their next career step.
The first crux was that they did not believe that progress on AI can be expected to stop.
Copy-pasting a list that I compiled in response (with light edits):
Why stopping the development of AGI is hard but doable
History of technological restraint by institutions:
From Matthijs Maas’ essay:
“In short: some expect that we could not meaningfully slow or halt the development of A(G)I even if we expect extreme risks.
“Yet there is a surprisingly diverse historical track record of technological delay and restraint, even for strategically promising technologies that were seen as ‘obvious’ and near-inevitable in their time.
“Epistemic hurdles around studying ‘undeployed’ technologies make it likely that we underestimate the frequency of restraint decisions, or misinterpret their causes. From an outside view, this should lead us to be slightly more optimistic about the viability of restraint for future technologies.
A lot of tech does not get developed:
From Katja Grace’s Slow Down AI post:
“There seems to be a common thought that technology is a kind of inevitable path along which the world must tread, and that trying to slow down or avoid any part of it would be both futile and extreme.
“But empirically, the world doesn’t pursue every technology—it barely pursues any technologies.
Sucky technologies
For a start, there are many machines that there is no pressure to make, because they have no value. Consider a machine that sprays shit in your eyes. We can technologically do that, but probably nobody has ever built that machine.
Extremely valuable technologies
It doesn’t look like it to me. Here are a few technologies which I’d guess have substantial economic value, where research progress or uptake appears to be drastically slower than it could be, for reasons of concern about safety or ethics: …
Coordination is not miraculous world government, usually
The common image of coordination seems to be explicit, centralized, involving of every party in the world, and something like cooperating on a prisoners’ dilemma: incentives push every rational party toward defection at all times, yet maybe through deontological virtues or sophisticated decision theories or strong international treaties, everyone manages to not defect for enough teetering moments to find another solution.
That is a possible way coordination could be. (And I think one that shouldn’t be seen as so hopeless—the world has actually coordinated on some impressive things, e.g. nuclear non-proliferation.) But if what you want is for lots of people to coincide in doing one thing when they might have done another, then there are quite a few ways of achieving that.
Consider some other case studies of coordinated behavior: …
Founder’s effects and techno-utopian bias:
The two key founders of the AI-Alignment community, Nick Bostrom and Eliezer Yudkowsky, each founded a techno-utopian organisation before (World Transhumanist Association and Singularity Institute, respectively).
Each with a bias toward:
there being an ‘inevitable march of progress’ of technology,
that more ‘advanced’ technology is good,
that technology can be controlled so as not to have hazardous effects on human society and the wider ecosystem that we’re all part of.
The people who self-selected into this community on average resonated more with these assumptions about technological “progress” and were inculcated over time to hold them more strongly.
Frankly, we are acting exactly like the AGI researchers here: “progress” toward AGI is inevitable anyway, so that excuses focusing all of my work on these interesting geeky technical puzzles of how to build (safe) AGI.
From Zoe Cremer and Luke Kemp on Democratising Risk:
“The historically dominant techno-utopian approach (henceforth the “TUA”) played an important role in establishing the field and drawing attention to the significance of studying human extinction. It is time to examine this approach with a critical eye. We do this to identify weaknesses, areas for further investigation and the need to also explore alternative approaches...
“We understand it to be primarily based on three main pillars of belief:
transhumanism, total utilitarianism and strong longtermism.
“More precisely:
(1) the belief that a maximally technologically developed future could contain (and is defined in terms of) enormous quantities of utilitarian intrinsic value, particularly due to more fulfilling posthuman modes of living
(2) the failure to fully realise or have capacity to realise this potential value would constitute an existential catastrophe
(3) we have an overwhelming moral obligation to ensure that such value is realised by avoiding an existential catastrophe, including through exceptional actions.
“Risk assessment has evolved dramatically in past decades.
Scholars now commonly analyse systemic risk (the ability for a single disruption to cascade into systems failures), how risks can cascade across borders and sectors, and how failures in critical systems can synchronise and reinforce each other.
This has led to new forms of complex risk assessment, particularly in climate science and disaster risk reduction. The Intergovernmental Panel on Climate Change (IPCC) sees risk as composed of vulnerabilities, hazards, and exposures, as well as response risks.
Similarly, others have suggested that a complex risk assessment needs to consider four determinants of risk (hazard, vulnerability, exposure, response) as well as how risks link and cascade. Understanding the common drivers across each of these determinants is critical to mitigation efforts...
“Most existential risk texts take a simpler, hazard-centric approach.
They tend to focus on a few selected hazards: biologically engineered pandemics, Artificial General Intelligence (AGI), nuclear war, climate change, and asteroid strikes.
As currently framed, TUA equates risk with hazard and ignores the wider literature on risk assessment in fields such as disaster risk reduction.
“The choice to structure risk assessment this way has not been explained or defended.
“It may have been chosen due to an implicit techno-determinist threat-model:
the TUA often appears to assume an exogenous threat model in which existential hazards naturally and apolitically arise from inevitable and near-autonomous technological progress.
The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible...
“Bostrom proposed a “Technological Completion Conjecture”:
if technological developments do not cease, then all important, basic technological capabilities will be obtained in the long run. Others offer a more sophisticated view, in which military-economic competition exerts a powerful selection pressure on technological development. This “military-economic adaptionism” constrains sociotechnical change to deterministic paths. Technologies that gift a strong strategic advantage will almost certainly be built. Many in the related Effective Altruism community disregard controlling technology on the grounds of a perceived lack of tractability.
“From a techno-utopian perspective, a failure to build these dangerous, powerful technologies is an existential risk. Bostrom, aware of the tension arising from recommending the (albeit careful) development of technologies, warns:
“We should not blame civilization or technology for imposing big existential risks. Because of the way we have defined existential risks, a failure to develop technological civilization would imply that we had fallen victims of an existential disaster. […] Without technology, our chances of avoiding existential risk would therefore be nil”.
“Whether it is technological determinism, the more nuanced military-economic adaptationism model, or concerns around tractability, the result is the same: regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option. The proposed alternative is “differential technological development”: speeding up and slowing down different technologies to ensure they occur in the safest order possible. Why this is more tractable or effective than bans, moratoriums and other measures has not been fully explained and defended.
From Timnit Gebru:
“Apparently they need to build “AGI,” whatever that is, but it’s also the single thing that poses the biggest “existential risk” to humanity.
“Then why are you building it?
“Argument seems to be if I build it, it will be the good kind.
DAIR:
“AI needs to be brought back down to earth…
“It has been elevated to a superhuman level that leads us to believe it is both inevitable and beyond our control. When AI research, development and deployment is rooted in people and communities from the start, we can get in front of these harms and create a future that values equity and humanity.
I think that this may be the case, but I would be much more cautious about trying to regulate AI development. I’d start with baby steps that mostly won’t cost too much or provoke backlash, like interpretability research.
My model of the situation is:
People are more or less rational; that is, we shouldn’t expect deviations from rational-agent models.
People are mostly selfish, with altruism being essentially signalling, which has little value here.
AI has enough of a chance to bring vastly positive changes on par with a singularity that it dominates other considerations.
In other words, even if there was a 1% chance of a singularity, it would have enough impact that even believing in high AI risk is insufficient to get the population on your side.
In a nutshell, this is why I do not think the post is correct, and why I think the AI governance/digital democracy/privacy movements are way overestimating what costs can be imposed on AI companies (also known as alignment taxes).
I think AI governance could be surprisingly useful. But attempts to slow things down significantly are mostly unrealistic for the time being.
Good to read your thoughts.
I would agree that slowing further AI capability generalisation developments down by more than half in the coming years is highly improbable. Got to work with what we have.
My mental model of the situation is different.
People engage in positively reinforcing dynamics around social prestige and market profit, even if what they are doing is net bad for what they care about over the long run.
People are mostly egocentric, and have difficulty connecting and relating, particularly in the current individualistic social signalling and “divide and conquer” market environment.
Scaling up deployable capabilities of AI has enough of a chance to reap extractive benefits for narcissistic/psychopathic tech leader types, that they will go ahead with it, while sowing the world with techno-optimistic visions that suit their strategy. That is, even though general AI will (cannot not) lead to wholesale destruction of everything we care about in the society and larger environment we’re part of.
So we agree that people are selfish/egocentric, essentially.
My problem is that from a selfish perspective, even a low chance of the technological singularity (let’s say you survive to see what is, from your perspective, essentially a near-heaven) outweighs the high chance of harm to the self and others, by multiple orders of magnitude. Arguably more than 10 orders of magnitude.
Even most non-narcissists/non-psychopaths would take this deal, and unless convenient plot-induced stupidity occurs, we should expect this again and again.
So I disagree with numbers 1 and 3, since given their selfishness, they can distribute externalities to others.
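The expected-value intuition behind this can be made concrete with toy numbers (every probability and utility below is an illustrative assumption, not a figure from this thread):

```python
# Toy expected-value comparison for a selfish actor, with made-up numbers:
# the upside of a "personal near-heaven" is assumed ~10 orders of magnitude
# larger than the downside of catastrophe, which is bounded because the
# actor can only lose their one life (externalities fall on others).
p_singularity = 0.01    # assumed low chance the gamble pays off
u_singularity = 1e12    # assumed utility of a personal utopia (arbitrary units)
p_catastrophe = 0.99    # assumed high chance of harm
u_catastrophe = -1e2    # assumed bounded downside for the actor themselves

ev_build = p_singularity * u_singularity + p_catastrophe * u_catastrophe
ev_abstain = 0.0        # status quo

# Even at 1% odds, the upside swamps the bounded downside,
# so this model of a selfish actor chooses to build.
print(ev_build, ev_build > ev_abstain)
```

Under these assumptions the gamble comes out positive by a factor of about 10^8 over the expected harm to the actor, which is the asymmetry the comment is pointing at.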
I think there are a bunch of relevant but subtle differences in how we are thinking about this. My beliefs after quite a lot of thinking are:
A. Most people don’t care about the tech singularity. People are captured by AI hype cycles though, especially people who work under the tech elite. The general public is much more wary of current uses of AI overall, and is starting to notice the harms in their daily lives (eg. addictive, ideology- and distorted-self-image-reinforcing social media; exploitative work gigs handed to them by algorithms).
B. Tech singularity, as envisioned in the past, involved a lot of motivated and simplifying reasoning about directing the complex world into utopias using complicated tech that cannot realistically be caused to happen using those methods. Tech elites like to co-opt these nerdy utopian visions for their own ends.
C. By your descriptions, I think you are essentialising humans as rational individuals who are socially signalling for self-benefit. I’m actually saying that, yes, people are egocentric right now, particularly in the neoliberal consumption-oriented market and self-presentation-oriented culture we are exposed to right now. But also, humans are social creatures and can relate and interact based on deeper shared needs. So in that, I’m not essentialising people as fundamentally selfish. I’m saying that within the social environment on top of our tribal and sex+survival oriented psychological predispositions, people come out as particularly egocentric.
D. I don’t think baby steps are going to do it, given that we’re dealing with potential auto-scaling/catalysing technology that would mark the end of organic DNA-based life. The baby steps description reminds me of various scenes in the film “Don’t Look Up” where bystanders kept signalling to the main actors not to “overdo it”.
E. Interpretability techniques are used by tech elites to justify further capability developments. Interpretability techniques do not and cannot contribute to long-term AGI safety (https://www.lesswrong.com/posts/NeNRy8iQv4YtzpTfa/why-mechanistic-interpretability-does-not-and-cannot).
So 1 and 3 were my descriptions about what is actually happening and how that would continue, not about the end conclusion of what’s happening. To disagree with the former, I think you would need to clarify your observations/analysis of why something opposite/different is happening.
One key point to keep in mind is that my arguments aren’t about refuting the idea of slowing down AI, instead it’s about offering a reality check.
The reason I said baby steps is that 1. they might be enough, but 2. even if they aren’t, one common failure mode in politics is to go fully maximalist in your agenda first. That is a route to failure for your agenda. It is better to advance your agenda starting from the least controversial/costly measures, and only then, if necessary, add more costly/controversial laws. Even so, this is extremely delicate: a single case of bad publicity, or anything else that makes governing AI very controversial, may well doom the effort.
Another lesson from politics is that your opposition (AI companies) is probably rational but has very different goals from the median LW/EA person. So we shouldn’t expect unusually easy wins in this area, and progress will likely be slow, especially in lobbying.
It’s still very useful for AI governance to do it: the high risk does not mean there aren’t high rewards, especially if you think AI alignment is possible, and governance can help AI alignment do its best as well as prevent s-risks. But I do think AI governance may be overestimating what costs the public and companies are willing to bear for regulations, especially if AI companies can externalise costs.
For example, the climate change agenda stalled until solar, wind and batteries became cheap enough in the 2010s that moving away from fossil fuels represented a very cheap way to decarbonize. And still there’s some opposition here.
That’s clarifying. I agree that immediately trying to impose costly/controversial laws would be bad.
What I am personally thinking about first here is “actually trying to clarify the concerns and find consensus with other movements concerned about AI developments” (which by itself does not involve immediate radical law reforms).
We first need to have a basis of common understanding from which legislation can be drawn.
Let me also copy over Forrest’s (my collaborator) notes here:
I am honestly very confused on how Forrest is so confident that radical positive changes will not happen in our lifetime.
More importantly, he seems to be complaining that his opponents have different goals, and claims they’re selectively rational. Heads up: rational behavior can only be assessed once goals are specified. Now, his goals are probably much less selfish than those of the people who want AI progress to speed up, so relative to those goals it’s not rational for AI capabilities to increase. I too do not think AI progress is beneficial, and believe it is probably harmful, so I’d slow down the progress too.
This is critical, because Forrest is misidentifying why proponents of AI progress want AI to progress. The fact that they have very different goals from yours is the reason they want AI to progress; it is not a rationality failure.
Another critical crux is I am far more optimistic than Forrest or Remmelt on AGI Alignment working out in the end. If I had a pessimism level comparable to Forrest or Remmelt, I too would probably advocate far more around governance strategies.
This is for several reasons:
My general prior is that most problems are solvable. This doesn’t always hold (see the unsolvability of the halting problem, or the likely impossibility of a perpetual motion machine), but my prior is that if there isn’t a theorem prohibiting something and it doesn’t rely on violating the laws of physics, it’s solvable. And AGI alignment is in this spot.
I believe alignment is progressing (not enough, to be clear), but if AI alignment were as well resourced as AI capabilities research, I’d give it a fair shot at solving the problem.
Finally, time. In the more conservative story described here, it still takes 20-30 years. And while AGI now would probably be incompatible with life due to instrumental convergence and inner alignment failures, unless you have extremely pessimistic beliefs about progress in AI alignment, this is the type of time frame over which I’d place 60% probability on having a working solution to the AGI alignment problem.
Responding below:
That prior for most problems being solvable is not justified. For starters, because you did not provide any reasons above to justify why beneficial AGI is not like a perpetual motion machine, AKA a “perpetual general benefit machine”.
See reasons to shift your prior: https://www.lesswrong.com/posts/Qp6oetspnGpSpRRs4/list-3-why-not-to-assume-on-prior-that-agi-alignment
Again no reasons given for the belief that AGI alignment is “progressing” or would have a “fair shot” of solving “the problem” if as well resourced as capabilities research. Basically nothing to argue against, because you are providing no arguments yet.
No reasons given, again. Presents instrumental convergence and intrinsic optimisation misalignment failures as the (only) threat models in terms of artificial general intelligence incompatibility with organic DNA-based life. Overlooks substrate-needs convergence.
I’ll concede here that I unfortunately do not have good arguments, and I’m updating towards pessimism regarding the alignment problem.
Appreciating your honesty, genuinely!
Always happy to chat further about the substantive arguments. I was initially skeptical of Forrest’s “AGI-alignment is impossible” claim. But after probing and digging into this question intensely over the last year, I could not find anything unsound (in terms of premises) or invalid (in terms of logic) about his core arguments.