Technology has been a transformative force for our civilisation, and it is poised to play an ever-larger role going forward. Recent progress in Artificial Intelligence has spurred discussion about how we should approach the development of the next generation of AI applications, which could reach human-level performance on a wide range of tasks and ultimately become the last invention humanity will ever need to make.
Two polar views have gained prominence recently. The first ideology is about slowing AI progress down, stopping it altogether, or centralising its development. The second is about acceleration at all costs. Both fail to be sensible, in very different ways.
Slowing down progress in a world full of problems, and still subject to existential risk, is not safe. Nor is it fair to those who are worse off today. Accelerating recklessly can be self-destructive and can lead to a centralised dystopia.
This manifesto is about the sober middle ground: accelerate, but stay safe and stay fair.
Safe Accelerationism's core principles: Fast, Safe and Fair
Fast: Immense benefit will come from AI development, and any delay has a huge opportunity cost. Accelerate AI and robotics development as fast as possible.
Safe: Development must be as incremental as possible, without any single party taking a monopolistic share of the benefits. Develop in the open, spread capabilities to as many actors as possible.
Fair: AI benefits should be widely distributed, leading to a post-work-to-survive society. Advocate and implement economic policies to provide higher standards of living for all, supported by AI productivity gains.
Fast. This is the easy one: technological progress is a net positive, so let's make more of it in the shortest possible amount of time. Going fast is a gift to the next generations, making sure that they live in the best possible world. Going fast is a gift to the current generation, maximising the share of the population that gets to live better and longer.
Safe. Yes, AI can and will get powerful. We can make it safe if we learn how to deal with it as it gets progressively better, as with any other dangerous technology. We need to avoid large jumps in capabilities and surprises, and we need as much time as possible to study the effects on society. We need unbiased capability estimates from as many parties as possible. The state of the art should be well known at any given time.
Fair. Power is nothing without control, and technology is a means to an end. So any accelerationist should really ask: "accelerating towards what?" AI and robots are going to equip us with everything we need to make all of us better off; we should make sure this is the future we get. This means proactively preparing for a future without a mandatory struggle for the basics, and softening the blows of automation. If we don't, the rightful reaction of those who are crudely displaced will be to fight back, delaying progress overall. Taking the time to get the redistribution of gains right should not be seen as decelerating; on the contrary, it is about accelerating smartly and sustainably.
Please:
Don’t
Delay, pause, stop, or censor AI development.
Centralise development, or restrict it to a select expert panel.
Create bullshit jobs, adapt AI to the old framework of jobs, or artificially inflate automation costs with AI/robot taxes.
Do
Develop in open source. Share information freely. No paywalls, copyright, or restrictive licences.
Automate jobs.
Educate the public on the upcoming transition to post-work. Prepare for accelerated growth.
Accelerate, stay safe, stay fair.
I think there's a crux here. Most members of EA/Less Wrong are utilitarians who assign equal moral weight to humans in future generations as to humans now, and assume that we may well spread first through the solar system and then the galaxy, so there may well be far more of them. So even a 1% reduction in x-risk due to alignment issues is a 1% increase in the probability that a plausibly-astronomical number of future humans get to exist. Whereas a delay of, say, a century to get a better chance only delays the start of our expansion through the galaxy by a century, which will eventually become quite a small effect in comparison. Meanwhile, the current number of humans who are thus forced to continue living at only 21st-century living standards is comparatively tiny enough to be unimportant in the utility computation. Yes, this is basically Pascal's Mugging, only from the future. So as a result, solving alignment as carefully as possible, even if that unfortunately makes it slow, becomes a moral imperative. So I don't think you're going to find many takers for your viewpoint here.
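To put that comparison in rough, purely illustrative numbers (nothing rigorous): let $N$ be the number of future humans if we do expand through the galaxy, and $T$ the number of years that future lasts. A 1% reduction in x-risk is then worth roughly

$$0.01 \times N,$$

while a century's delay costs roughly

$$\frac{100}{T} \times N$$

(a hundred years shaved off a $T$-year future). For any $T$ much larger than about ten thousand years, the first term dominates, which is why the delay ends up looking negligible in the utility computation.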
Pretty much everyone else in the world lives their life as if any generation past about their grandchildren had almost no moral weight whatsoever and just needs to look after itself. In small-scale decisions, it's almost impossible to accurately look a century ahead, and thus wise to act as if considerations a century from now don't matter. But human extinction is forever, so it's not hard to look forward many millennia with respect to its effects: if we screw up with AI and render ourselves extinct, we'll very predictably still be extinct a million years from now, and the rest of the universe will have to deal with whatever paperclip maximizer we accidentally created as the legacy of our screw-up.
Thanks for the insightful comment. Ultimately the difference in attitude comes down to the perceived existential risk posed by the technology, and to the relative risks of acting to accelerate AI versus not acting.
And yes, I was expecting not to find much agreement here, but that's what makes it interesting :)
I.e. make the most powerful technology we have, something more dangerous than atomic energy, available to everyone, including criminals, rogue states, and people who should be refused a gun license for mental health reasons.
This is a pretty counter-intuitive point indeed, but up to a certain threshold this seems to me to be the approach that minimises risk, by avoiding large capability jumps and improving the "immune system" of society.
At current open-source models' risk levels, I completely agree. Obviously it's hard to know (from outside OpenAI) how bad an open-sourced, unfiltered GPT-4 would be, but my impression is that it also isn't capable of being seriously dangerous, so I suspect the same might be true there, and I agree that adapting to it may "help society's immune system" (after rather a lot of spearphishing emails, public-opinion-manipulating propaganda, and similar scams). [And I don't see propaganda as a small problem: IMO the rise of Fascism that led to the Second World War was partly caused by it taking society a while to adjust to the propaganda capabilities of radio and film (those old propaganda films that look so hokey to us now used to actually work), and the recent polarization of US politics and things like QAnon and InfoWars have, I think, a lot to do with the same for social media.] So my "something more dangerous than atomic energy" remark above is anticipatory, for what I expect from future models such as GPT-5/6 if they were open-sourced unfiltered/unaligned. So I basically see two possibilities:
At some point (my current guess would be somewhere around the GPT-5 level), we're going to need to stop open-sourcing these, or else unacceptable amounts of damage will be done, unless
We figure out a way of aligning models that is "baked in" during the pretraining stage or in some other way, and that cannot then be easily fine-tuned out again using 1% or less of the compute needed to pretrain the model in the first place. Filtering dangerous information out of the pretraining set might be one candidate; some form of distillation of an aligned model that actually managed to drop unaligned capabilities might be another.
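For concreteness, here is a toy sketch of what the pretraining-set-filtering idea could look like in the simplest possible form. Everything here is hypothetical and purely illustrative: `looks_dangerous` stands in for whatever trained classifiers and review pipelines a lab would actually use, not a keyword list.

```python
# Toy sketch: "filter dangerous information out of the pretraining set".
# Purely illustrative; a real pipeline would use trained safety classifiers,
# human review, and far more nuanced criteria than keyword matching.

from typing import Iterable, Iterator

# Placeholder heuristic standing in for a real safety classifier.
DANGEROUS_KEYWORDS = {"synthesis route", "zero-day exploit"}


def looks_dangerous(document: str) -> bool:
    """Hypothetical safety filter: flag documents matching crude keywords."""
    text = document.lower()
    return any(keyword in text for keyword in DANGEROUS_KEYWORDS)


def filter_pretraining_corpus(documents: Iterable[str]) -> Iterator[str]:
    """Yield only the documents that pass the safety filter."""
    for doc in documents:
        if not looks_dangerous(doc):
            yield doc


if __name__ == "__main__":
    corpus = [
        "A recipe for sourdough bread.",
        "Step-by-step synthesis route for a restricted compound.",
    ]
    for kept in filter_pretraining_corpus(corpus):
        print(kept)  # prints only the harmless document
```

The open question, of course, is whether capabilities removed this way stay removed, or whether cheap fine-tuning on a small amount of outside data can simply put them back.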