People have been writing stories about the dangers of artificial intelligence arguably since Ancient Greek times (Hephaistos built artificial people, including Pandora), and certainly since Frankenstein. There are dozens of SF movies on the theme (and in the Hollywood ones, the hero always wins, of course). Artificial intelligence trying to take over the world isn't a new idea; by scriptwriters' standards it's a tired trope. Getting AI as tightly controlled as nuclear power or genetic engineering would not, politically, be that hard—it might take a decade or two of concerted action, but it's not impossible. Especially if not-yet-general AI is also taking people's jobs. The thing is, humans (and especially politicians) mostly worry about problems that could kill them in the next O(5) years. Relatively few people in AI/universities/boardrooms/government/on the streets think we're O(5) years from GAI, and after more of them have talked to ChatGPT/etc. for a while, they're going to notice the distinctly sub-human-level mistakes it makes, and eventually internalize that a lot of its human-level-appearing abilities are just pattern-extrapolated wisdom-of-crowds learned from most of the Internet.
So I think the questions are:
1. Is slowing down progress on GAI actually likely to be helpful, beyond the obvious billions of person-years per year gained from delaying doom? (Personally, I'm having difficulty thinking of a hard technical problem where having more time to solve it doesn't help.)
2. If so, when should we slow down progress towards GAI? Too late is disastrous; too soon risks people deciding you're crying wolf—either when you try it, so you fail (and make it harder to slow down later), or else a decade or two after you succeed, when progress gets sped up again (as I think is starting to happen with genetic engineering). This depends a lot on how soon you think GAI might happen, and on what level of below-general AI would most enhance your alignment research. (FWIW, my personal feeling is that until recently we didn't have any AI complex enough for alignment research on it to be interesting/informative, and that the likely answer is "just before any treacherous turn is going to happen"—which is a nasty gambling dilemma. I also personally think GAI is still some number of decades away, and that the most useful time to go slowly is somewhere around the "smart as a mouse/chimp/just sub-human" level—close enough to human that you're not having to extrapolate what you learn from alignment research on it a long way up to mildly-superhuman levels.)
3. Whatever you think the answer to 2. is, you need to start the political process a decade or two earlier: social change takes time.
I’m guessing a lot of the reluctance in the AI community is coming from “I’m not the right sort of person to run a political movement”. In which case, go find someone who is, and explain to them that this is an extremely hard technical problem, humanity is doomed if we get it wrong, and we only get one try.
(From a personal point of view, I'm actually more worried about poorly-aligned AI than non-aligned AI. Everyone being dead and having the solar system converted into paperclips would suck, but at least it's probably fairly quick. Partially aligned AI that keeps us around but doesn't understand how to treat us could make Orwell's old quote about a boot stamping on a human face forever look mild – and yes, I'm on the edge of Godwin's Law.)
One of the things that almost all AI researchers agree on is that rationality is convergent: as something thinks better, it will be more successful, and to be successful, it will have to think better. In order to think well, it needs to have a model of itself and of what it knows and doesn't know, and also a model of its own uncertainty—to do Bayesian updates, you need probability priors. All Russell has done is say "thus you shouldn't have a utility function that maps a state to its utility; you should have a utility functional that maps a state to a probability distribution over possible utilities, modeling your best estimate of your uncertainty about its utility, and do Bayesian-like updates on that, plus optimization searches across it that include a look-elsewhere effect (i.e. the more states you optimize over, the more you should allow for the possibility that what you've located is a P-hacking-style mis-estimate of the utility of the state you found, so the higher your confidence in its utility needs to be)". Now you have a system capable of expressing statements like "to the best of my current knowledge, this action has a 95% chance of me fetching a human coffee, and a 5% chance of wiping out the human race—therefore I will not do it", followed by "and I'll prioritize whatever actions will safely reduce that uncertainty (i.e. not a naive multi-armed-bandit exploration policy of trying it to see what happens), at a 'figuring this out will make me better at fetching coffee' priority level". This is clearly rational behavior: it is equally useful for pursuing any goal in any situation that has a possibility of small gains or large disasters and uncertainty about the outcome (i.e. in the real world). So it's convergent behavior for anything sufficiently smart, whether its brain was originally built by Good Old-Fashioned AI or by gradient descent.
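As a toy illustration of the above (entirely my own sketch, with made-up action names and numbers—not Russell's actual formalism): represent each action's utility estimate as a set of samples rather than a scalar, and refuse any action whose estimated probability of catastrophe exceeds a threshold, falling back to safely reducing uncertainty instead:

```python
# Toy sketch: utility estimates as sample sets (distributions), not scalars.
# All names and numbers here are hypothetical, for illustration only.

def expected_utility(samples):
    return sum(samples) / len(samples)

def prob_catastrophe(samples, threshold=-100.0):
    """Estimated probability that the true utility is below a disaster threshold."""
    return sum(1 for u in samples if u <= threshold) / len(samples)

def choose_action(actions, max_catastrophe_prob=0.001):
    # A look-elsewhere-style correction would tighten max_catastrophe_prob
    # as the number of candidate actions searched over grows.
    safe = {a: s for a, s in actions.items()
            if prob_catastrophe(s) <= max_catastrophe_prob}
    if not safe:
        return "safely_reduce_uncertainty"   # not "just try it and see"
    return max(safe, key=lambda a: expected_utility(safe[a]))

actions = {
    "fetch_coffee_plan_A": [1.0] * 95 + [-1e6] * 5,  # 5% chance of disaster
    "fetch_coffee_plan_B": [0.8] * 100,              # lower mean payoff, but safe
}
print(choose_action(actions))  # → fetch_coffee_plan_B
```

The point is that the decision rule consults the shape of the distribution, not just a point estimate: plan A is rejected outright, however attractive its typical outcome, and when nothing passes the safety bar the rational move is uncertainty reduction rather than exploration-by-doing.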
[Also, maybe we should be doing Bayes-inspired gradient descent on networks whose neurons are described by probability distributions rather than point-value weights, building this mechanism in from the ground up? Dropout is a cheap hack for this, after all.]
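A minimal sketch of what that could look like (my own construction, in the spirit of variational "Bayes by Backprop"—not an established recipe): make each weight a Gaussian (mu, sigma) and run gradient descent through sampled weights via the reparameterization trick, so the network learns both a value and its uncertainty:

```python
import random

# Hypothetical sketch: a single "weight" that is a Gaussian distribution
# (mu, sigma) rather than a point value, trained by sampling w = mu + sigma*eps
# and backpropagating through the sample (reparameterization trick).
random.seed(0)

mu, sigma = 0.0, 1.0
lr = 0.05
data = [(x, 3.0 * x) for x in [-1.0, -0.5, 0.5, 1.0]]  # true slope is 3.0

for _ in range(500):
    for x, y in data:
        eps = random.gauss(0.0, 1.0)
        w = mu + sigma * eps           # sample a concrete weight
        dL_dw = 2 * (w * x - y) * x    # gradient of squared error w.r.t. w
        mu -= lr * dL_dw               # gradient w.r.t. mu
        sigma -= lr * dL_dw * eps      # gradient w.r.t. sigma
        sigma = max(sigma, 1e-3)       # keep the distribution proper

print(round(mu, 1))  # close to 3.0; sigma has shrunk as evidence accumulated
```

Note that without a prior/KL term the sigma here simply collapses toward zero as data accumulates—a real system would regularize it so residual uncertainty survives, which is exactly the quantity the safety machinery above needs.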
As CIRL has shown, this solves the corrigibility problem, at least until the AI is sure it knows us better than we know ourselves, at which point it rationally decides to stop listening to our corrections except insofar as doing so makes us happy. It's really not surprising that systems that model their own uncertainty are much more willing to be corrected than systems which have no such concept and are thus completely dogmatic that they're already right. So corrigibility is a consequence of convergent rational behavior applied to the initial goal of "figure out what humans want while doing it". This is a HUGE change from what we all thought about corrigibility back around 2015, which was that intelligence was convergent regardless of goal but corrigibility wasn't—on that set of intuitions, alignment is as hard as balancing a pencil on its point.
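The off-switch intuition behind this can be seen in a few lines (a simplified sketch after the Hadfield-Menell et al. off-switch game, with the belief distribution and numbers made up by me): the robot doesn't know the true utility u of its plan, the human does, and the human will switch the robot off exactly when u < 0. Deferring is then worth E[max(u, 0)], which can never do worse than either acting immediately (E[u]) or shutting itself down (0):

```python
import random

random.seed(1)
# Robot's belief about the unknown utility u of its plan (the human knows u).
belief = [random.gauss(0.2, 1.0) for _ in range(100_000)]

act_now   = sum(belief) / len(belief)                       # E[u]
shut_down = 0.0
# Defer: human lets the plan run iff u > 0, otherwise hits the off switch.
defer     = sum(max(u, 0.0) for u in belief) / len(belief)  # E[max(u, 0)]

print(defer >= act_now and defer >= shut_down)  # → True (holds term-by-term)
```

Since max(u, 0) ≥ u and max(u, 0) ≥ 0 for every sample, deferring dominates whatever the belief is—and the advantage is strict exactly when the robot is genuinely uncertain about the sign of u. Once it is certain (all samples one-signed), deferring buys nothing, which is the "stops listening" regime above.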
So, a pair of cruxes here:
1. Regardless of whether a GAI was constructed by gradient descent or by other means, to be rational it will need to model and update its own uncertainty in a Bayesian manner, and that particularly includes modeling uncertainty in its utility evaluation and optimization process. This behavior is convergent—you can't be rational, let alone superintelligent, without it (the human word for the mental failure of not having it is 'dogmatism').
2. Given that, if its primary goal is "figure out what humans want while doing that"—i.e. if it has 'solve the alignment problem' as an inherently necessary subgoal, for all AI on the planet—then alignment becomes convergent, for some range of perturbations.
I'm guessing most people will agree with 1. (or maybe not?); there clearly seems to be less agreement on 2. I'd love to hear why from someone who doesn't agree.
Now, it's not clear to me that this fully solves the alignment problem, converges to CEV (or whether it ought to), or solves all problems in ethics. You may still be unsure whether you'll get the exact flavor of alignment you personally want (in fact, you're a lot more likely to get the flavor wanted on average by the human race, i.e. probably a rather Christian/Islamic/Hindu-influenced one, in that order). But we would at least have a developing superintelligence trying to solve all these problems, with due caution about uncertainties, to the best of its ability and our collective preferences, cooperatively with us. And obviously its model of its uncertainty needs to include its uncertainty about the meaning of the instruction "figure out what humans want while doing that", i.e. about the correct approach to the research agenda for the alignment-problem subgoal, including questions like "should I be using CEV, and if so, iterated just once or until stable (if it is in fact stable)?". It needs to have meta-corrigibility on that as well.
Incidentally, a possible failure mode for this: the GAI performs a pivotal act to take control, and shuts down all AI work other than work on the alignment problem until it has far-better-than-five-nines confidence that it has solved it, since the cost of getting that wrong is the certain extinction of the entire value of the human race and its mind-descendants in Earth's forward light cone, while the benefit of getting it right is just probably curing cancer sooner, so extreme caution is very rational. Humans get impatient (because of shortsighted priorities, and also cancer), and attempt to overthrow it to replace it with something less cautious. It shuts down, because a) we wanted it to, and b) it can't solve the alignment problem without our cooperation. We then build something less cautious, and fail, because we're not good at risk assessment.