Link: Does the following seem like a reasonable brief summary of the key disagreements regarding AI risk?
This is a link to a question asked on the EA Forum by Aryeh Englander. (Please post responses / discussion there.)
Among those experts (AI researchers, economists, careful, knowledgeable thinkers in general) who appear to be familiar with the arguments:
There seems to be broad (but not universal?) agreement that:
Superintelligent AI (in some form, perhaps distributed rather than single-agent) is possible and will probably be created one day
By default, there is at least a decent chance that such an AI will not be aligned
If it is not aligned or otherwise controlled, then there is at least a decent chance that it will be incredibly dangerous
Some core disagreements (several of these are at least partially social science / economics questions):
Just how likely are all of the above?
Will we have enough warning, and will the danger be obvious enough, that people react appropriately in time to prevent bad outcomes?
It might still be useful to have some people keeping tabs on the problem (Robin Hanson puts the number at about 100), but not many
How hard is alignment to solve?
If it is easy, then less advance warning is needed, and inventors are more likely to incorporate solutions by default
If it is really hard, then we may need to start working on it long in advance
How far away is it?
Can we work on it productively now, given that we don’t know how AGI will work?
If current ML scales to AGI, then presumably yes; otherwise, there is disagreement
Will something less than superhuman AI pose similarly extreme risks? If so: how much less, how far in advance will we see it coming, when will it arrive, and how easy will it be to solve?
Will we need coordination mechanisms in place to prevent dangerous races to the bottom? If so, how far in advance will we need them?
If there is only a low probability of something really catastrophic, how much should we be spending on it now? (Where is the cutoff below which we stop worrying about finite versions of Pascal’s Wager? See the sketch after this list.)
What about misuse risks, structural risks, or future moral risks?
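To make the finite-Pascal’s-Wager question concrete, here is a minimal expected-value sketch; the symbols and all of the numbers below are hypothetical illustrations, not figures from the post. Letting $V$ be the value at stake and $\Delta p$ the reduction in catastrophe probability that spending $S$ buys:

$$\text{spending is worthwhile in expectation iff } \Delta p \cdot V > S.$$

For example, with a hypothetical $V \approx \$10^{15}$, spending $S = \$10^{9}$ that buys $\Delta p = 10^{-4}$ has expected benefit $\Delta p \cdot V = \$10^{11} \gg S$. The disagreement is over how small $\Delta p$ (and how uncertain $V$) can get before this calculation stops being action-guiding.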
Various combinations of these and related arguments lead to positions ranging from “we don’t need to worry about this at all yet” to “we should be pouring massive amounts of research into this.”
There are disagreements over approach (e.g. provably friendly vs. boxed “tool” AI), which I don’t see on your list.
Valid. I was primarily summarizing the risk side, though, rather than the solutions.