The nonobvious problems are the whole reason why AI alignment is hard in the first place.
I disagree with the implication that there’s nothing to worry about on the “obvious problems” side.
An out-of-control AGI self-reproducing around the internet, causing chaos, blackouts, etc., is an “obvious problem”. I still worry about it.
After all, consider this: an out-of-control virus self-reproducing around the human population, causing death, disability, etc., is also an “obvious problem”. We already have this problem; we’ve had this problem for millennia! And yet, we haven’t solved it!
(It’s even worse than that—it’s an obvious problem with obvious mitigations, e.g. end gain-of-function research, and we’re not even doing that.)
There is an important difference here between “obvious in advance” and “obvious in hindsight”, but your basic point is fair, and the virus example is a good one. Humanity’s current state is indeed so spectacularly incompetent that even the obvious problems might not be solved, depending on how things go.
I would say “Humanity’s current state is so spectacularly incompetent that even the obvious problems with obvious solutions might not be solved”.
If humanity were not spectacularly incompetent, then maybe we wouldn’t have to worry about the obvious problems with obvious solutions. But we would still need to worry about the obvious problems with extremely difficult and non-obvious solutions.