Being on the frontier of controllable power means we need to increase power only slightly to stop being in control
Slightly increasing power generally means only slightly decreasing control. What causes the very non-linear relationship you are assuming? Foom? But we can build narrow AIs that don’t foom, because we already have. We should be able to keep building narrow AIs that don’t foom by not including anything that would let them recursively self-improve [*].
EY’s answer to the question “why isn’t narrow AI safe?” wasn’t “narrow AI will foom”; it was “we won’t be motivated to keep AIs narrow”.
[*] Not that we could tell them how to self-improve anyway, since we don’t really understand it ourselves.
What causes the very non-linear relationship you are assuming?
The advantage of offense over defense in the high-capability regime: the AI needs to cross only one threshold, like “can finish a plan to rowhammer itself onto the internet” or “can hide its thoughts before it is spotted”. And we will build non-narrow AI, because in practice “most powerful AIs we can control” means “we built some AIs, we can control them, so we keep doing what we have done before”, not “we try to understand what we will not be able to control in the future and avoid building it”. We already don’t check whether our current AI will be general before we turn it on, and we are already explicitly trying to create non-narrow AI.
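To make the shape of the disagreement concrete, here is a minimal toy sketch (purely hypothetical numbers and threshold, not a model of any real system) contrasting the two pictures: control that degrades smoothly as capability grows versus control that stays roughly intact until a single threshold is crossed and then collapses.

```python
# Toy sketch only: hypothetical numbers illustrating two possible shapes
# for the capability-vs-control relationship discussed above.

def control_gradual(capability: float) -> float:
    """'Slightly more power, slightly less control': a smooth, roughly linear decline."""
    return max(0.0, 1.0 - 0.1 * capability)

def control_threshold(capability: float, threshold: float = 7.0) -> float:
    """Control holds until a single capability threshold is crossed
    (e.g. 'can plan its own escape'), then drops to zero at once."""
    return 0.95 if capability < threshold else 0.0

if __name__ == "__main__":
    for c in range(11):
        print(f"capability={c:2d}  gradual={control_gradual(c):.2f}  "
              f"threshold={control_threshold(c):.2f}")
```

Under the first curve, staying "slightly" on the safe side of the frontier costs you only a little control; under the second, the frontier sits right next to a cliff, which is the non-linearity being asserted.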