There is an ambiguity in “avoid” here, which could mean either:
1. avoid building power-seeking AI oneself
2. prevent anyone from building power-seeking AI, or prevent power-seeking AI from taking over
From the list you give, you seem to mean 1, but what we actually need is 2, right? The main additional question is, can we build a non-power-seeking AI that can (safely) prevent anyone from building power-seeking AI or prevent power-seeking AI from taking over (which in turn involves a bunch of technical and social/governance questions)?
Or do you already implicitly mean 2? (Perhaps you hold the position that the above problem is very easy to solve, for example that building a human-level non-power-seeking AI will take away almost all motivation to build power-seeking AI and we can easily police any remaining efforts in that direction?)
Oh, I was responding to 1, because that was (my interpretation of) what the OP (Jason) was interested in and talking about in this post, e.g. the following excerpt:
The arguments above are sometimes used to rank AI at safety level 1 [“So dangerous that no one can use it safely”] … And this is a key pillar in the argument for slowing or stopping AI development.
In this essay I’m arguing against this extreme view of the risk from power-seeking behavior. My current view is that AI is on level 2 [“Safe only if used very carefully”] to 3 [“Safe unless used recklessly or maliciously”]: it can be used safely by a trained professional and perhaps even by a prudent layman. But there could still be unacceptable risks from reckless or malicious use, and nothing here should be construed as arguing otherwise.
Separately, since you bring it up, I do in fact expect that if we make technical safety / alignment progress such that future powerful AGI is level 2 or 3, rather than 1, then kudos to us, but I still pretty strongly expect human extinction for reasons here. ¯\_(ツ)_/¯
Makes sense, thanks for the clarification and link to your post. I remember reading your post and thinking that I agree with it. I’m surprised that you didn’t point Jason (OP) to that post, since that seems like a bigger crux or more important consideration to convey to him, whereas your disagreement with him on whether we can avoid (in the first sense) building power-seeking AI doesn’t actually seem that big.