Oh, I was responding to 1, because that was (my interpretation of) what the OP (Jason) was interested in and talking about in this post, e.g. the following excerpt:
The arguments above are sometimes used to rank AI at safety level 1 [“So dangerous that no one can use it safely”] … And this is a key pillar in the argument for slowing or stopping AI development.
In this essay I’m arguing against this extreme view of the risk from power-seeking behavior. My current view is that AI is on level 2 [“Safe only if used very carefully”] to 3 [“Safe unless used recklessly or maliciously”]: it can be used safely by a trained professional and perhaps even by a prudent layman. But there could still be unacceptable risks from reckless or malicious use, and nothing here should be construed as arguing otherwise.
Separately, since you bring it up, I do in fact expect that if we make technical safety / alignment progress such that future powerful AGI is level 2 or 3, rather than 1, then kudos to us, but I still pretty strongly expect human extinction for reasons here. ¯\_(ツ)_/¯
Makes sense, thanks for the clarification and link to your post. I remember reading your post and thinking that I agreed with it. I’m surprised that you didn’t point Jason (OP) to that post, since that seems like a bigger crux or more important consideration to convey to him, whereas your disagreement with him on whether we can avoid (in the first sense) building power-seeking AI doesn’t actually seem that big.