Because (it seems to me) existential risk is asymmetrically bad in comparison to potential technology upsides (large as those upsides may be), I just have different standards of evidence for “significant risk” vs. “significant good”.
This is a normative argument, not an empirical one. The normative position seems reasonable to me, though I’d want to think more about it (I haven’t because it doesn’t seem decision-relevant).
I especially don’t see an argument that one could expect all failure modes of very very capable systems to present themselves first in less-capable systems.
The quick version is that to the extent that the system is adversarially optimizing against you, it had to at some point learn that that was a worthwhile thing to do, which we could notice. (This is assuming that capable systems are built via learning; if not then who knows what’ll happen.)
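As a purely hypothetical sketch of what “noticing” could look like in practice (the probe functions, the threshold, and the assumption that an adversarial-behavior score can be measured at all are illustrative assumptions, not something the quick version pins down):

```python
# Hypothetical sketch: periodically run training checkpoints through a
# battery of behavioral probes and flag the run once an "adversarial
# behavior" score starts to climb. Building good probes is the hard open
# problem; this only illustrates where in training you would look.

def adversarial_score(model, probes):
    """Fraction of probe scenarios in which the model takes the
    adversarial action (assumed here to be detectable)."""
    return sum(probe(model) for probe in probes) / len(probes)

def monitor_training(checkpoints, probes, threshold=0.1):
    """Scan checkpoints in training order; return the first step at which
    the score crosses the threshold (None if it never does)."""
    history = []
    for step, model in checkpoints:
        score = adversarial_score(model, probes)
        history.append((step, score))
        if score > threshold:
            # The system has started learning that adversarial behavior
            # "pays off" -- pause and investigate before scaling further.
            return step, history
    return None, history

# Toy demo: "models" are just capability levels, and each probe fires once
# capability exceeds some level (purely to exercise the loop).
probes = [lambda m, k=k: m > k for k in (5, 7, 9)]
checkpoints = [(step, step) for step in range(12)]
flagged_step, _history = monitor_training(checkpoints, probes, threshold=0.3)
print("flagged at step:", flagged_step)
```

All of the difficulty is of course hidden in whether probes like that can be built; the sketch only illustrates the claim that, if the behavior is learned during training, there is a point in the process at which you could look for it.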
I am confused about how the normative question isn’t decision-relevant here. Is it that I have a model where it is the relevant question, but you have one where it isn’t? To be hopefully clear: I’m applying this normative claim to argue that proof is needed to establish the desired level of confidence. That doesn’t mean direct proof of the claim “the AI will do good”, but rather of supporting claims, perhaps involving the learning-theoretic properties of the system (putting bounds on errors of certain kinds) and such.
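To give a sense of the flavor of supporting claim I have in mind, here is the standard finite-hypothesis-class generalization bound from learning theory; it is only an illustration of “bounds on errors of certain kinds”, not the specific result that would be needed. For a finite hypothesis class $\mathcal{H}$, $m$ i.i.d. samples, and any $\delta \in (0, 1)$, with probability at least $1 - \delta$ every $h \in \mathcal{H}$ satisfies

$$R(h) \;\le\; \hat{R}(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2m}},$$

where $R(h)$ is the true error and $\hat{R}(h)$ the empirical error on the $m$ samples. The properties we would actually care about are much harder to formalize than classification error, but the proofs would have this general shape: high-probability guarantees bounding how badly the deployed system can deviate from what was verified.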
It’s possible that this isn’t my true disagreement, because the question actually seems more complicated than just comparing how large the potential downsides are if things go poorly against how large the potential upsides are if things go well. But some kind of analysis of the risks seems relevant here: if there weren’t such large downside risks, I would have lower standards of evidence for claims that things will go well.
The quick version is that to the extent that the system is adversarially optimizing against you, it had to at some point learn that that was a worthwhile thing to do, which we could notice. (This is assuming that capable systems are built via learning; if not then who knows what’ll happen.)
It sounds like we would have to have a longer discussion to resolve this. I don’t expect this to hit the mark very well, but here’s my reply to what I understand:
I don’t see how you can be confident enough of that view for it to be how you really want to check.
A system can be optimizing a fairly good proxy, so that at low levels of capability it is highly aligned, but this falls apart as the system becomes highly capable and figures out “hacks” around the “usual interpretation” of the proxy.
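Here is a toy numerical illustration of that dynamic (everything in it, including the specific objectives and treating “capability” as raw search budget, is made up purely for illustration):

```python
import random

# Toy proxy-gaming illustration: the true objective rewards moderate x,
# while the proxy keeps rewarding larger x without limit. A weak optimizer
# (small search budget) mostly finds candidates where proxy and true value
# still agree; a strong optimizer pushes x into the regime where the proxy
# comes apart from the true objective.

def true_value(x):
    return x - 0.05 * x ** 2       # peaks at x = 10, negative past x = 20

def proxy_value(x):
    return x                       # a fairly good proxy near the peak

def optimize_proxy(search_budget, rng):
    candidates = [rng.expovariate(1 / 3) for _ in range(search_budget)]
    return max(candidates, key=proxy_value)

rng = random.Random(0)
for budget in (3, 30, 300, 30_000):    # stand-in for increasing capability
    x = optimize_proxy(budget, rng)
    print(f"budget={budget:>6}  x={x:5.1f}  "
          f"proxy={proxy_value(x):5.1f}  true={true_value(x):6.1f}")
```

The low-budget optimizer looks aligned because it never reaches the part of the space where the proxy and the true objective disagree; the high-budget optimizer reliably finds that part.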
I also note that it seems like we disagree both about how useful proofs will be and about how useful empirical investigations will be (keeping in mind that those aren’t the only two things in the universe). I’m not sure which of those two disagreements is more important here.
To be hopefully clear: I’m applying this normative claim to argue that proof is needed to establish the desired level of confidence.
Under my model, it’s overwhelmingly likely that regardless of what we do AGI will be deployed with less than the desired level of confidence in its alignment. If I personally controlled whether or not AGI was deployed, then I’d be extremely interested in the normative claim. If I then agreed with the normative claim, I’d agree with:
proof is needed to establish the desired level of confidence. That doesn’t mean direct proof of the claim “the AI will do good”, but rather of supporting claims, perhaps involving the learning-theoretic properties of the system (putting bounds on errors of certain kinds) and such.
I don’t see how you can be confident enough of that view for it to be how you really want to check.
If I want >99% confidence, I agree that I couldn’t be confident enough in that argument.
A system can be optimizing a fairly good proxy, so that at low levels of capability it is highly aligned, but this falls apart as the system becomes highly capable and figures out “hacks” around the “usual interpretation” of the proxy.
Yeah, the hope here would be that the relevant decision-makers are aware of this dynamic (due to previous situations in which, e.g., a recommender system optimized the fairly good proxy of clickthrough rate but this led to “hacks” around the “usual interpretation”), and have some good reason to think that it won’t happen with the highly capable system they are planning to deploy.
I also note that it seems like we disagree both about how useful proofs will be and about how useful empirical investigations will be
Agreed. It also might be that we disagree on the tractability of proofs in addition to / instead of the utility of proofs.