I think a natural way of approaching impact measures is asking “how do I stop a smart unaligned AI from hurting me?” and patching hole after hole. This is really, really, really not the way to go about things. I think I might be equally concerned and pessimistic about the thing you’re thinking of.
The reason I’ve spent enormous effort on Reframing Impact is that the impact-measures-as-traps framing is wrong! The research program I have in mind is: let’s understand instrumental convergence on a gears level. Let’s understand why instrumental convergence tends to be bad on a gears level. Let’s understand the incentives so well that we can design an unaligned AI which doesn’t cause disaster by default.
The worst-case outcome is that we have a theorem characterizing when and why instrumental convergence arises, but find out that you can’t obviously avoid disaster-by-default without aligning the actual goal. This seems pretty darn good to me.