Thanks for your work on this. I think having explicit examples of these kinds of scenarios makes clearer the broad range of ways things could go badly, especially when failure happens slowly and isn't easy to notice until it's too late. There's particular value in calling out specific scenarios that may closely resemble situations people actually find themselves in later, since that helps them notice they are matching a dangerous pattern and should take AI safety more seriously, if they have thus far failed to.