While I’m not an AI researcher, and so am not sure which concerns are most relevant, I’ll say that “risk of scaling down” and “risk to relative scale” hadn’t even been on my radar as things to pay attention to, so having succinct handles to refer to them seems handy.
It does seem nice to group them and have clear handles.
ML researchers more often think about risks from scaling down and relative scale, since those come up more frequently (and are harder to fix) today.