I keep saying that AI may need a human ‘caregiver,’ and I meant something like this post (or this one). While I’m not sure I explained it clearly enough, or whether that is really what it will amount to in the end, I believe we can learn more about this kind of alignment by listening more closely to social scientists. One could at least try the approach and see to what degree it works, or how far it scales under increased optimization power.