Interesting proposal. Just finished reading and will be thinking on it.
One candidate for an alternative to “AGI safety” that is less precise but also less fraught is “ML safety”, a term I’ve noticed Dan Hendrycks using.
Would be very interested in your feedback, once you’ve thought it through.
Just shared some of my thoughts in a top-level comment.