Another objection is that you can minimize the wrong cost function. Making “cost” go to zero could mean making “the thing we actually care about” go to (negative huge number).
I don’t think this objection lands unless one first sees why the safety guarantees we usually associate with cost minimization don’t apply to AGI. What sort of mindset would hear Yann LeCun’s objection, go “ah, so we’re safe”, then hear your objection and go “oh, I see, so Yann LeCun was wrong”?