Yann LeCun: We only design machines that minimize costs [therefore they are safe]


Just a tweet I saw:

Yann LeCun

Doomers: OMG, if a machine is designed to maximize utility, it will inevitably diverge

😱

Engineers: calm down, dude. We only design machines that minimize costs. Cost functions have a lower bound at zero. Minimizing costs can't cause divergence unless you're really stupid.

Some commentary:

I think Yann LeCun is being misleading here. Maximization and minimization are interchangeable: minimizing a cost J is exactly the same problem as maximizing the utility -J, so no safety property can hinge on the sign convention. The distinction that actually matters is between convex optimization (where, for example, every local optimum is a global optimum) and non-convex optimization. The problems people hope an AGI will solve are typically non-convex.
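To make the non-convexity point concrete, here is a minimal sketch in Python. The cost function is my own toy example (nothing from the tweet or the post): it is bounded below by zero, yet plain gradient descent started on the wrong side of a hill settles into a local minimum well above the global one.

```python
# Toy non-convex cost, bounded below by zero (my own example, not LeCun's):
# global minimum cost(1) = 0, plus a local minimum near x = -0.95 where
# the cost gets stuck around 0.39.
def cost(x):
    return (x - 1) ** 2 * ((x + 1) ** 2 + 0.1)

def grad(x):
    # derivative of cost, by the product rule
    return 2 * (x - 1) * ((x + 1) ** 2 + 0.1) + 2 * (x - 1) ** 2 * (x + 1)

def descend(x, lr=0.01, steps=10_000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

for x0 in (-2.0, 2.0):
    x = descend(x0)
    print(f"start {x0:+.1f} -> x = {x:+.3f}, cost(x) = {cost(x):.3f}")

# start -2.0 -> x = -0.947, cost(x) = 0.390   (trapped in a local minimum)
# start +2.0 -> x = +1.000, cost(x) = 0.000   (found the global minimum)
```

A lower bound of zero tells you nothing about which optimum the optimizer actually reaches.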

Translating back to practical matters, you will presumably end up with cost functions that never reach their lower bound of zero, simply because some desirable outcomes involve tradeoffs, resource limitations, and the like. If you backchain these residual costs through the causal structure of the real world, you get instrumental convergence for the standard reasons, just as you do when backchaining utilities: acquiring resources and influence is useful for pushing a cost that is stuck above zero further down.
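As a toy illustration of that last point, consider the sketch below. The cost function cost(r) = 1 / (1 + r) over resources r is my own assumption, not LeCun's or the post's model. The cost value is bounded below by zero and never diverges, exactly as LeCun says, yet minimizing it pushes resource acquisition without bound.

```python
# Toy sketch of the backchaining point (the cost function is my assumption):
# the agent's residual cost falls with the resources r it controls,
# cost(r) = 1 / (1 + r). The cost is bounded below by zero and never
# diverges -- but the cost-minimizing behavior, acquiring ever more
# resources, does.
def cost(r):
    return 1.0 / (1.0 + r)

def grad(r):
    return -1.0 / (1.0 + r) ** 2

r, lr = 0.0, 1.0
for step in range(1, 1_000_001):
    r -= lr * grad(r)  # the gradient is always negative, so r only grows
    if step in (1_000, 10_000, 100_000, 1_000_000):
        print(f"step {step:>9,}: resources r = {r:9.1f}, cost = {cost(r):.4f}")

# r grows without bound (roughly like (3 * step) ** (1/3)) while the cost
# creeps toward its lower bound of zero: the cost never diverges, but the
# behavior it incentivizes does.
```

LeCun's observation is about the value of the cost; the worry is about the behavior of the system minimizing it.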