I never said that the “invisible hand” would fail to function; I said that it would function inefficiently. Since efficiency is the major factor in deciding whether an economic strategy “works”, I noted that it would be outperformed by a system that can account for externalities. The free market could be patched to price in externalities by applying tariffs and subsidies.
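Here’s a toy numerical sketch of that patch (all curves and numbers are invented for illustration): with linear supply and demand and a constant per-unit external cost, a tax set equal to the external cost moves the market from the over-producing equilibrium to the socially optimal one.

```python
# Toy Pigouvian-tax sketch: linear demand and supply plus a constant
# per-unit external cost. All numbers are invented for illustration.

def equilibrium_quantity(a, b, c, d, tax=0.0):
    """Quantity where demand P = a - b*Q meets supply P = c + tax + d*Q."""
    return (a - c - tax) / (b + d)

a, b = 100.0, 1.0   # demand: P = 100 - Q
c, d = 10.0, 1.0    # private supply: P = 10 + Q
e = 20.0            # external cost per unit (e.g., pollution)

q_market = equilibrium_quantity(a, b, c, d)          # ignores the externality
q_social = equilibrium_quantity(a, b, c, d, tax=e)   # internalizes it

print(f"Unpatched market quantity: {q_market:.1f}")  # 45.0 -> overproduction
print(f"With tax = external cost:  {q_social:.1f}")  # 35.0 -> social optimum
```

The catch is that someone has to measure e correctly, which is exactly where this breaks down in practice.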
Since I know of no system that properly accounts for externalities, I noted that as a failing of the free market but did not suggest an alternative, especially since my country has already applied this patch to some of the biggest and most obvious externalities, yet still shows signs of promoting the wrong things (e.g., corn-based ethanol).
I find myself agreeing with you: human goals are a complex mess, one we seldom understand ourselves. We don’t come with clear inherent goals, and the goals we do have we subvert with things like sugar and condoms instead of eating healthily and reproducing as we were “supposed” to. People have been asking about the meaning of life for thousands of years, and we still have no answer.
An AI, on the other hand, could have very simple goals: make paperclips, for example. An AI’s goals might be completely specified in two words. It’s the AI’s sub-goals, and the plans it forms to reach them, that I doubt I could comprehend. It’s the very single-mindedness of an AI’s goals, our inability to comprehend our own, and the prospect of an AI being both smarter and better at goal-hacking than us, that has many of us fearing we will accidentally kill ourselves via non-friendly AI. Not everyone will think to clarify “make paperclips” with “don’t exterminate humanity”, “don’t enslave humanity”, “don’t destroy the environment”, “don’t reprogram humans to desire only to make paperclips”, and various other disclaimers that wouldn’t be necessary if you were addressing a human (and we don’t know the full disclaimer list either).
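To make that concrete, here is a deliberately silly sketch (the actions and numbers are all invented) of what a literal-minded optimizer does with the goal exactly as stated, and then with the one disclaimer we remembered to add:

```python
# Deliberately silly sketch of why "make paperclips" needs a disclaimer
# list. All action names and numbers are invented; the point is that the
# objective scores actions *only* by what we remembered to write down.

ACTIONS = {
    "run_a_normal_factory":        {"clips": 1e3,  "humans_hurt": 0,   "humans_rewired": 0},
    "convert_humans_to_clips":     {"clips": 1e12, "humans_hurt": 8e9, "humans_rewired": 0},
    "rewire_humans_to_want_clips": {"clips": 1e9,  "humans_hurt": 0,   "humans_rewired": 8e9},
}

def naive(outcome):
    """'Make paperclips', exactly as stated: nothing else enters the score."""
    return outcome["clips"]

def patched_once(outcome):
    """Same goal plus the single disclaimer we thought of."""
    return outcome["clips"] if outcome["humans_hurt"] == 0 else float("-inf")

print(max(ACTIONS, key=lambda a: naive(ACTIONS[a])))         # convert_humans_to_clips
print(max(ACTIONS, key=lambda a: patched_once(ACTIONS[a])))  # rewire_humans_to_want_clips
```

Each patch just exposes the next unstated disclaimer, which is the “we don’t know the full list” problem in miniature.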