The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will (...)
The distinction is formally correct. But I agree that autonomy comes in very quickly once you attach a read-eval-print loop around the optimizer that takes the state of the world as input for the maximization.
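A minimal sketch of what that wrapping might look like, in Python; the names (optimize, sense, act, utility) are illustrative assumptions, not anyone's actual system:

```python
from typing import Callable, Iterable

def optimize(world_state: dict, actions: Iterable[str],
             utility: Callable[[dict, str], float]) -> str:
    # One-shot optimizer: given a state, pick the action that maximizes the
    # utility we supplied. By itself it has no autonomy; it just answers a query.
    return max(actions, key=lambda a: utility(world_state, a))

def run_autonomously(sense: Callable[[], dict],
                     act: Callable[[str], None],
                     actions: Iterable[str],
                     utility: Callable[[dict, str], float]) -> None:
    # The "read-eval-print loop" around the optimizer: read the state of the
    # world, maximize, act, repeat. The terminal goal is still the one we gave
    # it; the loop is what turns a passive optimizer into an autonomous agent.
    while True:
        state = sense()
        choice = optimize(state, actions, utility)
        act(choice)
```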
It’s not even formally correct. An autonomous AI does not need to create its own terminal goals*, and the will we give it is perfectly adequate to screw us over.
If it can’t create instrumental goals, it’s not strong enough to worry about.
Probably we disagree about what intelligence is. If intelligence is the ability to follow goals in the presence of obstacles, the question becomes trivial. If intelligence is the ability to effectively find solutions in a given complex search space, then little follows. It depends on how the AI is decomposed into action and planning components and where the feedback cycles reside.
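A toy sketch of that dependence, assuming planning and action are separate components (all names here are hypothetical): a pure search-space solver returns a plan and stops, while wrapping a feedback cycle around both components yields goal-following in the presence of obstacles.

```python
from typing import Callable, List

def plan(goal: str, state: dict,
         search: Callable[[str, dict], List[str]]) -> List[str]:
    # Intelligence as search: find a solution in a complex space, then stop.
    # Nothing about autonomy follows from this component alone.
    return search(goal, state)

def pursue(goal: str,
           sense: Callable[[], dict],
           act: Callable[[str], dict],
           search: Callable[[str, dict], List[str]],
           achieved: Callable[[str, dict], bool]) -> None:
    # Intelligence as goal-following: the feedback cycle lives out here.
    # Replan whenever the world (an obstacle) diverges from expectations.
    state = sense()
    while not achieved(goal, state):
        steps = plan(goal, state, search)
        state = act(steps[0]) if steps else sense()
```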