I think the core distinction was poorly worded by Holden. The distinction is between AIs as they exist now (e.g. a self-driving car) and the economic model of an AI inside a larger model: an agent represented non-reductionistically within that larger model, maximizing some utility that is itself defined non-reductionistically within the larger model (e.g. the paperclip maximizer).
The AIs that exist now, at their core, throw 'intelligence', in the form of solution search, at the problem of finding inputs to an internally defined mathematical function that produce the largest output value. Those inputs may represent real-world manipulator states, and the function's output may represent a future performance metric, but only very loosely. The intelligence is not aimed at the job of forming the best model of the real world for making real-world paperclips; that notion is not even coherent, because the 'number of paperclips' is ill-defined outside the context of a specific model of the world.
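To make the point concrete, here is a minimal sketch of what I mean by "solution search over an internally defined function". Everything in it is a toy assumption of mine (the quadratic objective, the hill-climbing search); the point is only that the optimizer sees and maximizes the internal function, and any connection to a real-world quantity lives entirely in how that function was specified, outside the search itself.

```python
import random

# Toy internal objective: a fixed mathematical function over the AI's input
# space. Its value may have been chosen to loosely track some real-world
# performance metric, but the search below knows nothing about the world;
# it only ever sees this function. (Hypothetical example, not any real system.)
def internal_objective(x):
    return -(x - 3.7) ** 2 + 10.0

def hill_climb(objective, start, step=0.1, iterations=1000):
    """Search for an input that makes the internal objective as large as possible."""
    best_x, best_val = start, objective(start)
    for _ in range(iterations):
        candidate = best_x + random.uniform(-step, step)
        val = objective(candidate)
        if val > best_val:
            best_x, best_val = candidate, val
    return best_x, best_val

if __name__ == "__main__":
    x, val = hill_climb(internal_objective, start=0.0)
    # The result is "optimal" only with respect to the internal function;
    # whether it corresponds to anything good in the real world depends
    # entirely on how well that function was specified beforehand.
    print(f"best input: {x:.3f}, internal score: {val:.3f}")
```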