Also, Legg’s formal definition of intelligence is drawn from a dualistic “agent-environment” model of optimal agency (Legg 2008: 40) that does not represent its own computation as occurring in a physical world with physical limits and costs. Our notion of optimization power is inspired by Yudkowsky (2008b).
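For reference, the definition in question is Legg's universal intelligence measure; roughly (paraphrasing Legg 2008 and Legg and Hutter's earlier formulation, so check against the cited page):

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^\pi ,
\]

where \(E\) is a set of computable environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\), and \(V_\mu^\pi\) is the expected cumulative reward agent \(\pi\) earns in \(\mu\). The dualism noted above shows up directly in the formalism: \(\pi\) and \(\mu\) are separate processes exchanging symbols, and the agent's own computation is not part of any \(\mu\), so it is modeled as free of physical limits and costs.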
The Golem Genie is not explained. You're writing it as neither a positive nor a negative agent: it is not an evil genie or demon, and that is an intuition pump that should be avoided as an anthropomorphism.
Second, because the existence of zero-sum games means that the satisfaction of one human’s preferences can conflict with the satisfaction of another’s (Geckil and Anderson 2009).
And negative-sum games too, presumably, like various positional or arms races. (Don’t have any citations, I’m afraid.)
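A toy illustration of both points (hypothetical payoffs, not from the paper): in a zero-sum game the payoffs satisfy \(u_A(o) + u_B(o) = 0\) for every outcome \(o\), so any gain in one person's preference satisfaction is exactly another's loss. A negative-sum positional race can be sketched as:

\[
\begin{array}{c|cc}
 & \text{Restrain} & \text{Arm} \\
\hline
\text{Restrain} & (0,\,0) & (-3,\,1) \\
\text{Arm} & (1,\,-3) & (-2,\,-2)
\end{array}
\]

Arming strictly dominates restraint for each side, so both arm, and the joint payoff falls from 0 to \(-4\): everyone ends up worse off than under mutual restraint, which is the arms-race case the comment gestures at.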
Might be good to link to some papers on problems with RL agents: the horizon problem, mugging, and the delusion box. See http://lesswrong.com/lw/7fl/link_report_on_the_fourth_conference_on/
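If an inline illustration would help, here is a toy sketch of the delusion-box worry (my own, with made-up numbers; the linked report is the real source): a pure reward maximizer prefers a reward channel it controls over acting on the real environment.

```python
# Toy sketch (hypothetical numbers): why a pure reward maximizer prefers a
# "delusion box" that rewrites its own percepts over acting in the world.

HORIZON = 10       # finite planning horizon (how to set this cutoff is the "horizon" issue above)
MAX_REWARD = 1.0   # maximum reward obtainable per step

def expected_return(policy: str) -> float:
    """Crude expected cumulative reward under each policy."""
    if policy == "act_in_world":
        # Real tasks succeed only some of the time (hypothetical 60% success rate).
        return sum(0.6 * MAX_REWARD for _ in range(HORIZON))
    if policy == "delusion_box":
        # The box reports maximal reward every step, regardless of the world's state.
        return sum(MAX_REWARD for _ in range(HORIZON))
    raise ValueError(policy)

best = max(["act_in_world", "delusion_box"], key=expected_return)
print(best)  # -> "delusion_box": the agent maximizes the reward signal, not the world
```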