I think I am only “privileging” goals in a weak sense, since by talking about a goal driven agent, I do not deny the possibility of an agent built on anything else, including your “rational morality”, though I don’t know what that is.
Are you arguing that a goal driven agent is impossible? (Note that this is a stronger claim than it being wiser to build some other sort of agent, which would not contradict my reasoning about what a goal driven agent would do.)
(Yeah, the argument would have been something like: given a sufficiently rich and explanatory concept of “agent”, goal-driven agents might not be possible—or more precisely, such systems aren’t agents insofar as they make tradeoffs in favor of local, homeostatic-like improvements rather than following traditionally rational, complex, normatively loaded decision policies. Or something like that.)