Sorry, I meant “ontology” in the information science sense, not the metaphysics sense; I simply meant that you’re conceptually (not necessarily metaphysically) privileging goals. What if you’re wrong to do that? I suppose I’m suggesting that carving out “goals” might be smuggling in conclusions that make you think universal convergence is unlikely. If you conceptually privileged rational morality instead, as many meta-ethicists do, then your conclusions might change, in which case it seems you’d have to be unjustifiably confident in your “goal”-centric conceptualization.
I think I am only “privileging” goals in a weak sense: by talking about a goal-driven agent, I do not deny the possibility of an agent built on anything else, including your “rational morality”, though I don’t know what that is.
Are you arguing that a goal-driven agent is impossible? (Note that this is a stronger claim than it being wiser to build some other sort of agent, which would not contradict my reasoning about what a goal-driven agent would do.)
(Yeah, the argument would have been something like, given a sufficiently rich and explanatory concept of “agent”, goal-driven agents might not be possible—or more precisely, they aren’t agents insofar as they’re making tradeoffs in favor of local homeostatic-like improvements as opposed to traditionally-rational, complex, normatively loaded decision policies. Or something like that.)