@Tim Tyler: “That’s no reason not to talk about goals, and instead only mention something like “utility”.”
Tim, the problem with expected utility maps directly onto the problem with goals. Each is coherent only to the extent that the future context can be effectively specified (functionally modeled, such that you could interact with it and ask it questions, not to be confused with simply pointing to it). Applied to a complexly evolving future of increasingly uncertain context, due to combinatorial explosion but also to critical underspecification of priors, we find that ultimately (in the bigger picture) rational decision-making is not so much about “expected utility” or “goals” as it is about promoting a present model of evolving values into one’s future, via increasingly effective interaction with one’s (necessarily local) environment of interaction. Wash, rinse, repeat. Certainty, goals, and utility are always only a special case, applicable to the extent that the context is adequately specifiable. This is the key to so-called “paradoxes” such as the Prisoner’s Dilemma and Parfit’s Repugnant Conclusion as well.
Tim, this forum appears to be over-heated and I’m only a guest here. Besides, I need to pack and get on my motorcycle and head up to San Jose for Singularity Summit 08 and a few surrounding days of high geekdom.
I’m (virtually) outta here.