To remove this funny “I don’t know what my utility function is” business, let’s split our agent into two: Agent R is bounded-rational, and its utility function is simply “do what Agent M wants”. Agent M has a complex utility function covering morality and tooth soreness, which is partially obscure to Agent R. Agent R makes evidence-based updates concerning both the outside world and M’s utility function. (Functionally, this is the same as having one agent that is unsure what its utility function is, but it seems easier to talk about.)
Am I still following you correctly?
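The two-agent model above can be sketched in a few lines of code. This is a minimal illustration, not anything from the original discussion: the candidate utility functions, the actions, and the credence values are all made-up placeholders. Agent R keeps a probability distribution over which utility function Agent M really has, and acts to maximize expected utility under that distribution.

```python
# Hypothetical sketch of Agent R reasoning under uncertainty about
# Agent M's utility function. All names and numbers are illustrative.

# Candidate utility functions Agent M might have (hypothetical values).
candidates = {
    "morality_heavy": {"floss": 10.0, "skip": -5.0},
    "soreness_heavy": {"floss": -2.0, "skip": 1.0},
}

# Agent R's current credence in each candidate; Agent R would update
# these on evidence, just as it updates beliefs about the outside world.
credence = {"morality_heavy": 0.7, "soreness_heavy": 0.3}

def expected_utility(action):
    """Expected utility of an action, averaged over Agent R's
    uncertainty about which utility function Agent M really has."""
    return sum(credence[h] * u[action] for h, u in candidates.items())

best = max(["floss", "skip"], key=expected_utility)
print(best, expected_utility(best))  # → floss 6.4
```

Functionally this is identical to one agent that is unsure of its own utility function: the uncertainty just lives in `credence` rather than in the agent's self-model.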
Conceiving of it as two separate agents is a bit funny, but yeah, that’s more or less the right model.
I think of it as “I know what my utility function is, but the utility of outcomes depends on some important moral facts that I don’t know about.”
Are some humans contained in the outlined set of sort-of-VNM-compliant agents? And if not, what quality excludes them from the set?
No. I assert in “we don’t have a utility function” that we (all humans) do not have a utility function. Of course I could be wrong.
As I said, humans are excluded on both counts: we don’t act in a sane and consistent way, and we don’t even know what we want.
Actually, in some sense, the question of whether X is a VNM agent is uninteresting. It’s like the question of whether X is a heat engine. If you twist things around enough, even a rock could be a heat engine or a VNM agent with zero efficiency or a preference for accelerating in the direction of gravity.
The point of VNM, like that of thermodynamics, is to serve as an analysis tool for systems that we are designing. Everything is a heat engine, but some are more efficient or usable than others. Likewise with agents: everything is an agent, but some produce outcomes that we like and others do not.
So in applying VNM to humans, the question is not whether we descriptively have utility functions or whatever; the question is whether and how we can use a VNM analysis to make useful changes in behavior, or how we can build a system that produces valuable outcomes.
So the point of this moral uncertainty business is “oh look, if we conceive of moral uncertainty like this, we can provably meet these criteria and solve those problems in a coherent way”.
A utility function is only a method of approximating an agent’s behavior. If I wanted to make a precise description, I wouldn’t bother “agent-izing” the object in the first place. The rock falls vs. the rock wants to fall is a meaningless distinction. In that sense, nothing “has a utility function”, since utility functions aren’t ontologically fundamental.
When I say “does X have a utility function”, I mean “Is it useful and intuitive to predict the behavior of X by ascribing agency to it and using a utility function”. So the real question is, do humans deviate from the model to such an extent that the model should not be used? It certainly doesn’t seem like the model describes anything else better than it describes humans—although as AI improves that might change.
So even if I agree that humans don’t technically “have a utility function” any more than any other object does, I would say that if anything on this planet is worth ascribing agency and using a utility function to describe, it’s animals. So if humans and other animals don’t have a utility function, who does?
So if humans and other animals don’t have a utility function, who does?
No one yet. We’re working on it.
So the real question is, do humans deviate from the model to such an extent that the model should not be used?
Yes. You will find it much more fruitful to predict most humans (including yourself) as causal systems, and if you wanted to model human behavior with a utility function, you’d either have a lot of error, or a lot of trouble adding enough epicycles.
As I said though, VNM isn’t useful descriptively; if you use it like that, it’s tautological, and doesn’t really tell you anything. Where it shines is in the design of agenty systems: “If we had these preferences, what would that imply about where we would steer the future?” (which worlds are ranked high), and “If we want to steer the future over there, what decision architecture do we need?”
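The “error or epicycles” point can be made concrete with a toy example (my own illustration, not part of the original exchange): a single real-valued utility function forces transitive preferences, so any observed preference cycle, a common human inconsistency, has no consistent utility assignment at all.

```python
# Hypothetical observed pairwise choices forming a cycle: A over B,
# B over C, C over A. A utility function u would need
# u[A] > u[B] > u[C] > u[A], which is impossible.
from itertools import permutations

choices = [("A", "B"), ("B", "C"), ("C", "A")]  # (winner, loser) pairs

def fits(ranking):
    """True if a best-to-worst ranking agrees with every observed choice."""
    pos = {x: i for i, x in enumerate(ranking)}
    return all(pos[w] < pos[l] for w, l in choices)

consistent = [r for r in permutations("ABC") if fits(r)]
print(consistent)  # → [] -- no utility ordering matches the cycle
```

Dropping any one of the three observations makes the search succeed, which is exactly the “epicycles” trade-off: either tolerate the modeling error or keep patching the model.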
OK, I think we’re on the same page now.