So in general you can’t have utility functions that are as simple as the generator, right? E.g. the generator could be deontological, in which case your utility function would be complicated. Or it could be random, or it could choose actions by alphabetical order, or...
And so maybe you can have a little note for each of these. But now what it sounds like is: “I need my notes to be able to describe every possible cognitive algorithm that the agent could be running”. Which seems very, very complicated.
I guess this is what you meant by the “tremendous number” of possible decorators. But if that’s what you need to do to keep talking about “utility functions”, then it just seems better to acknowledge that they’re broken as an abstraction.
E.g. in the case of python code, you wouldn’t do anything analogous to this. You would just try to reason about all the possible python programs directly. Similarly, I want to reason about all the cognitive algorithms directly.
That’s right.

I realized my grandparent comment is unclear here:
> but need a very complicated utility function to make a utility-maximizer that matches the behavior.
This should have been “consequence-desirability-maximizer” or something, since the whole question is “does my utility function have to be defined in terms of consequences, or can it be defined in terms of arbitrary propositions?”. If I want to make the deontologist-approximating Innocent-Bot, I have a terrible time if I have to specify the consequences that correspond to the bot being innocent and the consequences that don’t, but if you let me say “Utility = 0 - badness of sins committed” then I’ve constructed a ‘simple’ deontologist. (At least, about as simple as the bot that says “take random actions that aren’t sins”, since both of them need to import the sins library.)
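To make the comparison concrete, here is a minimal Python sketch; the names (`SINS`, `badness_of_sins`, and the two bots) are mine, purely for illustration:

```python
import random

# The shared "sins library" both bots have to import; stand-in data.
SINS = {"lie", "steal", "murder"}

def badness_of_sins(action):
    """Badness of the sins an action commits (0 if it commits none)."""
    return 1.0 if action in SINS else 0.0

# The 'simple' deontologist as a maximizer: its utility is defined over an
# arbitrary property of the action ("Utility = 0 - badness of sins committed"),
# not over the action's consequences.
def innocent_bot(available_actions):
    return max(available_actions, key=lambda a: 0.0 - badness_of_sins(a))

# The comparison bot: "take random actions that aren't sins".
def random_non_sin_bot(available_actions):
    return random.choice([a for a in available_actions if a not in SINS])
```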
In general, I think it makes sense to not allow this sort of elaboration of what we mean by utility functions, since the behavior we want to point to is the backwards assignment of desirability to actions based on the desirability of their expected consequences, rather than the expectation of any arbitrary property.
---
Actually, I also realized something about your original comment which I don’t think I had picked up on the first time around; if by “some reasonable percentage of an agent’s actions are random” you mean something like “the agent does epsilon-exploration” or “the agent plays an optimal mixed strategy”, then I think it doesn’t at all require a complicated utility function to generate identical behavior. Like, in the rock-paper-scissors world, with the simple function ‘utility = number of wins’, the expected-utility-maximizing move (against tough competition) is to throw randomly, and we won’t falsify the simple ‘utility = number of wins’ hypothesis by observing random actions.
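As a toy version of that claim, modeling ‘tough competition’ as an opponent who picks whichever response is worst for us (all names here are illustrative):

```python
import itertools

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def expected_wins(my_mix, their_mix):
    """Expected value of 'utility = number of wins' for a single throw."""
    return sum(my_mix[m] * their_mix[t]
               for m, t in itertools.product(MOVES, MOVES)
               if BEATS[m] == t)

def worst_case_wins(my_mix):
    """Expected wins when the opponent best-responds to our mix."""
    return min(expected_wins(my_mix, {t: float(t == o) for t in MOVES})
               for o in MOVES)

uniform = {m: 1 / 3 for m in MOVES}
all_rock = {"rock": 1.0, "paper": 0.0, "scissors": 0.0}

print(worst_case_wins(uniform))   # 1/3: random throws are the maximizing play
print(worst_case_wins(all_rock))  # 0.0: a predictable policy gets exploited
```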
Instead I read it as something like “some unreasonable percentage of an agent’s actions are random”, where the agent is performing some simple-to-calculate mixed strategy that is either suboptimal or only optimal by luck (when the optimal mixed strategy is the maxent strategy, for example), and matching the behavior with an expected utility maximizer is a challenge (because your target has to be not some fact about the environment, but some fact about the statistical properties of the actions taken by the agent).
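One illustrative way to pin that down (the KL form here is my choice, not something from the original discussion): the ‘utility function’ has to score the agent’s own action frequencies against the target mix, e.g.

```python
import math

# Maximized (at 0) exactly when the agent's empirical action frequencies
# reproduce the target mix, regardless of anything in the environment.
def action_statistics_utility(empirical_freqs, target_mix):
    return -sum(q * math.log(q / target_mix[a])
                for a, q in empirical_freqs.items() if q > 0)
```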
---
I think this is where the original intuition becomes uncompelling. We care about utility-maximizers because they’re doing their backwards assignment, using their predictions of the future to guide their present actions to try to shift the future to be more like what they want it to be. We don’t necessarily care about imitators, or simple-to-write bots, or so on. And so if I read the original post as “the further a robot’s behavior is from optimal, the less likely it is to demonstrate convergent instrumental goals”, I say “yeah, sure, but I’m trying to build smart robots (or at least reasoning about what will happen if people try to).”
> Instead I read it as something like “some unreasonable percentage of an agent’s actions are random”
This is in fact the intended reading, sorry for ambiguity. Will edit. But note that there are probably very few situations where exploring via actual randomness is best; there will almost always be some type of exploration which is more favourable. So I don’t think this helps.
> We care about utility-maximizers because they’re doing their backwards assignment, using their predictions of the future to guide their present actions to try to shift the future to be more like what they want it to be.
To be pedantic: we care about “consequence-desirability-maximisers” (or in Rohin’s terminology, goal-directed agents) because they do backwards assignment. But I think the pedantry is important, because people substitute utility-maximisers for goal-directed agents, and then reason about those agents by thinking about utility functions, and that just seems incorrect.
> And so if I read the original post as “the further a robot’s behavior is from optimal, the less likely it is to demonstrate convergent instrumental goals”
What do you mean by optimal here? The robot’s observed behaviour will be optimal for some utility function, no matter how long you run it.
> To be pedantic: we care about “consequence-desirability-maximisers” (or in Rohin’s terminology, goal-directed agents) because they do backwards assignment.
Valid point.
> But I think the pedantry is important, because people substitute utility-maximisers for goal-directed agents, and then reason about those agents by thinking about utility functions, and that just seems incorrect.
This also seems right. Like, my understanding of what’s going on here is we have:
- ‘central’ consequence-desirability-maximizers, where there’s a simple utility function that they’re trying to maximize according to the VNM axioms
- ‘general’ consequence-desirability-maximizers, where there’s a complicated utility function that they’re trying to maximize, which is selected because it imitates some other behavior
The first is a narrow class, and depending on how strict you are with ‘maximize’, quite possibly no physically real agents will fall into it. The second is a universal class, which instantiates the ‘trivial claim’ that everything is utility maximization.
Put another way, the first is what happens if you hold utility fixed / keep utility simple, and then examine what behavior follows; the second is what happens if you hold behavior fixed / keep behavior simple, and then examine what utility follows.
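In toy Python terms (function names are mine, just to pin down the contrast):

```python
# 'Central': hold a simple utility fixed and derive the behavior, via the
# backwards assignment described above (predict the consequences of each
# action, then score actions by how desirable those consequences are).
def central_policy(state, actions, simple_utility, predict_consequence):
    return max(actions, key=lambda a: simple_utility(predict_consequence(state, a)))

# 'General': hold the behavior fixed and read a utility function off of it.
# Every policy maximizes the utility constructed this way, which is the sense
# in which "everything is utility maximization" is trivially true.
def rationalizing_utility(policy):
    def utility(state, action):
        return 1.0 if action == policy(state) else 0.0
    return utility
```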
Distance from the first is what I mean by “the further a robot’s behavior is from optimal”; I want to say that I should have said something like “VNM-optimal” but actually I think it needs to be closer to “simple utility VNM-optimal.”
I think you’re basically right in calling out a bait-and-switch that sometimes happens, where anyone who wants to talk about the universality of expected utility maximization in the trivial ‘general’ sense can’t get it to do any work, because it should all add up to normality, and in normality there’s a meaningful distinction between people who sort of pursue fuzzy goals and ruthless utility maximizers.