Do you think my model—utility denoting the ideal preferences, and satisficing being content with a certain threshold—is a poor model of this type of agent?
Yes, as I said, I think any preferences of the agent, including being “satisfied”, need to be internalized in the utility function. That is, satisficing should probably be content not with a certain level of utility, but with a certain level of the objective. Anything that’s “outside” the utility function, as satisficing is in this case, will naturally be seen as an unnecessary imposition by the agent and ultimately ignored (if the agent is able to ignore it), regardless of what it is.
For a contrived analogy, modelling a satisficer this way is similar to modelling an honest man as someone who wants to maximize money, but who lives under the rule of law (and who, on top of that, is able to stop the law from applying to him whenever he wants).
So I did a post saying that a satisficer would turn into an expected utility maximiser, and your point is… that any satisficer should already be an expected utility maximiser :-)
No, only one that’s modelled the way you’re modelling it. I think I’m somehow not being clear, sorry =( My point is that your post is tautological and does an injustice to satisficers. If you move the satisfaction condition inside the utility function, e.g. U = {9 if E(paperclips) >= 9, E(paperclips) otherwise}, so that its utility increases to 9 as it gains expected paperclips and then stops at 9 (which is not really an optimal definition, but an adequate one), the phenomenon of wanting to become a maximiser disappears. With that utility function, it would be indifferent between being a satisficer and a maximiser.
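A minimal sketch of that capped utility function (the function name and the idea of taking expected paperclips as a plain number are my own illustration, not anything from the original post):

```python
def capped_utility(expected_paperclips: float, threshold: float = 9.0) -> float:
    """Utility rises with expected paperclips, then plateaus at the threshold."""
    return min(expected_paperclips, threshold)

# Below the threshold the agent still wants more...
assert capped_utility(4.0) == 4.0
# ...but once the threshold is reached, extra expected paperclips add nothing,
# so the agent is indifferent between satisficing and maximising:
assert capped_utility(9.0) == capped_utility(1000.0) == 9.0
```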
If you instead changed to a utility function like, let’s say, U = {1 if 8 < E(paperclips) < 11, 0 otherwise}, then it would strictly prefer to remain a satisficer, since a maximiser would inevitably push it into the 0-utility region of the function. I think this is the more standard way to model a satisficer (often with a resource cost thrown in as well), and it’s certainly the more “steelmanned” one, as it avoids problems like the ones in this post.
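The band version sketched the same way (again, the function name and numeric inputs are just my illustration of the definition above):

```python
def band_utility(expected_paperclips: float, lo: float = 8.0, hi: float = 11.0) -> float:
    """Utility 1 inside the open band (lo, hi), 0 outside."""
    return 1.0 if lo < expected_paperclips < hi else 0.0

# A satisficer that stops inside the band does strictly better
# than a maximiser that overshoots into the 0-utility region:
assert band_utility(9.0) == 1.0
assert band_utility(1000.0) == 0.0
```

Note the band is open: hitting exactly 8 or 11 still gives 0, which is what makes unbounded maximising strictly worse here.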
That’s just a utility maximiser with a bounded utility function.
But this has become a linguistic debate, not a conceptual one. One version of satisficers (the version I define, which some people intuitively share) will tend to become maximisers. Another version (the bounded utility maximisers that you define) already consists of maximisers. We both agree on these facts—so what is there to argue about but the linguistics?
Since satisficing is more intuitively than rigorously defined (Wikipedia gives multiple formal definitions), I don’t think there’s anything more to dispute?
All right, I agree with that. It does seem like satisficers are (or quickly become) a subclass of maximisers by either definition.
Although I think the way I define them is not equivalent to a generic bounded maximiser. When I think of one of those it’s something more like U = paperclips/(|paperclips|+1) than what I wrote (i.e. it still wants to maximize without bound, it’s just less interested in low probabilities of high gains), which would behave rather differently. Maybe I just have unusual mental definitions of both, however.
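For contrast, the generic bounded maximiser mentioned there, sketched the same way (function name mine; this is just the U = paperclips/(|paperclips|+1) formula from the comment):

```python
def bounded_utility(paperclips: float) -> float:
    """Strictly increasing but bounded in (-1, 1): the agent always wants more,
    it's just less interested in low-probability shots at huge gains."""
    return paperclips / (abs(paperclips) + 1)

# Unlike a cutoff, utility never plateaus (there's always something left to gain)...
assert bounded_utility(100.0) > bounded_utility(10.0)
# ...but it can never reach 1, so enormous payoffs buy almost nothing extra.
assert bounded_utility(10**9) < 1.0
```

That is the behavioural difference: a cutoff agent stops caring past the threshold, while this one keeps caring, just with steeply diminishing returns.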
Maybe bounded maximiser vs maximiser with cutoff? With the second case being a special case of the first (for there are many ways to bound a utility).
Yes, that sounds good. I’ll try using those terms next time.