I usually treat “satisficer” as “utility for a thing flattens out”, not “utility stops being valuable”. In fact, I’m not sure you can call it utility under your method. In the real world, I don’t think there are any true satisficers; most will still act as if more is better, just with sharper diminishing returns than optimizers.
A definition of satisficing that starts with utility units is somewhat incoherent, I think. Utility is already a measure of what one wants; it is nonsense to talk about “not wanting more utility”.
That’s one of the problems with the class of maximizers MIRI talks about. They don’t have diminishing returns on utility per paperclip created.
Of course they can have diminishing returns (generally modeled as something logarithmic); the drop-off just isn’t as sharp as a satisficer’s, and they pay somewhat less utility cost for searching out improvements.
The clippy problem isn’t so much maximizer vs. satisficer; it’s just a far too simple goal structure.
Please explain. Do you mean that a given maximizer is looking at the marginal utility of paperclips as the percentage of total paperclip mass? Because that is entirely dependent on the agent’s utility function. Clippy will never stop making paperclips unless making paperclips results in a net loss of paperclips.
A given decision agent is making choices, including clippy, maximizers, and satisficers. All of them have utility functions that assign increasing utility to things they like. Generally, both maximizers and satisficers have declining marginal utility for things they like, but increasing absolute utility for them: U(n things) < U(n+1 things), but the utility gained from thing #n+1 is less than the utility gained from thing #n.
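Both properties (rising absolute utility, falling marginal utility) can be sketched with a toy log-shaped utility function; the function and numbers here are purely illustrative, not anything from the thread:

```python
import math

def U(n):
    """Toy diminishing-returns utility over n units of a liked thing."""
    return math.log(1 + n)

# Absolute utility keeps rising: U(n things) < U(n+1 things).
assert U(10) < U(11)

# ...but marginal utility declines: thing #11 adds less than thing #10 did.
assert U(11) - U(10) < U(10) - U(9)
```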
Agents have competing desires (more than one thing in their utility function), so the choices they make have to weigh different things: do I want N of x and M+1 of y, or N+1 of x and M of y? This is where it gets interesting: a satisficer generally values minimizing time and hassle more than getting more of a thing than is really necessary. An optimizer also values minimizing time and hassle, but weights them less against reaching more desirable future states.
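That weighing of marginal gain against search hassle can be sketched like this; the weights are hypothetical, chosen only to show that the two agent types differ in degree, not kind:

```python
import math

def marginal_gain(n):
    """Marginal utility of unit #(n+1) under a toy log utility U(n) = ln(1 + n)."""
    return math.log(2 + n) - math.log(1 + n)

def worth_pursuing(n, hassle_cost, hassle_weight):
    # Go after another unit only if its marginal utility beats the
    # weighted time-and-hassle cost of searching for it.
    return marginal_gain(n) > hassle_weight * hassle_cost

# Hypothetical weights: the satisficer prices hassle far higher.
print(worth_pursuing(10, hassle_cost=1.0, hassle_weight=0.05))  # optimizer: True
print(worth_pursuing(10, hassle_cost=1.0, hassle_weight=2.0))   # satisficer: False
```

Note that even the “optimizer” stops once the marginal gain falls below its hassle threshold, which fits the point above that real agents differ mainly in how sharp the cutoff is.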
Clippy doesn’t have multiple things to balance against each other, so it doesn’t matter whether its utility function has declining marginal utility, nor how steeply it declines. It has increasing absolute utility, and there’s nothing else to optimize, so more clips is always better. That is a separate topic from satisficers vs. maximizers.
Okay, thank you. I was focusing on the pathological case.