Um, the standard AI definition of a satisficer is:
“optimization where ‘all’ costs, including the cost of the optimization calculations themselves and the cost of getting information for use in those calculations, are considered.”
That is, a satisficer explicitly will not become a maximizer, because it is consciously aware of the costs of being a maximizer rather than a satisficer.
A maximizer might have a utility function like “p”, where p is the number of paperclips, while a satisficer would have a utility function like “p-c”, where p is the number of paperclips and c is the cost of the optimization process. The maximizer is potentially unbounded; the satisficer stops when marginal reward equals marginal cost (which could also be unbounded, but is less likely to be so).
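This stopping rule can be sketched in a few lines of Python. It's a toy illustration only: the marginal-return and marginal-cost curves (`marginal_paperclips`, `marginal_cost`) are hypothetical functions I've made up, chosen so that returns diminish and costs rise.

```python
def marginal_paperclips(step):
    # Hypothetical diminishing returns: each extra optimization step
    # yields fewer additional paperclips.
    return 100 / (step + 1)

def marginal_cost(step):
    # Hypothetical rising cost of the optimization process itself
    # (computation, information-gathering) per step.
    return 2 * step

def satisficer_steps(max_steps=1000):
    # The satisficer optimizes p - c, so it runs only while the
    # marginal reward still exceeds the marginal cost.
    for step in range(max_steps):
        if marginal_paperclips(step) <= marginal_cost(step):
            return step  # stop: further optimization isn't worth its cost
    return max_steps  # a pure maximizer of p alone would never stop here

print(satisficer_steps())  # stops after a finite number of steps
```

With these particular curves the loop halts quickly; a maximizer with utility "p" has no such stopping condition and runs until some external bound (here, `max_steps`) cuts it off.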
According to the page you cite, satisficers are a subset of maximisers. Satisficers are just maximisers whose utility functions factor in constraints.
Yes, for some definitions of maximizers. The article Stuart_Armstrong wrote seems to have differing definitions: his maximizers are agents that seek to get as much X as possible, and his satisficers want to get as much E(X) as possible. Then, trivially, those reduce to agents that want to get as much X as possible.
I don’t see that as novel or relevant, since what I would call satisficers are those that try to set marginal gain equal to marginal cost. Those generally do not reduce to agents that seek to get as much X as possible.