The thing is, an AI wouldn’t need to feel a sunk cost effect. It would act optimally simply by maximising expected utility.
For example, say that I decide to work on Task A, which will take me five hours and will earn me $200. After two hours of work, I discover Task B, which will award me $300 after five hours. At this point I can behave like a human, feel bored and annoyed, and maybe let the sunk cost effect make me continue. Or I can calculate the expected return: I’ll get $200 after three more hours of work on Task A, which is about $67 per hour, whereas I’ll get $300 after five hours on Task B, which is $60 per hour. So the rational thing to do is to avoid switching.
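A minimal sketch of that arithmetic (illustrative Python only, with the numbers from the example): the decision depends only on the marginal return per remaining hour, so the two hours already sunk into Task A never enter the calculation.

```python
# Compare the *marginal* return per remaining hour of each option.
# The two hours already spent on Task A are deliberately ignored.

def hourly_rate(payoff, hours_remaining):
    """Expected payoff per remaining hour of work."""
    return payoff / hours_remaining

rate_a = hourly_rate(200, 5 - 2)  # finish Task A: $200 for 3 more hours, ~$66.67/hr
rate_b = hourly_rate(300, 5)      # start Task B: $300 for 5 hours, $60.00/hr

choice = "continue Task A" if rate_a > rate_b else "switch to Task B"
print(f"A: ${rate_a:.2f}/hr, B: ${rate_b:.2f}/hr -> {choice}")
# A: $66.67/hr, B: $60.00/hr -> continue Task A
```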
The sunk cost effect tracks a real feature of the situation: after putting work into something, the remaining work needed to collect the payoff has decreased, so the effective wage for continuing goes up. An AI wouldn’t need that heuristic to act optimally.
One of my points is that you bury a great deal of hidden complexity and intelligence in ‘simply maximize expected utility’; it is true that sunk cost is a fallacy in many simple, fully specified models, and any simple AI can be rescued just by saying ‘give it a longer horizon! more computing power! more data!’, but do these simple models correspond to the real world?
(See also the question of whether exponential discounting rather than hyperbolic discounting is appropriate, if returns follow various random walks rather than remain constant in each time period.)
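(For concreteness, here is a purely illustrative comparison of one common functional form for each discount curve; the parameters are arbitrary, chosen only to make the crossover visible.)

```python
# Exponential discounting weights a payoff t periods away by delta**t;
# a common hyperbolic form weights it by 1 / (1 + k*t).
# With these arbitrary parameters the hyperbolic curve is lower at short
# delays but much higher at long ones.

def exponential(t, delta=0.7):
    return delta ** t

def hyperbolic(t, k=1.0):
    return 1.0 / (1.0 + k * t)

for t in (1, 5, 10, 20):
    print(t, round(exponential(t), 4), round(hyperbolic(t), 4))
# 1  0.7     0.5
# 5  0.1681  0.1667
# 10 0.0282  0.0909
# 20 0.0008  0.0476
```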
You neglected the part where the AI may stand to learn something from the tasks, and that learning may have a large expected value relative to the tasks themselves.
Yeah, but that comes under expected utility.
What else are you optimising besides utility? Doing the calculations with the dollar amounts tells you the expected monetary value of the tasks, but unless your utility function is U = $$$, you need to take other things into account.
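As a toy illustration of that point, extend the earlier per-hour sketch with a made-up non-monetary term and the decision flips, with no sunk cost reasoning involved.

```python
# The $100-equivalent "learning value" on Task B is an invented number,
# used only to show what happens once utility is more than U = $$$.

def utility_per_hour(payoff, hours_remaining, extra_value=0.0):
    """Expected utility per remaining hour: money plus anything else you value."""
    return (payoff + extra_value) / hours_remaining

# Money only: finishing Task A wins (~$66.67/hr vs $60/hr).
print(utility_per_hour(200, 3), utility_per_hour(300, 5))

# If Task B also teaches something you value at a $100 equivalent,
# switching now wins (~$66.67/hr vs $80/hr).
print(utility_per_hour(200, 3), utility_per_hour(300, 5, extra_value=100))
```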