We rightly despair of modeling humans as behavior-executors, so we model them as utility-maximizers instead.
I might be wrong about this, but it seems like your point here is similar to Daniel Dennett’s concept of the intentional stance.
Furthermore, I think here we get to another issue that is relevant for some of our previous discussions of utilitarianism, as well as various questions of cognitive bias. Namely, modeling humans (and other creatures that display some intelligence) as utility-maximizers in the literal sense, i.e. via actual maximization of an explicitly known utility function, is for all practical purposes intractable, just as modeling them as behavior-executors with full accuracy would be.

What is necessary to make people’s actions predictable enough (and in turn to enable human cooperation and coordination) is that their behavior verifiably follows some decision algorithm that is at once good enough to grapple with real-world problems and manageably predictable, in its relevant aspects, by other people.

And here we get to the point that I often bring up: behaviors that look like irrational bias (in the sense of deviation from rational individual utility maximization) and folk-ethical intuitions that clash with seemingly clear-cut consequentialist arguments may in fact be instances of such decision algorithms, and thus serve non-obvious but critically important functions in practice.
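To make the tractability point concrete, here is a minimal, purely illustrative sketch (the action set, the toy utility function, and both model names are invented for this example rather than taken from anything above): literally modeling an agent as maximizing an explicit utility function means searching a space of action sequences that grows exponentially with the horizon, whereas a simple fixed decision rule stays cheap for others to predict at any horizon.

```python
import itertools

# Toy illustration only: the actions and "utility" below are invented
# stand-ins, since no explicit human utility function is actually available.
ACTIONS = ("work", "rest", "socialize")

def toy_utility(history):
    # Arbitrary scoring of a sequence of actions.
    return (2 * history.count("work")
            + history.count("rest")
            + (1 if "socialize" in history else 0))

def literal_maximizer(horizon):
    # Model the agent as literally maximizing the explicit utility function:
    # exhaustive search over all len(ACTIONS) ** horizon action sequences.
    return max(itertools.product(ACTIONS, repeat=horizon), key=toy_utility)

def rule_follower(step):
    # Model the agent as following a fixed, easily predictable decision rule.
    return "work" if step % 7 < 5 else "rest"

if __name__ == "__main__":
    for horizon in (5, 10):
        # 3 ** 10 is already 59,049 sequences; 3 ** 30 would be about 2e14.
        print(horizon, literal_maximizer(horizon))
    # The rule-based model is trivially predictable at any horizon.
    print([rule_follower(t) for t in range(10)])
```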
...folk-ethical intuitions that clash with seemingly clear-cut consequentialist arguments may in fact be instances of such decision algorithms, and thus serve non-obvious but critically important functions in practice.
Is it fair to say that they are common to the extent they self-replicate, and that usefulness to the host is one important factor in each algorithm’s chance to exist? An important factor, but only one; only one factor, but an important one?
Indeed, a little too similar to Dennett’s intentional stance. If people don’t really have goals, but it is merely convenient to pretend they do, then the idea that people really have beliefs would seem to be in equal jeopardy. And then truth-seeking is in double jeopardy. But the trouble is, all along I’ve been trying to seek the truth about this blue-minimizing robot and related puzzles. I’ve been treating myself as an intentional system, something with both beliefs and goals, including goals about beliefs. And what I’ve just been told, it seems, is that my goals (or “goals”) will not be satisfied by this approach. OK then, I’ll turn elsewhere.
If there is some definition or criterion of “having goals” that human beings don’t meet—the von Neumann-Morgenstern utility theory, for example—it’s easy enough to discard that definition or criterion.
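For reference, and as a standard statement rather than anything from the comment itself, the von Neumann-Morgenstern criterion is the representation theorem: an agent’s preferences over lotteries satisfy completeness, transitivity, continuity, and independence exactly when they can be represented as maximizing the expectation of some utility function.

```latex
% von Neumann-Morgenstern representation theorem (standard form):
% for a preference relation \succsim over lotteries L, M satisfying the four
% axioms, there exists a utility function u such that
L \succsim M \iff \sum_{x} L(x)\, u(x) \;\ge\; \sum_{x} M(x)\, u(x)
% with u unique up to positive affine transformation.
```

Observed human choices violate these axioms in systematic ways (Allais-style preference patterns violating independence are the classic example), which is the sense in which people fail to meet this particular criterion of having goals.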