Your initial observation is interesting; your analysis is somewhat problematic.
First, there is, as mentioned elsewhere, a misconception about status that seems to recur here. Status is not a one-dimensional line on which each person can be objectively placed; it is multi-dimensional. For example, your person with a second-rate Fabergé egg collection may simply decide that what really counts is how nice someone's roses are, and thus regard better Fabergé eggs with sympathy (they should have spent all that money on roses!) rather than jealousy. When people compete on multiple dimensions, many of them can win, potentially even everyone.
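To make that dynamic concrete, here is a minimal sketch. Everything in it is invented for illustration: three hypothetical people, three arbitrary status dimensions, and the assumption that each person weights whichever dimension they happen to be strongest in.

```python
# Toy model: with multi-dimensional status and self-chosen weightings,
# every person can rank first by their own measure.
# All names, dimensions, and scores are hypothetical.

people = {
    "Alice": {"faberge_eggs": 9, "roses": 2, "chess": 4},
    "Bob":   {"faberge_eggs": 3, "roses": 8, "chess": 5},
    "Carol": {"faberge_eggs": 5, "roses": 4, "chess": 9},
}

def perceived_rank(person, dimension):
    """Rank `person` among everyone on one status dimension (1 = best)."""
    scores = sorted((p[dimension] for p in people.values()), reverse=True)
    return scores.index(people[person][dimension]) + 1

for name, scores in people.items():
    # Each person emphasizes the dimension where they do best.
    best_dim = max(scores, key=scores.get)
    print(f"{name} cares most about {best_dim}: rank {perceived_rank(name, best_dim)}")

# All three lines print "rank 1" -- everyone wins on the dimension
# they have decided "really counts".
```

Nothing forces the dimensions to be comparable, which is the point: as long as each person gets to pick the yardstick, there is no single ladder on which most people must be near the bottom.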
You are also generalizing from a non-representative sample. The type of person who achieves success through celebrity seems disproportionately likely to be status-obsessed. (The same is likely true of people who inherit fortunes, since they have few other ways to establish their self-worth.) Generalizing from these few about an AI's ability to serve all people may be inappropriate, especially since a world with an FAI may look very different in terms of how people are raised and educated.
Furthermore, your stipulation that an FAI be unable to alter utility functions seems like overkill. Every ad you see on television is an attempt to change your utility function; ads are just not terribly effective. Any change an FAI made to advertising, education, or perhaps even market pricing would necessarily have some effect on people's utility functions, and the FAI would presumably be aware of this. So while you may want some restriction on the FAI rewiring people's brains directly, an FAI wholly prohibited from altering utility functions would be practically incapable of action. In short, your analysis imagines too narrow a range of ways an FAI might go about its business: either it rewires everyone's brain directly, or it sits on its own circuits unable to act. There should be a middle ground.
And, as an aside, your mention of "average utilitarians" betrays a common misconception that's getting old 'round these parts. Utility is interdependent. It's all well and good to note that, in theory, eliminating everyone but the two happiest people would maximize average utility, but in reality those two people would not want to live like that. It might work on a spreadsheet, but actually eliminating most of the population is extraordinarily unlikely to maximize average utility, because utility is not merely a function of available resources.
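A deliberately crude model shows how the spreadsheet answer flips once interdependence enters. The functional forms and numbers below are made up; the only point is that adding any term rewarding life among other people can reverse the "eliminate everyone but the happiest" conclusion.

```python
# Crude illustration: average utility under a resources-only model vs. a
# model where each person's utility also depends on the population around
# them. All functional forms and constants are invented for illustration.

import math

TOTAL_RESOURCES = 1000.0

def avg_utility(population, interdependent=True):
    share = TOTAL_RESOURCES / population   # resources per person
    u = math.log(share)                    # diminishing returns on resources
    if interdependent:
        # People derive utility from living among others; a near-empty
        # world imposes a large cost regardless of per-capita wealth.
        u += 2.0 * math.log(population)
    return u

for n in (100, 2):
    print(f"n={n:>3}  resources-only avg: {avg_utility(n, False):6.2f}  "
          f"interdependent avg: {avg_utility(n, True):6.2f}")

# Resources-only: shrinking to n=2 looks like a huge win (log 500 > log 10).
# With the interdependence term, the n=100 world has the higher average --
# the spreadsheet answer reverses.
```

The specific coefficient doesn't matter; any model in which people value the existence of other people at all makes mass elimination a poor way to raise the average.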