i interpret this to mean “some entities’ values will want to use as much matter as they can, so not all values can be unboundedly fulfilled”. this is true and not a crux. if letting a moral patient who wants unboundedly much of something actually make unboundedly much of it would be less good than other ways the world could be, then an (altruistically-)aligned agent would choose one of the other ways.
superintelligence is context-aware in this way; it is not {a rigid system which fails on outliers it doesn’t expect (e.g.: “tries to create utopia, but instead gives the whole lightcone to whichever maximizer requests it all first”), and which therefore needs a somewhat less rigid but not-superintelligent system (an economy) to avoid this}. i suspect this (superintelligence being context-aware) is effectively the crux here.
The other issue is value conflicts, which I expect to be mostly irresolvable in a satisfying way by default, due to moral subjectivism combined with my belief that many value conflicts today are suppressed mainly because people can’t make their own nation-states; with AI, they can, and superintelligence makes the problem worse.
That’s why you can’t have utopia for everyone.