The incentive problem still remains, such that it’s more effective to use the price system than to use a command economy to deal with incentive issues:
going by the linked tweet, does “incentive problem” mean “needing to incentivize individuals to share information about their preferences in some way, which is currently done through their economic behavior, in order for their preferences to be fulfilled”? and is this contrasted with a “command economy”, where everything is planned out long in advance, possibly with less information about the preferences of individual moral patients?
if so, those sound like abstractions which have been relevant to the world so far, but can you not imagine any better way a superintelligence could elicit this information? it does not need to use prices or trade. some examples:
- it could have many copies of itself talk to them
- it could let beings enter whatever they want into a computer in real time, or really let beings convey their preferences in whatever medium they prefer, and fulfill them[1]
- it could mind-scan those who are okay with this.
(these are just examples selected for clarity; i personally would expect something more complex and less thing-oriented, around moral patients who are okay with/desire it, where superintelligence imbues itself as computation throughout the lowest level of physics upon which this is possible, and so it is as if physics itself is contextually aware and benevolent)
(i think these also sufficiently address your point 2, about SI needing ‘contact with reality’)
there is also a second (but non-cruxy) assumption here: that preference information would need to be dispersed across some production ecosystem, which would not be true given general-purpose superintelligent nanofactories. in any case it is not a crux as long as whatever is required for production can fit on, e.g., a planet, across which the information derived in, e.g., one of the ways listed above can be communicated at light-speed, as we partially do now.
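(a rough back-of-the-envelope for that last parenthetical, using standard well-known figures: half of Earth’s circumference is about 20,000 km and light travels about 300,000 km/s, so planet-scale one-way latency is roughly 20,000 km ÷ 300,000 km/s ≈ 0.07 seconds, i.e. tens of milliseconds, far faster than the feedback loops that prices currently provide.)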
A potentially large crux is I don’t really think a utopia is possible, at least in the early years even by superintelligences, because I expect preferences in the new environment to grow unboundedly such that preferences are always dissatisfied
i interpret this to mean “some entities’ values will want to use as much matter as they can for things, so not all values can be unboundedly fulfilled”. this is true and not a crux. if a moral patient wants to make unboundedly much of something, but their actually making unboundedly much of it would be less good than other ways the world could be, then an (altruistically-)aligned agent would choose one of those other ways.
superintelligence is context-aware in this way; it is not {a rigid system which fails on outliers it doesn’t expect (e.g.: “tries to create utopia, but instead gives all the lightcone to whichever maximizer requests it all first”), and so which needs a somewhat less rigid but not-superintelligent system (an economy) to avoid this}. i suspect this (superintelligence being context-aware) is effectively the crux here.
The other issue is value conflicts, which I expect to be mostly irresolvable in a satisfying way by default due to moral subjectivism combined with me believing that lots of value conflicts today are mostly suppressed because people can’t make their own nation-states, but with AI, they can, and superintelligence makes the problem worse.
That’s why you can’t have utopia for everyone.

[1] (if morally acceptable, e.g. no creating hells)