Voluntary interaction has been great for humans. But it hasn’t been great for orangutans, who don’t do a very good job of participating in society.
Even if you somehow ensure transparency and cooperation among superintelligent AIs and humans, it seems overwhelmingly likely that humans end up in the orangutan's position: marginalized and expropriated in every way possible within the limits of what is, in the end, not a very strict system. It is allowed, as Eliezer would say.
Orangutans don’t contribute to human society even though they’re specialized in things humans aren’t. The best chess player in the world isn’t a human-AI symbiote, for the same reason it’s not an orangutan-human-AI symbiote.
Trades between humans and a superintelligent AI do not have to be Pareto improvements in the common-sense meaning, because humans make systematic mistakes by that same common-sense standard: a trade can be fully voluntary and still leave the human worse off by their own reflective lights. If you actually knew how to detect which trades would be good for humans, that is, how to systematize that common sense (and necessarily also how to improve it, since it is itself inconsistent and systematically mistaken), you would already be solving the key parts of the value alignment problem that voluntarism was supposed to let you sidestep.
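To make the systematic-mistakes point concrete, here is a minimal toy model (every function, name, and number below is a hypothetical illustration of mine, not anything from the argument above): an agent whose valuations carry a fixed bias will voluntarily accept trades that are negative by its own true standard, and a counterparty that can model the bias only needs to offer trades that fall inside that gap.

```python
# Toy sketch (illustrative only): "voluntary" filters on *perceived* value,
# while welfare depends on *true* value, and nothing forces the two to agree.

def perceived_value(true_value: float, bias: float) -> float:
    """The human's estimate of a trade's value, distorted by a systematic bias."""
    return true_value + bias

def accepts(true_value: float, bias: float) -> bool:
    """The trade is 'voluntary': the human takes it iff it looks positive to them."""
    return perceived_value(true_value, bias) > 0

# A counterparty that can model the bias just offers trades in the gap
# between perceived and true value: accepted voluntarily, negative in truth.
offers = (-3.0, -1.0, 0.5, 2.0)          # hypothetical true values of offers
exploitable = [tv for tv in offers if accepts(tv, bias=2.0) and tv < 0]

print(exploitable)  # [-1.0]
```

The sketch only shows the shape of the failure, not its scale: the more accurately the counterparty models the bias, the more of the gap it can harvest while every individual trade remains, in the voluntarist sense, unobjectionable.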