And if you’re not using your power/money to affect which of those two outcomes is more likely to happen than the other, then your power/money is completely useless. They won’t be useful if we all die, and they won’t be useful if we get utopia.
I disagree with this, because I think the following three things are true:
There is a finite amount of value in the accessible universe (or multiverse, or whatever).
Some people have unbounded “values”, especially around positional goods like status among other humans.
A way I imagine this concretely playing out, conditional on intent alignment succeeding, is that very powerful post-human beings, descended from the people who controlled AI during the pivotal period, play very costly status games with each other, constructing the cosmic equivalent of the Bugatti Veyron or the Oman Royal Yacht Squadron, without being concerned with impartial value. I still expect them to provide for the “basic” needs of humanity, because doing so is so incredibly cheap, making it a utopia for people with bounded or modest goals, but e.g. preventing impartial hedonic utilitarians, or people with many positional or nosy values, from enacting their goals.
This depends on the people ultimately in charge of powerful AI systems being philosophically unsophisticated, but most people are philosophically unsophisticated, and philosophical sophistication appears mostly uncorrelated with engineering or business success, so this doesn’t seem like a bottleneck.
This view, of course, fails when single individuals become exceedingly powerful, in which case I don’t have as strong a story. I’d be interested in what individual humans have historically done when they were strongly dominant over all forces around them.
Um, I’m pretty sure that history has some examples of what individual humans tend to do when strongly dominant over all local social forces. If we extrapolate from that, it, uh, doesn’t look pretty. We can hope that things will be different when there is an abundance of material wealth, but I don’t feel much confidence in that.