I’d be happy with an AI that makes people on Earth better off without eating the rest of the universe, and gives us the option to eat the universe later if we want to...
If the AI doesn’t take over the universe first, how will it prevent Malthusian uploads, burning of the cosmic commons, private hell simulations, and such?
Those things you want to prevent are all caused by humans, so the AI on Earth can directly prevent them. The rest of the universe is only relevant if you think that there are other optimizers out there, or if you want to use it, probably because you are a total utilitarian. But even a small chance of another optimizer suggests that anyone would eat the universe.
Cousin_it said “and gives us the option to eat the universe later if we want to...”, which I take to mean that the AI would not stop humans from colonizing the universe on their own, which would bring the problems that I mentioned.
On second thought, I agree with Douglas_Knight’s answer. It’s important for the AI to stop people from doing bad things with the universe, but for that the AI just needs to have power over people, not over the whole universe. And since I know about the risks from alien AIs and still don’t want to take over the universe, maybe the CEV of all people won’t want that either. It depends on how many people think population growth is good, and how many people think it’s better to leave most of the universe untouched, and how strongly people believe in these and other related ideas, and which of them will be marked as “wrong” by the AI.
I find your desire to leave the universe “untouched” puzzling. Are you saying that you have a terminal goal to prevent most of the universe from being influenced by human actions, or is it an instrumental value of some sort (for example you want to know what would happen if the universe is allowed to develop naturally)?
Well, it’s not a very strong desire; I suspect that many other people have much stronger “naturalistic” urges than I do. But since you ask, I’ll try to introspect anyway:
Curiosity doesn’t seem to be the reason, because I want to leave the universe untouched even after I die. It feels more like altruism. Some time ago Eliezer wrote about the desire not to be optimized too hard by an outside agent. If I can desire that for myself, then I can also desire it for aliens: give them a chance not to be optimized by us… Of course if there are aliens, we might need to defend ourselves. But something in me doesn’t like the idea of taking over the universe in preemptive self-defense. I’d prefer to find some other way to stay safe...
Sorry if this sounds confusing, I’m confused about it too.
That helps me understand your position, but it seems unlikely that enough people would desire it strongly enough for CEV to conclude we should give up colonizing the universe altogether. Perhaps some sort of compromise would be reached, for example the FAI would colonize the universe but bypass any solar systems that contain or may evolve intelligent life. Would that be sufficient to satisfy (or mostly satisfy) your desire not to optimize aliens?
Would you like it if aliens colonized the whole universe except our system, or would you prefer if they cared about our wishes and didn’t put us in that situation?
Ok, but are we optimising the expected case or the worst case? If the former, then the probability of those things happening with no special steps against them is relevant. To take the easiest example: would postponing the “take over the universe” step for 300 years make a big difference in the expected amount of cosmic commons burned before takeover?
Depends. Would this allow someone else to move outside its defined sphere of influence and build an AI that doesn’t wait?
If the AI isn’t taking over the universe, that might leave the option open that something else will. If it doesn’t control humanity, chances are that will be another human-originated AI. If it does control humanity, why are we waiting?
For that, it’s sufficient to take over only the Earth and keep an eye on its Malthusian uploads and private hell simulations (though restricting the AI to Earth seems pointless and hard to implement).