If OT is false, then only some combinations of goals and intelligence are possible, or perhaps none. An Oracle AI could still fall within that limited set of combinations.
The argument is that given an Oracle and an entity of limited intelligence that has goal G, we can construct a superintelligent being with goal G by having the limited intelligence ask the Oracle how to achieve G.
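A minimal sketch of that construction, assuming a hypothetical Oracle that answers “how do I achieve G?” with a superintelligent plan (every name here is illustrative, not from any real system):

```python
from typing import Callable, List


class Oracle:
    """Hypothetical superintelligent question-answerer.

    It has no goals of its own; it only answers the questions it is
    asked. The planning ability is assumed, not implemented.
    """

    def ask(self, question: str) -> List[str]:
        raise NotImplementedError("stands in for superintelligent planning")


class LimitedAgent:
    """An entity of limited intelligence that holds goal G."""

    def __init__(self, goal: str, oracle: Oracle) -> None:
        self.goal = goal
        self.oracle = oracle

    def pursue(self, execute: Callable[[str], None]) -> None:
        # The agent contributes the goal; the Oracle contributes the
        # intelligence. The composite system therefore behaves like a
        # superintelligent being with goal G.
        plan = self.oracle.ask(f"How do I achieve: {self.goal}?")
        for step in plan:
            execute(step)
```

The point of the sketch is only the composition: the limited agent supplies the goal, the Oracle supplies the intelligence, and the composite system behaves like a superintelligent being with goal G.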
You are possibly the first person in the world to think that morality has something to do with your copies.
Negotiating with your copies is a much easier version of negotiating with other people.
> The argument is that given an Oracle and an entity of limited intelligence that has goal G, we can construct a superintelligent being with goal G by having the limited intelligence ask the Oracle how to achieve G.
But it still might not be possible, in which case the Oracle will be of no help. That construction only removes the difficulties that are due to limited intelligence on the builder’s part.
> Negotiating with your copies is a much easier version of negotiating with other people.
I don’t have any copies I can interact with, so how can it be easy?
I still don’t see the big problem with MR. In other conversations, people have put it to me that MR is impossible because it is impossible to completely satisfy everyone’s preferences. That is true, but it is not something MR requires. It is fairly obvious that morality in general requires compromises and sacrifices, since we see that happening all the time in the real world.