If importance were objective, then a Clippie could realise that paperclips are unimportant. The OT then comes down to intrinsic moral motivation, i.e. whether a Clippie could realise the importance without being motivated to act on it.
OT implies the possibility of oracle AI, but the falsehood of OT does not imply the falsehood of oracle AI. If OT is false, then not every combination of goals and intelligence is possible. Oracle AI could still fall within the limited set of combinations that are.
OT does not imply that MR is false, any more than MR implies that OT is false. The intuitiveness of oracle AI does not support the OT, for the reasons given above.
Moral realists are not obviously in need of a definition of objective truth, any more than physicalists are. They may be in need of an epistemology to explain how it is arrived at and how it is justified.
It is uncontentious that physical and mathematical facts do not compel all minds. Objective truth is not unconditional compulsion.
Moral realists do not have to, and often do not, claim that there is anything special about the truth or justification of their claims: at the least, you bear the burden of justifying the claim that moral realists rely on a special notion of truth.
The fact that some people are more dogmatic about their moral beliefs than proper epistemology would allow is no argument against MR. Dogmatism and confirmation bias are widespread. Much of what has been believed by scientists and rationalists has been wrong. If you had been born 4000 years ago, you would have had little evidence of objective mathematical or physical truth.
Your steelmanning of MR is fair enough. (It would have helped to emphasize that high-level principles, such as “don’t annoy people”, are more defensible than fine-grained stuff like “don’t scrape your fingernails across a blackboard”.) It is not as fair as reading and commenting on an actual moral realist. (LessWrong is LessWrong.)
You are possibly the first person in the world to think that morality has something to do with your copies. (By definition, you cannot interact with your MWI counterparts.)
Reducing the huge set of possibilities is not so far away from the Guru’s CEV; nor is it so far away from utilitarianism. I don’t think that either is obviously true, and I don’t think either is obviously false. It’s an open question.
If OT is false, then not every combination of goals and intelligence is possible. Oracle AI could still fall within the limited set of combinations that are.
The argument is that given an Oracle and an entity of limited intelligence that has goal G, we can construct a superintelligent being with goal G by having the limited intelligence ask the Oracle how to achieve G.
You are possibly the first person in the world to think that morality has something to do with your copies.
Negotiating with your copies is a much easier version of negotiating with other people.
The argument is that given an Oracle and an entity of limited intelligence that has goal G, we can construct a superintelligent being with goal G by having the limited intelligence ask the Oracle how to achieve G.
But it still might not be possible, in which case the Oracle will not be of help. That scenario only removes difficulties due to limited intelligence on the builder’s part.
Negotiating with your copies is a much easier version of negotiating with other people.
I don’t have any copies I can interact with, so how can it be easy?
I still don’t see the big problem with MR. In other conversations, people have put it to me that MR is impossible because it is impossible to completely satisfy everyone’s preferences. It is impossible to completely satisfy everyone’s preferences, but that is not something MR requires. It is fairly obvious that morality in general requires compromises and sacrifices, since we see that happening all the time in the real world.