According to orthogonality, importance is a subjective belief, not an objective one.
The OT needs that assumption. It is not a free-floating truth, and so should not be argued from as if it were.
If instead you believe that moral values are objective, do you have evidence for this position?
It is an open question philosophically. Which means that:
1. it is more a question of arguing for it than of finding empirical evidence;
2. there is no unproblematic default position. You can’t argue “no evidence for objectivism, therefore subjectivism”, because it is equally the case that there is no evidence for subjectivism. There are just arguments on both sides.
Yes, it’s a morass and a mess. But OT can be true even if (most variants of) moral realism are true. Though OT is a strong indication that a lot of the intuitions connected with strong moral realism are suspect.
As for burden of proof… Well, the debate is complex (it feels like a diseased debate, but I’m not yet sure about that). But, to simplify a bit, moral realists are asserting that certain moral facts are “objectively true” in a way they haven’t really defined (or, if they have defined it, their definitions are almost certainly false, such as moral facts being universally compelling for all types of minds).
So I’d say the burden of proof is very clearly on those asserting the existence of these special properties of moral facts. Especially since many people seem much more certain as to what some of the objective moral facts are, than why they are objective (always a bad sign) and often disagree with each other.
If I wanted to steel-man moral realism, I’d argue that the properties of utility functions demonstrate that one cannot have completely unconstrained preferences and still be consistent in ways that feel natural. UDT and other acausal decision theories put certain constraints on how you should interact with your copies, and maybe you can come up with some decent single way of negotiating between different agents with different preferences (we tried this for a long time, but couldn’t crack it). Therefore there is some sense in which some classes of preferences are better than others. Then a moral realist could squint really hard and say “hey, we can continue this process, and refine it, and reduce the huge class of possible utilities to a much smaller set”.
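To make the consistency point concrete, here is a toy sketch of my own (purely illustrative, not from any particular paper): an agent with completely unconstrained preferences can be intransitive, and an intransitive agent can be money-pumped, which is one standard sense in which such preferences fail to be consistent.

```python
# Toy money-pump (illustrative sketch): an agent with intransitive
# preferences (A > B, B > C, C > A) will pay a small fee for each swap
# it prefers, cycling through the same items while steadily losing money.

# prefers[x] is the item the agent will pay to swap x for.
prefers = {"A": "C", "C": "B", "B": "A"}  # C > A, B > C, A > B: a cycle

holding, money = "A", 100.0
FEE = 1.0  # the agent happily pays this for each strictly preferred swap

for _ in range(10):
    holding, money = prefers[holding], money - FEE

# The agent has cycled through items it already held, 10 units poorer.
print(holding, money)
```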
If importance were objective, then a Clippie could realise that paperclips are unimportant. The OT then comes down to intrinsic moral motivation, i.e. whether a Clippie could realise the importance without being moved to act on it.
The OT implies the possibility of oracle AI, but the falsehood of the OT does not imply the falsehood of oracle AI. If the OT is false, then only some (or no) combinations of goals and intelligence are possible. Oracle AI could still fall within that limited set of combinations.
The OT does not imply that MR is false, any more than MR implies that the OT is false. The intuitiveness of oracle AI does not support the OT, for the reasons given above.
Moral realists are not obviously in need of a definition of objective truth, any more than physicalists are. They may be in need of an epistemology to explain how it is arrived at and how it is justified.
It is uncontentious that physical and mathematical facts do not compel all minds. Objective truth is not unconditional compulsion.
Moral realists do not have to, and often do not, claim that there is anything special about the truth or justification of their claims: at the least, you have the burden of justifying the claim that moral realists have a special notion of truth.
The fact that some people are more dogmatic about their moral beliefs than proper epistemology would allow is no argument against MR. Dogmatism and confirmation bias are widespread. Much of what has been believed by scientists and rationalists has been wrong. If you had been born 4000 years ago, you would have had little evidence of objective mathematical or physical truth.
Your steel-manning of MR is fair enough. (It would have helped to emphasize that high-level principles, such as “don’t annoy people”, are more defensible than fine-grained stuff like “don’t scrape your fingernails across a blackboard”.) It is not as fair as reading and commenting on an actual moral realist, though. (LessWrong is LessWrong.)
You are possibly the first person in the world to think that morality has something to do with your copies. (By definition, you cannot interact with your MWI counterparts.)
Reducing the huge set of possibilities is not so far away from the Guru’s CEV; nor is it so far away from utilitarianism. I don’t think that either is obviously true, and I don’t think that either is obviously false. It’s an open question.
If the OT is false, then only some (or no) combinations of goals and intelligence are possible. Oracle AI could still fall within that limited set of combinations.
The argument is that given an Oracle and an entity of limited intelligence that has goal G, we can construct a superintelligent being with goal G by having the limited intelligence ask the Oracle how to achieve G.
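Here is a minimal sketch of that construction; all the class and method names are hypothetical stand-ins of mine, not anything from an actual proposal:

```python
# Illustrative sketch: a goal-less superintelligent question-answerer
# composed with a limited agent that has goal G behaves, from the
# outside, like a superintelligent agent pursuing G.

class Oracle:
    """Stand-in for a superintelligent answerer with no goals of its own."""
    def ask(self, question: str) -> list[str]:
        # Placeholder: a real oracle would return a genuinely good plan.
        return [f"step 1 toward: {question}", f"step 2 toward: {question}"]

class LimitedAgent:
    """Has goal G but minimal intelligence: it can only ask and execute."""
    def __init__(self, goal: str, oracle: Oracle):
        self.goal, self.oracle = goal, oracle

    def act(self) -> None:
        plan = self.oracle.ask(f"How do I achieve: {self.goal}?")
        for step in plan:
            self.execute(step)

    def execute(self, step: str) -> None:
        print("executing:", step)

# The composed system pursues G with the Oracle's intelligence:
LimitedAgent("maximise paperclips", Oracle()).act()
```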
You are possibly the first person in the world to think that morality has something to do with your copies.
Negotiating with your copies is the much easier version of negotiating with other people.
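A toy sketch of why (my illustration, under the strong assumption that a copy shares both your utility function and your decision procedure): whatever you would propose, the copy proposes identically, so agreement is immediate.

```python
# Toy sketch: negotiating with an exact copy. The copy runs the *same*
# decision procedure over the *same* utility function, so its proposal
# necessarily matches mine; with a different agent, neither holds and
# bargaining becomes genuinely hard.

def propose_split(total: float) -> float:
    """The one decision procedure that both I and my copy run."""
    # Any asymmetric proposal would be mirrored by the copy, so the
    # even split is the natural fixed point under symmetry.
    return total / 2

my_share = propose_split(100.0)
copys_share = propose_split(100.0)  # the copy runs this very same code

assert my_share + copys_share == 100.0  # the proposals are compatible
print(my_share, copys_share)
```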
The argument is that given an Oracle and an entity of limited intelligence that has goal G, we can construct a superintelligent being with goal G by having the limited intelligence ask the Oracle how to achieve G.
But it still might not be possible, in which case the Oracle will not be of help. That scenario only removes difficulties due to limited intelligence on the builder’s part.
Negotiating with your copies is the much easier version of negotiating with other people.
I don’t have any copies I can interact with, so how can it be easy?
I still don’t see the big problem with MR. In other conversations, people have put it to me that MR is impossible because it is impossible to completely satisfy everyone’s preferences. It is impossible to completely satisfy everyone’s preferences, but that is not something MR requires. It is kind of obvious that morality in general requires compromises and sacrifices, since we see that happening all the time in the real world.
The OT needs that assumption. It is not a free-floating truth, and so should not be argued from as if it were.
It is an open question philosophically. Which means that:
1. it is more a question of arguing for it than of finding empirical evidence;
2. there is no unproblematic default position. You can’t argue “no evidence for objectivism, therefore subjectivism”, because it is equally the case that there is no evidence for subjectivism. There are just arguments on both sides.
Actually, I phrased that poorly: the OT does not need that assumption; it doesn’t use it at all. The OT is true for extreme ideas such as AIXI and Gödel machines, and if the OT is false, then Oracle AI cannot be built. See http://lesswrong.com/lw/cej/general_purpose_intelligence_arguing_the/ for more details.
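To make the AIXI point concrete, here is Hutter’s action-selection rule as I remember it (so treat the details with care); what matters for the OT is that the rewards enter as a free parameter, so the same maximally intelligent search can be paired with essentially any computable goal:

```latex
% AIXI's action at cycle k (horizon m, universal Turing machine U,
% program q of length \ell(q)); reproduced from memory:
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left( r_k + \cdots + r_m \right)
       \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
% The reward stream r_k ... r_m is exogenous: substitute any computable
% rewards and the same expectimax-over-programs machinery applies, which
% is the OT holding for AIXI.
```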