It would seem that you believe that. So what is your proof?
Proof is only meaningful in a system of shared assumptions (physics) or axioms (logic).
The statement that “it’s wrong to believe without proof”, or equivalently that “a single correct set of beliefs is mandated by your proof (=evidence) and assumptions (=prior)”, is a logical consequence of the rules of Bayesian inference.
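To illustrate the claim (with toy numbers of my own, not anything from this discussion): once a prior and the likelihoods of the evidence are fixed, Bayes’ rule leaves no further freedom, so any two reasoners sharing them must arrive at the same posterior.

```python
# Bayes' rule: posterior is proportional to likelihood times prior.
# Given a shared prior and the same evidence, exactly one posterior
# belief is mandated -- there is nothing left to choose.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) computed from P(H), P(E | H), and P(E | not-H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Prior 0.5, and evidence four times as likely under H as under not-H:
p = posterior(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(round(p, 3))  # 0.8
```

The numbers (0.5, 0.8, 0.2) are arbitrary illustrations; the point is only that the output is a function of the inputs, with no extra degree of freedom.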
If moral realists, or anyone else, don’t agree to Bayesianism or another commonly understood framework of proof, logic, and common assumptions, then I’m not interested in talking to them (about moral realism).
From the more physicsy side [...]
I believe all those things (with very high probability, to be pedantic). I know Gödel’s incompleteness theorems.
The fact that some true statements are formally unprovable in a logical system is not significant evidence that any specific unproven statement (e.g. moral realism) is in fact true just because it’s not disproven. And the theorem doesn’t apply to probabilistic belief systems modeling the physical universe: I can have an arbitrary degree of confidence in a belief, as long as it’s not probability 1, without requiring logical proof.
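A minimal sketch of that last point, with an arbitrary likelihood ratio of 4 chosen purely for illustration: repeated Bayesian updating can push confidence as close to 1 as you like, yet the belief never becomes a probability-1 “proof”.

```python
# Repeated Bayesian updates on independent observations, each of which
# is 4x as likely if hypothesis H is true.  Confidence approaches 1
# arbitrarily closely but never reaches it -- no logical proof required.

belief = 0.5  # start maximally unsure about H
for _ in range(20):
    odds = belief / (1 - belief) * 4.0  # Bayes' rule in odds form
    belief = odds / (1 + odds)

print(belief)        # extremely close to 1 ...
print(belief < 1.0)  # ... but still strictly less than 1
```

After 20 such updates the odds are 4^20 to 1, a confidence of roughly 1 − 10⁻¹², which is as strong as most practical purposes demand without ever being certainty.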
However, the situation for moral realism isn’t “unproven conjecture”, it’s more like “unformalized conjecture whose proponents refuse to specify what it actually means”. At least that’s the state of debate in this thread.
As to my explaining to you the details of why anyone should be a moral realist, I’m not interested in attempting that. I’m not a committed moral realist myself, and I’d have to do a lot of reading myself to find a good description of what you ask for. Sorry.
Yet you assign sufficient probability to moral realism to think it’s worth discussing or reading about. Otherwise you’d have said from the start, “I agree with you that moral realism has no evidence for it, let’s just drop the subject”. To have such a high prior requires evidence. If you don’t have such evidence, you are wrong.
It would seem that you believe that. So what is your proof?
Proof is only meaningful in a system of shared assumptions (physics) or axioms (logic).
I’m glad you recognize that. Then you should also recognize that, for reasons having nothing to do with logical necessity, you have accepted some things as true which are unprovable: in your case, a particular interpretation of how to do Bayesian reasoning.
All you have offered so far is assertion, and the appearance that you don’t even realize you are making assumptions until they are pointed out. When I found myself in that position, it humbled me a bit. In Bayesian terms, it moved all of my estimates further from 0 and 1 than they had been.
In any case, whatever program you have used to decide what you could assume, what if your assumptions are incomplete? What if you simply haven’t tried hard enough to have something “more than zero” on the moral side?
If moral realists, or anyone else, don’t agree to Bayesianism or another commonly understood framework of proof, logic, and common assumptions, then I’m not interested in talking to them (about moral realism).
So you have picked your church and your doctrine and you wish to preserve your orthodoxy by avoiding intelligent apostates. This is not a new position to take, but it has always seemed to me to be a very human bias, so I am surprised to see it stated so baldly on a website devoted to avoiding human biases in the twenty-first century. Which is to say, are you SURE you want to treat your assumptions as if they were the one true religion?
However, the situation for moral realism isn’t “unproven conjecture”, it’s more like “unformalized conjecture whose proponents refuse to specify what it actually means”. At least that’s the state of debate in this thread.
Why limit yourself to this one thread, populated as it isn’t by anyone who claims any real expertise?
Yet you assign sufficient probability to moral realism to think it’s worth discussing or reading about. Otherwise you’d have said from the start, “I agree with you that moral realism has no evidence for it, let’s just drop the subject”. To have such a high prior requires evidence. If you don’t have such evidence, you are wrong.
Your particular rejection of moral realism doesn’t seem to reflect much knowledge. For a Bayesian, knowing that other intelligent minds have looked at something, gathered LOTS of evidence and done lots of analysis, and reached a different conclusion than your prior should LOWER your certainty in your prior. Finding one guy who can’t or won’t spoonfeed you concentrated moral realism, and claiming on that basis that your prior of essentially zero must stand is not at all how I interpret the Bayesian program. In my interpretation, it is when I am ignorant that my mind is most open, that my estimates are furthest from 0 and 1.
I wish someone like Eliezer, or someone who knows his morality well enough, would pipe in on this, because from my reading of Eliezer, he is also a moral realist. Not that that proves anything, but it is relevant Bayesian evidence.
what if your assumptions are incomplete? What if you simply haven’t tried hard enough to have something “more than zero” on the moral side?
Normally one tries to assume as little as necessary. To argue in favour of new assumptions, one might show that they are necessary (or even sufficient) for some useful, desirable conclusions. Are there any such here? If not, why assume e.g. moral realism when one could just as well assume any of infinitely many alternatives?
So you have picked your church and your doctrine and you wish to preserve your orthodoxy by avoiding intelligent apostates.
NO. This is completely wrong. You have not understood my position.
I said:
If moral realists, or anyone else, don’t agree to Bayesianism or another commonly understood framework of proof, logic, and common assumptions...
Note the emphasis on “or another commonly understood framework”. I am not demanding dialogue within a specific worldview. I’m asking that the rules of whatever worldview is being discussed be stated clearly. I’m asking for rigorous definitions instead of phrases like “morality objectively exists”, which everyone may understand differently.
Why limit yourself to this one thread, populated as it isn’t by anyone who claims any real expertise?
This thread is on LW. When people here said they gave a high prior to moral realism (i.e. did not dismiss it as I did), I assumed they were rational about it: that they had some evidence to support such a prior. By now it’s pretty clear that this is not the case, so after these last few posts of clarification I think the thread should end.
As for looking elsewhere, I did when referred (as with the Stanford Encyclopedia of Philosophy), and the presumably high-quality summaries there confirmed my belief that there’s nothing to moral realism: it’s not a defensible or even well-defined position, and it is not worth investigating.
Your particular rejection of moral realism doesn’t seem to reflect much knowledge. For a Bayesian, knowing that other intelligent minds have looked at something, gathered LOTS of evidence and done lots of analysis, and reached a different conclusion than your prior should LOWER your certainty in your prior.
I lowered it. That’s why I was willing to spend time on this conversation. Then I examined the evidence those other minds could offer and raised my certainty way back up.
In my interpretation, it is when I am ignorant that my mind is most open, that my estimates are furthest from 0 and 1.
That is just wrong. A prior of zero knowledge does not mean assigning 0.5 probability to every proposition. Propositions are entangled, so that would be inconsistent. Besides, you have evidence-based priors about other propositions entangled with this one, so your prior isn’t naive anyway.
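The inconsistency is easy to exhibit concretely. As a minimal sketch (a made-up three-way example, not anything from the thread): take three mutually exclusive, jointly exhaustive propositions and naively assign each a “know-nothing” probability of 0.5.

```python
# Why "assign 0.5 to every proposition" is incoherent once propositions
# are logically entangled: a partition's probabilities must sum to 1.

propositions = ["X is red", "X is green", "X is blue"]  # exactly one is true
naive = {p: 0.5 for p in propositions}  # "total ignorance" as 0.5 each

total = sum(naive.values())
print(total)  # 1.5 -- violates the requirement that a partition sum to 1

# The uniform-0.5 assignment is therefore inconsistent, not neutral.
assert total != 1.0
```

An ignorance prior has to respect the logical relations among propositions (here, mutual exclusivity), which is exactly why zero knowledge does not mean 0.5 everywhere.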
from my reading of Eliezer, he is also a moral realist.
No, he’s not. See this post, which was recently on Sequence Reruns, and the other posts linked from it. See also the entire Metaethics Sequence in the wiki, and in particular this post in two parts arguing against simple moral realism.
When people here said they gave a high prior to moral realism (i.e. did not dismiss it as I did), I assumed they were rational about it: that they had some evidence to support such a prior. By now it’s pretty clear that this is not the case,
Perhaps they misunderstood what was referred to by “moral realism”. The phrase certainly doesn’t seem to be very well defined. For example Eliezer does say that there are things that are actually right, and actually wrong. mwengler seems to think this is sufficient to make him a moral realist. You don’t. Classic recipe for confusion.
Apologies for replying late.
You seem to misunderstand my comments.