Aumann’s Agreement Theorem says that SREoEs who start with the same beliefs and see the same evidence cannot disagree without at least one of them making an error.
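For reference, the formal statement, as I understand it (my paraphrase of Aumann 1976; strictly, the theorem needs a common prior and common knowledge of the posteriors, not literally “the same evidence”):

$$P_1 = P_2 = P, \quad q_i = P(A \mid \mathcal{I}_i), \quad q_1, q_2 \text{ common knowledge} \;\Rightarrow\; q_1 = q_2,$$

where $\mathcal{I}_i$ is agent $i$’s private information.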
Perhaps, then, I don’t fully agree with Aumann’s Agreement Theorem. I’ll leave it to you to decide whether that means I’m not a “genuine” Bayesian. I wouldn’t have a problem with being unable to fully adopt a single method of thinking about the universe.
In practice, I’m not sure any adult human hasn’t been exposed to E1 already, and I’m doubtful that most children are SREoEs.
Is it fair to say that most current SREoEs became that way during a sort of rationalist awakening? (I know it’s not as simple as being an SREoE or not, and so this process actually takes years, but let’s pretend for a moment.) Imagine a child who grows up being fed very high priors about G1. This child (not an SREoE) is exposed to E1 and has high confidence in G1. When he (or she) grows up and eventually becomes an SREoE, he first consciously throws out all his priors (rebellion against parents), then re-evaluates E1 (re-exposure?) and decides that in fact it entails ~G1.
Whether or not this describes you, does it make sense?
I’m saying that people who assign high probability to G1 after exposure to E1 either (a) had very different priors about G1 than I did before exposure to E1, or (b) are not SREoEs. Alternatively, I either (a) am not an SREoE, or (b) have not been exposed to the evidence we have referred to as E1.
How about this: since both of you have been exposed to the same evidence and don’t agree, either (a) you had very different priors (which is likely), or (b) you evaluate evidence differently. I’m going to avoid saying either of you is “better” or “more rational” at evaluating evidence.
Perhaps, then, I don’t fully agree with Aumann’s Agreement Theorem.
Whoa there. Aumann’s agreement theorem is a theorem. It is true, full stop. Whatever that term “SREoE” means (I keep going up and keep not seeing an explanation), either it doesn’t map onto the hypotheses of Aumann’s agreement theorem or you are attempting to disagree with a mathematical fact.
I believe it was “Sufficiently reasonable evaluator of evidence”—which I was using roughly equivalently to Bayesian empiricist. I’m beginning to doubt that is what ibidem means by it.
TheOtherDave defined it way back in the thread to try to taboo “rationalist,” since that word has such a multitude of denotations and connotations (including the LW intended meanings). Edit: terminology mostly defined here and here.
Sufficiently reliable, but otherwise yes. That said, we’ve since established that ibidem and I don’t have a shared understanding of “reliable” or “evidence,” either, so I’d have to call it a failed/incomplete attempt at tabooing.
For it to be a mathematical fact, it needs a mathematical proof. Go ahead...!
Like it or not, rationality is not mathematics: it is full of estimations, assumptions, subjective decisions, and wishful thinking. Thus, a “theorem” in evidence evaluation is not a mathematical theorem, obtained using unambiguous formal logic.
If what you mean to say is that Aumann’s Agreement “Theorem” is a fundamental building block of your particular flavor of rational thinking, then what this means is simply that I don’t fully subscribe to your particular flavor of rational thinking. Nothing (mathematics nearly excepted) is “true, full stop.” Remember? 1 is not a probability. That one’s even more “true, full stop” than Aumann’s ideas about rational disagreement.
When did I claim that rationality was mathematics?
Right here:
you are attempting to disagree with a mathematical fact.
it needs a mathematical proof.
Here you go.
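A sketch, for anyone who doesn’t want to click through (this is the standard argument, from memory): let $E$ be the common-knowledge event that agent 1’s posterior for $A$ is $q_1$ and agent 2’s is $q_2$. $E$ is a union of cells of agent 1’s information partition, and on each of those cells the conditional probability of $A$ is $q_1$, so $P(A \mid E) = q_1$. Running the same argument over agent 2’s partition gives $P(A \mid E) = q_2$, hence $q_1 = q_2$.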
Maybe not “rationality” exactly but Aumann’s work, whatever it is you call what we’re doing here. Rational decision-making.
So yes, Aumann’s theorem can be proven using a certain system of formalization, taking a certain set of definitions and assumptions. What I’m saying is not that I disagree with the derivation I gave, but that I don’t fully agree with its premises.
If what you mean to say is that Aumann’s Agreement “Theorem” is a fundamental building block of your particular flavor of rational thinking
When did I say this?
You didn’t yet, I didn’t say you did. I’m guessing that that’s what you actually mean though, because very, very few things if any are “true, full stop.” Something like this theorem can be fully true according to Bayesian statistics or some other system of thought, full stop. If this is the case, then it means I don’t fully accept that system of thought. Is disagreement not allowed?
Maybe not “rationality” exactly but Aumann’s work, whatever it is you call what we’re doing here. Rational decision-making.
How does what I said there mean “rationality is mathematics”? All I’m saying is that Aumann’s agreement theorem is mathematics, and if you’re attempting to disagree with it, then you’re attempting to disagree with mathematics.
What I’m saying is not that I disagree with the derivation I gave, but that I don’t fully agree with its premises.
I agree that this is what you should’ve said, but that isn’t what you said. Disagreeing with an implication “if P, then Q” doesn’t mean disagreeing with P.
I’m guessing that that’s what you actually mean though
No, it’s not. I just mean that mathematical facts are mathematical facts and questioning their relevance to real life is not the same as questioning their truth.
Now this just depends on what we mean by “disagree.” Of course I can’t dispute a formal logical derivation. The math, of course, is sound.
Disagreeing with an implication “if P, then Q” doesn’t mean disagreeing with P.
All I disagree with is X, which means either that I don’t agree that Q implies X, or that I don’t accept P.
I’m not questioning mathematical truth. All I’m questioning is what TimS said.
But if we agree it was just a misunderstanding, can we move on? Or not. This also doesn’t seem to be going anywhere, especially if we’ve decided we fundamentally disagree. (Which in and of itself is not grounds for a downvote, may I remind you all.)
I didn’t downvote you because we disagree, I downvoted you because you conflated disagreeing with the applicability of a mathematical fact to a situation with disagreeing with a mathematical fact. Previously I downvoted you because you tried to argue against two positions I never claimed to hold.
Glad we’ve got that cleared up, then. I wasn’t only talking to you; there are a few people who have taken it upon themselves to make my views feel unwelcome here. Sorry if we’ve had some misunderstandings.
Imagine a child who grows up being fed very high priors about G1. This child (not an SREoE) is exposed to E1 and has high confidence in G1. When he (or she) grows up and eventually becomes an SREoE, he first consciously throws out all his priors (rebellion against parents), then re-evaluates E1 (re-exposure?) and decides that in fact it entails ~G1.
This was not my experience. I was raised in a practicing religious family, and the existence of the holy texts, the well-being of the members of the religious community, and the existence of the religious community were all strong evidence for G1.
I reduced the probability I assigned to G1 because I realized I was underweighing other evidence. Things I would expect to be true if G1 were true turned out to be false. I think I knew those facts were false, but did not consider the implications, and so didn’t adjust my belief in G1.
Once I considered the implications, it became clear to me that E1 was outweighed by the falsification of other implications of G1. Given that balance, I assign G1 very very low probability of being accurate. But I still don’t deny that E1 is evidence of G1. If I didn’t know E1, learning it would adjust upward my belief in G1.
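To put toy numbers on that balance (every figure below is invented; only the shape of the calculation matters), a minimal sketch of odds-form Bayesian updating in Python:

    # A minimal sketch of odds-form Bayesian updating. All numbers are
    # invented for illustration; nothing about the real E1 or G1 is implied.
    def posterior_odds(prior_odds, likelihood_ratios):
        # Each likelihood ratio is P(observation | G1) / P(observation | ~G1).
        # Ratios above 1 favor G1; ratios below 1 favor ~G1.
        odds = prior_odds
        for lr in likelihood_ratios:
            odds *= lr
        return odds

    prior_odds = 1.0              # start indifferent: P(G1) = 0.5
    e1 = 20.0                     # E1 alone: strong evidence for G1
    failed = [0.1, 0.1, 0.1]      # three implications of G1 observed to be false

    odds = posterior_odds(prior_odds, [e1] + failed)
    print(odds / (1 + odds))      # ~0.02: E1 raised the odds, yet was outweighed

One strong piece of evidence multiplies the odds up; several falsified implications multiply them back down past where they started.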
In practice, what people seem to mean is best described technically as changing what sorts of things count as evidence. I changed my beliefs about G1 because I started taking the state of the world and the prevalence of human suffering as evidence about G1.
Also, if we are going to talk coherently about priors, we can’t really describe anything humans do as “throwing out their priors.” If we really assign probability zero to any proposition, we have no way of changing our minds again. And if we assign some other probability, justifying that is weird.
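The “no way of changing our minds” point is just Bayes’ rule, assuming we update by conditionalization:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = 0 \quad \text{whenever } P(H) = 0,$$

no matter what evidence $E$ arrives. A probability of exactly zero (or one) can never be moved by updating.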
Certainly you can’t simply will your aliefs to change, but it does seem to be a conscious and deliberate effort around here. The belief in G1 usually happens without any knowledge about Bayesian statistics, technical rationality, or priors, so this “awakening” may be the first time a person ever thought of E1 as “evidence” in this technical sense.
the prevalence of human suffering
By the way, I think the best response to this argument is that yes, there is evil, but God allows it because it is better for us in the long run; in other words, if there is an afterlife which is partly defined by our existence here, then our temporary comfort isn’t the only thing to consider. If we all lived in the Garden of Eden, we would never learn or progress. But I don’t want a whole new argument on my hands.