Here’s where your understanding breaks down, by the way: the difference between practical behavior and valid behavior. Bayesian rationality in particular is highly susceptible to this problem, and it’s one of my main objections to the system in principle: it fails to separate the process of forming beliefs from the process of confirming truth.
Here’s what may be tripping you up. Very often it doesn’t even make sense for humans to pay attention to small bits of evidence, because we can’t process them very effectively. For most tiny bits of evidence (such as most very weak appeals to authority), the correct action given our limited processing capability is often simply to ignore them. But that doesn’t make them not evidence.
Say, for example, I have a conjecture about all positive integers n, and I check it for every n up to 10^6. If I then verify it for 10^6 + 1, working out how much more certain I should be is really tough, so I only treat the evidence in aggregate. Similarly, if I have a hypothesis that all ravens are black and I’ve checked it for a thousand ravens, I should be slightly more confident when I check the 1001st raven and find that it is black. But actually doing that update is difficult.
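To make the raven case concrete, here is a minimal numerical sketch of that single-observation update. The alternative hypothesis (“one percent of ravens are non-black”) and the 0.99 prior are my own illustrative assumptions, not anything established above:

    # Toy model, purely illustrative: H = "all ravens are black",
    # alternative = "1% of ravens are non-black" (an assumption for this sketch).
    def update(prior, p_e_given_h, p_e_given_alt):
        """One Bayesian update of P(H) on a single observation E."""
        numerator = p_e_given_h * prior
        return numerator / (numerator + p_e_given_alt * (1.0 - prior))

    prior_after_1000_ravens = 0.99   # assumed starting confidence in H
    p_black_given_h = 1.0            # a black raven is guaranteed if H is true
    p_black_given_alt = 0.99         # still very likely under the alternative

    posterior = update(prior_after_1000_ravens, p_black_given_h, p_black_given_alt)
    print(posterior)                 # ~0.9901 -- the 1001st raven moves the estimate only slightly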
The question then becomes, for any given appeal to authority, how reliable is that appeal?
Note that in almost any field there’s going to be some degree of reliance on such appeals. In math, for example, there’s a very deep result called the classification of finite simple groups. The proof of the full result is thousands of pages spread across literally hundreds of separate papers. It is possible that at some point some people have really looked at almost the whole thing, but the vast majority of people who use the classification certainly have not. Relying on the classification is essentially an appeal to authority.
That’s a given, and I have said exactly as much repeatedly. Reiterating it as though it were a new concept to me, or one that I was having trouble with, isn’t going to be an effective tool for the conversation at hand.
So, I’m confused about how you can believe that humans have processing capacity limited enough for this to be an issue when, in an earlier thread, you thought that humans were close enough to being good Bayesians that Aumann’s agreement theorem should apply. In general, the computations involved in an Aumann update from sharing posteriors will be much more involved than those involved in updating on a single tiny piece of evidence.
Justify this claim, and then we can begin.
The computation for a normal update is two multiplications and a division. Now look at the computations involved in an explicit proof of Aumann’s theorem. The standard proof is constructive. You can see that while it largely just involves a lot of addition, multiplication, and division, you need to do it carefully for every single hypothesis. See this paper by Scott Aaronson, www.scottaaronson.com/papers/agree-econ.ps, which gives a more detailed analysis. That paper shows that the standard construction for Aumann agreement is not as efficient as you’d want for some purposes. The primary upshot for our purposes, however, is that the time required is roughly exponential in 1/eps, where eps is how closely you want the two computationally limited Bayesian agents to agree, whereas the equivalent cost for a standard multiplication and division is only slightly worse than linear in 1/eps.
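To give a feel for the asymmetry being claimed here, a rough sketch comparing the two growth rates; the constants and exact forms are placeholders of mine, not figures from Aaronson’s paper, and only the “slightly worse than linear” versus “exponential” shapes matter:

    import math

    # Rough cost models (placeholders): a single Bayesian update carried out
    # to precision eps, versus the standard Aumann agreement protocol
    # bringing two agents' estimates within eps of each other.
    def single_update_cost(eps):
        n = 1.0 / eps
        return n * math.log2(n)       # "slightly worse than linear in 1/eps"

    def aumann_agreement_cost(eps):
        return math.exp(1.0 / eps)    # "roughly exponential in 1/eps"

    for eps in (0.5, 0.2, 0.1, 0.05):
        print(eps, round(single_update_cost(eps), 1), round(aumann_agreement_cost(eps), 1))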
An important caveat is that I don’t know what happens if you try to do this in a Blum-Shub-Smale model, and in that context, it may be that the difference goes away. BSS machines are confusing and I don’t understand them well enough to figure out how much the protocol gets improved if one has two units with BSS capabilities able to send real numbers. Since humans can’t send exact real numbers or do computations with them in the general case, this is a mathematically important issue but is not terribly important for our purposes.
Alright, this is along the lines of what I thought you might say. A few points to consider:
1. In general discourse, humans use fuzzy approximations rather than precise statements. These have the effect of simplifying such ‘calculations’.
2. Backward propagation of any given point of fact within that ‘fuzziness’ removes the need for recalculation when the specific item is sufficiently trivial, as you noted.
3. None of this bears directly on the question of whether appeals to authority are ever legitimate in supporting truth-claims. Please note that I have differentiated between “I believe X” and “I hold that X is true.”
I understand that this conflicts with the ‘baseline’ epistemology of Bayesian rationality as used on LessWrong. My assertion, as this conversation has progressed, has been that this represents a strong failure of Bayesian rationality: the inability to establish claims about the territory as opposed to simply refining the map.
Your own language continues to perpetuate this problem: “The computation for a normal update”.
I’m not sure I understand your response. Point 3 is essentially unrelated to my question concerning small updates vs. Aumann updates, which leaves me confused as to what you are trying to say with the three points: are they a disagreement with my claim, an argument that my claim is irrelevant, or something else?
Regarding point 1, yes, that’s true, and point 2 is sort of true, but that’s the same thing that working to within an order of epsilon does: our epsilon keeps track of how fuzzy things should be. The point is that for the same degree of fuzziness it is much easier to do a single update on evidence than to do an Aumann-type update based on common knowledge of current estimates.
I don’t understand what you mean here. Updates are equivalent to claims about how the territory is. Refining the map, whether or not one is a Bayesian, means one is making a more narrow claim about what the territory looks like.
Clarification: by “narrow claim” you mean “allowing for a smaller range of possibilities”.
That is not equivalent to making a specific claim about a manifested instantiation of the territory. It is stating that you believe the map more closely resembles the territory; it is not saying the territory is a certain specific way.
If a Bayesian estimates with high probability (say > .99) that the Earth is around eight light-minutes from the sun, then how is that not saying that the territory is a certain specific way? Do you just mean that a Bayesian can’t be absolutely certain that the territory matches their most likely hypothesis? Most Bayesians, and for that matter most traditional rationalists, would probably consider that a feature rather than a bug of an epistemological system. If you don’t mean this, what do you mean?
I was afraid of this. This is an epistemological barrier: if you express a notion in probabilistic form then you are not holding a specific claim.
I mean that Bayesian nomenclature does not permit certainty. However, I also assert that (naive) certainty exists.
Well, Bayesian nomenclature does permit certainty: just set P=1 or P=0. It isn’t that the nomenclature doesn’t allow it; it’s that good Bayesians don’t ever say it. To use a possibly silly analogy, the language of Catholic theology allows one to talk about Jesus being fully man and not fully divine, but a good Catholic will never assert that Jesus was not fully divine.
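As a numerical aside on why a good Bayesian avoids P=1 or P=0 even though the nomenclature allows it: under the ordinary update rule, a prior of exactly 1 or 0 can never be moved by any subsequent evidence. A minimal sketch, using the same toy update as before and purely for illustration:

    def update(prior, p_e_given_h, p_e_given_not_h):
        """One Bayesian update of P(H) on evidence E."""
        numerator = p_e_given_h * prior
        return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

    print(update(1.0, 0.001, 0.999))   # 1.0 -- even strong counter-evidence changes nothing
    print(update(0.0, 0.999, 0.001))   # 0.0 -- and nothing can raise a prior of exactly zero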
In what sense does it exist? In the sense that human brains function with a naive certainty element? I agree that humans aren’t processing marginally probable descriptors. Are you claiming that a philosophical system that works should allow for naive certainty as something it can talk about and think exists?
Granted. That was very poorly (as in, erroneously) worded on my part. I should have said “Bayesian practices”.
I mean naive realism, of course.
Essentially, yes. Although I should emphasize that I mean that in the most marginally sufficient context.
Hmm, but Bayesians agree with some form of naive realism in general. What they disagree with you on is whether they have any access to that universe other than in a probabilistic fashion. Alternatively, a Bayesian could even deny naive realism and still do almost everything correctly. Naive realism is a stance about ontology; Bayesianism is primarily a stance about epistemology. They sometimes overlap and can inform each other, but they aren’t the same thing.
Indeed.
Ok. Now we’re getting somewhere. So why do you think the Bayesians are wrong that this should be probabilistic in nature?
Universally. Universally probabilistic in nature.
I assert that there are knowable truth claims about the universe (the “territory”) which are absolute in nature. I related one as an example: my knowledge of the nature of my ongoing observation of how many fingers there are on my right hand. (This should not be conflated with knowledge of the number of fingers on my right hand; that is a different question.) I also operate within an epistemological framework which allows knowledge statements to be made and discussed in the naive-realistic sense, in which case the statement “I see five fingers on my hand” is a true statement of how many fingers there really are: a statement not about the nature of my map but about the territory itself.
Thus I reserve assertions of truth for claims that are definite and discrete, and I differentiate between the model of material reality and the substance of reality itself when making claims. Probabilistic statements are always models, by category; no matter how closely they resemble the territory, claims about the model are never specific claims about the territory.
Ah, so to phrase this as a Bayesian might, you assert that your awareness of your own cognition is perfectly accurate?
… For any given specific event of which I am cognizant and aware, yes. With the important caveat that this perfection is not a heritable property beyond that specific cognizance event.
Ok. I’m curious if you have ever been on any form of mind-altering substance. Here’s a related anecdote that may help:
When I was a teenager I had to get my wisdom teeth taken out. For a painkiller I was put on Percocet or some variant thereof. Since I couldn’t speak, I had to communicate with people by writing things down. After I had recovered somewhat, I looked at what I had been writing to my family. A large part apparently focused on the philosophical ramifications of being aware that my mind was behaving in a highly fractured fashion, along with incoherent ramblings about what this apparently did to my sense of self. I seemed particularly concerned with the fact that I could tell that my memories were behaving in a fragmented fashion but that this had not altered my intellectual sense of self as a continuing entity. Other remarks I made were apparently substantially less coherent than even that, but it was clear from what I had written that I had considered them to be deep philosophical thoughts and descriptions of my mind state.
In this context, I was cognizant and aware of my mental state. To say that my perception was accurate is pretty far off.
To use a different example, do you ever have thoughts that don’t quite make sense when you are falling asleep or waking up? Does your cognition seem perfectly accurate then? Or, when you make an arithmetic mistake, are you aware that you’ve misadded before you find the mistake?
Hence: “With the important caveat that this perfection is not a heritable property beyond that specific cognizance event.”
Was it to you or to someone else that I stressed that recollections are not a part of my claim?
“For any given specific event of which I am cognizant and aware, yes.” -- also, again note the caveat of non-heritability. Whether that cognizant-awareness is of something inscrutable is irrelevant to the knowledge of that specifically ongoing instance of a cognizant-awareness event.
I am going to ask you to acknowledge how irrelevant this ‘example’ is to my claim, as a means of gauging where you are on understanding it.
Right. For those specific cognizance events I was pretty damn sure that I knew how my mind was functioning. And the reactions I got to Percocet are on the milder end of what mind-altering substances can do.
This makes me wonder if your claim is intended to be non-falsifiable. Is there anything that would convince you that you aren’t as aware as you think you are?
Potentially quite far, since this seems to be in the same category, if I’m understanding you correctly. The cognitive awareness that I’ve added correctly is pretty basic, but one can screw up pretty easily and still feel that one is completely correct.
Whether you were right or wrong about how your mind was functioning is irrelevant to the fact that you were aware of what it was that you were aware of. How accurate your beliefs were about your internal functioning is irrelevant to how accurate your beliefs were about what it was you were, at that instant, currently believing. These are fundamentally separate categories.
This is why, by the way, I initially used an external example rather than an internal one: the deeply recursive nature of where this dialogue is going only serves as a distraction from what I am trying to assert.
I haven’t made any assertions about how aware I believe I or anyone else is. I have made an assertion about how valid any given belief is regarding the specific, individual, ongoing cognition of the self-same specific, ongoing, individual cognition-event. This is why I have stressed the non-heritability.
The claim, in this case, is not so much non-falsifiable as it is tautological.
That it is basic does not mean that it is of the same category. Awareness of past mental states is not equivalent to awareness of ongoing mental states. This is why I specifically restricted the statement to ongoing events. I even previously stressed that recollections have nothing to do with my claim.
I’ll state this with the necessary recursion to demonstrate further why I would prefer we not continue using any cognition event not deriving from an external source: one would be correct to feel correct about feeling correct; but that does not mean that one would be correct about the feeling-correct that he is correct to feel correct about feeling-correct.
Now I get it!
I think I get it but I’m not sure. Can you translate it for the rest of us? Or is this sarcasm?
See here. And I may be honestly mistaken, but I’m not kidding.
It certainly could easily go either way.
Again, can you explain more clearly what you mean by this?
… that is the simple/clear explanation.
A more technical explanation would be “the ongoing cognizance of a given specifically instantiated qualia”.