I’m really confused about why you’re not understanding this. Authorities are reliable to different degrees about different things. If I tell you I’m wearing an orange shirt, that is clearly evidence that I am wearing an orange shirt. If a physicist tells you nothing can accelerate past the speed of light, that is evidence that nothing can accelerate past the speed of light. Now, because people can be untrustworthy, there are many circumstances in which witness testimony is less reliable than personal observation. But it would be rather bothersome to upload a picture of me in my shirt to you. It can also be difficult to explain special relativity and the evidence for it in a short time span. In cases like these we must settle for the testimony of authorities. This does not make appeals to authority invalid.
Now of course you might have some evidence that suggests I am not a reliable reporter of the color of my shirt. Perhaps I have lied to you many times before or I have incentives to be dishonest. In these cases it is appropriate to discount my testimony to the degree that I am unreliable. But this is not a special problem with appeals to authority. If you have reason to think you are hallucinating, perhaps because of the LSD you took an hour ago, you should appropriately discount your eyes telling you that the trees are waving at you.
Now since appeals to authority, like other kinds of sources of information, are not 100% reliable, it makes sense to discuss the judgment of authorities in detail. Even if Eliezer is a reliable authority on lots of things, it is a good idea to examine his reasoning. In this regard you are correct to demand arguments beyond “Eliezer says so”. But it is nonetheless not the case that “appeals to authority are always fallacious”. On the contrary, modern science would be impossible without them, since no one can possibly make all the observations necessary to support the reliability of a modern scientific theory.
You are confused because you do not understand, not because I do not understand.
Simply put, no. No it is not. Not unless the physicist can provide a reason to believe he is correct. Now, in common practice we assume that he can—but only because it is normal for an expert in a given field to actually be able to do this.
Here’s where your understanding, by the way, is breaking down: the difference between practical behavior and valid behavior. Bayesian rationality in particular is highly susceptible to this problem, and it’s one of my main objections to the system in principle: that it fails to parse the process of forming beliefs from the process of confirming truth.
No. This is a deeply wrong view of how science is conducted. When a researcher invokes a previous publication, what they are appealing to is not an authority but rather the body of evidence as provided. No researcher could ever get away with saying “Dr. Knowsitall states that X is true”—not without providing a citation of a paper where Dr. Knowsitall demonstrated that the belief was valid. Authorities often possess such bodies of evidence and can readily convey that information, so it’s easy to understand how this is so confusing for you folks: it’s a fine nuance that inverts your normal perspectives on how beliefs are to be formed, and more importantly it demonstrates an instance where the manner in which one forms beliefs is separate from valid claims of truth.
I’ll say it one last time: trusting that someone has valid evidence is NOT the same thing as an appeal to authority, though it is a form of failure in efforts to determine truth.
Appeals to authority are always fallacious.
Here’s what may be tripping you up. Very often it doesn’t even make sense for humans to pay attention to small bits of evidence, because we can’t really process them very effectively. So for most tiny bits of evidence (such as most very weak appeals to authority), the correct action, given our limited processing capability, is often to simply ignore them. But this doesn’t make them not evidence.
Say, for example, I have a conjecture about all positive integers n, and I check it for every n up to 10^6. If I verify it for 10^6+1, working out how much more certain I should be is really tough, so I only treat the evidence in aggregate. Similarly, if I have a hypothesis that all ravens are black, and I’ve checked it for a thousand ravens, I should be more confident when I check the 1001st raven and find that it is black. But actually doing that update is difficult.
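A minimal sketch of what that incremental raven update could look like under one toy model. The prior and the rate q are made-up placeholders, and the alternative hypothesis (each observed raven is independently black with probability q if not all ravens are black) is an assumption for illustration, not something from the discussion:

```python
# Toy sketch of the incremental raven update; all numbers are illustrative placeholders.

def posterior_all_black(prior, q, k):
    """P(all ravens are black) after observing k ravens, all of them black.

    Model assumption: if not all ravens are black, each observed raven is
    independently black with probability q < 1.
    """
    likelihood_h = 1.0          # under H, every observed raven is black
    likelihood_not_h = q ** k   # under not-H, k black ravens in a row
    return prior * likelihood_h / (prior * likelihood_h + (1 - prior) * likelihood_not_h)

prior, q = 0.5, 0.99
print(posterior_all_black(prior, q, 1000))  # after a thousand black ravens
print(posterior_all_black(prior, q, 1001))  # the 1001st raven adds only a tiny bit more confidence
```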
The question then becomes, for any given appeal to authority, how reliable is that appeal?
Note that in almost any field there’s going to be some degree of relying on such appeals. In math, for example, there’s a very deep result called the classification of finite simple groups. The proof of the full result is thousands of pages spread across literally hundreds of separate papers. It is possible that at some point some people have really looked at almost the whole thing, but the vast majority of people who use the classification certainly have not. Relying on the classification is essentially an appeal to authority.
That’s a given, and I have said exactly as much repeatedly. Reiterating it as though it were introducing a new concept to me or one that I was having trouble with isn’t going to be an effective tool for the conversation at hand.
So, I’m confused about how you can believe that humans have limited enough processing capacity for this to be an issue, when in an earlier thread you thought that humans were close enough to being good Bayesians that Aumann’s agreement theorem should apply. In general, the computations involved in an Aumann update from sharing posteriors will be much more involved than the computations in updating on a single tiny piece of evidence.
Justify this claim, and then we can begin.
The computation for a normal update is two multiplications and a division. Now look at the computations involved in an explicit proof of Aumann. The standard proof is constructive. You can see that while it largely just involves a lot of addition, multiplication and division, you need to do it carefully for every single hypothesis. See this paper by Scott Aaronson, www.scottaaronson.com/papers/agree-econ.ps, which gives a more detailed analysis. That paper shows that the standard construction for Aumann is not as efficient as you’d want for some purposes. However, the primary upshot for our purposes is that the time required is roughly exponential in 1/eps, where eps is how close you want the two computationally limited Bayesian agents to agree, while the equivalent issue for a standard multiplication and division is slightly worse than linear in 1/eps.
An important caveat is that I don’t know what happens if you try to do this in a Blum-Shub-Smale model, and in that context, it may be that the difference goes away. BSS machines are confusing and I don’t understand them well enough to figure out how much the protocol gets improved if one has two units with BSS capabilities able to send real numbers. Since humans can’t send exact real numbers or do computations with them in the general case, this is a mathematically important issue but is not terribly important for our purposes.
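For a rough sense of the single-update cost being claimed above (the Aumann protocol itself is not sketched here), this is the whole computation for updating one hypothesis on one piece of evidence; the input probabilities are arbitrary placeholders:

```python
# Sketch of a single Bayesian update on one piece of evidence: a couple of
# multiplications, an addition, and a division. Inputs are placeholders.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) given P(H), P(E | H) and P(E | not-H)."""
    numerator = prior * p_e_given_h                           # multiplication
    denominator = numerator + (1 - prior) * p_e_given_not_h   # multiplication plus addition
    return numerator / denominator                            # division

print(bayes_update(0.5, 0.9, 0.2))  # constant work per observation, per hypothesis
```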
Alright, this is along the lines of what I thought you might say. A couple of points to consider.
1. In general discourse, humans use fuzzy approximations rather than precise statements. These have the effect of simplifying such ‘calculations’.
2. Backwards propagation of any given point of fact within that ‘fuzziness’ prevents the need for recalculation if the specific item is sufficiently trivial—as you noted.
3. None of this directly correlates to the topic of whether appeals to authority are ever legitimate in supporting truth-claims. Please note that I have differentiated between “I believe X” and “I hold X is true.”
I understand that this conflicts with the ‘baseline’ epistemology of Bayesian rationality as used on LessWrong… my assertion as this conversation has progressed has been that this represents a strong failure of Bayesian rationality: the inability to establish claims about the territory, as opposed to simply refining the maps.
Your own language continues to perpetuate this problem: “The computation for a normal update”.
I’m not sure I understand your response. Point 3 is essentially unrelated to my question concerning small updates vs. Aumann updates, which makes me confused as to what you are trying to say with the three points and whether they are a disagreement with my claim, an argument that my claim is irrelevant, or something else.
Regarding 1, yes, that’s true, and 2 is sort of true, but that’s the same thing that doing things to within an order of epsilon does. Our epsilon keeps track of how fuzzy things should be. The point is that for the same degree of fuzziness it is much easier to do a single update on evidence than it is to do an Aumann-type update based on common knowledge of current estimates.
I don’t understand what you mean here. Updates are equivalent to claims about how the territory is. Refining the map, whether one is or is not a Bayesian, means one is making a more narrow claim about what the territory looks like.
Clarification: by “narrow claim” you mean “allowing for a smaller range of possibilities”.
That is not equivalent to making a specific claim about a manifested instantiation of the territory. It is stating that you believe the map more closely resembles the territory; it is not saying the territory is a certain specific way.
If a Bayesian estimates with high probability (say >.99) that the Earth is around eight light-minutes from the sun, how is that not saying that the territory is a certain specific way? Do you just mean that a Bayesian can’t be absolutely certain that the territory matches their most likely hypothesis? Most Bayesians, and for that matter most traditional rationalists, would probably consider that a feature rather than a bug of an epistemological system. If you don’t mean this, what do you mean?
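As an aside, the eight-light-minute figure in that example is easy to check from the standard values for the astronomical unit and the speed of light:

```python
# Back-of-the-envelope check of the "eight light-minutes" figure.
AU_METERS = 1.495978707e11          # mean Earth-Sun distance (one astronomical unit)
C_METERS_PER_SECOND = 2.99792458e8  # speed of light

minutes = AU_METERS / C_METERS_PER_SECOND / 60
print(round(minutes, 1))  # about 8.3 minutes
```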
I was afraid of this. This is an epistemological barrier: if you express a notion in probabilistic form then you are not holding a specific claim.
I mean that Bayesian nomenclature does not permit certainty. However, I also assert that (naive) certainty exists.
Well, Bayesian nomenclature does permit certainty: just set P=1 or P=0. It isn’t that the nomenclature doesn’t allow it; it is that good Bayesians don’t ever say that. To use a possibly silly analogy, the language of Catholic theology allows one to talk about Jesus being fully man and not fully divine, but a good Catholic will never assert that Jesus was not fully divine.
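One standard way to see why good Bayesians avoid P=1, a point the thread leaves implicit: once a probability is set to exactly 1 or 0, no further evidence can move it. A minimal sketch with arbitrary likelihood values:

```python
# With a prior of exactly 1, Bayes' rule returns 1 no matter how strongly the
# evidence favours the alternative; a prior just short of 1 can still be revised.
# The likelihood values are arbitrary illustrative numbers.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

print(bayes_update(1.0, 0.01, 0.99))   # 1.0: certainty is unrevisable
print(bayes_update(0.99, 0.01, 0.99))  # 0.5: near-certainty still responds to evidence
```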
In what sense does it exist? In the sense that human brains function with a naive certainty element? I agree that humans aren’t processing marginally probable descriptors. Are you claiming that a philosophical system that works should allow for naive certainty as something it can talk about and think exists?
Granted. That was very poorly (as in, erroneously) worded on my part. I should have said “Bayesian practices”.
I mean naive realism, of course.
Essentially, yes. Although I should emphasize that I mean that in the most marginally sufficient context.
Hmm, but the Bayesians agree with some form of naive realism in general. What they disagree with you on is whether they have any access to that universe aside from in a probabilistic fashion. Alternatively, a Bayesian could even deny naive realism and do almost everything correctly. Naive realism is a stance about ontology. Bayesianism is primarily a stance about epistemology. They do sometimes overlap and can inform each other, but they aren’t the same thing.
Indeed.
Ok. Now we’re getting somewhere. So why do you think the Bayesians are wrong that this should be probabilistic in nature?
Universally. Universally probabilistic in nature.
I assert that there are knowable truth claims about the universe—the “territory”—which are absolute in nature. I related one as an example: my knowledge of the nature of my ongoing observation of how many fingers there are on my right hand. (This should not be conflated with knowledge of the number of fingers on my right hand. That is a different question.) I also operate with an epistemological framework which allows knowledge statements to be made and discussed in the naive-realistic sense (in which case the statement “I see five fingers on my hand” is a true statement of how many fingers there really are—a statement not of the nature of my map but of the territory itself).
Thus I reserve assertions of truth for claims that are definite and discrete, and differentiate between the model of material reality and the substance of reality itself when making claims. Probabilistic statements are always models, by category; no matter how closely they resemble the territory, claims about the model are never specific claims about the territory.
Ah, so to phrase this as a Bayesian might, you assert that your awareness of your own cognition is perfectly accurate?
… For any given specific event of which I am cognizant and aware, yes. With the important caveat that this perfection is not a heritable property beyond that specific cognizance event.
Ok. I’m curious if you have ever been on any form of mind-altering substance. Here’s a related anecdote that may help:
When I was a teenager I had to get my wisdom teeth taken out. For a painkiller I was put on Percocet or some variant thereof. Since I couldn’t speak, I had to communicate with people by writing things down. After I had recovered somewhat, I looked at what I had been writing down to my family. A large part apparently focused on the philosophical ramifications of being aware that my mind was behaving in a highly fractured fashion, and on incoherent ramblings about what this apparently did to my sense of self. I seemed particularly concerned with the fact that I could tell that my memories were behaving in a fragmented fashion but that this had not altered my intellectual sense of self as a continuing entity. Other remarks I made were apparently substantially less coherent than even that, but it was clear from what I had written that I had considered them to be deep philosophical thoughts and descriptions of my mind state.
In this context, I was cognizant and aware of my mental state. To say that my perception was accurate is pretty far off.
To use a different example, do you ever have thoughts that don’t quite make sense when you are falling asleep or waking up? Does your cognition seem perfectly accurate then? Or, when you make an arithmetic mistake, are you aware that you’ve misadded before you find the mistake?
Hence: “With the important caveat that this perfection is not a heritable property beyond that specific cognizance event.”
Was it to you or to someone else that I stressed that recollections are not a part of my claim?
“For any given specific event of which I am cognizant and aware, yes.” -- also, again note the caveat of non-heritability. Whether that cognizant-awareness is of inscrutability is irrelevant to the knowledge of that specifically ongoing instance of a cognizant-awareness event.
I am going to ask you to acknowledge how irrelevant this ‘example’ is to my claim, as a means of gauging where you are on understanding it.
Right. For those specific cognizance events I was pretty damn sure that I knew how my mind was functioning. And the reactions I got to Percocet are on the milder end of what mind-altering substances can do.
This makes me wonder if your claim is intended to be non-falsifiable. Is there anything that would convince you that you aren’t as aware as you think you are?
Potentially quite far since this seems to be in the same category if I’m understanding you correctly. The cognitive awareness that I’ve added correctly is pretty basic, but one can screw up pretty easily and still feel like one is completely correct.
Whether you were right or wrong about how your mind was functioning is irrelevant to the fact that you were aware of what it was that you were aware of. How accurate your beliefs were about your internal functionings is irrelevant to how accurate your beliefs were about what it was you were, at that instant, currently believing. These are fundamentally separate categories.
This is why I used an external example rather than internal, initially, by the way: the deeply recursive nature of where this dialogue is going only serves as a distraction from what I am trying to assert.
I haven’t made any assertions about how aware I believe I or anyone else is. I have made an assertion about how valid any given belief is regarding the specific, individual, ongoing cognition of the self-same specific, ongoing, individual cognition-event. This is why I have stressed the non-heritability.
The claim, in this case, is not so much non-falsifiable as it is tautological.
That it is basic does not mean that it is of the same category. Awareness of past mental states is not equivalent to awareness of ongoing mental states. This is why I specifically restricted the statement to ongoing events. I even previously stressed that recollections have nothing to do with my claim.
I’ll state this with the necessary recursion to demonstrate further why I would prefer we not continue using any cognition event not deriving from an external source: one would be correct to feel correct about feeling correct; but that does not mean that one would be correct about the feeling-correct that he is correct to feel correct about feeling-correct.
Now I get it!
I think I get it but I’m not sure. Can you translate it for the rest of us? Or is this sarcasm?
See here. And I may be honestly mistaken, but I’m not kidding.
It certainly could easily go either way.
Again, can you explain more clearly what you mean by this?
… that is the simple/clear explanation.
A more technical explanation would be “the ongoing cognizance of a given specifically instantiated qualia”.
That’s what makes the physicist an authority. If something is a reliable source of information “in practice” then it is a reliable source of information. Obviously if the physicist turns out not to know what she is talking about then beliefs based on that authority’s testimony turn out to be wrong.
The validity of a method is its reliability.
The paper where Dr. Knowsitall demonstrated that belief is simply his testimony regarding what happened in a particular experiment. It is routine for that researcher not to have personally duplicated prior experiments before building on them. The publication of experimental procedures is of course crucial for maintaining high standards of reliability and trustworthiness in the sciences. But ultimately no one can check the work of all scientists, and therefore trust is necessary.
Here is an argument from authority for you: This idea of appeals to authority being legitimate isn’t some weird Less Wrong, Bayesian idea. It is standard, rudimentary logic. You don’t know what you’re talking about.