Acknowledging failure is in no wise congratulatory.
You received an answer, apparently not the one you were looking for.
Of course not. By asking the question I am implying that there is, in fact, a difference. This was in an effort to help you understand me. You in particular responded by answering as you would. Which does nothing to help you overcome what I described as the epistemological barrier of comprehension preventing you from understanding what I’m trying to relate.
To me, there is a very simple, very substantive, and very obvious difference between those two statements, and it isn’t wordiness. I invite you to try to discern what that difference is.
I am powerless over what you project into the emotional context of my statements, especially when I am directly noting a point of fact with no emotive terminology used whatsoever.
I am powerless over what you project into the emotional context of my statements, especially when I am directly noting a point of fact with no emotive terminology used whatsoever.
There was a temptation to just reply with “No. You are not.” But that’s probably less than helpful. Note that language is a subtle thing. Even when someone is making a purely factual claim, how they make it, especially when it involves assertions about other people, can easily carry negative emotional content. Moreover, careful, diplomatic phrasing can be used to minimize that problem.
That is not necessarily true. The simplest explanation is whichever explanation requires the least effort to comprehend. That, categorically, can take the form of a query / thought-exercise.
In this instance, I have already provided the simplest direct explanation I possibly can: probabilistic statements do not correlate to exact instances. I was asked to explain it even more simply yet. So I have to get a little non-linear.
So this confuses me even more. Given your qualification of what pedanterrific said, this seems off in your system. The objection you have to the Bayesians seems to concern purely the nature of your own map. The red-shirt being dead is not a statement in your own map.
If you had said no here I think I’d sort of see where you are going. Now I’m just confused.
So this confuses me even more. [...] The red-shirt being dead is not a statement in your own map.
I’ll elaborate.
Naively: yes.
Absolutely: no.
Recall the epistemology of naive realism:
Naïve realism, also known as direct realism or common sense realism, is a philosophy of mind rooted in a common sense theory of perception that claims that the senses provide us with direct awareness of the external world. In contrast, some forms of idealism assert that no world exists apart from mind-dependent ideas and some forms of skepticism say we cannot trust our senses.
Ok. Most of the time, when McCoy determines that someone is dead, he uses a tricorder. Do you still declare his death when you have an intermediary object other than your senses?
My form of naive realism is contingent, not absolutist. It is perfectly acceptable to find myself to be in error. But … there are certain forms of dead that are just plain dead. Such as the cat I just came home to discover was in rigor mortis an hour ago.
My condolences. Would you prefer if we use a different example? The upshot is that the Bayesian agrees with you that there is a reality out there. If you agree that one can be in error then you and the Bayesian aren’t disagreeing here.
The only thing we seem to disagree on is how to formulate statements of belief. As I wrote elsewhere:
What I am espousing is the use of contingent naive realism for handling assertions regarding “the territory” coincident with Bayesian reasoning regarding “the map”.
I strongly agree that Bayesian reasoning is a powerful tool for making predictive statements, but I still affirm that the adage that the number of times a well-balanced coin has come up heads has no bearing on whether it actually will come up heads on the next trial.
I strongly agree that Bayesian reasoning is a powerful tool for making predictive statements, but I still affirm that the adage that the number of times a well-balanced coin has come up heads has no bearing on whether it actually will come up heads on the next trial.
But even here the Bayesian agrees with you if the coin is well-balanced.
The only thing we seem to disagree on is how to formulate statements of belief.
If that is the case then the only disagreement you have is a matter of language not a matter of philosophy. This confuses me because the subthread about knowledge about one’s own internal cognitive processes seemed to assert a difference in actual philosophy.
But even here the Bayesian agrees with you if the coin is well-balanced.
I was making a trivial example. It gets more complicated when we start talking about the probabilities that a specific truth claim is valid.
This confuses me because the subthread about knowledge about one’s own internal cognitive processes seemed to assert a difference in actual philosophy.
I’m not exactly sure how to address this problem. I do know that it is more than just a question of language. The subthread you’re talking about was dealing with the existence of knowable truth, and I was using absolute truths for that purpose.
There are, necessarily, a very limited number of knowable absolute truths.
I do note that your wording on this instance, “knowledge about one’s own internal cognitive processes”, indicates that I may still be failing to achieve sufficient clarity in the message I was conveying in that thread. To reiterate: my claim was that it is absolutely true that when you are cognizant of a specific thought, you know that you are cognizant of that thought. In other words: you know absolutely that you are aware of your awareness. This is tautological to self-awareness.
If I say “he’s dead,” it means “I believe he is dead.” The information that I possess, the map, is all that I have access to. Even if my knowledge accurately reflects the territory, it cannot possibly be the territory. I see a rock, but my knowledge of the rock is not the rock, nor is my knowledge of my self my self.
The map is not the territory, but the map is the only one that you’ve got in your head. “It is raining” and “I do not believe it is raining” can be consistent with each other, but I cannot consistently assert “it is raining, but I do not believe that it is raining.”
You know, for a moment until I saw who it was who was replying, I had a brief flash of hope that this conversation had actually started to get somewhere.
This is starting to hurt my feelings, guys. This is my earnest attempt to restate this.
Put another way: the statement “the map is not the territory” does not have a confidence value of .99999whatever. It is a statement about the map, which is itself a territory, and is simply true.
(I may, of course, be mistaken about Logos’ epistemology, in which case I would appreciate clarification.)
Well, you can take “the map is not the territory” as a premise of a system of beliefs, and assign it a probability of 1 within that system, but you should not assign a probability of 1 to the system of beliefs being true.
Of course, there are some uncertainties that it makes essentially no sense to condition your behavior on. What if everything is wrong and nothing makes sense? Then kabumfuck nothing, you do the best with what you have.
First, I want to note the tone you used. I have not so disrespected you in this dialogue.
Second: “Put another way: the statement “the map is not the territory” does not have a confidence value of .99999whatever. It is a statement about the map, which is itself a territory, and is simply true.”
There is some truth to the claim that “the map is a territory”, but it’s not really very useful. Also, while it is demonstrable that the map is not the territory, it is not tautological and thus requires demonstration.
I am, however, comfortable stating that any specific individual “map” which has been so demonstrated to be a territory of its own, without being the territory it maps, is, in truth, a territory of its own (caveat: thar be recursion here!), and that this is expressible as a truth claim without the need for probability values.
First, I have no idea what you’re talking about. I intended no disrespect (in this comment). What about my comment indicated to you a negative tone? I am honestly surprised that you would interpret it this way, and wish to correct whatever flaw in my communicatory ability has caused this.
Second:
There is some truth to the claim that “the map is a territory”, but it’s not really very useful.
Okay, now I’m starting to get confused again. ‘Some’ truth? And what does ‘useful’ have to do with it?
Also, while it is demonstrable that the map is not the territory, it is not tautological and thus requires demonstration.
It… seems tautological to me...
I am, however, comfortable stating that any specific individual “map” which has been so demonstrated … is, in truth, a territory of its own …, and that this is expressible as a truth claim without the need for probability values.
What about my comment indicated to you a negative tone?
The way I read “This is starting to hurt my feelings, guys.” was that you were expressing it as a miming of myself, as opposed to being indicative of your own sentiments. If this was not the case, then I apologize for the projection.
Okay, now I’m starting to get confused again. ‘Some’ truth? And what does ‘useful’ have to do with it?
Whether or not a truth claim is valid is binary. How far a given valid claim extends, however, is quantitative rather than qualitative. My comment about usefulness has more to do with the utility of the notion (i.e., a true-and-interesting notion is more “useful” than a true-and-trivial notion).
It… seems tautological to me...
Tautological would be “The map is a map”. Statements that depend upon definitions for their truth value approach tautology but aren’t necessarily so. ¬A ≠ A is tautological, as is A = A. However, (B ⇔ A) → (¬A = ¬B) is definitionally true. So “the map is not the territory” is definitionally true (and by its definition it is demonstrated). However, it is possible, for some conceptual territory, that the territory is the map. (I am aware of no such instance, but it conceptually could occur.) This would, however, require a different operational definition of “map” from the context we currently use, so it would actually be a different statement.
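At the level of plain truth tables, the formulas mentioned above can be checked mechanically. A minimal sketch (reading the informal notation propositionally is my assumption; note that truth-functional validity is coarser than the tautological-versus-definitional distinction being drawn here, since a truth table validates all three alike):

```python
from itertools import product

def is_tautology(formula):
    """True iff the two-variable Boolean formula holds under every assignment."""
    return all(formula(a, b) for a, b in product([True, False], repeat=2))

# "A = A": identity holds under every valuation.
assert is_tautology(lambda a, b: a == a)

# "not-A is not A": a negation never equals its operand.
assert is_tautology(lambda a, b: (not a) != a)

# "(B <-> A) implies (not-A = not-B)": material implication, valid everywhere.
assert is_tautology(lambda a, b: (not (b == a)) or ((not a) == (not b)))

# A non-tautology for contrast: "A and B" fails under some assignments.
assert not is_tautology(lambda a, b: a and b)
```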
So, um, have I understood you or not?
I’m comfortable saying “close enough for government work”.
was a direct (and, in point of fact, honest) response to
You know, for a moment until I saw who it was who was replying, I had a brief flash of hope that this conversation had actually started to get somewhere.
Though, in retrospect, this may not mean what I took it to mean.
(i.e., a true-and-interesting notion is more “useful” than a true-and-trivial notion.)
Agreed.
So; “the map is not the territory” is definitionally true (and by its definition it is demonstrated). However, it is possible for a conceptual territory that the territory is the map. … This would, however, require a different operational definition of what a “map” is from the context we currently use, so it would actually be a different statement.
Ah, ok.
I’m comfortable saying “close enough for government work”.
The map is not the territory, but the map is the only one that you’ve got in your head.
This is going nowhere. I am intimately familiar with the epistemological framework you just related to me, and I am trying to convey one wholly unlike it to you. Exactly what good does it do this dialogue for you to continue reiterating points I have already discussed and explained to you ad nauseam?
I see a rock, but my knowledge of the rock is not the rock, nor is my knowledge of my self my self.
But you know that you ‘see’ a rock. And that IS territory, in the absolute sense.
If I say “he’s dead,” it means “I believe he is dead.”
Unless you are using a naive realist epistemological framework. In which case it means “he’s dead” (and can be either true or not true.)
But you know that you ‘see’ a rock. And that IS territory, in the absolute sense.
My understanding of what you are saying is that I should assign a probability of 1 to the proposition that I experience the quale of rock-seeing.
I do not accept that this is the case. There are alternatives, although they do not make much intuitive sense. It may seem to me that I cannot think I have an experience, and not have it, but maybe reality isn’t even that coherent. Maybe it doesn’t follow for reasons that are fundamentally impossible for me to make sense of. Also maybe I don’t exist.
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience, or that I don’t exist, but if I started getting evidence that reality is fundamentally incoherent, I’d definitely have to start judging it as more likely.
My understanding of what you are saying is that I should assign a probability of 1 to the proposition that I experience the quale of rock-seeing.
I am comfortable agreeing with this statement.
There are alternatives, although they do not make much intuitive sense.
I do not accept that this is the case. Postulating counterfactuals does not validate the possibility of those counterfactuals (this is just another variation of the Ontological Argument, and is as invalid as is the Ontological Argument itself, for very similar reasons.)
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience
This reveals why I have the problems I do with the epistemology embedded in Bayesian reasoning. It is very similar to the fact that I’m pretty sure you could agree with the notion that it is conceptually plausible that, if reality were ‘fundamentally incoherent’, the statement ¬A = A could be true.
The thing is, it isn’t possible for such incoherence to exist. They are manifestations of definitions; the act of being so defined causes them to be proscriptive restrictions on the behavior of all that which exists.
For what it’s worth, I also hold that the Logical Absolutes are in fact absolute, though I also assert that this is essentially meaningless.
The thing is, it isn’t possible for such incoherence to exist.
How do you know?
It seems pretty reasonable to me that A must always equal A, but if I suddenly found that the universe I was living in ran on dream logic, and 2+2 stopped equaling the same thing every time, I’d start viewing the notion with rather more skepticism.
Because they are functions of definition. Altering the definition invalidates the scenario.
and 2+2 stopped equaling the same thing every time
Eliezer used a cognitive trick in this case: he postulated a counterfactual and treated that postulation as sufficient justification to treat the counterfactual as a possibility. This is not justified.
Because they are functions of definition. Altering the definition invalidates the scenario.
If you state that A is A by definition, then fine, you’ve got a definition that A=A. But that doesn’t mean that the definition must be manifest. Things cannot be defined into reality.
Now, I believe that A=A is a real rule that reality follows, with very high confidence. I am even willing to accept it as a premise within my system of beliefs, and assign it a probability of 1 in the context of that system of beliefs, but I can’t assign a probability of 1 to that system of beliefs.
Eliezer used a cognitive trick in this case: he postulated a counterfactual and treated that postulation as sufficient justification to treat the counterfactual as a possibility. This is not justified.
It’s possible contingent on things that we have very strong reason to believe simply being wrong.
The postulation does not make it a possibility. It was a possibility without ever being postulated.
The postulation “Maybe time isn’t absolute, and there is no sense in which two events in different places can objectively be said to take place at the same time” would probably have been regarded by most, a thousand years ago, as impossible. People would respond that that simply isn’t what time is. But from what we can see today, it looks like reality really does work like that, and any definition of “time” which demands absolute simultaneity exist simply doesn’t correspond to our universe.
In any case, I don’t see how you get from the premise that you can have absolute certainty about your experience of qualia to the position that you can’t judge how likely it is that the Sportland Sports won the game last night based on Neighbor Bob’s say so.
Another epistemological breakdown may be about to occur here. I separate “real” from “existent”.
That which is real is any pattern that which exists is proscriptively constrained by.
That which exists is any phenomenon which directly interacts with other phenomena.
I agree with you that you cannot define things into existence. But you most assuredly can define things into reality. The logical absolutes are examples of this.
Now, I believe that A=A is a real rule that reality follows,
Yup. But that’s a manifestation of the definition, and nothing more. If A=A had never been defined, no one would bat an eye at ¬A = A. But that statement would bear no relation to the definitions supporting the assertion A=A.
It’s possible contingent on things that we have very strong reason to believe simply being wrong.
We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that they are not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
But from what we can see today, it looks like reality really does work like that, and any definition of “time” which demands absolute simultaneity exist simply doesn’t correspond to our universe.
Agreed. But time isn’t “definitional” in nature; that is, it is more than a mere product of definition. We define ‘time’ to be an existing phenomenon with specific characteristics—and thus the definition of how time behaves is constrained by the actual patterns that existing phenomenon adheres to.
The number 3 on the other hand is defined as itself, and there is no existing phenomenon to which it corresponds. We can find three instances of a given phenomenon, but that instantiated collection should not be conflated with an instantiation of the number three itself.
In any case, I don’t see how you get from the premise that you can have absolute certainty about your experience of qualia to the position that you can’t judge how likely it is that the Sportland Sports won the game last night based on Neighbor Bob’s say so.
You can’t. Conversations—and thus, topics—evolve over time, and this conversation has been going on for over 24 hours now.
We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that they are not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
Things that have been believed impossible have sometimes been found to be true. Just because you do not yet have any reason to suspect that something is true, does not mean that you can have assign zero probability to its truth. If you did assign a proposition zero probability, no amount of justification could ever budge you from zero probability; whatever evidence or arguments you received, it would always be infinitely more likely that they came about from some process unconnected to the truth of the proposition than that the proposition was true.
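Concretely, a probability of exactly zero is a fixed point of Bayes’ rule: whatever the evidence, the posterior stays at zero. A minimal sketch (the likelihood numbers are arbitrary illustrations, not anything claimed in the thread):

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H|E) via Bayes' rule for a binary hypothesis."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# A small but nonzero prior moves under repeated strong evidence...
p = 0.01
for _ in range(5):
    p = update(p, likelihood_if_true=0.9, likelihood_if_false=0.1)
assert p > 0.01

# ...but a prior of exactly zero never budges, no matter the evidence.
q = 0.0
for _ in range(5):
    q = update(q, likelihood_if_true=0.9, likelihood_if_false=0.1)
assert q == 0.0
```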
I’m getting rather tired of having to start this conversation over yet again. Do you understand my meaning when I say that until such time as a given counterfactual is provided with justification to accept its plausibility, the null hypothesis is that it is not?
I believe that I understand you, but I don’t think that is part of an epistemology that is as good at forming true beliefs and recognizing false ones.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
As a social prescription, you can call certain claims “justified knowledge,” based on how they are arrived at, and not call other claims about reality “justified knowledge” because the evidence on which they were arrived at does not meet the same criteria. And you can call it an “intellectual failing” for people not to acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong. If you had a computer that could process whatever information you put into it, tell you exactly how confident you ought to be about any given claim given that information, and have the claims be right exactly that often, then the category of “justified belief” would not correspond to any particular range of likelihood.
This sounds like a point where you would complain that I am confusing “aggregated average” with “manifested instantiation.” We can accept the premise that the territory is the territory, and a particular event either happened or it did not. So if we are talking about a particular event, should we be able to say that it actually definitely happened, because that’s how the territory works? Well, we can say that, but then we start observing that some of the statements we make about things that “definitely happened” turn out to be wrong. If we accept as a premise in our epistemologies that the territory is what it is, we can accept that the statements we make about the territory are either right, absolutely, or wrong, absolutely. But we don’t know which are which, and “justified knowledge” won’t correspond to any particular level of confidence.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong, and tells you that you can’t predict the likelihood of some things when you can actually make confidence predictions about them and be right as often as you say you should be, I think you’ll have a hard time selling that it’s a good epistemology.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong.
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
Being able to readily differentiate between these two categories is a vital element for ways of knowing.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong,
I don’t know, at this point, exactly where this particular thread is in the nested tree of various conversations I’m handling. But I can tell you that at no time have I espoused any form of absolutism that allows for non-absolute truths to be expressed as such.
What I have espoused, however, is the use of contingent naive realism coincident with Bayesian reasoning—while understanding the nature of what each “tool” addresses.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Rational-skepticism doesn’t necessarily operate experimentally and I’m sure you know this.
That being said; yes, yes I did. Thought-experiment, but experiment it is.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
Yes, I am aware of how scientists use null hypotheses. But the null hypothesis is only a construct which exists in the context of significance testing, and significance testing does not reliably get accurate results when you measure small effect sizes, and also gives you different probabilities based on the same data depending on how you categorize that data; Eliezer already gave an example of this in one of the pages you linked earlier. The whole reason that people on this site in general have enthusiasm for Bayesian statistics is that we believe they do better, as well as it is possible to do.
The concept of a “null hypothesis,” as it is used in frequentist statistics, doesn’t even make sense in the context of a system where, as you have claimed, we cannot know that it is more likely that a coin will come up heads on the basis of it having come up heads the last hundred times you’ve flipped it.
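For the coin, the Bayesian calculation alluded to above can be sketched with a conjugate Beta-Binomial update; the uniform Beta(1, 1) prior is my assumption for illustration:

```python
from fractions import Fraction

# Beta(1, 1) prior: a uniform distribution over the coin's heads-propensity.
alpha, beta = 1, 1

# Observe 100 heads in a row: each head increments alpha.
alpha += 100

# Posterior predictive probability that the NEXT flip is heads
# (Laplace's rule of succession): alpha / (alpha + beta).
p_next_heads = Fraction(alpha, alpha + beta)
assert p_next_heads == Fraction(101, 102)  # just over 0.99
```

Nothing here requires certainty: the posterior never reaches 1, it just reflects that 100 straight heads is strong evidence about the coin.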
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
If I say “it is raining out,” this is a claim about the territory, yes. If I say “I believe that it is raining out,” this is a claim about my map (and my map exists within, and is a part of, the territory, so it is also a claim “about the territory,” but not about the same part of the territory.)
But my claim about the territory is not certain to be correct. If I say “it is raining out, 99.99999% confidence,” which is the sort of thing I would mean when I say “it is raining out,” that means that I am saying that the territory probably agrees with this statement, but the information in my map does not allow me to have a higher confidence in the agreement than that.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
A null hypothesis is a particularly useful simplification, not strictly a necessary tool.
I am saying it is not strictly necessary to have a hypothesis called “null” but rather that such a thing is just extremely useful. I would definitely say the thing about defaults too.
I am saying it is not strictly necessary to have a hypothesis called “null”
That’s not what a null hypothesis is. A null hypothesis is a default state.
I would definitely say the thing about defaults too.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
If you tell me you’ve flipped a coin, I may not have any default position on whether it’s heads or tails. Similarly, I might have no prior belief, or a symmetric prior, on lots of questions that I haven’t thought much about or that don’t have any visible correlation to anything else I know about.
Sure. I didn’t say “I have no information related to the experiment”. What I’m saying is this: if I do an experiment to choose between K options, it might be that I don’t have any prior (or have a symmetric prior) about which of those K it will be. That’s what a null hypothesis in statistics is. When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Yes, any experiment also will give you information about whether the experiment’s assumptions were valid. But talking about null hypotheses or default hypotheses is in the context of a particular formal model, where we’re referring to something more specific.
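A “no effect” null of the kind described above is conventionally tested with an exact binomial significance test. A minimal sketch with hypothetical numbers (the trial sizes and counts are invented for illustration):

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical trial: under the null "no effect", each patient improves
# with probability 0.5. Seeing 16 of 20 improve would be surprising
# under that null (p is roughly 0.006, below the conventional 0.05).
p_value = binom_tail(20, 16)
assert p_value < 0.05
```

Note that rejecting this null says nothing about *which* alternative is right; it only quantifies how surprising the data would be if the drug did nothing.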
When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Correct. And that’s a situationally relevant phenomenon. One quite similar to how “the coin will be on one of its sides” is situationally relevant to the coin-toss. (As opposed to it being on edge, or the faces smoothed off.)
You had asked for an example of where an individual might have no prior on “on a given topic.” I gave one, for a narrow topic: “is the coin heads or tails?”. I didn’t say, and you didn’t ask, for a case where an individual has no prior on anything related to the topic.
But let me give one attempt at that stronger claim. You’ve never met my friend Sam. You’ve never heard of or thought of my friend Sam. Before this comment, you didn’t have a prior on his having a birthday; you didn’t have a prior on his existing; you never considered the possibility that lesswrong-commenter asr has a friend Sam. I would say you have no beliefs on the topic of Sam’s birthday.
It is my default position that they do not exist, even as absent constructs, unless otherwise noted (with the usual lowered threshold of standards of evidence for more evidently ordinary claims). That’s the whole point of default positions; they inform us how to react to new criteria or phenomena as they arise.
By bringing up the issue of non-topics you’re moving the goal-post. I asked you how it could be that a person could have no defaults on a given topic.
If I tell you that there is a flugmizzr, then you know that certain things are ‘true’—at least, you presume them to be until shown otherwise: one, that flugmizzrs are enumerable; two, that they are somehow enumerable by people; and three, that flugmizzrs are knowable, discrete phenomena.
Those are most assuredly amongst your defaults on the topic. They could easily each be wrong—there is no way to know—but if you trust my assertion of “a flugmizzr” then each of these become required defaults, useful for acquiring further information.
I asked you how it could be that a person could have no defaults on a given topic.
To clarify—my answer is that “there are topics I’ve never considered, and before consideration, I need not have a default belief.” For me at least, the level of consideration needed to actually form a belief is nontrivial. I am often in a state of uninterested impartiality. If you ask me whether you have a friend named Joe, I would not hazard a guess and would be about equally surprised by either answer.
To put it more abstractly: There’s no particular computational reason why the mind needs to have a default for every input. You suggest some topic X, and it’s quite possible for me to remain blank, even if I heard you and constructed some internal representation of X. That representation need not be tied to a belief or disbelief in any of the propositions about X under discussion.
Agreed with your main point; curious about a peripheral:
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience
Is this meaningfully different, in the context you’re operating in, from coming to believe that you probably didn’t experience what you believe you experienced?
Because I also have trouble imagining myself being skeptical about whether I’m actually experiencing what I currently, er, experience myself experiencing (though I can certainly imagine believing things about my current experience that just ain’t so, which is beside your point). But I have no difficulty imagining myself being skeptical about whether I actually experienced what I currently remember myself having experienced a moment ago; that happened a lot after my brain injury, for example.
Is this meaningfully different, in the context you’re operating in, from coming to believe that you probably didn’t experience what you believe you experienced?
Yes. I have plenty of evidence that people sometimes become convinced that they’ve had experiences that they haven’t had, but reality would have to work very differently than I think it does for people not to be having the quales they think they’re having.
(nods) That’s what I figured, but wanted to confirm.
Given that my “current” perception of the world is integrating a variety of different inputs that arrived at my senses at different times, I mostly suspect that my intuitive confidence that there’s a sharp qualitative difference between “what I am currently experiencing” and “what I remember having experienced” is (like many of my intuitive confidences) simply not reflective of what’s actually going on.
Brevity.
As I said: “epistemological barriers of comprehension.”
You were asked to explain a statement of yours as simply as possible.
You responded with a hypothetical question.
You received an answer, apparently not the one you were looking for.
You congratulated yourself on being unclear.
Acknowledging failure is in no wise congratulatory.
Of course not. By asking the question I am implying that there is, in fact, a difference. This was in an effort to help you understand me. You in particular responded by answering as you would. Which does nothing to help you overcome what I described as the epistemological barrier of comprehension preventing you from understanding what I’m trying to relate.
To me, there is a very simple, very substantive, and very obvious difference between those two statements, and it isn’t wordiness. I invite you to try to discern what that difference is.
You come across as rather condescending. Consider that this might not be the most effective way to get your point across.
I am powerless over what you project into the emotional context of my statements, especially when I am directly noting a point of fact with no emotive terminology used whatsoever.
There was a temptation to just reply with “No. You are not.” But that’s probably less than helpful. Note that language is a subtle thing. Even when someone is making a purely factual claim, how they make it, especially when it involves assertions about other people, can easily carry negative emotional content. Moreover, careful, diplomatic phrasing can be used to minimize that problem.
Certainly. But in this case my phrasing was such that it was devoid of any emotive content outside of what the reader projects.
An explanation which is as simple as possible != an exercise left to the reader.
That is not necessarily true. The simplest explanation is whichever explanation requires the least effort to comprehend. That, categorically, can take the form of a query / thought-exercise.
In this instance, I have already provided the simplest direct explanation I possibly can: probabilistic statements do not correlate to exact instances. I was asked to explain it even more simply yet. So I have to get a little non-linear.
So how would you answer what the difference is between the two statements?
The former is a statement of belief about what is. The latter is a statement of what actually is.
So if you were McCoy would you ever say “He’s dead, Jim”?
Naively, yes.
So this confuses me even more. Given your qualification of what pedanterrific said, this seems off in your system. The objection you have to the Bayesians seems to be purely in the issues about the nature of your own map. The red-shirt being dead is not a statement in your own map.
If you had said no here I think I’d sort of see where you are going. Now I’m just confused.
I’ll elaborate.
Naively: yes.
Absolutely: no.
Recall the epistemology of naive realism:
Ok. Most of the time McCoy determines that someone is dead he uses a tricorder. Do you declare his death when you have an intermediary object other than your senses?
Naive realism does not preclude instrumentation.
So do tricorders never break?
My form of naive realism is contingent, not absolutist. It is perfectly acceptable to find myself to be in error. But … there are certain forms of dead that are just plain dead. Such as the cat I just came home to discover was in rigor mortis an hour ago.
Can we change topics, please?
My condolences. Would you prefer if we use a different example? The upshot is that the Bayesian agrees with you that there is a reality out there. If you agree that one can be in error then you and the Bayesian aren’t disagreeing here.
The only thing we seem to disagree on is how to formulate statements of belief. As I wrote elsewhere:
What I am espousing is the use of contingent naive realism for handling assertions regarding “the territory” coincident with Bayesian reasoning regarding “the map”.
I strongly agree that Bayesian reasoning is a powerful tool for making predictive statements, but I still affirm the adage that the number of times a well-balanced coin has come up heads has no bearing on whether it actually will come up heads on the next trial.
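For what it’s worth, both halves of this dispute can be made precise in one small sketch: for a coin *known* to be fair, the flips are independent and history is irrelevant, exactly as the adage says; when the coin’s bias is itself uncertain, the same history legitimately moves the prediction. The hypotheses and priors below are illustrative numbers, not anything from the thread:

```python
from fractions import Fraction

# A coin *known* fair: flips are independent, so the chance of heads
# on the next flip is 1/2 regardless of history.
p_heads_known_fair = Fraction(1, 2)

# Bias uncertain: two illustrative hypotheses, fair (p = 1/2) or
# two-headed (p = 1), each with prior probability 1/2.
prior_fair, prior_biased = Fraction(1, 2), Fraction(1, 2)
heads_seen = 10

# Likelihood of observing 10 straight heads under each hypothesis.
like_fair = Fraction(1, 2) ** heads_seen   # 1/1024
like_biased = Fraction(1, 1)               # certain under "two-headed"

post_fair = (prior_fair * like_fair) / (prior_fair * like_fair
                                        + prior_biased * like_biased)
post_biased = 1 - post_fair

# Predictive probability of heads on the next flip.
p_next_heads = post_fair * Fraction(1, 2) + post_biased * 1

print(p_heads_known_fair)  # 1/2 -- history irrelevant for a known-fair coin
print(p_next_heads)        # 2049/2050 -- history matters when bias is uncertain
```

On this reading there is no conflict: the adage governs the known-fair case, and Bayesian updating governs the uncertain-bias case.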
But even here the Bayesian agrees with you if the coin is well-balanced.
If that is the case then the only disagreement you have is a matter of language not a matter of philosophy. This confuses me because the subthread about knowledge about one’s own internal cognitive processes seemed to assert a difference in actual philosophy.
I was making a trivial example. It gets more complicated when we start talking about the probabilities that a specific truth claim is valid.
I’m not exactly sure how to address this problem. I do know that it is more than just a question of language. The subthread you’re talking about was dealing with the existence of knowable truth, and I was using absolute truths for that purpose.
There are, necessarily, a very limited number of knowable absolute truths.
I do note that your wording on this instance, “knowledge about one’s own internal cognitive processes”, indicates that I may still be failing to achieve sufficient clarity in the message I was conveying in that thread. To reiterate: my claim was that it is absolutely true that when you are cognizant of a specific thought, you know that you are cognizant of that thought. In other words: you know absolutely that you are aware of your awareness. This is tautological to self-awareness.
If I say “he’s dead,” it means “I believe he is dead.” The information that I possess, the map, is all that I have access to. Even if my knowledge accurately reflects the territory, it cannot possibly be the territory. I see a rock, but my knowledge of the rock is not the rock, nor is my knowledge of my self my self.
The map is not the territory, but the map is the only one that you’ve got in your head. “It is raining” and “I do not believe it is raining” can be consistent with each other, but I cannot consistently assert “it is raining, but I do not believe that it is raining.”
Ah, but is your knowledge of your knowledge of your self your knowledge of your self?
You know, for a moment until I saw who it was who was replying, I had a brief flash of hope that this conversation had actually started to get somewhere.
This is starting to hurt my feelings, guys. This is my earnest attempt to restate this.
Put another way: the statement “the map is not the territory” does not have a confidence value of .99999whatever. It is a statement about the map, which is itself a territory, and is simply true.
(I may, of course, be mistaken about Logos’ epistemology, in which case I would appreciate clarification.)
Well, you can take “the map is not the territory” as a premise of a system of beliefs, and assign it a probability of 1 within that system, but you should not assign a probability of 1 to the system of beliefs being true.
Of course, there are some uncertainties that it makes essentially no sense to condition your behavior on. What if everything is wrong and nothing makes sense? Then kabumfuck nothing, you do the best with what you have.
First, I want to note the tone you used. I have not so disrespected you in this dialogue.
Second: “Put another way: the statement “the map is not the territory” does not have a confidence value of .99999whatever. It is a statement about the map, which is itself a territory, and is simply true.”
There is some truth to the claim that “the map is a territory”, but it’s not really very useful. Also, while it is demonstrable that the map is not the territory, it is not tautological and thus requires demonstration.
I am, however, comfortable stating that any specific individual “map” which has been so demonstrated to be a territory of its own without being the territory for which it is mapping is, in truth, a territory of its own (caveat: thar be recursion here!), and that this is expressible as a truth claim without the need for probability values.
First, I have no idea what you’re talking about. I intended no disrespect (in this comment). What about my comment indicated to you a negative tone? I am honestly surprised that you would interpret it this way, and wish to correct whatever flaw in my communicatory ability has caused this.
Second:
Okay, now I’m starting to get confused again. ‘Some’ truth? And what does ‘useful’ have to do with it?
It… seems tautological to me...
So, um, have I understood you or not?
The way I read “This is starting to hurt my feelings, guys.” was that you were expressing it as a miming of myself, as opposed to being indicative of your own sentiments. If this was not the case, then I apologize for the projection.
Whether or not a truth claim is valid is binary. How far a given valid claim extends is quantitative rather than qualitative, however. My comment about usefulness has more to do with the utility of the notion (i.e., a true-and-interesting notion is more “useful” than a true-and-trivial notion).
Tautological would be “The map is a map”. Statements that depend upon definitions for their truth value approach tautology but aren’t necessarily so.
¬A ≠ A is tautological, as is A = A. However, (B ⇔ A) → (¬A = ¬B) is definitionally true. So: “the map is not the territory” is definitionally true (and by its definition it is demonstrated). However, it is possible for a conceptual territory that the territory is the map. (I am aware of no such instance, but it conceptually could occur.) This would, however, require a different operational definition of what a “map” is from the context we currently use, so it would actually be a different statement.
I’m comfortable saying “close enough for government work”.
was a direct (and, in point of fact, honest) response to
Though, in retrospect, this may not mean what I took it to mean.
Agreed.
Ah, ok.
:)
This is going nowhere. I am intimately familiar with the epistemological framework you just related to me, and I am trying to convey one wholly unlike it to you. Exactly what good does it do this dialogue for you to continue reiterating points I have already discussed and explained to you ad nauseam?
But you know that you ‘see’ a rock. And that IS territory, in the absolute sense.
Unless you are using a naive realist epistemological framework. In which case it means “he’s dead” (and can be either true or not true.)
My understanding of what you are saying is that I should assign a probability of 1 to the proposition that I experience the quale of rock-seeing.
I do not accept that this is the case. There are alternatives, although they do not make much intuitive sense. It may seem to me that I cannot think I have an experience, and not have it, but maybe reality isn’t even that coherent. Maybe it doesn’t follow for reasons that are fundamentally impossible for me to make sense of. Also maybe I don’t exist.
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience, or that I don’t exist, but if I started getting evidence that reality is fundamentally incoherent, I’d definitely have to start judging it as more likely.
I am comfortable agreeing with this statement.
I do not accept that this is the case. Postulating counterfactuals does not validate the possibility of those counterfactuals (this is just another variation of the Ontological Argument, and is as invalid as is the Ontological Argument itself, for very similar reasons.)
This is a question which reveals why I have the problems I do with the epistemology embedded in Bayesian reasoning. It is very similar to the fact that I’m pretty sure you could agree with the notion that it is conceptually plausible that, if reality were ‘fundamentally incoherent’, the statement ¬A = A could be true.
The thing is, it isn’t possible for such incoherence to exist. They are manifestations of definitions; the act of being so defined causes them to be proscriptive restrictions on the behavior of all that which exists.
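Incidentally, the impossibility being asserted here is itself a theorem: read “¬A = A” as the biconditional A ↔ ¬A, and its refutation goes through by definition alone, with no classical axioms. A minimal sketch in Lean 4 (illustrative only, not part of the original exchange):

```lean
-- ¬(A ↔ ¬A): a proposition cannot be equivalent to its own negation.
-- The proof unfolds the biconditional directly; no classical axioms needed.
example (A : Prop) : ¬(A ↔ ¬A) := fun h =>
  have hna : ¬A := fun ha => (h.mp ha) ha
  hna (h.mpr hna)
```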
For what it’s worth, I also hold that the Logical Absolutes are in fact absolute, though I also assert that this is essentially meaningless.
How do you know?
It seems pretty reasonable to me that A must always equal A, but if I suddenly found that the universe I was living in ran on dream logic, and 2+2 stopped equaling the same thing every time, I’d start viewing the notion with rather more skepticism.
Because they are functions of definition. Altering the definition invalidates the scenario.
Eliezer used a cognitive trick in this case: he postulated a counterfactual and treated that postulation as sufficient justification to treat the counterfactual as a possibility. This is not justified.
If you state that A is A by definition, then fine, you’ve got a definition that A=A. But that doesn’t mean that the definition must be manifest. Things cannot be defined into reality.
Now, I believe that A=A is a real rule that reality follows, with very high confidence. I am even willing to accept it as a premise within my system of beliefs, and assign it a probability of 1 in the context of that system of beliefs, but I can’t assign a probability of 1 to that system of beliefs.
It’s possible contingent on things that we have very strong reason to believe simply being wrong.
The postulation does not make it a possibility. It was a possibility without ever being postulated.
The postulation “Maybe time isn’t absolute, and there is no sense in which two events in different places can objectively be said to take place at the same time” would probably have been regarded by most, a thousand years ago, as impossible. People would respond that that simply isn’t what time is. But from what we can see today, it looks like reality really does work like that, and any definition of “time” which demands absolute simultaneity exist simply doesn’t correspond to our universe.
In any case, I don’t see how you get from the premise that you can have absolute certainty about your experience of qualia to the position that you can’t judge how likely it is that the Sportland Sports won the game last night based on Neighbor Bob’s say so.
Another epistemological breakdown may be about to occur here. I separate “real” from “existant”.
That which is real is any pattern that which exists is proscriptively constrained by.
That which exists is any phenomenon which directly interacts with other phenomena.
I agree with you that you cannot define things into existence. But you most assuredly can define things into reality. The logical absolutes are examples of this.
Yup. But that’s a manifestation of the definition, and nothing more. If A = A had never been defined, no one would bat an eye at ¬A = A. But that statement would bear no relation to the definitions supporting the assertion A = A.
We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that it is not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
Agreed. But time isn’t “definitional” in nature; that is, it is more than a mere product of definition. We define ‘time’ to be an existing phenomenon with specific characteristics—and thus the definition of how time behaves is constrained by the actual patterns that existing phenomenon adheres to.
The number 3 on the other hand is defined as itself, and there is no existing phenomenon to which it corresponds. We can find three instances of a given phenomenon, but that instantiated collection should not be conflated with an instantiation of the number three itself.
You can’t. Conversations—and thus, topics—evolve over time, and this conversation has been going on for over 24 hours now.
Things that have been believed impossible have sometimes been found to be true. Just because you do not yet have any reason to suspect that something is true does not mean that you can assign zero probability to its truth. If you did assign a proposition zero probability, no amount of justification could ever budge you from zero probability; whatever evidence or arguments you received, it would always be infinitely more likely that they came about from some process unconnected to the truth of the proposition than that the proposition was true.
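The arithmetic behind this point is worth seeing once: under Bayes’ rule, a prior of exactly zero is a fixed point that no evidence can move, while any nonzero prior, however tiny, responds to evidence. The likelihoods below are illustrative numbers only:

```python
from fractions import Fraction

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    num = prior * p_e_given_h
    denom = num + (1 - prior) * p_e_given_not_h
    return num / denom

# Evidence E is 1000x likelier if H is true than if H is false.
p_e_given_h = Fraction(1, 2)
p_e_given_not_h = Fraction(1, 2000)

p = Fraction(0)      # a prior of exactly zero
for _ in range(50):  # pile on 50 rounds of strong evidence
    p = bayes_update(p, p_e_given_h, p_e_given_not_h)
print(p)             # 0 -- no amount of evidence budges a zero prior

q = Fraction(1, 10**12)  # a merely tiny prior
q = bayes_update(q, p_e_given_h, p_e_given_not_h)
print(q > Fraction(1, 10**12))  # True -- a nonzero prior responds to evidence
```

This is exactly why “infinitely more likely that the evidence came from some other process”: the zero-prior numerator can never outgrow the alternative.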
I’m getting rather tired of having to start this conversation over yet again. Do you understand my meaning when I say that until such time as a given counterfactual is provided with justification to accept its plausibility, the null hypothesis is that it is not?
I believe that I understand you, but I don’t think that is part of an epistemology that is as good at forming true beliefs and recognizing false ones.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
As a social prescription, you can call certain claims “justified knowledge,” based on how they are arrived at, and not call other claims about reality “justified knowledge” because the evidence on which they were arrived at does not meet the same criteria. And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong. If you had a computer that could process whatever information you put into it, and tell you exactly how confident you ought to be about any given claim given that information, and have the claims be right exactly that often, then the category of “justified belief” will not correspond to any particular range of likelihood.
This sounds like a point where you would complain that I am confusing “aggregated average” with “manifested instantiation.” We can accept the premise that the territory is the territory, and a particular event either happened or it did not. So if we are talking about a particular event, should we be able to say that it actually definitely happened, because that’s how the territory works? Well, we can say that, but then we start observing that some of the statements we make about things that “definitely happened” turn out to be wrong. If we except as a premise in our epistemologies that the territory is what it is, we can accept that the statements we make about the territory are either right, absolutely, or wrong, absolutely. But we don’t know which are which, and “justified knowledge” won’t correspond to any particular level of confidence.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong, and tells you that you can’t predict the likelihood of some things when you can actually make confidence predictions about them and be right as often as you say you should be, I think you’ll have a hard time selling that it’s a good epistemology.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
Being able to readily differentiate between these two categories is a vital element for ways of knowing.
I don’t know, at this point, exactly where this particular thread is in the nested tree of various conversations I’m handling. But I can tell you that at no time have I espoused any form of absolutism that allows for non-absolute truths to be expressed as such.
What I have espoused, however, is the use of contingent naive realism coincident with Bayesian reasoning—while understanding the nature of what each “tool” addresses.
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Rational-skepticism doesn’t necessarily operate experimentally and I’m sure you know this.
That being said; yes, yes I did. Thought-experiment, but experiment it is.
Yes, I am aware of how scientists use null hypotheses. But the null hypothesis is only a construct which exists in the context of significance testing, and significance testing does not reliably get accurate results when you measure small effect sizes; it also gives you different probabilities based on the same data depending on how you categorize that data. Eliezer already gave an example of this in one of the pages you linked earlier. The whole reason that people on this site in general have enthusiasm for Bayesian statistics is that we believe they do better, as well as it is possible to do.
The concept of a “null hypothesis,” as it is used in frequentist statistics, doesn’t even make sense in the context of a system where, as you have claimed, we cannot know that it is more likely that a coin will come up heads on the basis of it having come up heads the last hundred times you’ve flipped it.
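The Bayesian alternative being invoked here needs no “null hypothesis” construct at all: a conjugate Beta prior over the coin’s unknown bias turns a run of heads into a graded prediction directly. The uniform Beta(1, 1) starting point is an illustrative choice, not anything from the thread:

```python
from fractions import Fraction

# Beta(1, 1) is a uniform prior over the coin's unknown bias.
a, b = Fraction(1), Fraction(1)

# Observe 100 heads, 0 tails: the conjugate update is just addition.
heads, tails = 100, 0
a, b = a + heads, b + tails

# Posterior predictive probability that the next flip is heads
# (the mean of the Beta(a, b) posterior).
p_next_heads = a / (a + b)
print(p_next_heads)  # 101/102
```

The posterior predictive 101/102 is precisely the “more likely to come up heads, given a hundred heads” judgment that the preceding comment says the pure null-hypothesis framing cannot express.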
If I say “it is raining out,” this is a claim about the territory, yes. If I say “I believe that it is raining out,” this is a claim about my map (and my map exists within, and is a part of, the territory, so it is also a claim “about the territory,” but not about the same part of the territory.)
But my claim about the territory is not certain to be correct. If I say “it is raining out, 99.99999% confidence,” which is the sort of thing I would mean when I say “it is raining out,” that means that I am saying that the territory probably agrees with this statement, but the information in my map does not allow me to have a higher confidence in the agreement than that.
A null hypothesis is a particularly useful simplification not strictly a necessary tool.
So I understand you—you are here claiming that it is not necessary to have a default position in a given topic?
I am saying it is not strictly necessary to have a hypothesis called “null” but rather that such a thing is just extremely useful. I would definitely say the thing about defaults too.
That’s not what a null hypothesis is. A null hypothesis is a default state.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
If you tell me you’ve flipped a coin, I may not have any default position on whether it’s heads or tails. Similarly, I might have no prior belief, or a symmetric prior, on lots of questions that I haven’t thought much about or that don’t have any visible correlation to anything else I know about.
But you would have the default position that it had in fact occupied one of those two outcomes.
Sure. I didn’t say “I have no information related to the experiment”. What I’m saying is this: if I do an experiment to choose between K options, it might be that I don’t have any prior (or have a symmetric prior) about which of those K it will be. That’s what a null hypothesis in statistics is. When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Yes, any experiment also will give you information about whether the experiment’s assumptions were valid. But talking about null hypotheses or default hypotheses is in the context of a particular formal model, where we’re referring to something more specific.
Correct. And that’s a situationally relevant phenomenon. One quite similar to how “the coin will be on one of its sides” is situationally relevant to the coin-toss. (As opposed to it being on edge, or the faces smoothed off.)
I’m not sure whether we’re disagreeing here.
You had asked for an example of where an individual might have no prior on “on a given topic.” I gave one, for a narrow topic: “is the coin heads or tails?”. I didn’t say, and you didn’t ask, for a case where an individual has no prior on anything related to the topic.
But let me give one attempt at that stronger claim. You’ve never met my friend Sam. You’ve never heard of or thought of my friend Sam. Before this comment, you didn’t have a prior on his having a birthday; you didn’t have a prior on his existing; you never considered the possibility that lesswrong-commenter asr has a friend Sam. I would say you have no beliefs on the topic of Sam’s birthday.
It is my default position that they do not exist, even as absent constructs, unless otherwise noted (with the usual lowered threshold of standards of evidence for more evidently ordinary claims). That’s the whole point of default positions; they inform us how to react to new criteria or phenomena as they arise.
By bringing up the issue of non-topics you’re moving the goal-post. I asked you how it could be that a person could have no defaults on a given topic.
If I tell you that there is a flugmizzr, then you know that certain things are ‘true’—at least, you presume them to be until shown otherwise: one, that flugmizzrs are enumerable and two, that they are somehow enumerable by people, and three, flugmizzrs are knowable, discrete phenomena.
Those are most assuredly amongst your defaults on the topic. They could easily each be wrong—there is no way to know—but if you trust my assertion of “a flugmizzr” then each of these become required defaults, useful for acquiring further information.
To clarify—my answer is that “there are topics I’ve never considered, and before consideration, I need not have a default belief.” For me at least, the level of consideration required to actually form a belief is nontrivial. I am often in a state of uninterested impartiality. If you ask me whether you have a friend named Joe, I would not hazard a guess and would be about equally surprised by either answer.
To put it more abstractly: There’s no particular computational reason why the mind needs to have a default for every input. You suggest some topic X, and it’s quite possible for me to remain blank, even if I heard you and constructed some internal representation of X. That representation need not be tied to a belief or disbelief in any of the propositions about X under discussion.
Agreed with your main point; curious about a peripheral:
Is this meaningfully different, in the context you’re operating in, from coming to believe that you probably didn’t experience what you believe you experienced?
Because I also have trouble imagining myself being skeptical about whether I’m actually experiencing what I currently, er, experience myself experiencing (though I can certainly imagine believing things about my current experience that just ain’t so, which is beside your point). But I have no difficulty imagining myself being skeptical about whether I actually experienced what I currently remember myself having experienced a moment ago; that happened a lot after my brain injury, for example.
Yes. I have plenty of evidence that people sometimes become convinced that they’ve had experiences that they haven’t had, but reality would have to work very differently than I think it does for people not to be having the quales they think they’re having.
(nods) That’s what I figured, but wanted to confirm.
Given that my “current” perception of the world is integrating a variety of different inputs that arrived at my senses at different times, I mostly suspect that my intuitive confidence that there’s a sharp qualitative difference between “what I am currently experiencing” and “what I remember having experienced” is (like many of my intuitive confidences) simply not reflective of what’s actually going on.