The map is not the territory, but the map is the only one that you’ve got in your head.
This is going nowhere. I am intimately familiar with the epistemological framework you just related to me, and I am trying to convey one wholly unlike it to you. Exactly what good does it do this dialogue for you to continue reiterating points I have already discussed and explained to you ad nauseam?
I see a rock, but my knowledge of the rock is not the rock, nor is my knowledge of my self my self.
But you know that you ‘see’ a rock. And that IS territory, in the absolute sense.
If I say “he’s dead,” it means “I believe he is dead.”
Unless you are using a naive realist epistemological framework. In which case it means “he’s dead” (and can be either true or not true.)
But you know that you ‘see’ a rock. And that IS territory, in the absolute sense.
My understanding of what you are saying is that I should assign a probability of 1 to the proposition that I experience the quale of rock-seeing.
I do not accept that this is the case. There are alternatives, although they do not make much intuitive sense. It may seem to me that I cannot think I have an experience, and not have it, but maybe reality isn’t even that coherent. Maybe it doesn’t follow for reasons that are fundamentally impossible for me to make sense of. Also maybe I don’t exist.
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience, or that I don’t exist, but if I started getting evidence that reality is fundamentally incoherent, I’d definitely have to start judging it as more likely.
My understanding of what you are saying is that I should assign a probability of 1 to the proposition that I experience the quale of rock-seeing.
I am comfortable agreeing with this statement.
There are alternatives, although they do not make much intuitive sense.
I do not accept that this is the case. Postulating counterfactuals does not validate the possibility of those counterfactuals (this is just another variation of the Ontological Argument, and is as invalid as is the Ontological Argument itself, for very similar reasons.)
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience
This is a question which reveals why I have the problems I do with the epistemology embedded in Bayesian reasoning. It is very similar to the fact that I’m pretty sure you could agree with the notion that it is conceptually plausible that if reality were ‘fundamentally incoherent’ the statement ¬A = A could be true.
The thing is, it isn’t possible for such incoherence to exist. The logical absolutes are manifestations of definitions; the act of being so defined causes them to be proscriptive restrictions on the behavior of all that which exists.
For what it’s worth, I also hold that the Logical Absolutes are in fact absolute, though I also assert that this is essentially meaningless.
The thing is, it isn’t possible for such incoherence to exist.
How do you know?
It seems pretty reasonable to me that A must always equal A, but if I suddenly found that the universe I was living in ran on dream logic, and 2+2 stopped equaling the same thing every time, I’d start viewing the notion with rather more skepticism.
Because they are functions of definition. Altering the definition invalidates the scenario.
and 2+2 stopped equaling the same thing every time
Eliezer used a cognitive trick in this case: he postulated a counterfactual and treated that postulation as sufficient justification to treat the counterfactual as a possibility. This is not justified.
Because they are functions of definition. Altering the definition invalidates the scenario.
If you state that A is A by definition, then fine, you’ve got a definition that A=A. But that doesn’t mean that the definition must be manifest. Things cannot be defined into reality.
Now, I believe that A=A is a real rule that reality follows, with very high confidence. I am even willing to accept it as a premise within my system of beliefs, and assign it a probability of 1 in the context of that system of beliefs, but I can’t assign a probability of 1 to that system of beliefs.
Eliezer used a cognitive trick in this case: he postulated a counterfactual and treated that postulation as sufficient justification to treat the counterfactual as a possibility. This is not justified.
It’s possible contingent on things that we have very strong reason to believe simply being wrong.
The postulation does not make it a possibility. It was a possibility without ever being postulated.
The postulation “Maybe time isn’t absolute, and there is no sense in which two events in different places can objectively be said to take place at the same time” would probably have been regarded by most, a thousand years ago, as impossible. People would respond that that simply isn’t what time is. But from what we can see today, it looks like reality really does work like that, and any definition of “time” which demands absolute simultaneity exist simply doesn’t correspond to our universe.
In any case, I don’t see how you get from the premise that you can have absolute certainty about your experience of qualia to the position that you can’t judge how likely it is that the Sportland Sports won the game last night based on Neighbor Bob’s say so.
Another epistemological breakdown may be about to occur here. I separate “real” from “existent”.
That which is real is any pattern by which that which exists is proscriptively constrained.
That which exists is any phenomenon which directly interacts with other phenomena.
I agree with you that you cannot define things into existence. But you most assuredly can define things into reality. The logical absolutes are examples of this.
Now, I believe that A=A is a real rule that reality follows,
Yup. But that’s a manifestation of the definition, and nothing more. If A=A had never been defined, no one would bat an eye at ¬A = A. But that statement would bear no relation to the definitions supporting the assertion A=A.
It’s possible contingent on things that we have very strong reason to believe simply being wrong.
We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that they are not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
But from what we can see today, it looks like reality really does work like that, and any definition of “time” which demands absolute simultaneity exist simply doesn’t correspond to our universe.
Agreed. But time isn’t “definitional” in nature; that is, it is more than a mere product of definition. We define ‘time’ to be an existing phenomenon with specific characteristics—and thus the definition of how time behaves is constrained by the actual patterns that existing phenomenon adheres to.
The number 3, on the other hand, is defined as itself, and there is no existing phenomenon to which it corresponds. We can find three instances of a given phenomenon, but that instantiated collection should not be conflated with an instantiation of the number three itself.
In any case, I don’t see how you get from the premise that you can have absolute certainty about your experience of qualia to the position that you can’t judge how likely it is that the Sportland Sports won the game last night based on Neighbor Bob’s say so.
You can’t. Conversations—and thus, topics—evolve over time, and this conversation has been going on for over 24 hours now.
We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that they are not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
Things that have been believed impossible have sometimes been found to be true. Just because you do not yet have any reason to suspect that something is true does not mean that you can assign zero probability to its truth. If you did assign a proposition zero probability, no amount of justification could ever budge you from zero probability; whatever evidence or arguments you received, it would always be infinitely more likely that they came about from some process unconnected to the truth of the proposition than that the proposition was true.
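[The arithmetic behind that last point can be made concrete. Under Bayes' rule the posterior is proportional to likelihood times prior, so a prior of exactly zero is a fixed point of updating: no evidence, however strong, can move it. A minimal sketch, with made-up likelihood numbers chosen only for illustration:]

```python
def bayes_update(prior, lik_if_true, lik_if_false):
    """Posterior P(H|E) from prior P(H) via Bayes' rule."""
    evidence = lik_if_true * prior + lik_if_false * (1 - prior)
    return lik_if_true * prior / evidence

# Strong evidence: the observation is ~1000x likelier if H is true.
p_small = 0.01
for _ in range(5):
    p_small = bayes_update(p_small, 0.99, 0.001)
print(p_small)  # a tiny-but-nonzero prior is dragged up toward 1

# A prior of exactly zero is a fixed point: the numerator is always 0.
p_zero = 0.0
for _ in range(5):
    p_zero = bayes_update(p_zero, 0.99, 0.001)
print(p_zero)  # 0.0
```

[Five rounds of thousand-to-one evidence take a 1% prior above 99%, while the zero prior never moves at all.]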
I’m getting rather tired of having to start this conversation over yet again. Do you understand my meaning when I say that until such time as a given counterfactual is provided with justification to accept its plausibility, the null hypothesis is that it is not?
I believe that I understand you, but I don’t think that is part of an epistemology that is as good at forming true beliefs and recognizing false ones.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
As a social prescription, you can call certain claims “justified knowledge,” based on how they are arrived at, and not call other claims about reality “justified knowledge” because the evidence on which they were arrived at does not meet the same criteria. And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong. If you had a computer that could process whatever information you put into it, and tell you exactly how confident you ought to be about any given claim given that information, and have the claims be right exactly that often, then the category of “justified knowledge” will not correspond to any particular range of likelihood.
This sounds like a point where you would complain that I am confusing “aggregated average” with “manifested instantiation.” We can accept the premise that the territory is the territory, and a particular event either happened or it did not. So if we are talking about a particular event, should we be able to say that it actually definitely happened, because that’s how the territory works? Well, we can say that, but then we start observing that some of the statements we make about things that “definitely happened” turn out to be wrong. If we accept as a premise in our epistemologies that the territory is what it is, we can accept that the statements we make about the territory are either right, absolutely, or wrong, absolutely. But we don’t know which are which, and “justified knowledge” won’t correspond to any particular level of confidence.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong, and tells you that you can’t predict the likelihood of some things when you can actually make confidence predictions about them and be right as often as you say you should be, I think you’ll have a hard time selling that it’s a good epistemology.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong.
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
Being able to readily differentiate between these two categories is a vital element for ways of knowing.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong,
I don’t know, at this point, exactly where this particular thread is in the nested tree of various conversations I’m handling. But I can tell you that at no time have I espoused any form of absolutism that allows for non-absolute truths to be expressed as such.
What I have espoused, however, is the use of contingent naive realism coincident with Bayesian reasoning—while understanding the nature of what each “tool” addresses.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Rational-skepticism doesn’t necessarily operate experimentally and I’m sure you know this.
That being said: yes, yes I did. Thought-experiment, but experiment it is.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
Yes, I am aware of how scientists use null hypotheses. But the null hypothesis is only a construct which exists in the context of significance testing, and significance testing does not reliably get accurate results when you measure small effect sizes, and also gives you different probabilities based on the same data depending on how you categorize that data; Eliezer already gave an example of this in one of the pages you linked earlier. The whole reason that people on this site in general have enthusiasm for Bayesian statistics is because we believe that they do better, as well as it is possible to do.
The concept of a “null hypothesis,” as it is used in frequentist statistics, doesn’t even make sense in the context of a system where, as you have claimed, we cannot know that it is more likely that a coin will come up heads on the basis of it having come up heads the last hundred times you’ve flipped it.
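[The Bayesian treatment of that coin takes only a few lines. Assuming, for the sake of illustration, a uniform Beta(1, 1) prior over the coin's heads-probability, a hundred heads in a row makes the next flip overwhelmingly likely to come up heads:]

```python
# Beta-Binomial update: start with a uniform Beta(1, 1) prior on the
# coin's heads-probability, then observe 100 heads and 0 tails.
alpha, beta = 1, 1
heads, tails = 100, 0
alpha += heads
beta += tails

# The probability that the NEXT flip is heads is the posterior mean
# of the bias: alpha / (alpha + beta).
p_next_heads = alpha / (alpha + beta)
print(p_next_heads)  # 101/102, roughly 0.990
```

[The uniform prior here is an assumption, not a necessity; any prior that doesn't put all its mass on "fair coin" yields the same qualitative conclusion.]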
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
If I say “it is raining out,” this is a claim about the territory, yes. If I say “I believe that it is raining out,” this is a claim about my map (and my map exists within, and is a part of, the territory, so it is also a claim “about the territory,” but not about the same part of the territory.)
But my claim about the territory is not certain to be correct. If I say “it is raining out, 99.99999% confidence,” which is the sort of thing I would mean when I say “it is raining out,” that means that I am saying that the territory probably agrees with this statement, but the information in my map does not allow me to have a higher confidence in the agreement than that.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
A null hypothesis is a particularly useful simplification not strictly a necessary tool.
I am saying it is not strictly necessary to have a hypothesis called “null” but rather that such a thing is just extremely useful. I would definitely say the thing about defaults too.
I am saying it is not strictly necessary to have a hypothesis called “null”
That’s not what a null hypothesis is. A null hypothesis is a default state.
I would definitely say the thing about defaults too.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
If you tell me you’ve flipped a coin, I may not have any default position on whether it’s heads or tails. Similarly, I might have no prior belief, or a symmetric prior, on lots of questions that I haven’t thought much about or that don’t have any visible correlation to anything else I know about.
Sure. I didn’t say “I have no information related to the experiment”. What I’m saying is this: if I do an experiment to choose between K options, it might be that I don’t have any prior (or have a symmetric prior) about which of those K it will be. That’s what a null hypothesis in statistics is. When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Yes, any experiment also will give you information about whether the experiment’s assumptions were valid. But talking about null hypotheses or default hypotheses is in the context of a particular formal model, where we’re referring to something more specific.
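[The statistical sense of "null hypothesis" being pointed at here can be shown in miniature with a permutation test: pool the two arms, assume "no effect," and see how often chance alone reproduces the observed gap. All the numbers (arm sizes, recovery counts) are hypothetical:]

```python
import random

# Hypothetical trial: 200 patients per arm, 140 recoveries on the
# drug, 120 on placebo. The null hypothesis is "no effect": both
# arms share one recovery rate, and the observed gap is chance.
drug = [1] * 140 + [0] * 60
placebo = [1] * 120 + [0] * 80
observed_gap = sum(drug) / len(drug) - sum(placebo) / len(placebo)

# Permutation test: shuffle the pooled outcomes and count how often
# a gap at least this large appears when labels carry no information.
pooled = drug + placebo
random.seed(0)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    gap = sum(pooled[:200]) / 200 - sum(pooled[200:]) / 200
    if abs(gap) >= abs(observed_gap):
        extreme += 1

p_value = extreme / trials
print(p_value)  # small => the "no effect" null looks implausible
```

[Note the null being tested is "same recovery rate in both arms," a claim inside the model, not a default disbelief in the drug's existence.]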
When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Correct. And that’s a situationally relevant phenomenon. One quite similar to how “the coin will be on one of its sides” is situationally relevant to the coin-toss. (As opposed to it being on edge, or the faces smoothed off.)
You had asked for an example of where an individual might have no prior on “on a given topic.” I gave one, for a narrow topic: “is the coin heads or tails?”. I didn’t say, and you didn’t ask, for a case where an individual has no prior on anything related to the topic.
But let me give one attempt at that stronger claim. You’ve never met my friend Sam. You’ve never heard of or thought of my friend Sam. Before this comment, you didn’t have a prior on his having a birthday; you didn’t have a prior on his existing; you never considered the possibility that lesswrong-commenter asr has a friend Sam. I would say you have no beliefs on the topic of Sam’s birthday.
It is my default position that they do not exist, even as absent constructs, unless otherwise noted (with the usual lowered threshold of standards of evidence for more evidently ordinary claims). That’s the whole point of default positions; they inform us how to react to new criteria or phenomena as they arise.
By bringing up the issue of non-topics you’re moving the goal-post. I asked you how it could be that a person could have no defaults on a given topic.
If I tell you that there is a flugmizzr, then you know that certain things are ‘true’—at least, you presume them to be until shown otherwise: one, that flugmizzrs are enumerable; two, that they are somehow enumerable by people; and three, that flugmizzrs are knowable, discrete phenomena.
Those are most assuredly amongst your defaults on the topic. They could easily each be wrong—there is no way to know—but if you trust my assertion of “a flugmizzr” then each of these become required defaults, useful for acquiring further information.
I asked you how it could be that a person could have no defaults on a given topic.
To clarify—my answer is that “there are topics I’ve never considered, and before consideration, I need not have a default belief.” For me at least, the level of consideration needed to actually form a belief is nontrivial. I am often in a state of uninterested impartiality. If you ask me whether you have a friend named Joe, I would not hazard a guess and would be about equally surprised by either answer.
To put it more abstractly: There’s no particular computational reason why the mind needs to have a default for every input. You suggest some topic X, and it’s quite possible for me to remain blank, even if I heard you and constructed some internal representation of X. That representation need not be tied to a belief or disbelief in any of the propositions about X under discussion.
Agreed with your main point; curious about a peripheral:
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience
Is this meaningfully different, in the context you’re operating in, from coming to believe that you probably didn’t experience what you believe you experienced?
Because I also have trouble imagining myself being skeptical about whether I’m actually experiencing what I currently, er, experience myself experiencing (though I can certainly imagine believing things about my current experience that just ain’t so, which is beside your point). But I have no difficulty imagining myself being skeptical about whether I actually experienced what I currently remember myself having experienced a moment ago; that happened a lot after my brain injury, for example.
Is this meaningfully different, in the context you’re operating in, from coming to believe that you probably didn’t experience what you believe you experienced?
Yes. I have plenty of evidence that people sometimes become convinced that they’ve had experiences that they haven’t had, but reality would have to work very differently than I think it does for people not to be having the qualia they think they’re having.
(nods) That’s what I figured, but wanted to confirm.
Given that my “current” perception of the world is integrating a variety of different inputs that arrived at my senses at different times, I mostly suspect that my intuitive confidence that there’s a sharp qualitative difference between “what I am currently experiencing” and “what I remember having experienced” is (like many of my intuitive confidences) simply not reflective of what’s actually going on.
This is going nowhere. I am intimately familiar with the epistemological framework you just related to me, and I am trying to convey one wholly unlike to to you. Exactly what good does it do this dialogue for you to continue reiterating points I have already discussed and explained to you ad nauseum?
But you know that you ‘see’ a rock. And that IS territory, in the absolute sense.
Unless you are using a naive realist epistemological framework. In which case it means “he’s dead” (and can be either true or not true.)
My understanding of what you are saying is that I should assign a probability of 1 to the proposition that I experience the quale of rock-seeing.
I do not accept that this is the case. There are alternatives, although they do not make much intuitive sense. It may seem to me that I cannot think I have an experience, and not have it, but maybe reality isn’t even that coherent. Maybe it doesn’t follow for reasons that are fundamentally impossible for me to make sense of. Also maybe I don’t exist.
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience, or that I don’t exist, but if I started getting evidence that reality is fundamentally incoherent, I’d definitely have to start judging it as more likely.
I am comfortable agreeing with this statement.
I do not accept that this is the case. Postulating counterfactuals does not validate the possibility of those counterfactuals (this is just another variation of the Ontological Argument, and is as invalid as is the Ontological Argument itself, for very similar reasons.)
This is a question which reveals why I have the problems I do with the epistemology embedded in Bayesian reasoning. It is very similar to the fact that I’m pretty sure you could agree with the notion that it is conceptually plausible that if reality were ‘fundamentally incoherent’ the statement
¬A = A
could be true.The thing is, it isn’t possible for such incoherence to exist. They are manifestations of definitions; the act of being so defined causes them to be proscriptive restrictions on the behavior of all that which exists.
For what it’s worth, I also hold that the Logical Absolutes are in fact absolute, though I also assert that this is essentially meaningless.
How do you know?
It seems pretty reasonable to me that A must always equal A, but if I suddenly found that the universe I was living in ran on dream logic, and 2+2 stopped equaling the same thing every time, I’d start viewing the notion with rather more skepticism.
Because they are functions of definition. Altering the definition invalidates the scenario.
Eliezer used a cognitive trick in this case: he postulated a counterfactual and treated that postulation as sufficient justification to treat the counterfactual as a possibility. This is not justified.
If you state that A is A by definition, then fine, you’ve got a definition that A=A. But that doesn’t mean that the definition must be manifest. Things cannot be defined into reality.
Now, I believe that A=A is a real rule that reality follows, with very high confidence. I am even willing to accept it as a premise within my system of beliefs, and assign it a probability of 1 in the context of that system of beliefs, but I can’t assign a probability of 1 to that system of beliefs.
It’s possible contingent on things that we have very strong reason to believe simply being wrong.
The postulation does not make it a possibility. It was a possibility without ever being postulated.
The postulation “Maybe time isn’t absolute, and there is no sense in which two events in different places can objectively be said to take place at the same time” would probably have been regarded by most, a thousand years ago, as impossible. People would respond that that simply isn’t what time is. But from what we can see today, it looks like reality really does work like that, and any definition of “time” which demands absolute simultaneity exist simply doesn’t correspond to our universe.
In any case, I don’t see how you get from the premise that you can have absolute certainty about your experience of qualia to the position that you can’t judge how likely it is that the Sportland Sports won the game last night based on Neighbor Bob’s say so.
Another epistemological breakdown may be about to occur here. I separate “real” from “existant”.
That which is real is any pattern that which exists is proscriptively constrained by.
That which exists is any phenomenon which directly interacts with other phenomena.
I agree with you that you cannot define things into existence. But you most assuredly can define things into reality. The logical absolutes are examples of this.
Yup. But that’s a manifestation of the definition, and nothing more. If A=A had never been defined, no one would bat an eye at
¬A = A
. But that statement would bear no relation to the definitions supporting the assertion A=A.We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that they are not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
Agreed. But time isn’t “definitional” in nature; that is, it is more than a mere product of definition. We define ‘time’ to be an existing phenomenon with specific characteristics—and thus the definition of how time behaves is constrained by the actual patterns that existing phenomenon adheres to.
The number 3 on the other hand is defined as itself, and there is no existing phenomenon to which it corresponds. We can find three instances of a given phenomenon, but that instantiated collection should not be conflated with an instantiation of the number three itself.
You can’t. Conversations—and thus, topics—evolve over time, and this conversation has been going on for over 24 hours now.
Things that have been believed impossible have sometimes been found to be true. Just because you do not yet have any reason to suspect that something is true, does not mean that you can have assign zero probability to its truth. If you did assign a proposition zero probability, no amount of justification could ever budge you from zero probability; whatever evidence or arguments you received, it would always be infinitely more likely that they came about from some process unconnected to the truth of the proposition than that the proposition was true.
I’m getting rather tired of having to start this conversation over yet again. Do you understand my meaning when I say that until such time as a given counterfactual is provided with justification to accept its plausibility, the null hypothesis is that it is not?
I believe that I understand you, but I don’t think that is part of an epistemology that is as good at forming true beliefs and recognizing false ones.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
As a social prescription, you can call certain claims “justified knowledge,” based on how they are arrived at, and not call other claims about reality “justified knowledge” because the evidence on which they were arrived at does not meet the same criteria. And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong. If you had a computer that could process whatever information you put into it, and tell you exactly how confident you ought to be about any given claim given that information, and have the claims be right exactly that often, then the category of “justified belief” will not correspond to any particular range of likelihood.
This sounds like a point where you would complain that I am confusing “aggregated average” with “manifested instantiation.” We can accept the premise that the territory is the territory, and a particular event either happened or it did not. So if we are talking about a particular event, should we be able to say that it actually definitely happened, because that’s how the territory works? Well, we can say that, but then we start observing that some of the statements we make about things that “definitely happened” turn out to be wrong. If we except as a premise in our epistemologies that the territory is what it is, we can accept that the statements we make about the territory are either right, absolutely, or wrong, absolutely. But we don’t know which are which, and “justified knowledge” won’t correspond to any particular level of confidence.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong, and tells you that you can’t predict the likelihood of some things when you can actually make confidence predictions about them and be right as often as you say you should be, I think you’ll have a hard time selling that it’s a good epistemology.
The use of null hypothesises is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
Being able to readily differentiate between these two categories is a vital element for ways of knowing.
I don’t know, at this point, exactly where this particular thread is in the nested tree of various conversations I’m handling. But I can tell you that at no time have I espoused any form of absolutism that allows for non-absolute truths to be expressed as such.
What I have espoused, however, is the use of contingent naive realism coincident with Bayesian reasoning—while understanding the nature of what each “tool” addresses.
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Rational-skepticism doesn’t necessarily operate experimentally and I’m sure you know this.
That being said; yes, yes I did. Thought-experiment, but experiment it is.
Yes, I am aware of how scientists use null hypotheses. But the null hypothesis is only a construct which exists in the context of significance testing, and significance testing does not reliably get accurate results when you measure small effect sizes, and it also gives you different probabilities based on the same data depending on how you categorize that data; Eliezer already gave an example of this in one of the pages you linked earlier. The whole reason that people on this site in general have enthusiasm for Bayesian statistics is that we believe they do better, as well as it is possible to do.
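The “different probabilities from the same data” point can be made concrete with a small sketch. This uses the classic textbook example (nine heads in twelve flips, numbers chosen here purely for illustration, not taken from the thread): the same data yields different p-values against the same null hypothesis of a fair coin, depending only on which sampling plan the experimenter had in mind.

```python
from math import comb

# Same observed data in both cases: 9 heads, 3 tails.
# Null hypothesis: the coin is fair. One-sided test for a bias toward heads.
heads, tails = 9, 3
n = heads + tails

# Design A: "I planned to flip exactly 12 times."
# p-value = P(9 or more heads in 12 flips of a fair coin), a binomial tail.
p_binomial = sum(comb(n, k) for k in range(heads, n + 1)) / 2**n

# Design B: "I planned to flip until the 3rd tail appeared."
# p-value = P(the 3rd tail arrives on flip 12 or later), a negative-binomial
# tail, computed as 1 minus the probability it arrives on flips 3 through 11.
p_negbinom = 1 - sum(comb(m - 1, tails - 1) / 2**m for m in range(tails, n))

print(round(p_binomial, 4))   # 0.073  -> not "significant" at the 5% level
print(round(p_negbinom, 4))   # 0.0327 -> "significant" at the 5% level
```

Identical flips, identical null hypothesis, opposite verdicts at the conventional 5% threshold; the answer depends on the experimenter’s intentions rather than on the data alone, which is the complaint being made here.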
The concept of a “null hypothesis,” as it is used in frequentist statistics, doesn’t even make sense in the context of a system where, as you have claimed, we cannot know that it is more likely that a coin will come up heads on the basis of it having come up heads the last hundred times you’ve flipped it.
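For contrast, here is a minimal Bayesian sketch of the hundred-heads coin. It assumes a uniform Beta(1, 1) prior over the coin’s bias (an assumption of this illustration, not something stated in the thread), under which the predictive probability of heads after h heads and t tails is (h + 1) / (h + t + 2), Laplace’s rule of succession:

```python
from fractions import Fraction

def prob_next_heads(heads: int, tails: int) -> Fraction:
    """Predictive probability that the next flip is heads, given a
    uniform Beta(1, 1) prior over the coin's bias (rule of succession)."""
    return Fraction(heads + 1, heads + tails + 2)

# After a hundred heads and no tails, the posterior predictive
# probability of another head is 101/102, i.e. about 0.99.
p = prob_next_heads(100, 0)
print(p)  # 101/102
```

No null hypothesis is needed anywhere in this calculation: the hundred observed heads simply make “the next flip is heads” very likely, which is exactly the judgment the quoted position declines to allow.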
If I say “it is raining out,” this is a claim about the territory, yes. If I say “I believe that it is raining out,” this is a claim about my map (and my map exists within, and is a part of, the territory, so it is also a claim “about the territory,” but not about the same part of the territory.)
But my claim about the territory is not certain to be correct. If I say “it is raining out, 99.99999% confidence,” which is the sort of thing I would mean when I say “it is raining out,” that means that I am saying that the territory probably agrees with this statement, but the information in my map does not allow me to have a higher confidence in the agreement than that.
A null hypothesis is a particularly useful simplification, not strictly a necessary tool.
So I understand you—you are here claiming that it is not necessary to have a default position in a given topic?
I am saying it is not strictly necessary to have a hypothesis called “null” but rather that such a thing is just extremely useful. I would definitely say the thing about defaults too.
That’s not what a null hypothesis is. A null hypothesis is a default state.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
If you tell me you’ve flipped a coin, I may not have any default position on whether it’s heads or tails. Similarly, I might have no prior belief, or a symmetric prior, on lots of questions that I haven’t thought much about or that don’t have any visible correlation to anything else I know about.
But you would have the default position that it had in fact occupied one of those two outcomes.
Sure. I didn’t say “I have no information related to the experiment”. What I’m saying is this: if I do an experiment to choose between K options, it might be that I don’t have any prior (or have a symmetric prior) about which of those K it will be. That’s what a null hypothesis in statistics is. When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Yes, any experiment also will give you information about whether the experiment’s assumptions were valid. But talking about null hypotheses or default hypotheses is in the context of a particular formal model, where we’re referring to something more specific.
Correct. And that’s a situationally relevant phenomenon. One quite similar to how “the coin will be on one of its sides” is situationally relevant to the coin-toss. (As opposed to it being on edge, or the faces smoothed off.)
I’m not sure whether we’re disagreeing here.
You had asked for an example of where an individual might have no prior on “on a given topic.” I gave one, for a narrow topic: “is the coin heads or tails?”. I didn’t say, and you didn’t ask, for a case where an individual has no prior on anything related to the topic.
But let me give one attempt at that stronger claim. You’ve never met my friend Sam. You’ve never heard of or thought of my friend Sam. Before this comment, you didn’t have a prior on his having a birthday, and you didn’t have a prior on his existing; you never considered the possibility that lesswrong-commenter asr has a friend Sam. I would say you have no beliefs on the topic of Sam’s birthday.
It is my default position that they do not exist, even as absent constructs, unless otherwise noted (with the usual lowered threshold of standards of evidence for more evidently ordinary claims). That’s the whole point of default positions; they inform us how to react to new criteria or phenomena as they arise.
By bringing up the issue of non-topics you’re moving the goal-post. I asked you how it could be that a person could have no defaults on a given topic.
If I tell you that there is a flugmizzr, then you know that certain things are ‘true’—at least, you presume them to be until shown otherwise: one, that flugmizzrs are enumerable and two, that they are somehow enumerable by people, and three, flugmizzrs are knowable, discrete phenomena.
Those are most assuredly amongst your defaults on the topic. They could easily each be wrong—there is no way to know—but if you trust my assertion of “a flugmizzr” then each of these become required defaults, useful for acquiring further information.
To clarify—my answer is that “there are topics I’ve never considered, and before consideration, I need not have a default belief.” For me at least, the level of consideration needed to actually form a belief is nontrivial. I am often in a state of uninterested impartiality. If you ask me whether you have a friend named Joe, I would not hazard a guess and would be about equally surprised by either answer.
To put it more abstractly: There’s no particular computational reason why the mind needs to have a default for every input. You suggest some topic X, and it’s quite possible for me to remain blank, even if I heard you and constructed some internal representation of X. That representation need not be tied to a belief or disbelief in any of the propositions about X under discussion.
Agreed with your main point; curious about a peripheral:
Is this meaningfully different, in the context you’re operating in, from coming to believe that you probably didn’t experience what you believe you experienced?
Because I also have trouble imagining myself being skeptical about whether I’m actually experiencing what I currently, er, experience myself experiencing (though I can certainly imagine believing things about my current experience that just ain’t so, which is beside your point). But I have no difficulty imagining myself being skeptical about whether I actually experienced what I currently remember myself having experienced a moment ago; that happened a lot after my brain injury, for example.
Yes. I have plenty of evidence that people sometimes become convinced that they’ve had experiences that they haven’t had, but reality would have to work very differently than I think it does for people not to be having the qualia they think they’re having.
(nods) That’s what I figured, but wanted to confirm.
Given that my “current” perception of the world is integrating a variety of different inputs that arrived at my senses at different times, I mostly suspect that my intuitive confidence that there’s a sharp qualitative difference between “what I am currently experiencing” and “what I remember having experienced” is (like many of my intuitive confidences) simply not reflective of what’s actually going on.