We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that they are not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
Things that have been believed impossible have sometimes been found to be true. Just because you do not yet have any reason to suspect that something is true does not mean that you can assign zero probability to its truth. If you did assign a proposition zero probability, no amount of justification could ever budge you from zero probability; whatever evidence or arguments you received, it would always be infinitely more likely that they came about from some process unconnected to the truth of the proposition than that the proposition was true.
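The "never budge from zero" point falls straight out of Bayes' theorem, and can be checked mechanically. A minimal sketch, with made-up numbers purely for illustration:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return (p_e_given_h * prior) / p_e

# A small but nonzero prior moves normally under strong evidence:
print(posterior(0.01, 0.99, 0.001))  # ≈ 0.909

# A zero prior is a fixed point: no likelihood ratio, however
# extreme, can ever move it, because the numerator is always zero.
print(posterior(0.0, 0.9999999, 0.0000001))  # 0.0
```

Whatever evidence arrives, the zero-prior reasoner attributes all of it to the alternative hypothesis, exactly as the paragraph above describes.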
I’m getting rather tired of having to start this conversation over yet again. Do you understand my meaning when I say that until such time as a given counterfactual is provided with justification to accept its plausibility, the null hypothesis is that it is not?
I believe that I understand you, but I don’t think that is part of an epistemology that is as good at forming true beliefs and recognizing false ones.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
As a social prescription, you can call certain claims “justified knowledge,” based on how they are arrived at, and not call other claims about reality “justified knowledge” because the evidence on which they were arrived at does not meet the same criteria. And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong. If you had a computer that could process whatever information you put into it, tell you exactly how confident you ought to be about any given claim given that information, and have the claims be right exactly that often, the category of “justified belief” would still not correspond to any particular range of likelihood.
This sounds like a point where you would complain that I am confusing “aggregated average” with “manifested instantiation.” We can accept the premise that the territory is the territory, and a particular event either happened or it did not. So if we are talking about a particular event, should we be able to say that it actually definitely happened, because that’s how the territory works? Well, we can say that, but then we start observing that some of the statements we make about things that “definitely happened” turn out to be wrong. If we accept as a premise in our epistemologies that the territory is what it is, we can accept that the statements we make about the territory are either right, absolutely, or wrong, absolutely. But we don’t know which are which, and “justified knowledge” won’t correspond to any particular level of confidence.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong, and tells you that you can’t predict the likelihood of some things when you can actually make confidence predictions about them and be right as often as you say you should be, I think you’ll have a hard time selling that it’s a good epistemology.
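The “be right exactly as often as you say” standard is just calibration, and it can be measured mechanically. An illustrative sketch with invented predictions (nothing here is from the discussion):

```python
from collections import defaultdict

def calibration(predictions):
    """Group (stated_confidence, was_correct) pairs by confidence level
    and return the observed hit rate for each level."""
    buckets = defaultdict(list)
    for confidence, correct in predictions:
        buckets[confidence].append(correct)
    return {c: sum(hits) / len(hits) for c, hits in sorted(buckets.items())}

# Four claims asserted at 80% confidence, three of which were right:
# a well-calibrated reasoner's 80% bucket should hover near 0.8.
print(calibration([(0.8, True), (0.8, True), (0.8, True), (0.8, False)]))
# {0.8: 0.75}
```

Calibration is a property of confidence levels across many claims; no binary label like “justified knowledge” picks out such a level.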
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong.
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
Being able to readily differentiate between these two categories is a vital element for ways of knowing.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong,
I don’t know, at this point, exactly where this particular thread is in the nested tree of various conversations I’m handling. But I can tell you that at no time have I espoused any form of absolutism that allows for non-absolute truths to be expressed as such.
What I have espoused, however, is the use of contingent naive realism coincident with Bayesian reasoning—while understanding the nature of what each “tool” addresses.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Rational-skepticism doesn’t necessarily operate experimentally and I’m sure you know this.
That being said: yes, yes I did. A thought-experiment, but an experiment it is.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
Yes, I am aware of how scientists use null hypotheses. But the null hypothesis is only a construct which exists in the context of significance testing, and significance testing does not reliably get accurate results when you measure small effect sizes, and also gives you different probabilities based on the same data depending on how you categorize that data; Eliezer already gave an example of this in one of the pages you linked earlier. The whole reason that people on this site in general have enthusiasm for Bayesian statistics is that we believe they do better, as well as it is possible to do.
The concept of a “null hypothesis,” as it is used in frequentist statistics, doesn’t even make sense in the context of a system where, as you have claimed, we cannot know that it is more likely that a coin will come up heads on the basis of it having come up heads the last hundred times you’ve flipped it.
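The “different probabilities from the same data” complaint is the standard stopping-rule objection, and it reduces to arithmetic. A textbook illustration (not an example from this thread): suppose the data are 9 heads and 3 tails and the null hypothesis is a fair coin. The p-value then depends on whether the experimenter intended to flip exactly 12 times or to flip until the third tail appeared:

```python
from math import comb

heads, tails = 9, 3
n = heads + tails

# Design 1: flip exactly n = 12 times. One-sided p-value:
# probability of 9 or more heads under a fair coin (binomial).
p_binomial = sum(comb(n, k) for k in range(heads, n + 1)) / 2 ** n

# Design 2: flip until the 3rd tail appears. One-sided p-value:
# probability the 3rd tail takes 12 or more flips, i.e. the first
# 11 flips contain at most 2 tails (negative binomial stopping rule).
p_neg_binomial = sum(comb(n - 1, k) for k in range(tails)) / 2 ** (n - 1)

print(round(p_binomial, 4))      # 0.073  — not significant at 0.05
print(round(p_neg_binomial, 4))  # 0.0327 — significant at 0.05
```

Identical data, identical null hypothesis, opposite verdicts at the 0.05 threshold, purely because of the experimenter’s private intentions; a likelihood-based analysis gives the same answer either way.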
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
If I say “it is raining out,” this is a claim about the territory, yes. If I say “I believe that it is raining out,” this is a claim about my map (and my map exists within, and is a part of, the territory, so it is also a claim “about the territory,” but not about the same part of the territory.)
But my claim about the territory is not certain to be correct. If I say “it is raining out, 99.99999% confidence,” which is the sort of thing I would mean when I say “it is raining out,” that means that I am saying that the territory probably agrees with this statement, but the information in my map does not allow me to have a higher confidence in the agreement than that.
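A confidence level like this can be restated as the quantity of evidence the map would need to support it. A small sketch, using the same figure quoted above:

```python
from math import log2

def bits_of_evidence(confidence):
    """Log-odds in bits: how much evidence separates this confidence
    from an even 50/50 prior."""
    return log2(confidence / (1 - confidence))

# 99.99999% confidence corresponds to odds of roughly 10^7 : 1,
# i.e. about 23 bits of evidence favoring "it is raining."
print(round(bits_of_evidence(0.9999999), 1))  # 23.3
```

Saying “it is raining out, 99.99999% confidence” is thus a claim about how much the information in the map can support, not a claim of certainty about the territory.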
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
A null hypothesis is a particularly useful simplification, not strictly a necessary tool.
So I understand you—you are here claiming that it is not necessary to have a default position on a given topic?

I am saying it is not strictly necessary to have a hypothesis called “null” but rather that such a thing is just extremely useful. I would definitely say the thing about defaults too.
I am saying it is not strictly necessary to have a hypothesis called “null”
That’s not what a null hypothesis is. A null hypothesis is a default state.
I would definitely say the thing about defaults too.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
If you tell me you’ve flipped a coin, I may not have any default position on whether it’s heads or tails. Similarly, I might have no prior belief, or a symmetric prior, on lots of questions that I haven’t thought much about or that don’t have any visible correlation to anything else I know about.
But you would have the default position that it had in fact occupied one of those two outcomes.

Sure. I didn’t say “I have no information related to the experiment”. What I’m saying is this: if I do an experiment to choose between K options, it might be that I don’t have any prior (or have a symmetric prior) about which of those K it will be. That’s what a null hypothesis in statistics is. When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Yes, any experiment also will give you information about whether the experiment’s assumptions were valid. But talking about null hypotheses or default hypotheses is in the context of a particular formal model, where we’re referring to something more specific.
When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Correct. And that’s a situationally relevant phenomenon. One quite similar to how “the coin will be on one of its sides” is situationally relevant to the coin-toss. (As opposed to it being on edge, or the faces smoothed off.)
I’m not sure whether we’re disagreeing here.

You had asked for an example of where an individual might have no prior “on a given topic.” I gave one, for a narrow topic: “is the coin heads or tails?”. I didn’t claim, and you didn’t ask for, a case where an individual has no prior on anything related to the topic.
But let me give one attempt at that stronger claim. You’ve never met my friend Sam. You’ve never heard of or thought of my friend Sam. Before this comment, you didn’t have a prior on his having a birthday, or on his existing; you had never considered the possibility that lesswrong-commenter asr has a friend Sam. I would say you have no beliefs on the topic of Sam’s birthday.
It is my default position that they do not exist, even as absent constructs, unless otherwise noted (with the usual lowered threshold of standards of evidence for more evidently ordinary claims). That’s the whole point of default positions; they inform us how to react to new criteria or phenomena as they arise.
By bringing up the issue of non-topics you’re moving the goal-post. I asked you how it could be that a person could have no defaults on a given topic.
If I tell you that there is a flugmizzr, then you know that certain things are ‘true’—at least, you presume them to be until shown otherwise: one, that flugmizzrs are enumerable; two, that they are somehow enumerable by people; and three, that flugmizzrs are knowable, discrete phenomena.
Those are most assuredly amongst your defaults on the topic. They could easily each be wrong—there is no way to know—but if you trust my assertion of “a flugmizzr” then each of these becomes a required default, useful for acquiring further information.
I asked you how it could be that a person could have no defaults on a given topic.
To clarify—my answer is that “there are topics I’ve never considered, and before consideration, I need not have a default belief.” For me at least, the level of consideration needed to actually form a belief is nontrivial. I am often in a state of uninterested impartiality. If you ask me whether you have a friend named Joe, I would not hazard a guess and would be about equally surprised by either answer.
To put it more abstractly: There’s no particular computational reason why the mind needs to have a default for every input. You suggest some topic X, and it’s quite possible for me to remain blank, even if I heard you and constructed some internal representation of X. That representation need not be tied to a belief or disbelief in any of the propositions about X under discussion.
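The computational point can be made concrete: a belief store can simply decline to answer for inputs it has never processed, which is a different state than storing a 50% credence. An illustrative sketch (the class and proposition strings are invented for the example):

```python
class BeliefStore:
    """Maps propositions to credences. Topics never considered simply
    have no entry, which is not the same as an entry of 0.5."""

    def __init__(self):
        self._credences = {}

    def consider(self, proposition, credence):
        """Form a belief: assign a credence after actual consideration."""
        self._credences[proposition] = credence

    def credence(self, proposition):
        # Returns None for never-considered topics: no default exists.
        return self._credences.get(proposition)

beliefs = BeliefStore()
beliefs.consider("the coin landed on one of its two faces", 0.99)

print(beliefs.credence("the coin landed on one of its two faces"))  # 0.99
print(beliefs.credence("asr has a friend named Sam"))               # None
```

Constructing an internal representation of a proposition (the dictionary key) carries no obligation to attach a belief or disbelief to it.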