A given statement is either true or not true. Authorities on a given topic can be mistaken; and as such no appeal to an authority can be, absent any other actual supporting materials, treated as a rational motive for belief. In all such cases, it is those supporting materials which comprise the nature of the argument.
Here’s a good example: Richard Dawkins is an expert—an authority—on evolutionary biology. Yet he rejects the idea of group selection. Group selection has been widely demonstrated to be valid.
Authorities on a given topic can be mistaken; and as such no appeal to an authority can be, absent any other actual supporting materials, treated as a rational motive for belief.
… it doesn’t follow that because authorities are unreliable, their assertions without supporting evidence should not adjust your weighting in belief on any specific claim?
It is not so easy to separate assertions and evidence, Logos01. An assertion is in itself evidence—strong evidence perhaps, depending on who it comes from and in what context. Evidence is entanglement with reality, and the physical phenomenon of someone having said or written something can be entangled with what you want to know about in just the same way that any other type of evidence is entangled with reality.
For example if you were to ring your friend and ask him for the football results, you would generally update your degree of belief in the fact that your team won if he told you so (unless you had a particular reason to mistrust him). You would not wait until you had been provided with television footage and newspaper coverage of the result before updating, despite the fact that he had given you a mere assertion.
That is a trivial example, because you apparently are in need of one to gain understanding.
If someone quotes Yudkowsky as saying something, depending on how impressed you are with him as a thinker you may update on his mere opinions (indeed, rationality may demand that you do so) without or before considering his arguments in detail. Authorities may be “unreliable”, but it is the fallacy of grey to suggest that they therefore provide no evidence whatsoever. For that matter, your own sensory organs are unreliable—does this lead you not to update your degree of belief according to anything that you see or hear?
Since Yudkowsky is widely respected in intellectual terms here, someone might quote him without feeling the need to provide a lengthy exposition of the argument behind his words. This might be because you can easily follow the link if you want to see the argument, or because they don’t feel that the debate in question is worth much of their time (just enough to point you in the right direction, perhaps).
On the other hand it is true that argument screens off authority, and perhaps that is what you are imperfectly groping towards. If you really want to persuade anyone of whatever it is you are trying to say, I suggest that you attempt to (articulately) refute Yudkowsky’s argument, thereby screening off his authority. Don’t expect anyone to put your unsupported opinions together with Yudkowsky’s on the same blank slate, because you have to earn that respect. And for that matter, Yudkowsky actually has defended his statements with in-depth arguments, which should not really need to be recapitulated every time someone here references them. His writings are here, waiting for you to read them!
For example if you were to ring your friend and ask him for the football results, you would generally update your degree of belief in the fact that your team won if he told you so (unless you had a particular reason to mistrust him).
This presupposes that you had reason to believe that he had a means of having that information. In which case you are weighting your beliefs based on your assessment of the likelihood of him intentionally deceiving you and your assessment of the likelihood of him having correctly observed the information you are requesting.
If you take it as a given that these things are true then what you are requesting is a testimonial as to what he witnessed, and treating him as a reliable witness, because you have supporting reasons as to why this is so. You are not, on the other hand, treating him as a blind oracle who simply guesses correctly, if you are anything remotely resembling a functionally capable instrumental rationalist.
That is a trivial example, because you apparently are in need of one to gain understanding.
This, sir, is a decidedly disingenuous statement. I have noted it, and it has lowered my opinion of you.
If someone quotes Yudkowsky as saying something, depending on how impressed you are with him as a thinker you may update on his mere opinions (indeed, rationality may demand that you do so)
I want to take an aside here to note that you just stated that it is possible for rationality to “demand” that you accept the opinions of others as facts based merely on the reputation of that person before you even comprehend what the basis of that opinion is.
I cannot under any circumstances now known to me endorse such a definition of “rationality”.
I suggest that you attempt to (articulately) refute Yudkowsky’s argument, thereby screening off his authority.
I’ve done that recently enough that invocations of Eliezer’s name into a conversation are unimpressive. If his arguments are valid, they are valid. If they are not, I have no hesitation to say they are not.
His writings are here, waiting for you to read them!
I want you to understand that in making this statement you have caused me to reassess you as an individual under the effects of a ‘cult of personality’ effect.
A given statement is either true or not true. Authorities on a given topic can be mistaken; and as such no appeal to an authority can be, absent any other actual supporting materials, treated as a rational motive for belief. In all such cases, it is those supporting materials which comprise the nature of the argument.
And:
… it doesn’t follow that because authorities are unreliable, their assertions without supporting evidence should not adjust your weighting in belief on any specific claim?
You now state:
If you take it as a given that these things are true then what you are requesting is a testimonial as to what he witnessed, and treating him as a reliable witness, because you have supporting reasons as to why this is so. You are not, on the other hand, treating him as a blind oracle who simply guesses correctly, if you are anything remotely resembling a functionally capable instrumental rationalist.
This is a different claim to that contained in the former two quotes. I do not disagree with this claim, but it does not support your earlier claims, which are the claims that I was criticising.
I do not know of any human being who I would regard as “a blind oracle who simply guesses correctly”. The existence of such an oracle is extremely improbable. On the other hand it is very normal to believe that there are “supporting reasons” for why someone’s claims should change your degree of belief in something. For example if Yudkowsky has made 100 claims that were backed up by sound argument and evidence, that leads me to believe that the 101st claim he makes is more likely to be true than false, therefore increasing my degree of belief somewhat (even if only a little, sometimes) in that claim at the expense of competing propositions even before I read the argument.
It is also possible for someone’s claims to be reliably anti-correlated with the truth, although that would be a little strange. For example someone who generally holds accurate beliefs, but is a practical joker and always lies to you, might cause you to (insofar as you are rational) decrease your degree of belief in any proposition that he asserts, unless you have particular reason to believe that he is telling the truth on this one occasion.
You may have no particular regard for someone’s opinion. Nonetheless, the fact that a group of human beings regard that person as smart; and that he writes cogently; and even the mere fact that he is human – these are all “supporting reasons” such as you mentioned. Even if this doesn’t amount to much, it necessarily amounts to something (except in the vastly improbable case that the reasons for this opinion to correlate with the truth and the reasons for it to anti-correlate seem to exactly cancel each other out).
I think what is tripping you up is the idea that you can be gamed by people who are telling you to “automatically believe everything that someone says”. But actually the laws of probability are theorems – not social rules or rules of thumb. If you do think you are being gamed or lied to then you might be rational to decrease your degree of belief in something that someone else claims, or else only update your belief a very small amount towards their belief. No-one denies that.
All that we are trying to explain to you is that your generalisations about not treating other people’s beliefs as evidence are wrong as probability theory. Belief is not binary – believe or not believe. It is a continuum, and updates in your degree of belief in a proposition can be large or tiny. You don’t have to flick from 0 to 1 when someone makes some assertion, but rationality requires you to make use of all the evidence available to you—therefore your degree of belief should change by some amount in some direction, which is usually towards the other person’s belief except in unusual circumstances.
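The continuous-update picture described above can be sketched numerically. This is a hypothetical illustration: the prior and the two likelihoods below are invented for the football-results example, not taken from the discussion.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Hypothetical numbers: prior belief that your team won is 0.5.
# A generally honest friend says they won: assume he reports a win
# with probability 0.95 if they did win, and 0.10 if they did not.
posterior = bayes_update(0.5, 0.95, 0.10)
print(round(posterior, 3))  # 0.905 -- a large but non-binary shift
```

The point is that the output is a degree of belief somewhere on the continuum, not a flip from 0 to 1; weaker testimony (likelihoods closer together) would simply move the posterior less.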
Still, if you do in fact regard Yudkowsky’s (or anyone else’s) stated belief in some proposition as being completely uncorrelated with the truth of that proposition, then you should not revise your belief in either direction when you encounter that statement of his. Otherwise, failure to update on evidence – things that are entangled with what you want to know about – is irrational.
I want to take an aside here to note that you just stated that it is possible for rationality to “demand” that you accept the opinions of others as facts based merely on the reputation of that person before you even comprehend what the basis of that opinion is.
That is called a figure of speech, my friend.
I want you to understand that in making this statement you have caused me to reassess you as an individual under the effects of a ‘cult of personality’ effect.
I’ve done that recently enough that invocations of Eliezer’s name into a conversation are unimpressive. If his arguments are valid, they are valid. If they are not, I have no hesitation to say they are not.
There is nothing wrong with this in itself. But as has already been pointed out to you, an assertion and a link could be regarded as encouragement for you to seek out the arguments behind that assertion, which are written elsewhere. It might also be for the edification of other commenters and lurkers, who may be more impressed by Eliezer (therefore more willing to update on his beliefs).
Personally I don’t find the actual argument that started this little comment thread off remotely interesting (arguments over definitions are a common failure mode), so I shan’t get involved in that. But I hope you understand the point about updating on other people’s beliefs a little more clearly now.
Still, if you do in fact regard Yudkowsky’s (or anyone else’s) stated belief in some proposition as being completely uncorrelated with the truth of that proposition, then you should not revise your belief in either direction when you encounter that statement of his. Otherwise, failure to update on evidence – things that are entangled with what you want to know about – is irrational.
This is accurate in the context you are considering—it is less accurate generally without an additional caveat.
You should also not revise your belief if you have already counted the evidence on which the expert’s beliefs depend.
(Except, perhaps, for a very slight nudge as you become more confident that your own treatment of the evidence did not contain an error, but my intuition says this is probably well below the noise level.)
You should also not revise your belief if you have already counted the evidence on which the expert’s beliefs depend.
What if Yudkowsky says that he believes X with probability 90%, and you only believe it with probability 40% - even though you don’t believe that he has access to any more evidence than you do?
Also, I agree that in practice there is a “noise level”—i.e. very weak evidence isn’t worth paying attention to—but in theory (which was the context of the discussion) I don’t believe there is.
What if Yudkowsky says that he believes X with probability 90%, and you only believe it with probability 40% - even though you don’t believe that he has access to any more evidence than you do?
Then perhaps I have evidence that he does not, or perhaps our priors differ, or perhaps I have made a mistake, or perhaps he has made a mistake, or perhaps both. Ideally, I could talk to him and we could work to figure out which of us is wrong and by how much. Otherwise, I could consider the likelihood of the various possibilities and try to update accordingly.
For case 1, I should not be updating; I already have his evidence, and my result should be more accurate.
For case 2, I believe I should not be updating, though if someone disagrees we can delve deeper.
For cases 3 and 5, I should be updating. Preferably by finding my mistake, but I can probably approximate this by doing a normal update if I am in a hurry.
For case 4, I should not be updating.
Also, I agree that in practice there is a “noise level”—i.e. very weak evidence isn’t worth paying attention to—but in theory (which was the context of the discussion) I don’t believe there is.
In theory, there isn’t. My caveat regarding noise was directed to anyone intending to apply my parenthetical note to practice—when we consider a very small effect in the first place, we are likely to over-weight it.
What you are saying is that insofar as we know all of the evidence that has informed some authority’s belief in some proposition, his making a statement of that belief does not provide additional evidence. I agree with that, assuming we are ignoring tiny probabilities that are below a realistic “noise level”.
As you said this is not particularly relevant to the case of someone appealing to an authority during an argument, because their interlocutor is unlikely to know what evidence this authority possesses in the large majority of cases. But it is a good objection in general.
This is a different claim to that contained in the former two quotes. I do not disagree with this claim, but it does not support your earlier claims, which are the claims that I was criticising.
You are mistaken. It is exactly the same claim, rephrased to match the circumstances at most—containing exactly the same assertion: “Authorities on a given topic can be mistaken; and as such no appeal to an authority can be, absent any other actual supporting materials, treated as a rational motive for belief. In all such cases, it is those supporting materials which comprise the nature of the argument.” In the specific case you are taking as supporting evidence your justified belief that the individual had in fact directly observed the event, and the justified belief that the individual would relate it to you honestly. Those, together, comprise justification for the belief.
That is called a figure of speech my friend.
An insufficient retraction.
It is not cultish to praise someone highly.
What you did wasn’t praise.
But as has already been pointed out to you, an assertion and a link could be regarded as encouragement for you to seek out the arguments behind that assertion
See, THIS is why I called you cultish. Do you understand that the quote that was cited to me wasn’t even relevant contextually in the first place? I had already differentiated between proper rationality and instrumental rationality.
The quote of Eliezer’s was discussing instrumental rationality.
I even pointed this out.
But I hope you understand the point about updating on other people’s beliefs a little more clearly now.
No, I do not. But that’s because I wasn’t remotely confused about the topic to begin with, and have throughout these threads demonstrated a better capacity to differentiate between various modes and justifications of belief with a finer standard of differentiating between what justifications and what modes of thought are being engaged than anyone who’s as yet argued the topic in these threads with me, yourself included.
This conversation has officially reached my limit of investment, so feel free to get your last word in, but don’t be surprised if I never read it.
You are mistaken. It is exactly the same claim, rephrased to match the circumstances at most—containing exactly the same assertion: “Authorities on a given topic can be mistaken; and as such no appeal to an authority can be, absent any other actual supporting materials, treated as a rational motive for belief. In all such cases, it is those supporting materials which comprise the nature of the argument.” In the specific case you are taking as supporting evidence your justified belief that the individual had in fact directly observed the event, and the justified belief that the individual would relate it to you honestly. Those, together, comprise justification for the belief.
So in other words, you would like to distinguish between “appeal to authority” and “supporting materials” as though when someone refers you to the sayings of some authority, they expect you to consider these sayings as data purely “in themselves”, separately from whatever reasons you may have for believing that the sayings of that authority are evidentially entangled with whatever you want to know about.
Firstly, “appeal to authority” is generally taken simply to mean the act of referring someone to an authority during an argument about something – there is no connotation that this authority has no evidential entanglement with the subject of the argument, which would be bizarre (why would anyone refer you to him in that case?)
Secondly, if someone makes a statement about something, that in itself implies that there is evidential entanglement between that thing and their statement – i.e. the thing they are talking about is part of the chain of cause and effect (however indirect) that led to the person eventually making a statement about it (otherwise we have to postulate a very big coincidence). Therefore the idea that someone could make a statement about something without there being any evidential entanglement between them and it (which is necessary in order for it to be true that you should not update your belief at all based on their statement) is implausible in the extreme.
You started off by using “appeal to authority” in the normal way, but now you are attempting to redefine it in a nonsensical way so as to avoid admitting that you were mistaken (NB: there is no shame in being mistaken).
If you have read Harry Potter and the Methods of Rationality, you may remember the bit where Quirrell demonstrates “how to lose” as an important lesson in magical combat. Correspondingly, in future I would advise you not to create edifices of nonsense when it would be easier just to admit your mistake. Debate is after all a constructive enterprise, not a battle for tribal status in which it is necessary to save face at all costs.
I’ll say again that the error that is facilitating your confusion about beliefs and evidence is the idea that belief is a binary quality – I believe, or I do not believe – when in fact you believe with probability 90%, or 55%, or 12.2485%. The way that the concept of belief and the phrase “I believe X” is used in ordinary conversation may mislead people on this point, but that doesn’t change the facts of probability theory.
This allows you to think that the question is whether you are “justified” in believing proposition X in light of evidence Y, when the right question to be asking is “how has my degree of belief in proposition X changed as a result of this evidence?” You are reluctant to accept that someone’s mere assertions can be evidence in favour of some proposition because you have in mind the idea that evidence must always be highly persuasive (so as to change your belief status in a binary way from “unjustified” to “justified”), otherwise it isn’t evidence – whereas actually, evidence is still evidence even if it only causes you to shift your degree of belief in a proposition from 1% to 1.2%.
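The 1% to 1.2% figure corresponds to evidence with a very weak likelihood ratio. As a hedged sketch (the ratio of 1.2 is chosen purely to reproduce the numbers in the paragraph), Bayes’ rule in odds form makes this concrete:

```python
def update_odds(prior_prob, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A weak piece of evidence (likelihood ratio ~1.2) moves a 1% belief
# only slightly -- but it still moves it.
print(round(update_odds(0.01, 1.2), 4))  # 0.012
```

Evidence with a likelihood ratio near 1 barely shifts the posterior, yet it is still evidence; only a ratio of exactly 1 would leave the belief untouched.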
Firstly, “appeal to authority” is generally taken simply to mean the act of referring someone to an authority during an argument about something – there is no connotation that this authority has no evidential entanglement with the subject of the argument, which would be bizarre (why would anyone refer you to him in that case?)
“there is no connotation that this authority has no evidential entanglement with the subject of the argument”—quite correct. Which is why it is fallacious: it is an assertion that this is the case without corroborating that it actually is the case.
If it were the case, then the act would not be an ‘appeal to authority’.
I’ll say again that the error that is facilitating your confusion about beliefs and evidence is the idea that belief is a binary quality – I believe, or I do not believe – when in fact you believe with probability 90%, or 55%, or 12.2485%.
This is categorically invalid. Humans are not bayesian belief-networks. In fact, humans are notoriously poor at assessing their own probabilistic estimates of belief. But this, really, is neither here nor there.
The only error that’s occurring here is your continued belief that beliefs are relevant to this conversation. They simply aren’t. We’re not discussing “what should you believe”—we are discussing “what should you hold to be true.”
And that, sir, categorically IS binary. A thing is either true or not true. If you affirm it to be true you are assigning it a fixed binary state.
This is categorically invalid. Humans are not bayesian belief-networks.
You have a point in saying that “12.2485%” is an unlikely number to give to your degree of belief in something, although you could create a scenario in which it is reasonable (e.g. you put 122485 red balls in a bag...). And it’s also fair to say that casually giving a number to your degree of belief is often unwise when that number is plucked from thin air—if you are just using “90%” to mean “strong belief” for example. The point about belief not being binary stands in any case.
We’re not discussing “what should you believe”—we are discussing “what should you hold to be true.”
Those are one and the same! If that’s the real source of your disagreement with everyone here, it’s a doozy.
And that, sir, categorically IS binary. A thing is either true or not true. If you affirm it to be true you are assigning it a fixed binary state.
Perhaps this will help make the point clear. In fact I’m sure it will—it deals with this exact confusion. Please, if you don’t read any of these other links, look at that one!
No, they are not. They are fundamentally different. One is a point on a map. The other is a statement regarding the correlation of that map to the actual territory. These are not identical. Nor should they be.
As I have stated elsewhere: Bayesian ‘probabilistic beliefs’ eschew too greatly the ability of the human mind to make assertions about the nature of the territory.
Perhaps this will help make the point clear. In fact I’m sure it will—it deals with this exact confusion.
The first time I read that page was roughly a year and a half ago.
I am not confused.
I am telling you something that you cannot digest; but nothing you have said is inscrutable to me.
These three things together should tell you something. I won’t bother trying to repeat myself about what.
These three things together should tell you something.
To be blunt, they tell us you are arrogant, ill informed and sufficiently entrenched in your habits of thought that trying to explain anything to you is almost certainly a waste of time even if meant well. Boasting of the extent of your knowledge of the relevant fundamentals while simultaneously blatantly displaying your ignorance of the same is probably the worst signal you can give regarding your potential.
I am afraid, simply put, you are mistaken. I can reveal this simply: what is the process by which a Bayesian rationalist can derive a belief about an event which is not expressible in probabilistic form?
I am telling you something that you cannot digest; but nothing you have said is inscrutable to me.
That is not something of which to be proud.
Depends on what it is that is being said and how. Unfortunately, in this case, we have each been using very simple language to achieve our goals—or, at least, equivalently comprehensible statements. Despite my best efforts to inform you of what it is you are not understanding, you have failed to understand it. I have, contrastingly, understood from the beginning what has been said to me—and yet you continue to believe that I do not.
This is problematic. But it is also indicative that the failure is simply not on my part.
If you understand something, you should be able to describe it in such a way that other people who understand it will agree that your description is correct. Thus far you have consistently failed to do this.
I am not confident that I understand your position yet. I don’t think you have made it very clear. But you have made it very clear that you think you understand Bayesian reasoning, but your understanding of how it works does not agree with anyone else’s here.
Not, sadly, for any particular point of actual fact. The objections and disagreements have all been consequential, rather than factual, in nature. I have, contrastingly, repeatedly been accused of committing myself to fallacies I simply have not committed (to name one specifically, the ‘Fallacy of Grey’).
Multiple individuals have objected to my noting the consequences of the fact that Bayesian rationality belief-networks are always maps and never assertions about the territory. And yet, this fact is a core element of Bayesian rationality.
but your understanding of how it works does not agree with anyone else’s here.
Quite frankly, the only reason this is so is because no one wants to confront the necessary conclusions resultant from my assertions about basic, core principles regarding Bayesian rationality.
Hence the absence of a response to my previous challenge: “what is the process by which a Bayesian rationalist can derive a belief about an event which is not expressible in probabilistic form?” (This is, of course, a “gotcha” question. There is no such process. Which is absolute proof of my positional claim regarding the flaws inherent in Bayesian rationality, and is furthermore directly related to the reasons why probabilistic statements are useless in deriving information regarding a specific manifested instantiation.)
I am not confident that I understand your position yet.
Frankly, I have come to despair of anyone on this site doing so. You folks lack the epistemological framework necessary to do so. I have attempted to relate this repeatedly. I have attempted to direct your thoughts in dialogue to this realization in multiple different ways. I have even simply spelled it out directly.
I have exhausted every rhetorical technique I am aware of to rephrase this point. I no longer have any hope on the matter, and frankly the behaviors I have begun to see out of many of you who are still in this thread have caused me to very seriously negatively reassess my opinion of this site’s community.
For the last year or so, I have been proselytizing this site to others as a good resource for learning how to become rational. I am now unable to do so without such heavy qualifiers that I’m not even sure it’s worth it.
What other people are telling you is that your representation of Bayesian reasoning is incorrect, and that you are misunderstanding them. I suggest that you try to lay out as clear and straightforward an explanation of Bayesian reasoning as you can. If other people agree that it is correct, then we will take your claims to be understanding us much more seriously. If we still tell you that you are misunderstanding it, then I think you should consider seriously the likelihood that you are, in fact, misunderstanding it.
If you do understand our position you should be capable of this, even if you disagree with it. I suggest leaving out your points of disagreement in your explanation; we can say if we agree that it reflects our understanding, and if it does, then you can tell us why you think we are wrong.
I think we are talking past each other right now. I have only low confidence that I understand your position, and I also have very low confidence that you understand ours. If you can do this, then I will no longer have low confidence that you understand our position, and I will put in the effort to attain sufficient understanding of your position (something that you claim to already have of ours) that I can produce an explanation of it that I am confident that you will agree with, and then we can have a proper conversation.
Bayesian reasoning formulates beliefs in the form of probability statements, which are predictive in nature. (See: “Making beliefs pay rent”)
Bayesian probabilities are fundamentally statements of a predictive nature, rather than statements of actual occurrence. This is the basis of the conflict between Bayesians and frequentists. Further corroboration of this point.
The language of Bayesian reasoning regarding beliefs is that of expressing beliefs in probabilistic form, and updating those within a network of beliefs (“givens”) which are each informed by the priors and new inputs.
These four points together are the basis of my assertion that Bayesian rationality is extremely poor at making assertions about what the territory actually is. This is where the epistemological barrier is introduced: there is a difference between what I believe to be true and what I can validly assert is true. The former is a predictive statement. The latter is a material, instantiated, assertion.
No matter how many times a coin has come up heads or tails, that information has no bearing on what position the coin under my hand is actually in. You can make statements about what you believe it will be; but that is not the same as saying what it is.
THIS is the failure of Bayesian reasoning in general; and it is why appeals to authority are always invalid.
Are you prepared to guess which parts I will take issue with?
Frankly, no. Especially since I derived each and every one of those four statements from canonical sources of explanations of how Bayesian rationality operates (and from LessWrong itself, no less.)
I’m more than willing to admit that I might find it interesting. I strongly anticipate that it will be very strongly unpersuasive as to the notion of my having a poor grasp of how Bayesian reasoning operates, however.
Bayesian probabilities are fundamentally statements of a predictive nature, rather than statements of actual occurrence.
Bayesian probabilities are predictive and statements of occurrence. To the extent that frequentist statements of occurrence are correct, Bayesian probabilities will always agree with them.
Bayesian reasoning formulates beliefs in the form of probability statements, which are predictive in nature.
I would not take issue with this if not in light of the statement that you made after it. It’s true that Bayesian probability statements are predictive, we can reason about events we have not yet observed with them. They are also descriptive; they describe “rates of actual occurrence” as you put it.
It seems to me that you are confused about Bayesian reasoning because you are confused about frequentist reasoning, and so draw mistaken conclusions about Bayesian reasoning based on the comparisons Eliezer has made. A frequentist would, in fact, tell you that if you flipped a coin many times and it came up heads every time, then if you flip the coin again, it will probably come up heads. They would do a significance test for the proposition that the coin was biased, and determine that it almost certainly was. There is, in fact, no school of probability theory that reflects the position you have been espousing so far. You seem to be contrasting “predictive” Bayesianism with “non-predictive” frequentism, arguing for a system that allows you to totally suspend judgment on events until you make direct observations about those events. But while frequentist statistics fail to let you assign probabilities to events when you have no body of data in which you can observe how often those events tend to occur, they do provide predictions about the probability of events based on known frequency data; and when a large body of data for the frequency of an event is available, the frequentist and Bayesian estimates for its probability will tend to converge.
I agree with JoshuaZ that the second part of your comment does not appear to follow from the first part at all, but if we resolve this confusion, perhaps things will become clearer.
Bayesian probabilities are predictive and statements of occurrence.
Yes, they are. But what they cannot be is statements regarding the exact nature of any given specific manifested instantiation of an event.
They are also descriptive; they describe “rates of actual occurrence” as you put it.
That is predictive.
It seems to me that you are confused about Bayesian reasoning because you are confused about frequentist reasoning,
I can dismiss this concern for you: while I’ve targeted Bayesian rationality here, frequentism would be essentially interchangeable with it for all of my meaningful assertions.
I agree with JoshuaZ that the second part of your comment does not appear to follow from the first part at all, but if we resolve this confusion, perhaps things will become clearer.
I don’t know that it’s possible for that to occur until such time as I can discern a means of helping you folks to break through your epistemological barriers of comprehension (please note: comprehension is not equivalent to concurrence: I’m not asserting that if you understand me you will agree with me). Try following a little further where the dialogue currently is with JoshuaZ, perhaps? I seem to be beginning to make headway there.
If you want your intended audience to understand, try to tailor your explanation for people much dumber and less knowledgeable than you think they are.
Yup. There’s a deep inferential gap here, and I’m trying to relate it as best I can. I know I’m doing poorly, but a lot of that has to do with the fact that the ideas that you are having trouble with are so very simple to me that my simple explanations make no sense to you.
I know what all those words mean, but I can’t tell what you mean by them, and “specific manifested instantiation of an event” does not sound like a very good attempt to be clear about anything.
Specific: relating to a unique case.
Manifested: made to appear or become present.
Instantiation: a real instance or example
-- a specific manifested instantiation is thus a unique, real example of a thing or instance of an idea or event that is currently present and apparent. I sacrifice simplicity for precision in using this phrase; it lacks any semantic baggage aside from what I assign to it here, and this is a conversation where precision is life-and-death to comprehension.
Can you explain what you mean by this as simply as possible?
What is the difference between “There is a 99.9999% chance he’s dead, and I’m 99.9999% confident of this, Jim” and “He’s dead, Jim.”?
Acknowledging failure is in no wise congratulatory.
You received an answer, apparently not the one you were looking for.
Of course not. By asking the question I am implying that there is, in fact, a difference. This was in an effort to help you understand me. You in particular responded by answering as you would. Which does nothing to help you overcome what I described as the epistemological barrier of comprehension preventing you from understanding what I’m trying to relate.
To me, there is a very simple, very substantive, and very obvious difference between those two statements, and it isn’t wordiness. I invite you to try to discern what that difference is.
I am powerless over what you project into the emotional context of my statements, especially when I am directly noting a point of fact with no emotive terminology used whatsoever.
I am powerless over what you project into the emotional context of my statements, especially when I am directly noting a point of fact with no emotive terminology used whatsoever.
There was a temptation to just reply with “No. You are not.” But that’s probably less than helpful. Note that language is a subtle thing. Even when someone is making a purely factual claim, how they make it, especially when it involves assertions about other people, can easily carry negative emotional content. Moreover, careful, diplomatic phrasing can be used to minimize that problem.
That is not necessarily true. The simplest explanation is whichever explanation requires the least effort to comprehend. That, categorically, can take the form of a query / thought-exercise.
In this instance, I have already provided the simplest direct explanation I possibly can: probabilistic statements do not correlate to exact instances. I was asked to explain it even more simply yet. So I have to get a little non-linear.
So this confuses me even more. Given your qualification of what pedanterrific said, this seems off in your system. The objection you have to the Bayesians seems to be purely in the issues about the nature of your own map. The red-shirt being dead is not a statement in your own map.
If you had said no here I think I’d sort of see where you are going. Now I’m just confused.
So this confuses me even more. [...] The red-shirt being dead is not a statement in your own map.
I’ll elaborate.
Naively: yes.
Absolutely: no.
Recall the epistemology of naive realism:
Naïve realism, also known as direct realism or common sense realism, is a philosophy of mind rooted in a common sense theory of perception that claims that the senses provide us with direct awareness of the external world. In contrast, some forms of idealism assert that no world exists apart from mind-dependent ideas and some forms of skepticism say we cannot trust our senses.
Ok. Most of the time, when McCoy determines that someone is dead, he uses a tricorder. Do you still declare the death when an intermediary object, rather than your senses alone, is your source of awareness?
My form of naive realism is contingent, not absolutist. It is perfectly acceptable to find myself to be in error. But … there are certain forms of dead that are just plain dead. Such as the cat I just came home to discover was in rigor mortis an hour ago.
My condolences. Would you prefer if we use a different example? The upshot is that the Bayesian agrees with you that there is a reality out there. If you agree that one can be in error then you and the Bayesian aren’t disagreeing here.
The only thing we seem to disagree on is how to formulate statements of belief. As I wrote elsewhere:
What I am espousing is the use of contingent naive realism for handling assertions regarding “the territory” coincident with Bayesian reasoning regarding “the map”.
I strongly agree that Bayesian reasoning is a powerful tool for making predictive statements, but I still affirm the adage that the number of times a well-balanced coin has come up heads has no bearing on whether it will actually come up heads on the next trial.
I strongly agree that Bayesian reasoning is a powerful tool for making predictive statements, but I still affirm the adage that the number of times a well-balanced coin has come up heads has no bearing on whether it will actually come up heads on the next trial.
But even here the Bayesian agrees with you if the coin is well-balanced.
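This agreement can be sketched numerically, using a hypothetical two-hypothesis model of my own devising (fair coin, p = 0.5, vs. a coin biased toward heads, p = 0.9). If the prior puts literally all its weight on “fair,” no run of heads moves the prediction for the next flip; any nonzero prior on bias eventually does.

```python
# Posterior predictive probability of heads on the next flip, after a run
# of n_heads consecutive heads. Two hypothetical hypotheses: fair (p = 0.5)
# and biased (p = 0.9). All numbers are illustrative.

def predictive_after_heads(n_heads, prior_fair, p_biased=0.9):
    like_fair = 0.5 ** n_heads          # P(data | fair)
    like_biased = p_biased ** n_heads   # P(data | biased)
    prior_biased = 1.0 - prior_fair
    post_fair = (prior_fair * like_fair) / (
        prior_fair * like_fair + prior_biased * like_biased
    )
    return post_fair * 0.5 + (1.0 - post_fair) * p_biased

print(predictive_after_heads(100, prior_fair=1.0))   # stays exactly 0.5
print(predictive_after_heads(100, prior_fair=0.99))  # has climbed toward 0.9
```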
The only thing we seem to disagree on is how to formulate statements of belief.
If that is the case then the only disagreement you have is a matter of language not a matter of philosophy. This confuses me because the subthread about knowledge about one’s own internal cognitive processes seemed to assert a difference in actual philosophy.
But even here the Bayesian agrees with you if the coin is well-balanced.
I was making a trivial example. It gets more complicated when we start talking about the probability that a specific truth claim is valid.
This confuses me because the subthread about knowledge about one’s own internal cognitive processes seemed to assert a difference in actual philosophy.
I’m not exactly sure how to address this problem. I do know that it is more than just a question of language. The subthread you’re talking about was dealing with the existence of knowable truth, and I was using absolute truths for that purpose.
There are, necessarily, a very limited number of knowable absolute truths.
I do note that your wording on this instance, “knowledge about one’s own internal cognitive processes”, indicates that I may still be failing to achieve sufficient clarity in the message I was conveying in that thread. To reiterate: my claim was that it is absolutely true that when you are cognizant of a specific thought, you know that you are cognizant of that thought. In other words: you know absolutely that you are aware of your awareness. This is tautological to self-awareness.
If I say “he’s dead,” it means “I believe he is dead.” The information that I possess, the map, is all that I have access to. Even if my knowledge accurately reflects the territory, it cannot possibly be the territory. I see a rock, but my knowledge of the rock is not the rock, nor is my knowledge of my self my self.
The map is not the territory, but the map is the only one that you’ve got in your head. “It is raining” and “I do not believe it is raining” can be consistent with each other, but I cannot consistently assert “it is raining, but I do not believe that it is raining.”
You know, for a moment until I saw who it was who was replying, I had a brief flash of hope that this conversation had actually started to get somewhere.
This is starting to hurt my feelings, guys. This is my earnest attempt to restate this.
Put another way: the statement “the map is not the territory” does not have a confidence value of .99999whatever. It is a statement about the map, which is itself a territory, and is simply true.
(I may, of course, be mistaken about Logos’ epistemology, in which case I would appreciate clarification.)
Well, you can take “the map is not the territory” as a premise of a system of beliefs, and assign it a probability of 1 within that system, but you should not assign a probability of 1 to the system of beliefs being true.
Of course, there are some uncertainties that it makes essentially no sense to condition your behavior on. What if everything is wrong and nothing makes sense? Then kabumfuck nothing, you do the best with what you have.
First, I want to note the tone you used. I have not so disrespected you in this dialogue.
Second: “Put another way: the statement “the map is not the territory” does not have a confidence value of .99999whatever. It is a statement about the map, which is itself a territory, and is simply true.”
There is some truth to the claim that “the map is a territory”, but it’s not really very useful. Also, while it is demonstrable that the map is not the territory, it is not tautological and thus requires demonstration.
I am, however, comfortable stating that any specific individual “map” which has been so demonstrated to be a territory of its own without being the territory for which it is mapping is, in truth, a territory of its own (caveat: thar be recursion here!), and that this is expressible as a truth claim without the need for probability values.
First, I have no idea what you’re talking about. I intended no disrespect (in this comment). What about my comment indicated to you a negative tone? I am honestly surprised that you would interpret it this way, and wish to correct whatever flaw in my communicatory ability has caused this.
Second:
There is some truth to the claim that “the map is a territory”, but it’s not really very useful.
Okay, now I’m starting to get confused again. ‘Some’ truth? And what does ‘useful’ have to do with it?
Also, while it is demonstrable that the map is not the territory, it is not tautological and thus requires demonstration.
It… seems tautological to me...
I am, however, comfortable stating that any specific individual “map” which has been so demonstrated … is, in truth, a territory of its own …, and that this is expressible as a truth claim without the need for probability values.
What about my comment indicated to you a negative tone?
The way I read “This is starting to hurt my feelings, guys.” was that you were expressing it as a mimicking of me, as opposed to it being indicative of your own sentiments. If that was not the case, then I apologize for the projection.
Okay, now I’m starting to get confused again. ‘Some’ truth? And what does ‘useful’ have to do with it?
Whether or not a truth claim is valid is binary. How far a given valid claim extends is quantitative rather than qualitative, however. My comment about usefulness has more to do with the utility of the notion (i.e., a true-and-interesting notion is more “useful” than a true-and-trivial notion).
It… seems tautological to me...
Tautological would be “The map is a map”. Statements that depend upon definitions for their truth value approach tautology but aren’t necessarily so. ¬A ≠ A is tautological, as is A = A. However, B⇔A → ¬A = ¬B is definitionally true. So “the map is not the territory” is definitionally true (and by its definition it is demonstrated). However, it is conceptually possible for a territory to be its own map. (I am aware of no such instance, but it conceptually could occur.) This would, however, require a different operational definition of what a “map” is from the context we currently use, so it would actually be a different statement.
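The formulas in that paragraph can be checked mechanically; a quick truth-table sketch:

```python
# Truth-table check: (B <-> A) -> (¬A <-> ¬B) holds on every row (it is
# definitionally true), while A = ¬A holds on none (i.e., ¬A ≠ A always).
for A in (False, True):
    for B in (False, True):
        iff = (B == A)                    # B <-> A
        neg_iff = ((not A) == (not B))    # ¬A <-> ¬B
        assert (not iff) or neg_iff       # material implication holds
        assert A != (not A)               # ¬A never equals A
print("all rows check out")
```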
So, um, have I understood you or not?
I’m comfortable saying “close enough for government work”.
was a direct (and, in point of fact, honest) response to
You know, for a moment until I saw who it was who was replying, I had a brief flash of hope that this conversation had actually started to get somewhere.
Though, in retrospect, this may not mean what I took it to mean.
(i.e., a true-and-interesting notion is more “useful” than a true-and-trivial notion)
Agreed.
So; “the map is not the territory” is definitionally true (and by its definition it is demonstrated). However, it is possible for a conceptual territory that the territory is the map. … This would, however, require a different operational definition of what a “map” is from the context we currently use, so it would actually be a different statement.
Ah, ok.
I’m comfortable saying “close enough for government work”.
The map is not the territory, but the map is the only one that you’ve got in your head.
This is going nowhere. I am intimately familiar with the epistemological framework you just related to me, and I am trying to convey one wholly unlike it to you. Exactly what good does it do this dialogue for you to continue reiterating points I have already discussed and explained to you ad nauseam?
I see a rock, but my knowledge of the rock is not the rock, nor is my knowledge of my self my self.
But you know that you ‘see’ a rock. And that IS territory, in the absolute sense.
If I say “he’s dead,” it means “I believe he is dead.”
Unless you are using a naive realist epistemological framework. In which case it means “he’s dead” (and can be either true or not true.)
But you know that you ‘see’ a rock. And that IS territory, in the absolute sense.
My understanding of what you are saying is that I should assign a probability of 1 to the proposition that I experience the quale of rock-seeing.
I do not accept that this is the case. There are alternatives, although they do not make much intuitive sense. It may seem to me that I cannot think I have an experience, and not have it, but maybe reality isn’t even that coherent. Maybe it doesn’t follow for reasons that are fundamentally impossible for me to make sense of. Also maybe I don’t exist.
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience, or that I don’t exist, but if I started getting evidence that reality is fundamentally incoherent, I’d definitely have to start judging it as more likely.
My understanding of what you are saying is that I should assign a probability of 1 to the proposition that I experience the quale of rock-seeing.
I am comfortable agreeing with this statement.
There are alternatives, although they do not make much intuitive sense.
I do not accept that this is the case. Postulating counterfactuals does not validate the possibility of those counterfactuals (this is just another variation of the Ontological Argument, and is as invalid as is the Ontological Argument itself, for very similar reasons.)
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience
This is a point which reveals why I have the problems I do with the epistemology embedded in Bayesian reasoning. It is very similar to the fact that I’m pretty sure you could agree with the notion that it is conceptually plausible that, if reality were ‘fundamentally incoherent’, the statement ¬A = A could be true.
The thing is, it isn’t possible for such incoherence to exist. They are manifestations of definitions; the act of being so defined causes them to be proscriptive restrictions on the behavior of all that which exists.
For what it’s worth, I also hold that the Logical Absolutes are in fact absolute, though I also assert that this is essentially meaningless.
The thing is, it isn’t possible for such incoherence to exist.
How do you know?
It seems pretty reasonable to me that A must always equal A, but if I suddenly found that the universe I was living in ran on dream logic, and 2+2 stopped equaling the same thing every time, I’d start viewing the notion with rather more skepticism.
Because they are functions of definition. Altering the definition invalidates the scenario.
and 2+2 stopped equaling the same thing every time
Eliezer used a cognitive trick in this case: he postulated a counterfactual and treated that postulation as sufficient justification to treat the counterfactual as a possibility. This is not justified.
Because they are functions of definition. Altering the definition invalidates the scenario.
If you state that A is A by definition, then fine, you’ve got a definition that A=A. But that doesn’t mean that the definition must be manifest. Things cannot be defined into reality.
Now, I believe that A=A is a real rule that reality follows, with very high confidence. I am even willing to accept it as a premise within my system of beliefs, and assign it a probability of 1 in the context of that system of beliefs, but I can’t assign a probability of 1 to that system of beliefs.
Eliezer used a cognitive trick in this case: he postulated a counterfactual and treated that postulation as sufficient justification to treat the counterfactual as a possibility. This is not justified.
It’s possible contingent on things that we have very strong reason to believe simply being wrong.
The postulation does not make it a possibility. It was a possibility without ever being postulated.
The postulation “Maybe time isn’t absolute, and there is no sense in which two events in different places can objectively be said to take place at the same time” would probably have been regarded by most, a thousand years ago, as impossible. People would respond that that simply isn’t what time is. But from what we can see today, it looks like reality really does work like that, and any definition of “time” which demands absolute simultaneity exist simply doesn’t correspond to our universe.
In any case, I don’t see how you get from the premise that you can have absolute certainty about your experience of qualia to the position that you can’t judge how likely it is that the Sportland Sports won the game last night based on Neighbor Bob’s say so.
Another epistemological breakdown may be about to occur here. I separate “real” from “existant”.
That which is real is any pattern by which that which exists is proscriptively constrained.
That which exists is any phenomenon which directly interacts with other phenomena.
I agree with you that you cannot define things into existence. But you most assuredly can define things into reality. The logical absolutes are examples of this.
Now, I believe that A=A is a real rule that reality follows,
Yup. But that’s a manifestation of the definition, and nothing more. If A=A had never been defined, no one would bat an eye at ¬A = A. But that statement would bear no relation to the definitions supporting the assertion A=A.
It’s possible contingent on things that we have very strong reason to believe simply being wrong.
We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that they are not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
But from what we can see today, it looks like reality really does work like that, and any definition of “time” which demands absolute simultaneity exist simply doesn’t correspond to our universe.
Agreed. But time isn’t “definitional” in nature; that is, it is more than a mere product of definition. We define ‘time’ to be an existing phenomenon with specific characteristics—and thus the definition of how time behaves is constrained by the actual patterns that existing phenomenon adheres to.
The number 3 on the other hand is defined as itself, and there is no existing phenomenon to which it corresponds. We can find three instances of a given phenomenon, but that instantiated collection should not be conflated with an instantiation of the number three itself.
In any case, I don’t see how you get from the premise that you can have absolute certainty about your experience of qualia to the position that you can’t judge how likely it is that the Sportland Sports won the game last night based on Neighbor Bob’s say so.
You can’t. Conversations—and thus, topics—evolve over time, and this conversation has been going on for over 24 hours now.
We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that they are not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
Things that have been believed impossible have sometimes been found to be true. Just because you do not yet have any reason to suspect that something is true does not mean that you can assign zero probability to its truth. If you did assign a proposition zero probability, no amount of justification could ever budge you from zero probability; whatever evidence or arguments you received, it would always be infinitely more likely that they came about from some process unconnected to the truth of the proposition than that the proposition was true.
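The “zero probability is immovable” point falls straight out of Bayes’ theorem; a minimal sketch, with made-up likelihood values:

```python
# Bayes' theorem: posterior = prior * P(E|H) / P(E). If the prior is exactly
# zero, the numerator is zero, so no evidence can ever raise the posterior.
# The likelihood values below are arbitrary illustrations.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1.0 - prior) * p_evidence_if_false
    return numerator / denominator if denominator else 0.0

print(posterior(0.0, 0.999, 0.001))   # 0.0 -- stuck forever
print(posterior(1e-9, 0.999, 0.001))  # tiny, but evidence can now move it
```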
I’m getting rather tired of having to start this conversation over yet again. Do you understand my meaning when I say that until such time as a given counterfactual is provided with justification to accept its plausibility, the null hypothesis is that it is not?
I believe that I understand you, but I don’t think that is part of an epistemology that is as good at forming true beliefs and recognizing false ones.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
As a social prescription, you can call certain claims “justified knowledge,” based on how they are arrived at, and not call other claims about reality “justified knowledge” because the evidence on which they were arrived at does not meet the same criteria. And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong. If you had a computer that could process whatever information you put into it, and tell you exactly how confident you ought to be about any given claim given that information, and have the claims be right exactly that often, then the category of “justified belief” would not correspond to any particular range of likelihood.
This sounds like a point where you would complain that I am confusing “aggregated average” with “manifested instantiation.” We can accept the premise that the territory is the territory, and a particular event either happened or it did not. So if we are talking about a particular event, should we be able to say that it actually definitely happened, because that’s how the territory works? Well, we can say that, but then we start observing that some of the statements we make about things that “definitely happened” turn out to be wrong. If we accept as a premise in our epistemologies that the territory is what it is, we can accept that the statements we make about the territory are either right, absolutely, or wrong, absolutely. But we don’t know which are which, and “justified knowledge” won’t correspond to any particular level of confidence.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong, and tells you that you can’t predict the likelihood of some things when you can actually make confidence predictions about them and be right as often as you say you should be, I think you’ll have a hard time selling that it’s a good epistemology.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong.
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
Being able to readily differentiate between these two categories is a vital element for ways of knowing.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong,
I don’t know, at this point, exactly where this particular thread is in the nested tree of various conversations I’m handling. But I can tell you that at no time have I espoused any form of absolutism that allows for non-absolute truths to be expressed as such.
What I have espoused, however, is the use of contingent naive realism coincident with Bayesian reasoning—while understanding the nature of what each “tool” addresses.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Rational-skepticism doesn’t necessarily operate experimentally and I’m sure you know this.
That being said; yes, yes I did. Thought-experiment, but experiment it is.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
Yes, I am aware of how scientists use null hypotheses. But the null hypothesis is only a construct which exists in the context of significance testing, and significance testing does not reliably get accurate results when you measure small effect sizes, and also gives you different probabilities based on the same data depending on how you categorize that data; Eliezer already gave an example of this in one of the pages you linked earlier. The whole reason that people on this site in general have enthusiasm for Bayesian statistics is that we believe they do better, as well as it is possible to do.
The concept of a “null hypothesis,” as it is used in frequentist statistics, doesn’t even make sense in the context of a system where, as you have claimed, we cannot know that it is more likely that a coin will come up heads on the basis of it having come up heads the last hundred times you’ve flipped it.
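For concreteness, here is what that frequentist significance test looks like for the hundred-heads coin; the 0.05 threshold is the conventional choice, used here purely for illustration:

```python
# Two-sided significance test of the null hypothesis "the coin is fair"
# after observing 100 heads in 100 flips. The p-value is the probability,
# under the null, of data at least this extreme (all heads or all tails).

p_value = 2 * 0.5 ** 100
print(p_value)          # astronomically small
print(p_value < 0.05)   # True: the null is rejected; the coin is biased
```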
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
If I say “it is raining out,” this is a claim about the territory, yes. If I say “I believe that it is raining out,” this is a claim about my map (and my map exists within, and is a part of, the territory, so it is also a claim “about the territory,” but not about the same part of the territory.)
But my claim about the territory is not certain to be correct. If I say “it is raining out, 99.99999% confidence,” which is the sort of thing I would mean when I say “it is raining out,” that means that I am saying that the territory probably agrees with this statement, but the information in my map does not allow me to have a higher confidence in the agreement than that.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
A null hypothesis is a particularly useful simplification, not strictly a necessary tool.
I am saying it is not strictly necessary to have a hypothesis called “null” but rather that such a thing is just extremely useful. I would definitely say the thing about defaults too.
I am saying it is not strictly necessary to have a hypothesis called “null”
That’s not what a null hypothesis is. A null hypothesis is a default state.
I would definitely say the thing about defaults too.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
If you tell me you’ve flipped a coin, I may not have any default position on whether it’s heads or tails. Similarly, I might have no prior belief, or a symmetric prior, on lots of questions that I haven’t thought much about or that don’t have any visible correlation to anything else I know about.
Sure. I didn’t say “I have no information related to the experiment”. What I’m saying is this: if I do an experiment to choose between K options, it might be that I don’t have any prior (or have a symmetric prior) about which of those K it will be. That’s what a null hypothesis in statistics is. When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Yes, any experiment also will give you information about whether the experiment’s assumptions were valid. But talking about null hypotheses or default hypotheses is in the context of a particular formal model, where we’re referring to something more specific.
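The drug-trial framing can be sketched as a toy permutation test; the outcome numbers and group sizes below are entirely invented for illustration:

```python
# Toy permutation test of the null hypothesis "no effect": under the null,
# group labels are exchangeable, so we estimate how often a random relabeling
# produces a mean difference at least as large as the one observed.
import random

random.seed(0)
treated = [5.1, 4.8, 5.6, 5.3, 5.9]   # invented outcomes on the drug
control = [4.2, 4.5, 4.1, 4.7, 4.3]   # invented outcomes on placebo

observed = sum(treated) / len(treated) - sum(control) / len(control)

pooled = treated + control
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(p_value)  # small: the null "no effect" is rejected
```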
When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Correct. And that’s a situationally relevant phenomenon. One quite similar to how “the coin will be on one of its sides” is situationally relevant to the coin-toss. (As opposed to it being on edge, or the faces smoothed off.)
You had asked for an example of where an individual might have no prior on “a given topic.” I gave one, for a narrow topic: “is the coin heads or tails?”. I didn’t claim, and you didn’t ask for, a case where an individual has no prior on anything related to the topic.
But let me give one attempt at that stronger claim. You’ve never met my friend Sam. You’ve never heard of or thought of my friend Sam. Before this comment, you had no prior on his having a birthday, no prior on his existing; you had never considered the possibility that lesswrong-commenter asr has a friend Sam. I would say you have no beliefs on the topic of Sam’s birthday.
It is my default position that they do not exist, even as absent constructs, unless otherwise noted (with the usual lowered threshold of standards of evidence for more evidently ordinary claims). That’s the whole point of default positions; they inform us how to react to new criteria or phenomena as they arise.
By bringing up the issue of non-topics you’re moving the goalposts. I asked you how it could be that a person could have no defaults on a given topic.
If I tell you that there is a flugmizzr, then you know that certain things are ‘true’—at least, you presume them to be until shown otherwise: one, that flugmizzrs are enumerable; two, that they are somehow enumerable by people; and three, that flugmizzrs are knowable, discrete phenomena.
Those are most assuredly amongst your defaults on the topic. They could easily each be wrong—there is no way to know—but if you trust my assertion of “a flugmizzr” then each of these becomes a required default, useful for acquiring further information.
I asked you how it could be that a person could have no defaults on a given topic.
To clarify—my answer is that “there are topics I’ve never considered, and before consideration, I need not have a default belief.” For me at least, the level of consideration required to actually form a belief is nontrivial. I am often in a state of uninterested impartiality. If you ask me whether you have a friend named Joe, I would not hazard a guess and would be about equally surprised by either answer.
To put it more abstractly: There’s no particular computational reason why the mind needs to have a default for every input. You suggest some topic X, and it’s quite possible for me to remain blank, even if I heard you and constructed some internal representation of X. That representation need not be tied to a belief or disbelief in any of the propositions about X under discussion.
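A loose computational analogy (nothing more than an analogy): a lookup table need not define a value for every possible key, and a missing entry is distinct from any stored belief:

```python
# Toy "belief store": credences are only present for considered topics.
beliefs = {"sun rises tomorrow": 0.999999}

# Querying a never-considered topic simply returns "no belief yet"
# rather than some default credence.
print(beliefs.get("asr has a friend named Joe"))  # None: no stored belief
```

The representation of the query can exist (the string is constructed and looked up) without any belief or disbelief being attached to it.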
Agreed with your main point; curious about a peripheral:
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience
Is this meaningfully different, in the context you’re operating in, from coming to believe that you probably didn’t experience what you believe you experienced?
Because I also have trouble imagining myself being skeptical about whether I’m actually experiencing what I currently, er, experience myself experiencing (though I can certainly imagine believing things about my current experience that just ain’t so, which is beside your point). But I have no difficulty imagining myself being skeptical about whether I actually experienced what I currently remember myself having experienced a moment ago; that happened a lot after my brain injury, for example.
Is this meaningfully different, in the context you’re operating in, from coming to believe that you probably didn’t experience what you believe you experienced?
Yes. I have plenty of evidence that people sometimes become convinced that they’ve had experiences that they haven’t had, but reality would have to work very differently than I think it does for people not to be having the qualia they think they’re having.
(nods) That’s what I figured, but wanted to confirm.
Given that my “current” perception of the world is integrating a variety of different inputs that arrived at my senses at different times, I mostly suspect that my intuitive confidence that there’s a sharp qualitative difference between “what I am currently experiencing” and “what I remember having experienced” is (like many of my intuitive confidences) simply not reflective of what’s actually going on.
What is the difference between “There is a 99.9999% chance he’s dead, and I’m 99.9999% confident of this, Jim” and “He’s dead, Jim.”?
Pedanterrific nailed it.
Or rather, people mostly don’t realize that even claims for which they have high confidence ought to have probabilities attached to them. If I observe that someone seems to be dead, and I tell another person “he’s dead,” what I mean is that I have a very high but less than 1 confidence that he’s dead. A person might think they’re justified in being absolutely certain that someone is dead, but this is something that people have been wrong about plenty of times before; they’re simply unaware of the possibility that they’re wrong.
Suppose that you want to be really sure that the person is dead, so you cut their head off. Can you be absolutely certain then? No: they could have been substituted by an illusion by some Sufficiently Advanced technology you’re not aware of, they could be some amazing never-before-seen medical freak who can survive with their head cut off, or, more likely, you’re simply delusional and only imagined that you cut off their head or saw them dead in the first place. These things are very unlikely, but if the next day they turn up perfectly fine, answer questions only they should be able to answer, confirm their identity with dental records, fingerprints, retinal scan and DNA tests, give you your secret handshake and assure you that they absolutely did not die yesterday, you had better start regarding the idea that they died with a lot more suspicion.
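The reversal described above is just Bayes’ rule applied to an extreme prior; here is a toy sketch with purely illustrative numbers:

```python
# Toy numbers, purely illustrative.
p_dead = 0.999999                 # prior: near-certain they died

# Likelihoods of the next day's evidence (alive, passes every test):
p_evidence_if_dead = 1e-9         # requires illusion/freak/delusion
p_evidence_if_alive = 0.99        # exactly what you'd expect if alive

posterior = (p_evidence_if_dead * p_dead) / (
    p_evidence_if_dead * p_dead + p_evidence_if_alive * (1 - p_dead)
)
print(posterior)  # the "they died" hypothesis collapses
```

Even a 99.9999% prior gets overwhelmed once the evidence is vastly more probable under the rival hypothesis.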
Or rather, people mostly don’t realize that even claims for which they have high confidence ought to have probabilities attached to them.
Thank you for reiterating how to properly formulate beliefs. Unfortunately, this is not relevant to this conversation.
A person might think they’re justified in being absolutely certain that someone is dead, but this is something that people have been wrong about plenty of times before; they’re simply unaware of the possibility that they’re wrong.
That a truth claim is later falsified does not mean it wasn’t a truth claim.
Suppose that you want to be really sure that the person is dead, so you cut their head off. Can you be absolutely certain then? No, they could have been substituted by an illusion by some Sufficiently Advanced technology you’re not aware of,
Again, thank you for again demonstrating the Problem of Induction. Again, it just isn’t relevant to this conversation.
By continuing to bring these points up you are rejecting restrictions, definitions, and qualifiers I have added to my claims, to the point where what you are attempting to discuss is entirely unrelated to anything I’m discussing.
I have no interest in us talking past one another.
Ok. I don’t think 1 is a Bayesian issue by itself. That’s a general rationality issue. (Speaking as a non-Bayesian fellow-traveler of the Bayesians.)
2, 3, and 4 seem roughly accurate. Whether 3 is correct depends a lot on how you unpack occurrence. A Bayesian is perfectly ok with the central limit theorem and applying it to a coin. This is a statement about occurrences. A Bayesian agrees with the frequentist that if you flip a fair coin the ratio of heads to tails should approach 1 as the number of flips goes to infinity. So what do you mean by occurrences here that Bayesians can’t talk about them?
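That point of agreement is easy to check by simulation; a minimal sketch (the sample sizes and seed are arbitrary choices):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Heads-to-tails ratio for a fair coin at increasing sample sizes.
ratios = {}
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    ratios[n] = heads / (n - heads)
    print(n, round(ratios[n], 4))
```

Both camps expect the printed ratio to drift toward 1 as n grows; they differ on what that long-run frequency *means*, not on the arithmetic.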
But there then seems to be a total disconnect between those statements and your later claims, and even going back and reading your earlier remarks doesn’t give any illuminating connection.
No matter how many times a coin has come up heads or tails, that information has no bearing on what position the coin under my hand is actually in. You can make statements about what you believe it will be; but that is not the same as saying what it is.
I can’t parse this in any way that makes sense. Are you simply objecting to the fact that 0 and 1 are not allowed probabilities in a Bayesian framework? If not, what does this mean?
I’m saying that the Bayesian framework is restricted to probabilities, and as such is entirely unsuited to non-probabilistic matters. Such as the number of fingers I am currently seeing on my right hand as I type this. For you, it’s a question of what you predict to be true. For me, it’s a specific manifested instance, and as such is not subject to any form of probabilistic assessments.
Note that this is not a claim that we do not share a single physical reality, but rather a question of the ability of either of us to make valid claims of truth.
I’m slowly beginning to understand your thought process. The Bayesian approach treats the number of fingers you currently see on your right hand as a probabilistic matter. The reason it does this, and the reason this method is preferable to those which treat the number of fingers on your hand as not subject to probabilistic assessments, is that you can be wrong about the number of fingers on your hand. To demonstrate this I could describe any number of complex scenarios in which you have been tricked about the number of fingers you have. Or I could just point you to real instances of people being wrong about the number of limbs they possess or people who outright deny their disability.
Like anything else a “specific manifested instance” is an entity which we are justified in believing so long as it reliably explains and predicts our sensory impressions.
The reason it does this, and the reason this method is preferable to those which treat the number of fingers on your hand as not subject to probabilistic assessments is that you can be wrong about the number of fingers on your hand.
True, but irrelevant. It would have helped if you had continued to read further; you would have seen me explain to JoshuaZ that he had made exactly the same error that you just made in understanding what I just said.
The specific manifested instance is not the number of fingers but the number of fingers I am currently seeing. It is not, fundamentally speaking, possible for me to be mistaken about the latter (caveat: this only applies to ongoing perception, as opposed to recollections of perception).
Like anything else a “specific manifested instance” is an entity which we are justified in believing so long as it reliably explains and predicts our sensory impressions.
It is not proper to speak of beliefs about specific manifested instances when making assertions about what those instantiations actually are.
The statement “I observe X” is unequivocally, absolutely true. Any conclusions derived from it, however, do not inherit this property.
Similarly, I want to note that you are close to making a categorical error regarding the relevance of explanatory power and predictions to this discussion. Those are very important elements for the formation of rational beliefs; but they are irrelevant to justified truth claims.
The specific manifested instance is not the number of fingers but the number of fingers I am currently seeing. It is not, fundamentally speaking, possible for me to be mistaken about the latter (caveat: this only applies to ongoing perception, as opposed to recollections of perception).
Perceptions do not have propositional content; speaking about their truth or falsity is nonsense. Beliefs about perceptions, like “the number of fingers I am currently seeing is five,” do have propositional content and can correspondingly be false. They are of course rarely false, but humans routinely miscount things. The closer a belief gets to merely expressing a perception, the less there is that can be meaningfully said about its truth value.
Similarly, I want to note that you are close to making a categorical error regarding the relevance of explanatory power and predictions to this discussion. Those are very important elements for the formation of rational beliefs; but they are irrelevant to justified truth claims.
Perceptions do not have propositional content; speaking about their truth or falsity is nonsense.
Unless you are operating within the naive realist framework.
Beliefs about perceptions, like “the number of fingers I am currently seeing is five,” do have propositional content and can correspondingly be false.
You’ve fallen into that same error I originally warned about. You are conflating beliefs about the nature of the perception with beliefs about the content of the perception.
The closer a belief gets to merely expressing a perception, the less there is that can be meaningfully said about its truth value.
True but irrelevant to this discussion. I never claimed that there were absolute truths accessible to an arbitrary person which were of significant informational value. I only asserted that they do exist.
A rational belief is a justified truth claim.
Then I guess my epistemology doesn’t exist. I must have just been trolling this entire time. I couldn’t possibly believe that this statement is committing a categorical error.
Are you willing to accept the notion that I am conveying to you a framework that you are currently unfamiliar with that makes statements about how truth, belief, and knowledge operate which you currently disagree with? If yes, then why do you actively resist comprehending my statements in this manner? What are you hoping to achieve?
Then I guess my epistemology doesn’t exist. I must have just been trolling this entire time. I couldn’t possibly believe that this statement is committing a categorical error.
Are you willing to accept the notion that I am conveying to you a framework that you are currently unfamiliar with that makes statements about how truth, belief, and knowledge operate which you currently disagree with? If yes, then why do you actively resist comprehending my statements in this manner? What are you hoping to achieve?
What are you talking about! We’re talking about epistemology! If you want to demonstrate why calling a rational belief a justified truth claim is a category error, then do so. But please stop condescendingly repeating it. I actively “resist comprehending your statements”?! You can’t just assert things that don’t make sense in another person’s framework and expect them not to say “No, those things are the same.”
In any case, if it is a common position in the epistemological literature then I suspect I am familiar with it and that you are simply really bad at explaining what it is. If it is your original epistemological framework then I suspect it will be a bad one (nothing personal, just my experience with the reference class).
Unless you are operating within the naive realist framework.
Is that your position?
You’ve fallen into that same error I originally warned about. You are conflating beliefs about the nature of the perception with beliefs about the content of the perception.
You keep doing this. You keep using words to make distinctions as if it were obvious what distinction is implied. I can assure you nearly no one here has any idea what you mean by the difference between the nature of the perception and the content of the perception. Please stop acting like we’re stupid because you aren’t explaining yourself.
I’m saying that the Bayesian framework is restricted to probabilities, and as such is entirely unsuited to non-probabilistic matters. Such as the number of fingers I am currently seeing on my right hand as I type this. For you, it’s a question of what you predict to be true. For me, it’s a specific manifested instance, and as such is not subject to any form of probabilistic assessments.
Actually, that’s a probabilistic assertion that you are seeing n fingers for whatever n you choose. You could for example be hallucinating. Or could miscount how many fingers you have. Humans aren’t used to thinking that way, and it generally helps for practical purposes not to think this way. But presumably if five minutes from now a person in a white lab coat walked into your room and explained that you had been tested with a new reversible neurological procedure that specifically alters how many fingers people think they have on their hands and makes them forget that they had any such procedure, you wouldn’t assign zero chance to their story being true rather than a prank.
Note by the way there are stroke victims who assert contrary to all evidence that they can move a paralyzed limb. How certain are you that you aren’t a victim of such a stroke? Is your ability to move your arms a specific manifested instance? Are you sure of that? If that case is different than the finger example how is it different?
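For what it’s worth, the Bayesian treatment of the finger question fits in a few lines; all the rates below are invented purely for illustration:

```python
# Toy model: confidence that your hand has five fingers, given that
# you currently see five. Every rate here is a made-up illustration.
p_five = 0.999              # prior: nearly all hands have five fingers
p_see5_if_five = 0.999      # you almost always count your own correctly
p_see5_if_not = 0.02        # miscount/hallucination still showing "five"

posterior = (p_see5_if_five * p_five) / (
    p_see5_if_five * p_five + p_see5_if_not * (1 - p_five)
)
print(posterior)
```

The posterior lands extremely close to 1 without ever reaching it, which is the whole Bayesian point: near-certainty, not certainty.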
Actually, that’s a probabilistic assertion that you are seeing n fingers for whatever n you choose. You could for example be hallucinating. Or could miscount how many fingers you have.
If I am hallucinating, I am still seeing what I am seeing. If I miscount, I still see what I see. There is nothing probabilistic about the exact condition of what it is that I am seeing. You can, if you wish to eschew naive realism, make fundamental assertions about the necessarily inductive nature of all empirical observations—but then, there’s a reason why I phrased my statement the way I did: not “I can see how many fingers I really have” but “I know how many fingers I am currently seeing”.
Are you able to properly parse the difference between these two, or do I need to go further in depth about this?
(The remainder of your post expounded further on your response, which itself was based on an erroneous reading of what I had written. As such I am disregarding it.)
Do you take the same attitude with all the intellectual communities that don’t believe appeals to authority are always fallacious? If so you must find yourself rather isolated.
I have exhausted every rhetorical technique I am aware of to rephrase this point. I no longer have any hope on the matter, and frankly the behaviors I have begun to see out of many of you who are still in this thread have caused me to very seriously negatively reassess my opinion of this site’s community.
FWIW, for I think the first time I see LW as going more or less mad in perceiving value in a blog post linked to in the discussion section.
For what it is worth I hadn’t followed the thread and my impression when reading it after your priming was “just kinda ok”. The reasoning wasn’t absurd or anything but there isn’t any easy way to see how much influence that particular dynamic has had relative to other factors. My impression is that the effect is relatively minor.
I tentatively think any story humans tell about natural selection that obeys certain Darwinian and logical rules is true in that it must have an effect. However this effect may be too small to make any predictions from. This thought is under suspicion for committing the no true Scotsman fallacy.
An example is group selection. If humans can tell a non-flawed story about why it would be to a region of foxes’ benefit to individually restrain their breeding, this does not mean one can predict that foxes will be seen to do this. It does mean that the effect itself is real, subject to caveats about the rate of migration of foxes from one region to another, etc., such that under artificial enough conditions the real effect could be important. The problem is that there are a million other real effects that don’t come to mind as nice stories, and all have different vectors of effect.
This is why evolutionary psychology and the like is so bewitching and misleading. Pretty much all the effects postulated are true—though most are insignificant. People are entranced by their logical truth.
… I am unable to parse “FWIW, for I think the first time I see LW as going more or less mad in perceiving value in a blog post linked to in the discussion section.” to anything intelligible. What are you trying to say? I’ll wait until you respond before following the link.
Recently, I found myself disagreeing with dozens of LWers. Presumably, when this happens, sometimes I’m right and sometimes I’m wrong. Since I shouldn’t be totally confident I am right this time, how confident should I be?
Confidence in a given circumstance should be constrained by:
the available evidence at hand
how well you can demonstrate internal and external consistency in conforming to said evidence.
how well your understanding of that evidence allows you to predict outcomes of events associated with said evidence.
Any probabilities resultant from this would have to be taken as aggregates for predictive purposes, of course, and as such could not be ascribed as valid justification in any specific instance. (This caveat should be totally unsurprising from me at this point.)
how well you can demonstrate internal and external consistency in conforming to said evidence.
how well your understanding of that evidence allows you to predict outcomes of events associated with said evidence.
It’s a bit tricky because my position is that the post has practically no content and cannot be used to make predictions because it is a careful construction of an effect that is reasonable and does not contradict evidence, though it is in complete disregard of effect size.
After a brief skimming I have come to the conclusion that a brief skimming is not effective enough to provide a sufficient understanding of the conversation thread in question as to allow me to form any opinions on the topic.
tl;dr version: I skimmed it, and couldn’t wrap my head around it, so I’ll have to get back to you.
… To be an appeal to authority I would have to claim I was correct because some other person’s reputation says I am. So this is just you signalling you don’t care whether what you say is true; you merely wish to score points.
That you were upvoted tells me that others share this hostility to me.
This radically adjusts my views of LW as a community. Very, very negatively.
You are appealing to your authority regarding your mental states, degree of comprehension and reading history. This is why it is valid for you to simply assert them instead of us expecting you to provide us with internet histories and fMRI lie detector results. I am trying to point out how absurdly wrong your position on appeals to authority is. Detailed explanations had not succeeded so I hoped pointing out your use of valid appeals to authority would succeed. I desired karma to the extent that one always desires karma when commenting on Less Wrong.
This radically adjusts my views of LW as a community. Very, very negatively.
You are of course free to do this. But I suggest that before leaving you consider the possibility that you are wrong. Presumably at one point you found insight in this community and thought it was worth spending your time here. Did you at that time imagine you could write an accurate and insightful comment that would be voted to −8? Is it not possible that you don’t understand the reasons we’ve given for considering appeals to authority sometimes valid? Is it not possible that you misunderstand how “appeal to authority” is being used? Alternatively, is it not possible that you have not adequately explained your clever understanding of justification that prohibits appeals to authority? If you seriously consider these possibilities and cannot see how we could be responding to you rationally then you are probably right to hold a low opinion of us.
Presumably at one point you found insight in this community and thought it was worth spending your time here. Did you at that time imagine you could write an accurate and insightful comment that would be voted to −8?
In one day, in one thread, I have ‘lost’ roughly 70 ‘karma’. All from objecting to the notion that appeals to authority can be valid, and from my disparaging on Bayesian probabilism’s capability to make truth statements in a non-probabilistic fashion.
I expected better of you all, and I have learned my lesson.
For what it’s worth, as someone who has been reading your various exchanges without becoming involved in them (even to the extent of voting), I think your summary of the causes of that shift leaves out some important aspects.
That aside, though, I suspect your conclusion is substantively correct: karma-shift is a reasonable indicator (though hardly a perfectly reliable one) of how your recent behavior is affecting your reputation here, and if you expected that your recent behavior would improve your reputation your expectations were way misaligned with the reality of this site.
if you expected that your recent behavior would improve your reputation your expectations were way misaligned with the reality of this site.
Even assuming Logos was entirely correct about all his main points it would be bizarre to expect anything but a drastic drop in reputation in response to Logos’ recent behavior. This requires only a rudimentary knowledge of social behavior.
, and if you expected that your recent behavior would improve your reputation your expectations were way misaligned with the reality of this site.
It’s a question of degree. I realized from the outset that I’m essentially committing heresy against sacred beliefs of this community. But I expected a greater capacity for rationality.
I’m a non-Bayesian with over 6000 karma. I’ve started discussion threads about problems with finding the right priors in a Bayesian context and have expressed skepticism that any genuinely good set of priors exists. Almost every time I post on something related to AGI it is to discuss reasons why I think fooming isn’t likely. I’m not signed up for cryonics and have made multiple comments discussing problems with it from both a strict utilitarian perspective and from a more general framework. When there was a surge of interest in bitcoins here I made a discussion thread pointing out a potentially disturbing issue with that. One of my very first posts here was arguing that phlogiston is a really bad example of an unfalsifiable theory, and I’ve made this argument repeatedly here, despite phlogiston being the go-to example here for a bad scientific theory (although I don’t seem to have had much success in convincing anyone).
I have over 6000 karma. A few days ago I had gained enough karma to be one of the top contributors in the last 30 days. (This signaled to me that I needed to spend less time here and more time being actually productive.)
It should be clear from my example that arguing against “sacred beliefs” here does not by itself result in downvotes. And it isn’t like I’ve had those comments get downvoted and balanced out by my other remarks. Almost all such comments have been upvoted. I therefore have to conclude that either the set of heresies here is very different than what I would guess or something you are doing is getting you downvoted other than your questioning of sacred beliefs.
It would not surprise me if quality of arguments and their degree of politeness matter. It helps to keep in mind that in any community with a karma system or something similar, high quality, polite arguments help more. Even on Less Wrong, people often care a lot about civility, sometimes more than logical correctness. As a general rule of thumb in internet conversations, high quality arguments that support shared beliefs in a community will be treated well. Mediocre or low quality arguments that support community beliefs will be ignored or treated somewhat positively. At the better places on the internet high quality arguments against communal beliefs will be treated with respect. Mediocre or low quality arguments against communal beliefs will generally be treated harshly. That’s not fair, but it is a good rule of thumb. Less Wrong is better than the vast majority of the internet but in this regard it is still roughly an approximation of what you would expect on any internet community.
So when one is trying to argue against a community belief, you need to be very careful to have your ducks lined up in a row. Have your arguments carefully thought out. Be civil at all times. If something is not going well take a break and come back to it later. Also, keep in mind that aside from shared beliefs, almost any community has shared norms about communication and behavior and these norms may have implicit elements that take time to pick up. This can result in harsh receptions unless one has either spent a lot of time in the community or has carefully studied the community. This can make worse the other issues mentioned above.
I’m a non-Bayesian with over 6000 karma. I’ve started discussion threads about problems with finding the right priors in a Bayesian context and have expressed skepticism that any genuinely good set of priors exists.
That’s a standard element of Bayesian discourse, actually. The notion I’ve been arguing for, on the other hand, fundamentally violates Bayesian epistemology. And yes, I haven’t been highly rigorous about it; but then, I’m also really not all that concerned about my karma score in general. I was simply noting it as demonstrative of something.
However, actively dishonest rhetorical tactics have been taken up ‘against’ me in this thread, and that is what I have reacted strongly negatively against.
That’s a standard element of Bayesian discourse, actually.
That’s an interesting notion which I’d be curious to hear the Bayesians here comment on. Do you agree that discussions of whether good priors are even possible are standard Bayesian discourse?
However, actively dishonest rhetorical tactics have been taken up ‘against’ me in this thread, and that is what I have reacted strongly negatively against.
I haven’t seen any indications of that in this thread. I do however recall the unfortunate communication lapses that you and I apparently had in the subthread on anti-aging medicine, and it seems that some of the comments you made there fit a similar pattern of accusing people of “actively dishonest rhetorical tactics” (albeit less extreme in that context). Given that two similar issues have occurred on wildly different topics, there seem to be two different explanations: 1) There is a problem with most of the commentators at Less Wrong 2) Something is occurring with the other common denominator of these discussions.
I know you aren’t a Bayesian so I won’t ask you to estimate the probabilities in these two situations. But let me ask a different question: If we took a neutral individual who has never gone to LW and isn’t a Bayesian, do you think you would bet even that on reading this thread they would agree with you and think there’s a problem with the Less Wrong community? What about 2-1 odds? 3-1? If not, why not?
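For readers less fluent in odds language, a small helper makes the correspondence explicit: even, 2-1, and 3-1 odds imply break-even probabilities of 1/2, 2/3, and 3/4 (the helper is mine, written for illustration):

```python
from fractions import Fraction

def odds_to_probability(favorable: int, against: int) -> Fraction:
    """Break-even probability implied by a favorable:against bet."""
    return Fraction(favorable, favorable + against)

for odds in ((1, 1), (2, 1), (3, 1)):
    print(odds, odds_to_probability(*odds))
```

Framing the question as bets one would accept sidesteps any commitment to a formal notion of probability, which is presumably why it was phrased that way.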
1) There is a problem with most of the commentators at Less Wrong 2) Something is occurring with the other common denominator of these discussions.
Very gently worded. It is my current belief that both statements are true. I have never before so routinely encountered such difficulty in expressing my ideas and having them be understood, despite the fact that the inferential gap between myself and others is… well, larger than what I have ever witnessed between any other two people save those with severely abnormal psychology. When I’m feeling particularly “existential” I sometimes worry about what that means about me.
On the other hand, I have also never before encountered a community whose dialogue was so deeply entrenched with so many unique linguistic constructs as LessWrong. I do not, fundamentally, disapprove of this: language shapes thought, after all. But this does also create a problem that if I cannot express my ideas within that patois, then I am not going to be understood—with the corollary that ideas which directly violate those constructs’ patterns will be violently rejected as incomprehensible, “wrong”, or “confused”.
But let me ask a different question: If we took a neutral individual who has never gone to LW and isn’t a Bayesian, do you think you would bet at even odds that, on reading this thread, they would agree with you and think there’s a problem with the Less Wrong community? What about 2-1 odds? 3-1? If not, why not?
I am disappointed by this. Shifting the framework of the question from the personal perspective to the outside view does not substantively make it a different question.
On the other hand, I have also never before encountered a community whose dialogue is so deeply steeped in so many unique linguistic constructs as LessWrong’s. I do not, fundamentally, disapprove of this: language shapes thought, after all. But it does also create a problem: if I cannot express my ideas within that patois, then I am not going to be understood—with the corollary that ideas which directly violate those constructs’ patterns will be violently rejected as incomprehensible, “wrong”, or “confused”.
I’m not sure this is the case. At least it has not struck me as the case. There are a fair number of constructs here that are specific to LW, and a larger set that, while not specific to LW, are not common. But in my observation this results much more frequently in people on LW not explaining themselves well to newcomers. It rarely seems to result in people not being understood or being rejected as confused. The most likely way for that to happen is for someone to try to speak in the LW patois without internalizing the meaning of the terms.
I am disappointed by this. Shifting the framework of the question from the personal perspective to the outside view does not substantively make it a different question.
Not my intended point with the question. I wanted an outside-view opinion in general and wanted your estimate of what it would be like. I phrased it in terms of a bet so one would not need to talk about any notion of probability but could just speak of what bets you would be willing to take.
I’m not sure this is the case. At least it has not struck me as the case.
To be frank, have you ever encountered a scenario which displays this phenomenon as vulgarly as this one?
The most likely way for that to happen is for someone to try to speak in the LW patois without internalizing the meaning of the terms.
Well, again; note—what I’m doing here is actually directly violating the framework of the “LWP”. Those who have internalized it, but are unfamiliar with my own framework, would have significant barriers to comprehension. And that is very frequently associated with all sorts of negative reactions—especially when by that framework I am clearly a very confused person who keeps asserting that I am not the one who’s confused here.
I wanted an outside-view opinion in general and wanted your estimate of what it would be like.
I can see why you think it would be an interesting question. I have, however, no opinion or belief on the matter; it is a thoroughly uninteresting question to me.
To be frank, have you ever encountered a scenario which displays this phenomenon as vulgarly as this one?
I’m not at all convinced that that is what is going on here, and this doesn’t seem to be a very vulgar case if I am interpreting your meaning correctly. You seem to think that people are responding in a much more negative and personal fashion than they are.
Well, again; note—what I’m doing here is actually directly violating the framework of the “LWP”. Those who have internalized it, but are unfamiliar with my own framework, would have significant barriers to comprehension.
So the solution is not to just use your own language and get annoyed when people fail to respond positively. The solution is either to use a common framework (e.g., very basic English), to carefully translate into the new language, or to start off by constructing a helpful dictionary. In general, it is socially rude and unlikely to be productive to go to any area of the internet where there’s a specialized vocabulary and not only not learn it but use a different vocabulary that has overlapping words. I wouldn’t recommend this in a Dungeons and Dragons forum either.
I can see why you think it would be an interesting question. I have, however, no opinion or belief on the matter; it is a thoroughly uninteresting question to me.
This is unfortunate. It is a question that, while uninteresting to you, may help you calibrate what is going on. I would tentatively suggest spending a few seconds on the question before dismissing it.
In general, it is socially rude and unlikely to be productive to go to any area of the internet where there’s a specialized vocabulary and not only not learn it but use a different vocabulary that has overlapping words. I wouldn’t recommend this in a Dungeons and Dragons forum either.
e.g. “d20 doesn’t mean a twenty-sided die; it refers to the bust and cup size of a female NPC!”
Well, again; note—what I’m doing here is actually directly violating the framework of the “LWP”.
Also the framework presented in “A Practical Study of Argument” by Govier—my textbook from my first-year Philosophy class, “Critical Thinking”. It is actually the only textbook I kept from my first undergrad degree—definitely recommended for anyone wanting to get up to speed on pre-Bayesian rational thinking and argument.
Nonsense. This is exactly on topic. It isn’t my “Less Wrong framework” you are challenging. When I learned about thinking, reasoning and fallacies, LessWrong wasn’t even in existence. For that matter, Eliezer’s posts on OvercomingBias weren’t even in existence. Your claim that the response you are getting is the result of your violation of LessWrong-specific beliefs is utterly absurd.
So that justifies your assertion that I violate the basic principles of logic and argumentation?
What justifies my assertion that you violate the basic principles of logic and argumentation is Trudy Govier, “A Practical Study of Argument”, 4th edition—Chapter 5 (“Premises—What to accept and why”), page 138. Under the subheading “Proper Authority”.
For an explanation of when and why an appeal to authority is, in fact, fallacious, see pages 141, 159 and 434. Or Wikipedia. Either way, my disagreement with you has nothing to do with what I learned on LessWrong. If I’m wrong, it’s the result of my prior training and an independent flaw in my personal thinking. Don’t try to foist this off on LessWrong groupthink. (That claim would be credible if we were arguing about, say, cryonics.)
What justifies my assertion that you violate the basic principles of logic and argumentation is Trudy Govier, “A Practical Study of Argument”, 4th edition—Chapter 5 (“Premises—What to accept and why”), page 138. Under the subheading “Proper Authority”.
You are really going to claim that by logically arguing over what qualifies as a valid argument I violate the basic principles of argumentation and logic?
Just guessing from the chapter and subheading titles, but I’m pretty sure that bit of “A Practical Study of Argument” has to do with why arguments from authority are not always fallacious.
You are really going to claim that by logically arguing over what qualifies as a valid argument I violate the basic principles of argumentation and logic?
Then by all means enlighten me as to how it can be possible that merely by disagreeing with Govier on the topic of appeals to authority, and in doing so providing explanations based on deduction and induction, I “violate the basic principles of logic and argumentation”.
I don’t know why this hasn’t been done before: appeal to authority on wikipedia.
As far as I can tell, this definition is what the rest of us are talking about, and it specifically says that appealing to authority only becomes a fallacy if a) the authority is not a legitimate expert, or b) it is used to prove the conclusion must be true. If you disagree with WP’s definition, could you lay out your own?
EDIT: Right, saying that without providing some context was probably a bad idea. I’m not trying to disparage Jack’s comment here; it’s of the same general form as an appeal to external authority, and I’d expect that to come across without saying so. But if you’re being extra super pedantic...
Hey, I didn’t downvote you. (I actually thought that stating “Bare assertion.” as a bare assertion was metahilarious, but I didn’t think it technically applied.)
I appreciate the clarification but what he said was not as bad as a bare assertion. In fact, what he said was not unsupported at all! He was speaking as the best authority we have here on Logos01′s comprehension and reading habits. A bare assertion would have been if he made a claim about which we have no reason to think he is an authority (like, say, the rules of inductive logic).
What, you can’t appeal to your own authority? What would you call “because I said so”?
A bare assertion, as Nornagest indicated. Also a form of fallacy. If I had done such a thing, that would be worthy of consideration here. I have not, so we can safely stop here.
Evidence itself can be mistaken. If your theory says an event is rare, and it happens, then that is evidence against the theory. If the theory is correct, it should be overwhelmed by evidence for the theory. If the statements of experts statistically correlate with reality, then you should update on the statements of experts until they are screened off by evidence/arguments you have looked at more directly or the statements of other experts.
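The update rule being described can be sketched concretely. This is a toy illustration, not anything from the thread; the 90%/30% reliability figures are invented for the example:

```python
def update(prior, p_say_given_h, p_say_given_not_h):
    # Bayes' rule in probability form:
    # P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    numerator = p_say_given_h * prior
    return numerator / (numerator + p_say_given_not_h * (1 - prior))

# Suppose experts in this field assert a claim 90% of the time when it
# is true and 30% of the time when it is false (invented numbers).
prior = 0.5
after_expert = update(prior, 0.9, 0.3)  # 0.75: the bare assertion moved us
```

The point of the sketch is only that an assertion with a likelihood ratio different from 1 is evidence; direct evidence examined later then screens off the testimony, as the comment says.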
… it doesn’t follow that because authorities are unreliable, their assertions without supporting evidence should not adjust your weighting in belief on any specific claim?
Being less than a perfect oracle does not make an information source worthless.
Individuals are not information sources but witnesses of information sources. Experts are useful heuristically, for trust-system utilization, but should, whenever possible, not be taken directly as sources of reliable, validated data. Whatever experience or observations an expert has made in a given field, if related directly, can be treated as valid data, however. But the argument in this case is NOT “I am an expert in {topic of X} and I say X is true” but rather “I am an expert in {topic of X} and I have observed X.”
At which point, you may safely update your beliefs towards X being likely true—and it is again vital to note, for purposes of this dialogue, the difference between assertions by individuals and testimonies of evidence.
Anything that correlates to an information source is necessarily itself an information source.
This is either a trivially true statement or else a conflation of terminology. In the context of where we are in this dialogue, your statement is dangerously close to the latter (the conflation fallacy). A source of information which conveys information about another source of information is not that second source of information. A witness, by definition, is a source of information—yes. But that witness is not itself anything other than a source of information which relays information about the relevant source of information.
I’m really confused about why you’re not understanding this. Authorities are reliable to different degrees about different things. If I tell you I’m wearing an orange shirt that is clearly evidence that I am wearing an orange shirt. If a physicist tells you nothing can accelerate past the speed of light that is evidence that nothing can accelerate past the speed of light. Now, because people can be untrustworthy there are many circumstances in which witness testimony is less reliable than personal observation. But it would be rather bothersome to upload a picture of me in my shirt to you. It can also be difficult to explain special relativity and the evidence for it in a short time span. In cases like these we must settle for the testimony of authorities. This does not make appeals to authority invalid.
Now of course you might have some evidence that suggests I am not a reliable reporter of the color of my shirt. Perhaps I have lied to you many times before or I have incentives to be dishonest. In these cases it is appropriate to discount my testimony to the degree that I am unreliable. But this is not a special problem with appeals to authority. If you have reason to think you are hallucinating, perhaps because of the LSD you took an hour ago, you should appropriately discount your eyes telling you that the trees are waving at you.
Now, since appeals to authority, like other sources of information, are not 100% reliable, it makes sense to discuss the judgment of authorities in detail. Even if Eliezer is a reliable authority on lots of things, it is a good idea to examine his reasoning. In this regard you are correct to demand arguments beyond “Eliezer says so”. But it is nonetheless not the case that “appeals to authority are always fallacious”. On the contrary, modern science would be impossible without them, since no one can possibly make all the observations necessary to support the reliability of a modern scientific theory.
I’m really confused about why you’re not understanding this.
You are confused because you do not understand, not because I do not understand.
If a physicist tells you nothing can accelerate past the speed of light that is evidence that nothing can accelerate past the speed of light.
Simply put, no. No it is not. Not unless the physicist can provide a reason to believe he is correct. Now, in common practice we assume that he can—but only because it is normal for an expert in a given field to actually be able to do this.
Here’s where your understanding, by the way, is breaking down: the difference between practical behavior and valid behavior. Bayesian rationality in particular is highly susceptible to this problem, and it’s one of my main objections to the system in principle: that it fails to parse the process of forming beliefs from the process of confirming truth.
On the contrary, modern science would be impossible without them, since no one can possibly make all the observations necessary to support the reliability of a modern scientific theory.
No. This is a deeply wrong view of how science is conducted. When a researcher invokes a previous publication, what they are appealing to is not an authority but rather the body of evidence as provided. No researcher could ever get away with saying, “Dr. Knowsitall states that X is true”—not without providing a citation of a paper where Dr. Knowsitall demonstrated that the belief was valid. Authorities often possess such bodies of evidence and can readily convey said information, so it’s easy to understand how this is so confusing for you folks, since it’s a fine nuance that inverts your normal perspectives on how beliefs are to be formed, and more importantly demonstrates an instance where the manner in which one forms beliefs is separated from valid claims of truth.
I’ll say it one last time: trusting someone has valid evidence is NOT the same thing as an appeal to authority, though it is a form of failure in efforts to determine truth.
Here’s where your understanding, by the way, is breaking down: the difference between practical behavior and valid behavior. Bayesian rationality in particular is highly susceptible to this problem, and it’s one of my main objections to the system in principle: that it fails to parse the process of forming beliefs from the process of confirming truth.
Here’s what may be tripping you up. Very often it doesn’t even make sense for humans to pay attention to small bits of evidence, because we can’t really process them very effectively. So for most bits of tiny evidence (such as most very weak appeals to authority), the correct action, given our limited processing capability, is often simply to ignore them. But this doesn’t make them not evidence.
Say, for example, I have a conjecture about all positive integers n, and I check it for every n up to 10^6. If I then verify it for 10^6+1, working out how much more certain I should be is really tough, so I only treat the evidence in aggregate. Similarly, if I have a hypothesis that all ravens are black, and I’ve checked it for a thousand ravens, I should be more confident when I check the 1001st raven and find it is black. But actually doing that update is difficult.
The question then becomes for any given appeal to authority, how reliable is that appeal?
Note that in almost any field there’s going to be some degree of reliance on such appeals. In math, for example, there’s a very deep result called the classification of finite simple groups. The proof of the full result is thousands of pages spread across literally hundreds of separate papers. It is possible that at some point some people have really looked at almost the whole thing, but the vast majority of people who use the classification certainly have not. Relying on the classification is essentially an appeal to authority.
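The raven example above can be made concrete with a hedged toy model (all numbers invented): pit “all ravens are black” against a rival hypothesis under which each raven is independently black with probability 0.99. Each observed black raven then multiplies the odds by 1/0.99, which is trivial to handle in aggregate but a negligible nudge individually:

```python
import math

def posterior_all_black(prior, n_black_ravens, p_black_alt=0.99):
    # Log-odds for H ("all ravens are black") versus the 99%-black rival.
    # Each black raven adds -log(0.99) to the log-odds in favor of H.
    log_odds = math.log(prior / (1 - prior)) - n_black_ravens * math.log(p_black_alt)
    return 1 / (1 + math.exp(-log_odds))

p_1000 = posterior_all_black(0.5, 1000)
p_1001 = posterior_all_black(0.5, 1001)
# The 1001st black raven still raises the posterior, just imperceptibly.
```

After a thousand black ravens the posterior is already near 1, so the marginal raven is exactly the kind of tiny evidence that is easiest to treat in aggregate.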
Here’s what may be tripping you up. Very often it doesn’t even make sense for humans to pay attention to small bits of evidence, because we can’t really process them very effectively. So for most bits of tiny evidence (such as most very weak appeals to authority), the correct action, given our limited processing capability, is often simply to ignore them. But this doesn’t make them not evidence.
That’s a given, and I have said exactly as much repeatedly. Reiterating it as though it were introducing a new concept to me or one that I was having trouble with isn’t going to be an effective tool for the conversation at hand.
So I’m confused about how you would believe that humans have limited enough processing capacity for this to be an issue, but in an earlier thread thought that humans were close enough to being good Bayesians that Aumann’s agreement theorem should apply. In general, the computations involved in an Aumann update from sharing posteriors will be much more involved than the computations in updating on a single tiny piece of evidence.
In general, the computations involved in an Aumann update from sharing posteriors will be much more involved than the computations in updating on a single tiny piece of evidence.
The computations for a normal update are two multiplications and a division. Now look at the computations involved in an explicit proof of Aumann’s theorem. The standard proof is constructive. You can see that while it largely just involves a lot of addition, multiplication and division, you need to do it carefully for every single hypothesis. See this paper by Scott Aaronson www.scottaaronson.com/papers/agree-econ.ps which gives a more detailed analysis. That paper shows that the standard construction for Aumann is not as efficient as you’d want for some purposes. However, the primary upshot for our purposes is that the time required is roughly exponential in 1/eps, where eps is how close you want the two computationally limited Bayesian agents to agree, whereas the equivalent cost for a standard multiplication and division is slightly worse than linear in 1/eps.
An important caveat is that I don’t know what happens if you try to do this in a Blum-Shub-Smale model, and in that context it may be that the difference goes away. BSS machines are confusing and I don’t understand them well enough to figure out how much the protocol gets improved if one has two units with BSS capabilities able to send real numbers. Since humans can’t send exact real numbers or do computations with them in the general case, this is a mathematically interesting issue but not a terribly important one for our purposes.
Alright, this is along the lines of what I thought you might say. A couple of points to consider.
1) In general discourse, humans use fuzzy approximations rather than precise statements. These have the effect of simplifying such ‘calculations’.
2) Backwards propagation of any given point of fact within that ‘fuzziness’ prevents the need for recalculation if the specific item is sufficiently trivial—as you noted.
3) None of this directly correlates to the topic of whether appeals to authority are ever legitimate in supporting truth-claims. Please note that I have differentiated between “I believe X” and “I hold X is true.”
I understand that this conflicts with the ‘baseline’ epistemology of Bayesian rationality as used on LessWrong… my assertion as this conversation has progressed has been that this represents a strong failure of Bayesian rationality; the inability to establish claims about the territory as opposed to simply refining the maps.
Your own language continues to perpetuate this problem: “The computation for a normal update”.
I’m not sure I understand your response. Point 3 is essentially unrelated to my question concerning small updates vs. Aumann updates, which makes me confused as to what you are trying to say with the three points and whether they are a disagreement with my claim, an argument that my claim is irrelevant, or something else.
Regarding 1, yes, that’s true, and 2 sort of is, but that’s the same thing that working to within an order of epsilon does. Our epsilon keeps track of how fuzzy things should be. The point is that for the same degree of fuzziness it is much easier to do a single update on evidence than it is to do an Aumann-type update based on common knowledge of current estimates.
I understand that this conflicts with the ‘baseline’ epistemology of Bayesian rationality as used on LessWrong… my assertion as this conversation has progressed has been that this represents a strong failure of Bayesian rationality; the inability to establish claims about the territory as opposed to simply refining the maps.
Your own language continues to perpetuate this problem: “The computation for a normal update”.
I don’t understand what you mean here. Updates are equivalent to claims about the territory. Refining the map, whether one is or is not a Bayesian, means one is making a narrower claim about what the territory looks like.
Refining the map, whether one is or is not a Bayesian, means one is making a narrower claim about what the territory looks like.
Clarification: by “narrow claim” you mean “allowing for a smaller range of possibilities”.
That is not equivalent to making a specific claim about a manifested instantiation of the territory. It is stating that you believe the map more closely resembles the territory; it is not saying the territory is a certain specific way.
That is not equivalent to making a specific claim about a manifested instantiation of the territory. It is stating that you believe the map more closely resembles the territory; it is not saying the territory is a certain specific way.
If a Bayesian estimates with high probability (say, >.99) that the Earth is around eight light-minutes from the sun, then how is that not saying that the territory is a certain specific way? Do you just mean that a Bayesian can’t be absolutely certain that the territory matches their most likely hypothesis? Most Bayesians, and for that matter most traditional rationalists, would probably consider that a feature rather than a bug of an epistemological system. If you don’t mean this, what do you mean?
If a Bayesian estimates with high probability (say, >.99) that the Earth is around eight light-minutes from the sun, then how is that not saying that the territory is a certain specific way?
I was afraid of this. This is an epistemological barrier: if you express a notion in probabilistic form then you are not holding a specific claim.
Do you just mean that a Bayesian can’t be absolutely certain that the territory matches their most likely hypothesis?
I mean that Bayesian nomenclature does not permit certainty. However, I also assert that (naive) certainty exists.
Well, Bayesian nomenclature does permit certainty: just set P=1 or P=0. It isn’t that the nomenclature doesn’t allow it; it is that good Bayesians don’t ever say it. To use a possibly silly analogy, the language of Catholic theology allows one to talk about Jesus being fully man and not fully divine, but a good Catholic will never assert that Jesus was not fully divine.
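One concrete reason good Bayesians avoid P=1 or P=0 can be sketched as a toy calculation (the likelihood numbers are invented): under Bayes’ rule an extreme prior is immovable, since no likelihood ratio can shift it.

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' rule in probability form.
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Even overwhelming counter-evidence cannot move a prior of exactly 1:
stuck = update(1.0, 0.001, 0.999)  # stays at 1.0
```

So the nomenclature permits certainty, but adopting it freezes the belief against all future evidence, which is why it is treated as bad practice rather than as forbidden syntax.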
However, I also assert that (naive) certainty exists.
In what sense does it exist? In the sense that human brains function with a naive certainty element? I agree that humans aren’t processing marginally probable descriptors. Are you claiming that a philosophical system that works should allow for naive certainty as something it can talk about and think exists?
Hmm, but Bayesians agree with some form of naive realism in general. What they disagree with you on is whether they have any access to that universe other than in a probabilistic fashion. Alternatively, a Bayesian could even deny naive realism and do almost everything correctly. Naive realism is a stance about ontology. Bayesianism is primarily a stance about epistemology. They do sometimes overlap and can inform each other, but they aren’t the same thing.
I assert that there are knowable truth claims about the universe—the “territory”—which are absolute in nature. I related one as an example; my knowledge of the nature of my ongoing observation of how many fingers there are on my right hand. (This should not be conflated for knowledge of the number of fingers on my right hand. That is a different question.) I also operate with an epistemological framework which allows knowledge statements to be made and discussed in the naive-realistic sense (in which case the statement “I see five fingers on my hand” is a true statement of how many fingers there really are—a statement not of the nature of my map but of the territory itself.).
Thus I reserve assertions of truth for claims that are definite and discrete, and differentiate between the model of material reality and the substance of reality itself when making claims. Probabilistic statements are always models, by category; no matter how closely they resemble the territory, claims about the model are never claims in specific about the territory.
… For any given specific event of which I am cognizant and aware, yes. With the important caveat that this perfection is not a heritable property beyond that specific cognizance event.
Ok. I’m curious if you have ever been on any form of mind-altering substance. Here’s a related anecdote that may help:
When I was a teenager I had to get my wisdom teeth taken out. As a painkiller I was put on percocet or some variant thereof. Since I couldn’t speak, I had to communicate with people by writing things down. After I had recovered somewhat, I looked at what I had been writing down to my family. A large part apparently focused on the philosophical ramifications of being aware that my mind was behaving in a highly fractured fashion, and incoherent ramblings about what this apparently did to my sense of self. I seemed particularly concerned with the fact that I could tell that my memories were behaving in a fragmented fashion but that this had not altered my intellectual sense of self as a continuing entity. Other remarks I made apparently were substantially less coherent than even that, but it was clear from what I had written that I had considered them to be deep philosophical thoughts and descriptions of my mind state.
In this context, I was cognizant and aware of my mental state. To say that my perception was accurate is pretty far off.
To use a different example, do you ever have thoughts that don’t quite make sense when you are falling asleep or waking up? Does your cognition seem perfectly accurate then? Or, when you make an arithmetic mistake, are you aware that you’ve misadded before you find the mistake?
In this context, I was cognizant and aware of my mental state. To say that my perception was accurate is pretty far off.
Hence: “With the important caveat that this perfection is not a heritable property beyond that specific cognizance event.”
but it was clear from what I had written that I had considered them to be deep philosophical thoughts and descriptions of my mind state.
Was it to you or to someone else that I stressed that recollections are not a part of my claim?
To use a different example, do you ever have thoughts that don’t quite make sense when you are falling asleep or waking up? Does your cognition seem perfectly accurate then?
“For any given specific event of which I am cognizant and aware, yes.” -- also, again, note the caveat of non-heritability. Whether that cognizant-awareness is of inscrutability is irrelevant to the knowledge of that specifically ongoing instance of a cognizant-awareness event.
Or, when you make an arithmetic mistake, are you aware that you’ve misadded before you find the mistake?
I am going to ask you to acknowledge how irrelevant this ‘example’ is to my claim, as a means of gauging where you are on understanding it.
Hence: “With the important caveat that this perfection is not a heritable property beyond that specific cognizance event.”
Right. For those specific cognizance events I was pretty damn sure that I knew how my mind was functioning. And the reactions I got to percocet are on the milder end of what mind-altering substances can do.
Whether that cognizant-awareness is of inscrutability is irrelevant to the knowledge of that specifically ongoing instance of a cognizant-awareness event.
This makes me wonder if your claim is intended to be non-falsifiable. Is there anything that would convince you that you aren’t as aware as you think you are?
I am going to ask you to acknowledge how irrelevant this ‘example’ is to my claim, as a means of gauging where you are on understanding it.
Potentially quite far since this seems to be in the same category if I’m understanding you correctly. The cognitive awareness that I’ve added correctly is pretty basic, but one can screw up pretty easily and still feel like one is completely correct.
Right. For those specific cognizance events I was pretty damn sure that I knew how my mind was functioning.
Whether you were right or wrong about how your mind was functioning is irrelevant to the fact that you were aware of what it was that you were aware of. How accurate your beliefs were about your internal functioning is irrelevant to how accurate your beliefs were about what it was you were, at that instant, currently believing. These are fundamentally separate categories.
This is why I used an external example rather than internal, initially, by the way: the deeply recursive nature of where this dialogue is going only serves as a distraction from what I am trying to assert.
This makes me wonder if your claim is intended to be non-falsifiable. Is there anything that would convince you that you aren’t as aware as you think you are?
I haven’t made any assertions about how aware I believe I or anyone else is. I have made an assertion about how valid any given belief is regarding the specific, individual, ongoing cognition of the self-same specific, ongoing, individual cognition-event. This is why I have stressed the non-heritability.
The claim, in this case, is not so much non-falsifiable as it is tautological.
The cognitive awareness that I’ve added correctly is pretty basic
That it is basic does not mean that it is of the same category. Awareness of past mental states is not equivalent to awareness of ongoing mental states. This is why I specifically restricted the statement to ongoing events. I even previously stressed that recollections have nothing to do with my claim.
but one can screw up pretty easily and still feel like one is completely correct.
I’ll state this with the necessary recursion to demonstrate further why I would prefer we not continue using any cognition event not deriving from an external source: one would be correct to feel correct about feeling correct; but that does not mean that one would be correct about the feeling-correct that he is correct to feel correct about feeling-correct.
Simply put, no. No it is not. Not unless the physicist can provide a reason to believe he is correct. Now, in common practice we assume that he can—but only because it is normal for an expert in a given field to actually be able to do this.
That’s what makes the physicist an authority. If something is a reliable source of information “in practice” then it is a reliable source of information. Obviously if the physicist turns out not to know what she is talking about then beliefs based on that authority’s testimony turn out to be wrong.
Here’s where your understanding, by the way, is breaking down: the difference between practical behavior and valid behavior. Bayesian rationality in particular is highly susceptible to this problem, and it’s one of my main objections to the system in principle: that it fails to separate the process of forming beliefs from the process of confirming truth.
The validity of a method is its reliability.
No researcher could ever get away with saying, “Dr. Knowsitall states that X is true”—not without providing a citation of a paper where Dr. Knowsitall demonstrated that belief was valid.
The paper where Dr. Knowsitall demonstrated that belief is simply his testimony regarding what happened in a particular experiment. It is routine for that researcher to not have personally duplicated prior experiments before building on them. The publication of experimental procedures is of course crucial for maintaining high standards of reliability and trustworthiness in the sciences. But ultimately no one can check the work of all scientists, and therefore trust is necessary.
Here is an argument from authority for you: This idea of appeals to authority being legitimate isn’t some weird Less Wrong, Bayesian idea. It is standard, rudimentary logic. You don’t know what you’re talking about.
Authorities on a given topic can be mistaken; and as such no appeal to an authority can be, absent any other actual supporting materials, treated as a rational motive for belief.
P(X is true | someone who I consider well-educated in the field of interest stated X is true) > P(X is true)
Restating your argument in the form of a Bayesian probability statement isn’t going to increase its validity.
P(X | affirming statement by an authority in the subject of X) = P(X).
It has no bearing, and in fact is demonstrative of a broken heuristic. I’ll try to give another example to explain why. An expert’s expertise in a field is only as valid as his accuracy in the field. The probability of his expertise being, well, relevant is dependent upon his ability to make valid statements. Assigning probability to the validity of a statement by an expert, thus, on the fact that the expert has made a statement is putting the cart before the horse. It’s like saying that because a coin has come up heads every time you’ve flipped it before it’s now likely to come up heads this time.
It’s like saying that because a coin has come up heads every time you’ve flipped it before it’s now likely to come up heads this time.
I’m puzzled and wonder if I’m missing your point, because this update makes perfect sense to me. Let’s say that I start with a prior for whether the coin is fair, P(fair), a prior for whether it is biased towards heads, P(headbiased), and a prior for whether it is biased towards tails, P(tailbiased). My updated probability P(headbiased) increases if I get lots of heads on the coin and few or no tails. It’ll probably help if we understand each other on this simpler example before moving on to appeals to authority.
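The update described here can be run as a small sketch. The three candidate biases and the equal priors below are assumptions chosen purely for illustration:

```python
# Candidate hypotheses about the coin's P(heads): fair, head-biased, tail-biased.
hypotheses = {"fair": 0.5, "headbiased": 0.9, "tailbiased": 0.1}
prior = {h: 1 / 3 for h in hypotheses}  # equal priors, purely for illustration

def update(prior, flips):
    """Posterior over the hypotheses after observing a flip sequence like 'HHTH'."""
    unnormalized = {}
    for h, p_heads in hypotheses.items():
        likelihood = 1.0
        for flip in flips:
            likelihood *= p_heads if flip == "H" else 1 - p_heads
        unnormalized[h] = prior[h] * likelihood
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

posterior = update(prior, "H" * 10)  # ten heads in a row
# Predictive probability that the *next* flip is heads, averaged over hypotheses.
p_next_heads = sum(posterior[h] * hypotheses[h] for h in hypotheses)
```

Past heads do raise the probability of heads on the next flip here, precisely because they shift weight onto the head-biased hypothesis.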
The fact of a person’s belief is evidence weighted according to the reliability of that person’s mechanisms for establishing the belief. To refuse to update on another person’s belief means supposing that it is uncorrelated with reality.
To fail to allow for others to be mistaken when weighting your own beliefs is to risk forming false beliefs yourself. Furthermore; establishing the reliability of a person’s mechanisms for establishing a belief is necessary for any given specific claim before expertise on said claim can be validated. The process of establishing that expertise then becomes the argument, rather than the mere assertion of the expert.
We use trust systems—trusting the word of experts without investigation—not because it is a valid practice but because it is a necessary failing of the human condition that we lack the time and energy to properly investigate every possible claim.
You must of course allow for the possibility of the other person being mistaken, otherwise you would simply substitute their probability estimate for your own. But to fail to update on the fact of someone’s belief prior to obtaining further information on the reliability of their mechanisms for determining the truth means defaulting to an assumption of zero reliability.
But to fail to update on the fact of someone’s belief prior to obtaining further information on the reliability of their mechanisms for determining the truth means defaulting to an assumption of zero reliability.
One should always assign zero reliability to any statement in and of itself, at which point it is the reliability of said mechanisms which is the argument, rather than the assertion of the individual himself. I believe I stated something very much like this already.
-- To rephrase this: it is not enough that Percival the Position-Holder tell me that Elias the Expert believes X. Elias the Expert must demonstrate to me that his expertise in X is valid.
If you have no evidence that Elias the Expert has any legitimate expertise, then you can reasonably weight his belief no more heavily than any random person holding the same belief.
If you know that he is an expert in a legitimate field that has a track record for producing true information, and he has trustworthy accreditation as an expert, you have considerably more evidence of his expertise, so you should weight his belief more heavily, even if you do not know the mechanisms he used to establish his belief.
Suppose that a physicist tells you that black holes lose mass due to something called Hawking radiation, and you have never heard this before. Prior to hearing any explanation of the mechanism or how the conclusion was reached, you should update your probability that black holes lose mass to some form of radiation, because it is much more likely that the physicist would come to that conclusion if there were evidence in favor of it than if there were not. You know enough about physicists to know that their beliefs about the mechanics of reality are correlated with fact.
Suppose that a physicist tells you that black holes lose mass due to something called Hawking radiation, and you have never heard this before. Prior to hearing any explanation of the mechanism or how the conclusion was reached, you should update your probability that black holes lose mass to some form of radiation,
No. What you should do is ask for a justification of the belief. If you do not have the resources available to you to do so, you can fail-over to the trust system and simply accept the physicist’s statement unexamined—but utilization of the trust-system is an admission of failure to have justified beliefs.
You know enough about physicists to know that their beliefs about the mechanics of reality are correlated with fact.
I know enough about physicists, actually, to know that if they cannot relate a mechanism for a given phenomenon and a justification of said phenomenon upon inquiry, then I have no reason to accept their assertions as true, as opposed to speculation. If I am to accept a given statement on any level higher than “I trust so”—that is, if I am to assign a high enough probability to the claim that I would claim myself that it were true—then I cannot rely upon the trust system but rather must have a justification of belief.
Justification of belief cannot be “A person who usually is right in this field claims this is so” but can be “A person who I have reason to believe would have evidence on this matter related to me his assessment of said evidence.”
The difference here is between having a buddy who is a football buff who tells you what the Sportington Sports beat the Homeland Highlanders by last night—even though you don’t know whether he had access to a means of having said information—as opposed to the friend you know watched the game who tells you the scores.
No. What you should do is ask for a justification of the belief. If you do not have the resources available to you to do so, you can fail-over to the trust system and simply accept the physicist’s statement unexamined—but utilization of the trust-system is an admission of failure to have justified beliefs.
If you want to increase the reliability of your probability estimate, you should ask for a justification. But if you do not increase your probability estimate contingent on the physicist’s claim until you receive information on how he established that belief, then you are mistreating evidence. You don’t treat his claim as evidence in addition to the evidence on which it was conditioned; you treat it as evidence of the evidence on which it was conditioned. Once you know the physicist’s belief, you cannot expect to raise your confidence in that belief upon receiving information on how he came to that conclusion. You should assign weight to his statement according to how much evidence you would expect a physicist in his position to have if he were making such a statement, and then when you learn what evidence he has you shift upwards or downwards depending on how the evidence compares to your expectation. If you revised upwards on the basis of the physicist’s say-so, and then revised further upwards based on his having about as much evidence as you would expect, that would be double-counting evidence; but if you do not revise upwards based on the physicist’s claim in the first place, that would be assuming zero correlation of his statement with reality.
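The evidence-of-evidence point can be sketched numerically. All the probabilities below are assumed for illustration, not claims about any actual physicist:

```python
# Illustrative model: the physicist asserts X only when he has the usual evidence E.
p_x = 0.5                                 # assumed prior on X
p_e_given_x, p_e_given_notx = 0.8, 0.1    # assumed strength of the usual evidence
p_assert_given_e = 1.0                    # he asserts X whenever he has seen E

# The assertion routes through the evidence: P(assert | X) = P(assert | E) * P(E | X),
# so updating on the assertion is updating on evidence *of* the evidence.
p_assert_given_x = p_assert_given_e * p_e_given_x
p_assert_given_notx = p_assert_given_e * p_e_given_notx
p_x_given_assert = (p_x * p_assert_given_x) / (
    p_x * p_assert_given_x + (1 - p_x) * p_assert_given_notx)

# Conditioning directly on the evidence itself yields the same posterior, so
# learning that the physicist has exactly the evidence you expected should not
# shift your estimate again; updating a second time would double-count.
p_x_given_e = (p_x * p_e_given_x) / (p_x * p_e_given_x + (1 - p_x) * p_e_given_notx)
```

The two posteriors coincide because the assertion carries no information beyond the evidence it was conditioned on; only evidence that differs from your expectation moves the estimate further.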
Justification of belief cannot be “A person who usually is right in this field claims this is so” but can be “A person who I have reason to believe would have evidence on this matter related to me his assessment of said evidence.”
You do not need the person to relate their assessment of the evidence to revise your belief upward based on their statement, you only need to believe that it is more likely that they would make the claim if it were true than if it were not.
The difference here is between having a buddy who is a football buff who tells you what the Sportington Sports beat the Homeland Highlanders by last night—even though you don’t know whether he had access to a means of having said information—as opposed to the friend you know watched the game who tells you the scores.
Anything that is more likely if a belief is true than if it is false is evidence which should increase your probability estimate of that belief. Have you read An Intuitive Explanation of Bayes’ Theorem, or any of the other explanations of Bayesian reasoning on this site?
If you have a buddy who is a football buff who tells you that the Sportington Sports beat the Homeland Highlanders last night, then you should treat this as evidence that the Sportington Sports won, weighted according to your estimate of how likely his claim is to correlate with reality. If you know that he watched the game, you’re justified in assuming a very high correlation with reality (although you also have to condition your estimate on information aside from whether he is likely to know, such as how likely he is to lie.) If you do not know that he watched the game last night, you will have a different estimate of the strength of his claim’s correlation with reality.
Have you read An Intuitive Explanation of Bayes’ Theorem, or any of the other explanations of Bayesian reasoning on this site?
I have read them repeatedly, and explained the concepts to others on multiple occasions.
If you have a buddy who is a football buff who tells you that the Sportington Sports beat the Homeland Highlanders last night, then you should treat this as evidence that the Sportington Sports won,
Not until such time as you have a reason to believe that he has a justification for his belief beyond mere opinion. Otherwise, it is a mere assertion regardless of the source—it cannot have a correlation to reality if there is no vehicle, other than his own imagination, through which the information he claims to have could have reached him, however accurate that imagination might be.
You do not need the person to relate their assessment of the evidence to revise your belief upward based on their statement, you only need to believe that it is more likely that they would make the claim if it were true than if it were not.
Which requires a reason to believe that to be the case. Which in turn requires that you have a means of corroborating their claim in some manner; the least-sufficient of which being that they can relate observations that correlate to their claim, in the case of experts that is.
If you want to increase the reliability of your probability estimate, you should ask for a justification.
A probability estimate without reliability is no estimate. Revising beliefs based on unreliable information is unsound. Experts’ claims which cannot be corroborated are unsound information, and should have no weighting on your estimate of beliefs solely based on their source.
If an expert’s claims are frequently true, then it can become habitual to trust them without examination. However, trusting individuals rather than examining statements is an example of a necessary but broken heuristic. We find the risk of being wrong in such situations acceptable because the expected utility cost of being wrong in any given situation, as an aggregate, is far less than the expected utility cost of having to actually investigate all such claims.
Further, the more such claims fall in line with our own priors—that is, the less ‘extraordinary’ they appear to us—the more likely we are not to require proper evidence.
The trouble is, this is a failed system. While it might be perfectly rational—instrumentally—it is not a means of properly arriving at true beliefs.
I want to take this opportunity to once again note that what I’m describing in all of this is proper argumentation, not proper instrumentality. There is a difference between the two; and Eliezer’s many works are, as a whole, targeted at instrumental rationality—as is this site itself, in general. Instrumental rationality does not always concern itself with what is true as opposed to what is practically believable. It finds the above-described risk of variance in belief from truth an acceptable risk, when asserting beliefs.
This is an area where “Bayesian rationality” is insufficient—it fails to reliably distinguish between “what I believe” and “what I can confirm is true”. It does this for a number of reasons, one of which is a foundational variance between what a Bayesian asserts is being measured when a Bayesian network discusses probabilities and what a frequentist asserts is being measured when frequentists discuss probabilities.
I do not fall totally in line with “Bayesian rationality” in this, and various other, topics, for exactly this reason.
There is a difference between the two; and Eliezer’s many works are, as a whole, targeted at instrumental rationality
What? No they aren’t. They are massively biased towards epistemic rationality. He has written a few posts on instrumental rationality but by and large they tend to be unremarkable. It’s the bulk of epistemic rationality posts that he is known for.
Have you read An Intuitive Explanation of Bayes’ Theorem, or any of the other explanations of Bayesian reasoning on this site?
I have read them repeatedly, and explained the concepts to others on multiple occasions.
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
It ought to prevent you from making errors like this:
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
Assuming he was able to explain them correctly, which I think we have a lot of reason to doubt.
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
If you meant those to be topical you’ve got your givens inverted.
It ought to prevent you from making errors like this:
Appeals to authority are always fallacious.
I’m sorry, but no matter how many times you reiterate it, until such time as you can provide a valid justification for the assertion that appeals to authority are not always fallacious, each iteration will be no more correct than the last time.
Appeals to authority are always fallacious. Some things that look like appeals to authority to casual observation upon closer examination turn out not to be. Such as the reference to the works of an authority-figure.
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
If you meant those to be topical you’ve got your givens inverted.
No. Actually I don’t. Those are the conditional probabilities that matter when evaluating just how to update when given the new evidence “Elias asserts X”. Have you not just been telling us that you teach others how this mathematics works?
I’m sorry, but no matter how many times you reiterate it, until such time as you can provide a valid justification for the assertion that appeals to authority are not always fallacious, each iteration will be no more correct than the last time.
Why are you saying that? I didn’t just make the assertion again. I pointed to some of the Bayesian reasoning that it translates to.
If you have values for p(Elias asserts X | X is true) and p(Elias asserts X | X is false) that are not equal to each other and you gain the information “Elias asserts X”, your estimate that X is true must change or your thinking is just wrong. It is simple mathematics. That being the case, supplying the information “Elias asserts X” to someone who already has information about Elias’s expertise is not fallacious. It is supplying information to them that should change their mind.
The above applies regardless of whether Elias has ever written any works on the subject or in any way supplied arguments. It applies even if Elias himself has no idea why he has the intuition “X”. It applies if Elias is a flipping black box that spits out statements. If both you and the person you are speaking with have reason to believe p(Elias asserts X | X is true) > p(Elias asserts X | X is false) then supplying “Elias asserts X” as an argument in favour of X is not fallacious.
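As a minimal sketch of the mathematics (the numbers are arbitrary assumptions): the posterior moves exactly when the two likelihoods differ, and only then.

```python
def posterior(p_x, p_assert_given_true, p_assert_given_false):
    """Posterior P(X | Elias asserts X) by Bayes' rule."""
    num = p_x * p_assert_given_true
    return num / (num + (1 - p_x) * p_assert_given_false)

# Unequal likelihoods: the black box's assertion carries information.
moved = posterior(0.5, 0.8, 0.2)
# Equal likelihoods: the assertion carries none, and the estimate stays put.
unmoved = posterior(0.5, 0.4, 0.4)
```

Note that nothing in the calculation asks *why* the box emits what it emits; only the likelihood ratio matters.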
No. Actually I don’t. Those are the conditional probabilities that matter when evaluating just how to update when given the new evidence “Elias asserts X”. Have you not just been telling us that you teach others how this mathematics works?
Alright, then, but you’re not going to like the answer: the probabilities in both cases are equal, barring some other influencing factor, such as Elias-the-Expert having justification for his assertion, or a direct means of observing X.
Unfortunately, this tells us exactly nothing for our current discussion.
It applies even if Elias himself has no idea why he has the intuition “X”.
This is the Blind Oracle argument. You are not going to find it persuasive.
It applies if Elias is a flipping black box that spits out statements.
It applies if Elias is a coin that flips over and over again and has come up heads a hundred times in a row and tails twenty times before that, therefore allowing us to assert that the probability that the coin will come up heads is five times greater than the probability it will come up tails.
This, of course, is an erroneous conclusion, and represents a modal failure of the use of Bayesian belief-networks as assertions of matters of truth as opposed to merely being inductive beliefs.
And before you attempt to assert that fundamentally no truths are “knowable”, and that all we have are “beliefs of varying degrees of confidence”, let me just say that this, too, is a modal failure. But a subtle one: it is an assertion that because all we have is the map, there is no territory.
That is false—and there is a corollary as a result of that fact: the territory has a measurable impact on the map. And always will.
And before you attempt to assert that fundamentally no truths are “knowable”, and that all we have are “beliefs of varying degrees of confidence”, let me just say that this, too, is a modal failure. But a subtle one: it is an assertion that because all we have is the map, there is no territory.
Is there anyone else here who would have predicted that I was about to say anything remotely like this? Why on earth would I be about to do that? It doesn’t seem relevant to the topic at hand, necessary for appeals to experts to be a source of evidence, or even particularly coherent as a philosophy.
I am only slightly more likely to have replied with that argument than a monkey would have been if it pressed random letters on a keyboard—and then primarily because it was, if nothing else, grammatically well formed.
The patient is very, very confused. I think that this post allows us to finally offer the diagnosis: he is qualitatively confused.
Quote:
The only error that’s occurring here is your continued belief that beliefs are relevant to this conversation. They simply aren’t. We’re not discussing “what should you believe”—we are discussing “what should you hold to be true.”
Attempts to surgically remove this malignant belief from the patient have commenced.
Adding to wedrifid’s comment: Logos01, you seem to be committing a fallacy of gray here. Saying that we don’t have absolute certainty is not at all the same as saying that there is no territory; that is assuredly not what Bayesian epistemology is about.
No, wedrifid is not committing one, you are. Here’s why:
He is not arguing that reality is subjective or anything like that. See his comment. On the other hand, your argument seems to be that we need to have some kind of surefire knowledge in epistemology because Bayesian probabilities are insufficient. Why does a lack of absolute certainty in an epistemology mean that said epistemology is not good enough?
But it most assuredly does color how Bayesian beliefs are formed.
What have you read or seen that makes you think this is the case?
I wonder if the word “belief” is causing problems here. Everyone in this thread is using the term to mean “statement or piece of knowledge to which you assign some level of truth value.” Saying “I believe X” is the same as saying “I think X is probably true.” Is this also how you’re using the term?
Without commenting on the specifics here, I object to this formulation because it’s possible both or neither are committing (particular fallacy), and it’s even more likely that of two parties, each is committing a different fallacy.
Without commenting on the specifics here, I object to this formulation because it’s possible both or neither are committing (particular fallacy), and it’s even more likely that of two parties, each is committing a different fallacy.
The formulation is fine. It is just two independent claims. It does not mean “wedrifid is not making an error because you are”.
The subject isn’t “an error”, it’s “the fallacy of gray”.
I agree “No, wedrifid is not committing an error, you are,” hardly implies “wedrifid is not making an error because you are”.
“No, wedrifid is not committing the fallacy of gray, you are,” much more implies “wedrifid is not committing the fallacy of gray because you are” when wedrifid’s and Logos01′s statements are in direct conflict.
The subject isn’t “an error”, it’s “the fallacy of gray”.
Why on earth does it matter that I referred to the general case? We’re discussing the implication of the general formulation, which I hope you don’t consider to be a special case that only applies to “The Fallacy of The Grey”. But if we are going to be sticklers then we should use the actual formulation you object to:
No, wedrifid is not committing one, you are.
“No, wedrifid is not committing the fallacy of gray, you are,” much more implies “wedrifid is not committing the fallacy of gray because you are” when wedrifid’s and Logos01′s statements are in direct conflict.
Not especially. This sort of thing is going to be said most often regarding statements in direct conflict. There is no more relationship implied between the two than can be expected for any two claims being mentioned in the same sentence or paragraph.
The fact that it would be so easy to write “because” but they didn’t is also evidence against any assertion of a link. There is a limit to how much you can blame a writer for the theoretical possibility that a reader could make a blatant error of comprehension.
I didn’t really mean “because” in that sense, for two reasons. The first is that it is the observer’s knowledge that is caused, not the party’s error, and the other is that the causation implied goes the other way. Not “because” as if the first party’s non-error caused the second’s error, but that the observer can tell that the second party is in error because the observer can see that the first isn’t committing the particular error.
a special case...statements in direct conflict
Not special, but there is a sliding scale.
Compare:
“No, wedrifid is not committing the Prosecutor’s fallacy, you are.” Whether or not one party is committing this fallacy has basically nothing to do with whether or not the other is. So I interpret this statement to probably mean: “You are wrong when you claim that wedrifid is committing the Prosecutor’s fallacy. Also, you are committing the Prosecutor’s fallacy.”
“No, wedrifid is not reversing cause and effect, you are.” Knowing that one party is not reversing cause and effect is enough to know someone accusing that party of doing so is likely doing so him or herself! So I interpret this statement to probably mean: “Because I see that wedrifid is not reversing cause and effect, I conclude that you are.”
The fallacy of gray is in between the two above examples.
“Fallacy of The Grey” returns two google hits referring to the fallacy of gray, one from this site (and zero hits for “fallacy of the gray”).
He is not arguing that reality is subjective or anything like that.
I didn’t say he was.
On the other hand, your argument seems to be that we need to have some kind of surefire knowledge in epistemology because Bayesian probabilities are insufficient.
No. I was saying that Bayesian probabilistic-belief methodologies are effective at generating maps but they say almost nothing about how those maps correlate to the territory—and that it is, nonetheless, possible to make assertions about that correlation. The practices are not the same, and that is the key difference.
What have you read or seen that makes you think this is the case?
It is fundamental to the nature of Bayesian belief-networks that they always assert statements in the form of probabilities. It is impossible to state a Bayesian belief except in the form of a probability.
From this there is a necessary conclusion.
Everyone in this thread is using the term to mean “statement or piece of knowledge to which you assign some level of truth value.” Saying “I believe X” is the same as saying “I think X is probably true.” Is this also how you’re using the term?
No. There is a difference categorically between the position, “I believe X is probably true” and “I believe X is true.”
I was saying that Bayesian probabilistic-belief methodologies are effective at generating maps but they say almost nothing about how those maps correlate to the territory.
It is fundamental to the nature of Bayesian belief-networks that they always assert statements in the form of probabilities. It is impossible to state a Bayesian belief except in the form of a probability.
Right. So why do you think this is insufficient for making maps that correlate to the territory? What assertions do you want to make about the territory that are not captured by this model?
No. There is a difference categorically between the position, “I believe X is probably true” and “I believe X is true.”
Right, but on LW “I believe X” is generally meant as the former, not the latter. This is probably part of the reason for all of the confusion and disagreement in this thread.
Alright, then, but you’re not going to like the answer: the probabilities in both cases are equal, barring some other influencing factor, such as Elias-the-Expert having justification for his assertion, or a direct means of observing X.
What? Of course Elias has some reason for believing what he believes. “Expert” doesn’t mean “someone who magically just knows stuff”. Somewhere along the line the operation of physics has resulted in the bunch of particles called “Elias” being configured in such a way that utterances by Elias about things like X are more likely to be true than false. This means that p(Elias asserts X | X is true) and p(Elias asserts X | X is false) are most certainly not equal. Claiming that they must be equal is just really peculiar.
This is the Blind Oracle argument.
This isn’t to do with blind oracles. It’s to do with trivial application of probability or rudimentary logic.
You are not going to find it persuasive.
In your interactions with people here I haven’t observed p(Logos01 is persuaded by X | X is sound reasoning) to be especially high. As such I cannot be expected to consider “Logos01 is not persuaded by something” to give much information at all about the soundness of a claim.
What? Of course Elias has some reason for believing what he believes.
Unless those reasons are justified—which we cannot know without knowing them—they cannot be held to be justifiable statements.
This is tautological.
Claiming that they must be equal is just really peculiar.
Not at all. You simply aren’t grasping why it is so. This is because you are thinking in terms of predictions and not in terms of concrete instances. To you, these are one-and-the-same, as you are used to thinking in the Bayesian probabilistic-belief manner.
I am telling you that this is an instance where that manner is flawed.
In your interactions with people here I haven’t observed p(Logos01 is persuaded by X | X is sound reasoning) to be especially high.
What you hold to be sound reasoning and what actually is sound reasoning are not equivalent.
As such I cannot be expected to consider “Logos01 is not persuaded by something” to give much information at all about the soundness of a claim.
If I had meant to imply that conclusion I would have phrased it so.
Not until such time as you have a reason to believe that he has a justification for his belief beyond mere opinion. Otherwise, it is a mere assertion regardless of the source—it cannot have a correlation to reality if there is no vehicle, other than his own imagination, through which the information he claims to have could have reached him, however accurate that imagination might be.
If you know that your friend more often makes statements such as this when they are true than when they are false, then you know that his claim is relevant evidence, so you should adjust your confidence up. If he reliably either watches the game, or finds out the result by calling a friend or checking online, and you have only known him to make declarations about which team won a game when he knows which team won, then you have reason to believe that his statement is strongly correlated with reality, even if you don’t know the mechanism by which he came to decide to say that the Sportington Sports won.
If you happen to know that your friend has just gotten out of a locked room with no television, phone reception or internet access where he spent the last couple of days, then you should assume an extremely low correlation of his statement with reality. But if you do not know the mechanism, you must weight his statement according to the strength that you expect his mechanism for establishing correlation with the truth has.
There is a permanent object outside my window. You do not know what it is, and if you try to assign probabilities to all the things it could be, you will assign a very low probability to the correct object. You should assign pretty high confidence that I know what the object outside my window is, so if I tell you, then you can assign much higher probability to that object than before I told you, without my having to tell you why I know. You have reason to have a pretty high confidence in the belief that I am an authority on what is outside my window, and that I have reliable mechanisms for establishing it.
If I tell you what is outside my window, you will probably guess that the most likely mechanism by which I found out was by looking at it, so that will dominate your assessment of my statement’s correlation with the truth (along with an adjustment for the possibility that I would lie.) If I tell you that I am blind, type with a braille keyboard, and have a voice synthesizer for reading text to me online, and I know what is outside my window because someone told me, then you should adjust your probability that my claim of what is outside my window is correct downwards, both on increased probability that I am being dishonest, and on the decreased reliability of my mechanism (I could have been lied to.) If I tell you that I am blind and psychic fairies told me what is outside my window, you should adjust your probability that my claim is correlated with reality down much further.
The “trust mechanism,” as you call it, is not a device that exists separate from issues of evidence and probability. It is one of the most common ways that we reason about probabilities, basing our confidence in others’ statements on what we know about their likely mechanisms and motives.
This is an area where “Bayesian rationality” is insufficient—it fails to reliably distinguish between “what I believe” and “what I can confirm is true”.
You can’t confirm that anything is true with absolute certainty, you can only be more or less confident. If your belief is not conditioned on evidence, you’re doing something wrong, but there is no point where a “mere belief” transitions into confirmed knowledge. Your probability estimates go up and down based on how much evidence you have, and some evidence is much stronger than others, but there is no set of evidence that “counts for actually knowing things” separate from that which doesn’t.
If you know that your friend more often makes statements such as this when they are true than when they are false, then you know that his claim is relevant evidence
This is like claiming that because a coin came up heads twenty times and tails ten times it is 2x more likely to come up heads this time. Absent some other reason to justify the correlation between your friend’s accuracy and the current instance, such beliefs are invalid.
If he reliably either watches the game, or finds out the result by calling a friend or checking online, and you have only known him to make declarations about which team won a game when he knows which team won,
Yup. I said as much.
The “trust mechanism,” as you call it, is not a device that exists separate from issues of evidence and probability.
Yes, actually, it is a separate mechanism.
You can’t confirm that anything is true with absolute certainty, you can only be more or less confident.
Yes, yes. That is the Bayesian standard statement. I’m not persuaded by it. It is, by the way, a foundational error to assert that absolute knowledge is the only form of knowledge. This is one of my major objections to standard Bayesian doctrine in general; the notion that there is no such thing as knowledge but only beliefs of varying confidence.
Bayesian probability assessments work very well for making predictions and modeling unknowns, but that’s just not sufficient to the question of what constitutes knowledge, what is known, and/or what is true.
And with that, I’m done here. This conversation’s gotten boring, to be quite frank, and I’m tired of having people essentially reiterate the same claims over and over at me from multiple angles. I’ve heard it before, and it’s no more convincing now than it was previously.
This is frustrating for me as well, and you can quit if you want, but I’m going to make one more point which I don’t think will be a reiteration of something you’ve heard previously.
Suppose that you have a circle of friends who you talk to regularly, and a person uses some sort of threat to force you to write down every declarative statement they make in a journal, whether they provided justifications or not, until you collect ten thousand of them.
Now suppose that they have a way of testing the truth of these statements with very high confidence. They make a credible threat that you must correctly estimate the number of the statements in the journal that are true, with a small margin of error, or they will blow up New York. If you simply file a large number of his statements under “trust mechanism,” and fail to assign a probability which will allow you to guess what proportion are right or wrong, millions of people will die. There is an actual right answer which will save those people’s lives, and you want to maximize your chances of getting it. What do you do?
Let’s replace the journal with a log of a trillion statements. You have a computer that can add the figures up quickly, and you still have to get very close to the right number to save millions of lives. Do you want the computer to file statements under “trust mechanism” or “confirmed knowledge” so that it can better determine the correct number of correct statements, or would you rather each statement be tagged with an appropriate probability, so that it can add them up to determine what number of statements it expects to be true?
… appeal to consequences. Well, that is new in this conversation. It’s not very constructive though.
Also, you’re conflating predictions with instantiations.
That being said:
They make a credible threat that you must correctly estimate the number of the statements in the journal that are true, with a small margin of error, or they will blow up New York. [...] What do you do?
I would, without access to said test myself, be forced to resign myself to the destruction of New York.
If you simply file a large number of his statements under “trust mechanism,”
That’s not what a trust-system is. It is, simply put, the practice of trusting that something is so because the expected utility-cost of being wrong is lower than the expected utility-cost of investigating a given claim. This practice is a foible; a failing—one that is engaged in out of necessity because humans have a limit to their available cognitive resources.
Do you want the computer to file statements under “trust mechanism” or “confirmed knowledge” so that it can better determine the correct number of correct statements, or would you rather each statement be tagged with an appropriate probability,
What one wants is irrelevant. What has occurred is relevant. If you haven’t investigated a given claim directly, then you’ve got nothing but whatever available trust-systems are at hand to operate on.
That doesn’t make them valid claims.
Finally: you’re introducing another unlike-variable by abstracting from individual instances to averaged aggregate.
TL;DR—your post is not-even-wrong. On many points.
… appeal to consequences. Well, that is new in this conversation. It’s not very constructive though.
If your conception of rationality leads to worse consequences than doing something differently, you should do something differently. Do you think it’s impossible to do better than resigning yourself to the destruction of New York?
That’s not what a trust-system is. It is, simply put, the practice of trusting that something is so because the expected utility-cost of being wrong is lower than the expected utility-cost of investigating a given claim. This practice is a foible; a failing—one that is engaged in out of necessity because humans have a limit to their available cognitive resources.
The utility cost of being wrong can fluctuate. Your life may hinge tomorrow on a piece of information you did not consider investigating today. If you find yourself in a situation where you must make an important decision hinging on little information, you can do no better than your best estimate, but if you decide that you are not justified in holding forth an estimate at all, you will have rationalized yourself into helplessness.
Humans have bounded rationality. Computationally optimized Jupiter Brains have bounded rationality. Nothing can have unlimited cognitive resources in this universe, but with high levels of computational power and effective weighting of evidence it is possible to know how much confidence you should have based on any given amount of information.
Finally: you’re introducing another unlike-variable by abstracting from individual instances to averaged aggregate.
You can get the expected number of true statements just by adding the probabilities of truth of each statement. It’s like judging how many heads you should expect to get in a series of coin flips. .5 + .5 + .5..… The same formula works even if the probabilities are not all the same.
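That summation is just linearity of expectation. A small sketch (the probabilities are made up for illustration):

```python
# Expected number of true statements is the sum of each statement's
# probability of being true -- the same formula as expected heads in
# a series of coin flips.
fair_flips = [0.5] * 10
print(sum(fair_flips))  # 5.0

# It works just as well when the probabilities are not all the same:
statement_probs = [0.99, 0.9, 0.7, 0.5, 0.5, 0.2]
expected_true = sum(statement_probs)
print(round(expected_true, 2))  # 3.79
```

No "confirmed" / "trusted" partition is needed; each statement simply contributes its own probability to the expected count.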
This is like claiming that because a coin came up heads twenty times and tails ten times it is 2x more likely to come up heads this time.
If you don’t assume that the coin is fair, then certainly a coin coming up heads twenty times and tails ten times is evidence in favor of it being more likely to come up heads next time, because it’s evidence that it’s weighted so that it favours heads.
Similarly if a person is weighted so that they favor truth, their claims are evidence in favour of that truth.
Absent some other reason to justify the correlation between your friend’s accuracy and the current instance, such beliefs are invalid.
Beliefs like trusting the trustworthy and not trusting the untrustworthy, whether you consider them “valid” beliefs or not, are likely to lead one to make correct predictions about the state of the world. So such beliefs are valid in the only way that matters for epistemic and instrumental rationality both.
If you don’t assume that the coin is fair, then certainly a coin coming up heads twenty times and tails ten times is evidence in favor of it being more likely to come up heads next time, because it’s evidence that it’s weighted so that it favours heads.
Or you could make a direct observation, (such as by weighing it with a fine tool, or placing it on a balancing tool) and know.
Similarly if a person is weighted so that they favor truth, their claims are evidence in favour of that truth.
Not unless they have an ability to provide their justification for a given instantiation. It would be sufficient for trusting them if you are not concerned with what is true as opposed to what is “likely true”. There’s a difference between these, categorically: one is an affirmation—the other is a belief.
So such beliefs are valid in the only way that matters for epistemic and instrumental rationality both.
Incorrect. And we are now as far as this conversation is going to go. You hold to Bayesian rationality as axiomatically true of rationality. I do not.
Or you could make a direct observation, (such as by weighing it with a fine tool, or placing it on a balancing tool) and know.
And in the absence of the ability to make direct observations? If there are two eye-witness testimonies to a crime, and one of the eye-witnesses is a notorious liar with every incentive to lie, and one of them is famous for his honesty and has no incentive to lie—which way would you have your judgment lean?
SPOCK: “If I let go of a hammer on a planet that has a positive gravity, I need not see it fall to know that it has in fact fallen. [...] Gentlemen, human beings have characteristics just as inanimate objects do. It is impossible for Captain Kirk to act out of panic or malice. It is not his nature.”
I very much like this quote, because it was one of the first times when I saw determinism, in the sense of predictability, being ennobling.
If there are two eye-witness testimonies to a crime
I have already stated that witness testimonials are valid for weighting beliefs. In the somewhere-parent topic of authorities; this is the equivalent of referencing the work of an authority on a topic.
This is like claiming that because a coin came up heads twenty times and tails ten times it is 2x more likely to come up heads this time. Absent some other reason to justify the correlation between your friend’s accuracy and the current instance, such beliefs are invalid.
If 30 coin flips have come out that lopsided, I should move my probability estimate slightly towards the coin being weighted to one side. If, for example, the coin had instead come up heads on all 30 flips, I presume you would update in the direction of the coin being weighted to land on one side. It won’t be 2x as likely to come up heads, because the hypothesis that the coin is actually fair started with a very large prior. Moreover, the easy ways to weight a coin make it always come out on one side. But the essential Bayesian update in this context is to put a higher probability on the coin being weighted to be more likely to come up heads than tails.
“Bayesian probability assessments work very well for making predictions and modeling unknowns, but that’s just not sufficient to the question of what constitutes knowledge, what is known, and/or what is true.”
Bayesian probability assessments are an extremely poor tool for assertions of truth.
False.
Mere assertion.
A given statement is either true or not true. Authorities on a given topic can be mistaken; and as such no appeal to an authority can be, absent any other actual supporting materials, treated as a rational motive for belief. In all such cases, it is those supporting materials which comprise the nature of the argument.
Here’s a good example: Richard Dawkins is an expert—an authority—on evolutionary biology. Yet he rejects the idea of group selection. Group selection has been widely demonstrated to be valid.
Does not follow.
… it doesn’t follow that because authorities are unreliable, their assertions without supporting evidence should not adjust your weighting in belief on any specific claim?
That’s a strange world-view you have.
It is not so easy to separate assertions and evidence, Logos01. An assertion is in itself evidence—strong evidence perhaps, depending on who it comes from and in what context. Evidence is entanglement with reality, and the physical phenomenon of someone having said or written something can be entangled with what you want to know about in just the same way that any other type of evidence is entangled with reality.
For example if you were to ring your friend and ask him for the football results, you would generally update your degree of belief in the fact that your team won if he told you so (unless you had a particular reason to mistrust him). You would not wait until you had been provided with television footage and newspaper coverage of the result before updating, despite the fact that he had given you a mere assertion.
That is a trivial example, because you apparently are in need of one to gain understanding.
If someone quotes Yudkowsky as saying something, depending on how impressed you are with him as a thinker you may update on his mere opinions (indeed, rationality may demand that you do so) without or before considering his arguments in detail. Authorities may be “unreliable”, but it is the fallacy of grey to suggest that they therefore provide no evidence whatsoever. For that matter, your own sensory organs are unreliable—does this lead you not to update your degree of belief according to anything that you see or hear?
Since Yudkowsky is widely respected in intellectual terms here, someone might quote him without feeling the need to provide a lengthy exposition of the argument behind his words. This might be because you can easily follow the link if you want to see the argument, or because they don’t feel that the debate in question is worth much of their time (just enough to point you in the right direction, perhaps).
On the other hand it is true that argument screens off authority, and perhaps that is what you are imperfectly groping towards. If you really want to persuade anyone of whatever it is you are trying to say, I suggest that you attempt to (articulately) refute Yudkowsky’s argument, thereby screening off his authority. Don’t expect anyone to put your unsupported opinions together with Yudkowsky’s on the same blank slate, because you have to earn that respect. And for that matter, Yudkowsky actually has defended his statements with in-depth arguments, which should not really need to be recapitulated every time someone here references them. His writings are here, waiting for you to read them!
This presupposes that you had reason to believe that he had a means of having that information. In which case you are weighting your beliefs based on your assessment of the likelihood of him intentionally deceiving you and your assessment of the likelihood of him having correctly observed the information you are requesting.
If you take it as a given that these things are true then what you are requesting is a testimonial as to what he witnessed, and treating him as a reliable witness, because you have supporting reasons as to why this is so. You are not, on the other hand, treating him as a blind oracle who simply guesses correctly, if you are anything remotely resembling a functionally capable instrumental rationalist.
This, sir, is a decidedly disingenuous statement. I have noted it, and it has lowered my opinion of you.
I want to take an aside here to note that you just stated that it is possible for rationality to “demand” that you accept the opinions of others as facts based merely on the reputation of that person before you even comprehend what the basis of that opinion is.
I cannot under any circumstances now known to me endorse such a definition of “rationality”.
I’ve done that recently enough that invocations of Eliezer’s name into a conversation are unimpressive. If his arguments are valid, they are valid. If they are not, I have no hesitation to say they are not.
I want you to understand that in making this statement you have caused me to reassess you as an individual under the effects of a ‘cult of personality’ effect.
You said earlier:
And:
You now state:
This is a different claim to that contained in the former two quotes. I do not disagree with this claim, but it does not support your earlier claims, which are the claims that I was criticising.
I do not know of any human being who I would regard as “a blind oracle who simply guesses correctly”. The existence of such an oracle is extremely improbable. On the other hand it is very normal to believe that there are “supporting reasons” for why someone’s claims should change your degree of belief in something. For example if Yudkowsky has made 100 claims that were backed up by sound argument and evidence, that leads me to believe that the 101st claim he makes is more likely to be true than false, therefore increasing my degree of belief somewhat (even if only a little, sometimes) in that claim at the expense of competing propositions even before I read the argument.
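One crude way to put a number on that track-record intuition is Laplace’s rule of succession, which treats each claim as an independent draw under a uniform prior. That independence assumption is a strong simplification (claims are not really coin flips), so take this as a sketch only:

```python
def rule_of_succession(successes, trials):
    """Estimated probability that the next trial succeeds, after
    observing `successes` out of `trials`, under a uniform prior:
    (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

# 100 claims, all of which turned out to be well supported:
print(round(rule_of_succession(100, 100), 3))  # 0.99

# A mixed track record lowers the estimate for the 101st claim:
print(round(rule_of_succession(60, 100), 3))  # 0.598
```

The point is only directional: a consistently good record raises the prior you assign to the next unsupported claim, even before examining its argument.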
It is also possible for someone’s claims to be reliably anti-correlated with the truth, although that would be a little strange. For example someone who generally holds accurate beliefs, but is a practical joker and always lies to you, might cause you to (insofar as you are rational) decrease your degree of belief in any proposition that he asserts, unless you have particular reason to believe that he is telling the truth on this one occasion.
You may have no particular regard for someone’s opinion. Nonetheless, the fact that a group of human beings regard that person as smart; and that he writes cogently; and even the mere fact that he is human – these are all “supporting reasons” such as you mentioned. Even if this doesn’t amount to much, it necessarily amounts to something (except in the vastly improbable case that the reasons for this opinion to correlate with the truth, and the reasons for it to anti-correlate with the truth, exactly cancel each other out).
I think what is tripping you up is the idea that you can be gamed by people who are telling you to “automatically believe everything that someone says”. But actually the laws of probability are theorems – not social rules or rules of thumb. If you do think you are being gamed or lied to then you might be rational to decrease your degree of belief in something that someone else claims, or else only update your belief a very small amount towards their belief. No-one denies that.
All that we are trying to explain to you is that your generalisations about not treating other people’s beliefs as evidence are wrong as probability theory. Belief is not binary – believe or not believe. It is a continuum, and updates in your degree of belief in a proposition can be large or tiny. You don’t have to flick from 0 to 1 when someone makes some assertion, but rationality requires you to make use of all the evidence available to you—therefore your degree of belief should change by some amount in some direction, which is usually towards the other person’s belief except in unusual circumstances.
Still, if you do in fact regard Yudkowsky’s (or anyone else’s) stated belief in some proposition as being completely uncorrelated with the truth of that proposition, then you should not revise your belief in either direction when you encounter that statement of his. Otherwise, failure to update on evidence – things that are entangled with what you want to know about – is irrational.
That is called a figure of speech my friend.
It is not cultish to praise someone highly.
There is nothing wrong with this in itself. But as has already been pointed out to you, an assertion and a link could be regarded as encouragement for you to seek out the arguments behind that assertion, which are written elsewhere. It might also be for the edification of other commenters and lurkers, who may be more impressed by Eliezer (therefore more willing to update on his beliefs).
Personally I don’t find the actual argument that started this little comment thread off remotely interesting (arguments over definitions are a common failure mode), so I shan’t get involved in that. But I hope you understand the point about updating on other people’s beliefs a little more clearly now.
This is accurate in the context you are considering—it is less accurate generally without an additional caveat.
You should also not revise your belief if you have already counted the evidence on which the expert’s beliefs depend.
(Except, perhaps, for a very slight nudge as you become more confident that your own treatment of the evidence did not contain an error, but my intuition says this is probably well below the noise level.)
What if Yudkowsky says that he believes X with probability 90%, and you only believe it with probability 40% - even though you don’t believe that he has access to any more evidence than you do?
Also, I agree that in practice there is a “noise level”—i.e. very weak evidence isn’t worth paying attention to—but in theory (which was the context of the discussion) I don’t believe there is.
Then perhaps I have evidence that he does not, or perhaps our priors differ, or perhaps I have made a mistake, or perhaps he has made a mistake, or perhaps both. Ideally, I could talk to him and we could work to figure out which of us is wrong and by how much. Otherwise, I could consider the likelihood of the various possibilities and try to update accordingly.
For case 1, I should not be updating; I already have his evidence, and my result should be more accurate.
For case 2, I believe I should not be updating, though if someone disagrees we can delve deeper.
For cases 3 and 5, I should be updating. Preferably by finding my mistake, but I can probably approximate this by doing a normal update if I am in a hurry.
For case 4, I should not be updating.
In theory, there isn’t. My caveat regarding noise was directed to anyone intending to apply my parenthetical note to practice—when we consider a very small effect in the first place, we are likely to over-weight it.
I think we are basically in agreement.
What you are saying is that insofar as we know all of the evidence that has informed some authority’s belief in some proposition, his making a statement of that belief does not provide additional evidence. I agree with that, assuming we are ignoring tiny probabilities that are below a realistic “noise level”.
As you said this is not particularly relevant to the case of someone appealing to an authority during an argument, because their interlocutor is unlikely to know what evidence this authority possesses in the large majority of cases. But it is a good objection in general.
You are mistaken. It is exactly the same claim, rephrased to match the circumstances at most—containing exactly the same assertion: “Authorities on a given topic can be mistaken; and as such no appeal to an authority can be, absent any other actual supporting materials, treated as a rational motive for belief. In all such cases, it is those supporting materials which comprise the nature of the argument.” In the specific case you are taking as supporting evidence your justified belief that the individual had in fact directly observed the event, and the justified belief that the individual would relate it to you honestly. Those, together, comprise justification for the belief.
An insufficient retraction.
What you did wasn’t praise.
See, THIS is why I called you cultish. Do you understand that the quote that was cited to me wasn’t even relevant contextually in the first place? I had already differentiated between proper rationality and instrumental rationality.
The quote of Eliezer’s was discussing instrumental rationality.
I even pointed this out.
No, I do not. But that’s because I wasn’t remotely confused about the topic to begin with, and have throughout these threads demonstrated a finer capacity to differentiate between the various modes and justifications of belief than anyone who has yet argued the topic in these threads with me, yourself included.
This conversation has officially reached my limit of investment, so feel free to get your last word in, but don’t be surprised if I never read it.
So in other words, you would like to distinguish between “appeal to authority” and “supporting materials” as though when someone refers you to the sayings of some authority, they expect you to consider these sayings as data purely “in themselves”, separately from whatever reasons you may have for believing that the sayings of that authority are evidentially entangled with whatever you want to know about.
Firstly, “appeal to authority” is generally taken simply to mean the act of referring someone to an authority during an argument about something – there is no connotation that this authority has no evidential entanglement with the subject of the argument, which would be bizarre (why would anyone refer you to him in that case?)
Secondly, if someone makes a statement about something, that in itself implies that there is evidential entanglement between that thing and their statement – i.e. the thing they are talking about is part of the chain of cause and effect (however indirect) that led to the person eventually making a statement about it (otherwise we have to postulate a very big coincidence). Therefore the idea that someone could make a statement about something without there being any evidential entanglement between them and it (which is necessary in order for it to be true that you should not update your belief at all based on their statement) is implausible in the extreme.
You started off by using “appeal to authority” in the normal way, but now you are attempting to redefine it in a nonsensical way so as to avoid admitting that you were mistaken (NB: there is no shame in being mistaken).
If you have read Harry Potter and the Methods of Rationality, you may remember the bit where Quirrell demonstrates “how to lose” as an important lesson in magical combat. Correspondingly, in future I would advise you not to create edifices of nonsense when it would be easier just to admit your mistake. Debate is after all a constructive enterprise, not a battle for tribal status in which it is necessary to save face at all costs.
I’ll say again that the error that is facilitating your confusion about beliefs and evidence is the idea that belief is a binary quality – I believe, or I do not believe – when in fact you believe with probability 90%, or 55%, or 12.2485%. The way that the concept of belief and the phrase “I believe X” is used in ordinary conversation may mislead people on this point, but that doesn’t change the facts of probability theory.
This allows you to think that the question is whether you are “justified” in believing proposition X in light of evidence Y, when the right question to be asking is “how has my degree of belief in proposition X changed as a result of this evidence?” You are reluctant to accept that someone’s mere assertions can be evidence in favour of some proposition because you have in mind the idea that evidence must always be highly persuasive (so as to change your belief status in a binary way from “unjustified” to “justified”), otherwise it isn’t evidence – whereas actually, evidence is still evidence even if it only causes you to shift your degree of belief in a proposition from 1% to 1.2%.
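A shift from 1% to about 1.2% corresponds to a likelihood ratio of only about 1.2. A small sketch in odds form, with the numbers chosen to match the example above:

```python
def update_odds(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Weak evidence (LR ~1.2) barely moves a 1% belief -- but it IS an update:
print(round(update_odds(0.01, 1.2), 4))  # 0.012

# Strong evidence (LR 100) moves the same 1% belief to roughly even odds:
print(round(update_odds(0.01, 100), 3))  # 0.503
```

Both cases are evidence in exactly the same sense; they differ only in the size of the likelihood ratio, not in kind.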
See also
“there is no connotation that this authority has no evidential entanglement with the subject of the argument”—quite correct. Which is why it is fallacious: it is an assertion that this is the case without corroborating that it actually is the case.
If it were the case, then the act would not be an ‘appeal to authority’.
This is categorically invalid. Humans are not Bayesian belief-networks. In fact, humans are notoriously poor at assessing their own probabilistic estimates of belief. But this, really, is neither here nor there.
The only error that’s occurring here is your continued belief that beliefs are relevant to this conversation. They simply aren’t. We’re not discussing “what should you believe”—we are discussing “what should you hold to be true.”
And that, sir, categorically IS binary. A thing is either true or not true. If you affirm it to be true you are assigning it a fixed binary state.
You have a point in saying that “12.2485%” is an unlikely number to give to your degree of belief in something, although you could create a scenario in which it is reasonable (e.g. you put 122485 red balls in a bag...). And it’s also fair to say that casually giving a number to your degree of belief is often unwise when that number is plucked from thin air—if you are just using “90%” to mean “strong belief” for example. The point about belief not being binary stands in any case.
Those are one and the same! If that’s the real source of your disagreement with everyone here, it’s a doozy.
Perhaps this will help make the point clear. In fact I’m sure it will—it deals with this exact confusion. Please, if you don’t read any of these other links, look at that one!
No, they are not. They are fundamentally different. One is a point on a map. The other is a statement regarding the correlation of that map to the actual territory. These are not identical. Nor should they be.
As I have stated elsewhere: Bayesian ‘probabilistic beliefs’ eschew too greatly the ability of the human mind to make assertions about the nature of the territory.
The first time I read that page was roughly a year and a half ago.
I am not confused.
I am telling you something that you cannot digest; but nothing you have said is inscrutable to me.
These three things together should tell you something. I won’t bother trying to repeat myself about what.
To be blunt, they tell us you are arrogant, ill informed and sufficiently entrenched in your habits of thought that trying to explain anything to you is almost certainly a waste of time even if meant well. Boasting of the extent of your knowledge of the relevant fundamentals while simultaneously blatantly displaying your ignorance of the same is probably the worst signal you can give regarding your potential.
“You don’t understand that article”.
That is not something of which to be proud.
I am afraid, simply put, you are mistaken. I can reveal this simply: what is the process by which a Bayesian rationalist can derive a belief about an event which is not expressible in probabilistic form?
Depends on what it is that is being said and how. Unfortunately, in this case, we have each been using very simple language to achieve our goals—or, at least, equivalently comprehensible statements. Despite my best efforts to inform you of what it is you are not understanding, you have failed to understand it. I have, contrastingly, from the beginning understood what has been said to me—and yet you continue to believe that I do not.
This is problematic. But it is also indicative that the failure is simply not on my part.
If you understand something, you should be able to describe it in such a way that other people who understand it will agree that your description is correct. Thus far you have consistently failed to do this.
I am not confident that I understand your position yet. I don’t think you have made it very clear. But you have made it very clear that you think you understand Bayesian reasoning, but your understanding of how it works does not agree with anyone else’s here.
Not, sadly, for any particular point of actual fact. The objections and disagreements have all been consequential, rather than factual, in nature. I have, contrastingly, repeatedly been accused of committing myself to fallacies I simply have not committed (to name one specifically, the ‘Fallacy of Grey’).
Multiple individuals have objected to my noting the consequences of the fact that Bayesian rationality belief-networks are always maps and never assertions about the territory. And yet, this fact is a core element of Bayesian rationality.
Quite frankly, the only reason this is so is because no one wants to confront the necessary conclusions resultant from my assertions about basic, core principles regarding Bayesian rationality.
Hence the absence of a response to my previous challenge: “what is the process by which a Bayesian rationalist can derive a belief about an event which is not expressible in probabilistic form?” (This is, of course, a “gotcha” question. There is no such process. Which is absolute proof of my positional claim regarding the flaws inherent in Bayesian rationality, and is furthermore directly related to the reasons why probabilistic statements are useless in deriving information regarding a specific manifested instantiation.)
Frankly, I have come to despair of anyone on this site doing so. You folks lack the epistemological framework necessary to do so. I have attempted to relate this repeatedly. I have attempted to direct your thoughts in dialogue to this realization in multiple different ways. I have even simply spelled it out directly.
I have exhausted every rhetorical technique I am aware of to rephrase this point. I no longer have any hope on the matter, and frankly the behaviors I have begun to see out of many of you who are still in this thread have caused me to very seriously negatively reassess my opinion of this site’s community.
For the last year or so, I have been proselytizing this site to others as a good resource for learning how to become rational. I am now unable to do so without such heavy qualifiers that I’m not even sure it’s worth it.
What other people are telling you is that your representation of Bayesian reasoning is incorrect, and that you are misunderstanding them. I suggest that you try to lay out as clear and straightforward an explanation of Bayesian reasoning as you can. If other people agree that it is correct, then we will take your claims to be understanding us much more seriously. If we still tell you that you are misunderstanding it, then I think you should consider seriously the likelihood that you are, in fact, misunderstanding it.
If you do understand our position you should be capable of this, even if you disagree with it. I suggest leaving out your points of disagreement in your explanation; we can say if we agree that it reflects our understanding, and if it does, then you can tell us why you think we are wrong.
I think we are talking past each other right now. I have only low confidence that I understand your position, and I also have very low confidence that you understand ours. If you can do this, then I will no longer have low confidence that you understand our position, and I will put in the effort to attain sufficient understanding of your position (something that you claim to already have of ours) that I can produce an explanation of it that I am confident that you will agree with, and then we can have a proper conversation.
A core element of Bayesian reasoning is that the map is not the territory.
Bayesian reasoning formulates beliefs in the form of probability statements, which are predictive in nature. (See: “Making beliefs pay rent”)
Bayesian probabilities are fundamentally statements of a predictive nature, rather than statements of actual occurrence. This is the basis of the conflict between Bayesians and frequentists. Further corroboration of this point.
The language of Bayesian reasoning regarding beliefs is that of expressing beliefs in probabilistic form, and updating those within a network of beliefs (“givens”) which are each informed by the priors and new inputs.
These four points together are the basis of my assertion that Bayesian rationality is extremely poor at making assertions about what the territory actually is. This is where the epistemological barrier is introduced: there is a difference between what I believe to be true and what I can validly assert is true. The former is a predictive statement. The latter is a material, instantiated, assertion.
No matter how many times a coin has come up heads or tails, that information has no bearing on what position the coin under my hand is actually in. You can make statements about what you believe it will be; but that is not the same as saying what it is.
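For what it is worth, the two positions in this exchange come apart on prediction, not on the coin having a definite state. A minimal sketch of both cases under a Beta-Bernoulli model (the uniform Beta(1,1) prior is an assumption made for illustration):

```python
def predictive_heads(heads: int, tails: int, known_fair: bool = False) -> float:
    """Posterior predictive P(next flip is heads).

    If the coin is *known* to be fair, past flips carry no information about
    the next one. If the bias is unknown, updating a Beta(1,1) prior on the
    observed flips gives Laplace's rule of succession.
    """
    if known_fair:
        return 0.5
    return (heads + 1) / (heads + tails + 2)

# Unknown bias: a long run of heads shifts the prediction...
print(predictive_heads(100, 0))                    # ≈ 0.99
# ...but for a coin known to be fair, history is irrelevant.
print(predictive_heads(100, 0, known_fair=True))   # 0.5
```

Neither number is a claim about which face the coin under the hand is actually showing; both are claims about the map.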
THIS is the failure of Bayesian reasoning in general; and it is why appeals to authority are always invalid.
Well, I do not agree that that reflects an accurate understanding of our position.
Are you prepared to guess which parts I will take issue with?
Frankly, no. Especially since I derived each and every one of those four statements from canonical sources of explanations of how Bayesian rationality operates (and from LessWrong itself, no less.)
So if you care to disagree with the Sequences, or the established doctrine of how Bayesian reasoning operates as related in such places as Yudkowsky’s Technical Explanation of Technical Explanation and his Intuitive Explanation of Bayesian Reasoning or Wikipedia’s various articles on the topic such as Bayesian Inference—well, you’re more than welcome to do so.
I’m more than willing to admit that I might find it interesting. I strongly anticipate that it will be very strongly unpersuasive as to the notion of my having a poor grasp of how Bayesian reasoning operates, however.
Bayesian probabilities are predictive and statements of occurrence. To the extent that frequentist statements of occurrence are correct, Bayesian probabilities will always agree with them.
I would not take issue with this if not in light of the statement that you made after it. It’s true that Bayesian probability statements are predictive, we can reason about events we have not yet observed with them. They are also descriptive; they describe “rates of actual occurrence” as you put it.
It seems to me that you are confused about Bayesian reasoning because you are confused about frequentist reasoning, and so draw mistaken conclusions about Bayesian reasoning based on the comparisons Eliezer has made. A frequentist would, in fact, tell you that if you flipped a coin many times, and it has come up heads every time, then if you flip the coin again, it will probably come up heads. They would do a significance test for the proposition that the coin was biased, and determine that it almost certainly was. There is, in fact, no school of probability theory that reflects the position that you have been espousing so far. You seem to be contrasting “predictive” Bayesianism with “non-predictive” frequentism, arguing for a system that allows you to totally suspend judgment on events until you make direct observations about those events. But while frequentist statistics fail to allow you to make predictions about the probability of events when you do not have a body of data in which you can observe how often those events tend to occur, it does provide predictions about the probability of events based on known data on frequency, and when a large body of data for the frequency of an event is available, the frequentist and Bayesian estimates for its probability will tend to converge.
I agree with JoshuaZ that the second part of your comment does not appear to follow from the first part at all, but if we resolve this confusion, perhaps things will become clearer.
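The convergence point is easy to show directly: with a large sample, the frequentist point estimate (the raw observed frequency) and the Bayesian posterior mean nearly coincide. A sketch under an assumed Beta(1,1) prior and assumed illustrative data:

```python
heads, flips = 7_012, 10_000  # illustrative data, not from any real experiment

freq_estimate = heads / flips                # frequentist: raw observed frequency
bayes_estimate = (heads + 1) / (flips + 2)   # Bayesian: Beta(1,1) posterior mean

print(freq_estimate, bayes_estimate)
print(abs(freq_estimate - bayes_estimate) < 1e-4)  # True: they have converged
```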
Yes, they are. But what they cannot be is statements regarding the exact nature of any given specific manifested instantiation of an event.
That is predictive.
I can dismiss this concern for you: while I’ve targeted Bayesian rationality here, frequentism would be essentially fungible to all of my meaningful assertions.
I don’t know that it’s possible for that to occur until such time as I can discern a means of helping you folks to break through your epistemological barriers of comprehension (please note: comprehension is not equivalent to concurrence: I’m not asserting that if you understand me you will agree with me). Try following a little further where the dialogue currently is with JoshuaZ, perhaps? I seem to be beginning to make headway there.
Can you explain what you mean by this as simply as possible?
I know what all those words mean, but I can’t tell what you mean by them, and “specific manifested instantiation of an event” does not sound like a very good attempt to be clear about anything. If you want your intended audience to understand, try to tailor your explanation for people much dumber and less knowledgeable than you think they are.
Yup. There’s a deep inferential gap here, and I’m trying to relate it as best I can. I know I’m doing poorly, but a lot of that has to do with the fact that the ideas that you are having trouble with are so very simple to me that my simple explanations make no sense to you.
Specific: relating to a unique case.
Manifested: made to appear or become present.
Instantiation: a real instance or example
-- a specific manifested instantiation thus is a unique, real example of a thing or instance of an idea or event that is currently present and apparent. I sacrifice simplicity for precision in using this phrase; it lacks any semantic baggage aside from what I assign to it here, and this is a conversation where precision is life-and-death to comprehension.
What is the difference between “There is a 99.9999% chance he’s dead, and I’m 99.9999% confident of this, Jim” and “He’s dead, Jim.”?
Brevity.
As I said; “epistemological barriers of comprehension.”
You were asked to explain a statement of yours as simply as possible.
You responded with a hypothetical question.
You received an answer, apparently not the one you were looking for.
You congratulated yourself on being unclear.
Acknowledging failure is in no wise congratulatory.
Of course not. By asking the question I am implying that there is, in fact, a difference. This was in an effort to help you understand me. You in particular responded by answering as you would. Which does nothing to help you overcome what I described as the epistemological barrier of comprehension preventing you from understanding what I’m trying to relate.
To me, there is a very simple, very substantive, and very obvious difference between those two statements, and it isn’t wordiness. I invite you to try to discern what that difference is.
You come across as rather condescending. Consider that this might not be the most effective way to get your point across.
I am powerless over what you project into the emotional context of my statements, especially when I am directly noting a point of fact with no emotive terminology used whatsoever.
There was a temptation to just reply with “No. You are not.” But that’s probably less than helpful. Note that language is a subtle thing. Even when someone is making a purely factual claim, how they make it, especially when it involves assertions about other people, can easily carry negative emotional content. Moreover, careful, diplomatic phrasing can be used to minimize that problem.
Certainly. But in this case my phrasing was such that it was devoid of any emotive content outside of what the reader projects.
An explanation which is as simple as possible != an exercise left to the reader.
That is not necessarily true. The simplest explanation is whichever explanation requires the least effort to comprehend. That, categorically, can take the form of a query / thought-exercise.
In this instance, I have already provided the simplest direct explanation I possibly can: probabilistic statements do not correlate to exact instances. I was asked to explain it even more simply yet. So I have to get a little non-linear.
So how would you answer what the difference is between the two statements?
The former is a statement of belief about what is. The latter is a statement of what actually is.
So if you were McCoy would you ever say “He’s dead, Jim”?
Naively, yes.
So this confuses me even more. Given your qualification of what pedanterrific said, this seems off in your system. The objection you have to the Bayesians seems to be purely in the issues about the nature of your own map. The red-shirt being dead is not a statement in your own map.
If you had said no here I think I’d sort of see where you are going. Now I’m just confused.
I’ll elaborate.
Naively: yes.
Absolutely: no.
Recall the epistemology of naive realism:
Ok. Most of the time McCoy determines that someone is dead he uses a tricorder. Do you declare his death when you have an intermediary object other than your senses?
Naive realism does not preclude instrumentation.
So do tricorders never break?
My form of naive realism is contingent, not absolutist. It is perfectly acceptable to find myself to be in error. But … there are certain forms of dead that are just plain dead. Such as the cat I just came home to discover was in rigor mortis an hour ago.
Can we change topics, please?
My condolences. Would you prefer if we use a different example? The upshot is that the Bayesian agrees with you that there is a reality out there. If you agree that one can be in error then you and the Bayesian aren’t disagreeing here.
The only thing we seem to disagree on is how to formulate statements of belief. As I wrote elsewhere:
What I am espousing is the use of contingent naive realism for handling assertions regarding “the territory” coincident with Bayesian reasoning regarding “the map”.
I strongly agree that Bayesian reasoning is a powerful tool for making predictive statements, but I still affirm that the adage that the number of times a well-balanced coin has come up heads has no bearing on whether it actually will come up heads on the next trial.
But even here the Bayesian agrees with you if the coin is well-balanced.
If that is the case then the only disagreement you have is a matter of language not a matter of philosophy. This confuses me because the subthread about knowledge about one’s own internal cognitive processes seemed to assert a difference in actual philosophy.
I was making a trivial example. It gets more complicated when we start talking about the probabilities that a specific truth claim is valid.
I’m not exactly sure how to address this problem. I do know that it is more than just a question of language. The subthread you’re talking about was dealing on the existence of knowable truth, and I was using absolute truths for that purpose.
There are, necessarily, a very limited number of knowable absolute truths.
I do note that your wording on this instance, “knowledge about one’s own internal cognitive processes”, indicates that I may still be failing to achieve sufficient clarity in the message I was conveying in that thread. To reiterate: my claim was that it is absolutely true that when you are cognizant of a specific thought, you know that you are cognizant of that thought. In other words: you know absolutely that you are aware of your awareness. This is tautological to self-awareness.
If I say “he’s dead,” it means “I believe he is dead.” The information that I possess, the map, is all that I have access to. Even if my knowledge accurately reflects the territory, it cannot possibly be the territory. I see a rock, but my knowledge of the rock is not the rock, nor is my knowledge of my self my self.
The map is not the territory, but the map is the only one that you’ve got in your head. “It is raining” and “I do not believe it is raining” can be consistent with each other, but I cannot consistently assert “it is raining, but I do not believe that it is raining.”
Ah, but is your knowledge of your knowledge of your self your knowledge of your self?
You know, for a moment until I saw who it was who was replying, I had a brief flash of hope that this conversation had actually started to get somewhere.
This is starting to hurt my feelings, guys. This is my earnest attempt to restate this.
Put another way: the statement “the map is not the territory” does not have a confidence value of .99999whatever. It is a statement about the map, which is itself a territory, and is simply true.
(I may, of course, be mistaken about Logos’ epistemology, in which case I would appreciate clarification.)
Well, you can take “the map is not the territory” as a premise of a system of beliefs, and assign it a probability of 1 within that system, but you should not assign a probability of 1 to the system of beliefs being true.
Of course, there are some uncertainties that it makes essentially no sense to condition your behavior on. What if everything is wrong and nothing makes sense? Then kabumfuck nothing, you do the best with what you have.
First, I want to note the tone you used. I have not so disrespected you in this dialogue.
Second: “Put another way: the statement “the map is not the territory” does not have a confidence value of .99999whatever. It is a statement about the map, which is itself a territory, and is simply true.”
There is some truth to the claim that “the map is a territory”, but it’s not really very useful. Also, while it is demonstrable that the map is not the territory, it is not tautological and thus requires demonstration.
I am, however, comfortable stating that any specific individual “map” which has been so demonstrated to be a territory of its own without being the territory for which it is mapping is, in truth, a territory of its own (caveat: thar be recursion here!), and that this is expressible as a truth claim without the need for probability values.
First, I have no idea what you’re talking about. I intended no disrespect (in this comment). What about my comment indicated to you a negative tone? I am honestly surprised that you would interpret it this way, and wish to correct whatever flaw in my communicatory ability has caused this.
Second:
Okay, now I’m starting to get confused again. ‘Some’ truth? And what does ‘useful’ have to do with it?
It… seems tautological to me...
So, um, have I understood you or not?
The way I read “This is starting to hurt my feelings, guys.” was that you were expressing it as a miming of myself, as opposed to being indicative of your own sentiments. If this was not the case, then I apologize for the projection.
Whether or not a truth claim is valid is binary. How far a given valid claim extends is quantitative rather than qualitative, however. My comment towards usefulness has more to do with the utility of the notion (i.e., a true-and-interesting notion is more “useful” than a true-and-trivial notion).
Tautological would be “The map is a map”. Statements that depend upon definitions for their truth value approach tautology but aren’t necessarily so.
¬A ≠ A is tautological, as is A = A. However, B ⇔ A → ¬A = ¬B is definitionally true. So: “the map is not the territory” is definitionally true (and by its definition it is demonstrated). However, it is possible for a conceptual territory that the territory is the map. (I am aware of no such instance, but it conceptually could occur.) This would, however, require a different operational definition of what a “map” is from the context we currently use, so it would actually be a different statement.

I’m comfortable saying “close enough for government work”.
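These identities can be checked mechanically over the two truth values; a minimal sketch (treating ¬ as Boolean negation and ⇔ as material equivalence, i.e. equality of truth values):

```python
from itertools import product

# ¬A ≠ A and A = A hold for every truth value of A.
for A in (False, True):
    assert (not A) != A
    assert A == A

# B ⇔ A entails ¬A ⇔ ¬B: whenever A and B agree, so do their negations.
for A, B in product((False, True), repeat=2):
    if A == B:
        assert (not A) == (not B)

print("all identities hold")
```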
was a direct (and, in point of fact, honest) response to
Though, in retrospect, this may not mean what I took it to mean.
Agreed.
Ah, ok.
:)
This is going nowhere. I am intimately familiar with the epistemological framework you just related to me, and I am trying to convey one wholly unlike it to you. Exactly what good does it do this dialogue for you to continue reiterating points I have already discussed and explained to you ad nauseam?
But you know that you ‘see’ a rock. And that IS territory, in the absolute sense.
Unless you are using a naive realist epistemological framework. In which case it means “he’s dead” (and can be either true or not true.)
My understanding of what you are saying is that I should assign a probability of 1 to the proposition that I experience the quale of rock-seeing.
I do not accept that this is the case. There are alternatives, although they do not make much intuitive sense. It may seem to me that I cannot think I have an experience, and not have it, but maybe reality isn’t even that coherent. Maybe it doesn’t follow for reasons that are fundamentally impossible for me to make sense of. Also maybe I don’t exist.
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience, or that I don’t exist, but if I started getting evidence that reality is fundamentally incoherent, I’d definitely have to start judging it as more likely.
I am comfortable agreeing with this statement.
I do not accept that this is the case. Postulating counterfactuals does not validate the possibility of those counterfactuals (this is just another variation of the Ontological Argument, and is as invalid as is the Ontological Argument itself, for very similar reasons.)
This is a question which reveals why I have the problems I do with the epistemology embedded in Bayesian reasoning. It is very similar to the fact that I’m pretty sure you could agree with the notion that it is conceptually plausible that if reality were ‘fundamentally incoherent’ the statement ¬A = A could be true.

The thing is, it isn’t possible for such incoherence to exist. They are manifestations of definitions; the act of being so defined causes them to be proscriptive restrictions on the behavior of all that which exists.
For what it’s worth, I also hold that the Logical Absolutes are in fact absolute, though I also assert that this is essentially meaningless.
How do you know?
It seems pretty reasonable to me that A must always equal A, but if I suddenly found that the universe I was living in ran on dream logic, and 2+2 stopped equaling the same thing every time, I’d start viewing the notion with rather more skepticism.
Because they are functions of definition. Altering the definition invalidates the scenario.
Eliezer used a cognitive trick in this case: he postulated a counterfactual and treated that postulation as sufficient justification to treat the counterfactual as a possibility. This is not justified.
If you state that A is A by definition, then fine, you’ve got a definition that A=A. But that doesn’t mean that the definition must be manifest. Things cannot be defined into reality.
Now, I believe that A=A is a real rule that reality follows, with very high confidence. I am even willing to accept it as a premise within my system of beliefs, and assign it a probability of 1 in the context of that system of beliefs, but I can’t assign a probability of 1 to that system of beliefs.
It’s possible contingent on things that we have very strong reason to believe simply being wrong.
The postulation does not make it a possibility. It was a possibility without ever being postulated.
The postulation “Maybe time isn’t absolute, and there is no sense in which two events in different places can objectively be said to take place at the same time” would probably have been regarded by most, a thousand years ago, as impossible. People would respond that that simply isn’t what time is. But from what we can see today, it looks like reality really does work like that, and any definition of “time” which demands absolute simultaneity exist simply doesn’t correspond to our universe.
In any case, I don’t see how you get from the premise that you can have absolute certainty about your experience of qualia to the position that you can’t judge how likely it is that the Sportland Sports won the game last night based on Neighbor Bob’s say so.
Another epistemological breakdown may be about to occur here. I separate “real” from “existent”.
That which is real is any pattern by which that which exists is proscriptively constrained.
That which exists is any phenomenon which directly interacts with other phenomena.
I agree with you that you cannot define things into existence. But you most assuredly can define things into reality. The logical absolutes are examples of this.
Yup. But that’s a manifestation of the definition, and nothing more. If A = A had never been defined, no one would bat an eye at ¬A = A. But that statement would bear no relation to the definitions supporting the assertion A = A.

We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that they are not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
Agreed. But time isn’t “definitional” in nature; that is, it is more than a mere product of definition. We define ‘time’ to be an existing phenomenon with specific characteristics—and thus the definition of how time behaves is constrained by the actual patterns that existing phenomenon adheres to.
The number 3 on the other hand is defined as itself, and there is no existing phenomenon to which it corresponds. We can find three instances of a given phenomenon, but that instantiated collection should not be conflated with an instantiation of the number three itself.
You can’t. Conversations—and thus, topics—evolve over time, and this conversation has been going on for over 24 hours now.
Things that have been believed impossible have sometimes been found to be true. Just because you do not yet have any reason to suspect that something is true, does not mean that you can have assign zero probability to its truth. If you did assign a proposition zero probability, no amount of justification could ever budge you from zero probability; whatever evidence or arguments you received, it would always be infinitely more likely that they came about from some process unconnected to the truth of the proposition than that the proposition was true.
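The “stuck at zero” point is just Bayes’ theorem with P(H) = 0: no likelihood ratio, however extreme, can move the posterior. A minimal sketch for a binary hypothesis:

```python
def posterior(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """P(H | E) by Bayes' theorem for a binary hypothesis H."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# A tiny-but-nonzero prior can be driven up by strong evidence...
print(posterior(1e-9, 0.99, 1e-12))   # ≈ 0.999
# ...but a prior of exactly zero never moves, whatever the evidence says.
print(posterior(0.0, 0.99, 1e-12))    # 0.0
```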
I’m getting rather tired of having to start this conversation over yet again. Do you understand my meaning when I say that until such time as a given counterfactual is provided with justification to accept its plausibility, the null hypothesis is that it is not?
I believe that I understand you, but I don’t think that is part of an epistemology that is as good at forming true beliefs and recognizing false ones.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
As a social prescription, you can call certain claims “justified knowledge,” based on how they are arrived at, and not call other claims about reality “justified knowledge” because the evidence on which they were arrived at does not meet the same criteria. And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong. If you had a computer that could process whatever information you put into it, and tell you exactly how confident you ought to be about any given claim given that information, and have the claims be right exactly that often, then the category of “justified belief” will not correspond to any particular range of likelihood.
This sounds like a point where you would complain that I am confusing “aggregated average” with “manifested instantiation.” We can accept the premise that the territory is the territory, and a particular event either happened or it did not. So if we are talking about a particular event, should we be able to say that it actually definitely happened, because that’s how the territory works? Well, we can say that, but then we start observing that some of the statements we make about things that “definitely happened” turn out to be wrong. If we accept as a premise in our epistemologies that the territory is what it is, we can accept that the statements we make about the territory are either right, absolutely, or wrong, absolutely. But we don’t know which are which, and “justified knowledge” won’t correspond to any particular level of confidence.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong, and tells you that you can’t predict the likelihood of some things when you can actually make confidence predictions about them and be right as often as you say you should be, I think you’ll have a hard time selling that it’s a good epistemology.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
Being able to readily differentiate between these two categories is a vital element for ways of knowing.
I don’t know, at this point, exactly where this particular thread is in the nested tree of various conversations I’m handling. But I can tell you that at no time have I espoused any form of absolutism that allows for non-absolute truths to be expressed as such.
What I have espoused, however, is the use of contingent naive realism coincident with Bayesian reasoning—while understanding the nature of what each “tool” addresses.
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Rational-skepticism doesn’t necessarily operate experimentally and I’m sure you know this.
That being said: yes, yes I did. Thought-experiment, but experiment it is.
Yes, I am aware of how scientists use null hypotheses. But the null hypothesis is only a construct which exists in the context of significance testing, and significance testing does not reliably get accurate results when you measure small effect sizes, and also gives you different probabilities based on the same data depending on how you categorize that data; Eliezer already gave an example of this in one of the pages you linked earlier. The whole reason that people on this site in general have enthusiasm for Bayesian statistics is that we believe they do better, as well as it is possible to do.
The concept of a “null hypothesis,” as it is used in frequentist statistics, doesn’t even make sense in the context of a system where, as you have claimed, we cannot know that it is more likely that a coin will come up heads on the basis of it having come up heads the last hundred times you’ve flipped it.
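The hundred-heads coin mentioned above can be made concrete. Here is a minimal sketch of the standard Bayesian treatment, assuming a uniform Beta(1, 1) prior over the coin’s bias; the prior and the flip counts are illustrative choices, not anything asserted in the thread:

```python
from fractions import Fraction

def beta_predictive(heads, tails, a=1, b=1):
    """Posterior predictive P(next flip is heads) under a Beta(a, b) prior.

    With a uniform Beta(1, 1) prior this reduces to Laplace's rule of
    succession: (heads + 1) / (heads + tails + 2).
    """
    return Fraction(heads + a, heads + tails + a + b)

# After observing 100 heads and 0 tails, a Bayesian with a uniform prior
# assigns the next flip a probability of heads of 101/102 -- very high,
# but never exactly 1.
p = beta_predictive(100, 0)
print(p)         # 101/102
print(float(p))  # ~0.990
```

The point of the exact fractions is to make visible that the posterior approaches certainty without ever reaching it, which is exactly the claim at issue.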
If I say “it is raining out,” this is a claim about the territory, yes. If I say “I believe that it is raining out,” this is a claim about my map (and my map exists within, and is a part of, the territory, so it is also a claim “about the territory,” but not about the same part of the territory.)
But my claim about the territory is not certain to be correct. If I say “it is raining out, 99.99999% confidence,” which is the sort of thing I would mean when I say “it is raining out,” that means that I am saying that the territory probably agrees with this statement, but the information in my map does not allow me to have a higher confidence in the agreement than that.
A null hypothesis is a particularly useful simplification, not strictly a necessary tool.
So I understand you—you are here claiming that it is not necessary to have a default position in a given topic?
I am saying it is not strictly necessary to have a hypothesis called “null” but rather that such a thing is just extremely useful. I would definitely say the thing about defaults too.
That’s not what a null hypothesis is. A null hypothesis is a default state.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
If you tell me you’ve flipped a coin, I may not have any default position on whether it’s heads or tails. Similarly, I might have no prior belief, or a symmetric prior, on lots of questions that I haven’t thought much about or that don’t have any visible correlation to anything else I know about.
But you would have the default position that it had in fact occupied one of those two outcomes.
Sure. I didn’t say “I have no information related to the experiment”. What I’m saying is this: if I do an experiment to choose between K options, it might be that I don’t have any prior (or have a symmetric prior) about which of those K it will be. That’s what a null hypothesis in statistics is. When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Yes, any experiment also will give you information about whether the experiment’s assumptions were valid. But talking about null hypotheses or default hypotheses is in the context of a particular formal model, where we’re referring to something more specific.
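The drug-trial sense of “null hypothesis” described above can be sketched as an exact one-sided binomial test. The trial numbers below are invented for illustration, and the null value p0 = 0.5 stands in for “no effect”:

```python
from math import comb

def binomial_p_value(successes, n, p0=0.5):
    """One-sided exact binomial p-value: P(X >= successes), X ~ Bin(n, p0).

    p0 encodes the null hypothesis -- e.g. "the drug has no effect," so a
    recovery is assumed as likely as a non-recovery under the null.
    """
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical trial: 35 of 50 treated patients recover. Under the null
# hypothesis of no effect, how surprising is a result at least this large?
p = binomial_p_value(35, 50)
print(round(p, 4))
```

Note that the test only ever addresses the specific formal hypothesis “no effect,” which is the narrower sense being distinguished here from a general default belief.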
Correct. And that’s a situationally relevant phenomenon. One quite similar to how “the coin will be on one of its sides” is situationally relevant to the coin-toss. (As opposed to it being on edge, or the faces smoothed off.)
I’m not sure whether we’re disagreeing here.
You had asked for an example of where an individual might have no prior on “on a given topic.” I gave one, for a narrow topic: “is the coin heads or tails?”. I didn’t say, and you didn’t ask, for a case where an individual has no prior on anything related to the topic.
But let me give one attempt at that stronger claim. You’ve never met my friend Sam. You’ve never heard of or thought of my friend Sam. Before this comment, you didn’t have a prior on his having a birthday or on his existing; you had never considered the possibility that lesswrong-commenter asr has a friend Sam. I would say you have no beliefs on the topic of Sam’s birthday.
It is my default position that they do not exist, even as absent constructs, unless otherwise noted (with the usual lowered threshold of standards of evidence for more evidently ordinary claims). That’s the whole point of default positions; they inform us how to react to new criteria or phenomena as they arise.
By bringing up the issue of non-topics you’re moving the goal-post. I asked you how it could be that a person could have no defaults on a given topic.
If I tell you that there is a flugmizzr, then you know that certain things are ‘true’—at least, you presume them to be until shown otherwise: one, that flugmizzrs are enumerable; two, that they are somehow enumerable by people; and three, that flugmizzrs are knowable, discrete phenomena.
Those are most assuredly amongst your defaults on the topic. They could easily each be wrong—there is no way to know—but if you trust my assertion of “a flugmizzr” then each of these become required defaults, useful for acquiring further information.
To clarify—my answer is that “there are topics I’ve never considered, and before consideration, I need not have a default belief.” For me at least, the level of consideration required to actually form a belief is nontrivial. I am often in a state of uninterested impartiality. If you ask me whether you have a friend named Joe, I would not hazard a guess and would be about equally surprised by either answer.
To put it more abstractly: There’s no particular computational reason why the mind needs to have a default for every input. You suggest some topic X, and it’s quite possible for me to remain blank, even if I heard you and constructed some internal representation of X. That representation need not be tied to a belief or disbelief in any of the propositions about X under discussion.
Agreed with your main point; curious about a peripheral:
Is this meaningfully different, in the context you’re operating in, from coming to believe that you probably didn’t experience what you believe you experienced?
Because I also have trouble imagining myself being skeptical about whether I’m actually experiencing what I currently, er, experience myself experiencing (though I can certainly imagine believing things about my current experience that just ain’t so, which is beside your point). But I have no difficulty imagining myself being skeptical about whether I actually experienced what I currently remember myself having experienced a moment ago; that happened a lot after my brain injury, for example.
Yes. I have plenty of evidence that people sometimes become convinced that they’ve had experiences that they haven’t had, but reality would have to work very differently than I think it does for people not to be having the qualia they think they’re having.
(nods) That’s what I figured, but wanted to confirm.
Given that my “current” perception of the world is integrating a variety of different inputs that arrived at my senses at different times, I mostly suspect that my intuitive confidence that there’s a sharp qualitative difference between “what I am currently experiencing” and “what I remember having experienced” is (like many of my intuitive confidences) simply not reflective of what’s actually going on.
Pedanterrific nailed it.
Or rather, people mostly don’t realize that even claims for which they have high confidence ought to have probabilities attached to them. If I observe that someone seems to be dead, and I tell another person “he’s dead,” what I mean is that I have a very high but less than 1 confidence that he’s dead. A person might think they’re justified in being absolutely certain that someone is dead, but this is something that people have been wrong about plenty of times before; they’re simply unaware of the possibility that they’re wrong.
Suppose that you want to be really sure that the person is dead, so you cut their head off. Can you be absolutely certain then? No, they could have been substituted by an illusion by some Sufficiently Advanced technology you’re not aware of, they could be some amazing never-before-seen medical freak who can survive with their head cut off, or more likely, you’re simply delusional and only imagined that you cut off their head or saw them dead in the first place. These things are very unlikely, but if the next day they turn up perfectly fine, answer questions only they should be able to answer, confirm their identity with dental records, fingerprints, retinal scan and DNA tests, give you your secret handshake and assure you that they absolutely did not die yesterday, you had better start regarding the idea that they died with a lot more suspicion.
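The reversal described above is just Bayes’ theorem in odds form. A small sketch, with invented numbers (the prior and the likelihood ratio are assumptions for illustration, not measurements):

```python
def posterior_odds(prior_p, likelihood_ratio):
    """Update a probability via odds form: posterior odds = prior odds * LR.

    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not-hypothesis).
    Returns the posterior probability.
    """
    prior_odds = prior_p / (1 - prior_p)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# You were 99.99999% sure they died (odds of roughly 10,000,000 : 1).
# Suppose the next day's evidence (DNA, dental records, the handshake...)
# is a billion times likelier if they are alive than if they died.
# The numbers are made up; the point is that no finite confidence
# survives a sufficiently lopsided likelihood ratio.
p = posterior_odds(0.9999999, 1 / 1e9)
print(p)  # ~0.0099: "they died" is now quite improbable
```

This is why refusing to attach a probability of exactly 1 to “he’s dead” matters: only a probability strictly below 1 leaves the machinery anything to update.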
Thank you for reiterating how to properly formulate beliefs. Unfortunately, this is not relevant to this conversation.
That a truth claim is later falsified does not mean it wasn’t a truth claim.
Again, thank you for again demonstrating the Problem of Induction. Again, it just isn’t relevant to this conversation.
By continuing to bring these points up you are rejecting restrictions, definitions, and qualifiers I have added to my claims, to the point where what you are attempting to discuss is entirely unrelated to anything I’m discussing.
I have no interest in talking past one another.
Ok. I don’t think 1 is a Bayesian issue by itself. That’s a general rationality issue. (Speaking as a non-Bayesian fellow-traveler of the Bayesians.)
2, 3, and 4 seem roughly accurate. Whether 3 is correct depends a lot on how you unpack occurrence. A Bayesian is perfectly ok with the central limit theorem and applying it to a coin. This is a statement about occurrences. A Bayesian agrees with the frequentist that if you flip a fair coin the ratio of heads to tails should approach 1 as the number of flips goes to infinity. So what do you mean by occurrences here that Bayesians can’t talk about them?
But there then seems to be a total disconnect between those statements and your later claims and even going back and reading your earlier remarks doesn’t give any illuminating connection.
I can’t parse this in any way that makes sense. Are you simply objecting to the fact that 0 and 1 are not allowed probabilities in a Bayesian framework? If not, what does this mean?
I’m saying that the Bayesian framework is restricted to probabilities, and as such is entirely unsuited to non-probabilistic matters. Such as the number of fingers I am currently seeing on my right hand as I type this. For you, it’s a question of what you predict to be true. For me, it’s a specific manifested instance, and as such is not subject to any form of probabilistic assessments.
Note that this is not a claim that we do not share a single physical reality, but rather a question of the ability of either of us to make valid claims of truth.
I’m slowly beginning to understand your thought process. The Bayesian approach treats the number of fingers you currently see on your right hand as a probabilistic matter. The reason it does this, and the reason this method is preferable to those which treat the number of fingers on your hand as not subject to probabilistic assessments, is that you can be wrong about the number of fingers on your hand. To demonstrate this I could describe any number of complex scenarios in which you have been tricked about the number of fingers you have. Or I could just point you to real instances of people being wrong about the number of limbs they possess or people who outright deny their disability.
Like anything else a “specific manifested instance” is an entity which we are justified in believing so long as it reliably explains and predicts our sensory impressions.
True, but irrelevant. It would have helped if you had read further; you would have seen me explain to JoshuaZ that he had made exactly the same error that you just made in understanding what I just said.
The specific manifested instance is not the number of fingers but the number of fingers I am currently seeing. It is not, fundamentally speaking, possible for me to be mistaken about the latter (caveat: this only applies to ongoing perception, as opposed to recollections of perception).
It is not proper to speak of beliefs about specific manifested instances when making assertions about what those instantiations actually are.
The statement “I observe X” is unequivocally absolutely true. Any conclusions derived from it, however, do not inherit this property.
Similarly, I want to note that you are close to making a categorical error regarding the relevance of explanatory power and predictions to this discussion. Those are very important elements for the formation of rational beliefs; but they are irrelevant to justified truth claims.
Perceptions do not have propositional content; speaking about their truth or falsity is nonsense. Beliefs about perceptions, like “the number of fingers I am currently seeing is five,” do, and can correspondingly be false. They are of course rarely false, but humans routinely miscount things. The closer a belief gets to merely expressing a perception, the less there is that can be meaningfully said about its truth value.
A rational belief is a justified truth claim.
Unless you are operating within the naive realist framework.
You’ve fallen into that same error I originally warned about. You are conflating beliefs about the nature of the perception with beliefs about the content of the perception.
True but irrelevant to this discussion. I never claimed that there were absolute truths accessible to an arbitrary person which were of significant informational value. I only asserted that they do exist.
Then I guess my epistemology doesn’t exist. I must have just been trolling this entire time. I couldn’t possibly believe that this statement is committing a categorical error.
Are you willing to accept the notion that I am conveying to you a framework that you are currently unfamiliar with that makes statements about how truth, belief, and knowledge operate which you currently disagree with? If yes, then why do you actively resist comprehending my statements in this manner? What are you hoping to achieve?
What are you talking about! We’re talking about epistemology! If you want to demonstrate why calling a rational belief a justified truth claim is a category error then do so. But please stop condescendingly repeating it. I actively “resist comprehending your statements”?! You can’t just assert things that don’t make sense in another person’s framework and expect them to not say “No those things are the same”.
In any case, if it is a common position in the epistemological literature then I suspect I am familiar with it and that you are simply really bad at explaining what it is. If it is your original epistemological framework then I suspect it will be a bad one (nothing personal, just my experience with the reference class).
Is that your position?
You keep doing this. You keep using words to make distinctions as if it were obvious what distinction is implied. I can assure you nearly no one here has any idea what you mean by the difference between the nature of the perception and the content of the perception. Please stop acting like we’re stupid because you aren’t explaining yourself.
Actually, that’s a probabilistic assertion that you are seeing n fingers for whatever n you choose. You could for example be hallucinating. Or could miscount how many fingers you have. Humans aren’t used to thinking that way, and it generally helps for practical purposes not to think this way. But presumably if five minutes from now a person in a white lab coat walked into your room and explained that you had been tested with a new reversible neurological procedure that specifically alters how many fingers people think they have on their hands and makes them forget that they had any such procedure, you wouldn’t assign zero probability to their story being true.
Note by the way there are stroke victims who assert contrary to all evidence that they can move a paralyzed limb. How certain are you that you aren’t a victim of such a stroke? Is your ability to move your arms a specific manifested instance? Are you sure of that? If that case is different than the finger example how is it different?
If I am hallucinating, I am still seeing what I am seeing. If I miscount, I still see what I see. There is nothing probabilistic about the exact condition of what it is that I am seeing. You can, if you wish to eschew naive realism, make fundamental assertions about the necessarily inductive nature of all empirical observations—but then, there’s a reason why I phrased my statement the way I did: not “I can see how many fingers I really have” but “I know how many fingers I am currently seeing”.
Are you able to properly parse the difference between these two, or do I need to go further in depth about this?
(The remainder of your post expanded further on your response, which itself was based on an erroneous reading of what I had written. As such I am disregarding it.)
If I understand this correctly you aren’t understanding the nature of subjective probability. But please clarify “specific manifested instantiation”.
No, I understand it perfectly well. I’m asserting that subjective probability is irrelevant in justifying truth-claims.
specific: “Belonging or relating uniquely to a particular subject”
manifest: “Display or show (a quality or feeling) by one’s acts or appearance; demonstrate”
instantiate: “Represent as or by an instance”
Do you take the same attitude with all the intellectual communities that don’t believe appeals to authority are always fallacious? If so you must find yourself rather isolated.
FWIW, for I think the first time I see LW as going more or less mad in perceiving value in a blog post linked to in the discussion section.
What are the chances I am wrong? Before looking at the subject itself and the comments there, what could we say about the chances I am wrong?
For what it is worth I hadn’t followed the thread and my impression when reading it after your priming was “just kinda ok”. The reasoning wasn’t absurd or anything but there isn’t any easy way to see how much influence that particular dynamic has had relative to other factors. My impression is that the effect is relatively minor.
I tentatively think any story humans tell about natural selection that obeys certain Darwinian and logical rules is true in that it must have an effect. However this effect may be too small to make any predictions from. This thought is under suspicion for committing the no true Scotsman fallacy.
An example is group selection. If humans can tell a non-flawed story about why it would be to the benefit of the foxes in a region to individually restrain their breeding, this does not mean one can predict that foxes will be seen to do this. It does mean that the effect itself is real, subject to caveats about the rate of migration of foxes from one region to another, etc., such that under artificial enough conditions the real effect could be important. The problem is that there are a million other real effects that don’t come to mind as nice stories, and all have different vectors of effect.
This is why evolutionary psychology and the like are so bewitching and misleading. Pretty much all the effects postulated are true, though most are insignificant. People are entranced by their logical truth.
I think I agree with all of that (with the caveat that I don’t know exactly which Evolutionary Psychology claims you would dismiss as insignificant.)
… I am unable to parse “FWIW, for I think the first time I see LW as going more or less mad in perceiving value in a blog post linked to in the discussion section.” to anything intelligible. What are you trying to say? I’ll wait until you respond before following the link.
Recently, I found myself disagreeing with dozens of LWers. Presumably, when this happens, sometimes I’m right and sometimes I’m wrong. Since I shouldn’t be totally confident I am right this time, how confident should I be?
Ahh.
Confidence in a given circumstance should be constrained by:
- the available evidence at hand
- how well you can demonstrate internal and external consistency in conforming to said evidence
- how well your understanding of that evidence allows you to predict outcomes of events associated with said evidence
Any probabilities resultant from this would have to be taken as aggregates for predictive purposes, of course, and as such could not be ascribed as valid justification in any specific instance. (This caveat should be totally unsurprising from me at this point.)
It’s a bit tricky because my position is that the post has practically no content and cannot be used to make predictions because it is a careful construction of an effect that is reasonable and does not contradict evidence, though it is in complete disregard of effect size.
After a brief skimming I have come to the conclusion that a brief skimming is not effective enough to provide a sufficient understanding of the conversation thread in question as to allow me to form any opinions on the topic.
tl;dr version: I skimmed it, and couldn’t wrap my head around it, so I’ll have to get back to you.
All three of those points are appeals to authority.
… To be an appeal to authority I would have to claim I was correct because some other person’s reputation says I am. So this is just you signalling you don’t care whether what you say is true; you merely wish to score points.
That you were upvoted tells me that others share this hostility to me.
This radically adjusts my views of LW as a community. Very, very negatively.
You are appealing to your authority regarding your mental states, degree of comprehension and reading history. This is why it is valid for you to simply assert them instead of us expecting you to provide us with internet histories and fMRI lie detector results. I am trying to point out how absurdly wrong your position on appeals to authority is. Detailed explanations had not succeeded so I hoped pointing out your use of valid appeals to authority would succeed. I desired karma to the extent that one always desires karma when commenting on Less Wrong.
You are of course free to do this. But I suggest that before leaving you consider the possibility that you are wrong. Presumably at one point you found insight in this community and thought it was worth spending your time here. Did you at that time imagine you could write an accurate and insightful comment that would be voted to −8? Is it not possible that you don’t understand the reasons we’ve given for considering appeals to authority sometimes valid? Is it not possible that you misunderstand how “appeal to authority” is being used? Alternatively, is it not possible that you have not adequately explained your clever understanding of justification that prohibits appeals to authority? If you seriously consider these possibilities and cannot see how we could be responding to you rationally then you are probably right to hold a low opinion of us.
In one day, in one thread, I have ‘lost’ roughly 70 ‘karma’. All from objecting to the notion that appeals to authority can be valid, and from my disparaging on Bayesian probabilism’s capability to make truth statements in a non-probabilistic fashion.
I expected better of you all, and I have learned my lesson.
For what it’s worth, as someone who has been reading your various exchanges without becoming involved in them (even to the extent of voting), I think your summary of the causes of that shift leaves out some important aspects.
That aside, though, I suspect your conclusion is substantively correct: karma-shift is a reasonable indicator (though hardly a perfectly reliable one) of how your recent behavior is affecting your reputation here, and if you expected that your recent behavior would improve your reputation your expectations were way misaligned with the reality of this site.
Even assuming Logos was entirely correct about all his main points it would be bizarre to expect anything but a drastic drop in reputation in response to Logos’ recent behavior. This requires only a rudimentary knowledge of social behavior.
It’s a question of degree. I realized from the outset that I’m essentially committing heresy against sacred beliefs of this community. But I expected a greater capacity for rationality.
I’m a non-Bayesian with over 6000 karma. I’ve started discussion threads about problems with finding the right priors in a Bayesian context and have expressed skepticism that any genuinely good set of priors exists. Almost every time I post on something related to AGI it is to discuss reasons why I think fooming isn’t likely. I’m not signed up for cryonics and have made multiple comments discussing problems with it from both a strict utilitarian perspective and from a more general framework. When there was a surge of interest in bitcoins here I made a discussion thread pointing out a potentially disturbing issue with that. One of my very first posts here was arguing that phlogiston is a really bad example of an unfalsifiable theory, and I’ve made this argument repeatedly here, despite phlogiston being the go-to example here for a bad scientific theory (although I don’t seem to have had much success in convincing anyone).
I have over 6000 karma. A few days ago I had gained enough karma to be one of the top contributors in the last 30 days. (This signaled to me that I needed to spend less time here and more time being actually productive.)
It should be clear from my example that arguing against “sacred beliefs” here does not by itself result in downvotes. And it isn’t like I’ve had those comments get downvoted and balanced out by my other remarks. Almost all such comments have been upvoted. I therefore have to conclude that either the set of heresies here is very different than what I would guess or something you are doing is getting you downvoted other than your questioning of sacred beliefs.
It would not surprise me if quality of arguments and their degree of politeness matter. It helps to keep in mind that in any community with a karma system or something similar, high quality, polite arguments help more. Even on Less Wrong, people often care a lot about civility, sometimes more than logical correctness. As a general rule of thumb in internet conversations, high quality arguments that support shared beliefs in a community will be treated well. Mediocre or low quality arguments that support community beliefs will be ignored or treated somewhat positively. At the better places on the internet high quality arguments against communal beliefs will be treated with respect. Mediocre or low quality arguments against communal beliefs will generally be treated harshly. That’s not fair, but it is a good rule of thumb. Less Wrong is better than the vast majority of the internet but in this regard it is still roughly an approximation of what you would expect on any internet community.
So when one is trying to argue against a community belief, you need to be very careful to have your ducks lined up in a row. Have your arguments carefully thought out. Be civil at all times. If something is not going well take a break and come back to it later. Also, keep in mind that aside from shared beliefs, almost any community has shared norms about communication and behavior and these norms may have implicit elements that take time to pick up. This can result in harsh receptions unless one has either spent a lot of time in the community or has carefully studied the community. This can make worse the other issues mentioned above.
That’s a standard element of Bayesian discourse, actually. The notion I’ve been arguing for, on the other hand, fundamentally violates Bayesian epistemology. And yes, I haven’t been highly rigorous about it; but then, I’m also really not all that concerned about my karma score in general. I was simply noting it as demonstrative of something.
However, actively dishonest rhetorical tactics have been taken up ‘against’ me in this thread, and that is what I have reacted strongly negatively against.
That’s an interesting notion, and one I’d be curious to see the Bayesians here comment on. Do you agree that discussions of whether good priors are even possible seem to be standard Bayesian discourse?
I haven’t seen any indications of that in this thread. I do however recall the unfortunate communication lapses that you and I apparently had in the subthread on anti-aging medicine, and it seems that some of the comments you made there fit a similar pattern of accusing people of “actively dishonest rhetorical tactics” (albeit less extreme in that context). Given that two similar issues have occurred on wildly different topics, there seem to be two different explanations: 1) There is a problem with most of the commentators at Less Wrong 2) Something is occurring with the other common denominator of these discussions.
I know you aren’t a Bayesian so I won’t ask you to estimate the probabilities in these two situations. But let me ask a different question: If we took a neutral individual who has never gone to LW and isn’t a Bayesian, do you think you would bet even that on reading this thread they would agree with you and think there’s a problem with the Less Wrong community? What about 2-1 odds? 3-1? If not, why not?
Very gently worded. It is my current belief that both statements are true. I have never before so routinely encountered such difficulty in expressing my ideas and having them be understood, despite the fact that the inferential gap between myself and others is… well, larger than what I have ever witnessed between any other two people save those with severely abnormal psychology. When I’m feeling particularly “existential” I sometimes worry about what that means about me.
On the other hand, I have also never before encountered a community whose dialogue was so deeply entrenched with so many unique linguistic constructs as LessWrong. I do not, fundamentally, disapprove of this: language shapes thought, after all. But this does also create a problem that if I cannot express my ideas within that patois, then I am not going to be understood—with the corollary that ideas which directly violate those constructs’ patterns will be violently rejected as incomprehensible, “wrong”, or “confused”.
I am disappointed by this. Shifting the framework of the question from the personal perspective to the outside view does not substantively make it a different question.
I’m not sure this is the case. At least it has not struck me as the case. There is a fair number of constructs here that are specific to LW and a larger set that while not specific to LW are not common. But in my observation this results much more frequently in people on LW not explaining themselves well to newcomers. It rarely seems to result in people not being understood or rejected as confused. The most likely way for that to happen is for someone to try to speak in the LW patois without internalizing the meaning of the terms.
Not my intended point by the question. I wanted an outside view point in general and wanted your estimate on what it would be like. I phrased it in terms of a bet so one would not need to talk about any notion of probability but could just speak of what bets you would be willing to take.
To be frank, have you ever encountered a scenario which displays this phenomenon as vulgarly as this one?
Well, again; note—what I’m doing here is actually directly violating the framework of the “LWP”. Those who have internalized it, but are unfamiliar with my own framework, would have significant barriers to comprehension. And that is very frequently associated with all sorts of negative reactions—especially when by that framework I am clearly a very confused person who keeps asserting that I am not the one who’s confused here.
I can see why you think it would be an interesting question. I have, however, no opinion or belief on the matter; it is a thoroughly uninteresting question to me.
I’m not at all convinced that that is what is going on here, and this doesn’t seem to be a very vulgar case if I am interpreting your meaning correctly. You seem to think that people are responding in a much more negative and personal fashion than they are.
So the solution then is not to just use your own language and get annoyed when people fail to respond positively. The solution there is to either use a common framework (e.g. very basic English), or to carefully translate into the new language, or to start off by constructing a helpful dictionary. In general, it is socially rude and unlikely to be productive to go to any area of the internet where there’s a specialized vocab and not only not learn it but use a different vocab that has overlapping words. I wouldn’t recommend this in a Dungeons and Dragons forum either.
This is unfortunate. It is a question that while uninteresting to you may help you calibrate what is going on. I would tentatively suggest spending a few seconds on the question before dismissing it.
e.g. “d20 doesn’t mean a twenty-sided die; it refers to the bust and cup size of a female NPC!”
Also the framework presented in “A Practical Study of Argument” by Grovier—my textbook from my first year Philosophy class called “Critical Thinking”. It is actually the only textbook I kept from my first undergrad degree—definitely recommended for anyone wanting to get up to speed on pre-bayesian rational thinking and argument.
You mean Govier.
This is unwarranted and petty.
That’s true.
Nonsense. This is exactly on topic. It isn’t my “Less Wrong Framework” you are challenging. When I learned about thinking, reasoning and fallacies, LessWrong wasn’t even in existence. For that matter, Eliezer’s posts on OvercomingBias weren’t even in existence. Your claim that the response you are getting is the result of your violation of LessWrong-specific beliefs is utterly absurd.
So that justifies your assertion that I violate the basic principles of logic and argumentation?
I have only one viable response: “Bullshit.”
What justifies my assertion that you violate the basic principles of logic and argumentation is Trudy Govier, “A Practical Study of Argument”, 4th edition—Chapter 5 (“Premises—What to accept and why”), page 138. Under the subheading “Proper Authority”.
For an explanation of when and why an appeal to authority is, in fact, fallacious see pages 141, 159 and 434. Or wikipedia. Either way my disagreement with you is nothing to do with what I learned on LessWrong. If I’m wrong it’s the result of my prior training and an independent flaw in my personal thinking. Don’t try to foist this off on LessWrong groupthink. (That claim would be credible if we were arguing about, say, cryonics.)
You are really going to claim that by logically arguing over what qualifies as a valid argument I violate the basic principles of argumentation and logic?
I reiterate: I have but one viable response.
Just guessing from the chapter and subheading titles, but I’m pretty sure that bit of “A Practical Study of Argument” has to do with why arguments from authority are not always fallacious.
And this makes whatever it says the inerrant truth, never to be contradicted, and therefore a fundamental basic principle of logic and argumentation?
The claim was
The claim was later refined to: “[the] assertion that [Logos01] violate[s] the basic principles of logic and argumentation”.
By you, yes.
Which was agreed to
Okay. But do you acknowledge that the quoted exchange involves a shifting of the goalposts on your part?
Sure.
This is another straw man.
Then by all means enlighten me as to how it can be possible that merely by disagreeing with Govier on the topic of appeals to authority, and in doing so providing explanations based on deduction and induction, I “violate the basic principles of logic and argumentation”.
That is not what an appeal to authority is.
I have no interest in being a party to such a wildly dishonest conversation.
I don’t know why this hasn’t been done before: appeal to authority on wikipedia.
As far as I can tell, this definition is what the rest of us are talking about, and it specifically says that appealing to authority only becomes a fallacy if a) the authority is not a legitimate expert, or b) it is used to prove the conclusion must be true. If you disagree with WP’s definition, could you lay out your own?
An appeal to authority is the use of an authority’s statements as a valid argument in the absence of corroborating materials to support that argument.
Note that arguments here refer to specific, instantiated claims, and as such are not subject to probabilistic assessments.
What, you can’t appeal to your own authority? What would you call “because I said so”?
Bare assertion.
EDIT: Right, saying that without providing some context was probably a bad idea. I’m not trying to disparage Jack’s comment here; it’s of the same general form as an appeal to external authority, and I’d expect that to come across without saying so. But if you’re being extra super pedantic...
Hey, I didn’t downvote you. (I actually thought that stating “Bare assertion.” as a bare assertion was metahilarious, but I didn’t think it technically applied.)
I wish I could say I’d done that on purpose.
I appreciate the clarification but what he said was not as bad as a bare assertion. In fact, what he said was not unsupported at all! He was speaking as the best authority we have here on Logos01′s comprehension and reading habits. A bare assertion would have been if he made a claim about which we have no reason to think he is an authority (like, say, the rules of inductive logic).
On reflection you’re right.
A bare assertion, as Nornagest indicated. Also a form of fallacy. If I had done such a thing, that would be worthy of consideration here. I have not, so we can safely stop here.
Evidence itself can be mistaken. If your theory says an event is rare, and it happens, then that is evidence against the theory. If the theory is correct, it should be overwhelmed by evidence for the theory. If the statements of experts statistically correlate with reality, then you should update on the statements of experts until they are screened off by evidence/arguments you have looked at more directly or the statements of other experts.
Statistical projections are not useful in instantiated instances. That is all.
I do not follow.
Being less than a perfect oracle does not make an information source worthless.
Individuals are not information sources but witnesses of information sources. Experts are useful heuristically for trust-system utilization but should not, wherever avoidable, be taken directly as viable sources of reliable, validated data. Whatever experience or observations an expert has made in a given field, if related directly, can be treated as valid data, however. But the argument in this case is NOT “I am an expert in {topic of X} and I say X is true” but rather “I am an expert in {topic of X} and I have observed X.”
At which point, you may safely update your beliefs towards X being likely true—and it is again vital to note, for purposes of this dialogue, the difference between assertions by individuals and testimonies of evidence.
Anything that correlates to an information source is necessarily itself an information source.
This is either a trivially true statement or else a conflation of terminology. In the context of where we are in this dialogue, your statement is dangerously close to said conflation fallacy. A source of information which conveys information about another source of information is not that second source of information. A witness, by definition, is a source of information—yes. But that witness is not itself anything other than a source of information which relays information about the relevant source of information.
I’m really confused about why you’re not understanding this. Authorities are reliable to different degrees about different things. If I tell you I’m wearing an orange shirt that is clearly evidence that I am wearing an orange shirt. If a physicist tells you nothing can accelerate past the speed of light that is evidence that nothing can accelerate past the speed of light. Now, because people can be untrustworthy there are many circumstances in which witness testimony is less reliable than personal observation. But it would be rather bothersome to upload a picture of me in my shirt to you. It can also be difficult to explain special relativity and the evidence for it in a short time span. In cases like these we must settle for the testimony of authorities. This does not make appeals to authority invalid.
Now of course you might have some evidence that suggests I am not a reliable reporter of the color of my shirt. Perhaps I have lied to you many times before or I have incentives to be dishonest. In these cases it is appropriate to discount my testimony to the degree that I am unreliable. But this is not a special problem with appeals to authority. If you have reason to think you are hallucinating, perhaps because of the LSD you took an hour ago, you should appropriately discount your eyes telling you that the trees are waving at you.
Now since appeals to authority, like other kinds of sources of information, are not 100% reliable, it makes sense to discuss the judgment of authorities in detail. Even if Eliezer is a reliable authority on lots of things it is a good idea to examine his reasoning. In this regard you are correct to demand arguments beyond “Eliezer says so”. But it is nonetheless not the case that “appeals to authority are always fallacious”. On the contrary, modern science would be impossible without them, since no one can possibly make all the observations necessary to support the reliability of a modern scientific theory.
You are confused because you do not understand, not because I do not understand.
Simply put, no. No it is not. Not unless the physicist can provide a reason to believe he is correct. Now, in common practice we assume that he can—but only because it is normal for an expert in a given field to actually be able to do this.
Here’s where your understanding, by the way, is breaking down: the difference between practical behavior and valid behavior. Bayesian rationality in particular is highly susceptible to this problem, and it’s one of my main objections to the system in principle: that it fails to parse the process of forming beliefs from the process of confirming truth.
No. This is a deeply wrong view of how science is conducted. When a researcher invokes a previous publication, what they are appealing to is not an authority but rather the body of evidence as provided. No researcher could ever get away with saying, “Dr. Knowsitall states that X is true”—not without providing a citation of a paper where Dr. Knowsitall demonstrated that belief was valid. Authorities often possess such bodies of evidence and can readily convey said information, so it’s easy to understand how this is so confusing for you folks, since it’s a fine nuance that inverts your normal perspectives on how beliefs are to be formed, and more importantly demonstrates an instance where the manner in which one forms beliefs is separated from valid claims of truth.
I’ll say it one last time: trusting someone has valid evidence is NOT the same thing as an appeal to authority, though it is a form of failure in efforts to determine truth.
Appeals to authority are always fallacious.
Here’s what may be tripping you up. Very often it doesn’t even make sense for humans to pay attention to small bits of evidence, because we can’t really process them very effectively. So for most bits of tiny evidence (such as most very weak appeals to authority) often the correct action given our limited processing capability is to simply ignore them. But this doesn’t make them not evidence.
Say, for example, I have a conjecture about all positive integers n, and I check it for every n up to 10^6. If I verify it for 10^6+1, working out how much more certain I should be is really tough, so I only treat the evidence in aggregate. Similarly, if I have a hypothesis that all ravens are black, and I’ve checked it for a thousand ravens, I should be more confident when I check the 1001st raven and find it is black. But actually doing that update is difficult.
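The raven bookkeeping above can be sketched numerically. This is a minimal toy model, not anything from the thread: it assumes (purely for illustration) that the only alternative hypothesis is that 95% of ravens are black, and it is scaled down to a hundred ravens so that floating-point arithmetic stays meaningful.

```python
# Toy version of the raven example. H1: all ravens are black.
# H2 (an illustrative assumption): only 95% of ravens are black.
def posterior_all_black(n_black_ravens, prior=0.5, alt_rate=0.95):
    """P(all ravens black | n observed black ravens and no non-black ones)."""
    likelihood_h1 = 1.0                         # H1 predicts every raven is black
    likelihood_h2 = alt_rate ** n_black_ravens  # shrinks with each observation
    num = prior * likelihood_h1
    return num / (num + (1 - prior) * likelihood_h2)

p_100 = posterior_all_black(100)  # strong evidence in aggregate
p_101 = posterior_all_black(101)  # one more raven moves it only slightly
```

The aggregate of a hundred observations pushes the posterior well past 0.99, while the 101st raven shifts it by a sliver—which is the point about weak individual evidence being easier to handle in aggregate.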
The question then becomes for any given appeal to authority, how reliable is that appeal?
Note that in almost any field there’s going to be some degree of relying on such appeals. In math, for example, there’s a very deep result called the classification of finite simple groups. The proof of the full result is thousands of pages spread across literally hundreds of separate papers. It is possible that at some point some people have really looked at almost the whole thing, but the vast majority of people who use the classification certainly have not. Relying on the classification is essentially an appeal to authority.
That’s a given, and I have said exactly as much repeatedly. Reiterating it as though it were introducing a new concept to me or one that I was having trouble with isn’t going to be an effective tool for the conversation at hand.
So, I’m confused about how you would believe that humans have limited enough processing capacity that this would be an issue but in an earlier thread thought that humans were close enough to being good Bayesians that Aumann’s agreement theorem should apply. In general, the computations involved in an Aumann update from sharing posteriors will generally be much more involved than the computations in updating with a single tiny piece of evidence.
Justify this claim, and then we can begin.
The computation for a normal update is two multiplications and a division. Now look at the computations involved in an explicit proof of Aumann. The standard proof is constructive. You can see that while it largely just involves a lot of addition, multiplication and division, you need to do it carefully for every single hypothesis. See this paper by Scott Aaronson www.scottaaronson.com/papers/agree-econ.ps which gives a more detailed analysis. That paper shows that the standard construction for Aumann is not as efficient as you’d want for some purposes. However, the primary upshot for our purposes is that the time required is roughly exponential in 1/eps, where eps is how close you want the two computationally limited Bayesian agents to agree, and the equivalent issue for a standard multiplication and division is slightly worse than linear in 1/eps.
An important caveat is that I don’t know what happens if you try to do this in a Blum-Shub-Smale model, and in that context, it may be that the difference goes away. BSS machines are confusing and I don’t understand them well enough to figure out how much the protocol gets improved if one has two units with BSS capabilities able to send real numbers. Since humans can’t send exact real numbers or do computations with them in the general case, this is a mathematically important issue but is not terribly important for our purposes.
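For contrast, the “normal update” being priced here can be written out directly. The likelihood numbers below are invented purely for illustration—a source that asserts H 80% of the time when H holds and 30% of the time when it does not.

```python
# A single Bayesian update on one piece of evidence really is just two
# multiplications and a division (plus a complement), via Bayes' theorem
# with the law of total probability in the denominator.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# Illustrative numbers: testimony that is informative but far from perfect.
posterior = bayes_update(0.5, 0.8, 0.3)  # rises above the 0.5 prior
```

Compare this constant-time arithmetic with the per-hypothesis bookkeeping an explicit Aumann-style exchange requires.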
Alright, this is along the lines of what I thought you might say. A couple of points to consider.
1. In general discourse, humans use fuzzy approximations rather than precise statements. These have the effect of simplifying such ‘calculations’.
2. Backwards propagation of any given point of fact within that ‘fuzziness’ prevents the need for recalculation if the specific item is sufficiently trivial—as you noted.
3. None of this directly correlates to the topic of whether appeals to authority are ever legitimate in supporting truth-claims. Please note that I have differentiated between “I believe X” and “I hold X is true.”
I understand that this conflicts with the ‘baseline’ epistemology of Bayesian rationality as used on LessWrong… my assertion as this conversation has progressed has been that this represents a strong failure of Bayesian rationality; the inability to establish claims about the territory as opposed to simply refining the maps.
Your own language continues to perpetuate this problem: “The computation for a normal update”.
I’m not sure I understand your response. 3 is essentially unrelated to my question concerning small updates v. Aumann updates which makes me confused as to what you are trying to say with the three points and whether they are a disagreement with my claim, an argument that my claim is irrelevant, or something else.
Regarding 1, yes, that’s true, and 2 is sort of true, but that’s the same thing that doing things with an order of epsilon does. Our epsilon keeps track of how fuzzy things should be. The point is that for the same degree of fuzziness it is much easier to do a single update on evidence than it is to do an Aumann-type update based on common knowledge of current estimates.
I don’t understand what you mean here. Updates are equivalent to claims about the territory. Refining the map, whether one is or is not a Bayesian, means one is making a more narrow claim about what the territory looks like.
Clarification: by “narrow claim” you mean “allowing for a smaller range of possibilities”.
That is not equivalent to making a specific claim about a manifested instantiation of the territory. It is stating that you believe the map more closely resembles the territory; it is not saying the territory is a certain specific way.
If a Bayesian estimates with high probability (say >.99) that, say, the Earth is around eight light-minutes from the sun, then how is that not saying that the territory is a certain specific way? Do you just mean that a Bayesian can’t be absolutely certain that the territory matches their most likely hypothesis? Most Bayesians, and for that matter most traditional rationalists, would probably consider that a feature rather than a bug of an epistemological system. If you don’t mean this, what do you mean?
I was afraid of this. This is an epistemological barrier: if you express a notion in probabilistic form then you are not holding a specific claim.
I mean that Bayesian nomenclature does not permit certainty. However, I also assert that (naive) certainty exists.
Well, Bayesian nomenclature does permit certainty, just set P=1 or P=0. It isn’t a consequence that the nomenclature doesn’t allow it. It is that good Bayesians don’t ever say that. To use a possibly silly analogy, the language of Catholic theology allows one to talk about Jesus being fully man and not fully divine, but a good Catholic will never assert that Jesus was not fully divine.
In what sense does it exist? In the sense that human brains function with a naive certainty element? I agree that humans aren’t processing marginally probable descriptors. Are you claiming that a philosophical system that works should allow for naive certainty as something it can talk about and think exists?
Granted. That was very poorly (as in, erroneously) worded on my part. I should have said “Bayesian practices”.
I mean naive realism, of course.
Essentially, yes. Although I should emphasize that I mean that in the most marginally sufficient context.
Hmm, but the Bayesians agree with some form of naive realism in general. What they disagree with you on is whether they have any access to that universe aside from in a probabilistic fashion. Alternatively, a Bayesian could even deny naive realism and do almost everything correctly. Naive realism is a stance about ontology. Bayesianism is primarily a stance about epistemology. They do sometimes overlap and can inform each other but they aren’t the same thing.
Indeed.
Ok. Now we’re getting somewhere. So why do you think the Bayesians are wrong that this should be probabilistic in nature?
Universally. Universally probabilistic in nature.
I assert that there are knowable truth claims about the universe—the “territory”—which are absolute in nature. I related one as an example; my knowledge of the nature of my ongoing observation of how many fingers there are on my right hand. (This should not be conflated for knowledge of the number of fingers on my right hand. That is a different question.) I also operate with an epistemological framework which allows knowledge statements to be made and discussed in the naive-realistic sense (in which case the statement “I see five fingers on my hand” is a true statement of how many fingers there really are—a statement not of the nature of my map but of the territory itself.).
Thusly I reserve assertions of truth for claims that are definite and discrete, and differentiate between the model of material reality and the substance of reality itself when making claims. Probabilistic statements are always models, by category; no matter how closely they resemble the territory claims about the model are never claims in specific about the territory.
Ah, so to phrase this as a Bayesian might, you assert that your awareness of your own cognition is perfectly accurate?
… For any given specific event of which I am cognizant and aware, yes. With the important caveat that this perfection is not a heritable property beyond that specific cognizance event.
Ok. I’m curious if you have ever been on any form of mind-altering substance. Here’s a related anecdote that may help:
When I was a teenager I had to get my wisdom teeth taken out. For a pain killer I was put on percocet or some variant thereof. Since I couldn’t speak, I had to communicate with people by writing things down. After I had recovered somewhat, I looked at what I had been writing down to my family. A large part apparently focused on the philosophical ramifications of being aware that my mind was behaving in a highly fractured fashion and incoherent ramblings about what this apparently did to my sense of self. I seemed particularly concerned with the fact that I could tell that my memories were behaving in a fragmented fashion but that this had not altered my intellectual sense of self as a continuing entity. Other remarks I made apparently were substantially less coherent than even that, but it was clear from what I had written that I had considered them to be deep philosophical thoughts and descriptions of my mind state.
In this context, I was cognizant and aware of my mental state. To say that my perception was accurate is pretty far off.
To use a different example, do you ever have thoughts that don’t quite make sense when you are falling asleep or waking up? Does your cognition seem perfectly accurate then? Or, when you make an arithmetic mistake, are you aware that you’ve misadded before you find the mistake?
Hence: “With the important caveat that this perfection is not a heritable property beyond that specific cognizance event.”
Was it to you or to someone else that I stressed that recollections are not a part of my claim?
“For any given specific event of which I am cognizant and aware, yes.” -- also, again note the caveat of non-heritability. Whether that cognizant-awareness is of inscrutability is irrelevant to the knowledge of that specifically ongoing instance of a cognizant-awareness event.
I am going to ask you to acknowledge how irrelevant this ‘example’ is to my claim, as a means of gauging where you are on understanding it.
Right. For those specific cognizance events I was pretty damn sure that I knew how my mind was functioning. And the reactions I got to percocet are on the milder end of what mind-altering substances can do.
This makes me wonder if your claim is intended to be non-falsifiable. Is there anything that would convince you that you aren’t as aware as you think you are?
Potentially quite far since this seems to be in the same category if I’m understanding you correctly. The cognitive awareness that I’ve added correctly is pretty basic, but one can screw up pretty easily and still feel like one is completely correct.
Whether you were right or wrong about how your mind was functioning is irrelevant to the fact that you were aware of what it was that you were aware of. How accurate your beliefs were about your internal functionings is irrelevant to how accurate your beliefs were about what it was you were, at that instant, currently believing. These are fundamentally separate categories.
This is why I used an external example rather than internal, initially, by the way: the deeply recursive nature of where this dialogue is going only serves as a distraction from what I am trying to assert.
I haven’t made any assertions about how aware I believe I or anyone else is. I have made an assertion about how valid any given belief is regarding the specific, individual, ongoing cognition of the self-same specific, ongoing, individual cognition-event. This is why I have stressed the non-heritability.
The claim, in this case, is not so much non-falsifiable as it is tautological.
That it is basic does not mean that it is of the same category. Awareness of past mental states is not equivalent to awareness of ongoing mental states. This is why I specifically restricted the statement to ongoing events. I even previously stressed that recollections have nothing to do with my claim.
I’ll state this with the necessary recursion to demonstrate further why I would prefer we not continue using any cognition event not deriving from an external source: one would be correct to feel correct about feeling correct; but that does not mean that one would be correct about the feeling-correct that he is correct to feel correct about feeling-correct.
Now I get it!
I think I get it but I’m not sure. Can you translate it for the rest of us? Or is this sarcasm?
See here. And I may be honestly mistaken, but I’m not kidding.
It certainly could easily go either way.
Again, can you explain more clearly what you mean by this?
… that is the simple/clear explanation.
A more technical explanation would be “the ongoing cognizance of a given specifically instantiated qualia”.
That’s what makes the physicist an authority. If something is a reliable source of information “in practice” then it is a reliable source of information. Obviously if the physicist turns out not to know what she is talking about then beliefs based on that authority’s testimony turn out to be wrong.
The validity of a method is its reliability.
The paper where Dr. Knowsitall demonstrated that belief is simply his testimony regarding what happened in a particular experiment. It is routine for that researcher not to have personally duplicated prior experiments before building on them. The publication of experimental procedures is of course crucial for maintaining high standards of reliability and trustworthiness in the sciences. But ultimately no one can check the work of all scientists and therefore trust is necessary.
Here is an argument from authority for you: This idea of appeals to authority being legitimate isn’t some weird Less Wrong, Bayesian idea. It is standard, rudimentary logic. You don’t know what you’re talking about.
P(X is true | someone who I consider well-educated in the field of interest stated X is true) > P(X is true)
Restating your argument in the form of a Bayesian probability statement isn’t going to increase its validity.
P(X|{affirming statement by authority in subject of X}) == P(X).
It has no bearing, and in fact is demonstrative of a broken heuristic. I’ll try to give another example to explain why. An expert’s expertise in a field is only as valid as his accuracy in the field. The probability of his expertise being, well, relevant is dependent upon his ability to make valid statements. Assigning probability to the validity of a statement by an expert, thusly, on the fact that the expert has made a statement is putting the cart before the horse. It’s like saying that because a coin has come up heads every time you’ve flipped it before it’s now likely to come up heads this time.
I’m puzzled and wonder if I’m missing your point because this update makes perfect sense to me. Let’s say that I start for a prior for whether the coin is fair, P(fair), a prior for whether it is biased towards heads, P(headbiased), and a prior for whether it is biased towards tails P(tailbiased). My updated probability for P(headbiased) increases if I get lots of heads on the coin and few or no tails. It’ll probably help if we understand each other on this simpler example before moving on to appeals to authority.
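The coin example above can be written out explicitly. This is a minimal sketch: the uniform priors and the 0.9/0.1 head rates for the biased hypotheses are illustrative assumptions, not anything stated in the thread.

```python
# Three hypotheses about a coin, updated sequentially on observed flips.
def update_on_flips(priors, likelihoods, flips):
    """priors: hypothesis -> P(h); likelihoods: hypothesis -> P(heads | h);
    flips: string of 'H'/'T'. Returns the posterior distribution."""
    posts = dict(priors)
    for flip in flips:
        for h in posts:
            p_heads = likelihoods[h]
            posts[h] *= p_heads if flip == "H" else (1 - p_heads)
        total = sum(posts.values())              # renormalize after each flip
        posts = {h: p / total for h, p in posts.items()}
    return posts

priors = {"fair": 1/3, "headbiased": 1/3, "tailbiased": 1/3}
likelihoods = {"fair": 0.5, "headbiased": 0.9, "tailbiased": 0.1}
posterior = update_on_flips(priors, likelihoods, "HHHHHH")
```

After six straight heads, P(headbiased) has climbed well above P(fair), which in turn dwarfs P(tailbiased)—exactly the update described in the comment above.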
The fact of a person’s belief is evidence weighted according to the reliability of that person’s mechanisms for establishing the belief. To refuse to update on another person’s belief means supposing that it is uncorrelated with reality.
To fail to allow for others to be mistaken when weighting your own beliefs is to risk forming false beliefs yourself. Furthermore, establishing the reliability of a person’s mechanisms for establishing a belief is necessary for any given specific claim before expertise on said claim can be validated. The process of establishing that expertise then becomes the argument, rather than the mere assertion of the expert.
We use trust systems—trusting the word of experts without investigation—not because it is a valid practice but because it is a necessary failing of the human condition that we lack the time and energy to properly investigate every possible claim.
You must of course allow for the possibility of the other person being mistaken, otherwise you would simply substitute their probability estimate for your own. But to fail to update on the fact of someone’s belief prior to obtaining further information on the reliability of their mechanisms for determining the truth means defaulting to an assumption of zero reliability.
One should always assign zero reliability to any statement in and of itself, at which point it is the reliability of said mechanisms which is the argument, rather than the assertion of the individual himself. I believe I stated something very much like this already.
-- To rephrase this: it is not enough that Percival the Position-Holder tell me that Elias the Expert believes X. Elias the Expert must demonstrate to me that his expertise in X is valid.
If you have no evidence that Elias the Expert has any legitimate expertise, then you can reasonably weight his belief no more heavily than any random person holding the same belief.
If you know that he is an expert in a legitimate field that has a track record for producing true information, and he has trustworthy accreditation as an expert, you have considerably more evidence of his expertise, so you should weight his belief more heavily, even if you do not know the mechanisms he used to establish his belief.
Suppose that a physicist tells you that black holes lose mass due to something called Hawking radiation, and you have never heard this before. Prior to hearing any explanation of the mechanism or how the conclusion was reached, you should update your probability that black holes lose mass to some form of radiation, because it is much more likely that the physicist would come to that conclusion if there were evidence in favor of it than if there were not. You know enough about physicists to know that their beliefs about the mechanics of reality are correlated with fact.
No. What you should do is ask for a justification of the belief. If you do not have the resources available to you to do so, you can fail-over to the trust system and simply accept the physicist’s statement unexamined—but utilization of the trust-system is an admission of failure to have justified beliefs.
I know enough about physicists, actually, to know that if they cannot relate a mechanism for a given phenomenon and a justification of said phenomenon upon inquiry that I have no reason to accept their assertions as true, as opposed to speculation. If I am to accept a given statement on any level higher than “I trust so”—that is, if I am to assign a high enough probability to the claim that I would claim myself that it were true—then I cannot rely upon the trust system but rather must have a justification of belief.
Justification of belief cannot be “A person who usually is right in this field claims this is so” but can be “A person whom I have reason to believe would have evidence on this matter has related to me his assessment of said evidence.”
The difference here is between having a buddy who is a football buff who tells you what the Sportington Sports beat the Homeland Highlanders by last night—even though you don’t know whether he had access to a means of having said information—as opposed to the friend you know watched the game who tells you the scores.
If you want to increase the reliability of your probability estimate, you should ask for a justification. But if you do not increase your probability estimate contingent on the physicist’s claim until you receive information on how he established that belief, then you are mistreating evidence. You don’t treat his claim as evidence in addition to the evidence on which it was conditioned; you treat it as evidence of the evidence on which it was conditioned. Once you know the physicist’s belief, you cannot expect to raise your confidence in that belief upon receiving information on how he came to that conclusion. You should assign weight to his statement according to how much evidence you would expect a physicist in his position to have if he were making such a statement, and then when you learn what evidence he has, you shift upwards or downwards depending on how the evidence compares to your expectation. If you revised upwards on the basis of the physicist’s say-so, and then revised further upwards based on his having about as much evidence as you would expect, that would be double-counting evidence; but if you do not revise upwards based on the physicist’s claim in the first place, that would be assuming zero correlation of his statement with reality.
You do not need the person to relate their assessment of the evidence to revise your belief upward based on their statement, you only need to believe that it is more likely that they would make the claim if it were true than if it were not.
Anything that is more likely if a belief is true than if it is false is evidence which should increase your probability estimate of that belief. Have you read An Intuitive Explanation of Bayes’ Theorem, or any of the other explanations of Bayesian reasoning on this site?
If you have a buddy who is a football buff who tells you that the Sportington Sports beat the Homeland Highlanders last night, then you should treat this as evidence that the Sportington Sports won, weighted according to your estimate of how likely his claim is to correlate with reality. If you know that he watched the game, you’re justified in assuming a very high correlation with reality (although you also have to condition your estimate on information aside from whether he is likely to know, such as how likely he is to lie.) If you do not know that he watched the game last night, you will have a different estimate of the strength of his claim’s correlation with reality.
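To make the weighting concrete, here is a minimal sketch of the update in both cases. Every probability below is an invented illustration, not a measured value:

```python
# Bayes' rule: P(won | he says they won) from a prior and two likelihoods.
# All numbers here are illustrative assumptions.

def update(prior, p_say_given_won, p_say_given_lost):
    """Posterior probability the team won, given the friend's claim."""
    num = prior * p_say_given_won
    return num / (num + (1 - prior) * p_say_given_lost)

prior = 0.5  # no idea who won before the call

# He watched the game: his claim is very strongly correlated with reality.
watched = update(prior, 0.99, 0.02)
# You don't know whether he could know: a much weaker assumed correlation.
unknown = update(prior, 0.70, 0.40)

print(watched)  # ≈ 0.980
print(unknown)  # ≈ 0.636
```

In both cases the bare assertion moves the estimate; what differs is only how far, which is set by how strongly you expect the claim to track the result.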
I have read them repeatedly, and explained the concepts to others on multiple occasions.
Not until such time as you have a reason to believe that he has a justification for his belief beyond mere opinion. Otherwise, it is a mere assertion regardless of the source—it cannot have a correlation to reality if there is no vehicle through which the information he claims to have reached him other than his own imagination, however accurate that imagination might be.
Which requires a reason to believe that to be the case. Which in turn requires that you have a means of corroborating their claim in some manner; the least-sufficient of which being that they can relate observations that correlate to their claim, in the case of experts that is.
A probability estimate without reliability is no estimate. Revising beliefs based on unreliable information is unsound. Experts’ claims which cannot be corroborated are unsound information, and should have no weighting on your estimate of beliefs solely based on their source.
If an expert’s claims are frequently true, then it can become habitual to trust them without examination. However, trusting individuals rather than examining statements is an example of a necessary but broken heuristic. We find the risk of being wrong in such situations acceptable because the expected utility cost of being wrong in any given situation, as an aggregate, is far less than the expected utility cost of having to actually investigate all such claims.
Further, the more such claims fall in line with our own priors—that is, the less ‘extraordinary’ the claims appear to be to us—the more likely we are not to require proper evidence.
The trouble is, this is a failed system. While it might be perfectly rational—instrumentally—it is not a means of properly arriving at true beliefs.
I want to take this opportunity to once again note that what I’m describing in all of this is proper argumentation, not proper instrumentality. There is a difference between the two; and Eliezer’s many works are, as a whole, targeted at instrumental rationality—as is this site itself, in general. Instrumental rationality does not always concern itself with what is true as opposed to what is practically believable. It finds the above-described risk of variance in belief from truth an acceptable risk, when asserting beliefs.
This is an area where “Bayesian rationality” is insufficient—it fails to reliably distinguish between “what I believe” and “what I can confirm is true”. It does this for a number of reasons, one of which being a foundational variance between what Bayesians assert a belief-network is measuring when it discusses probabilities as opposed to what frequentists assert is being measured when they discuss probabilities.
I do not fall totally in line with “Bayesian rationality” in this, and various other, topics, for exactly this reason.
What? No they aren’t. They are massively biased towards epistemic rationality. He has written a few posts on instrumental rationality but by and large they tend to be unremarkable. It’s the bulk of epistemic rationality posts that he is known for.
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
It ought to prevent you from making errors like this:
Assuming he was able to explain them correctly, which I think we have a lot of reason to doubt.
If you meant those to be topical you’ve got your givens inverted.
I’m sorry, but no matter how many times you reiterate it, until such time as you can provide a valid justification for the assertion that appeals to authority are not always fallacious, each iteration will be no more correct than the last time.
Appeals to authority are always fallacious. Some things that look like appeals to authority to casual observation upon closer examination turn out not to be. Such as the reference to the works of an authority-figure.
No. Actually I don’t. Those are base priors that are important when evaluating just how to update when given the new evidence “Elias asserts X”. Have you not just been telling us that you teach others how this mathematics works?
Why are you saying that? I didn’t just make the assertion again. I pointed to some of the Bayesian reasoning that it translates to.
If you have values for p(Elias asserts X | X is true) and p(Elias asserts X | X is false) that are not equal to each other and you gain information “Elias asserts X” your estimation for X is true must change or your thinking is just wrong. It is simple mathematics. That being the case supplying the information “Elias asserts X” to someone who already has information about Elias’s expertise is not fallacious. It is supplying information to them that should change their mind.
The above applies regardless of whether Elias has ever written any works on the subject or in any way supplied arguments. It applies even if Elias himself has no idea why he has the intuition “X”. It applies if Elias is a flipping black box that spits out statements. If both you and the person you are speaking with have reason to believe p(Elias asserts X | X is true) > p(Elias asserts X | X is false) then supplying “Elias asserts X” as an argument in favour of X is not fallacious.
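The black-box point can be made numerically concrete. Suppose all you have is a wholly invented track record of the box’s past claims; treating its observed accuracy as the two likelihoods is itself a simplifying assumption:

```python
# A "black box" that emits claims. Nothing is known about its internals,
# only a hypothetical scored track record of eight past claims.
past_claims = [("A", True), ("B", True), ("C", False), ("D", True),
               ("E", True), ("F", True), ("G", False), ("H", True)]

# Observed frequency with which the box's claims turned out true.
accuracy = sum(truth for _, truth in past_claims) / len(past_claims)  # 0.75

# Treat a new claim "X" from the box as evidence, starting from a 50% prior
# and using the track record as the likelihoods.
prior = 0.5
p_x = (prior * accuracy) / (prior * accuracy + (1 - prior) * (1 - accuracy))
print(p_x)  # 0.75 — the claim moves the estimate with no mechanism known
```

Nothing about the box’s internals appears anywhere in the calculation; only the correlation between its output and the truth does.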
Alright, then, but you’re not going to like the answer: the probabilities in both cases are equal, barring some other influencing factor, such as Elias-the-Expert having justification for his assertion, or a direct means of observing X.
Unfortunately, this tells us exactly nothing for our current discussion.
This is the Blind Oracle argument. You are not going to find it persuasive.
It applies if Elias is a coin that flips over and over again and has come up heads a hundred times in a row and tails twenty times before that, therefore allowing us to assert that the probability that coin will come up heads is five times greater than the probability it will come up tails.
This, of course, is an erroneous conclusion, and represents a modal failure of the use of Bayesian belief-networks as assertions of matters of truth as opposed to merely being inductive beliefs.
And before you attempt to assert that fundamentally no truths are “knowable”, and that all we have are “beliefs of varying degrees of confidence”, let me just say that this, too, is a modal failure. But a subtle one: it is an assertion that because all we have is the map, there is no territory.
That is false—and there is a corollary as a result of that fact: the territory has a measurable impact on the map. And always will.
Is there anyone else here who would have predicted that I was about to say anything remotely like this? Why on earth would I be about to do that? It doesn’t seem relevant to the topic at hand, necessary for appeals to experts to be a source of evidence, or even particularly coherent as a philosophy.
I am only slightly more likely to have replied with that argument than a monkey would have been if it pressed random letters on a keyboard—and then primarily because it was, if nothing else, grammatically well formed.
The patient is very, very confused. I think that this post allows us to finally offer the diagnosis: he is qualitatively confused.
Quote:
Attempts to surgically remove this malignant belief from the patient have commenced.
Adding to wedrifid’s comment: Logos01, you seem to be committing a fallacy of gray here. Saying that we don’t have absolute certainty is not at all the same as saying that there is no territory; that is assuredly not what Bayesian epistemology is about.
Objecting to one, actually.
Not what it should be about, no. But it most assuredly does color how Bayesian beliefs are formed.
No, wedrifid is not committing one, you are. Here’s why:
He is not arguing that reality is subjective or anything like that. See his comment. On the other hand, your argument seems to be that we need to have some kind of surefire knowledge in epistemology because Bayesian probabilities are insufficient. Why does an epistemology with a lack of absolute certainty mean that said epistemology is not good enough?
What have you read or seen that makes you think this is the case?
I wonder if the word “belief” is causing problems here. Everyone in this thread is using the term to mean “statement or piece of knowledge to which you assign some level of truth value.” Saying “I believe X” is the same as saying “I think X is probably true.” Is this also how you’re using the term?
Without commenting on the specifics here, I object to this formulation because it’s possible both or neither are committing (particular fallacy), and it’s even more likely that of two parties, each is committing a different fallacy.
I didn’t mean to assert that it was an exclusive or, but I see how my wording implies that. Point taken and I’ll try to be more precise in the future.
The formulation is fine. It is just two independent claims. It does not mean “wedrifid is not making an error because you are”.
The subject isn’t “an error”, it’s “the fallacy of gray”.
I agree “No, wedrifid is not committing an error, you are,” hardly implies “wedrifid is not making an error because you are”.
“No, wedrifid is not committing the fallacy of gray, you are,” much more implies “wedrifid is not committing the fallacy of gray because you are” when wedrifid’s and Logos01′s statements are in direct conflict.
Why on earth does it matter that I referred to the general case? We’re discussing the implication of the general formulation which I hope you don’t consider to be a special case that only applies to “The Fallacy of The Grey”. But if we are going to be sticklers then we should use the actual formulation you object to:
Not especially. This sort of thing is going to said most often regarding statements in direct conflict. There is no more relationship implied between the two than can be expected for any two claims being mentioned in the same sentence or paragraph.
The fact that it would be so easy to write “because” but they didn’t is also evidence against any assertion of a link. There is a limit to how much you can blame a writer for the theoretical possibility that a reader could make a blatant error of comprehension.
I didn’t really mean “because” in that sense, for two reasons. The first is that it is the observer’s knowledge that is caused, not the party’s error, and the other is that the causation implied goes the other way. Not “because” as if the first party’s non-error caused the second’s error, but that the observer can tell that the second party is in error because the observer can see that the first isn’t committing the particular error.
Not special, but there is a sliding scale.
Compare:
“No, wedrifid is not committing the Prosecutor’s fallacy, you are.” Whether or not one party is committing this fallacy has basically nothing to do with whether or not the other is. So I interpret this statement to probably mean: “You are wrong when you claim that wedrifid is committing the Prosecutor’s fallacy. Also, you are committing the Prosecutor’s fallacy.”
“No, wedrifid is not reversing cause and effect, you are.” Knowing that one party is not reversing cause and effect is enough to know someone accusing that party of doing so is likely doing so him or herself! So I interpret this statement to probably mean: “Because I see that wedrifid is not reversing cause and effect, I conclude that you are.”
The fallacy of gray is in between the two above examples.
“Fallacy of The Grey” returns two google hits referring to the fallacy of gray, one from this site (and zero hits for “fallacy of the gray”).
This is actually the Fallacy of The Grey.
I didn’t say he was.
No. I was saying that Bayesian probabilistic-belief methodologies are effective at generating maps but they say almost nothing about how those maps correlate to the territory. And that it is, basically, possible to make those assertions. The practices are not the same, and that is the key difference.
It is fundamental to the nature of Bayesian belief-networks that they always assert statements in the form of probabilities. It is impossible to state a Bayesian belief except in the form of a probability.
From this there is a necessary conclusion.
No. There is a difference categorically between the position, “I believe X is probably true” and “I believe X is true.”
What does this mean?
Right. So why do you think this is insufficient for making maps that correlate to the territory? What assertions do you want to make about the territory that are not captured by this model?
Right, but on LW “I believe X” is generally meant as the former, not the latter. This is probably part of the reason for all of the confusion and disagreement in this thread.
What? Of course Elias has some reason for believing what he believes. “Expert” doesn’t mean “someone who magically just knows stuff”. Somewhere along the lines the operation of physics has resulted in the bunch of particles called “Elias” to be configured in such a way as utterances by Elias about things like X are more likely to be true than false. This means that p(Elias asserts X | X is true) and p(Elias asserts X | X is false) are most certainly not equal. Claiming that they must be equal is just really peculiar.
This isn’t to do with blind oracles. It’s to do with trivial application of probability or rudimentary logic.
In your interactions with people here I haven’t observed p(Logos01 is persuaded by X | X is sound reasoning) to be especially high. As such I cannot be expected to consider “Logos01 is not persuaded by something” to give much information at all about the soundness of a claim.
Unless those reasons are justified—which we cannot know without knowing them—they cannot be held to be justifiable statements.
This is tautological.
Not at all. You simply aren’t grasping why it is so. This is because you are thinking in terms of predictions and not in terms of concrete instances. To you, these are one-and-the-same, as you are used to thinking in the Bayesian probabilistic-belief manner.
I am telling you that this is an instance where that manner is flawed.
What you hold to be sound reasoning and what actually is sound reasoning are not equivalent.
If I had meant to imply that conclusion I would have phrased it so.
If you know that your friend more often makes statements such as this when they are true than when they are false, then you know that his claim is relevant evidence, so you should adjust your confidence up. If he reliably either watches the game, or finds out the result by calling a friend or checking online, and you have only known him to make declarations about which team won a game when he knows which team won, then you have reason to believe that his statement is strongly correlated with reality, even if you don’t know the mechanism by which he came to decide to say that the Sportington Sports won.
If you happen to know that your friend has just gotten out of a locked room with no television, phone reception or internet access where he spent the last couple of days, then you should assume an extremely low correlation of his statement with reality. But if you do not know the mechanism, you must weight his statement according to the strength that you expect his mechanism for establishing correlation with the truth has.
There is a permanent object outside my window. You do not know what it is, and if you try to assign probabilities to all the things it could be, you will assign a very low probability to the correct object. You should assign pretty high confidence that I know what the object outside my window is, so if I tell you, then you can assign much higher probability to that object than before I told you, without my having to tell you why I know. You have reason to have a pretty high confidence in the belief that I am an authority on what is outside my window, and that I have reliable mechanisms for establishing it.
If I tell you what is outside my window, you will probably guess that the most likely mechanism by which I found out was by looking at it, so that will dominate your assessment of my statement’s correlation with the truth (along with an adjustment for the possibility that I would lie.) If I tell you that I am blind, type with a braille keyboard, and have a voice synthesizer for reading text to me online, and I know what is outside my window because someone told me, then you should adjust your probability that my claim of what is outside my window is correct downwards, both on increased probability that I am being dishonest, and on the decreased reliability of my mechanism (I could have been lied to.) If I tell you that I am blind and psychic fairies told me what is outside my window, you should adjust your probability that my claim is correlated with reality down much further.
The “trust mechanism,” as you call it, is not a device that exists separate from issues of evidence and probability. It is one of the most common ways that we reason about probabilities, basing our confidence in others’ statements on what we know about their likely mechanisms and motives.
You can’t confirm that anything is true with absolute certainty, you can only be more or less confident. If your belief is not conditioned on evidence, you’re doing something wrong, but there is no point where a “mere belief” transitions into confirmed knowledge. Your probability estimates go up and down based on how much evidence you have, and some evidence is much stronger than others, but there is no set of evidence that “counts for actually knowing things” separate from that which doesn’t.
This is like claiming that because a coin came up heads twenty times and tails ten times it is 2x more likely to come up heads this time. Absent some other reason to justify the correlation between your friend’s accuracy and the current instance, such beliefs are invalid.
Yup. I said as much.
Yes, actually, it is a separate mechanism.
Yes, yes. That is the Bayesian standard statement. I’m not persuaded by it. It is, by the way, a foundational error to assert that absolute knowledge is the only form of knowledge. This is one of my major objections to standard Bayesian doctrine in general; the notion that there is no such thing as knowledge but only beliefs of varying confidence.
Bayesian probability assessments work very well for making predictions and modeling unknowns, but that’s just not sufficient to the question of what constitutes knowledge, what is known, and/or what is true.
And with that, I’m done here. This conversation’s gotten boring, to be quite frank, and I’m tired of having people essentially reiterate the same claims over and over at me from multiple angles. I’ve heard it before, and it’s no more convincing now than it was previously.
This is frustrating for me as well, and you can quit if you want, but I’m going to make one more point which I don’t think will be a reiteration of something you’ve heard previously.
Suppose that you have a circle of friends who you talk to regularly, and a person uses some sort of threat to force you to write down every declarative statement they make in a journal, whether they provided justifications or not, until you collect ten thousand of them.
Now suppose that they have a way of testing the truth of these statements with very high confidence. They make a credible threat that you must correctly estimate the number of the statements in the journal that are true, with a small margin of error, or they will blow up New York. If you simply file a large number of their statements under “trust mechanism,” and fail to assign a probability which will allow you to guess what proportion are right or wrong, millions of people will die. There is an actual right answer which will save those people’s lives, and you want to maximize your chances of getting it. What do you do?
Let’s replace the journal with a log of a trillion statements. You have a computer that can add the figures up quickly, and you still have to get very close to the right number to save millions of lives. Do you want the computer to file statements under “trust mechanism” or “confirmed knowledge” so that it can better determine the correct number of correct statements, or would you rather each statement be tagged with an appropriate probability, so that it can add them up to determine what number of statements it expects to be true?
… appeal to consequences. Well, that is new in this conversation. It’s not very constructive though.
Also, you’re conflating predictions with instantiations.
That being said:
I would, without access to said test myself, be forced to resign myself to the destruction of New York.
That’s not what a trust-system is. It is, simply put, the practice of trusting that something is so because the expected utility-cost of being wrong is lower than the expected utility-cost of investigating a given claim. This practice is a foible; a failing—one that is engaged in out of necessity because humans have a limit to their available cognitive resources.
What one wants is irrelevant. What has occurred is relevant. If you haven’t investigated a given claim directly, then you’ve got nothing but whatever available trust-systems are at hand to operate on.
That doesn’t make them valid claims.
Finally: you’re introducing another unlike-variable by abstracting from individual instances to averaged aggregate.
TL;DR—your post is not-even-wrong. On many points.
If your conception of rationality leads to worse consequences than doing something differently, you should do something differently. Do you think it’s impossible to do better than resigning yourself to the destruction of New York?
The utility cost of being wrong can fluctuate. Your life may hinge tomorrow on a piece of information you did not consider investigating today. If you find yourself in a situation where you must make an important decision hinging on little information, you can do no better than your best estimate, but if you decide that you are not justified in holding forth an estimate at all, you will have rationalized yourself into helplessness.
Humans have bounded rationality. Computationally optimized Jupiter Brains have bounded rationality. Nothing can have unlimited cognitive resources in this universe, but with high levels of computational power and effective weighting of evidence it is possible to know how much confidence you should have based on any given amount of information.
You can get the expected number of true statements just by adding the probabilities of truth of each statement. It’s like judging how many heads you should expect to get in a series of coin flips: 0.5 + 0.5 + 0.5 + … The same formula works even if the probabilities are not all the same.
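The arithmetic is just linearity of expectation. A sketch with made-up probabilities for a ten-thousand-statement journal:

```python
import random

# Expected number of true statements = sum of per-statement probabilities
# (linearity of expectation). The probability buckets below are invented.
probs = [0.99] * 4000 + [0.9] * 3000 + [0.6] * 2000 + [0.5] * 1000
expected_true = sum(probs)
print(round(expected_true))  # 8360

# Monte Carlo sanity check: one simulated journal lands near the estimate.
random.seed(0)
trial = sum(random.random() < p for p in probs)
print(trial)  # a single simulated count, typically within a few dozen of 8360
```

Filing statements into two bins labelled “trusted” and “confirmed” throws away exactly the per-statement weights this sum needs.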
Apparently not.
If you don’t assume that the coin is fair, then certainly a coin coming up heads twenty times and tails ten times is evidence in favor of it being more likely to come up heads next time, because it’s evidence that it’s weighted so that it favours heads.
Similarly if a person is weighted so that they favor truth, their claims are evidence in favour of that truth.
Beliefs like trusting the trustworthy and not trusting the untrustworthy, whether you consider them “valid” beliefs or not, are likely to lead one to make correct predictions about the state of the world. So such beliefs are valid in the only way that matters for epistemic and instrumental rationality both.
Or you could make a direct observation, (such as by weighing it with a fine tool, or placing it on a balancing tool) and know.
Not unless they have an ability to provide their justification for a given instantiation. It would be sufficient for trusting them if you are not concerned with what is true as opposed to what is “likely true”. There’s a difference between these, categorically: one is an affirmation—the other is a belief.
Incorrect. And we are now as far as this conversation is going to go. You hold to Bayesian rationality as axiomatically true of rationality. I do not.
And in the absence of the ability to make direct observations? If there are two eye-witness testimonies to a crime, and one of the eye-witnesses is a notorious liar with every incentive to lie, and one of them is famous for his honesty and has no incentive to lie—which way would you have your judgment lean?
SPOCK: “If I let go of a hammer on a planet that has a positive gravity, I need not see it fall to know that it has in fact fallen. [...] Gentlemen, human beings have characteristics just as inanimate objects do. It is impossible for Captain Kirk to act out of panic or malice. It is not his nature.”
I very much like this quote, because it was one of the first times when I saw determinism, in the sense of predictability, being ennobling.
I have already stated that witness testimonials are valid for weighting beliefs. In the somewhere-parent topic of authorities; this is the equivalent of referencing the work of an authority on a topic.
If 30 coin flips have occurred with the ratio that far off, I should move my probability estimate slightly towards the coin being weighted to one side. If, for example, the coin had instead come up heads on all 30 flips, I presume you would update in the direction of the coin being weighted to come down on one side. It won’t be 2x as likely, because the hypothesis that the coin is actually fair started with a very large prior. Moreover, the easy ways to make a coin weighted make it always come out on one side. But the essential Bayesian update in this context is to put a higher probability on the coin being weighted to be more likely to come up heads than tails.
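The update being described here is the standard Beta-Binomial one. In the sketch below the prior strengths are invented purely to show how a strong prior on fairness damps the same data:

```python
# Beta-Binomial update for a possibly weighted coin.
# With prior Beta(a, b), after h heads and t tails the posterior is
# Beta(a + h, b + t), so the predictive probability of heads on the
# next flip is (a + h) / (a + h + b + t). Prior strengths are made up.

def predictive_heads(a, b, heads, tails):
    return (a + heads) / (a + heads + b + tails)

# Weak prior: the next-flip estimate tracks the observed 20/10 split.
print(predictive_heads(1, 1, 20, 10))      # 0.65625
# Strong prior that the coin is fair: the same data barely moves it.
print(predictive_heads(500, 500, 20, 10))  # ≈ 0.505
```

Both cases shift toward heads; the size of the shift, not its direction, is what the large prior on fairness controls.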
“Bayesian probability assessments work very well for making predictions and modeling unknowns, but that’s just not sufficient to the question of what constitutes knowledge, what is known, and/or what is true.”
Bayesian probability assessments are an extremely poor tool for assertions of truth.