For example if you were to ring your friend and ask him for the football results, you would generally update your degree of belief in the fact that your team won if he told you so (unless you had a particular reason to mistrust him).
This presupposes that you had reason to believe that he had a means of having that information, in which case you are weighting your beliefs based on your assessment of the likelihood of him intentionally deceiving you and your assessment of the likelihood of him having correctly observed the information you are requesting.
If you take it as a given that these things are true then what you are requesting is a testimonial as to what he witnessed, and treating him as a reliable witness, because you have supporting reasons as to why this is so. You are not, on the other hand, treating him as a blind oracle who simply guesses correctly, if you are anything remotely resembling a functionally capable instrumental rationalist.
That is a trivial example, because you apparently are in need of one to gain understanding.
This, sir, is a decidedly disingenuous statement. I have noted it, and it has lowered my opinion of you.
If someone quotes Yudkowsky as saying something, depending on how impressed you are with him as a thinker you may update on his mere opinions (indeed, rationality may demand that you do so)
I want to take an aside here to note that you just stated that it is possible for rationality to “demand” that you accept the opinions of others as facts based merely on the reputation of that person before you even comprehend what the basis of that opinion is.
I cannot under any circumstances now known to me endorse such a definition of “rationality”.
I suggest that you attempt to (articulately) refute Yudkowsky’s argument, thereby screening off his authority.
I’ve done that recently enough that invocations of Eliezer’s name into a conversation are unimpressive. If his arguments are valid, they are valid. If they are not, I have no hesitation to say they are not.
His writings are here, waiting for you to read them!
I want you to understand that in making this statement you have caused me to reassess you as an individual under the effects of a ‘cult of personality’ effect.
A given statement is either true or not true. Authorities on a given topic can be mistaken; and as such no appeal to an authority can be, absent any other actual supporting materials, treated as a rational motive for belief. In all such cases, it is those supporting materials which comprise the nature of the argument.
And:
… it doesn’t follow that because authorities are unreliable, their assertions without supporting evidence should not adjust your weighting in belief on any specific claim?
You now state:
If you take it as a given that these things are true then what you are requesting is a testimonial as to what he witnessed, and treating him as a reliable witness, because you have supporting reasons as to why this is so. You are not, on the other hand, treating him as a blind oracle who simply guesses correctly, if you are anything remotely resembling a functionally capable instrumental rationalist.
This is a different claim to that contained in the former two quotes. I do not disagree with this claim, but it does not support your earlier claims, which are the claims that I was criticising.
I do not know of any human being who I would regard as “a blind oracle who simply guesses correctly”. The existence of such an oracle is extremely improbable. On the other hand it is very normal to believe that there are “supporting reasons” for why someone’s claims should change your degree of belief in something. For example if Yudkowsky has made 100 claims that were backed up by sound argument and evidence, that leads me to believe that the 101st claim he makes is more likely to be true than false, therefore increasing my degree of belief somewhat (even if only a little, sometimes) in that claim at the expense of competing propositions even before I read the argument.
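A minimal sketch of that track-record reasoning, under a simplifying model of my own (each claim treated as an independent draw from an unknown reliability rate, with a uniform prior over that rate; the numbers are illustrative):

```python
# Hypothetical track-record model: each of an author's claims is
# correct with some unknown reliability r; give r a uniform
# Beta(1, 1) prior and update on 100 correct claims.
correct, total = 100, 100
alpha, beta = 1 + correct, 1 + (total - correct)  # Beta posterior over r
p_next = alpha / (alpha + beta)                   # Laplace's rule of succession
print(f"P(claim 101 is correct) ~ {p_next:.3f}")  # ~0.990
```

The exact figure matters less than the shape: a long unbroken track record pushes the predictive probability for the next claim well above one half before you have examined its argument.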
It is also possible for someone’s claims to be reliably anti-correlated with the truth, although that would be a little strange. For example someone who generally holds accurate beliefs, but is a practical joker and always lies to you, might cause you to (insofar as you are rational) decrease your degree of belief in any proposition that he asserts, unless you have particular reason to believe that he is telling the truth on this one occasion.
You may have no particular regard for someone’s opinion. Nonetheless, the fact that a group of human beings regard that person as smart; and that he writes cogently; and even the mere fact that he is human – these are all “supporting reasons” such as you mentioned. Even if this doesn’t amount to much, it necessarily amounts to something (except in the vastly improbable case that the reasons for this opinion to correlate with the truth and the reasons for it to anti-correlate with the truth exactly cancel each other out).
I think what is tripping you up is the idea that you can be gamed by people who are telling you to “automatically believe everything that someone says”. But actually the laws of probability are theorems – not social rules or rules of thumb. If you do think you are being gamed or lied to then you might be rational to decrease your degree of belief in something that someone else claims, or else only update your belief a very small amount towards their belief. No-one denies that.
All that we are trying to explain to you is that your generalisations about not treating other people’s beliefs as evidence are wrong as probability theory. Belief is not binary – believe or not believe. It is a continuum, and updates in your degree of belief in a proposition can be large or tiny. You don’t have to flick from 0 to 1 when someone makes some assertion, but rationality requires you to make use of all the evidence available to you—therefore your degree of belief should change by some amount in some direction, which is usually towards the other person’s belief except in unusual circumstances.
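In odds form, Bayes’ theorem makes that continuum explicit: hearing the assertion multiplies your prior odds by a likelihood ratio, which can be barely above 1 (a tiny update) or enormous. Writing A for “this person asserts H”:

\[
\frac{P(H \mid A)}{P(\neg H \mid A)} \;=\; \frac{P(H)}{P(\neg H)} \cdot \frac{P(A \mid H)}{P(A \mid \neg H)}
\]

So long as the person is even slightly more likely to assert H when it is true than when it is false, the likelihood ratio exceeds 1 and your degree of belief moves, however slightly, towards theirs.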
Still, if you do in fact regard Yudkowsky’s (or anyone else’s) stated belief in some proposition as being completely uncorrelated with the truth of that proposition, then you should not revise your belief in either direction when you encounter that statement of his. Otherwise, failure to update on evidence – things that are entangled with what you want to know about – is irrational.
I want to take an aside here to note that you just stated that it is possible for rationality to “demand” that you accept the opinions of others as facts based merely on the reputation of that person before you even comprehend what the basis of that opinion is.
That is called a figure of speech, my friend.
I want you to understand that in making this statement you have caused me to reassess you as an individual under the effects of a ‘cult of personality’ effect.
I’ve done that recently enough that invocations of Eliezer’s name into a conversation are unimpressive. If his arguments are valid, they are valid. If they are not, I have no hesitation to say they are not.
There is nothing wrong with this in itself. But as has already been pointed out to you, an assertion and a link could be regarded as encouragement for you to seek out the arguments behind that assertion, which are written elsewhere. It might also be for the edification of other commenters and lurkers, who may be more impressed by Eliezer (therefore more willing to update on his beliefs).
Personally I don’t find the actual argument that started this little comment thread off remotely interesting (arguments over definitions are a common failure mode), so I shan’t get involved in that. But I hope you understand the point about updating on other people’s beliefs a little more clearly now.
Still, if you do in fact regard Yudkowsky’s (or anyone else’s) stated belief in some proposition as being completely uncorrelated with the truth of that proposition, then you should not revise your belief in either direction when you encounter that statement of his. Otherwise, failure to update on evidence – things that are entangled with what you want to know about – is irrational.
This is accurate in the context you are considering—it is less accurate generally without an additional caveat.
You should also not revise your belief if you have already counted the evidence on which the expert’s beliefs depend.
(Except, perhaps, for a very slight nudge as you become more confident that your own treatment of the evidence did not contain an error, but my intuition says this is probably well below the noise level.)
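Sketched formally (my formalization of the caveat, not anything stated above): if the expert’s stated belief B is screened off by evidence E that you have already conditioned on, i.e. P(B | H, E) = P(B | ¬H, E), then Bayes’ theorem gives

\[
P(H \mid E, B) = P(H \mid E),
\]

so the statement of the belief moves you nowhere once E has been counted.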
You should also not revise your belief if you have already counted the evidence on which the expert’s beliefs depend.
What if Yudkowsky says that he believes X with probability 90%, and you only believe it with probability 40% - even though you don’t believe that he has access to any more evidence than you do?
Also, I agree that in practice there is a “noise level”—i.e. very weak evidence isn’t worth paying attention to—but in theory (which was the context of the discussion) I don’t believe there is.
What if Yudkowsky says that he believes X with probability 90%, and you only believe it with probability 40% - even though you don’t believe that he has access to any more evidence than you do?
Then perhaps I have evidence that he does not, or perhaps our priors differ, or perhaps I have made a mistake, or perhaps he has made a mistake, or perhaps both. Ideally, I could talk to him and we could work to figure out which of us is wrong and by how much. Otherwise, I could consider the likelihood of the various possibilities and try to update accordingly.
For case 1, I should not be updating; I already have his evidence, and my result should be more accurate.
For case 2, I believe I should not be updating, though if someone disagrees we can delve deeper.
For cases 3 and 5, I should be updating. Preferably by finding my mistake, but I can probably approximate this by doing a normal update if I am in a hurry.
For case 4, I should not be updating.
Also, I agree that in practice there is a “noise level”—i.e. very weak evidence isn’t worth paying attention to—but in theory (which was the context of the discussion) I don’t believe there is.
In theory, there isn’t. My caveat regarding noise was directed to anyone intending to apply my parenthetical note to practice—when we consider a very small effect in the first place, we are likely to over-weight it.
What you are saying is that insofar as we know all of the evidence that has informed some authority’s belief in some proposition, his making a statement of that belief does not provide additional evidence. I agree with that, assuming we are ignoring tiny probabilities that are below a realistic “noise level”.
As you said this is not particularly relevant to the case of someone appealing to an authority during an argument, because their interlocutor is unlikely to know what evidence this authority possesses in the large majority of cases. But it is a good objection in general.
This is a different claim to that contained in the former two quotes. I do not disagree with this claim, but it does not support your earlier claims, which are the claims that I was criticising.
You are mistaken. It is exactly the same claim, rephrased at most to match the circumstances—containing exactly the same assertion: “Authorities on a given topic can be mistaken; and as such no appeal to an authority can be, absent any other actual supporting materials, treated as a rational motive for belief. In all such cases, it is those supporting materials which comprise the nature of the argument.” In the specific case, you are taking as supporting evidence your justified belief that the individual had in fact directly observed the event, and the justified belief that the individual would relate it to you honestly. Those, together, comprise justification for the belief.
That is called a figure of speech, my friend.
An insufficient retraction.
It is not cultish to praise someone highly.
What you did wasn’t praise.
But as has already been pointed out to you, an assertion and a link could be regarded as encouragement for you to seek out the arguments behind that assertion
See, THIS is why I called you cultish. Do you understand that the quote that was cited to me wasn’t even relevant contextually in the first place? I had already differentiated between proper rationality and instrumental rationality.
The quote of Eliezer’s was discussing instrumental rationality.
I even pointed this out.
But I hope you understand the point about updating on other people’s beliefs a little more clearly now.
No, I do not. But that’s because I wasn’t remotely confused about the topic to begin with, and have throughout these threads demonstrated a finer capacity to differentiate between the various modes and justifications of belief than anyone who has yet argued the topic in these threads with me, yourself included.
This conversation has officially reached my limit of investment, so feel free to get your last word in, but don’t be surprised if I never read it.
You are mistaken. It is exactly the same claim, rephrased at most to match the circumstances—containing exactly the same assertion: “Authorities on a given topic can be mistaken; and as such no appeal to an authority can be, absent any other actual supporting materials, treated as a rational motive for belief. In all such cases, it is those supporting materials which comprise the nature of the argument.” In the specific case, you are taking as supporting evidence your justified belief that the individual had in fact directly observed the event, and the justified belief that the individual would relate it to you honestly. Those, together, comprise justification for the belief.
So in other words, you would like to distinguish between “appeal to authority” and “supporting materials” as though when someone refers you to the sayings of some authority, they expect you to consider these sayings as data purely “in themselves”, separately from whatever reasons you may have for believing that the sayings of that authority are evidentially entangled with whatever you want to know about.
Firstly, “appeal to authority” is generally taken simply to mean the act of referring someone to an authority during an argument about something – there is no connotation that this authority has no evidential entanglement with the subject of the argument, which would be bizarre (why would anyone refer you to him in that case?)
Secondly, if someone makes a statement about something, that in itself implies that there is evidential entanglement between that thing and their statement – i.e. the thing they are talking about is part of the chain of cause and effect (however indirect) that led to the person eventually making a statement about it (otherwise we have to postulate a very big coincidence). Therefore the idea that someone could make a statement about something without there being any evidential entanglement between them and it (which is necessary in order for it to be true that you should not update your belief at all based on their statement) is implausible in the extreme.
You started off by using “appeal to authority” in the normal way, but now you are attempting to redefine it in a nonsensical way so as to avoid admitting that you were mistaken (NB: there is no shame in being mistaken).
If you have read Harry Potter and the Methods of Rationality, you may remember the bit where Quirrell demonstrates “how to lose” as an important lesson in magical combat. Correspondingly, in future I would advise you not to create edifices of nonsense when it would be easier just to admit your mistake. Debate is after all a constructive enterprise, not a battle for tribal status in which it is necessary to save face at all costs.
I’ll say again that the error that is facilitating your confusion about beliefs and evidence is the idea that belief is a binary quality – I believe, or I do not believe – when in fact you believe with probability 90%, or 55%, or 12.2485%. The way that the concept of belief and the phrase “I believe X” is used in ordinary conversation may mislead people on this point, but that doesn’t change the facts of probability theory.
This allows you to think that the question is whether you are “justified” in believing proposition X in light of evidence Y, when the right question to be asking is “how has my degree of belief in proposition X changed as a result of this evidence?” You are reluctant to accept that someone’s mere assertions can be evidence in favour of some proposition because you have in mind the idea that evidence must always be highly persuasive (so as to change your belief status in a binary way from “unjustified” to “justified”), otherwise it isn’t evidence – whereas actually, evidence is still evidence even if it only causes you to shift your degree of belief in a proposition from 1% to 1.2%.
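To make a shift like that 1% to 1.2% concrete (the likelihood ratio here is back-solved purely for illustration): a prior of 1% is odds of 1:99, and a posterior of 1.2% is odds of 1.2:98.8, so the assertion carried a likelihood ratio of

\[
\frac{1.2/98.8}{1/99} \approx 1.20,
\]

i.e. it was only about 1.2 times more likely to be made if the proposition were true than if it were false. Weak evidence, but evidence all the same.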
Firstly, “appeal to authority” is generally taken simply to mean the act of referring someone to an authority during an argument about something – there is no connotation that this authority has no evidential entanglement with the subject of the argument, which would be bizarre (why would anyone refer you to him in that case?)
“there is no connotation that this authority has no evidential entanglement with the subject of the argument”—quite correct. Which is why it is fallacious: it is an assertion that this is the case without corroborating that it actually is the case.
If it were the case, then the act would not be an ‘appeal to authority’.
I’ll say again that the error that is facilitating your confusion about beliefs and evidence is the idea that belief is a binary quality – I believe, or I do not believe – when in fact you believe with probability 90%, or 55%, or 12.2485%.
This is categorically invalid. Humans are not Bayesian belief-networks. In fact, humans are notoriously poor at assessing their own probabilistic estimates of belief. But this, really, is neither here nor there.
The only error that’s occurring here is your continued belief that beliefs are relevant to this conversation. They simply aren’t. We’re not discussing “what should you believe”—we are discussing “what should you hold to be true.”
And that, sir, categorically IS binary. A thing is either true or not true. If you affirm it to be true you are assigning it a fixed binary state.
This is categorically invalid. Humans are not Bayesian belief-networks.
You have a point in saying that “12.2485%” is an unlikely number to give to your degree of belief in something, although you could create a scenario in which it is reasonable (e.g. you put 122485 red balls in a bag...). And it’s also fair to say that casually giving a number to your degree of belief is often unwise when that number is plucked from thin air—if you are just using “90%” to mean “strong belief” for example. The point about belief not being binary stands in any case.
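For what it’s worth, one hypothetical way to finish that bag-of-balls scenario (the 1,000,000 total is my invention) makes the odd-looking number exact:

```python
# Hypothetical completion of the bag-of-balls scenario: 122,485 red
# balls among 1,000,000 total makes "12.2485%" an exact degree of
# belief rather than a number plucked from thin air.
red, total = 122_485, 1_000_000
print(f"P(red) = {red / total:.4%}")  # 12.2485%
```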
We’re not discussing “what should you believe”—we are discussing “what should you hold to be true.”
Those are one and the same! If that’s the real source of your disagreement with everyone here, it’s a doozy.
And that, sir, categorically IS binary. A thing is either true or not true. If you affirm it to be true you are assigning it a fixed binary state.
Perhaps this will help make the point clear. In fact I’m sure it will—it deals with this exact confusion. Please, if you don’t read any of these other links, look at that one!
No, they are not. They are fundamentally different. One is a point in a map. The other is a statement regarding the correlation of that map to the actual territory. These are not identical. Nor should they be.
As I have stated elsewhere: Bayesian ‘probabilistic beliefs’ discount too greatly the ability of the human mind to make assertions about the nature of the territory.
Perhaps this will help make the point clear. In fact I’m sure it will—it deals with this exact confusion.
The first time I read that page was roughly a year and a half ago.
I am not confused.
I am telling you something that you cannot digest; but nothing you have said is inscrutable to me.
These three things together should tell you something. I won’t bother trying to repeat myself about what.
These three things together should tell you something.
To be blunt, they tell us you are arrogant, ill informed and sufficiently entrenched in your habits of thought that trying to explain anything to you is almost certainly a waste of time even if meant well. Boasting of the extent of your knowledge of the relevant fundamentals while simultaneously blatantly displaying your ignorance of the same is probably the worst signal you can give regarding your potential.
I am afraid, simply put, you are mistaken. I can reveal this simply: what is the process by which a Bayesian rationalist can derive a belief about an event which is not expressible in probabilistic form?
I am telling you something that you cannot digest; but nothing you have said is inscrutable to me.
That is not something of which to be proud.
Depends on what it is that is being said and how. Unfortunately, in this case, we have each been using very simple language to achieve our goals—or, at least, equivalently comprehensible statements. Despite my best efforts to inform you of what it is you are not understanding, you have failed to understand it. I have, contrastingly, from the beginning understood what has been said to me—and yet you continue to believe that I do not.
This is problematic. But it is also indicative that the failure is simply not on my part.
If you understand something, you should be able to describe it in such a way that other people who understand it will agree that your description is correct. Thus far you have consistently failed to do this.
I am not confident that I understand your position yet. I don’t think you have made it very clear. But you have made it very clear that you think you understand Bayesian reasoning, but your understanding of how it works does not agree with anyone else’s here.
Not, sadly, for any particular point of actual fact. The objections and disagreements have all been consequential, rather than factual, in nature. I have, contrastingly, repeatedly been accused of committing myself to fallacies I simply have not committed (to name one specifically, the ‘Fallacy of Grey’).
Multiple individuals have objected to my noting the consequences of the fact that Bayesian rationality belief-networks are always maps and never assertions about the territory. And yet, this fact is a core element of Bayesian rationality.
but your understanding of how it works does not agree with anyone else’s here.
Quite frankly, the only reason this is so is because no one wants to confront the necessary conclusions resultant from my assertions about basic, core principles regarding Bayesian rationality.
Hence the absence of a response to my previous challenge: “what is the process by which a Bayesian rationalist can derive a belief about an event which is not expressible in probabilistic form?” (This is, of course, a “gotcha” question. There is no such process. Which is absolute proof of my positional claim regarding the flaws inherent in Bayesian rationality, and is furthermore directly related to the reasons why probabilistic statements are useless in deriving information regarding a specific manifested instantiation.)
I am not confident that I understand your position yet.
Frankly, I have come to despair of anyone on this site doing so. You folks lack the epistemological framework necessary to do so. I have attempted to relate this repeatedly. I have attempted to direct your thoughts in dialogue to this realization in multiple different ways. I have even simply spelled it out directly.
I have exhausted every rhetorical technique I am aware of to rephrase this point. I no longer have any hope on the matter, and frankly the behaviors I have begun to see out of many of you who are still in this thread have caused me to very seriously negatively reassess my opinion of this site’s community.
For the last year or so, I have been proselytizing this site to others as a good resource for learning how to become rational. I am now unable to do so without such heavy qualifiers that I’m not even sure it’s worth it.
What other people are telling you is that your representation of Bayesian reasoning is incorrect, and that you are misunderstanding them. I suggest that you try to lay out as clear and straightforward an explanation of Bayesian reasoning as you can. If other people agree that it is correct, then we will take your claims to be understanding us much more seriously. If we still tell you that you are misunderstanding it, then I think you should consider seriously the likelihood that you are, in fact, misunderstanding it.
If you do understand our position you should be capable of this, even if you disagree with it. I suggest leaving out your points of disagreement in your explanation; we can say if we agree that it reflects our understanding, and if it does, then you can tell us why you think we are wrong.
I think we are talking past each other right now. I have only low confidence that I understand your position, and I also have very low confidence that you understand ours. If you can do this, then I will no longer have low confidence that you understand our position, and I will put in the effort to attain sufficient understanding of your position (something that you claim to already have of ours) that I can produce an explanation of it that I am confident that you will agree with, and then we can have a proper conversation.
Bayesian reasoning formulates beliefs in the form of probability statements, which are predictive in nature. (See: “Making beliefs pay rent”)
Bayesian probabilities are fundamentally statements of a predictive nature, rather than statements of actual occurrence. This is the basis of the conflict between Bayesians and frequentists. Further corroboration of this point.
The language of Bayesian reasoning regarding beliefs is that of expressing beliefs in probabilistic form, and updating those within a network of beliefs (“givens”) which are each informed by the priors and new inputs.
These four points together are the basis of my assertion that Bayesian rationality is extremely poor at making assertions about what the territory actually is. This is where the epistemological barrier is introduced: there is a difference between what I believe to be true and what I can validly assert is true. The former is a predictive statement. The latter is a material, instantiated, assertion.
No matter how many times a coin has come up heads or tails, that information has no bearing on what position the coin under my hand is actually in. You can make statements about what you believe it will be; but that is not the same as saying what it is.
THIS is the failure of Bayesian reasoning in general; and it is why appeals to authority are always invalid.
Are you prepared to guess which parts I will take issue with?
Frankly, no. Especially since I derived each and every one of those four statements from canonical sources of explanations of how Bayesian rationality operates (and from LessWrong itself, no less.)
I’m more than willing to admit that I might find it interesting. I strongly anticipate that it will be very strongly unpersuasive as to the notion of my having a poor grasp of how Bayesian reasoning operates, however.
Bayesian probabilities are fundamentally statements of a predictive nature, rather than statements of actual occurrence.
Bayesian probabilities are predictive and statements of occurrence. To the extent that frequentist statements of occurrence are correct, Bayesian probabilities will always agree with them.
Bayesian reasoning formulates beliefs in the form of probability statements, which are predictive in nature.
I would not take issue with this if not in light of the statement that you made after it. It’s true that Bayesian probability statements are predictive; we can use them to reason about events we have not yet observed. They are also descriptive; they describe “rates of actual occurrence” as you put it.
It seems to me that you are confused about Bayesian reasoning because you are confused about frequentist reasoning, and so draw mistaken conclusions about Bayesian reasoning based on the comparisons Eliezer has made. A frequentist would, in fact, tell you that if you flipped a coin many times, and it has come up heads every time, then if you flip the coin again, it will probably come up heads. They would do a significance test for the proposition that the coin was biased, and determine that it almost certainly was. There is, in fact, no school of probability theory that reflects the position that you have been espousing so far. You seem to be contrasting “predictive” Bayesianism with “non-predictive” frequentism, arguing for a system that allows you to totally suspend judgment on events until you make direct observations about those events. But while frequentist statistics fails to let you make predictions about the probability of events when you do not have a body of data in which you can observe how often those events tend to occur, it does provide predictions about the probability of events based on known frequency data, and when a large body of data for the frequency of an event is available, the frequentist and Bayesian estimates for its probability will tend to converge.
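To illustrate that convergence with a concrete case, here is a sketch of both calculations for a coin that has come up heads 100 times in 100 flips (plain Python, no statistics library assumed):

```python
# Frequentist side: p-value for H0 "the coin is fair" after 100
# heads in 100 flips (the one-sided tail is the single all-heads
# outcome).
n, heads = 100, 100
p_value = 0.5 ** n
print(f"p-value under a fair coin: {p_value:.2e}")  # ~7.89e-31: reject H0

# Bayesian side: posterior predictive probability that the next
# flip is heads, under a uniform prior on the coin's bias
# (Laplace's rule of succession).
p_next_heads = (heads + 1) / (n + 2)
print(f"P(next flip is heads) ~ {p_next_heads:.3f}")  # ~0.990
```

Both schools conclude that the coin is almost certainly biased toward heads; neither supports suspending judgment about the next flip.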
I agree with JoshuaZ that the second part of your comment does not appear to follow from the first part at all, but if we resolve this confusion, perhaps things will become clearer.
Bayesian probabilities are predictive and statements of occurrence.
Yes, they are. But what they cannot be is statements regarding the exact nature of any given specific manifested instantiation of an event.
They are also descriptive; they describe “rates of actual occurrence” as you put it.
That is predictive.
It seems to me that you are confused about Bayesian reasoning because you are confused about frequentist reasoning,
I can dismiss this concern for you: while I’ve targeted Bayesian rationality here, frequentism would be essentially fungible with Bayesianism so far as all of my meaningful assertions are concerned.
I agree with JoshuaZ that the second part of your comment does not appear to follow from the first part at all, but if we resolve this confusion, perhaps things will become clearer.
I don’t know that it’s possible for that to occur until such time as I can discern a means of helping you folks to break through your epistemological barriers of comprehension (please note: comprehension is not equivalent to concurrence: I’m not asserting that if you understand me you will agree with me). Try following a little further where the dialogue currently is with JoshuaZ, perhaps? I seem to be beginning to make headway there.
If you want your intended audience to understand, try to tailor your explanation for people much dumber and less knowledgeable than you think they are.
Yup. There’s a deep inferential gap here, and I’m trying to relate it as best I can. I know I’m doing poorly, but a lot of that has to do with the fact that the ideas that you are having trouble with are so very simple to me that my simple explanations make no sense to you.
I know what all those words mean, but I can’t tell what you mean by them, and “specific manifested instantiation of an event” does not sound like a very good attempt to be clear about anything.
Specific: relating to a unique case.
Manifested: made to appear or become present.
Instantiation: a real instance or example
-- a specific manifested instantiation thus is a unique, real example of a thing or instance of an idea or event that is currently present and apparent. I sacrifice simplicity for precision in using this phrase; it lacks any semantic baggage aside from what I assign to it here, and this is a conversation where precision is life-and-death to comprehension.
Can you explain what you mean by this as simply as possible?
What is the difference between “There is a 99.9999% chance he’s dead, and I’m 99.9999% confident of this, Jim” and “He’s dead, Jim.”?
Acknowledging failure is in no wise congratulatory.
You received an answer, apparently not the one you were looking for.
Of course not. By asking the question I am implying that there is, in fact, a difference. This was in an effort to help you understand me. You in particular responded by answering as you would. Which does nothing to help you overcome what I described as the epistemological barrier of comprehension preventing you from understanding what I’m trying to relate.
To me, there is a very simple, very substantive, and very obvious difference between those two statements, and it isn’t wordiness. I invite you to try to discern what that difference is.
I am powerless over what you project into the emotional context of my statements, especially when I am directly noting a point of fact with no emotive terminology used whatsoever.
I am powerless over what you project into the emotional context of my statements, especially when I am directly noting a point of fact with no emotive terminology used whatsoever.
There was a temptation to just reply with “No. You are not.” But that’s probably less than helpful. Note that language is a subtle thing. Even when someone is making a purely factual claim, how they make it, especially when it involves assertions about other people, can easily carry negative emotional content. Moreover, careful, diplomatic phrasing can be used to minimize that problem.
That is not necessarily true. The simplest explanation is whichever explanation requires the least effort to comprehend. That, categorically, can take the form of a query / thought-exercise.
In this instance, I have already provided the simplest direct explanation I possibly can: probabilistic statements do not correlate to exact instances. I was asked to explain it even more simply yet. So I have to get a little non-linear.
So this confuses me even more. Given your qualification of what pedanterrific said, this seems off in your system. The objection you have to the Bayesians seems to be purely in the issues about the nature of your own map. The red-shirt being dead is not a statement in your own map.
If you had said no here I think I’d sort of see where you are going. Now I’m just confused.
So this confuses me even more. [...] The red-shirt being dead is not a statement in your own map.
I’ll elaborate.
Naively: yes.
Absolutely: no.
Recall the epistemology of naive realism:
Naïve realism, also known as direct realism or common sense realism, is a philosophy of mind rooted in a common sense theory of perception that claims that the senses provide us with direct awareness of the external world. In contrast, some forms of idealism assert that no world exists apart from mind-dependent ideas and some forms of skepticism say we cannot trust our senses.
Ok. Most of the time when McCoy determines that someone is dead, he uses a tricorder. Do you still declare his death when an intermediary object other than your senses is involved?
My form of naive realism is contingent, not absolutist. It is perfectly acceptable to find myself to be in error. But … there are certain forms of dead that are just plain dead. Such as the cat I just came home to discover was in rigor mortis an hour ago.
My condolences. Would you prefer if we use a different example? The upshot is that the Bayesian agrees with you that there is a reality out there. If you agree that one can be in error then you and the Bayesian aren’t disagreeing here.
The only thing we seem to disagree on is how to formulate statements of belief. As I wrote elsewhere:
What I am espousing is the use of contingent naive realism for handling assertions regarding “the territory” coincident with Bayesian reasoning regarding “the map”.
I strongly agree that Bayesian reasoning is a powerful tool for making predictive statements, but I still affirm the adage that the number of times a well-balanced coin has come up heads has no bearing on whether it actually will come up heads on the next trial.
I strongly agree that Bayesian reasoning is a powerful tool for making predictive statements, but I still affirm the adage that the number of times a well-balanced coin has come up heads has no bearing on whether it actually will come up heads on the next trial.
But even here the Bayesian agrees with you if the coin is well-balanced.
The only thing we seem to disagree on is how to formulate statements of belief.
If that is the case then the only disagreement you have is a matter of language not a matter of philosophy. This confuses me because the subthread about knowledge about one’s own internal cognitive processes seemed to assert a difference in actual philosophy.
But even here the Bayesian agrees with you if the coin is well-balanced.
I was making a trivial example. It gets more complicated when we start talking about the probabilities that a specific truth claim is valid.
This confuses me because the subthread about knowledge about one’s own internal cognitive processes seemed to assert a difference in actual philosophy.
I’m not exactly sure how to address this problem. I do know that it is more than just a question of language. The subthread you’re talking about was dealing with the existence of knowable truth, and I was using absolute truths for that purpose.
There are, necessarily, a very limited number of knowable absolute truths.
I do note that your wording on this instance, “knowledge about one’s own internal cognitive processes”, indicates that I may still be failing to achieve sufficient clarity in the message I was conveying in that thread. To reiterate: my claim was that it is absolutely true that when you are cognizant of a specific thought, you know that you are cognizant of that thought. In other words: you know absolutely that you are aware of your awareness. This is tautological to self-awareness.
If I say “he’s dead,” it means “I believe he is dead.” The information that I possess, the map, is all that I have access to. Even if my knowledge accurately reflects the territory, it cannot possibly be the territory. I see a rock, but my knowledge of the rock is not the rock, nor is my knowledge of my self my self.
The map is not the territory, but the map is the only one that you’ve got in your head. “It is raining” and “I do not believe it is raining” can be consistent with each other, but I cannot consistently assert “it is raining, but I do not believe that it is raining.”
You know, for a moment until I saw who it was who was replying, I had a brief flash of hope that this conversation had actually started to get somewhere.
This is starting to hurt my feelings, guys. This is my earnest attempt to restate this.
Put another way: the statement “the map is not the territory” does not have a confidence value of .99999whatever. It is a statement about the map, which is itself a territory, and is simply true.
(I may, of course, be mistaken about Logos’ epistemology, in which case I would appreciate clarification.)
Well, you can take “the map is not the territory” as a premise of a system of beliefs, and assign it a probability of 1 within that system, but you should not assign a probability of 1 to the system of beliefs being true.
Of course, there are some uncertainties that it makes essentially no sense to condition your behavior on. What if everything is wrong and nothing makes sense? Then kabumfuck nothing, you do the best with what you have.
First, I want to note the tone you used. I have not so disrespected you in this dialogue.
Second: “Put another way: the statement “the map is not the territory” does not have a confidence value of .99999whatever. It is a statement about the map, which is itself a territory, and is simply true.”
There is some truth to the claim that “the map is a territory”, but it’s not really very useful. Also, while it is demonstrable that the map is not the territory, it is not tautological and thus requires demonstration.
I am, however, comfortable stating that any specific individual “map” which has been so demonstrated to be a territory of its own without being the territory which it maps is, in truth, a territory of its own (caveat: thar be recursion here!), and that this is expressible as a truth claim without the need for probability values.
First, I have no idea what you’re talking about. I intended no disrespect (in this comment). What about my comment indicated to you a negative tone? I am honestly surprised that you would interpret it this way, and wish to correct whatever flaw in my communicatory ability has caused this.
Second:
There is some truth to the claim that “the map is a territory”, but it’s not really very useful.
Okay, now I’m starting to get confused again. ‘Some’ truth? And what does ‘useful’ have to do with it?
Also, while it is demonstrable that the map is not the territory, it is not tautological and thus requires demonstration.
It… seems tautological to me...
I am, however, comfortable stating that any specific individual “map” which has been so demonstrated … is, in truth, a territory of its own …, and that this is expressible as a truth claim without the need for probability values.
What about my comment indicated to you a negative tone?
The way I read “This is starting to hurt my feelings, guys.” was that you were expressing it as a miming of myself, as opposed to being indicative of your own sentiments. If this was not the case, then I apologize for the projection.
Okay, now I’m starting to get confused again. ‘Some’ truth? And what does ‘useful’ have to do with it?
Whether or not a truth claim is valid is binary. How far a given valid claim extends is quantitative rather than qualitative, however. My comment towards usefulness has more to do with the utility of the notion (i.e., a true-and-interesting notion is more “useful” than a true-and-trivial notion).
It… seems tautological to me...
Tautological would be “The map is a map”. Statements that depend upon definitions for their truth value approach tautology but aren’t necessarily so. ¬A ≠ A is tautological, as is A = A. However, (B ⇔ A) → (¬A ⇔ ¬B) is definitionally true. So: “the map is not the territory” is definitionally true (and by its definition it is demonstrated). However, it is possible for some conceivable territory that the territory is the map. (I am aware of no such instance, but it conceptually could occur.) This would, however, require a different operational definition of what a “map” is from the context we currently use, so it would actually be a different statement.
So, um, have I understood you or not?
I’m comfortable saying “close enough for government work”.
was a direct (and, in point of fact, honest) response to
You know, for a moment until I saw who it was who was replying, I had a brief flash of hope that this conversation had actually started to get somewhere.
Though, in retrospect, this may not mean what I took it to mean.
(i.e., a true-and-interesting notion is more “useful” than a true-and-trivial notion).
Agreed.
So: “the map is not the territory” is definitionally true (and by its definition it is demonstrated). However, it is possible for some conceivable territory that the territory is the map. … This would, however, require a different operational definition of what a “map” is from the context we currently use, so it would actually be a different statement.
Ah, ok.
I’m comfortable saying “close enough for government work”.
The map is not the territory, but the map is the only one that you’ve got in your head.
This is going nowhere. I am intimately familiar with the epistemological framework you just related to me, and I am trying to convey one wholly unlike it to you. Exactly what good does it do this dialogue for you to continue reiterating points I have already discussed and explained to you ad nauseam?
I see a rock, but my knowledge of the rock is not the rock, nor is my knowledge of my self my self.
But you know that you ‘see’ a rock. And that IS territory, in the absolute sense.
If I say “he’s dead,” it means “I believe he is dead.”
Unless you are using a naive realist epistemological framework. In which case it means “he’s dead” (and can be either true or not true.)
But you know that you ‘see’ a rock. And that IS territory, in the absolute sense.
My understanding of what you are saying is that I should assign a probability of 1 to the proposition that I experience the quale of rock-seeing.
I do not accept that this is the case. There are alternatives, although they do not make much intuitive sense. It may seem to me that I cannot think I have an experience, and not have it, but maybe reality isn’t even that coherent. Maybe it doesn’t follow for reasons that are fundamentally impossible for me to make sense of. Also maybe I don’t exist.
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience, or that I don’t exist, but if I started getting evidence that reality is fundamentally incoherent, I’d definitely have to start judging it as more likely.
My understanding of what you are saying is that I should assign a probability of 1 to the proposition that I experience the quale of rock-seeing.
I am comfortable agreeing with this statement.
There are alternatives, although they do not make much intuitive sense.
I do not accept that this is the case. Postulating counterfactuals does not validate the possibility of those counterfactuals (this is just another variation of the Ontological Argument, and is as invalid as the Ontological Argument itself, for very similar reasons).
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience
This reveals why I have the problems I do with the epistemology embedded in Bayesian reasoning. It is very similar to the fact that I’m pretty sure you could agree with the notion that it is conceptually plausible that, if reality were ‘fundamentally incoherent’, the statement ¬A = A could be true.
The thing is, it isn’t possible for such incoherence to exist. They are manifestations of definitions; the act of being so defined causes them to be proscriptive restrictions on the behavior of all that which exists.
For what it’s worth, I also hold that the Logical Absolutes are in fact absolute, though I also assert that this is essentially meaningless.
The thing is, it isn’t possible for such incoherence to exist.
How do you know?
It seems pretty reasonable to me that A must always equal A, but if I suddenly found that the universe I was living in ran on dream logic, and 2+2 stopped equaling the same thing every time, I’d start viewing the notion with rather more skepticism.
Because they are functions of definition. Altering the definition invalidates the scenario.
and 2+2 stopped equaling the same thing every time
Eliezer used a cognitive trick in this case: he postulated a counterfactual and treated that postulation as sufficient justification to treat the counterfactual as a possibility. This is not justified.
Because they are functions of definition. Altering the definition invalidates the scenario.
If you state that A is A by definition, then fine, you’ve got a definition that A=A. But that doesn’t mean that the definition must be manifest. Things cannot be defined into reality.
Now, I believe that A=A is a real rule that reality follows, with very high confidence. I am even willing to accept it as a premise within my system of beliefs, and assign it a probability of 1 in the context of that system of beliefs, but I can’t assign a probability of 1 to that system of beliefs.
Eliezer used a cognitive trick in this case: he postulated a counterfactual and treated that postulation as sufficient justification to treat the counterfactual as a possibility. This is not justified.
It’s possible contingent on things that we have very strong reason to believe simply being wrong.
The postulation does not make it a possibility. It was a possibility without ever being postulated.
The postulation “Maybe time isn’t absolute, and there is no sense in which two events in different places can objectively be said to take place at the same time” would probably have been regarded by most, a thousand years ago, as impossible. People would respond that that simply isn’t what time is. But from what we can see today, it looks like reality really does work like that, and any definition of “time” which demands absolute simultaneity exist simply doesn’t correspond to our universe.
In any case, I don’t see how you get from the premise that you can have absolute certainty about your experience of qualia to the position that you can’t judge how likely it is that the Sportland Sports won the game last night based on Neighbor Bob’s say so.
Another epistemological breakdown may be about to occur here. I separate “real” from “existant”.
That which is real is any pattern that which exists is proscriptively constrained by.
That which exists is any phenomenon which directly interacts with other phenomena.
I agree with you that you cannot define things into existence. But you most assuredly can define things into reality. The logical absolutes are examples of this.
Now, I believe that A=A is a real rule that reality follows,
Yup. But that’s a manifestation of the definition, and nothing more. If A=A had never been defined, no one would bat an eye at ¬A = A. But that statement would bear no relation to the definitions supporting the assertion A=A.
It’s possible contingent on things that we have very strong reason to believe simply being wrong.
We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that they are not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
But from what we can see today, it looks like reality really does work like that, and any definition of “time” which demands absolute simultaneity exist simply doesn’t correspond to our universe.
Agreed. But time isn’t “definitional” in nature; that is, it is more than a mere product of definition. We define ‘time’ to be an existing phenomenon with specific characteristics—and thus the definition of how time behaves is constrained by the actual patterns that existing phenomenon adheres to.
The number 3 on the other hand is defined as itself, and there is no existing phenomenon to which it corresponds. We can find three instances of a given phenomenon, but that instantiated collection should not be conflated with an instantiation of the number three itself.
In any case, I don’t see how you get from the premise that you can have absolute certainty about your experience of qualia to the position that you can’t judge how likely it is that the Sportland Sports won the game last night based on Neighbor Bob’s say so.
You can’t. Conversations—and thus, topics—evolve over time, and this conversation has been going on for over 24 hours now.
We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that they are not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
Things that have been believed impossible have sometimes been found to be true. Just because you do not yet have any reason to suspect that something is true does not mean that you can assign zero probability to its truth. If you did assign a proposition zero probability, no amount of justification could ever budge you from zero probability; whatever evidence or arguments you received, it would always be infinitely more likely that they came about from some process unconnected to the truth of the proposition than that the proposition was true.
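That zero is immovable falls directly out of Bayes’ theorem: if P(H) = 0, then for any evidence E with P(E) > 0,

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;=\; \frac{P(E \mid H) \cdot 0}{P(E)} \;=\; 0,
\]

so no observation whatsoever can lift the probability off zero.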
I’m getting rather tired of having to start this conversation over yet again. Do you understand my meaning when I say that until such time as a given counterfactual is provided with justification to accept its plausibility, the null hypothesis is that it is not?
I believe that I understand you, but I don’t think that is part of an epistemology that is as good at forming true beliefs and recognizing false ones.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
As a social prescription, you can call certain claims “justified knowledge,” based on how they are arrived at, and not call other claims about reality “justified knowledge” because the evidence on which they were arrived at does not meet the same criteria. And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong. If you had a computer that could process whatever information you put into it, and tell you exactly how confident you ought to be about any given claim given that information, with the claims being right exactly that often, then the category of “justified belief” would not correspond to any particular range of likelihood.
This sounds like a point where you would complain that I am confusing “aggregated average” with “manifested instantiation.” We can accept the premise that the territory is the territory, and a particular event either happened or it did not. So if we are talking about a particular event, should we be able to say that it actually definitely happened, because that’s how the territory works? Well, we can say that, but then we start observing that some of the statements we make about things that “definitely happened” turn out to be wrong. If we accept as a premise in our epistemologies that the territory is what it is, we can accept that the statements we make about the territory are either right, absolutely, or wrong, absolutely. But we don’t know which are which, and “justified knowledge” won’t correspond to any particular level of confidence.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong, and tells you that you can’t predict the likelihood of some things when you can actually make confidence predictions about them and be right as often as you say you should be, I think you’ll have a hard time selling that it’s a good epistemology.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong.
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
Being able to readily differentiate between these two categories is a vital element for ways of knowing.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong,
I don’t know, at this point, exactly where this particular thread is in the nested tree of various conversations I’m handling. But I can tell you that at no time have I espoused any form of absolutism that allows for non-absolute truths to be expressed as such.
What I have espoused, however, is the use of contingent naive realism coincident with Bayesian reasoning—while understanding the nature of what each “tool” addresses.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Rational-skepticism doesn’t necessarily operate experimentally and I’m sure you know this.
That being said: yes, yes I did. Thought-experiment, but experiment it is.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
Yes, I am aware of how scientists use null hypotheses. But the null hypothesis is only a construct which exists in the context of significance testing, and significance testing does not reliably get accurate results when you measure small effect sizes, and also gives you different probabilities based on the same data depending on how you categorize that data; Eliezer already gave an example of this in one of the pages you linked earlier. The whole reason that people on this site in general have enthusiasm for Bayesian statistics is that we believe they do better, as well as it is possible to do.
The concept of a “null hypothesis,” as it is used in frequentist statistics, doesn’t even make sense in the context of a system where, as you have claimed, we cannot know that it is more likely that a coin will come up heads on the basis of it having come up heads the last hundred times you’ve flipped it.
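To make the coin example concrete, here is a minimal sketch of the Bayesian treatment (assuming, purely for illustration, a uniform Beta(1,1) prior over the coin’s heads-bias, which yields Laplace’s rule of succession):

    # After observing 100 heads in 100 flips under a uniform Beta(1, 1) prior,
    # the posterior is Beta(101, 1), and the predictive probability that the
    # next flip is heads is (heads + 1) / (flips + 2).
    heads, flips = 100, 100
    p_next_heads = (heads + 1) / (flips + 2)
    print(p_next_heads)  # ~0.990, not an immovable 0.5

On this treatment the hundred observed heads plainly do bear on the next flip, which is exactly the point at issue.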
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
If I say “it is raining out,” this is a claim about the territory, yes. If I say “I believe that it is raining out,” this is a claim about my map (and my map exists within, and is a part of, the territory, so it is also a claim “about the territory,” but not about the same part of the territory.)
But my claim about the territory is not certain to be correct. If I say “it is raining out, 99.99999% confidence,” which is the sort of thing I would mean when I say “it is raining out,” that means that I am saying that the territory probably agrees with this statement, but the information in my map does not allow me to have a higher confidence in the agreement than that.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
A null hypothesis is a particularly useful simplification, not strictly a necessary tool.
I am saying it is not strictly necessary to have a hypothesis called “null” but rather that such a thing is just extremely useful. I would definitely say the thing about defaults too.
I am saying it is not strictly necessary to have a hypothesis called “null”
That’s not what a null hypothesis is. A null hypothesis is a default state.
I would definitely say the thing about defaults too.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
If you tell me you’ve flipped a coin, I may not have any default position on whether it’s heads or tails. Similarly, I might have no prior belief, or a symmetric prior, on lots of questions that I haven’t thought much about or that don’t have any visible correlation to anything else I know about.
Sure. I didn’t say “I have no information related to the experiment”. What I’m saying is this: if I do an experiment to choose between K options, it might be that I don’t have any prior (or have a symmetric prior) about which of those K it will be. That’s what a null hypothesis in statistics is. When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Yes, any experiment also will give you information about whether the experiment’s assumptions were valid. But talking about null hypotheses or default hypotheses is in the context of a particular formal model, where we’re referring to something more specific.
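For concreteness, here is a minimal sketch of that formal model (the trial counts are invented; the test itself is the standard Fisher exact test from scipy):

    from scipy.stats import fisher_exact

    # Hypothetical trial counts: rows are treatment/placebo,
    # columns are recovered/not recovered.
    table = [[30, 20],   # treatment group
             [18, 32]]   # placebo group

    # The null hypothesis is "no effect": recovery is independent of which
    # group a subject was in. A small p-value is taken as grounds to reject
    # that default.
    odds_ratio, p_value = fisher_exact(table)
    print(odds_ratio, p_value)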
When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Correct. And that’s a situationally relevant phenomenon. One quite similar to how “the coin will be on one of its sides” is situationally relevant to the coin-toss. (As opposed to it being on edge, or the faces smoothed off.)
You had asked for an example of where an individual might have no prior “on a given topic.” I gave one, for a narrow topic: “is the coin heads or tails?”. I didn’t say, and you didn’t ask, for a case where an individual has no prior on anything related to the topic.
But let me give one attempt at that stronger claim. You’ve never met my friend Sam. You’ve never heard of or thought of my friend Sam. Before this comment, you didn’t have a prior on his having a birthday or on his existing; you had never considered the possibility that lesswrong-commenter asr has a friend Sam. I would say you have no beliefs on the topic of Sam’s birthday.
It is my default position that they do not exist, even as absent constructs, unless otherwise noted (with the usual lowered threshold of standards of evidence for more evidently ordinary claims). That’s the whole point of default positions; they inform us how to react to new criteria or phenomena as they arise.
By bringing up the issue of non-topics you’re moving the goal-post. I asked you how it could be that a person could have no defaults on a given topic.
If I tell you that there is a flugmizzr, then you know that certain things are ‘true’—at least, you presume them to be until shown otherwise: one, that flugmizzrs are enumerable; two, that they are somehow enumerable by people; and three, that flugmizzrs are knowable, discrete phenomena.
Those are most assuredly amongst your defaults on the topic. They could easily each be wrong—there is no way to know—but if you trust my assertion of “a flugmizzr” then each of these becomes a required default, useful for acquiring further information.
I asked you how it could be that a person could have no defaults on a given topic.
To clarify—my answer is that “there are topics I’ve never considered, and before consideration, I need not have a default belief.” For me at least, the level of consideration required to actually form a belief is nontrivial. I am often in a state of uninterested impartiality. If you ask me whether you have a friend named Joe, I would not hazard a guess and would be about equally surprised by either answer.
To put it more abstractly: There’s no particular computational reason why the mind needs to have a default for every input. You suggest some topic X, and it’s quite possible for me to remain blank, even if I heard you and constructed some internal representation of X. That representation need not be tied to a belief or disbelief in any of the propositions about X under discussion.
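One way to make the computational point concrete (a toy sketch; the propositions are invented):

    # A belief store need not supply a default degree of belief for every
    # possible input; "no entry yet" is a perfectly representable state.
    beliefs = {"it is raining in my city": 0.2}

    print(beliefs.get("asr's friend Sam has a birthday in June"))  # -> None

Constructing an internal representation of the query is not the same operation as assigning it a probability.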
Agreed with your main point; curious about a peripheral:
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience
Is this meaningfully different, in the context you’re operating in, from coming to believe that you probably didn’t experience what you believe you experienced?
Because I also have trouble imagining myself being skeptical about whether I’m actually experiencing what I currently, er, experience myself experiencing (though I can certainly imagine believing things about my current experience that just ain’t so, which is beside your point). But I have no difficulty imagining myself being skeptical about whether I actually experienced what I currently remember myself having experienced a moment ago; that happened a lot after my brain injury, for example.
Is this meaningfully different, in the context you’re operating in, from coming to believe that you probably didn’t experience what you believe you experienced?
Yes. I have plenty of evidence that people sometimes become convinced that they’ve had experiences that they haven’t had, but reality would have to work very differently than I think it does for people not to be having the qualia they think they’re having.
(nods) That’s what I figured, but wanted to confirm.
Given that my “current” perception of the world is integrating a variety of different inputs that arrived at my senses at different times, I mostly suspect that my intuitive confidence that there’s a sharp qualitative difference between “what I am currently experiencing” and “what I remember having experienced” is (like many of my intuitive confidences) simply not reflective of what’s actually going on.
What is the difference between “There is a 99.9999% chance he’s dead, and I’m 99.9999% confident of this, Jim” and “He’s dead, Jim.”?
Pedanterrific nailed it.
Or rather, people mostly don’t realize that even claims for which they have high confidence ought to have probabilities attached to them. If I observe that someone seems to be dead, and I tell another person “he’s dead,” what I mean is that I have a very high but less than 1 confidence that he’s dead. A person might think they’re justified in being absolutely certain that someone is dead, but this is something that people have been wrong about plenty of times before; they’re simply unaware of the possibility that they’re wrong.
Suppose that you want to be really sure that the person is dead, so you cut their head off. Can you be absolutely certain then? No, they could have been substituted by an illusion by some Sufficiently Advanced technology you’re not aware of, they could be some amazing never-before-seen medical freak who can survive with their body cut off, or more likely, you’re simply delusional and only imagined that you cut off their head or saw them dead in the first place. These things are very unlikely, but if the next day they turn up perfectly fine, answer questions only they should be able to answer, confirm their identity with dental records, fingerprints, retinal scan and DNA tests, give you your secret handshake and assure you that they absolutely did not die yesterday, you had better start regarding the idea that they died with a lot more suspicion.
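A rough sketch of that reversal in odds form, with invented likelihood ratios standing in for the dental records, fingerprints, handshake and so on:

    import math

    def update_log_odds(prior_prob, likelihood_ratios):
        # Start from a prior probability and fold in independent pieces of
        # evidence, each expressed as P(evidence | dead) / P(evidence | alive).
        log_odds = math.log(prior_prob / (1 - prior_prob))
        for lr in likelihood_ratios:
            log_odds += math.log(lr)
        odds = math.exp(log_odds)
        return odds / (1 + odds)

    # Hypothetical numbers: near-certainty of death, then several pieces of
    # next-day evidence that are each far more likely if the person is alive.
    print(update_log_odds(0.999999, [1e-4, 1e-3, 1e-3]))  # ~1e-4: probably alive after all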
Or rather, people mostly don’t realize that even claims for which they have high confidence ought to have probabilities attached to them.
Thank you for reiterating how to properly formulate beliefs. Unfortunately, this is not relevant to this conversation.
A person might think they’re justified in being absolutely certain that someone is dead, but this is something that people have been wrong about plenty of times before; they’re simply unaware of the possibility that they’re wrong.
That a truth claim is later falsified does not mean it wasn’t a truth claim.
Suppose that you want to be really sure that the person is dead, so you cut their head off. Can you be absolutely certain then? No, they could have been substituted by an illusion by some Sufficiently Advanced technology you’re not aware of,
Again, thank you for again demonstrating the Problem of Induction. Again, it just isn’t relevant to this conversation.
By continuing to bring these points up you are rejecting restrictions, definitions, and qualifiers I have added to my claims, to the point where what you are attempting to discuss is entirely unrelated to anything I’m discussing.
I have no interest in talking past one another.
Ok. I don’t think 1 is a Bayesian issue by itself. That’s a general rationality issue. (Speaking as a non-Bayesian fellow-traveler of the Bayesians.)
2, 3, and 4 seem roughly accurate. Whether 3 is correct depends a lot on how you unpack occurrence. A Bayesian is perfectly ok with the central limit theorem and applying it to a coin. This is a statement about occurrences. A Bayesian agrees with the frequentist that if you flip a fair coin the ratio of heads to tails should approach 1 as the number of flips goes to infinity. So what do you mean by occurrences here that Bayesians can’t talk about them?
But there then seems to be a total disconnect between those statements and your later claims and even going back and reading your earlier remarks doesn’t give any illuminating connection.
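For what it’s worth, the occurrence claim in question is easy to exhibit by simulation (a toy sketch assuming a fair coin):

    import random

    random.seed(0)
    flips = heads = 0
    for n in (100, 10_000, 1_000_000):
        while flips < n:
            heads += random.random() < 0.5
            flips += 1
        tails = flips - heads
        print(n, heads / tails)  # the heads-to-tails ratio drifts toward 1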
No matter how many times a coin has come up heads or tails, that information has no bearing on what position the coin under my hand is actually in. You can make statements about what you believe it will be; but that is not the same as saying what it is.
I can’t parse this in any way that makes sense. Are you simply objecting to the fact that 0 and 1 are not allowed probabilities in a Bayesian framework? If not, what does this mean?
I’m saying that the Bayesian framework is restricted to probabilities, and as such is entirely unsuited to non-probabilistic matters. Such as the number of fingers I am currently seeing on my right hand as I type this. For you, it’s a question of what you predict to be true. For me, it’s a specific manifested instance, and as such is not subject to any form of probabilistic assessments.
Note that this is not a claim that we do not share a single physical reality, but rather a question of the ability of either of us to make valid claims of truth.
I’m slowly beginning to understand your thought process. The Bayesian approach treats the number of fingers you currently see on your right hand as a probabilistic matter. The reason it does this, and the reason this method is preferable to those which treat the number of fingers on your hand as not subject to probabilistic assessments is that you can be wrong about the number of fingers on your hand. To demonstrate this I could describe any number of complex scenarios in which you have been tricked about the number of fingers you have. Or I could just point you to real instances of people being wrong about the number of limbs they possess or people who outright deny their disability.
Like anything else a “specific manifested instance” is an entity which we are justified in believing so long as it reliably explains and predicts our sensory impressions.
The reason it does this, and the reason this method is preferable to those which treat the number of fingers on your hand as not subject to probabilistic assessments is that you can be wrong about the number of fingers on your hand.
True, but irrelevant. It would have helped if you had continued to read further; you would have seen me explain to JoshuaZ that he had made exactly the same error that you just made in understanding what I just said.
The specific manifested instance is not the number of fingers but the number of fingers I am currently seeing. It is not, fundamentally speaking, possible for me to be mistaken about the latter (caveat: this only applies to ongoing perception, as opposed to recollections of perception).
Like anything else a “specific manifested instance” is an entity which we are justified in believing so long as it reliably explains and predicts our sensory impressions.
It is not proper to speak of beliefs about specific manifested instances when making assertions about what those instantiations actually are.
The statement “I observe X” is unequivocally absolutely true. Any conclusions derived from it however do not inherit this property.
Similarly, I want to note that you are close to making a categorical error regarding the relevance of explanatory power and predictions to this discussion. Those are very important elements for the formation of rational beliefs; but they are irrelevant to justified truth claims.
The specific manifested instance is not the number of fingers but the number of fingers I am currently seeing. It is not, fundamentally speaking, possible for me to be mistaken about the latter (caveat: this only applies to ongoing perception, as opposed to recollections of perception).
Perceptions do not have propositional content; speaking about their truth or falsity is nonsense. Beliefs about perceptions like “the number of fingers I am currently seeing is five” do and can correspondingly be false. They are of course rarely false but humans routinely miscount things. The closer a belief gets to merely expressing a perception the less there is that can be meaningfully said about its truth value.
Similarly, I want to note that you are close to making a categorical error regarding the relevance of explanatory power and predictions to this discussion. Those are very important elements for the formation of rational beliefs; but they are irrelevant to justified truth claims.
Perceptions do not have propositional content; speaking about their truth or falsity is nonsense.
Unless you are operating within the naive realist framework.
Beliefs about perceptions like “the number of fingers I am currently seeing is five” do and can correspondingly be false.
You’ve fallen into that same error I originally warned about. You are conflating beliefs about the nature of the perception with beliefs about the content of the perception.
The closer a belief gets to merely expressing a perception the less there is that can be meaningfully said about its truth value.
True but irrelevant to this discussion. I never claimed that there were absolute truths accessible to an arbitrary person which were of significant informational value. I only asserted that they do exist.
A rational belief is a justified truth claim.
Then I guess my epistemology doesn’t exist. I must have just been trolling this entire time. I couldn’t possibly believe that this statement is committing a categorical error.
Are you willing to accept the notion that I am conveying to you a framework that you are currently unfamiliar with that makes statements about how truth, belief, and knowledge operate which you currently disagree with? If yes, then why do you actively resist comprehending my statements in this manner? What are you hoping to achieve?
Then I guess my epistemology doesn’t exist. I must have just been trolling this entire time. I couldn’t possibly believe that this statement is committing a categorical error.
Are you willing to accept the notion that I am conveying to you a framework that you are currently unfamiliar with that makes statements about how truth, belief, and knowledge operate which you currently disagree with? If yes, then why do you actively resist comprehending my statements in this manner? What are you hoping to achieve?
What are you talking about! We’re talking about epistemology! If you want to demonstrate why calling a rational belief a justified truth claim is a category error then do so. But please stop condescendingly repeating it. I actively “resist comprehending your statements”?! You can’t just assert things that don’t make sense in another person’s framework and expect them to not say “No those things are the same”.
In any case, if it is a common position in the epistemological literature then I suspect I am familiar with it and that you are simply really bad at explaining what it is. If it is your original epistemological framework then I suspect it will be a bad one (nothing personal, just my experience with the reference class).
Unless you are operating within the naive realist framework.
Is that your position?
You’ve fallen into that same error I originally warned about. You are conflating beliefs about the nature of the perception with beliefs about the content of the perception.
You keep doing this. You keep using words to make distinctions as if it were obvious what distinction is implied. I can assure you nearly no one here has any idea what you mean by the difference between the nature of the perception and the content of the perception. Please stop acting like we’re stupid because you aren’t explaining yourself.
I’m saying that the Bayesian framework is restricted to probabilities, and as such is entirely unsuited to non-probabilistic matters. Such as the number of fingers I am currently seeing on my right hand as I type this. For you, it’s a question of what you predict to be true. For me, it’s a specific manifested instance, and as such is not subject to any form of probabilistic assessments.
Actually, that’s a probabilistic assertion that you are seeing n fingers for whatever n you choose. You could for example be hallucinating. Or could miscount how many fingers you have. Humans aren’t used to thinking that way, and it generally helps for practical purposes not to think this way. But presumably if five minutes from now a person in a white lab coat walked into your room and explained that you had been tested with a new reversible neurological procedure that specifically alters how many fingers people think they have on their hands and makes them forget that they had any such procedure, you wouldn’t assign zero chance to the possibility that they were telling the truth.
Note by the way there are stroke victims who assert contrary to all evidence that they can move a paralyzed limb. How certain are you that you aren’t a victim of such a stroke? Is your ability to move your arms a specific manifested instance? Are you sure of that? If that case is different than the finger example how is it different?
Actually, that’s a probabilistic assertion that you are seeing n fingers for whatever n you choose. You could for example be hallucinating. Or could miscount how many fingers you have.
If I am hallucinating, I am still seeing what I am seeing. If I miscount, I still see what I see. There is nothing probabilistic about the exact condition of what it is that I am seeing. You can, if you wish to eschew naive realism, make fundamental assertions about the necessarily inductive nature of all empirical observations—but then, there’s a reason why I phrased my statement the way I did: not “I can see how many fingers I really have” but “I know how many fingers I am currently seeing”.
Are you able to properly parse the difference between these two, or do I need to go further in depth about this?
(The remainder of your post expounded further along the lines of an explanation into your response, which itself was based on an erroneous reading of what I had written. As such I am disregarding it.)
Do you take the same attitude with all the intellectual communities that don’t believe appeals to authority are always fallacious? If so you must find yourself rather isolated.
I have exhausted every rhetorical technique I am aware of to rephrase this point. I no longer have any hope on the matter, and frankly the behaviors I have begun to see out of many of you who are still in this thread have caused me to very seriously negatively reassess my opinion of this site’s community.
FWIW, for I think the first time I see LW as going more or less mad in perceiving value in a blog post linked to in the discussion section.
For what it is worth I hadn’t followed the thread and my impression when reading it after your priming was “just kinda ok”. The reasoning wasn’t absurd or anything but there isn’t any easy way to see how much influence that particular dynamic has had relative to other factors. My impression is that the effect is relatively minor.
I tentatively think any story humans tell about natural selection that obeys certain Darwinian and logical rules is true in that it must have an effect. However this effect may be too small to make any predictions from. This thought is under suspicion for committing the no true Scotsman fallacy.
An example is group selection. If humans can tell a non-flawed story about why it would be in a region of foxes’ benefit to individually restrain their breeding, this does not mean one can predict foxes can be seen to do this. It does mean that the effect itself is real, subject to caveats about the rate of migration for foxes from one region to another, etc., such that under artificial enough conditions the real effect could be important. The problem is that there are a million other real effects that don’t come to mind as nice stories, and all have different vectors of effect.
This is why evolutionary psychology and the like are so bewitching and misleading. Pretty much all the effects postulated are true—though most are insignificant. People are entranced by their logical truth.
… I am unable to parse “FWIW, for I think the first time I see LW as going more or less mad in perceiving value in a blog post linked to in the discussion section.” to anything intelligible. What are you trying to say? I’ll wait until you respond before following the link.
Recently, I found myself disagreeing with dozens of LWers. Presumably, when this happens, sometimes I’m right and sometimes I’m wrong. Since I shouldn’t be totally confident I am right this time, how confident should I be?
Confidence in a given circumstance should be constrained by:
the available evidence at hand
how well you can demonstrate internal and external consistency in conforming to said evidence.
how well your understanding of that evidence allows you to predict outcomes of events associated with said evidence.
Any probabilities resultant from this would have to be taken as aggregates for predictive purposes, of course, and as such could not be ascribed as valid justification in any specific instance. (This caveat should be totally unsurprising from me at this point.)
how well you can demonstrate internal and external consistency in conforming to said evidence.
how well your understanding of that evidence allows you to predict outcomes of events associated with said evidence.
It’s a bit tricky because my position is that the post has practically no content and cannot be used to make predictions because it is a careful construction of an effect that is reasonable and does not contradict evidence, though it is in complete disregard of effect size.
After a brief skimming I have come to the conclusion that a brief skimming is not effective enough to provide a sufficient understanding of the conversation thread in question as to allow me to form any opinions on the topic.
tl;dr version: I skimmed it, and couldn’t wrap my head around it, so I’ll have to get back to you.
… To be an appeal to authority I would have to claim I was correct because some other person’s reputation says I am. So this is just you signalling you don’t care whether what you say is true; you merely wish to score points.
That you were upvoted tells me that others share this hostility to me.
This radically adjusts my views of LW as a community. Very, very negatively.
You are appealing to your authority regarding your mental states, degree of comprehension and reading history. This is why it is valid for you to simply assert them instead of us expecting you to provide us with internet histories and fMRI lie detector results. I am trying to point out how absurdly wrong your position on appeals to authority is. Detailed explanations had not succeeded so I hoped pointing out your use of valid appeals to authority would succeed. I desired karma to the extent that one always desires karma when commenting on Less Wrong.
This radically adjusts my views of LW as a community. Very, very negatively.
You are of course free to do this. But I suggest that before leaving you consider the possibility that you are wrong. Presumably at one point you found insight in this community and thought it was worth spending your time here. Did you at that time imagine you could write an accurate and insightful comment that would be voted to −8? Is it not possible that you don’t understand the reasons we’ve given for considering appeals to authority sometimes valid? Is it not possible that you misunderstand how “appeal to authority” is being used? Alternatively, is it not possible that you have not adequately explained your clever understanding of justification that prohibits appeals to authority? If you seriously consider these possibilities and cannot see how we could be responding to you rationally then you are probably right to hold a low opinion of us.
Presumably at one point you found insight in this community and thought it was worth spending your time here. Did you at that time imagine you could write an accurate and insightful comment that would be voted to −8?
In one day, in one thread, I have ‘lost’ roughly 70 ‘karma’. All from objecting to the notion that appeals to authority can be valid, and from my disparaging of Bayesian probabilism’s capability to make truth statements in a non-probabilistic fashion.
I expected better of you all, and I have learned my lesson.
For what it’s worth, as someone who has been reading your various exchanges without becoming involved in them (even to the extent of voting), I think your summary of the causes of that shift leaves out some important aspects.
That aside, though, I suspect your conclusion is substantively correct: karma-shift is a reasonable indicator (though hardly a perfectly reliable one) of how your recent behavior is affecting your reputation here, and if you expected that your recent behavior would improve your reputation your expectations were way misaligned with the reality of this site.
if you expected that your recent behavior would improve your reputation your expectations were way misaligned with the reality of this site.
Even assuming Logos was entirely correct about all his main points it would be bizarre to expect anything but a drastic drop in reputation in response to Logos’ recent behavior. This requires only a rudimentary knowledge of social behavior.
and if you expected that your recent behavior would improve your reputation your expectations were way misaligned with the reality of this site.
It’s a question of degree. I realized from the outset that I’m essentially committing heresy against sacred beliefs of this community. But I expected a greater capacity for rationality.
I’m a non-Bayesian with over 6000 karma. I’ve started discussion threads about problems with finding the right priors in a Bayesian context and have expressed skepticism that any genuinely good set of priors exists. Almost every time I post on something related to AGI it is to discuss reasons why I think fooming isn’t likely. I’m not signed up for cryonics and have made multiple comments discussing problems with it from both a strict utilitarian perspective and from a more general framework. When there was a surge of interest in bitcoins here I made a discussion thread pointing out a potentially disturbing issue with that. One of my very first posts here was arguing that phlogiston is a really bad example of an unfalsifiable theory, and I’ve made this argument repeatedly here, despite phlogiston being the go-to example here for a bad scientific theory (although I don’t seem to have had much success in convincing anyone).
I have over 6000 karma. A few days ago I had gained enough karma to be one of the top contributors in the last 30 days. (This signaled to me that I needed to spend less time here and more time being actually productive.)
It should be clear from my example that arguing against “sacred beliefs” here does not by itself result in downvotes. And it isn’t like I’ve had those comments get downvoted and balanced out by my other remarks. Almost all such comments have been upvoted. I therefore have to conclude that either the set of heresies here is very different than what I would guess or something you are doing is getting you downvoted other than your questioning of sacred beliefs.
It would not surprise me if quality of arguments and their degree of politeness matter. It helps to keep in mind that in any community with a karma system or something similar, high quality, polite arguments help more. Even on Less Wrong, people often care a lot about civility, sometimes more than logical correctness. As a general rule of thumb in internet conversations, high quality arguments that support shared beliefs in a community will be treated well. Mediocre or low quality arguments that support community beliefs will be ignored or treated somewhat positively. At the better places on the internet high quality arguments against communal beliefs will be treated with respect. Mediocre or low quality arguments against communal beliefs will generally be treated harshly. That’s not fair, but it is a good rule of thumb. Less Wrong is better than the vast majority of the internet but in this regard it is still roughly an approximation of what you would expect on any internet community.
So when one is trying to argue against a community belief, you need to be very careful to have your ducks in a row. Have your arguments carefully thought out. Be civil at all times. If something is not going well take a break and come back to it later. Also, keep in mind that aside from shared beliefs, almost any community has shared norms about communication and behavior, and these norms may have implicit elements that take time to pick up. This can result in harsh receptions unless one has either spent a lot of time in the community or has carefully studied the community. This can make the other issues mentioned above worse.
I’m a non-Bayesian with over 6000 karma. I’ve started discussion threads about problems with finding the right priors in a Bayesian context and have expressed skepticism that any genuinely good set of priors exists.
That’s a standard element of Bayesian discourse, actually. The notion I’ve been arguing for, on the other hand, fundamentally violates Bayesian epistemology. And yes, I haven’t been highly rigorous about it; but then, I’m also really not all that concerned about my karma score in general. I was simply noting it as demonstrative of something.
However, actively dishonest rhetorical tactics have been taken up ‘against’ me in this thread, and that is what I have reacted strongly negatively against.
That’s a standard element of Bayesian discourse, actually.
That’s an interesting notion which I’d be curious if the Bayesians here could comment on. Do you agree that discussion of whether good priors are even possible is standard Bayesian discourse?
However, actively dishonest rhetorical tactics have been taken up ‘against’ me in this thread, and that is what I have reacted strongly negatively against.
I haven’t seen any indications of that in this thread. I do however recall the unfortunate communication lapses that you and I apparently had in the subthread on anti-aging medicine, and it seems that some of the comments you made there fit a similar pattern of accusing people of “actively dishonest rhetorical tactics” (albeit less extreme in that context). Given that two similar issues have occurred on wildly different topics, there seem to be two different explanations: 1) There is a problem with most of the commentators at Less Wrong; 2) Something is occurring with the other common denominator of these discussions.
I know you aren’t a Bayesian so I won’t ask you to estimate the probabilities in these two situations. But let me ask a different question: If we took a neutral individual who has never gone to LW and isn’t a Bayesian, do you think you would bet at even odds that on reading this thread they would agree with you and think there’s a problem with the Less Wrong community? What about 2-1 odds? 3-1? If not, why not?
1) There is a problem with most of the commentators at Less Wrong; 2) Something is occurring with the other common denominator of these discussions.
Very gently worded. It is my current belief that both statements are true. I have never before so routinely encountered such difficulty in expressing my ideas and having them be understood, despite the fact that the inferential gap between myself and others is… well, larger than what I have ever witnessed between any other two people saving those with severely abnormal psychology. When I’m feeling particularly “existential” I sometimes worry about what that means about me.
On the other hand, I have also never before encountered a community whose dialogue was so deeply entrenched with so many unique linguistic constructs as LessWrong. I do not, fundamentally, disapprove of this: language shapes thought, after all. But this does also create a problem that if I cannot express my ideas within that patois, then I am not going to be understood—with the corollary that ideas which directly violate those construct’s patterns will be violently rejected as incomprehensible, “wrong”, or “confused”.
But let me ask a different question: If we took a neutral individual who has never gone to LW and isn’t a Bayesian, do you think you would bet at even odds that on reading this thread they would agree with you and think there’s a problem with the Less Wrong community? What about 2-1 odds? 3-1? If not, why not?
I am disappointed by this. Shifting the framework of the question from the personal perspective to the outside view does not substantively make it a different question.
On the other hand, I have also never before encountered a community whose dialogue was so deeply entrenched with so many unique linguistic constructs as LessWrong. I do not, fundamentally, disapprove of this: language shapes thought, after all. But this does also create a problem that if I cannot express my ideas within that patois, then I am not going to be understood—with the corollary that ideas which directly violate those construct’s patterns will be violently rejected as incomprehensible, “wrong”, or “confused”.
I’m not sure this is the case. At least it has not struck me as the case. There is a fair number of constructs here that are specific to LW and a larger set that while not specific to LW are not common. But in my observation this results much more frequently in people on LW not explaining themselves well to newcomers. It rarely seems to result in people not being understood or rejected as confused. The most likely way for that to happen is for someone to try to speak in the LW patois without internalizing the meaning of the terms.
I am disappointed by this. Shifting the framework of the question from the personal perspective to the outside view does not substantively make it a different question.
Not my intended point by the question. I wanted an outside view point in general and wanted your estimate on what it would be like. I phrased it in terms of a bet so one would not need to talk about any notion of probability but could just speak of what bets you would be willing to take.
I’m not sure this is the case. At least it has not struck me as the case.
To be frank, have you ever encountered a scenario which displays this phenomenon as vulgarly as this one?
The most likely way for that to happen is for someone to try to speak in the LW patois without internalizing the meaning of the terms.
Well, again; note—what I’m doing here is actually directly violating the framework of the “LWP”. Those who have internalized it, but are unfamiliar with my own framework, would have significant barriers to comprehension. And that is very frequently associated with all sorts of negative reactions—especially when by that framework I am clearly a very confused person who keeps asserting that I am not the one who’s confused here.
I wanted an outside view point in general and wanted your estimate on what it would be like.
I can see why you think it would be an interesting question. I have, however, no opinion or belief on the matter; it is a thoroughly uninteresting question to me.
To be frank, have you ever encountered a scenario which displays this phenomenon as vulgarly as this one?
I’m not at all convinced that that is what is going on here, and this doesn’t seem to be a very vulgar case if I am interpreting your meaning correctly. You seem to think that people are responding in a much more negative and personal fashion than they are.
Well, again; note—what I’m doing here is actually directly violating the framework of the “LWP”. Those who have internalized it, but are unfamiliar with my own framework, would have significant barriers to comprehension.
So the solution then is not to just use your own language and get annoyed when people fail to respond positively. The solution there is to either use a common framework (e.g. very basic English), or to carefully translate into the new language, or to start off by constructing a helpful dictionary. In general, it is socially rude and unlikely to be productive to go to any area of the internet where there’s a specialized vocab and not only not learn it but to use a different vocab that has overlapping words. I wouldn’t recommend this in a Dungeons and Dragons forum either.
I can see why you think it would be an interesting question. I have, however, no opinion or belief on the matter; it is a thoroughly uninteresting question to me.
This is unfortunate. It is a question that while uninteresting to you may help you calibrate what is going on. I would tentatively suggest spending a few seconds on the question before dismissing it.
In general, it is socially rude and unlikely to be productive to go to any area of the internet where there’s a specialized vocab and not only not learn it but to use a different vocab that has overlapping words. I wouldn’t recommend this in a Dungeons and Dragons forum either.
e.g. “d20 doesn’t mean a twenty-sided die, it refers to the bust and cup size of a female NPC!”
Well, again; note—what I’m doing here is actually directly violating the framework of the “LWP”.
Also the framework presented in “A Practical Study of Argument” by Govier—my textbook from my first year Philosophy class called “Critical Thinking”. It is actually the only textbook I kept from my first undergrad degree—definitely recommended for anyone wanting to get up to speed on pre-Bayesian rational thinking and argument.
Nonsense. This is exactly on topic. It isn’t my “Less Wrong Framework” you are challenging. When I learned about thinking, reasoning and fallacies, LessWrong wasn’t even in existence. For that matter, Eliezer’s posts on OvercomingBias weren’t either. Your claim that the response you are getting is the result of your violation of LessWrong-specific beliefs is utterly absurd.
So that justifies your assertion that I violate the basic principles of logic and argumentation?
What justifies my assertion that you violate the basic principles of logic and argumentation is Trudy Govier, “A Practical Study of Argument”, 4th edition—Chapter 5 (“Premises—What to accept and why”), page 138. Under the subheading “Proper Authority”.
For an explanation of when and why an appeal to authority is, in fact, fallacious see pages 141, 159 and 434. Or wikipedia. Either way my disagreement with you is nothing to do with what I learned on LessWrong. If I’m wrong it’s the result of my prior training and an independent flaw in my personal thinking. Don’t try to foist this off on LessWrong groupthink. (That claim would be credible if we were arguing about, say, cryonics.)
What justifies my assertion that you violate the basic principles of logic and argumentation is Trudy Govier, “A Practical Study of Argument”, 4th edition—Chapter 5 (“Premises—What to accept and why”), page 138. Under the subheading “Proper Authority”.
You are really going to claim that by logically arguing over what qualifies as a valid argument I violate the basic principles of argumentation and logic?
Just guessing from the chapter and subheading titles, but I’m pretty sure that bit of “A Practical Study of Argument” has to do with why arguments from authority are not always fallacious.
You are really going to claim that by logically arguing over what qualifies as a valid argument I violate the basic principles of argumentation and logic?
Then by all means enlighten me as to how it can be possible that merely by disagreeing with Govier on the topic of appeals to authority, and in doing so providing explanations based on deduction and induction, I “violate the basic principles of logic and argumentation”.
I don’t know why this hasn’t been done before: appeal to authority on Wikipedia.
As far as I can tell, this definition is what the rest of us are talking about, and it specifically says that appealing to authority only becomes a fallacy if a) the authority is not a legitimate expert, or b) it is used to prove the conclusion must be true. If you disagree with WP’s definition, could you lay out your own?
EDIT: Right, saying that without providing some context was probably a bad idea. I’m not trying to disparage Jack’s comment here; it’s of the same general form as an appeal to external authority, and I’d expect that to come across without saying so. But if you’re being extra super pedantic...
Hey, I didn’t downvote you. (I actually thought that stating “Bare assertion.” as a bare assertion was metahilarious, but I didn’t think it technically applied.)
I appreciate the clarification but what he said was not as bad as a bare assertion. In fact, what he said was not unsupported at all! He was speaking as the best authority we have here on Logos01’s comprehension and reading habits. A bare assertion would have been if he made a claim about which we have no reason to think he is an authority (like, say, the rules of inductive logic).
What, you can’t appeal to your own authority? What would you call “because I said so”?
A bare assertion, as Nornagest indicated. Also a form of fallacy. If I had done such a thing, that would be worthy of consideration here. I have not, so we can safely stop here.
Evidence itself can be mistaken. If your theory says an event is rare, and it happens, then that is evidence against the theory. If the theory is correct, it should be overwhelmed by evidence for the theory. If the statements of experts statistically correlate with reality, then you should update on the statements of experts until they are screened off by evidence/arguments you have looked at more directly or the statements of other experts.
You now state:
If you take it as a given that these things are true then what you are requesting is a testimonial as to what he witnessed, and treating him as a reliable witness, because you have supporting reasons as to why this is so. You are not, on the other hand, treating him as a blind oracle who simply guesses correctly, if you are anything remotely resembling a functionally capable instrumental rationalist.
This is a different claim to that contained in the former two quotes. I do not disagree with this claim, but it does not support your earlier claims, which are the claims that I was criticising.
I do not know of any human being who I would regard as “a blind oracle who simply guesses correctly”. The existence of such an oracle is extremely improbable. On the other hand it is very normal to believe that there are “supporting reasons” for why someone’s claims should change your degree of belief in something. For example if Yudkowsky has made 100 claims that were backed up by sound argument and evidence, that leads me to believe that the 101st claim he makes is more likely to be true than false, therefore increasing my degree of belief somewhat (even if only a little, sometimes) in that claim at the expense of competing propositions even before I read the argument.
It is also possible for someone’s claims to be reliably anti-correlated with the truth, although that would be a little strange. For example someone who generally holds accurate beliefs, but is a practical joker and always lies to you, might cause you to (insofar as you are rational) decrease your degree of belief in any proposition that he asserts, unless you have particular reason to believe that he is telling the truth on this one occasion.
You may have no particular regard for someone’s opinion. Nonetheless, the fact that a group of human beings regard that person as smart; and that he writes cogently; and even the mere fact that he is human – these are all “supporting reasons” such as you mentioned. Even if this doesn’t amount to much, it necessarily amounts to something (except in the vastly improbable case that the reasons for this opinion to correlate with the truth, and for it to anti-correlate with the truth, exactly cancel each other out).
I think what is tripping you up is the idea that you can be gamed by people who are telling you to “automatically believe everything that someone says”. But actually the laws of probability are theorems – not social rules or rules of thumb. If you do think you are being gamed or lied to then you might be rational to decrease your degree of belief in something that someone else claims, or else only update your belief a very small amount towards their belief. No-one denies that.
All that we are trying to explain to you is that your generalisations about not treating other people’s beliefs as evidence are wrong as a matter of probability theory. Belief is not binary – believe or not believe. It is a continuum, and updates in your degree of belief in a proposition can be large or tiny. You don’t have to flick from 0 to 1 when someone makes some assertion, but rationality requires you to make use of all the evidence available to you—therefore your degree of belief should change by some amount in some direction, which is usually towards the other person’s belief except in unusual circumstances.
Still, if you do in fact regard Yudkowsky’s (or anyone else’s) stated belief in some proposition as being completely uncorrelated with the truth of that proposition, then you should not revise your belief in either direction when you encounter that statement of his. Otherwise, failure to update on evidence – things that are entangled with what you want to know about – is irrational.
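As a sketch of the arithmetic being described (odds-form Bayes; the likelihood ratios are invented purely for illustration):

    def update_on_testimony(prior, lr):
        # Posterior odds = prior odds * likelihood ratio, where
        # lr = P(they assert X | X) / P(they assert X | not X).
        odds = prior / (1 - prior) * lr
        return odds / (1 + odds)

    # A mildly reliable speaker (lr = 2) nudges a 40% belief upward; a known
    # practical joker (lr = 0.5) pushes it down; a speaker whose assertions
    # are uncorrelated with the truth (lr = 1) moves it not at all.
    for lr in (2.0, 0.5, 1.0):
        print(lr, update_on_testimony(0.40, lr))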
That is called a figure of speech, my friend.
It is not cultish to praise someone highly.
There is nothing wrong with this in itself. But as has already been pointed out to you, an assertion and a link could be regarded as encouragement for you to seek out the arguments behind that assertion, which are written elsewhere. It might also be for the edification of other commenters and lurkers, who may be more impressed by Eliezer (therefore more willing to update on his beliefs).
Personally I don’t find the actual argument that started this little comment thread off remotely interesting (arguments over definitions are a common failure mode), so I shan’t get involved in that. But I hope you understand the point about updating on other people’s beliefs a little more clearly now.
This is accurate in the context you are considering—it is less accurate generally without an additional caveat.
You should also not revise your belief if you have already counted the evidence on which the expert’s beliefs depend.
(Except, perhaps, for a very slight nudge as you become more confident that your own treatment of the evidence did not contain an error, but my intuition says this is probably well below the noise level.)
What if Yudkowsky says that he believes X with probability 90%, and you only believe it with probability 40% - even though you don’t believe that he has access to any more evidence than you do?
Also, I agree that in practice there is a “noise level”—i.e. very weak evidence isn’t worth paying attention to—but in theory (which was the context of the discussion) I don’t believe there is.
Then perhaps I have evidence that he does not, or perhaps our priors differ, or perhaps I have made a mistake, or perhaps he has made a mistake, or perhaps both. Ideally, I could talk to him and we could work to figure out which of us is wrong and by how much. Otherwise, I could consider the likelihood of the various possibilities and try to update accordingly.
For case 1, I should not be updating; I already have his evidence, and my result should be more accurate.
For case 2, I believe I should not be updating, though if someone disagrees we can delve deeper.
For cases 3 and 5, I should be updating. Preferably by finding my mistake, but I can probably approximate this by doing a normal update if I am in a hurry.
For case 4, I should not be updating.
In theory, there isn’t. My caveat regarding noise was directed to anyone intending to apply my parenthetical note to practice—when we consider a very small effect in the first place, we are likely to over-weight it.
I think we are basically in agreement.
What you are saying is that insofar as we know all of the evidence that has informed some authority’s belief in some proposition, his making a statement of that belief does not provide additional evidence. I agree with that, assuming we are ignoring tiny probabilities that are below a realistic “noise level”.
As you said this is not particularly relevant to the case of someone appealing to an authority during an argument, because their interlocutor is unlikely to know what evidence this authority possesses in the large majority of cases. But it is a good objection in general.
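A toy sketch of the double-counting failure mode being guarded against here (the likelihood ratio is invented):

    def posterior(prior, *likelihood_ratios):
        odds = prior / (1 - prior)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1 + odds)

    shared_evidence = 9.0  # hypothetical strength of the data both parties saw

    # Counting the shared data once is correct; treating the expert's echo of
    # it as fresh evidence counts the same data twice and inflates confidence.
    print(posterior(0.5, shared_evidence))                   # 0.9
    print(posterior(0.5, shared_evidence, shared_evidence))  # ~0.988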
You are mistaken. It is exactly the same claim, rephrased to match the circumstances at most—containing exactly the same assertion: “Authorities on a given topic can be mistaken; and as such no appeal to an authority can be, absent any other actual supporting materials, treated as a rational motive for belief. In all such cases, it is those supporting materials which comprise the nature of the argument.” In the specific case you are taking as supporting evidence your justified belief that the individual had in fact directly observed the event, and the justified belief that the individual would relate it to you honestly. Those, together, comprise justification for the belief.
An insufficient retraction.
What you did wasn’t praise.
See, THIS is why I called you cultish. Do you understand that the quote that was cited to me wasn’t even relevant contextually in the first place? I had already differentiated between proper rationality and instrumental rationality.
The quote of Eliezer’s was discussing instrumental rationality.
I even pointed this out.
No, I do not. But that’s because I wasn’t remotely confused about the topic to begin with, and have throughout these threads demonstrated a finer capacity to differentiate between the various modes and justifications of belief being engaged than anyone who has yet argued the topic in these threads with me, yourself included.
This conversation has officially reached my limit of investment, so feel free to get your last word in, but don’t be surprised if I never read it.
So in other words, you would like to distinguish between “appeal to authority” and “supporting materials” as though when someone refers you to the sayings of some authority, they expect you to consider these sayings as data purely “in themselves”, separately from whatever reasons you may have for believing that the sayings of that authority are evidentially entangled with whatever you want to know about.
Firstly, “appeal to authority” is generally taken simply to mean the act of referring someone to an authority during an argument about something – there is no connotation that this authority has no evidential entanglement with the subject of the argument, which would be bizarre (why would anyone refer you to him in that case?)
Secondly, if someone makes a statement about something, that in itself implies that there is evidential entanglement between that thing and their statement – i.e. the thing they are talking about is part of the chain of cause and effect (however indirect) that led to the person eventually making a statement about it (otherwise we have to postulate a very big coincidence). Therefore the idea that someone could make a statement about something without there being any evidential entanglement between them and it (which is necessary in order for it to be true that you should not update your belief at all based on their statement) is implausible in the extreme.
You started off by using “appeal to authority” in the normal way, but now you are attempting to redefine it in a nonsensical way so as to avoid admitting that you were mistaken (NB: there is no shame in being mistaken).
If you have read Harry Potter and the Methods of Rationality, you may remember the bit where Quirrell demonstrates “how to lose” as an important lesson in magical combat. Correspondingly, in future I would advise you not to create edifices of nonsense when it would be easier just to admit your mistake. Debate is after all a constructive enterprise, not a battle for tribal status in which it is necessary to save face at all costs.
I’ll say again that the error that is facilitating your confusion about beliefs and evidence is the idea that belief is a binary quality – I believe, or I do not believe – when in fact you believe with probability 90%, or 55%, or 12.2485%. The way that the concept of belief and the phrase “I believe X” is used in ordinary conversation may mislead people on this point, but that doesn’t change the facts of probability theory.
This allows you to think that the question is whether you are “justified” in believing proposition X in light of evidence Y, when the right question to be asking is “how has my degree of belief in proposition X changed as a result of this evidence?” You are reluctant to accept that someone’s mere assertions can be evidence in favour of some proposition because you have in mind the idea that evidence must always be highly persuasive (so as to change your belief status in a binary way from “unjustified” to “justified”), otherwise it isn’t evidence – whereas actually, evidence is still evidence even if it only causes you to shift your degree of belief in a proposition from 1% to 1.2%.
See also
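To put a number on that last example, here is a minimal sketch of an update on weak evidence; the likelihood ratio of 1.2 is an invented, illustrative figure, not anything asserted in the thread:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
def update(prior, likelihood_ratio):
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(update(0.01, 1.2))  # ~0.012: the 1% belief shifts to about 1.2%
```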
“there is no connotation that this authority has no evidential entanglement with the subject of the argument”—quite correct. Which is why it is fallacious: it is an assertion that this is the case without corroborating that it actually is the case.
If it were the case, then the act would not be an ‘appeal to authority’.
This is categorically invalid. Humans are not Bayesian belief-networks. In fact, humans are notoriously poor at assessing their own probabilistic estimates of belief. But this, really, is neither here nor there.
The only error that’s occurring here is your continued belief that beliefs are relevant to this conversation. They simply aren’t. We’re not discussing “what should you believe”—we are discussing “what should you hold to be true.”
And that, sir, categorically IS binary. A thing is either true or not true. If you affirm it to be true you are assigning it a fixed binary state.
You have a point in saying that “12.2485%” is an unlikely number to give to your degree of belief in something, although you could create a scenario in which it is reasonable (e.g. you put 122485 red balls in a bag...). And it’s also fair to say that casually giving a number to your degree of belief is often unwise when that number is plucked from thin air—if you are just using “90%” to mean “strong belief” for example. The point about belief not being binary stands in any case.
Those are one and the same! If that’s the real source of your disagreement with everyone here, it’s a doozy.
Perhaps this will help make the point clear. In fact I’m sure it will—it deals with this exact confusion. Please, if you don’t read any of these other links, look at that one!
No, they are not. They are fundamentally different. One is a point in a map. The other is a statement regarding the correlation of that map to the actual territory. These are not identical. Nor should they be.
As I have stated elsewhere: Bayesian ‘probabilistic beliefs’ discount too greatly the ability of the human mind to make assertions about the nature of the territory.
The first time I read that page was roughly a year and a half ago.
I am not confused.
I am telling you something that you cannot digest; but nothing you have said is inscrutable to me.
These three things together should tell you something. I won’t bother trying to repeat myself about what.
To be blunt, they tell us you are arrogant, ill-informed, and sufficiently entrenched in your habits of thought that trying to explain anything to you is almost certainly a waste of time even if meant well. Boasting of the extent of your knowledge of the relevant fundamentals while simultaneously blatantly displaying your ignorance of the same is probably the worst signal you can give regarding your potential.
“You don’t understand that article”.
That is not something of which to be proud.
I am afraid, simply put, you are mistaken. I can reveal this simply: what is the process by which a Bayesian rationalist can derive a belief about an event which is not expressible in probabilistic form?
Depends on what it is that is being said and how. Unfortunately, in this case, we have each been using very simple language to achieve our goals—or, at least, equivalently comprehensible statements. Despite my best efforts to inform you of what it is you are not understanding, you have failed to understand it. I have, contrastingly, from the beginning understood what has been said to me—and yet you continue to believe that I do not.
This is problematic. But it is also indicative that the failure is simply not on my part.
If you understand something, you should be able to describe it in such a way that other people who understand it will agree that your description is correct. Thus far you have consistently failed to do this.
I am not confident that I understand your position yet. I don’t think you have made it very clear. But you have made it very clear that you think you understand Bayesian reasoning, but your understanding of how it works does not agree with anyone else’s here.
Not, sadly, for any particular point of actual fact. The objections and disagreements have all been consequential, rather than factual, in nature. I have, contrastingly, repeatedly been accused of committing myself to fallacies I simply have not committed (to name one specifically, the ‘Fallacy of Grey’).
Multiple individuals have objected to my noting the consequences of the fact that Bayesian rationality belief-networks are always maps and never assertions about the territory. And yet, this fact is a core element of Bayesian rationality.
Quite frankly, the only reason this is so is because no one wants to confront the necessary conclusions resultant from my assertions about basic, core principles regarding Bayesian rationality.
Hence the absence of a response to my previous challenge: “what is the process by which a Bayesian rationalist can derive a belief about an event which is not expressible in probabilistic form?” (This is, of course, a “gotcha” question. There is no such process. Which is absolute proof of my positional claim regarding the flaws inherent in Bayesian rationality, and is furthermore directly related to the reasons why probabilistic statements are useless in deriving information regarding a specific manifested instantiation.)
Frankly, I have come to despair of anyone on this site doing so. You folks lack the epistemological framework necessary to do so. I have attempted to relate this repeatedly. I have attempted to direct your thoughts in dialogue to this realization in multiple different ways. I have even simply spelled it out directly.
I have exhausted every rhetorical technique I am aware of to rephrase this point. I no longer have any hope on the matter, and frankly the behaviors I have begun to see out of many of you who are still in this thread have caused me to very seriously negatively reassess my opinion of this site’s community.
For the last year or so, I have been proselytizing this site to others as a good resource for learning how to become rational. I am now unable to do so without such heavy qualifiers that I’m not even sure it’s worth it.
What other people are telling you is that your representation of Bayesian reasoning is incorrect, and that you are misunderstanding them. I suggest that you try to lay out as clear and straightforward an explanation of Bayesian reasoning as you can. If other people agree that it is correct, then we will take your claims to be understanding us much more seriously. If we still tell you that you are misunderstanding it, then I think you should consider seriously the likelihood that you are, in fact, misunderstanding it.
If you do understand our position you should be capable of this, even if you disagree with it. I suggest leaving out your points of disagreement in your explanation; we can say if we agree that it reflects our understanding, and if it does, then you can tell us why you think we are wrong.
I think we are talking past each other right now. I have only low confidence that I understand your position, and I also have very low confidence that you understand ours. If you can do this, then I will no longer have low confidence that you understand our position, and I will put in the effort to attain sufficient understanding of your position (something that you claim to already have of ours) that I can produce an explanation of it that I am confident that you will agree with, and then we can have a proper conversation.
A core element of Bayesian reasoning is that the map is not the territory.
Bayesian reasoning formulates beliefs in the form of probability statements, which are predictive in nature. (See: “Making beliefs pay rent”)
Bayesian probabilities are fundamentally statements of a predictive nature, rather than statements of actual occurrence. This is the basis of the conflict between Bayesians and frequentists. Further corroboration of this point.
The language of Bayesian reasoning regarding beliefs is that of expressing beliefs in probabilistic form, and updating those within a network of beliefs (“givens”) which are each informed by the priors and new inputs.
These four points together are the basis of my assertion that Bayesian rationality is extremely poor at making assertions about what the territory actually is. This is where the epistemological barrier is introduced: there is a difference between what I believe to be true and what I can validly assert is true. The former is a predictive statement. The latter is a material, instantiated, assertion.
No matter how many times a coin has come up heads or tails, that information has no bearing on what position the coin under my hand is actually in. You can make statements about what you believe it will be; but that is not the same as saying what it is.
THIS is the failure of Bayesian reasoning in general; and it is why appeals to authority are always invalid.
Well, I do not agree that that reflects an accurate understanding of our position.
Are you prepared to guess which parts I will take issue with?
Frankly, no. Especially since I derived each and every one of those four statements from canonical sources of explanations of how Bayesian rationality operates (and from LessWrong itself, no less.)
So if you care to disagree with the Sequences, or the established doctrine of how Bayesian reasoning operates as related in such places as Yudkowsky’s Technical Explanation of Technical Explanation and his Intuitive Explanation of Bayesian Reasoning or Wikipedia’s various articles on the topic such as Bayesian Inference—well, you’re more than welcome to do so.
I’m more than willing to admit that I might find it interesting. I strongly anticipate, however, that it will be quite unpersuasive as to the notion of my having a poor grasp of how Bayesian reasoning operates.
Bayesian probabilities are predictive and statements of occurrence. To the extent that frequentist statements of occurrence are correct, Bayesian probabilities will always agree with them.
I would not take issue with this if not in light of the statement that you made after it. It’s true that Bayesian probability statements are predictive, we can reason about events we have not yet observed with them. They are also descriptive; they describe “rates of actual occurrence” as you put it.
It seems to me that you are confused about Bayesian reasoning because you are confused about frequentist reasoning, and so draw mistaken conclusions about Bayesian reasoning based on the comparisons Eliezer has made. A frequentist would, in fact, tell you that if you flipped a coin many times, and it has come up heads every time, then if you flip the coin again, it will probably come up heads. They would do a significance test for the proposition that the coin was biased, and determine that it almost certainly was. There is, in fact, no school of probability theory that reflects the position you have been espousing so far. You seem to be contrasting “predictive” Bayesianism with “non-predictive” frequentism, arguing for a system that allows you to totally suspend judgment on events until you make direct observations of those events. But while frequentist statistics fails to let you assign probabilities to events when you have no body of data showing how often such events tend to occur, it does provide predictions about the probability of events based on known frequency data, and when a large body of data for the frequency of an event is available, the frequentist and Bayesian estimates for its probability will tend to converge.
I agree with JoshuaZ that the second part of your comment does not appear to follow from the first part at all, but if we resolve this confusion, perhaps things will become clearer.
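To make the convergence point concrete, here is a minimal sketch of the standard Bayesian calculation for a coin that has come up heads 100 times running, assuming a uniform Beta(1,1) prior (the counts and the prior are illustrative, not anything asserted in the thread):

```python
# After 100 heads and 0 tails, a Bayesian with a uniform Beta(1,1) prior
# predicts heads on the next flip with high probability.
from fractions import Fraction

heads, tails = 100, 0
alpha, beta = 1 + heads, 1 + tails            # Beta posterior parameters
p_next_heads = Fraction(alpha, alpha + beta)  # Laplace's rule of succession
print(p_next_heads)  # 101/102, roughly 0.99: heads next time is the safe bet
```

A frequentist significance test on the same data would likewise reject the fair-coin hypothesis, which is the convergence described above.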
Yes, they are. But what they cannot be is statements regarding the exact nature of any given specific manifested instantiation of an event.
That is predictive.
I can dismiss this concern for you: while I’ve targeted Bayesian rationality here, all of my meaningful assertions would apply just as well to frequentism.
I don’t know that it’s possible for that to occur until such time as I can discern a means of helping you folks to break through your epistemological barriers of comprehension (please note: comprehension is not equivalent to concurrence: I’m not asserting that if you understand me you will agree with me). Try following a little further where the dialogue currently is with JoshuaZ, perhaps? I seem to be beginning to make headway there.
Can you explain what you mean by this as simply as possible?
I know what all those words mean, but I can’t tell what you mean by them, and “specific manifested instantiation of an event” does not sound like a very good attempt to be clear about anything. If you want your intended audience to understand, try to tailor your explanation for people much dumber and less knowledgeable than you think they are.
Yup. There’s a deep inferential gap here, and I’m trying to relate it as best I can. I know I’m doing poorly, but a lot of that has to do with the fact that the ideas that you are having trouble with are so very simple to me that my simple explanations make no sense to you.
Specific: relating to a unique case.
Manifested: made to appear or become present.
Instantiation: a real instance or example
-- a specific manifested instantiation thus is a unique, real example of a thing or instance of an idea or event that is currently present and apparent. I sacrifice simplicity for precision in using this phrase; it lacks any semantic baggage aside from what I assign to it here, and this is a conversation where precision is life-and-death to comprehension.
What is the difference between “There is a 99.9999% chance he’s dead, and I’m 99.9999% confident of this, Jim” and “He’s dead, Jim.”?
Brevity.
As I said; “epistemological barriers of comprehension.”
You were asked to explain a statement of yours as simply as possible.
You responded with a hypothetical question.
You received an answer, apparently not the one you were looking for.
You congratulated yourself on being unclear.
Acknowledging failure is in no wise congratulatory.
Of course not. By asking the question I am implying that there is, in fact, a difference. This was in an effort to help you understand me. You in particular responded by answering as you would. Which does nothing to help you overcome what I described as the epistemological barrier of comprehension preventing you from understanding what I’m trying to relate.
To me, there is a very simple, very substantive, and very obvious difference between those two statements, and it isn’t wordiness. I invite you to try to discern what that difference is.
You come across as rather condescending. Consider that this might not be the most effective way to get your point across.
I am powerless over what you project into the emotional context of my statements, especially when I am directly noting a point of fact with no emotive terminology used whatsoever.
There was a temptation to just reply with “No. You are not.” But that’s probably less than helpful. Note that language is a subtle thing. Even when someone is making a purely factual claim, how they make it, especially when it involves assertions about other people, can easily carry negative emotional content. Moreover, careful, diplomatic phrasing can be used to minimize that problem.
Certainly. But in this case my phrasing was such that it was devoid of any emotive content outside of what the reader projects.
An explanation which is as simple as possible != an exercise left to the reader.
That is not necessarily true. The simplest explanation is whichever explanation requires the least effort to comprehend. That, categorically, can take the form of a query / thought-exercise.
In this instance, I have already provided the simplest direct explanation I possibly can: probabilistic statements do not correlate to exact instances. I was asked to explain it even more simply yet. So I have to get a little non-linear.
So how would you answer what the difference is between the two statements?
The former is a statement of belief about what is. The latter is a statement of what actually is.
So if you were McCoy would you ever say “He’s dead, Jim”?
Naively, yes.
So this confuses me even more. Given your qualification of what pedanterrific said, this seems off in your system. The objection you have to the Bayesians seems to be purely in the issues about the nature of your own map. The red-shirt being dead is not a statement in your own map.
If you had said no here I think I’d sort of see where you are going. Now I’m just confused.
I’ll elaborate.
Naively: yes.
Absolutely: no.
Recall the epistemology of naive realism:
Ok. Most of the time, when McCoy determines that someone is dead, he uses a tricorder. Do you declare him dead when you have an intermediary object other than your senses?
Naive realism does not preclude instrumentation.
So do tricorders never break?
My form of naive realism is contingent, not absolutist. It is perfectly acceptable to find myself to be in error. But … there are certain forms of dead that are just plain dead. Such as the cat I just came home to discover was in rigor mortis an hour ago.
Can we change topics, please?
My condolences. Would you prefer if we use a different example? The upshot is that the Bayesian agrees with you that there is a reality out there. If you agree that one can be in error then you and the Bayesian aren’t disagreeing here.
The only thing we seem to disagree on is how to formulate statements of belief. As I wrote elsewhere:
What I am espousing is the use of contingent naive realism for handling assertions regarding “the territory” coincident with Bayesian reasoning regarding “the map”.
I strongly agree that Bayesian reasoning is a powerful tool for making predictive statements, but I still affirm the adage that the number of times a well-balanced coin has come up heads has no bearing on whether it will actually come up heads on the next trial.
But even here the Bayesian agrees with you if the coin is well-balanced.
If that is the case then the only disagreement you have is a matter of language not a matter of philosophy. This confuses me because the subthread about knowledge about one’s own internal cognitive processes seemed to assert a difference in actual philosophy.
I was making a trivial example. It gets more complicated when we start talking about the probabilities that a specific truth claim is valid.
I’m not exactly sure how to address this problem. I do know that it is more than just a question of language. The subthread you’re talking about was dealing with the existence of knowable truth, and I was using absolute truths for that purpose.
There are, necessarily, a very limited number of knowable absolute truths.
I do note that your wording on this instance, “knowledge about one’s own internal cognitive processes”, indicates that I may still be failing to achieve sufficient clarity in the message I was conveying in that thread. To reiterate: my claim was that it is absolutely true that when you are cognizant of a specific thought, you know that you are cognizant of that thought. In other words: you know absolutely that you are aware of your awareness. This is tautological to self-awareness.
If I say “he’s dead,” it means “I believe he is dead.” The information that I possess, the map, is all that I have access to. Even if my knowledge accurately reflects the territory, it cannot possibly be the territory. I see a rock, but my knowledge of the rock is not the rock, nor is my knowledge of my self my self.
The map is not the territory, but the map is the only one that you’ve got in your head. “It is raining” and “I do not believe it is raining” can be consistent with each other, but I cannot consistently assert “it is raining, but I do not believe that it is raining.”
Ah, but is your knowledge of your knowledge of your self your knowledge of your self?
You know, for a moment until I saw who it was who was replying, I had a brief flash of hope that this conversation had actually started to get somewhere.
This is starting to hurt my feelings, guys. This is my earnest attempt to restate this.
Put another way: the statement “the map is not the territory” does not have a confidence value of .99999whatever. It is a statement about the map, which is itself a territory, and is simply true.
(I may, of course, be mistaken about Logos’ epistemology, in which case I would appreciate clarification.)
Well, you can take “the map is not the territory” as a premise of a system of beliefs, and assign it a probability of 1 within that system, but you should not assign a probability of 1 to the system of beliefs being true.
Of course, there are some uncertainties that it makes essentially no sense to condition your behavior on. What if everything is wrong and nothing makes sense? Then kabumfuck nothing, you do the best with what you have.
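A minimal numeric sketch of that point, with invented credences (none of these numbers come from the thread):

```python
# Probability 1 *inside* a belief system does not make the unconditional
# probability 1; it has to be weighted by the credence in the system itself.
p_system = 0.99           # credence that the belief system is right (invented)
p_claim_given_system = 1.0
p_claim_given_not = 0.5   # if the system is wrong, all bets are off (invented)
p_claim = (p_claim_given_system * p_system
           + p_claim_given_not * (1 - p_system))
print(p_claim)  # 0.995, not 1
```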
First, I want to note the tone you used. I have not so disrespected you in this dialogue.
Second: “Put another way: the statement “the map is not the territory” does not have a confidence value of .99999whatever. It is a statement about the map, which is itself a territory, and is simply true.”
There is some truth to the claim that “the map is a territory”, but it’s not really very useful. Also, while it is demonstrable that the map is not the territory, it is not tautological and thus requires demonstration.
I am, however, comfortable stating that any specific individual “map” which has been so demonstrated to be a territory of its own without being the territory for which it is mapping is, in truth, a territory of its own (caveat: thar be recursion here!), and that this is expressible as a truth claim without the need for probability values.
First, I have no idea what you’re talking about. I intended no disrespect (in this comment). What about my comment indicated to you a negative tone? I am honestly surprised that you would interpret it this way, and wish to correct whatever flaw in my communicatory ability has caused this.
Second:
Okay, now I’m starting to get confused again. ‘Some’ truth? And what does ‘useful’ have to do with it?
It… seems tautological to me...
So, um, have I understood you or not?
The way I read “This is starting to hurt my feelings, guys.” was that you were expressing it as a miming of myself, as opposed to being indicative of your own sentiments. If this was not the case, then I apologize for the projection.
Whether or not a truth claim is valid is binary. How far a given valid claim extends is quantitative rather than qualitative, however. My comment about usefulness has more to do with the utility of the notion (i.e., a true-and-interesting notion is more “useful” than a true-and-trivial notion).
Tautological would be “The map is a map”. Statements that depend upon definitions for their truth value approach tautology but aren’t necessarily so.
¬A ≠ A is tautological, as is A = A. However, (B ⇔ A) → (¬A = ¬B) is definitionally true. So: “the map is not the territory” is definitionally true (and by its definition it is demonstrated). However, it is possible for a conceptual territory that the territory is the map. (I am aware of no such instance, but it conceptually could occur.) This would, however, require a different operational definition of what a “map” is from the context we currently use, so it would actually be a different statement.
“I’m comfortable saying ‘close enough for government work’”
was a direct (and, in point of fact, honest) response to
Though, in retrospect, this may not mean what I took it to mean.
Agreed.
Ah, ok.
:)
This is going nowhere. I am intimately familiar with the epistemological framework you just related to me, and I am trying to convey one wholly unlike it to you. Exactly what good does it do this dialogue for you to continue reiterating points I have already discussed and explained to you ad nauseam?
But you know that you ‘see’ a rock. And that IS territory, in the absolute sense.
Unless you are using a naive realist epistemological framework. In which case it means “he’s dead” (and can be either true or not true.)
My understanding of what you are saying is that I should assign a probability of 1 to the proposition that I experience the quale of rock-seeing.
I do not accept that this is the case. There are alternatives, although they do not make much intuitive sense. It may seem to me that I cannot think I have an experience, and not have it, but maybe reality isn’t even that coherent. Maybe it doesn’t follow for reasons that are fundamentally impossible for me to make sense of. Also maybe I don’t exist.
I cannot envision any way in which I could plausibly come to believe that I probably don’t experience what I believe I experience, or that I don’t exist, but if I started getting evidence that reality is fundamentally incoherent, I’d definitely have to start judging it as more likely.
I am comfortable agreeing with this statement.
I do not accept that this is the case. Postulating counterfactuals does not validate the possibility of those counterfactuals (this is just another variation of the Ontological Argument, and is as invalid as is the Ontological Argument itself, for very similar reasons.)
This is a question which reveals why I have the problems I do with the epistemology embedded in Bayesian reasoning. It is very similar to the fact that I’m pretty sure you could agree with the notion that it is conceptually plausible that, if reality were ‘fundamentally incoherent’, the statement ¬A = A could be true. The thing is, it isn’t possible for such incoherence to exist. They are manifestations of definitions; the act of being so defined causes them to be proscriptive restrictions on the behavior of all that which exists.
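As a mechanical check, within ordinary two-valued logic (which is all this sketch assumes), ¬A = A indeed has no satisfying assignment:

```python
# The "statement" ¬A = A fails for every classical truth value.
for A in (True, False):
    print(A, (not A) == A)  # False for both values
```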
For what it’s worth, I also hold that the Logical Absolutes are in fact absolute, though I also assert that this is essentially meaningless.
How do you know?
It seems pretty reasonable to me that A must always equal A, but if I suddenly found that the universe I was living in ran on dream logic, and 2+2 stopped equaling the same thing every time, I’d start viewing the notion with rather more skepticism.
Because they are functions of definition. Altering the definition invalidates the scenario.
Eliezer used a cognitive trick in this case: he postulated a counterfactual and treated that postulation as sufficient justification to treat the counterfactual as a possibility. This is not justified.
If you state that A is A by definition, then fine, you’ve got a definition that A=A. But that doesn’t mean that the definition must be manifest. Things cannot be defined into reality.
Now, I believe that A=A is a real rule that reality follows, with very high confidence. I am even willing to accept it as a premise within my system of beliefs, and assign it a probability of 1 in the context of that system of beliefs, but I can’t assign a probability of 1 to that system of beliefs.
It’s possible contingent on things that we have very strong reason to believe simply being wrong.
The postulation does not make it a possibility. It was a possibility without ever being postulated.
The postulation “Maybe time isn’t absolute, and there is no sense in which two events in different places can objectively be said to take place at the same time” would probably have been regarded by most, a thousand years ago, as impossible. People would respond that that simply isn’t what time is. But from what we can see today, it looks like reality really does work like that, and any definition of “time” which demands absolute simultaneity exist simply doesn’t correspond to our universe.
In any case, I don’t see how you get from the premise that you can have absolute certainty about your experience of qualia to the position that you can’t judge how likely it is that the Sportland Sports won the game last night based on Neighbor Bob’s say so.
Another epistemological breakdown may be about to occur here. I separate “real” from “existant”.
That which is real is any pattern by which that which exists is proscriptively constrained.
That which exists is any phenomenon which directly interacts with other phenomena.
I agree with you that you cannot define things into existence. But you most assuredly can define things into reality. The logical absolutes are examples of this.
Yup. But that’s a manifestation of the definition, and nothing more. If A = A had never been defined, no one would bat an eye at ¬A = A. But that statement would bear no relation to the definitions supporting the assertion A = A.
We’re talking at cross purposes here. Any arbitrary counterfactual may or may not be possible. The null hypothesis is that it is not. Until there is justification for the assertion, the mere presentation of a counterfactual is insufficient cause/motive to accept the assertion that it is possible.
Agreed. But time isn’t “definitional” in nature; that is, it is more than a mere product of definition. We define ‘time’ to be an existing phenomenon with specific characteristics—and thus the definition of how time behaves is constrained by the actual patterns that existing phenomenon adheres to.
The number 3 on the other hand is defined as itself, and there is no existing phenomenon to which it corresponds. We can find three instances of a given phenomenon, but that instantiated collection should not be conflated with an instantiation of the number three itself.
You can’t. Conversations—and thus, topics—evolve over time, and this conversation has been going on for over 24 hours now.
Things that have been believed impossible have sometimes been found to be true. Just because you do not yet have any reason to suspect that something is true does not mean that you can assign zero probability to its truth. If you did assign a proposition zero probability, no amount of justification could ever budge you from zero probability; whatever evidence or arguments you received, it would always be infinitely more likely that they came about from some process unconnected to the truth of the proposition than that the proposition was true.
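A minimal sketch of why a prior of exactly zero is unrecoverable (the likelihoods here are invented; only the zero prior matters):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E). With P(H) = 0 the numerator
# is zero for any evidence E, so the posterior stays zero forever.
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

print(posterior(0.0, 0.99, 0.01))  # 0.0, however strong the evidence for H
```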
I’m getting rather tired of having to start this conversation over yet again. Do you understand my meaning when I say that until such time as a given counterfactual is provided with justification to accept its plausibility, the null hypothesis is that it is not?
I believe that I understand you, but I don’t think that is part of an epistemology that is as good at forming true beliefs and recognizing false ones.
You can label certain beliefs “null hypothesis,” but it will not correspond to any consistent likelihood that they will turn out to be true.
As a social prescription, you can call certain claims “justified knowledge,” based on how they are arrived at, and not call other claims about reality “justified knowledge” because the evidence on which they were arrived at does not meet the same criteria. And you can call it an “intellectual failing” for people to not acquire “justified knowledge” about things that they reason about. But not all “justified knowledge” will be right, and not all claims that are not “justified knowledge” will be wrong. If you had a computer that could process whatever information you put into it, and tell you exactly how confident you ought to be about any given claim given that information, and have the claims be right exactly that often, then the category of “justified belief” would not correspond to any particular range of likelihood.
This sounds like a point where you would complain that I am confusing “aggregated average” with “manifested instantiation.” We can accept the premise that the territory is the territory, and a particular event either happened or it did not. So if we are talking about a particular event, should we be able to say that it actually, definitely happened, because that’s how the territory works? Well, we can say that, but then we start observing that some of the statements we make about things that “definitely happened” turn out to be wrong. If we accept as a premise in our epistemologies that the territory is what it is, we can accept that the statements we make about the territory are either right, absolutely, or wrong, absolutely. But we don’t know which are which, and “justified knowledge” won’t correspond to any particular level of confidence.
If you have an epistemology that lets you label some things “absolutely justified knowledge” that are actually wrong, and tells you that you can’t predict the likelihood of some things when you can actually make confidence predictions about them and be right as often as you say you should be, I think you’ll have a hard time selling that it’s a good epistemology.
The use of null hypotheses is a pivotal tool for scientific endeavor and rational-skepticism in general. If you object to this practice, feel free to do so, but please note that what you’re objecting to is a necessary condition of the process of forming beliefs.
I do not agree with the use of “justified knowledge” so ubiquitously. Furthermore, I have tried to establish a differentiation between belief claims and knowledge claims. The latter are categorically claims about “the territory”; the former are claims of approximation of “your map” to “the territory”.
Being able to readily differentiate between these two categories is a vital element for ways of knowing.
I don’t know, at this point, exactly where this particular thread is in the nested tree of various conversations I’m handling. But I can tell you that at no time have I espoused any form of absolutism that allows for non-absolute truths to be expressed as such.
What I have espoused, however, is the use of contingent naive realism coincident with Bayesian reasoning—while understanding the nature of what each “tool” addresses.
Did you reach this conclusion by testing the hypothesis that the use of null hypotheses is not a pivotal tool for scientific endeavor or rational-skepticism in general against collected data, and rejecting it?
Rational-skepticism doesn’t necessarily operate experimentally and I’m sure you know this.
That being said; yes, yes I did. Thought-experiment, but experiment it is.
Yes, I am aware of how scientists use null hypotheses. But the null hypothesis is only a construct which exists in the context of significance testing, and significance testing does not reliably get accurate results when you measure small effect sizes; it also gives you different probabilities based on the same data depending on how you categorize that data; Eliezer already gave an example of this in one of the pages you linked earlier. The whole reason that people on this site generally have enthusiasm for Bayesian statistics is that we believe they do better, indeed as well as it is possible to do.
The concept of a “null hypothesis,” as it is used in frequentist statistics, doesn’t even make sense in the context of a system where, as you have claimed, we cannot know that it is more likely that a coin will come up heads on the basis of it having come up heads the last hundred times you’ve flipped it.
If I say “it is raining out,” this is a claim about the territory, yes. If I say “I believe that it is raining out,” this is a claim about my map (and my map exists within, and is a part of, the territory, so it is also a claim “about the territory,” but not about the same part of the territory.)
But my claim about the territory is not certain to be correct. If I say “it is raining out, 99.99999% confidence,” which is the sort of thing I would mean when I say “it is raining out,” that means that I am saying that the territory probably agrees with this statement, but the information in my map does not allow me to have a higher confidence in the agreement than that.
A null hypothesis is a particularly useful simplification not strictly a necessary tool.
So I understand you—you are here claiming that it is not necessary to have a default position in a given topic?
I am saying it is not strictly necessary to have a hypothesis called “null” but rather that such a thing is just extremely useful. I would definitely say the thing about defaults too.
That’s not what a null hypothesis is. A null hypothesis is a default state.
I’m curious; how could a situation be arranged such that an individual has no default position on a given topic? Please provide such a scenario—I find my imagination insufficient to the task of authoring such a thing.
If you tell me you’ve flipped a coin, I may not have any default position on whether it’s heads or tails. Similarly, I might have no prior belief, or a symmetric prior, on lots of questions that I haven’t thought much about or that don’t have any visible correlation to anything else I know about.
But you would have the default position that it had in fact occupied one of those two outcomes.
Sure. I didn’t say “I have no information related to the experiment”. What I’m saying is this: if I do an experiment to choose between K options, it might be that I don’t have any prior (or have a symmetric prior) about which of those K it will be. That’s what a null hypothesis in statistics is. When a pharmaceutical company does a drug trial and talks about the null hypothesis, the hypothesis they’re referring to is “no effect”, not “the chemical never existed”.
Yes, any experiment also will give you information about whether the experiment’s assumptions were valid. But talking about null hypotheses or default hypotheses is in the context of a particular formal model, where we’re referring to something more specific.
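As an illustration of that “no effect” null hypothesis, here is a sketch of a significance test on a 2×2 trial table, with invented counts and assuming scipy is available:

```python
# Null hypothesis: the drug has no effect (cure rates are equal).
from scipy.stats import chi2_contingency

#               cured  not cured
table = [[60, 40],   # treatment group (invented counts)
         [45, 55]]   # control group (invented counts)
chi2, p_value, dof, expected = chi2_contingency(table)
print(p_value)  # p < 0.05 for these counts: reject "no effect"
```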
Correct. And that’s a situationally relevant phenomenon. One quite similar to how “the coin will be on one of its sides” is situationally relevant to the coin-toss. (As opposed to it being on edge, or the faces smoothed off.)
I’m not sure whether we’re disagreeing here.
You had asked for an example of where an individual might have no prior on “on a given topic.” I gave one, for a narrow topic: “is the coin heads or tails?”. I didn’t say, and you didn’t ask, for a case where an individual has no prior on anything related to the topic.
But let me give one attempt at that stronger claim. You’ve never met my friend Sam. You’ve never heard of or thought of my friend Sam. Before this comment, you didn’t have a prior on his having a birthday; you didn’t have a prior on his existing; you never considered the possibility that lesswrong-commenter asr has a friend Sam. I would say you have no beliefs on the topic of Sam’s birthday.
It is my default position that they do not exist, even as absent constructs, unless otherwise noted (with the usual lowered threshold of standards of evidence for more evidently ordinary claims). That’s the whole point of default positions; they inform us how to react to new criteria or phenomena as they arise.
By bringing up the issue of non-topics you’re moving the goal-post. I asked you how it could be that a person could have no defaults on a given topic.
If I tell you that there is a flugmizzr, then you know that certain things are ‘true’—at least, you presume them to be until shown otherwise: one, that flugmizzrs are enumerable; two, that they are somehow enumerable by people; and three, that flugmizzrs are knowable, discrete phenomena.
Those are most assuredly amongst your defaults on the topic. They could easily each be wrong—there is no way to know—but if you trust my assertion of “a flugmizzr” then each of these become required defaults, useful for acquiring further information.
To clarify—my answer is that “there are topics I’ve never considered, and before consideration, I need not have a default belief.” For me at least, the level of consideration needed to actually form a belief is nontrivial. I am often in a state of uninterested impartiality. If you ask me whether you have a friend named Joe, I would not hazard a guess and would be about equally surprised by either answer.
To put it more abstractly: There’s no particular computational reason why the mind needs to have a default for every input. You suggest some topic X, and it’s quite possible for me to remain blank, even if I heard you and constructed some internal representation of X. That representation need not be tied to a belief or disbelief in any of the propositions about X under discussion.
Agreed with your main point; curious about a peripheral:
Is this meaningfully different, in the context you’re operating in, from coming to believe that you probably didn’t experience what you believe you experienced?
Because I also have trouble imagining myself being skeptical about whether I’m actually experiencing what I currently, er, experience myself experiencing (though I can certainly imagine believing things about my current experience that just ain’t so, which is beside your point). But I have no difficulty imagining myself being skeptical about whether I actually experienced what I currently remember myself having experienced a moment ago; that happened a lot after my brain injury, for example.
Yes. I have plenty of evidence that people sometimes become convinced that they’ve had experiences that they haven’t had, but reality would have to work very differently than I think it does for people not to be having the qualia they think they’re having.
(nods) That’s what I figured, but wanted to confirm.
Given that my “current” perception of the world is integrating a variety of different inputs that arrived at my senses at different times, I mostly suspect that my intuitive confidence that there’s a sharp qualitative difference between “what I am currently experiencing” and “what I remember having experienced” is (like many of my intuitive confidences) simply not reflective of what’s actually going on.
Pedanterrific nailed it.
Or rather, people mostly don’t realize that even claims for which they have high confidence ought to have probabilities attached to them. If I observe that someone seems to be dead, and I tell another person “he’s dead,” what I mean is that I have a very high but less than 1 confidence that he’s dead. A person might think they’re justified in being absolutely certain that someone is dead, but this is something that people have been wrong about plenty of times before; they’re simply unaware of the possibility that they’re wrong.
Suppose that you want to be really sure that the person is dead, so you cut their head off. Can you be absolutely certain then? No: they could have been substituted by an illusion by some Sufficiently Advanced technology you’re not aware of, they could be some amazing never-before-seen medical freak who can survive with their head cut off, or, more likely, you’re simply delusional and only imagined that you cut off their head or saw them dead in the first place. These things are very unlikely, but if the next day they turn up perfectly fine, answer questions only they should be able to answer, confirm their identity with dental records, fingerprints, retinal scan and DNA tests, give you your secret handshake and assure you that they absolutely did not die yesterday, you had better start regarding the idea that they died with a lot more suspicion.
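A sketch of that reversal in numbers (the prior and likelihood ratio are invented for illustration):

```python
# Even 99.9999% confidence yields to strong enough contrary evidence.
def update(prior, likelihood_ratio):
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p_dead = 0.999999
# Say the dental records, DNA tests and secret handshake are ten million
# times likelier if they are alive than if they died yesterday.
print(update(p_dead, 1 / 10_000_000))  # ~0.09: "dead" is now the long shot
```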
Thank you for reiterating how to properly formulate beliefs. Unfortunately, this is not relevant to this conversation.
That a truth claim is later falsified does not mean it wasn’t a truth claim.
Again, thank you for again demonstrating the Problem of Induction. Again, it just isn’t relevant to this conversation.
By continuing to bring these points up you are rejecting restrictions, definitions, and qualifiers I have added to my claims, to the point where what you are attempting to discuss is entirely unrelated to anything I’m discussing.
I have no interest in talking past one another.
Ok. I don’t think 1 is a Bayesian issue by itself. That’s a general rationality issue. (Speaking as a non-Bayesian fellow-traveler of the Bayesians.)
2, 3, and 4 seem roughly accurate. Whether 3 is correct depends a lot on how you unpack occurrence. A Bayesian is perfectly ok with the central limit theorem and applying it to a coin. That is a statement about occurrences. A Bayesian agrees with the frequentist that if you flip a fair coin the ratio of heads to tails should approach 1 as the number of flips goes to infinity. So what do you mean by occurrences here, such that Bayesians can’t talk about them?
But there then seems to be a total disconnect between those statements and your later claims and even going back and reading your earlier remarks doesn’t give any illuminating connection.
I can’t parse this in any way that makes sense. Are you simply objecting to the fact that 0 and 1 are not allowed probabilities in a Bayesian framework? If not, what does this mean?
I’m saying that the Bayesian framework is restricted to probabilities, and as such is entirely unsuited to non-probabilistic matters. Such as the number of fingers I am currently seeing on my right hand as I type this. For you, it’s a question of what you predict to be true. For me, it’s a specific manifested instance, and as such is not subject to any form of probabilistic assessments.
Note that this is not a claim that we do not share a single physical reality, but rather a question of the ability of either of us to make valid claims of truth.
I’m slowly beginning to understand your thought process. The Bayesian approach treats the number of fingers you currently see on your right hand as a probabilistic matter. The reason it does this, and the reason this method is preferable to those which treat the number of fingers on your hand as not subject to probabilistic assessment, is that you can be wrong about the number of fingers on your hand. To demonstrate this I could describe any number of complex scenarios in which you have been tricked about the number of fingers you have. Or I could just point you to real instances of people being wrong about the number of limbs they possess, or of people who outright deny their disability.
Like anything else a “specific manifested instance” is an entity which we are justified in believing so long as it reliably explains and predicts our sensory impressions.
True, but irrelevant. It would have helped if you had continued to read further; you would have seen me explain to JoshuaZ that he had made exactly the same error that you just made in understanding what I just said.
The specific manifested instance is not the number of fingers but the number of fingers I am currently seeing. It is not, fundamentally speaking, possible for me to be mistaken about the latter (caveat: this only applies to ongoing perception, as opposed to recollections of perception).
It is not proper to speak of beliefs about specific manifested instances when making assertions about what those instantiations actually are.
The statement “I observe X” is unequivocally, absolutely true. Any conclusions derived from it, however, do not inherit this property.
Similarly, I want to note that you are close to making a categorical error regarding the relevance of explanatory power and predictions to this discussion. Those are very important elements for the formation of rational beliefs; but they are irrelevant to justified truth claims.
Perceptions do not have propositional content; speaking about their truth or falsity is nonsense. Beliefs about perceptions, like “the number of fingers I am currently seeing is five,” do, and can correspondingly be false. They are of course rarely false, but humans routinely miscount things. The closer a belief gets to merely expressing a perception, the less there is that can be meaningfully said about its truth value.
A rational belief is a justified truth claim.
Unless you are operating within the naive realist framework.
You’ve fallen into that same error I originally warned about. You are conflating beliefs about the nature of the perception with beliefs about the content of the perception.
True but irrelevant to this discussion. I never claimed that there were absolute truths accessible to an arbitrary person which were of significant informational value. I only asserted that they do exist.
Then I guess my epistemology doesn’t exist. I must have just been trolling this entire time. I couldn’t possibly believe that this statement is committing a categorical error.
Are you willing to accept the notion that I am conveying to you a framework that you are currently unfamiliar with that makes statements about how truth, belief, and knowledge operate which you currently disagree with? If yes, then why do you actively resist comprehending my statements in this manner? What are you hoping to achieve?
What are you talking about! We’re talking about epistemology! If you want to demonstrate why calling a rational belief a justified truth claim is a category error, then do so. But please stop condescendingly repeating it. I actively “resist comprehending your statements”?! You can’t just assert things that don’t make sense in another person’s framework and expect them not to say “No, those things are the same”.
In any case, if it is a common position in the epistemological literature then I suspect I am familiar with it and that you are simply really bad at explaining what it is. If it is your original epistemological framework then I suspect it will be a bad one (nothing personal, just my experience with the reference class).
Is that your position?
You keep doing this. You keep using words to make distinctions as if it were obvious what distinction is implied. I can assure you nearly no one here has any idea what you mean by the difference between the nature of the perception and the content of the perception. Please stop acting like we’re stupid because you aren’t explaining yourself.
Actually, that’s a probabilistic assertion that you are seeing n fingers for whatever n you choose. You could for example be hallucinating. Or could miscount how many fingers you have. Humans aren’t used to thinking that way, and it generally helps for practical purposes not to think this way. But presumably if five minutes from now a person in a white lab coat walked into your room and explained that you had been tested with a new reversible neurological procedure that specifically alters how many fingers people think they have on their hands and makes them forget that they had any such procedure, you wouldn’t assign zero chance it is a prank.
Note by the way there are stroke victims who assert contrary to all evidence that they can move a paralyzed limb. How certain are you that you aren’t a victim of such a stroke? Is your ability to move your arms a specific manifested instance? Are you sure of that? If that case is different than the finger example how is it different?
If I am hallucinating, I am still seeing what I am seeing. If I miscount, I still see what I see. There is nothing probabilistic about the exact condition of what it is that I am seeing. You can, if you wish to eschew naive realism, make fundamental assertions about the necessarily inductive nature of all empirical observations—but then, there’s a reason why I phrased my statement the way I did: not “I can see how many fingers I really have” but “I know how many fingers I am currently seeing”.
Are you able to properly parse the difference between these two, or do I need to go further in depth about this?
(The remainder of your post expounded further along the lines of an explanation of your response, which itself was based on an erroneous reading of what I had written. As such I am disregarding it.)
If I understand this correctly you aren’t understanding the nature of subjective probability. But please clarify “specific manifested instantiation”.
No, I understand it perfectly well. I’m asserting that subjective probability is irrelevant in justifying truth-claims.
specific: “Belonging or relating uniquely to a particular subject”
manifest: “Display or show (a quality or feeling) by one’s acts or appearance; demonstrate”
instantiate: “Represent as or by an instance”
Do you take the same attitude with all the intellectual communities that don’t believe appeals to authority are always fallacious? If so you must find yourself rather isolated.
FWIW, for I think the first time I see LW as going more or less mad in perceiving value in a blog post linked to in the discussion section.
What are the chances I am wrong? Before looking at the subject itself and the comments there, what could we say about the chances I am wrong?
For what it is worth I hadn’t followed the thread and my impression when reading it after your priming was “just kinda ok”. The reasoning wasn’t absurd or anything but there isn’t any easy way to see how much influence that particular dynamic has had relative to other factors. My impression is that the effect is relatively minor.
I tentatively think any story humans tell about natural selection that obeys certain Darwinian and logical rules is true in that it must have an effect. However this effect may be too small to make any predictions from. This thought is under suspicion for committing the no true Scotsman fallacy.
An example is group selection. If humans can tell a non-flawed story about why it would be to the benefit of a region’s foxes to individually restrain their breeding, this does not mean one can predict that foxes will be seen to do this. It does mean that the effect itself is real, subject to caveats about the rate of migration of foxes from one region to another, etc., such that under artificial enough conditions the real effect could be important. The problem is that there are a million other real effects that don’t come to mind as nice stories, and all have different vectors of effect.
This is why evolutionary psychology and the like are so bewitching and misleading. Pretty much all the effects postulated are true—though most are insignificant. People are entranced by their logical truth.
I think I agree with all of that (with the caveat that I don’t know exactly which Evolutionary Psychology claims you would dismiss as insignificant.)
… I am unable to parse “FWIW, for I think the first time I see LW as going more or less mad in perceiving value in a blog post linked to in the discussion section.” to anything intelligible. What are you trying to say? I’ll wait until you respond before following the link.
Recently, I found myself disagreeing with dozens of LWers. Presumably, when this happens, sometimes I’m right and sometimes I’m wrong. Since I shouldn’t be totally confident I am right this time, how confident should I be?
Ahh.
Confidence in a given circumstance should be constrained by:
the available evidence at hand;
how well you can demonstrate internal and external consistency in conforming to said evidence; and
how well your understanding of that evidence allows you to predict the outcomes of events associated with said evidence.
Any probabilities resultant from this would have to be taken as aggregates for predictive purposes, of course, and as such could not be ascribed as valid justification in any specific instance. (This caveat should be totally unsurprising from me at this point.)
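As a sketch of what treating such probabilities “as aggregates for predictive purposes” could look like, here is a minimal Python illustration; the prior and likelihood figures are hypothetical, chosen only to show the mechanics of weighting a belief by accumulated evidence.

```python
# Minimal sketch: aggregating evidence into a predictive probability.
# All numbers here are hypothetical; they only illustrate the mechanics.

def update(prior: float, lik_if_true: float, lik_if_false: float) -> float:
    """One Bayesian update: weight the prior by how well the hypothesis
    predicted the observed evidence, relative to its negation."""
    numerator = prior * lik_if_true
    denominator = numerator + (1 - prior) * lik_if_false
    return numerator / denominator

# Start from a 50/50 prior and fold in three pieces of evidence, each
# summarized by how likely it is if the hypothesis is true vs. false.
belief = 0.5
evidence = [(0.9, 0.3), (0.8, 0.5), (0.7, 0.4)]
for lik_true, lik_false in evidence:
    belief = update(belief, lik_true, lik_false)

print(f"Aggregate predictive probability: {belief:.3f}")  # ~0.894
```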
It’s a bit tricky, because my position is that the post has practically no content and cannot be used to make predictions: it is a careful construction of an effect that is reasonable and does not contradict evidence, but it is in complete disregard of effect size.
After a brief skimming I have come to the conclusion that a brief skimming is not effective enough to provide a sufficient understanding of the conversation thread in question as to allow me to form any opinions on the topic.
tl;dr version: I skimmed it, and couldn’t wrap my head around it, so I’ll have to get back to you.
All three of those points are appeals to authority.
… To be an appeal to authority I would have to claim I was correct because some other person’s reputation says I am. So this is just you signalling you don’t care whether what you say is true; you merely wish to score points.
That you were upvoted tells me that others share this hostility to me.
This radically adjusts my views of LW as a community. Very, very negatively.
You are appealing to your authority regarding your mental states, degree of comprehension and reading history. This is why it is valid for you to simply assert them instead of us expecting you to provide us with internet histories and fMRI lie detector results. I am trying to point out how absurdly wrong your position on appeals to authority is. Detailed explanations had not succeeded so I hoped pointing out your use of valid appeals to authority would succeed. I desired karma to the extent that one always desires karma when commenting on Less Wrong.
You are of course free to do this. But I suggest that before leaving you consider the possibility that you are wrong. Presumably at one point you found insight in this community and thought it was worth spending your time here. Did you at that time imagine you could write an accurate and insightful comment that would be voted to −8? Is it not possible that you don’t understand the reasons we’ve given for considering appeals to authority sometimes valid? Is it not possible that you misunderstand how “appeal to authority” is being used? Alternatively, is it not possible that you have not adequately explained your clever understanding of justification that prohibits appeals to authority? If you seriously consider these possibilities and cannot see how we could be responding to you rationally then you are probably right to hold a low opinion of us.
In one day, in one thread, I have ‘lost’ roughly 70 ‘karma’. All from objecting to the notion that appeals to authority can be valid, and from my disparaging of Bayesian probabilism’s capability to make truth statements in a non-probabilistic fashion.
I expected better of you all, and I have learned my lesson.
For what it’s worth, as someone who has been reading your various exchanges without becoming involved in them (even to the extent of voting), I think your summary of the causes of that shift leaves out some important aspects.
That aside, though, I suspect your conclusion is substantively correct: karma-shift is a reasonable indicator (though hardly a perfectly reliable one) of how your recent behavior is affecting your reputation here, and if you expected that your recent behavior would improve your reputation, your expectations were way misaligned with the reality of this site.
Even assuming Logos was entirely correct about all his main points it would be bizarre to expect anything but a drastic drop in reputation in response to Logos’ recent behavior. This requires only a rudimentary knowledge of social behavior.
It’s a question of degree. I realized from the outset that I’m essentially committing heresy against sacred beliefs of this community. But I expected a greater capacity for rationality.
I’m a non-Bayesian with over 6000 karma. I’ve started discussion threads on problems with finding the right priors in a Bayesian context and have expressed skepticism that any genuinely good set of priors exists. Almost every time I post on something related to AGI it is to discuss reasons why I think fooming isn’t likely. I’m not signed up for cryonics and have made multiple comments discussing problems with it, from both a strict utilitarian perspective and from a more general framework. When there was a surge of interest in bitcoins here, I made a discussion thread pointing out a potentially disturbing issue with that. One of my very first posts here was arguing that phlogiston is a really bad example of an unfalsifiable theory, and I’ve made this argument repeatedly here, despite phlogiston being the go-to example here for a bad scientific theory (although I don’t seem to have had much success in convincing anyone).
I have over 6000 karma. A few days ago I had gained enough karma to be one of the top contributors in the last 30 days. (This signaled to me that I needed to spend less time here and more time being actually productive.)
It should be clear from my example that arguing against “sacred beliefs” here does not by itself result in downvotes. And it isn’t like I’ve had those comments get downvoted and balanced out by my other remarks. Almost all such comments have been upvoted. I therefore have to conclude that either the set of heresies here is very different than what I would guess or something you are doing is getting you downvoted other than your questioning of sacred beliefs.
It would not surprise me if quality of arguments and their degree of politeness matter. It helps to keep in mind that in any community with a karma system or something similar, high quality, polite arguments help more. Even on Less Wrong, people often care a lot about civility, sometimes more than logical correctness. As a general rule of thumb in internet conversations, high quality arguments that support shared beliefs in a community will be treated well. Mediocre or low quality arguments that support community beliefs will be ignored or treated somewhat positively. At the better places on the internet high quality arguments against communal beliefs will be treated with respect. Mediocre or low quality arguments against communal beliefs will generally be treated harshly. That’s not fair, but it is a good rule of thumb. Less Wrong is better than the vast majority of the internet but in this regard it is still roughly an approximation of what you would expect on any internet community.
So when arguing against a community belief, you need to be very careful to have your ducks in a row. Have your arguments carefully thought out. Be civil at all times. If something is not going well, take a break and come back to it later. Also, keep in mind that aside from shared beliefs, almost any community has shared norms about communication and behavior, and these norms may have implicit elements that take time to pick up. This can result in harsh receptions unless one has either spent a lot of time in the community or has carefully studied it, and it can exacerbate the other issues mentioned above.
That’s a standard element of Bayesian discourse, actually. The notion I’ve been arguing for, on the other hand, fundamentally violates Bayesian epistemology. And yes, I haven’t been highly rigorous about it; but then, I’m also really not all that concerned about my karma score in general. I was simply noting it as demonstrative of something.
However, actively dishonest rhetorical tactics have been taken up ‘against’ me in this thread, and that is what I have reacted strongly negatively against.
That’s an interesting notion, which I’d be curious to have the Bayesians here comment on. Do you agree that discussions of whether good priors might even be possible are standard Bayesian discourse?
I haven’t seen any indications of that in this thread. I do, however, recall the unfortunate communication lapses that you and I apparently had in the subthread on anti-aging medicine, and it seems that some of the comments you made there fit a similar pattern of accusing people of “actively dishonest rhetorical tactics” (albeit less extreme in that context). Given that two similar issues have occurred on wildly different topics, there seem to be two possible explanations: 1) there is a problem with most of the commentators at Less Wrong, or 2) something is occurring with the other common denominator of these discussions.
I know you aren’t a Bayesian, so I won’t ask you to estimate the probabilities in these two situations. But let me ask a different question: if we took a neutral individual who has never gone to LW and isn’t a Bayesian, do you think you would bet at even odds that, on reading this thread, they would agree with you and think there’s a problem with the Less Wrong community? What about 2-1 odds? 3-1? If not, why not?
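For readers not fluent in betting language, the odds above convert to implied probabilities in the standard way; the following is just the textbook conversion, nothing specific to this exchange.

```latex
% Standard conversion: odds of a:b in favor of X imply
\[
  P(X) = \frac{a}{a+b}
\]
% so even odds give P = 1/2, 2-1 odds give P = 2/3,
% and 3-1 odds give P = 3/4.
```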
Very gently worded. It is my current belief that both statements are true. I have never before so routinely encountered such difficulty in expressing my ideas and having them be understood, despite the fact that the inferential gap between myself and others is… well, larger than what I have ever witnessed between any other two people saving those with severely abnormal psychology. When I’m feeling particularly “existential” I sometimes worry about what that means about me.
On the other hand, I have also never before encountered a community whose dialogue was so deeply entrenched in so many unique linguistic constructs as LessWrong. I do not, fundamentally, disapprove of this: language shapes thought, after all. But this does also create a problem: if I cannot express my ideas within that patois, then I am not going to be understood—with the corollary that ideas which directly violate those constructs’ patterns will be violently rejected as incomprehensible, “wrong”, or “confused”.
I am disappointed by this. Shifting the framework of the question from the personal perspective to the outside view does not substantively make it a different question.
I’m not sure this is the case. At least it has not struck me as the case. There is a fair number of constructs here that are specific to LW and a larger set that while not specific to LW are not common. But in my observation this results much more frequently in people on LW not explaining themselves well to newcomers. It rarely seems to result in people not being understood or rejected as confused. The most likely way for that to happen is for someone to try to speak in the LW patois without internalizing the meaning of the terms.
Not my intended point by the question. I wanted an outside-view point in general and wanted your estimate of what it would be like. I phrased it in terms of a bet so that one would not need to talk about any notion of probability but could just speak of what bets one would be willing to take.
To be frank, have you ever encountered a scenario which displays this phenomenon as vulgarly as this one?
Well, again, note: what I’m doing here is actually directly violating the framework of the “LWP”. Those who have internalized it, but are unfamiliar with my own framework, would have significant barriers to comprehension. And that is very frequently associated with all sorts of negative reactions—especially when, by that framework, I am clearly a very confused person who keeps asserting that I am not the one who’s confused here.
I can see why you think it would be an interesting question. I have, however, no opinion or belief on the matter; it is a thoroughly uninteresting question to me.
I’m not at all convinced that that is what is going on here, and this doesn’t seem to be a very vulgar case if I am interpreting your meaning correctly. You seem to think that people are responding in a much more negative and personal fashion than they are.
So the solution then is not to just use your own language and get annoyed when people fail to respond positively. The solution there is either to use a common framework (e.g. very basic English), to carefully translate into the new language, or to start off by constructing a helpful dictionary. In general, it is socially rude and unlikely to be productive to go to any area of the internet where there’s a specialized vocab and not only not learn it but use a different vocab that has overlapping words. I wouldn’t recommend this in a Dungeons and Dragons forum either.
This is unfortunate. It is a question that, while uninteresting to you, may help you calibrate what is going on. I would tentatively suggest spending a few seconds on the question before dismissing it.
e.g. “d20 doesn’t mean a twenty-sided die; it refers to the bust and cup size of a female NPC!”
Also the framework presented in “A Practical Study of Argument” by Grovier—my textbook from my first-year Philosophy class called “Critical Thinking”. It is actually the only textbook I kept from my first undergrad degree—definitely recommended for anyone wanting to get up to speed on pre-Bayesian rational thinking and argument.
You mean Govier.
This is unwarranted and petty.
That’s true.
Nonsense. This is exactly on topic. It isn’t my “Less Wrong framework” you are challenging. When I learned about thinking, reasoning, and fallacies, LessWrong wasn’t even in existence. For that matter, Eliezer’s posts on OvercomingBias weren’t even in existence. Your claim that the response you are getting is the result of your violation of LessWrong-specific beliefs is utterly absurd.
So that justifies your assertion that I violate the basic principles of logic and argumentation?
I have only one viable response: “Bullshit.”
What justifies my assertion that you violate the basic principles of logic and argumentation is Trudy Govier, “A Practical Study of Argument”, 4th edition—Chapter 5 (“Premises—What to accept and why”), page 138. Under the subheading “Proper Authority”.
For an explanation of when and why an appeal to authority is, in fact, fallacious, see pages 141, 159, and 434. Or Wikipedia. Either way, my disagreement with you has nothing to do with what I learned on LessWrong. If I’m wrong, it’s the result of my prior training and an independent flaw in my personal thinking. Don’t try to foist this off on LessWrong groupthink. (That claim would be credible if we were arguing about, say, cryonics.)
You are really going to claim that by logically arguing over what qualifies as a valid argument I violate the basic principles of argumentation and logic?
I reiterate: I have but one viable response.
Just guessing from the chapter and subheading titles, but I’m pretty sure that bit of “A Practical Study of Argument” has to do with why arguments from authority are not always fallacious.
And this makes whatever it says the inerrant truth, never to be contradicted, and therefore a fundamental basic principle of logic and argumentation?
The claim was
The claim was later refined to: “[the] assertion that [Logos01] violate[s] the basic principles of logic and argumentation”.
By you, yes.
Which was agreed to
Okay. But do you acknowledge that the quoted exchange involves a shifting of the goalposts on your part?
Sure.
This is another straw man.
Then by all means enlighten me as to how it can be possible that merely by disagreeing with Govier on the topic of appeals to authority, and in doing so providing explanations based on deduction and induction, I “violate the basic principles of logic and argumentation”.
That is not what an appeal to authority is.
I have no interest in being a party to such a wildly dishonest conversation.
I don’t know why this hasn’t been done before: appeal to authority on Wikipedia.
As far as I can tell, this definition is what the rest of us are talking about, and it specifically says that appealing to authority only becomes a fallacy if a) the authority is not a legitimate expert, or b) it is used to prove the conclusion must be true. If you disagree with WP’s definition, could you lay out your own?
An appeal to authority is the use of an authority’s statements as a valid argument in the absence of corroborating materials to support that argument.
Note that “arguments” here refer to specific, instantiated claims, and as such are not subject to probabilistic assessments.
What, you can’t appeal to your own authority? What would you call “because I said so”?
Bare assertion.
EDIT: Right, saying that without providing some context was probably a bad idea. I’m not trying to disparage Jack’s comment here; it’s of the same general form as an appeal to external authority, and I’d expect that to come across without saying so. But if you’re being extra super pedantic...
Hey, I didn’t downvote you. (I actually thought that stating “Bare assertion.” as a bare assertion was metahilarious, but I didn’t think it technically applied.)
I wish I could say I’d done that on purpose.
I appreciate the clarification, but what he said was not as bad as a bare assertion. In fact, what he said was not unsupported at all! He was speaking as the best authority we have here on Logos01’s comprehension and reading habits. A bare assertion would have been if he made a claim about which we have no reason to think he is an authority (like, say, the rules of inductive logic).
On reflection you’re right.
A bare assertion, as Nornagest indicated. Also a form of fallacy. If I had done such a thing, that would be worthy of consideration here. I have not, so we can safely stop here.
Evidence itself can be mistaken. If your theory says an event is rare, and it happens, then that is evidence against the theory. If the theory is correct, it should be overwhelmed by evidence for the theory. If the statements of experts statistically correlate with reality, then you should update on the statements of experts until they are screened off by evidence/arguments you have looked at more directly or the statements of other experts.
Statistical projections are not useful in instantiated instances. That is all.
I do not follow.
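The “update on the statements of experts until they are screened off” point above can be made concrete with the odds form of Bayes’ rule; the reliability figures below are hypothetical, chosen only to illustrate the mechanics.

```latex
% Odds form of Bayes' rule (reliability numbers are hypothetical):
\[
  \frac{P(H \mid E)}{P(\lnot H \mid E)}
  = \frac{P(H)}{P(\lnot H)} \cdot \frac{P(E \mid H)}{P(E \mid \lnot H)}
\]
% Let E = "the expert asserts H". If experts of this caliber assert
% true claims with probability 0.9 and false ones with probability 0.3,
% the likelihood ratio is 0.9 / 0.3 = 3, so even prior odds of 1:1
% become 3:1, i.e. P(H|E) = 0.75 -- until the expert's underlying
% evidence, once examined directly, screens the testimony off and the
% assertion contributes no further likelihood ratio.
```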