One who possesses a maximum-entropy prior is further from the truth than one who possesses an inductive prior riddled with many specific falsehoods and errors. Or more to the point, someone who endorses knowing nothing as a desirable state for fear of accepting falsehoods is further from the truth than somebody who believes many things, some of them false, but tries to pay attention and go on learning.
How about “If you know nothing and are willing to learn, you’re closer to the truth than someone who’s attached to falsehoods”? Even then, I suppose you’d need to throw in something about the speed of learning.
It would seem that the difference of opinion here originates in the definition of further. Someone who knows nothing is further (in the information-theoretic sense) from the truth than someone who believes a falsehood, assuming that the falsehood has at least some basis in reality (even if only an accidental relation), because they must flip more bits of their belief (or lack thereof) to arrive at something resembling truth. On the other hand, in the limited, human, psychological sense, they are closer, because they have no attachments to relinquish, and they will not object to having their state of ignorance lifted from them, as one who believes in falsehoods might object to having their state of delusion destroyed.
Right, I’d take it as a statement about how humans actually think, not how a perfect rationalist thinks. Or maybe about how most humans think, since some humans can be unattached to their beliefs.
To me “filled with falsehoods and errors” translates into more falsehoods than “some”. Though I agree it’s not a very good quote within the context of LW.
Maybe it’s just where my mind was when I read it, but I interpreted the quote as meaning something more like:
“It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.”
In what units does one measure distance from the truth, and in what manner?
Bits of Shannon entropy.
That’s half of the answer. In what manner does one measure the number of bits of Shannon entropy that a person has?
If you make a numerical statement of your confidence -- P(A) = X, 0 < X < 1 -- measuring the Shannon entropy of that belief is a simple matter of observing the outcome and taking the binary logarithm of your prediction or the converse of it, depending on what came true. With S as the Shannon entropy: if A, then S = log2(X); if ¬A, then S = log2(1 - X).
The lower the magnitude of the resulting negative real, the better you fared.
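A minimal sketch of this scoring rule in Python, assuming the convention above where the score is the binary log of the probability assigned to whatever actually happened (the function name and example numbers are purely illustrative):

```python
import math

def log_score(confidence: float, outcome: bool) -> float:
    """Binary log of the probability assigned to the realized outcome.

    `confidence` is the stated P(A); `outcome` is whether A came true.
    The result is a negative real; the closer to zero, the better the prediction.
    """
    p = confidence if outcome else 1.0 - confidence
    return math.log2(p)

# A 0.9 prediction that comes true costs only about 0.15 bits;
# the same prediction failing costs about 3.32 bits.
print(log_score(0.9, True))   # about -0.15
print(log_score(0.9, False))  # about -3.32
```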
That allows a prediction/confidence/belief to be measured. How do you total a person?
Simple: under dubiously ethical and dubiously physically possible conditions, you turn their internal world model into a formal Bayesian network, and for every possible physical and mathematical observation and outcome, do the above calculation. Sum, print, idle.
It’s impossible in practice, but it’s only, like, a four-line formal definition.
How do you measure someone whose internal world model is not isomorphic to one formal Bayesian network (for example, someone who is completely certain of something)? Should it be the case that someone whose world model contains fewer possible observations has a major advantage in being closer to the truth?
Note also that a perfect Bayesian will score lower than some gamblers using this scheme. Betting everything on black does better than a fair distribution almost half the time.
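A quick simulation of that point, using the log score above; the 18/38 chance of black and the 0.999 stand-in for “betting everything” are illustrative assumptions, not anything from the thread:

```python
import math
import random

random.seed(0)

P_BLACK = 18 / 38      # assumed roulette-style chance of black
CALIBRATED = P_BLACK   # the "perfect Bayesian" states the true probability
ALL_IN = 0.999         # near-certainty on black (exactly 1.0 would give -inf on a miss)

def log_score(confidence, outcome):
    return math.log2(confidence if outcome else 1.0 - confidence)

trials = 100_000
wins = 0
for _ in range(trials):
    black = random.random() < P_BLACK
    if log_score(ALL_IN, black) > log_score(CALIBRATED, black):
        wins += 1

# The all-in bettor out-scores the calibrated predictor whenever black comes up,
# i.e. on roughly 47% of single spins, even though his average score is far worse.
print(wins / trials)
```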
I am not very certain that humans actually can have an internal belief model that isn’t isomorphic to some Bayesian network. As for anyone who proclaims to be absolutely certain, I suspect that they are in fact not.
How do you account for people falling prey to things like the conjunction fallacy?
I don’t think people just miscalculate conjunctions. Everyone will tell you that HFFHF is less probable than H, HF, or even HFF. It’s when the strings get long, the difference is small, and the strings are quite specially crafted that errors appear. And with the scenarios, a more detailed scenario looks more plausibly the product of some deliberate reasoning; plus, the existence of one detailed scenario is information about the existence of other detailed scenarios leading to the same outcome (and it must be made clear in the question that we are not asking about the outcome but about everything happening precisely as the scenario specifies it).
On top of that, the meaning of the word “probable” in an everyday context is somewhat different—a proper study should ask people to actually make bets. All around, it’s not clear why people make this mistake, but it is clear that it is not some fully general failure to account for conjunctions.
edit: actually, I just read the Wikipedia article on the conjunction fallacy. When people were asked about “how many people out of 100”, nobody gave a wrong answer. Which immediately implies that the understanding of “probable” has been an issue, or some other cause, but not some general failure to apply conjunctions.
There have been studies that asked people to make bets. Here’s an example. It makes no difference—subjects still arrive at fallacious conclusions. That study also goes some way towards answering your concern about ambiguity in the question. The conjunction fallacy is a pretty robust phenomenon.
I’ve just read the example beyond its abstract. Typical psychology: the actual finding was that there were fewer errors with the bet (even though the expected winnings were very small, and the sample sizes were small, so the difference was only marginally significant), and also approximately half of the questions were answered correctly; the high prevalence of the “conjunction fallacy” was attained by counting anyone who made at least one error over many questions.
How is it a “robust phenomenon” if it is negated by using strings of larger length difference in the head-tail example or by asking people to answer in the N out of 100 format?
I am thinking that people have to learn reasoning to answer questions correctly, including questions about probability, for which the feedback they receive from the world is fairly noisy. Consequently they learn it fairly badly, or mislearn it altogether, because in their “training dataset” (which consists of detailed correct accounts of actual facts and fuzzy speculations) the more detailed accounts are more frequently the correct ones.
edit: Let’s say the notion that people are just generally not accounting for conjunctions is sort of like Newtonian mechanics. In a hard science—physics—Newtonian mechanics was done for as a fundamental account of reality once conditions were found where it did not work. It didn’t matter how “robust” it was. In a soft science—psychology—an approximate notion persists in spite of this, as if the question should be decided by some sort of tug-of-war between experiments in favour of and against that notion. If we were doing physics like this, we would never have moved beyond Newtonian mechanics.
Framing the problem in terms of frequencies mitigates a number of probabilistic fallacies, not just the conjunction fallacy. It also mitigates, for instance, base rate neglect. So whatever explanation you have for the difference between the probability and frequency framings shouldn’t rely on peculiarities of the conjunction fallacy case. A plausible hypothesis is that presenting frequency information simply makes algorithmic calculation of the result easier, and so subjects are no longer reliant on fallible heuristics in order to arrive at the conclusion.
The claim of the heuristics and biases program is that the conjunction fallacy is a manifestation of the representativeness heuristic. One does not need to suppose that there is a misunderstanding about the word “probability” involved (if there is, how do you account for the betting experiments?). The difference in the frequency framing is not that it makes it clear what the experimenter means by “probability”, it’s that the ease of algorithmic reasoning in that case reduces reliance on the representativeness heuristic. Further evidence for this is that the fallacy is also mitigated if the question is framed in terms of single-case probabilities, but with a diagram clarifying the relationship between properties in the problem. If the effect were merely due to a misunderstanding about what is meant by “probability”, why would there be a mitigation of the fallacy in this case? Does the diagram somehow make it clear what the experimenter means by “probability”?
In response to your Newtonian physics example, it’s simply not true that scientists abandoned Newtonian mechanics as soon as they found conditions under which it appeared not to work. Rather, they tried to find alternative explanations that preserved Newtonian mechanics, such as positing the existence of Uranus to account for discrepancies in planetary orbits. It was only once there was a better theory available that Newtonian mechanics was abandoned. Is there currently a better account of probabilistic fallacies than that offered by the heuristics and biases program? And do you think that there is anything about the conjunction fallacy research that makes it impossible to fit the effect within the framework of the heuristics and biases program?
I’m not familiar with the effect of variable string length difference, and quick Googling isn’t helping. If you could direct me to some research on this, I’d appreciate it.
A plausible hypothesis is that presenting frequency information simply makes algorithmic calculation of the result easier, and so subjects are no longer reliant on fallible heuristics in order to arrive at the conclusion.
There’s only room for making it easier when the word “probable” is not synonymous with “larger N out of 100”. So I maintain that alternate understanding of the word “probable” (and perhaps also an invalid idea of what one should bet on) are relevant. edit: to clarify, I can easily imagine an alternate cultural context where “blerg” is always, universally, invariably, a shorthand for “N out of 100”. In such a context, asking about “N out of 100” or about “blerg” should produce nearly identical results.
Also, in your study, about half of the questions were answered correctly.
The claim of the heuristics and biases program is that the conjunction fallacy is a manifestation of the representativeness heuristic.
I guess that’s fair enough, albeit it’s not clear how that works on Linda-like examples.
In my opinion it’s just that through their lives people are exposed to a training dataset which consists of
1. detailed accounts of real events, and
2. speculative guesses,
and (1) is much more commonly correct than (2) even though (1) is more conjunctive. So people get mis-trained through a biased training set. A very wide class of learning AIs would get mis-trained by this sort of thing too (a toy sketch follows below).
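A toy sketch of that mistraining story (the 0.9 and 0.4 base rates are made-up assumptions, chosen only to illustrate the bias): a learner that simply counts how often detailed versus vague accounts turned out true will come to treat extra detail as evidence of truth, even though a more detailed claim is more conjunctive and so can only be less probable.

```python
import random
from collections import Counter

random.seed(0)

# Assumed training environment: detailed accounts are usually accurate reports,
# vague accounts are usually speculation. The base rates are illustrative only.
P_TRUE = {"detailed": 0.9, "vague": 0.4}

true_counts = Counter()
totals = Counter()
for _ in range(10_000):
    kind = random.choice(["detailed", "vague"])
    true_counts[kind] += random.random() < P_TRUE[kind]
    totals[kind] += 1

# The learner's estimate of P(true | level of detail):
for kind in ("detailed", "vague"):
    print(kind, true_counts[kind] / totals[kind])

# Output is roughly 0.9 vs 0.4: the learner now rates the more detailed (more
# conjunctive) version of a story as more probable, because that is what its
# biased training set rewarded.
```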
I’m not familiar with the effect of variable string length difference, and quick Googling isn’t helping. If you could direct me to some research on this, I’d appreciate it.
The point is that you can’t pull the representativeness trick with e.g. R vs RGGRRGRRRGG. All the research I have ever seen used strings with a small % difference in their length. I am assuming that the research is strongly biased towards studying something non-obvious, while it is fairly obvious that R is more probable than RGGRRGRRRGG, and frankly we do not expect to find anyone who thinks that RGGRRGRRRGG is more probable than R.
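For concreteness, a back-of-the-envelope version of the dice case, assuming a die with two red and four green faces as in the classic version of the problem (that assumption is mine, just to make the numbers definite); extending a sequence multiplies its probability by factors no greater than one, so R necessarily beats RGGRRGRRRGG:

```python
# Assumed die: 2 red faces, 4 green faces, independent rolls.
P = {"R": 2 / 6, "G": 4 / 6}

def sequence_probability(seq: str) -> float:
    """Probability that the next len(seq) rolls come up exactly as seq."""
    prob = 1.0
    for symbol in seq:
        prob *= P[symbol]
    return prob

print(sequence_probability("R"))            # ~0.333
print(sequence_probability("RGGRRGRRRGG"))  # ~1.8e-04, strictly smaller
```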
There’s only room for making it easier when the word “probable” is not synonymous with “larger N out of 100”. So I maintain that alternate understanding of the word “probable” (and perhaps also an invalid idea of what one should bet on) are relevant.
Maybe a misunderstanding about the word is relevant, but it clearly isn’t entirely responsible for the effect. Like I said, the conjunction fallacy is much less common if the structure of the question is made clear to the subject using a diagram (e.g. if it is made obvious that feminist bank tellers are a proper subset of bank tellers). It seems implausible that providing this extra information will change the subject’s judgment about what the experimenter means by “probable”.
I guess that’s fair enough, albeit it’s not clear how that works on Linda-like examples.
The description given of Linda in the problem statement (outspoken philosophy major, social justice activist) is much more representative of feminist bank tellers than it is of bank tellers.
Maybe a misunderstanding about the word is relevant, but it clearly isn’t entirely responsible for the effect.
In the study you quoted, a bit less than half of the answers were wrong, in sharp contrast to the Linda example, where 90% of the answers were wrong. It implies that at least 40% of the failures were a result of misunderstanding, which only leaves 60% for fallacies. Of that 60%, some people have other misunderstandings and other errors of reasoning, and some people are plain stupid (one person in ten is the dumbest out of ten, i.e. has an IQ of 80 or less), leaving easily less than 50% for the actual conjunction fallacy.
It seems implausible that providing this extra information will change the subject’s judgment about what the experimenter means by “probable”.
Why so? If the word “probable” is fairly ill defined (as well as the whole concept of probability), then it will or will not acquire specific meaning depending on the context.
The description given of Linda in the problem statement (outspoken philosophy major, social justice activist) is much more representative of feminist bank tellers than it is of bank tellers.
Then the representativeness works in the opposite direction from what’s commonly assumed of the dice example.
Speaking of which, “is” is sometimes used to describe traits for identification purposes, e.g. “in general, an alligator is shorter and less aggressive than a crocodile” is more correct than “in general, an alligator is shorter than a crocodile”. If you were to compile traits for finding Linda, you’d pick the most descriptive answer. People know they need to do something with what they are told, they don’t necessarily understand correctly what they need to do.
Poor brain design.
Honestly, I could do way better if you gave me a millennium.
You know, at some point, whoever’s still alive when that becomes not-a-joke needs to actually test this. Because I’m just curious what a human-designed human would look like.
How likely do you believe it is that there exists a human who is absolutely certain of something?
Is this a testable assertion? How do you determine whether someone is, in fact, absolutely certain?
It’s not unheard of for people to bet their lives on some belief of theirs.
That doesn’t show that they’re absolutely certain; it just shows that the expected value of the payoff outweighs the chance of them dying.
The real issue with this claim is that people don’t actually model everything using probabilities, nor do they actually use Bayesian belief updating. However, the closest analogue would be people who will not change their beliefs in literally any circumstances, which is clearly false. (Definitely false if you’re considering, e.g. surgery or cosmic rays; almost certainly false if you only include hypotheticals like cult leaders disbanding the cult or personally attacking the individual.)
Is someone absolutely certain if they say that they cannot imagine any circumstances under which they might change their beliefs (or, alternately, can imagine only circumstances which they are absolutely certain will not happen)? It would seem to be a better definition, as it defines probability (and certainty) as a thing in the mind, rather than outside.
In this case, I would see no contradiction as declaring someone to be absolutely certain of their beliefs, though I would say (with non-absolute certainty) that they are incorrect. Someone who believes that the Earth is 6000 years old, for example, may not be swayed by any evidence short of the Christian god coming down and telling them otherwise, an event to which they may assign 0.0 probability (because they believe that it’s impossible for their god to contradict himself, or something like that).
Further, I would exclude methods of changing someone’s mind without using evidence (surgery or cosmic rays). I can’t quite put it into words, but it seems like the fact that it isn’t evidence and instead changes probabilities directly means that it doesn’t so much affect beliefs as it replaces them.
Is someone absolutely certain if they say that they cannot imagine any circumstances under which they might change their beliefs (or, alternately, can imagine only circumstances which they are absolutely certain will not happen)?
Disagree. This would be a statement about their imagination, not about reality.
Also, people are not well calibrated on this sort of thing. People are especially poorly calibrated on this sort of thing in a social context, where others are considering their beliefs.
ETA: An example: While I haven’t actually done this, I would expect that a significant fraction of religious people would reply to such a question by saying that they would never change their beliefs because of their absolute faith. I can’t be bothered to do enough googling to find a specific interviewee about faith who then became an atheist, but I strongly suspect that some such people actually exist.
I can’t quite put it into words, but it seems like the fact that it isn’t evidence and instead changes probabilities directly means that it doesn’t so much affect beliefs as it replaces them.
Disagree. This would be a statement about their imagination, not about reality.
You are correct. I am making my statements on the basis that probability is in the mind, and as such it is perfectly possible for someone to have a probability which is incorrect. I would distinguish between a belief which it is impossible to disprove, and one which someone believes it is impossible to disprove, and as “absolutely certain” seems to refer to a mental state, I would give it the definition of the latter.
(I suspect that we don’t actually disagree about anything in reality. I further suspect that the phrase I used regarding imagination and reality was misleading; sorry, it’s my standard response to thought experiments based on people’s ability to imagine things.)
I’m not claiming that there is a difference between their stated probabilities and the actual, objective probabilities. I’m claiming that there is a difference between their stated probabilities and the probabilities that they actually hold. The relevant mental states are the implicit probabilities from their internal belief system; while words can be some evidence about this, I highly suspect, for reasons given above, that anybody who claims to be 100% confident of something is simply wrong in mapping their own internal beliefs, which they don’t have explicit access to and aren’t even stored as probabilities (?), over onto explicitly stated probabilities.
Suppose that somebody stated that they cannot imagine any circumstances under which they might change their beliefs. This is a statement about their ability to imagine situations; it is not a proof that no such situation could possibly exist in reality. The fact that it is not is demonstrated by my claim that there are people who did make that statement, but then actually encountered a situation that caused them to change their belief. Clearly, these people’s statement that they were absolutely, 100% confident of their belief was incorrect.
I would still say that while belief-altering experiences are certainly possible, even for people with stated absolute certainty, I am not convinced that they can imagine them occurring with nonzero probability. In fact, if I had absolute certainty about something, I would as a logical consequence be absolutely certain that any disproof of that belief could not occur.
However, it is also not unreasonable that someone does not believe what they profess to believe in some practically testable manner. For example, someone who states that they have absolute certainty that their deity will protect them from harm, but still declines to walk through a fire, would fall into such a category—even if they are not intentionally lying, on some level they are not absolutely certain.
I think that some of our disagreement arises from the fact that I, being relatively uneducated (for this particular community) about Bayesian networks, am not convinced that all human belief systems are isomorphic to one. This is, however, a fault in my own knowledge, and not a strong critique of the assertion.
I would expect that most religious fundamentalists would reply to such a question by saying that they would never change their beliefs because of their absolute faith.
First, fundamentalism is a matter of theology, not of intensity of faith.
Second, what would these people do if their God appeared before them and flat out told them they’re wrong? :-D
Their verbal response would be that this would be impossible.
(I agree that such a situation would likely lead to them actually changing their beliefs.)
At which point you can point out to them that God can do WTF He wants and is certainly not limited by ideas of pathetic mortals about what’s impossible and what’s not.
Oh, and step back, exploding heads can be messy :-)
This is not the place to start dissecting theism, but would you be willing to concede the possible existence of people who would simply not be responsive to such arguments? Perhaps they might accuse you of lying and refuse to listen further, or refute you with some biblical verse, or even question your premises.
Of course. Stuffing fingers into your ears and going NA-NA-NA-NA-CAN’T-HEAR-YOU is a rather common debate tactic :-)
Don’t you observe people doing that to reality, rather than updating their beliefs?
That too. Though reality, of course, has ways of making sure its point of view prevails :-)
Reality has shown itself to be fairly ineffective in the short term (all of human history).
8-0
In my experience reality is very very effective. In the long term AND in the short term.
Counterexamples: Religion (Essentially all of them that make claims about reality). Almost every macroeconomic theory. The War on Drugs. Abstinence-based sex education. Political positions too numerous and controversial to call out.
You are confused. I am not saying that false claims about reality cannot persist—I am saying that reality always wins.
When you die you don’t actually go to heaven—that’s Reality 1, Religion 0.
Besides, you need to look a bit more carefully at the motivations of the people involved. The goal of writing macroeconomic papers is not to reflect reality well, it is to produce publications in pursuit of tenure. The goal of the War on Drugs is not to stop drug use, it is to control the population and extract wealth. The goal of abstinence-based sex education is not to reduce pregnancy rates, it is to make certain people feel good about themselves.
I thought you were saying that reality has a pattern of convincing people of true beliefs
You misunderstood. Reality has the feature of making people face the true consequences of their actions regardless of their beliefs. That’s why reality always wins.
Sort of. Particularly in the case of belief in an afterlife, there isn’t a person still around to face the true consequences of their actions. And even in less extreme examples, people can still convince themselves that the true consequences of their actions are different—or have a different meaning—from what they really are.
And even in less extreme examples, people can still convince themselves that the true consequences of their actions are different—or have a different meaning—from what they really are.
In those cases reality can take more drastic measures.
Believing that 2 + 2 = 5 will most likely cause one to fail to build a successful airplane, but that does not prohibit one from believing that one’s own arithmetic is perfect, and that the incompetence of others, the impossibility of flight, or the condemnation of an airplane-hating god is responsible for the failure.
See my edit. Basically, the enemy airplanes flying overhead and dropping bombs should convince you that flight is indeed possible. Also, any remaining desire you have to invent excuses will go away once one of the bombs explodes close enough to you.
The founders don’t get to decide whether or not it is a movement, or what goal it does or doesn’t have. It turns out that many founders in this case are also influential agents, but the influential agents I’ve talked to have expressed that they expect the world to be a better place if people generally make better decisions (in cases where objectively better decision-making is a meaningful concept).
The War on Drugs. Abstinence-based sex education. Political positions too numerous and controversial to call out.
Careful, those are the kind of political claims where there is currently so much mind-kill that I wouldn’t trust much of the “evidence” you’re using to declare them obviously false.
The general claim is one where I think it would be better to test it on historical examples.
At which point you can point out to them that God can do WTF He wants
This is not an accurate representation of mainstream theology. Most theologians believe, for example, that it is impossible for God to do evil. See William Lane Craig’s commentary.
This is not an accurate representation of mainstream theology.
First, you mean Christian theology; there are a lot more theologies around.
Second, I don’t know what is “mainstream” theology—is it the official position of the Roman Catholic Church? Some common elements in Protestant theology? Does anyone care about Orthodox Christians?
Third, the question of limits on the Judeo-Christian God is a very, very old theological issue which has not been resolved to everyone’s satisfaction, and no resolution is expected.
Fourth, William Lane Craig basically evades the problem by defining good as “what God is”. God can still do anything He wants and whatever He does automatically gets defined as “good”.
This is starting to veer into free-will territory, but I don’t think God would have much problem convincing these people that He is the Real Deal. Wouldn’t be much of a god otherwise :-)
I cannot imagine circumstances under which I would come to believe that the Christian God exists. All of the evidence I can imagine encountering which could push me in that direction if I found it seems even better explained by various deceptive possibilities, e.g. that I’m a simulation or I’ve gone insane or what have you. But I suspect that there is some sequence of experience such that if I had it I would be convinced; it’s just too complicated for me to work out in advance what it would be. Which perhaps means I can imagine it in an abstract, meta sort of way, just not in a concrete way? Am I certain that the Christian God doesn’t exist? I admit that I’m not certain about that (heh!), which is part of the reason I’m curious about your test.
If imagination fails, consult reality for inspiration. You could look into the conversion experiences of materialist, rationalist atheists. John C Wright, for example.
I am not arguing that the set is non-empty. Consider it akin to the intersection of the set of natural numbers and the set of infinities; the fact that it is the empty set is meaningful. It means that by following the rules of simple, additive arithmetic, one cannot reach infinity, and if one does reach infinity, that is a good sign of an error somewhere in the calculation.
Similarly, one should not be absolutely certain if they are updating from finite evidence. Barring omniscience (infinite evidence), one cannot become absolutely/infinitely certain.
What definition of absolute certainty would you propose?
So you are proposing a definition that nothing can satisfy. That doesn’t seem like a useful activity. If you want to say that no belief can stand up to the powers of imagination, sure, I’ll agree with you. However if we want to talk about what people call “absolute certainty” it would be nice to have some agreed-on terms to use in discussing it. Saying “oh, there just ain’t no such animal” doesn’t lead anywhere.
As to what I propose, I believe that definitions serve a purpose and the same thing can be defined differently in different contexts. You want a definition of “absolute certainty” for which purpose and in which context?
You are correct, I have contradicted myself. I failed to mention the possibility of people who are not reasoning perfectly, and in fact are not close, to the point where they can mistakenly arrive at absolute certainty. I am not arguing that their certainty is fake—it is a mental state, after all—but rather that it cannot be reached using proper rational thought.
What you have pointed out to me is that absolute certainty is not, in fact, a useful thing. It is the result of a mistake in the reasoning process. An inept mathematician can add together a large but finite series of natural numbers, write down “infinity” after the equals sign, and thereafter go about believing that the sum of a certain series is infinite.
The sum is not, in fact, infinite; no finite set of finite things can add up to an infinity, just as no finite set of finite pieces of evidence can produce absolute, infinitely strong certainty. But if we use some process other than the “correct” one, as the mathematician’s brain has to somehow output “infinity” from the finite inputs it has been given, we can generate absolute certainty from finite evidence—it simply isn’t correct. It doesn’t correspond to something which is either impossible or inevitable in the real world, just as the inept mathematician’s infinity does not correspond to a real infinity. Rather, they both correspond to beliefs about the real world.
While I do not believe that there are any rationally acquired beliefs which can stand up to the powers of imagination (though I am not absolutely certain of this belief), I do believe that irrational beliefs can. See my above description of the hypothetical young-earther; they may be able to conceive of a circumstance which would falsify their belief (i.e. their god telling them that it isn’t so), but they cannot conceive of that circumstance actually occurring (they are absolutely certain that their god does not contradict himself, which may have its roots in other absolutely certain beliefs or may be simply taken as a given).
the possibility of people who are not reasoning perfectly
:-) As in, like, every single human being...
certainty … cannot be reached using proper rational thought
Yep. Provided you limit “proper rational thought” to Bayesian updating of probabilities, this is correct. Well, as long as your prior isn’t 1, that is.
I do believe that irrational beliefs can
I’d say that if you don’t require internal consistency from your beliefs then yes, you can have a subjectively certain belief which nothing can shake. If you’re not bothered by contradictions, well then, doublethink is like Barbie—everything is possible with it.
In fact, unless you’re insane, you probably already believe that tomorrow will not be Friday!
(That belief is underspecified- “today” is a notion that varies independently, it doesn’t point to a specific date. Today you believe that August 16th, 2013 is a Friday; tomorrow, you will presumably continue to believe that August 16th, 2013 was a Friday.)
I very much doubt that you are absolutely certain. There are a number of outlandish but not impossible worlds in which you could believe that it is Friday, yet it might not be Friday; something akin to the world of The Truman Show comes to mind.
Unless you believe that all such alternatives are impossible, in which case you may be absolutely certain, but incorrectly so.
On the other hand, you think I’m mistaken about that.
On the third tentacle I think you are mistaken because, among other things, my mind does not assign probabilities like 0.999999999 -- it’s not capable of such granularity. My wetware rounds such numbers and so assigns the probability of 1 to the statement that today is Friday.
So if you went in to work and nobody was there, and your computer says it’s Saturday, and your watch says Saturday, and the next thirty people you ask say it’s Saturday… you would still believe it’s Friday?
If you think it’s Saturday after any amount of evidence, after assigning probability 1 to the statement “Today is Friday,” then you can’t be doing anything vaguely rational—no amount of Bayesian updating will allow you to update away from probability 1.
If you ever assign something probability 1, you can never be rationally convinced of its falsehood.
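A minimal numerical illustration of that trap, with arbitrary likelihoods standing in for “strong evidence against”:

```python
def bayes_update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Posterior P(H|E) from P(H), P(E|H) and P(E|not-H)."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1.0 - prior))

# Evidence assumed to be 1000x more likely if the hypothesis is false than if it is true.
p = 1.0
for _ in range(10):
    p = bayes_update(p, 0.001, 1.0)
print(p)  # still exactly 1.0 -- the (1 - prior) term is zero, so nothing can move it

p = 0.999999999
for _ in range(10):
    p = bayes_update(p, 0.001, 1.0)
print(p)  # collapses toward 0 after a handful of such updates
```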
Sure. But by definition they are irrational kludges made by human brains.
Bayesian updating is a theorem of probability: it is literally the formal definition of “rationally changing your mind.” If you’re changing your mind through something that isn’t Bayesian, you will get the right answer iff your method gives the same result as the Bayesian one; otherwise you’re just wrong.
Okay, so, this looks like a case of arguing over semantics.
What I am saying is: “You can never correctly give probability 1 to something, and changing your mind in a non-Bayesian manner is simply incorrect. Assuming you endeavor to be /cough/ Less Wrong, you should force your System 2 to abide by these rules.”
What I think Lumifer is saying is, “Yes, but you’re never going to succeed because human brains are crazy kludges in the first place.”
In which case we have no disagreement, though I would note that I intend to do as well as I can.
What I think Lumifer is saying is, “Yes, but you’re never going to succeed because human brains are crazy kludges in the first place.”
I am sorry, I must have been unclear. I’m not saying “yes, but”, I’m saying “no, I disagree”.
I disagree that “you can never correctly give probability 1 to something”. To avoid silly debates over 1/3^^^3 chances I’d state my position as “you can correctly assign a probability that is indistinguishable from 1 to something”.
I disagree that “changing your mind in a non-Bayesian manner is simply incorrect”. That looks to me like an overbroad claim that’s false on its face. The human mind is rich and multifaceted; trying to limit it to performing a trivial statistical calculation doesn’t seem reasonable to me.
I think the claim is that, whatever method you use, it should approximate the answer the Bayesian method would use (which is optimal, but computationally infeasible)
The thing is, from a probabilistic standpoint, one is essentially infinity—it takes an infinite number of bits of evidence to reach probability 1 from any prior strictly between 0 and 1.
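In log-odds form (a standard restatement, assuming the pieces of evidence are conditionally independent given H and given ¬H), each observation adds a finite number of bits, while probability 1 corresponds to infinite log-odds:

```latex
\log_2 \frac{P(H \mid E_1,\ldots,E_n)}{P(\neg H \mid E_1,\ldots,E_n)}
  \;=\; \log_2 \frac{P(H)}{P(\neg H)}
  \;+\; \sum_{i=1}^{n} \log_2 \frac{P(E_i \mid H)}{P(E_i \mid \neg H)}
```

Since a posterior of exactly 1 requires the left-hand side to be infinite, no finite number of bounded terms on the right can get you there from a prior strictly between 0 and 1.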
And the human mind is a horrific repurposed adaptation not at all intended to do what we’re doing with it when we try to be rational. I fail to see why indulging its biases is at all helpful.
My point, as I stated the first time, is that evolution is dumb, and does not necessarily design optimal systems. See: optic nerve connecting to the front of the retina. This is doubly true of very important, very complex systems like the brain, where everything has to be laid down layer by layer and changing some system after the fact might make the whole thing come crumbling down. The brain is simply not the optimal processing engine given the resources of the human body: it’s Azathoth’s “best guess.”
So I see no reason to pander to its biases when I can use mathematics, which I trust infinitely more, to prove that there is a rational way to make decisions.
The brain is simply not the optimal processing engine given the resources of the human body
How do you define optimality?
So I see no reason to pander to its biases when I can use mathematics
LOL.
Sorry :-/
So, since you seem to be completely convinced of the advantage of the mathematical “optimal processing” over the usual biased and messy thinking that humans normally do—could you, um, demonstrate this advantage? For example financial markets provide rapid feedback and excellent incentives. It shouldn’t be hard to exploit some cognitive bias or behavioral inefficiency on the part of investors and/or traders, should it? After all their brains are so horribly inefficient, to the point of being crippled, really...
Actually, no, I would expect that investors and/or traders would be more rational than the average for that very reason. The brain can be trained, or I wouldn’t be here; that doesn’t say much about its default configuration, though.
As far as biases—how about the existence of religion? The fact that people still deny evolution? The fact that people buy lottery tickets?
And as far as optimality goes—it’s an open question, I don’t know. I do, however, believe that the brain is not optimal, because it’s a very complex system that hasn’t had much time to be refined.
investors and/or traders would be more rational than the average
That’s not good enough—you can “use mathematics” and that gives you THE optimal result, the very best possible—right? As such, anything not the best possible is inferior, even if it’s better than the average. So by being purely rational you still should be able to extract money out of the market taking it from investors who are merely better than the not-too-impressive average.
As to optimality, unless you define it *somehow* the phrase “brain is not optimal” has no meaning.
I am not perfectly rational. I do not have access to all the information I have. That is why am I here: to be Less Wrong.
Now, I can attempt to use Bayes’ Theorem on my own lack-of-knowledge, and predict probabilities of probabilities—calibrate myself, and learn to notice when I’m missing information—but that adds more uncertainty; my performance drifts back towards average.
As to optimality, unless you define it somehow the phrase “brain is not optimal” has no meaning.
Not at all. I can define a series of metrics—energy consumption and “win” ratio being the most obvious—and define an n-dimensional function on those metrics, and then prove that, given bounds in all directions, a maximum exists so long as my function meets certain criteria (mostly continuity).
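For reference, the standard result being appealed to here is the extreme value theorem; note that it guarantees the existence of a maximum, not its uniqueness:

```latex
f : K \to \mathbb{R} \ \text{continuous},\quad K \subset \mathbb{R}^n \ \text{closed and bounded}
\;\Longrightarrow\; \exists\, x^{*} \in K \ \text{such that}\ f(x^{*}) \ge f(x) \ \text{for all } x \in K.
```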
I can note that given the space of possible functions and metrics, the chances of my brain being optimal by any of them is extremely low. I can’t really say much about brain-optimality mostly because I don’t understand enough biology to understand how much energy draw is too much, and the like; it’s trivial to show that our brain is not an optimal mind under unbounded resources.
Which, in turn, is really what we care about here—energy is abundant, healthcare is much better than in the ancestral environment, so if it turns out our health takes a hit because of optimizing for intelligence somehow we can afford it.
I can define a series of metrics—energy consumption and “win” ratio being the most obvious—and define an n-dimensional function on those metrics, and then prove that, given bounds in all directions, a maximum exists
I don’t think you can guarantee ONE maximum. But in any case, the vastness of the space of all n-dimensional functions makes the argument unpersuasive. Let’s get a bit closer to the common, garden-variety reality and ask a simpler question. In which directions do you think human brain should change/evolve/mutate to become more optimal? And in these directions, is the further the better or there is a point beyond which one should not go?
so if it turns out our health takes a hit because of optimizing for intelligence somehow we can afford it
Um, I have strong doubts about that. Your body affects your mind greatly (not to mention your quality of life).
it is literally the formal definition of “rationally changing your mind.”
No, unless you define “rationally changing your mind” this way in which case it’s just a circle.
If you’re changing your mind through something that isn’t Bayesian, you will get the right answer iff your method gives the same result as the Bayesian one; otherwise you’re just wrong.
Nope.
The ultimate criterion of whether the answer is the right one is real life.
On the third tentacle I think you are mistaken because, among other things, my mind does not assign probabilities like 0.999999999 -- it’s not capable of such granularity.
While I’m not certain, I’m fairly confident that most people’s minds don’t assign probabilities at all. At least when this thread began, it was about trying to infer implicit probabilities based on how people update their beliefs; if there is any situation that would lead you to conclude that it’s not Friday, then that would suffice to prove that your mind’s internal probability that it is Friday is not 1.
Most of the time, when people talk about probabilities or state the probabilities they assign to something, they’re talking about loose, verbal estimates, which are created by their conscious minds. There are various techniques for trying to make these match up to the evidence the person has, but in the end they’re still just basically guesses at what’s going on in your subconscious. Your conscious mind is capable of assigning probabilities like 0.999999999.
Taking a (modified) page from Randaly’s book, I would define absolute certainty as “so certain that one cannot conceive of any possible evidence which might convince one that the belief in question is false”. Since you can conceive of the brain-in-the-vat scenario and believe that it is not impossible, I would say that you cannot be absolutely certain of anything, including the axioms and logic of the world you know (even the rejection of absolute certainty).
-Thomas Jefferson
One who possesses a maximum-entropy prior is further from the truth than one who possesses an inductive prior riddled with many specific falsehoods and errors. Or more to the point, someone who endorses knowing nothing as a desirable state for fear of accepting falsehoods is further from the truth than somebody who believes many things, some of them false, but tries to pay attention and go on learning.
How about “If you know nothing and are willing to learn, you’re closer to the truth than someone who’s attached to falsehoods”? Even then, I suppose you’d need to throw in something about the speed of learning.
It would seem that the difference of opinion here originates in the definition of further. Someone who knows nothing is further (in the information-theoretic sense) from the truth than someone who believes a falsehood, assuming that the falsehood has at least some basis in reality (even if only an accidental relation), because they must flip more bits of their belief (or lack thereof) to arrive at something resembling truth. On the other hand, in the limited, human, psychological sense, they are closer, because they have no attachments to relinquish, and they will not object to having their state of ignorance lifted from them, as one who believes in falsehoods might object to having their state of delusion destroyed.
Right, I’d take it as a statement on how humans actually think, not how a perfect rationalist thinks. Or maybe how most humans think since humans can be unattached to their beliefs.
To me “filled with falsehoods and errors” translates into more falsehoods than “some”. Though I agree its not a very good quote within the context of LW.
-LessWrong Community
Maybe it’s just where my mind was when I read it but I interpreted the quote as meaning something more like:
“It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.”
In what units does one measure distance from the truth, and in what manner?
Bits of Shannon entropy.
That’s half of the answer. In what manner does one measure the number of bits of Shannon entropy that a person has?
If you make a numerical statement of your confidence -- P(A) = X, 0 < X < 1 -- measuring the shannon entropy of that belief is a simple matter of observing the outcome and taking the binary logarithm of your prediction or the converse of it, depending on what came true. S is shannon entropy: If A then S = log2(X), If ¬A then S = log2(1 - X).
The lower the magnitude of the resulting negative real, the better you faired.
That allows a prediction/confidence/belief to be measured. How do you total a person?
Simple, under dubiously ethical and physically possible conditions, you turn their internal world model into a formal bayesian network, and for every possible physical and mathematical observation and outcome, do the above calculation. Sum, print, idle.
It’s impossible in practise, but only like, four line formal definition.
How do you measure someone whose internal world model is not isomorphic to one formal Bayesian network (for example, someone who is completely certain of something)? Should it be the case that someone whose world model contains fewer possible observations has a major advantage in being closer to the truth?
Note also that a perfect Bayesian will score lower than some gamblers using this scheme. Betting everything on black does better than a fair distribution almost half the time.
I am not very certain that humans actually can have an internal belief model that isn’t isomorphic to some bayesian network. Anyone who proclaims to be absolutely certain; I suspect that they are in fact not.
How do you account for people falling prey to things like the conjunction fallacy?
I don’t think people just miscalculate conjunctions. Everyone will tell you that HFFHF is less probable than H, HF, or HFF even. It’s when it gets long and difference is small and the strings are quite specially crafted, errors appear. And with the scenarios, a more detailed scenario looks more plausibly a product of some deliberate reasoning, plus, existence of one detailed scenario is information about existence of other detailed scenarios leading to the same outcome (and it must be made clear in the question that we are not asking about the outcome but about everything happening precisely as scenario specifies it).
On top of that, the meaning of the word “probable” in everyday context is somewhat different—a proper study should ask people to actually make bets. All around it’s not clear why people make this mistake, but it is clear that it is not some fully general failure to account for conjunctions.
edit: actually, just read the wikipedia article on the conjunction fallacy. When asking about “how many people out of 100”, nobody gave a wrong answer. Which immediately implies that the understanding of “probable” has been an issue, or some other cause, but not some general failure to apply conjunctions.
There have been studies that asked people to make bets. Here’s an example. It makes no difference—subjects still arrive at fallacious conclusions. That study also goes some way towards answering your concern about ambiguity in the question. The conjunction fallacy is a pretty robust phenomenon.
I’ve just read the example beyond it’s abstract. Typical psychology: the actual finding was that there were fewer errors with the bet (even though the expected winning was very tiny, and the sample sizes were small so the difference was only marginally significant), and also approximately half of the questions were answered correctly, and the high prevalence of “conjunction fallacy” was attained by considering at least one error over many questions.
How is it a “robust phenomenon” if it is negated by using strings of larger length difference in the head-tail example or by asking people to answer in the N out of 100 format?
I am thinking that people have to learn reasoning to answer questions correctly, including questions about probability, for which the feedback they receive from the world is fairly noisy. And consequently they learn that fairly badly, or mislearn it all-together due to how more detailed accounts are more frequently the correct ones in their “training dataset” (which consists of detailed correct accounts of actual facts and fuzzy speculations).
edit: Let’s say, the notion that people are just generally not accounting for conjunction is sort of like Newtonian mechanics. In a hard science—physics—Newtonian mechanics was done for as a fundamental account of reality once conditions were found where it did not work. Didn’t matter any how “robust” it was. In a soft science—psychology—an approximate notion persists in spite of this, as if it should be decided by some sort of game of tug between experiments in favour and against that notion. If we were doing physics like this, we would never have moved beyond Newtonian mechanics.
Framing the problem in terms of frequencies mitigates a number of probabilistic fallacies, not just the conjunction fallacy. It also mitigates, for instance, base rate neglect. So whatever explanation you have for the difference between the probability and frequency framings shouldn’t rely on peculiarities of the conjunction fallacy case. A plausible hypothesis is that presenting frequency information simply makes algorithmic calculation of the result easier, and so subjects are no longer reliant on fallible heuristics in order to arrive at the conclusion.
The claim of the heuristics and biases program is that the conjunction fallacy is a manifestation of the representativeness heuristic. One does not need to suppose that there is a misunderstanding about the word “probability” involved (if there is, how do you account for the betting experiments?). The difference in the frequency framing is not that it makes it clear what the experimenter means by “probability”, it’s that the ease of algorithmic reasoning in that case reduces reliance on the representativeness heuristic. Further evidence for this is that the fallacy is also mitigated if the question is framed in terms of single-case probabilities, but with a diagram clarifying the relationship between properties in the problem. If the effect were merely due to a misunderstanding about what is meant by “probability”, why would there be a mitigation of the fallacy in this case? Does the diagram somehow make it clear what the experimenter means by “probability”?
In response to your Newtonian physics example, it’s simply not true that scientists abandoned Newtonian mechanics as soon as they found conditions under which it appeared not to work. Rather, they tried to find alternative explanations that preserved Newtonian mechanics, such as positing the existence of Uranus to account for discrepancies in planetary orbits. It was only once there was a better theory available that Newtonian mechanics was abandoned. Is there currently a better account of probabilistic fallacies than that offered by the heuristics and biases program? And do you think that there is anything about the conjunction fallacy research that makes it impossible to fit the effect within the framework of the heuristics and biases program?
I’m not familiar with the effect of variable string length difference, and quick Googling isn’t helping. If you could direct me to some research on this, I’d appreciate it.
There’s only room for making it easier when the word “probable” is not synonymous with “larger N out of 100“. So I maintain that alternate understanding of the word “probable” (and perhaps also an invalid idea of what one should bet on) are relevant. edit: to clarify, I can easily imagine an alternate cultural context where “blerg” is always, universally, invariably, a shorthand for “N out of 100”. In such context, asking about “N out of 100” or about “blerg” should produce nearly identical results.
Also, in your study, about half of the questions were answered correctly.
I guess that’s fair enough, albeit its not clear how that works on Linda-like examples.
In my opinion its just that through their life people are exposed to a training dataset which consists of
Detailed accounts of real events.
Speculative guesses.
and (1) is much more commonly correct than (2) even though (1) is more conjunctive. So people get mis-trained through a biased training set. A very wide class of learning AIs would get mis-trained by this sort of thing too.
The point is that you can’t pull the representativeness trick with e.g. R vs RGGRRGRRRGG . All research I ever seen had strings with small % difference in their length. I am assuming that the research is strongly biased towards researching something un-obvious, while it is fairly obvious that R is more probable than RGGRRGRRRGG and frankly we do not expect to find anyone who thinks that RGGRRGRRRGG is more probable than R.
Maybe a misunderstanding about the word is relevant, but it clearly isn’t entirely responsible for the effect. Like I said, the conjunction fallacy is much less common if the structure of the question is made clear to the subject using a diagram (e.g. if it is made obvious that feminist bank tellers are a proper subset of bank tellers). It seems implausible that providing this extra information will change the subject’s judgment about what the experimenter means by “probable”.
The description given of Linda in the problem statement (outspoken philosophy major, social justice activist) is much more representative of feminist bank tellers than it is of bank tellers.
In the study you quoted, a bit less than half of the answers were wrong, in sharp contrast to the Linda example, where 90% of the answers were wrong. It implies that at least 40% of the failures were a result of misunderstanding. This only leaves 60% for fallacies. Of that 60%, some people have other misunderstandings and other errors of reasoning, and some people are plain stupid (10% are the dumbest people out of 10, i.e. have an IQ of 80 or less), leaving easily less than 50% for the actual conjunction fallacy.
Why so? If the word “probable” is fairly ill defined (as well as the whole concept of probability), then it will or will not acquire specific meaning depending on the context.
Then the representativeness works in the opposite direction from what’s commonly assumed of the dice example.
Speaking of which, “is” is sometimes used to describe traits for identification purposes, e.g. “in general, an alligator is shorter and less aggressive than a crocodile” is more correct than “in general, an alligator is shorter than a crocodile”. If you were to compile traits for finding Linda, you’d pick the most descriptive answer. People know they need to do something with what they are told, they don’t necessarily understand correctly what they need to do.
Poor brain design.
Honestly, I could do way better if you gave me a millenium.
You know, at some point, whoever’s still alive when that becomes not-a-joke needs to actually test this.
Because I’m just curious what a human-designed human would look like.
How likely do you believe it is that there exists a human who is absolutely certain of something?
Is this a testable assertion? How do you determine whether someone is, in fact, absolutely certain?
It’s not unheard of people to bet their life on some belief of theirs.
That doesn’t show that they’re absolutely certain; it just shows that the expected value of the payoff outweighs the chance of them dying.
The real issue with this claim is that people don’t actually model everything using probabilities, nor do they actually use Bayesian belief updating. However, the closest analogue would be people who will not change their beliefs in literally any circumstances, which is clearly false. (Definitely false if you’re considering, e.g. surgery or cosmic rays; almost certainly false if you only include hypotheticals like cult leaders disbanding the cult or personally attacking the individual.)
Is someone absolutely certain if the say that they cannot imagine any circumstances under which they might change their beliefs (or, alternately, can imagine only circumstances which they are absolutely certain will not happen)? It would seem to be a better definition, as it defines probability (and certainty) as a thing in the mind, rather than outside.
In this case, I would see no contradiction as declaring someone to be absolutely certain of their beliefs, though I would say (with non-absolute certainty) that they are incorrect. Someone who believes that the Earth is 6000 years old, for example, may not be swayed by any evidence short of the Christian god coming down and telling them otherwise, an event to which they may assign 0.0 probability (because they believe that it’s impossible for their god to contradict himself, or something like that).
Further, I would exclude methods of changing someone’s mind without using evidence (surgery or cosmic rays). I can’t quite put it into words, but it seems like the fact that it isn’t evidence and instead changes probabilities directly means that it doesn’t so much affect beliefs as it replaces them.
Disagree. This would be a statement about their imagination, not about reality.
Also, people are not well calibrated on this sort of thing. People are especially poorly calibrated on this sort of thing in a social context, where others are considering their beliefs.
ETA: An example: While I haven’t actually done this, I would expect that a significant fraction of religious people would reply to such a question by saying that they would never change their beliefs because of their absolute faith. I can’t be bothered to do enough googling to find a specific interviewee about faith who then became an atheist, but I strongly suspect that some such people actually exist.
Yeah, fair enough.
You are correct. I am making my statements on the basis that probability is in the mind, and as such it is perfectly possible for someone to have a probability which is incorrect. I would distinguish between a belief which it is impossible to disprove, and one which someone believes it is impossible to disprove, and as “absolutely certain” seems to refer to a mental state, I would give it the definition of the latter.
(I suspect that we don’t actually disagree about anything in reality. I further suspect that the phrase I used regarding imagination and reality was misleading; sorry, it’s my standard response to thought experiments based on people’s ability to imagine things.)
I’m not claiming that there is a difference between their stated probabilities and the actual, objective probabilities. I’m claiming that there is a difference between their stated probabilities and the probabilities that they actually hold. The relevant mental states are the implicit probabilities from their internal belief system; while words can be some evidence about this, I highly suspect, for reasons given above, that anybody who claims to be 100% confident of something is simply wrong in mapping their own internal beliefs, which they don’t have explicit access to and aren’t even stored as probabilities (?), over onto explicitly stated probabilities.
Suppose that somebody stated that they cannot imagine any circumstances under which they might change their beliefs. This is a statement about their ability to imagine situations; it is not a proof that no such situation could possibly exist in reality. The fact that it is not is demonstrated by my claim that there are people who did make that statement, but then actually encountered a situation that caused them to change their belief. Clearly, these people’s statement that they were absolutely, 100% confident of their belief was incorrect.
I would still say that while belief-altering experiences are certainly possible, even for people with stated absolute certainty, I am not convinced that they can imagine them occurring with nonzero probability. In fact, if I had absolute certainty about something, I would as a logical consequence be absolutely certain that any disproof of that belief could not occur.
However, it is also not unreasonable that someone does not believe what they profess to believe in some practically testable manner. For example, someone who states that they have absolute certainty that their deity will protect them from harm, but still declines to walk through a fire, would fall into such a category—even if they are not intentionally lying, on some level they are not absolutely certain.
I think that some of our disagreement arises from the fact that I, being relatively uneducated (for this particular community) about Bayesian networks, am not convinced that all human belief systems are isomorphic to one. This is, however, a fault in my own knowledge, and not a strong critique of the assertion.
First, fundamentalism is a matter of theology, not of intensity of faith.
Second, what would these people do if their God appeared before them and flat out told them they’re wrong? :-D
Fixed, thanks.
Their verbal response would be that this would be impossible.
(I agree that such a situation would likely lead to them actually changing their beliefs.)
At which point you can point out to them that God can do WTF He wants and is certainly not limited by ideas of pathetic mortals about what’s impossible and what’s not.
Oh, and step back, exploding heads can be messy :-)
This is not the place to start dissecting theism, but would you be willing to concede the possible existence of people who would simply not be responsive to such arguments? Perhaps they might accuse you of lying and refuse to listen further, or refute you with some biblical verse, or even question your premises.
Of course. Stuffing fingers into your ears and going NA-NA-NA-NA-CAN’T-HEAR-YOU is a rather common debate tactic :-)
Don’t you observe people doing that to reality, rather than updating their beliefs?
That too. Though reality, of course, has ways of making sure its point of view prevails :-)
Reality has shown itself to be fairly ineffective in the short term (all of human history).
8-0
In my experience reality is very very effective. In the long term AND in the short term.
Counterexamples: Religion (essentially every one that makes claims about reality). Almost every macroeconomic theory. The War on Drugs. Abstinence-based sex education. Political positions too numerous and controversial to call out.
You are confused. I am not saying that false claims about reality cannot persist—I am saying that reality always wins.
When you die you don’t actually go to heaven—that’s Reality 1, Religion 0.
Besides, you need to look a bit more carefully at the motivations of the people involved. The goal of writing macroeconomic papers is not to reflect reality well, it is to produce publications in pursuit of tenure. The goal of the War on Drugs is not to stop drug use, it is to control the population and extract wealth. The goal of abstinence-based sex education is not to reduce pregnancy rates, it is to make certain people feel good about themselves.
Wait, isn’t that pretty much tautological, given the definition of ‘reality’?
What’s your definition of reality?
I can’t give a definition that is both very general and still useful, but roughly: reality is what determines whether a belief is true or false.
I thought you were saying that reality has a pattern of convincing people of true beliefs, not that reality is indifferent to belief.
You misunderstood. Reality has the feature of making people face the true consequences of their actions regardless of their beliefs. That’s why reality always wins.
Most of my definition of ‘true consequences’ matches my definition of ‘reality’.
Sort of. Particularly in the case of belief in an afterlife, there isn’t a person still around to face the true consequences of their actions. And even in less extreme examples, people can still convince themselves that the true consequences of their actions are different—or have a different meaning—from what they really are.
In those cases reality can take more drastic measures.
Edit: Here is the quote I should have linked to.
Believing that 2 + 2 = 5 will most likely cause one to fail to build a successful airplane, but that does not prohibit one from believing that one’s own arithmetic is perfect, and that the incompetence of others, the impossibility of flight, or the condemnation of an airplane-hating god is responsible for the failure.
See my edit. Basically, the enemy airplanes flying overhead and dropping bombs should convince you that flight is indeed possible. Also, any remaining desire you have to invent excuses will go away once one of the bombs explodes close enough to you.
What’s the goal of rationalism as a movement?
No idea. I don’t even think rationalism is a movement (in the usual sociological meaning). Ask some of the founders.
The founders don’t get to decide whether or not it is a movement, or what goal it does or doesn’t have. It turns out that many founders in this case are also influential agents, but the influential agents I’ve talked to have expressed that they expect the world to be a better place if people generally make better decisions (in cases where objectively better decision-making is a meaningful concept).
Careful, those are the kind of political claims around which there is currently so much mind-kill that I wouldn’t trust much of the “evidence” you’re using to declare them obviously false.
The general claim is one where I think it would be better to test it on historical examples.
So, because Copernicus was eventually vindicated, reality prevails in general? Only a small subset of humanity believes in science.
This is not an accurate representation of mainstream theology. Most theologists believe, for example, that it is impossible for God to do evil. See William Lane Craig’s commentary.
First, you mean Christian theology; there are a lot more theologies around.
Second, I don’t know what “mainstream” theology is—is it the official position of the Roman Catholic Church? Some common elements in Protestant theology? Does anyone care about Orthodox Christians?
Third, the question of limits on the Judeo-Christian God is a very, very old theological issue which has not been resolved to everyone’s satisfaction, and no resolution is expected.
Fourth, William Lane Craig basically evades the problem by defining good as “what God is”. God can still do anything He wants and whatever He does automatically gets defined as “good”.
Clearly they would consider this entity a false God/Satan.
This is starting to veer into free-will territory, but I don’t think God would have much problem convincing these people that He is the Real Deal. Wouldn’t be much of a god otherwise :-)
That’s vacuously true, of course. Which makes your original question meaningless as stated.
It wasn’t so much meaningless as it was rhetorical.
I cannot imagine circumstances under which I would come to believe that the Christian God exists. All of the evidence I can imagine encountering which could push me in that direction if I found it seems even better explained by various deceptive possibilities, e.g. that I’m a simulation or I’ve gone insane or what have you. But I suspect that there is some sequence of experience such that if I had it I would be convinced; it’s just too complicated for me to work out in advance what it would be. Which perhaps means I can imagine it in an abstract, meta sort of way, just not in a concrete way? Am I certain that the Christian God doesn’t exist? I admit that I’m not certain about that (heh!), which is part of the reason I’m curious about your test.
If imagination fails, consult reality for inspiration. You could look into the conversion experiences of materialist, rationalist atheists. John C Wright, for example.
So you’re effectively saying that your prior is zero and will not be budged by ANY evidence.
Hmm… smells of heresy to me… :-D
I would argue that this definition of absolute certainty is completely useless as nothing could possibly satisfy it. It results in an empty set.
If you “cannot imagine under any circumstances” your imagination is deficient.
I am not arguing that it is not an empty set. Consider it akin to the intersection of the set of natural numbers and the set of infinities; the fact that it is the empty set is meaningful. It means that by following the rules of simple, additive arithmetic one cannot reach infinity, and that if one does reach infinity, that is a good sign of an error somewhere in the calculation.
Similarly, one should not be absolutely certain if they are updating from finite evidence. Barring omniscience (infinite evidence), one cannot become absolutely/infinitely certain.
What definition of absolute certainty would you propose?
So you are proposing a definition that nothing can satisfy. That doesn’t seem like a useful activity. If you want to say that no belief can stand up to the powers of imagination, sure, I’ll agree with you. However if we want to talk about what people call “absolute certainty” it would be nice to have some agreed-on terms to use in discussing it. Saying “oh, there just ain’t no such animal” doesn’t lead anywhere.
As to what I propose, I believe that definitions serve a purpose and the same thing can be defined differently in different contexts. You want a definition of “absolute certainty” for which purpose and in which context?
You are correct, I have contradicted myself. I failed to mention the possibility of people who are not reasoning perfectly, or even close to it, and who can therefore mistakenly arrive at absolute certainty. I am not arguing that their certainty is fake—it is a mental state, after all—but rather that it cannot be reached using proper rational thought.
What you have pointed out to me is that absolute certainty is not, in fact, a useful thing. It is the result of a mistake in the reasoning process. An inept mathematician can add together a large but finite series of natural numbers, write down “infinity” after the equals sign, and thereafter go about believing that the sum of a certain series is infinite.
The sum is not, in fact, infinite; no finite set of finite things can add up to an infinity, just as no finite set of finite pieces of evidence can produce absolute, infinitely strong certainty. But if we use some other process (just as the mathematician’s brain must somehow output “infinity” from the finite inputs it has been given), we can generate absolute certainty from finite evidence—it simply isn’t correct. It doesn’t correspond to anything that is either impossible or inevitable in the real world, just as the inept mathematician’s infinity does not correspond to a real infinity. Rather, both are merely beliefs about the real world.
While I do not believe that there are any rationally acquired beliefs which can stand up to the powers of imagination (though I am not absolutely certain of this belief), I do believe that irrational beliefs can. See my above description of the hypothetical young-earther; they may be able to conceive of a circumstance which would falsify their belief (i.e. their god telling them that it isn’t so), but they cannot conceive of that circumstance actually occurring (they are absolutely certain that their god does not contradict himself, which may have its roots in other absolutely certain beliefs or may be simply taken as a given).
:-) As in, like, every single human being...
Yep. Provided you limit “proper rational thought” to Bayesian updating of probabilities this is correct. Well, as long your prior isn’t 1, that is.
I’d say that if you don’t require internal consistency from your beliefs then yes, you can have a subjectively certain belief which nothing can shake. If you’re not bothered by contradictions, well then, doublethink is like Barbie—everything is possible with it.
Well, yes.
That is the point.
Nothing is absolutely certain.
Why does a deficient imagination disqualify a brain from being certain?
Vice versa. Deficient imagination allows a brain to be certain.
… ergo there exist human brains that are certain.
if people exist that are absolutely certain of something, I want to believe that they exist.
So… a brain is allowed to be certain because it can’t tell it’s wrong?
Tangent: Does that work?
Nope. “I’m certain that X is true now” is different from “I am certain that X is true and will be true forever and ever”.
I am absolutely certain today is Friday. Ask me tomorrow whether my belief has changed.
In fact, unless you’re insane, you probably already believe that tomorrow will not be Friday!
(That belief is underspecified: “today” is a notion that varies independently; it doesn’t point to a specific date. Today you believe that August 16th, 2013 is a Friday; tomorrow, you will presumably continue to believe that August 16th, 2013 was a Friday.)
Not exactly that but yes, there is the reference issue which makes this example less than totally convincing.
The main point still stands, though—certainty of a belief and its time-invariance are different things.
I very much doubt that you are absolutely certain. There are a number of outlandish but not impossible worlds in which you could believe that it is Friday, yet it might not be Friday; something akin to the world of The Truman Show comes to mind.
Unless you believe that all such alternatives are impossible, in which case you may be absolutely certain, but incorrectly so.
I don’t have to believe that the alternatives are impossible; I just have to be certain that the alternatives are not exemplified.
Define “absolute certainty”.
In the brain-in-the-vat scenario which is not impossible I cannot be certain of anything at all. So what?
So you’re not absolutely certain. The probability you assign to “Today is Friday” is, oh, nine nines, not 1.
Nope. I assign it the probability of 1.
On the other hand, you think I’m mistaken about that.
On the third tentacle I think you are mistaken because, among other things, my mind does not assign probabilities like 0.999999999 -- it’s not capable of such granularity. My wetware rounds such numbers and so assigns the probability of 1 to the statement that today is Friday.
So if you went in to work and nobody was there, and your computer said it was Saturday, and your watch said Saturday, and the next thirty people you asked said it was Saturday… would you still believe it’s Friday?
If you think it’s Saturday after any amount of evidence, after assigning probability 1 to the statement “Today is Friday,” then you can’t be doing anything vaguely rational—no amount of Bayesian updating will allow you to update away from probability 1.
If you ever assign something probability 1, you can never be rationally convinced of its falsehood.
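To make that concrete, here is a minimal sketch of the arithmetic (the likelihood numbers are invented; nothing is assumed beyond Bayes’ rule itself):

    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        # Posterior P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
        numerator = p_evidence_if_true * prior
        return numerator / (numerator + p_evidence_if_false * (1 - prior))

    # Evidence a million times more likely if the hypothesis is false:
    print(bayes_update(0.999, 1e-6, 1.0))  # ~0.001 -- even a strong belief moves
    print(bayes_update(1.0,   1e-6, 1.0))  # 1.0 -- probability 1 never moves

Whatever the likelihoods are, a prior of 1 makes the second term in the denominator vanish, so the posterior is always 1.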
That’s not true. There are ways to change your mind other than through Bayesian updating.
Sure. But by definition they are irrational kludges made by human brains.
Bayesian updating is a theorem of probability: it is literally the formal definition of “rationally changing your mind.” If you’re changing your mind through something that isn’t Bayesian, you will get the right answer iff your method gives the same result as the Bayesian one; otherwise you’re just wrong.
The original point was that human brains are not all Bayesian agents. (Specifically, that they could be completely certain of something)
… Okay?
Okay, so, this looks like a case of arguing over semantics.
What I am saying is: “You can never correctly give probability 1 to something, and changing your mind in a non-Bayesian manner is simply incorrect. Assuming you endeavor to be /cough/ Less Wrong, you should force your System 2 to abide by these rules.”
What I think Lumifer is saying is, “Yes, but you’re never going to succeed because human brains are crazy kludges in the first place.”
In which case we have no disagreement, though I would note that I intend to do as well as I can.
I wasn’t restricting the domain to the brains of people who intrinsically value being rational agents.
I am sorry, I must have been unclear. I’m not staying “yes, but”, I’m saying “no, I disagree”.
I disagree that “you can never correctly give probability 1 to something”. To avoid silly debates over 1/3^^^3 chances I’d state my position as “you can correctly assign a probability that is indistinguishable from 1 to something”.
I disagree that “changing your mind in a non-Bayesian manner is simply incorrect”. That looks to me like an overbroad claim that’s false on its face. The human mind is rich and multifaceted; trying to limit it to performing a trivial statistical calculation doesn’t seem reasonable to me.
I think the claim is that, whatever method you use, it should approximate the answer the Bayesian method would use (which is optimal, but computationally infeasible)
The thing is, from a probabilistic standpoint, one is essentially infinity—it takes an infinite number of bits of evidence to reach probability 1 from any prior strictly between 0 and 1.
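A quick way to see it is to work in log-odds, where each 2:1 likelihood ratio adds one bit of evidence; here is a minimal sketch (the numbers are arbitrary):

    import math

    def log_odds_bits(p):
        # Log-odds in bits: log2(p / (1 - p)); this diverges as p approaches 1.
        return math.log2(p / (1 - p))

    def update_with_bits(p, bits):
        # Posterior after `bits` bits of evidence (each bit = one 2:1 likelihood ratio).
        odds = (p / (1 - p)) * 2 ** bits
        return odds / (1 + odds)

    print(log_odds_bits(0.999999999))   # ~29.9 bits -- large, but finite
    # The next line prints 1.0 only because of floating-point rounding;
    # no finite number of bits takes a prior strictly between 0 and 1 to exactly 1.
    print(update_with_bits(0.5, 100))

Probability 1 corresponds to infinite log-odds, which is why no finite pile of evidence ever gets you there.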
And the human mind is a horrific repurposed adaptation not at all intended to do what we’re doing with it when we try to be rational. I fail to see why indulging its biases is at all helpful.
Given that here rationality is often defined as winning, it seems to me you think natural selection works in the opposite direction.
… Um. No?
I might have been a little hyperbolic there—the brain is meant to model the world—but...
Okay, look, have you read the Sequences on evolution? Because Eliezer makes the point much better than I can as of yet.
Regardless of EY, what is your point? What are you trying to express?
*sigh*
My point, as I stated the first time, is that evolution is dumb, and does not necessarily design optimal systems. See: optic nerve connecting to the front of the retina. This is doubly true of very important, very complex systems like the brain, where everything has to be laid down layer by layer and changing some system after the fact might make the whole thing come crumbling down. The brain is simply not the optimal processing engine given the resources of the human body: it’s Azathoth’s “best guess.”
So I see no reason to pander to its biases when I can use mathematics, which I trust infinitely more, to prove that there is a rational way to make decisions.
How do you define optimality?
LOL.
Sorry :-/
So, since you seem to be completely convinced of the advantage of the mathematical “optimal processing” over the usual biased and messy thinking that humans normally do—could you, um, demonstrate this advantage? For example, financial markets provide rapid feedback and excellent incentives. It shouldn’t be hard to exploit some cognitive bias or behavioral inefficiency on the part of investors and/or traders, should it? After all, their brains are so horribly inefficient, to the point of being crippled, really...
Actually, no, I would expect that investors and/or traders would be more rational than average for that very reason. The brain can be trained, or I wouldn’t be here; that doesn’t say much about its default configuration, though.
As far as biases—how about the existence of religion? The fact that people still deny evolution? The fact that people buy lottery tickets?
And as far as optimality goes—it’s an open question, I don’t know. I do, however, believe that the brain is not optimal, because it’s a very complex system that hasn’t had much time to be refined.
That’s not good enough—you can “use mathematics” and that gives you THE optimal result, the very best possible—right? As such, anything not the best possible is inferior, even if it’s better than the average. So by being purely rational you still should be able to extract money out of the market taking it from investors who are merely better than the not-too-impressive average.
As to optimality, unless you define it *somehow*, the phrase “the brain is not optimal” has no meaning.
That is true.
I am not perfectly rational. I do not have access to all the information I have. That is why I am here: to be Less Wrong.
Now, I can attempt to use Bayes’ Theorem on my own lack-of-knowledge, and predict probabilities of probabilities—calibrate myself, and learn to notice when I’m missing information—but that adds more uncertainty; my performance drifts back towards average.
Not at all. I can define a series of metrics—energy consumption and “win” ratio being the most obvious—and define an n-dimensional function on those metrics, and then prove that, given bounds in all directions, a maximum exists so long as my function satisfies certain criteria (mostly continuity).
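(Strictly speaking, the existence claim is the extreme value theorem, and it also needs the bounded region to be closed; a minimal statement:

    f : K \to \mathbb{R} \text{ continuous}, \quad K \subset \mathbb{R}^n \text{ compact (closed and bounded)} \;\Longrightarrow\; \exists\, x^{*} \in K \text{ such that } f(x^{*}) = \max_{x \in K} f(x)

Note that this guarantees at least one maximizer, not a unique one.)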
I can note that, given the space of possible functions and metrics, the chance of my brain being optimal by any of them is extremely low. I can’t really say much about brain-optimality, mostly because I don’t know enough biology to judge how much energy draw is too much, and the like; but it’s trivial to show that our brain is not an optimal mind under unbounded resources.
Which, in turn, is really what we care about here—energy is abundant, healthcare is much better than in the ancestral environment, so if it turns out our health takes a hit because of optimizing for intelligence somehow we can afford it.
I don’t think you can guarantee ONE maximum. But in any case, the vastness of the space of all n-dimensional functions makes the argument unpersuasive. Let’s get a bit closer to the common, garden-variety reality and ask a simpler question. In which directions do you think the human brain should change/evolve/mutate to become more optimal? And in these directions, is it the further the better, or is there a point beyond which one should not go?
Um, I have strong doubts about that. Your body affects your mind greatly (not to mention your quality of life).
Yes.
No, unless you define “rationally changing your mind” this way in which case it’s just a circle.
Nope.
The ultimate criterion of whether the answer is the right one is real life.
While I’m not certain, I’m fairly confident that most people’s minds don’t assign probabilities at all. At least when this thread began, it was about trying to infer implicit probabilities from how people update their beliefs; if there is any situation that would lead you to conclude that it’s not Friday, that would suffice to show that your mind’s internal probability for “today is Friday” is not 1.
Most of the time, when people talk about probabilities or state the probabilities they assign to something, they’re talking about loose, verbal estimates created by their conscious minds. There are various techniques for trying to make these match up to the evidence you have, but in the end they’re still basically guesses at what’s going on in your subconscious. Your conscious mind is capable of assigning probabilities like 0.999999999.
Taking a (modified) page from Randaly’s book, I would define absolute certainty as “so certain that one cannot conceive of any possible evidence which might convince one that the belief in question is false”. Since you can conceive of the brain-in-the-vat scenario and believe that it is not impossible, I would say that you cannot be absolutely certain of anything, including the axioms and logic of the world you know (even the rejection of absolute certainty).