Is this a testable assertion? How do you determine whether someone is, in fact, absolutely certain?
It’s not unheard of for people to bet their lives on some belief of theirs.
That doesn’t show that they’re absolutely certain; it just shows that the expected value of the payoff outweighs the chance of them dying.
The real issue with this claim is that people don’t actually model everything using probabilities, nor do they actually use Bayesian belief updating. However, the closest analogue would be people who would not change their beliefs under literally any circumstances, and the claim that such people exist is clearly false. (Definitely false if you count, e.g., surgery or cosmic rays as circumstances; almost certainly false if you only include evidential hypotheticals like cult leaders disbanding the cult or personally attacking the individual.)
Is someone absolutely certain if they say that they cannot imagine any circumstances under which they might change their beliefs (or, alternatively, can imagine only circumstances which they are absolutely certain will not happen)? It would seem to be a better definition, as it defines probability (and certainty) as a thing in the mind, rather than outside.
In this case, I would see no contradiction in declaring someone to be absolutely certain of their beliefs, though I would say (with non-absolute certainty) that they are incorrect. Someone who believes that the Earth is 6000 years old, for example, may not be swayed by any evidence short of the Christian god coming down and telling them otherwise, an event to which they may assign 0.0 probability (because they believe that it’s impossible for their god to contradict himself, or something like that).
Further, I would exclude methods of changing someone’s mind without using evidence (surgery or cosmic rays). I can’t quite put it into words, but it seems like the fact that it isn’t evidence and instead changes probabilities directly means that it doesn’t so much affect beliefs as replace them.
Disagree. This would be a statement about their imagination, not about reality.
Also, people are not well calibrated on this sort of thing, and they are especially poorly calibrated in a social context, where others are considering their beliefs.
ETA: An example: While I haven’t actually done this, I would expect that a significant fraction of religious people would reply to such a question by saying that they would never change their beliefs because of their absolute faith. I can’t be bothered to do enough googling to find a specific person who was interviewed about their faith and later became an atheist, but I strongly suspect that some such people actually exist.
Yeah, fair enough.
You are correct. I am making my statements on the basis that probability is in the mind, and as such it is perfectly possible for someone to have a probability which is incorrect. I would distinguish between a belief which it is impossible to disprove, and one which someone believes it is impossible to disprove, and as “absolutely certain” seems to refer to a mental state, I would give it the definition of the latter.
(I suspect that we don’t actually disagree about anything in reality. I further suspect that the phrase I used regarding imagination and reality was misleading; sorry, it’s my standard response to thought experiments based on people’s ability to imagine things.)
I’m not claiming that there is a difference between their stated probabilities and the actual, objective probabilities. I’m claiming that there is a difference between their stated probabilities and the probabilities that they actually hold. The relevant mental states are the implicit probabilities of their internal belief system; while words can be some evidence about this, I highly suspect, for reasons given above, that anybody who claims to be 100% confident of something is simply wrong in mapping their own internal beliefs, to which they don’t have explicit access and which aren’t even stored as probabilities (?), onto explicitly stated probabilities.
Suppose that somebody stated that they cannot imagine any circumstances under which they might change their beliefs. This is a statement about their ability to imagine situations; it is not a proof that no such situation could possibly exist in reality. That it is not such a proof is demonstrated by my claim that there are people who did make that statement, but then actually encountered a situation that caused them to change their belief. Clearly, these people’s statement that they were absolutely, 100% confident of their belief was incorrect.
I would still say that while belief-altering experiences are certainly possible, even for people with stated absolute certainty, I am not convinced that they can imagine them occurring with nonzero probability. In fact, if I had absolute certainty about something, I would as a logical consequence be absolutely certain that any disproof of that belief could not occur.
However, it is also not unreasonable that someone does not believe what they profess to believe in some practically testable manner. For example, someone who states that they have absolute certainty that their deity will protect them from harm, but still declines to walk through a fire, would fall into such a category—even if they are not intentionally lying, on some level they are not absolutely certain.
I think that some of our disagreement arises from the fact that I, being relatively uneducated (for this particular community) about Bayesian networks, am not convinced that all human belief systems are isomorphic to one. This is, however, a fault in my own knowledge, and not a strong critique of the assertion.
First, fundamentalism is a matter of theology, not of intensity of faith.
Second, what would these people do if their God appeared before them and flat out told them they’re wrong? :-D
Fixed, thanks.
Their verbal response would be that this would be impossible.
(I agree that such a situation would likely lead to them actually changing their beliefs.)
At which point you can point out to them that God can do WTF He wants and is certainly not limited by ideas of pathetic mortals about what’s impossible and what’s not.
Oh, and step back, exploding heads can be messy :-)
This is not the place to start dissecting theism, but would you be willing to concede the possible existence of people who would simply not be responsive to such arguments? Perhaps they might accuse you of lying and refuse to listen further, or refute you with some biblical verse, or even question your premises.
Of course. Stuffing fingers into your ears and going NA-NA-NA-NA-CAN’T-HEAR-YOU is a rather common debate tactic :-)
Don’t you observe people doing that to reality, rather than updating their beliefs?
That too. Though reality, of course, has ways of making sure its point of view prevails :-)
Reality has shown itself to be fairly ineffective in the short term (all of human history).
8-0
In my experience reality is very very effective. In the long term AND in the short term.
Counterexamples: Religion (Essentially all of them that make claims about reality). Almost every macroeconomic theory. The War on Drugs. Abstinence-based sex education. Political positions too numerous and controversial to call out.
You are confused. I am not saying that false claims about reality cannot persist—I am saying that reality always wins.
When you die you don’t actually go to heaven—that’s Reality 1, Religion 0.
Besides, you need to look a bit more carefully at the motivations of the people involved. The goal of writing macroeconomic papers is not to reflect reality well, it is to produce publications in pursuit of tenure. The goal of the War on Drugs is not to stop drug use, it is to control the population and extract wealth. The goal of abstinence-based sex education is not to reduce pregnancy rates, it is to make certain people feel good about themselves.
Wait, isn’t that pretty much tautological, given the definition of ‘reality’?
What’s your definition of reality?
I can’t give a definition that is both very general and still useful, but reality is what determines whether a belief is true or false.
I thought you were saying that reality has a pattern of convincing people of true beliefs, not that reality is indifferent to belief.
You misunderstood. Reality has the feature of making people face the true consequences of their actions regardless of their beliefs. That’s why reality always wins.
Most of my definition of ‘true consequences’ matches my definition of ‘reality’.
Sort of. Particularly in the case of belief in an afterlife, there isn’t a person still around to face the true consequences of their actions. And even in less extreme examples, people can still convince themselves that the true consequences of their actions are different—or have a different meaning—from what they really are.
In those cases reality can take more drastic measures.
Edit: Here is the quote I should have linked to.
Believing that 2 + 2 = 5 will most likely cause one to fail to build a successful airplane, but that does not prohibit one from believing that one’s own arithmetic is perfect, and that the incompetence of others, the impossibility of flight, or the condemnation of an airplane-hating god is responsible for the failure.
See my edit. Basically, the enemy airplanes flying overhead and dropping bombs should convince you that flight is indeed possible. Also, any remaining desire you have to invent excuses will go away once one of the bombs explodes close enough to you.
What’s the goal of rationalism as a movement?
No idea. I don’t even think rationalism is a movement (in the usual sociological meaning). Ask some of the founders.
The founders don’t get to decide whether or not it is a movement, or what goal it does or doesn’t have. It turns out that many founders in this case are also influential agents, but the influential agents I’ve talked to have expressed that they expect the world to be a better place if people generally make better decisions (in cases where objectively better decision-making is a meaningful concept).
Careful, those are the kinds of political claims where there is currently so much mind-kill that I wouldn’t trust much of the “evidence” you’re using to declare them obviously false.
The general claim is one that I think would be better tested on historical examples.
So, because Copernicus was eventually vindicated, reality prevails in general? Only a smaller subset of humanity believes in science.
This is not an accurate representation of mainstream theology. Most theologians believe, for example, that it is impossible for God to do evil. See William Lane Craig’s commentary.
First, you mean Christian theology; there are a lot more theologies around.
Second, I don’t know what “mainstream” theology is—is it the official position of the Roman Catholic Church? Some common elements in Protestant theology? Does anyone care about Orthodox Christians?
Third, the question of limits on the Judeo-Christian God is a very, very old theological issue which has not been resolved to everyone’s satisfaction, and no resolution is expected.
Fourth, William Lane Craig basically evades the problem by defining good as “what God is”. God can still do anything He wants and whatever He does automatically gets defined as “good”.
Clearly they would consider this entity a false God/Satan.
This is starting to veer into free-will territory, but I don’t think God would have much problem convincing these people that He is the Real Deal. Wouldn’t be much of a god otherwise :-)
That’s vacuously true, of course. Which makes your original question meaningless as stated.
It wasn’t so much meaningless as it was rhetorical.
I cannot imagine circumstances under which I would come to believe that the Christian God exists. All of the evidence I can imagine encountering which could push me in that direction if I found it seems even better explained by various deceptive possibilities, e.g. that I’m a simulation or I’ve gone insane or what have you. But I suspect that there is some sequence of experience such that if I had it I would be convinced; it’s just too complicated for me to work out in advance what it would be. Which perhaps means I can imagine it in an abstract, meta sort of way, just not in a concrete way? Am I certain that the Christian God doesn’t exist? I admit that I’m not certain about that (heh!), which is part of the reason I’m curious about your test.
If imagination fails, consult reality for inspiration. You could look into the conversion experiences of materialist, rationalist atheists. John C Wright, for example.
So you’re effectively saying that your prior is zero and will not be budged by ANY evidence.
Hmm… smells of heresy to me… :-D
I would argue that this definition of absolute certainty is completely useless as nothing could possibly satisfy it. It results in an empty set.
If you “cannot imagine under any circumstances”, your imagination is deficient.
I am not arguing that the set is non-empty. Consider it akin to the intersection of the set of natural numbers and the set of infinities; the fact that it is the empty set is meaningful. It means that by following the rules of simple, additive arithmetic, one cannot reach infinity, and if one does reach infinity, that is a good sign of an error somewhere in the calculation.
Similarly, one should not be absolutely certain if one is updating from finite evidence. Barring omniscience (infinite evidence), one cannot become absolutely/infinitely certain.
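One way to make the “finite evidence cannot produce infinite certainty” point concrete is the standard odds form of Bayes’ theorem, sketched here purely for illustration (H is the hypothesis, the E_i are pieces of evidence):

```latex
% Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}

% In log-odds, each piece of evidence contributes a finite number of bits:
\log_2 \frac{P(H \mid E_1,\dots,E_n)}{P(\neg H \mid E_1,\dots,E_n)}
  = \log_2 \frac{P(H)}{P(\neg H)}
  + \sum_{i=1}^{n} \log_2 \frac{P(E_i \mid H, E_{<i})}{P(E_i \mid \neg H, E_{<i})}

% P(H) = 1 corresponds to infinite log-odds, so no finite sum of finite
% terms reaches it from a prior strictly between 0 and 1.
```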
What definition of absolute certainty would you propose?
So you are proposing a definition that nothing can satisfy. That doesn’t seem like a useful activity. If you want to say that no belief can stand up to the powers of imagination, sure, I’ll agree with you. However if we want to talk about what people call “absolute certainty” it would be nice to have some agreed-on terms to use in discussing it. Saying “oh, there just ain’t no such animal” doesn’t lead anywhere.
As to what I propose, I believe that definitions serve a purpose and the same thing can be defined differently in different contexts. You want a definition of “absolute certainty” for which purpose and in which context?
You are correct, I have contradicted myself. I failed to mention the possibility of people who are not reasoning perfectly (and in fact are not even close), to the point where they can mistakenly arrive at absolute certainty. I am not arguing that their certainty is fake—it is a mental state, after all—but rather that it cannot be reached using proper rational thought.
What you have pointed out to me is that absolute certainty is not, in fact, a useful thing. It is the result of a mistake in the reasoning process. An inept mathematician can add together a large but finite series of natural numbers, write down “infinity” after the equals sign, and thereafter go about believing that the sum of a certain series is infinite.
The sum is not, in fact, infinite; no finite set of finite things can add up to an infinity, just as no finite set of finite pieces of evidence can produce absolute, infinitely strong certainty. But if we use some process other than the “correct” one, as the mathematician’s brain has to somehow output “infinity” from the finite inputs it has been given, we can generate absolute certainty from finite evidence—it simply isn’t correct. It doesn’t correspond to something which is either impossible or inevitable in the real world, just as the inept mathematician’s infinity does not correspond to a real infinity. Rather, they both correspond to beliefs about the real world.
While I do not believe that there are any rationally acquired beliefs which can stand up to the powers of imagination (though I am not absolutely certain of this belief), I do believe that irrational beliefs can. See my above description of the hypothetical young-earther; they may be able to conceive of a circumstance which would falsify their belief (i.e. their god telling them that it isn’t so), but they cannot conceive of that circumstance actually occurring (they are absolutely certain that their god does not contradict himself, which may have its roots in other absolutely certain beliefs or may be simply taken as a given).
:-) As in, like, every single human being...
Yep. Provided you limit “proper rational thought” to Bayesian updating of probabilities this is correct. Well, as long as your prior isn’t 1, that is.
I’d say that if you don’t require internal consistency from your beliefs then yes, you can have a subjectively certain belief which nothing can shake. If you’re not bothered by contradictions, well then, doublethink is like Barbie—everything is possible with it.
Well, yes.
That is the point.
Nothing is absolutely certain.
Why does a deficient imagination disqualify a brain from being certain?
Vice versa. Deficient imagination allows a brain to be certain.
… ergo there exist human brains that are certain.
if people exist that are absolutely certain of something, I want to believe that they exist.
So… a brain is allowed to be certain because it can’t tell it’s wrong?
Tangent: Does that work?
Nope. “I’m certain that X is true now” is different from “I am certain that X is true and will be true forever and ever”.
I am absolutely certain today is Friday. Ask me tomorrow whether my belief has changed.
In fact, unless you’re insane, you probably already believe that tomorrow will not be Friday!
(That belief is underspecified- “today” is a notion that varies independently, it doesn’t point to a specific date. Today you believe that August 16th, 2013 is a Friday; tomorrow, you will presumably continue to believe that August 16th, 2013 was a Friday.)
Not exactly that but yes, there is the reference issue which makes this example less than totally convincing.
The main point still stands, though—certainty of a belief and its time-invariance are different things.
I very much doubt that you are absolutely certain. There are a number of outlandish but not impossible worlds in which you could believe that it is Friday, yet it might not be Friday; something akin to the world of The Truman Show comes to mind.
Unless you believe that all such alternatives are impossible, in which case you may be absolutely certain, but incorrectly so.
I don’t have to believe that the alternatives are impossible; I just have to be certain that the alternatives are not exemplified.
Define “absolute certainty”.
In the brain-in-the-vat scenario which is not impossible I cannot be certain of anything at all. So what?
So you’re not absolutely certain. The probability you assign to “Today is Friday” is, oh, nine nines, not 1.
Nope. I assign it the probability of 1.
On the other hand, you think I’m mistaken about that.
On the third tentacle I think you are mistaken because, among other things, my mind does not assign probabilities like 0.999999999 -- it’s not capable of such granularity. My wetware rounds such numbers and so assigns the probability of 1 to the statement that today is Friday.
So if you went in to work and nobody was there, and your computer says it’s Saturday, and your watch says Saturday, and the next thirty people you ask say it’s Saturday… you would still believe it’s Friday?
If you think it’s Saturday after any amount of evidence, after assigning probability 1 to the statement “Today is Friday,” then you can’t be doing anything vaguely rational—no amount of Bayesian updating will allow you to update away from probability 1.
If you ever assign something probability 1, you can never be rationally convinced of its falsehood.
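A minimal sketch of that claim, in toy Python with made-up numbers of my own (not anyone’s actual reasoning): with a prior of exactly 1, Bayes’ rule returns 1 no matter how lopsided the evidence, while a prior of “nine nines” moves as soon as contrary evidence arrives.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) from a prior P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / denominator

# Hypothetical evidence that is 1000x more likely if it is Saturday than if
# it is Friday (the empty office, the computer, the watch, thirty passers-by).
p_e_given_friday, p_e_given_saturday = 0.001, 1.0

print(bayes_update(1.0, p_e_given_friday, p_e_given_saturday))
# -> 1.0: a prior of exactly 1 never moves, whatever the evidence.

print(bayes_update(0.999999999, p_e_given_friday, p_e_given_saturday))
# -> ~0.999999: "nine nines" drops as soon as contrary evidence arrives.
```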
That’s not true. There are ways to change your mind other than through Bayesian updating.
Sure. But by definition they are irrational kludges made by human brains.
Bayesian updating is a theorem of probability: it is literally the formal definition of “rationally changing your mind.” If you’re changing your mind through something that isn’t Bayesian, you will get the right answer iff your method gives the same result as the Bayesian one; otherwise you’re just wrong.
The original point was that human brains are not all Bayesian agents. (Specifically, that they could be completely certain of something)
… Okay?
Okay, so, this looks like a case of arguing over semantics.
What I am saying is: “You can never correctly give probability 1 to something, and changing your mind in a non-Bayesian manner is simply incorrect. Assuming you endeavor to be /cough/ Less Wrong, you should force your System 2 to abide by these rules.”
What I think Lumifer is saying is, “Yes, but you’re never going to succeed because human brains are crazy kludges in the first place.”
In which case we have no disagreement, though I would note that I intend to do as well as I can.
I wasn’t restricting the domain to the brains of people who intrinsically value being rational agents.
I am sorry, I must have been unclear. I’m not saying “yes, but”, I’m saying “no, I disagree”.
I disagree that “you can never correctly give probability 1 to something”. To avoid silly debates over 1/3^^^3 chances I’d state my position as “you can correctly assign a probability that is indistinguishable from 1 to something”.
I disagree that “changing your mind in a non-Bayesian manner is simply incorrect”. That looks to me like an overbroad claim that’s false on its face. The human mind is rich and multifaceted; trying to limit it to performing a trivial statistical calculation doesn’t seem reasonable to me.
I think the claim is that, whatever method you use, it should approximate the answer the Bayesian method would give (which is optimal, but computationally infeasible).
The thing is, from a probabilistic standpoint, a probability of one is essentially infinity—it takes an infinite number of bits of evidence to reach probability 1 from any prior with finite odds.
And the human mind is a horrific repurposed adaptation not at all intended to do what we’re doing with it when we try to be rational. I fail to see why indulging its biases is at all helpful.
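To put a rough number on “essentially infinity”, here is a toy calculation of my own using the log-odds measure of evidence:

```python
import math

def bits_of_evidence(prior, posterior):
    """Bits needed to move from prior to posterior: the change in log2 odds."""
    log_odds = lambda p: math.log2(p / (1.0 - p))
    return log_odds(posterior) - log_odds(prior)

for target in (0.99, 0.999999999, 1.0 - 1e-15):
    print(target, round(bits_of_evidence(0.5, target), 1))
# roughly 6.6, 29.9, and 49.8 bits respectively; the requirement grows
# without bound as the target approaches 1, and probability exactly 1
# would need log2(1/0), i.e. infinitely many bits.
```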
Given that here rationality is often defined as winning, it seems to me you think natural selection works in the opposite direction.
… Um. No?
I might have been a little hyperbolic there—the brain is meant to model the world—but...
Okay, look, have you read the Sequences on evolution? Because Eliezer makes the point much better than I can as of yet.
Regardless of EY, what is your point? What are you trying to express?
*sigh*
My point, as I stated the first time, is that evolution is dumb, and does not necessarily design optimal systems. See: optic nerve connecting to the front of the retina. This is doubly true of very important, very complex systems like the brain, where everything has to be laid down layer by layer and changing some system after the fact might make the whole thing come crumbling down. The brain is simply not the optimal processing engine given the resources of the human body: it’s Azathoth’s “best guess.”
So I see no reason to pander to its biases when I can use mathematics, which I trust infinitely more, to prove that there is a rational way to make decisions.
How do you define optimality?
LOL.
Sorry :-/
So, since you seem to be completely convinced of the advantage of the mathematical “optimal processing” over the usual biased and messy thinking that humans normally do—could you, um, demonstrate this advantage? For example financial markets provide rapid feedback and excellent incentives. It shouldn’t be hard to exploit some cognitive bias or behavioral inefficiency on the part of investors and/or traders, should it? After all their brains are so horribly inefficient, to the point of being crippled, really...
Actually, no, I would expect that investors and/or traders would be more rational than the average for that very reason. The brain can be trained, or I wouldn’t be here; that doesn’t say much about its default configuration, though.
As far as biases—how about the existence of religion? The fact that people still deny evolution? The fact that people buy lottery tickets?
And as far as optimality goes—it’s an open question, I don’t know. I do, however, believe that the brain is not optimal, because it’s a very complex system that hasn’t had much time to be refined.
That’s not good enough—you can “use mathematics” and that gives you THE optimal result, the very best possible—right? As such, anything not the best possible is inferior, even if it’s better than the average. So by being purely rational you still should be able to extract money out of the market taking it from investors who are merely better than the not-too-impressive average.
As to optimality, unless you define it *somehow* the phrase “brain is not optimal” has no meaning.
That is true.
I am not perfectly rational. I do not have access to all the information I have. That is why I am here: to be Less Wrong.
Now, I can attempt to use Bayes’ Theorem on my own lack-of-knowledge, and predict probabilities of probabilities—calibrate myself, and learn to notice when I’m missing information—but that adds more uncertainty; my performance drifts back towards average.
Not at all. I can define a series of metrics—energy consumption and “win” ratio being the most obvious—define an n-dimensional function on those metrics, and then prove that, given bounds in all directions, a maximum exists so long as my function satisfies certain criteria (mostly continuity).
I can note that given the space of possible functions and metrics, the chances of my brain being optimal by any of them is extremely low. I can’t really say much about brain-optimality mostly because I don’t understand enough biology to understand how much energy draw is too much, and the like; it’s trivial to show that our brain is not an optimal mind under unbounded resources.
Which, in turn, is really what we care about here—energy is abundant, healthcare is much better than in the ancestral environment, so if it turns out our health takes a hit because of optimizing for intelligence somehow we can afford it.
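For what it’s worth, the existence claim being leaned on there is, I believe, just the extreme value theorem, stated here for reference:

```latex
% Extreme value theorem: a continuous function on a closed, bounded
% (compact) set K \subset \mathbb{R}^n attains a maximum on K.
f : K \to \mathbb{R} \ \text{continuous},\ K \ \text{compact}
  \;\Longrightarrow\; \exists\, x^{*} \in K :\ f(x^{*}) \ge f(x) \ \ \forall x \in K

% Note: this guarantees at least one maximizer, not necessarily a unique
% one, which is the objection raised in the reply below.
```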
I don’t think you can guarantee ONE maximum. But in any case, the vastness of the space of all n-dimensional functions makes the argument unpersuasive. Let’s get a bit closer to the common, garden-variety reality and ask a simpler question. In which directions do you think the human brain should change/evolve/mutate to become more optimal? And in these directions, is it the further the better, or is there a point beyond which one should not go?
Um, I have strong doubts about that. Your body affects your mind greatly (not to mention your quality of life).
Yes.
No, unless you define “rationally changing your mind” this way in which case it’s just a circle.
Nope.
The ultimate criterion of whether the answer is the right one is real life.
While I’m not certain, I’m fairly confident that most people’s minds don’t assign probabilities at all. At least when this thread began, it was about trying to infer implicit probabilities based on how people update their beliefs; if there is any situation that would lead you to conclude that it’s not Friday, then that would suffice to prove that your mind’s internal probability that it is Friday is not 1.
Most of the time, when people talk about probabilities or state the probabilities they assign to something, they’re talking about loose, verbal estimates, which are created by their conscious minds. There are various techniques for trying to make these match up to the evidence the person has, but in the end they’re still just basically guesses at what’s going on in your subconscious. Your conscious mind is capable of assigning probabilities like 0.999999999.
Taking a (modified) page from Randaly’s book, I would define absolute certainty as “so certain that one cannot conceive of any possible evidence which might convince one that the belief in question is false”. Since you can conceive of the brain-in-the-vat scenario and believe that it is not impossible, I would say that you cannot be absolutely certain of anything, including the axioms and logic of the world you know (even the rejection of absolute certainty).