How many rationalists would retain their belief in reason, if they could accurately visualize that hypothetical world in which there was no rationality and they themselves have become irrational?
I just attempted to visualize such a world, and my mind ran into a brick wall. I can easily imagine a world in which I am not perfectly rational (and in fact am barely rational at all), and that world looks a lot like this world. But I can’t imagine a world in which rationality doesn’t exist, except as a world in which no decision-making entities exist. Because in any world in which there exist better and worse options and an entity that can model those options and choose between them with better than random chance, there exists a certain amount of rationality.
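To make that concrete, here is a minimal sketch (Python, with invented toy numbers; the function names are mine, not anything standard) of what “a certain amount of rationality” means in this sense: an agent whose noisy model of the options still beats a coin flip already counts.

```python
import random

def random_chooser(option_values):
    """No model at all: picks an option purely by chance."""
    return random.randrange(len(option_values))

def weak_agent(option_values, noise=1.0):
    """Picks via a very noisy estimate of each option's value.
    Even a bad model beats chance on average."""
    estimates = [v + random.gauss(0, noise) for v in option_values]
    return max(range(len(option_values)), key=lambda i: estimates[i])

def success_rate(chooser, trials=10_000):
    """How often the chooser picks the genuinely best of three options."""
    wins = 0
    for _ in range(trials):
        options = [random.random() for _ in range(3)]
        best = max(range(3), key=lambda i: options[i])
        if chooser(options) == best:
            wins += 1
    return wins / trials

print(success_rate(random_chooser))  # ~0.33: pure chance
print(success_rate(weak_agent))      # reliably above 0.33
```

On this definition, the gap between those two numbers is the “certain amount of rationality”: it exists in any world where the second number can exceed the first.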
Well, a world that lacked rationality might be one in which all the events were a sequence of non sequiturs. A car drives down the street. Then disappears. We are in a movie theater with a tyrannosaurus. Now we are a snail on the moon. Then there’s just this poster of rocks. Then I can’t remember what sight was like, but there’s jazz music. Now I fondly remember fighting in World War II, while evading the Empire with Han Solo. Oh! I think I might be boiling water, but with a sense of smell somehow… That’s a poor job of describing it—too much familiar stuff—but you get the idea. If there were no connection between one state of affairs and the next, talking about what strategy to take might be impossible, or a brief possibility that then disappears when you forget what you are doing and you’re back in the movie theater again with the tyrannosaurus. That is, if ‘you’ is even a meaningful way to describe a brief moment of awareness bubbling into being in that universe. Then again, if at any moment ‘you’ happen to exist and ‘you’ happen to understand what rationality means… I guess, now that I think about it, if there is any situation where you can understand what the word ‘rationality’ means, it’s probably one in which rationality exists (however briefly) and is potentially helpful to you. Even if there is little useful to do about whatever situation you are in, there might be some useful thing to do about the troubling thoughts in your mind.
While that is a world without rationality, it seems a fairly extreme case.
Another example of a world without rationality is a world in which, the more you work towards achieving a goal, the longer it takes to reach that goal; so an elderly man might wander distractedly up Mount Everest to look for his false teeth with no trouble, but a team of experienced mountaineers won’t be able to climb a small hill. Even if they try to follow the old man looking for his teeth, the universe notices their intent and conspires against them. And anyone who notices this tendency and tries to take advantage of it gets struck by lightning (even if they’re in a submarine at the time) and killed instantly.
That reminds me of Hofstadter’s Law: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.”
I like both Voltairina’s take and yours on the non-rational world. I was having a lot of trouble working something out.
That said, while Voltairina’s world is a bit more horrifyingly extreme than yours, it seems to me more probable: cause and effect could simply fail to exist. I can envision a structure of elementary physics that simply changes, functionally at random, far more easily than one in which causality exists but operates in the inverse. I have more trouble envisioning the elementary physics that would bring that about without an observational intellect directly upsetting motivated plans.
All that is to say, might not your case be the more extreme one?
...it’s possible. There are many differences between our proposed worlds, and it really depends on what you mean by “more extreme”. Voltairina’s world is “more extreme” in the sense that there are no rules, no patterns to take advantage of. My world is “more extreme” in that the rules actively punish rationality.
My world requires that elementary physics somehow takes account of intent, and then actively subverts it. This means that it reacts in some way to something as nebulous as intent. This implies some level of understanding of the concept of intent. This, in turn, implies (as you state) an observational intellect—and worse, a directly malevolent one. Voltairina’s can exist without a directly malevolent intelligence directing things.
So it really comes down to what you mean by “extreme”, I guess. Both proposed worlds are extreme cases, in their own way.
Fair point.
I suppose I’d just think about the time before I met LessWrong. I wouldn’t choose that world.
That’s not the idea that really scares Less Wrong people.
Here’s a more disturbing one: try to picture a world where all the rationality skills you’re learning on Less Wrong are actually somehow flawed, and actually make it less likely that you’ll discover the truth, or make you correct less often, for whatever reason. What would that look like? Would you be able to tell the difference?
I must say, I have trouble picturing that, but I can’t prove it’s not true (we are basically tinkering with the way our minds work without a software manual, after all).
related: http://lesswrong.com/lw/9p/extreme_rationality_its_not_that_great/
I’m not sure what “no rationality” would mean. Evolutionarily relevant kinds of rationality can still be expected, like a preference for sexually fertile mates or a fear of spiders, snakes, and heights; and if we’re still talking about something at all similar to Homo sapiens, language and cultural learning and such, which require some amount of rationality to use.
I wonder if you might be imagining rationality in the form of essentialism, as an attribute you could universally turn off; but in reality there is no such off switch that is compatible with having decision-making agents.
No rationality, or no Bayesianism? Rationality is a general term for reasoning about reality. Bayesianism is the specific school of rationality advocated on LessWrong.
A “world in which there was no rationality” is not even meaningful, just like a “world in which there was no physics” is meaningless. Even if energy and matter behave in a way that’s completely alien to us, there are still laws that govern how they work, and you can call these laws “physics”. Similarly, even if we lived in some hypothetical world where the rules of reasoning were not derived from Bayes’ theorem, there would still be rules that can be thought of as that reality’s rationalism.
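For reference, the rule being alluded to here is Bayes’ theorem, which says how confidence in a hypothesis H should change on seeing evidence E:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

Your confidence in H after seeing E is your prior confidence, scaled by how strongly H predicts E relative to how likely E was overall.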
A world without Bayesianism is easy to visualize, because we have all seen such worlds in fiction. Cartoons take this to the extreme—Wile E. Coyote paints a tunnel and expects Road Runner to crash into it—but Road Runner manages to go through. Then he expects that if Road Runner could go through, he could go through as well—but he crashes into it when he tries.
Coyote’s problem is that his rationalism could have worked in our world—but he is not living in our world. He is living in a cartoon world with cartoon logic, and needs a different kind of rationalism.
Like… the one Bugs Bunny uses.
Bugs Bunny plugs Elmer Fudd’s rifle with his finger. In our world, this could not stop the bullet. But Bugs Bunny is not living in our world—he lives in cartoon world. He correctly predicts that the rifle will explode without harming him, and his belief in that prediction is strong enough to bet his life on it.
Now, one may claim that it is not rationality that gets messed up here—merely physics. But in the examples I picked it is not just the laws of nature that don’t work like real-world dwellers would expect—it is consistency itself that fails. Let us compare with superhero comics, where the limitations of physics are but a suggestion, but at least some effort is made to maintain consistency.
When Mirror Master jumps into a mirror, he uses his technology/powers to temporarily turn the mirror into a portal. If the Flash is fast enough, he can jump into the mirror after him, before the mirror turns back to normal. The rules are simple—when the portal is open you can pass; when it’s closed you can’t. Even if it doesn’t make sense scientifically, it makes sense logically. But there are no similar rules that can tell Coyote whether or not it’s safe to pass.
Superman can also plug his finger into criminals’ guns to stop them from shooting, just like Bugs Bunny. But Superman can stop the bullets with any part of his body, before or after they leave the barrel. So him successfully plugging the guns is consistent. Bugs Bunny, however, is not invulnerable to bullets. When Elmer Fudd chases after him, rifle blazing, Bugs Bunny runs for his life because he knows the bullets will pierce him. They are stronger than his body can handle. Except… when he sticks his finger into the barrel. Not consistent.
Still—there are laws that govern cartoon reality. Like the law of funny. Bugs Bunny is aware of them—his actions may seem chaotic when judged by our world’s rationality, but they make perfect sense in cartoon world. Wile E. Coyote’s actions make perfect sense in our world’s rationality, but are doomed to fail when executed under cartoon world logic.

Had I lived in cartoon world, I’d rather be like Bugs Bunny than like Wile E. Coyote: not insist on Bayesianism even though it wouldn’t work, but try to figure out how reasoning in that reality really works and rely on that.
Then again—wouldn’t Bayesianism itself deter me from relying on things that don’t work? Is Wile E. Coyote even Bayesian if he doesn’t update his beliefs every time his predictions fail?
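For what it’s worth, here is a rough sketch of the update Coyote keeps refusing to make; the prior and likelihoods are invented purely for illustration.

```python
# Coyote's credence that "this kind of trick works for me too",
# updated by Bayes' rule after each failed attempt.
# All numbers are invented for illustration.

prior = 0.9              # Coyote starts out very confident
p_fail_if_works = 0.1    # tricks that genuinely work still fail sometimes
p_fail_if_broken = 0.95  # tricks that don't work for him almost always fail

for attempt in range(1, 6):
    # P(works | failure) via Bayes' theorem
    numerator = p_fail_if_works * prior
    evidence = numerator + p_fail_if_broken * (1 - prior)
    prior = numerator / evidence
    print(f"after failure {attempt}: P(trick works for me) = {prior:.3f}")
```

Under these toy numbers his confidence collapses below one percent within three failures. A Bayesian Coyote would stop painting tunnels; the cartoon Coyote’s credence never moves, which is exactly what makes him non-Bayesian.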
I’m no longer sure I can imagine a world where there is no Bayesianism...
I don’t know. But I would. Irrationality is caused by ignorance, so there will always be tangent worlds (while regarding this current one as prime) in which I give up. There will always be a world where anything that is physically possible occurs. (and probably many where even that requirement doesn’t hold)
To put it another way, there has been a moment in time when I was not rational. Is that a reason to give up rationality forever? Time could be just another dimension, its manipulation as far out of our grasp as that of other possible worlds.