I agree that the conclusion follows from the premises, but nonetheless it’s hypothetical scenarios like this which cause people to distrust hypothetical scenarios. There is no Omega, and you can’t magically stop believing in red pandas; when people rationalize the utility of known falsehoods, what happens in their mind is complicated, divorces endorsement from modeling, and bears no resemblance to what they believe they’re doing to themselves. Anti-epistemology is a huge actual danger of actual life.
Absolutely! I’m definitely dead set against anti-epistemology—I just want to make the point that that’s a contingent fact about the world we find ourselves in. Reality could be such that anti-epistemology was the only way to have a hope of survival. It isn’t, but it could be.
Once you’ve established that epistemic rationality could give way to instrumental rationality, even in a contrived example, you then need to think about where that line goes. I don’t think it’s likely to be relevant to us, but from a theoretical point of view we shouldn’t pretend the line doesn’t exist.
Indeed, advocating not telling people about it because the consequences would be worse is precisely suppressing the truth because of the consequences ;) (well, it would be more on-topic if you were denying the potential utility of anti-epistemology even to yourself...)
Technically you can; it’s just that the easiest methods have collateral effects on your ability to do most other things.
If you’re not talking about shooting yourself in the head, I don’t know of any method I, myself, could use to stop believing in pandas.
Interesting given that you believe there is evidence that could convince you 2+2=3.
Given that you don’t know of such a method, I would guess that you haven’t tried very hard to find one.
I don’t think this is a fair analogy. We’re talking about ceasing to believe in red pandas without the universe helping; the 2+2=3 case had the evidence appearing all by itself.
I think I might be able to stop believing in red pandas in particular if I had to (5% chance?) but probably couldn’t generalize it to most other species with which I have comparable familiarity. This is most likely because I have some experience with self-hacking. (“They’re too cute to be real. That video looks kind of animatronic, doesn’t it, the way they’re gamboling around in the snow? I don’t think I’ve ever seen one in real life. I bet some people who believe in jackalopes have just never been exposed to the possibility that there’s no such thing. Man, everybody probably thinks it’s just super cute that I believe in red pandas, now I’m embarrassed. Also, it just doesn’t happen that a lot rides on me believing things unless those things are true. Somebody’s going to an awful lot of effort to correct me about red pandas. Isn’t that a dumb name? Wouldn’t a real animal that’s not even much like a panda be called something else?”)
Alicorn is correct; and similarly, there is of course a way I could stop believing in pandas, in worlds where pandas never had existed and I discovered the fact. I don’t know of anything I can actually do, in real life, over the next few weeks, to stop believing in pandas in this world where pandas actually do exist. I would know that was what I was trying to do, for one thing.
Not that hard. Jimmy will gladly help you.
Okay, so there’s no such thing as jackalopes. Now I know.
Hee hee.
I wasn’t making an analogy exactly. Rather, that example was used to point out that there appears to be some route to believing any proposition that isn’t blatant gibberish. And I think Eliezer is the sort of person who could find a way to self-hack in that way if he wanted to; that more or less used to be his ‘thing’.
Wouldn’t a real animal that’s not even much like a panda be called something else?
Exactly—“red pandas” were clearly made up for Avatar: The Last Airbender.
No, in AtLA they’re called “fire ferrets”.
There isn’t an Omega, but there historically have been inquisitions, secret police, and all manner of institutional repressive forces. And even in free countries, there is quite powerful social pressure to conform.
It may often be more useful to adopt a common high-status false belief than to pay the price of maintaining a true low-status belief. This applies even if you keep that belief secret—there’s a mental burden in trying to systematically keep your true beliefs separate from your purported beliefs, and there’s a significant chance of letting something slip.
To pick a hopefully non-mind-killing example: whether or not professional sports are a wasteful and even barbaric practice, knowing about the local team, and expressing enthusiasm for them, is often socially helpful.
Anti-epistemology is a huge actual danger of actual life.
So it is, but I’m wondering if anyone can suggest a (possibly very exotic) real-life example where “epistemic rationality gives way to instrumental rationality”? Just to address the “hypothetical scenario” objection.
EDIT: Does the famous Keynes quote “Markets can remain irrational a lot longer than you and I can remain solvent.” qualify?
Situations of plausible deniability for politicians or people in charge of large departments at corporations. Of course you could argue that these situations are bad for society in general, but I’d say it’s in the instrumental interest of those leaders to seek the truth to a lesser degree.
Any time you have a bias you cannot fully compensate for, there is a potential benefit to putting instrumental rationality above epistemic.
One fear I was unable to overcome for many years was that of approaching groups of people. I tried all sorts of things, but the best piece of advice turned out to be: “Think they’ll like you.” Simply believing that eliminates the fear and aids my social goals, even though it sometimes proves to have been a false belief, especially with regard to my initial reception. Believing that only 3 out of 4 groups will like or welcome me initially and 1 will rebuff me, even though this may be the case, has not been as useful as believing that they’ll all like me.
It doesn’t sound like you were very successful at rewriting this belief, because you admit in the very same paragraph that your supposedly rewritten belief is false. What I think you probably did instead is train yourself to change the subject of your thoughts in that situation from “what will I do if they don’t like me” to “what will I do if they like me”, and maybe also rewrite your values so that you see being rebuffed as inconsequential and not worth thinking about. Changing the subject of your thoughts doesn’t imply a change in belief unless you believe that things vanish when you stop thinking about them.
Let’s suppose that if you believe you have a chance X of succeeding, your actual chance of success is 0.75 X (because you can’t stop your beliefs from influencing your behavior). The winning strategy seems to be to believe in 100% success, and thus succeed in 75% of cases. On the other hand, trying too hard to find a value of X that makes your prediction exactly accurate would drive you toward believing in 0% success… and being right about it. So in this (not so artificial!) situation, a rationalist should prefer success to being right.
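A quick sketch of this toy model in Python (the 0.75 damping factor and the fixed-point reading of a “calibrated” belief are just assumptions taken from the paragraph above):

```python
# Toy model: your actual chance of success is assumed to be 0.75 times
# the chance you believe you have.

def actual_success_prob(believed, damping=0.75):
    """Actual chance of success, given the believed chance."""
    return damping * believed

print(actual_success_prob(1.0))  # 0.75 -- believing in certain success wins 75% of the time

# The only *calibrated* belief (where belief equals actual chance) is the
# fixed point b = 0.75 * b, i.e. b = 0: correctly predicting failure.
b = 1.0
for _ in range(100):
    b = actual_success_prob(b)
print(round(b, 6))  # ~0.0
```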
But in real life, unexpected things happen. Imagine that you somehow reprogram yourself to genuinely believe that you have a 100% chance of success… and then someone comes and offers you a bet: you win $100 if you succeed, and lose $10,000 if you fail. If you genuinely believe in 100% success, this looks like an offer of free money, so you take the bet. Which you probably shouldn’t.
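For concreteness, here is the expected value of that bet under each belief (stakes taken from the example above; the arithmetic is just standard expected value):

```python
# Expected value of the bet: win $100 on success, lose $10,000 on failure.
def bet_expected_value(p_success, win=100.0, loss=10_000.0):
    return p_success * win - (1 - p_success) * loss

print(bet_expected_value(1.0))   # +100.0  -- under the self-installed 100% belief: "free money"
print(bet_expected_value(0.75))  # -2425.0 -- under the true 75% chance: clearly decline
```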
For an AI, a possible solution could be this: run a simulation of yourself. Make the simulation believe that the chance of success is 100%, while you know that it is really 75%. Give the simulation access to all inputs and outputs, and just let it work. Take control back when the task is completed, or when something very unexpected happens. -- The only problem is calibrating the right level of “unexpected”: knowing the difference between random events that belong to the task and random events outside the initially expected scenario.
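A very rough sketch of how that outer/inner split might look (the class and method names here are hypothetical illustrations of the idea in the comment above, not an existing framework):

```python
# Hypothetical sketch: an optimistic inner agent wrapped by a calibrated
# outer controller that takes over when events look too unexpected.

class OptimisticInnerAgent:
    """Believes success is certain, so it never hedges or gives up."""
    def act(self, observation):
        return "keep_working"

class CalibratedController:
    """Knows the real odds; watches for events outside the expected scenario."""
    def __init__(self, inner, surprise_threshold=0.9):
        self.inner = inner
        self.surprise_threshold = surprise_threshold

    def step(self, observation, surprise_level):
        # Take control back when something very unexpected happens;
        # otherwise let the optimistic simulation keep working.
        if surprise_level > self.surprise_threshold:
            return "take_back_control"
        return self.inner.act(observation)

controller = CalibratedController(OptimisticInnerAgent())
print(controller.step("routine input", surprise_level=0.1))        # keep_working
print(controller.step("strange bet offered", surprise_level=0.95)) # take_back_control
```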
I suppose evolution gave us similar skills, though not so precisely defined as in the AI case. An AI simulating itself would need twice as much memory and time; instead, humans use compartmentalization as an efficient heuristic. Rather than having one personality that believes in 100% success and another that believes in 75%, a human just convinces themselves that the chance of success is 100%, but prevents this belief from propagating too far, so they can take the benefits of the imaginary belief while avoiding some of its costs. This heuristic is a net advantage, though sometimes it fails, and other people may be able to exploit it: they can use your own illusions to bring you to the logical decision that you should take the bet, while avoiding any suspicion that something unusual is going on. -- In this situation there is no original AI that could take back control, so this strategy of false beliefs is accompanied by a rule: “if something seems very unusual, avoid it, even if it logically seems like the right thing to do.” It means not trusting your own logic, which in that situation is very reasonable.
I do this every day, correctly predicting I’ll never succeed at stuff and not getting placebo benefits. I don’t dare try compartmentalization or self-delusion, for the reasons Eliezer has outlined. There are some other complicating factors. It’s a big problem for me.
Be careful of this sort of argument, any time you find yourself defining the “winner” as someone other than the agent who is currently smiling from on top of a giant heap of utility.
(from “Newcomb’s Problem and Regret of Rationality”)
Yeah, I know that, but I’m not convinced fooling myself won’t result in something even worse. Better ineffectively doing good than effectively doing evil.
As part of a fitness regime, you might try to convince yourself that “I have to do 50 press-ups every day.” Strictly speaking, you don’t: if you do fewer every now and again it won’t matter too much. Nonetheless, if you believe that, your will will crumble and you’ll slack off too regularly. So you try to forget that fact.
Kind of like an epistemic Schelling point.
An idea I got just now and haven’t thought about for 5 minutes or looked for flaws in yet, but am stating before I forget it:
Unless Omega refers to human-specific brain structures, shouldn’t UDT automatically “un-update” on the existence of red pandas in this case?
Also, through some unknown intuitive pathway, the unsolved problem of logical uncertainty comes up as an association.