I keep running into problems with various versions of what I internally refer to as the “placebo paradox”, and can’t find a solution that doesn’t lead to Regret Of Rationality. Simple example follows:
You have an illness from which you’ll either get better, or die. The probability of recovering is exactly half of what you estimate it to be, due to the placebo effect/positive thinking. Before learning this you have 80% confidence in your recovery.
Since you estimate 80%, your actual chance is 40% so you update to this.
Since the estimate is now 40%, the actual chance is 20%, so you update to this.
Then it’s 10%, so you update to that, etc., until both your estimated and actual chance of recovery are 0. Then you die.
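(A minimal sketch of this update loop in Python, purely illustrative; the halving rule is just the toy assumption above:)

    # Toy model: the actual recovery chance is half of whatever estimate you hold.
    estimate = 0.8
    for step in range(1, 11):
        actual = estimate / 2   # placebo effect: actual chance is half the held estimate
        estimate = actual       # a consistent agent updates the estimate to the actual chance
        print(step, estimate)
    # Prints 0.4, 0.2, 0.1, 0.05, ... both numbers go to 0 in the limit.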
An irrational agent, on the other hand, upon learning this could self delude to 100% certainty of recovery, and have a 50% chance of actually recovering.
This is actually causing me real world problems, such as inability to use techniques based on positive thinking, and a lot of cognitive dissonance.
Another version of this problem features in HP:MoR, in the scene where Harry is trying to influence the behaviour of Dementors.
And to show this isn’t JUST a quirk of human mind design, one can envision Omega setting up an isomorphic problem for any kind of AI.
For actual humans, I’d look into ways of possibly activating the placebo effect without explicit degrees of belief, such as intense visualization of the desired outcome.
This is an interesting idea but I’m skeptical that this would actually work. There are studies which I don’t have the citations for (they are cited in Richard Wiseman’s “59 Seconds”) which strongly suggest that positive thinking in many forms doesn’t actually work. In particular, having people visualize extreme possibilities of success (e.g. how strong they’ll be after they’ve worked out, or how much better looking they will be when they lose weight, etc.) makes people less likely to actually succeed (possibly because they spend more time simply thinking about it rather than actually doing it). This is not strong evidence, but it is suggestive evidence that visualization is not sufficient to do that much. These studies didn’t look at medical issues where placebos are more relevant.
Any data on whether this is actually possible, and if so, how to do it? Does it work for other things such as social confidence, positive thinking, etc.?
It certainly SEEMS like it’s the declarative belief itself, not visualizations of outcomes, that causes effects. And the fact that so many attempts at perfect deception have failed seems to indicate it’s not possible to disentangle [your best rational beliefs] from what your “brain thinks” you believe.
(… I really need some better notation for talking about these kind of things unambiguously.)
It certainly SEEMS like it’s the declarative belief itself, not visualizations of outcomes, that causes effects. And the fact that so many attempts at perfect deception have failed seems to indicate it’s not possible to disentangle [your best rational beliefs] from what your “brain thinks” you believe.
I’m skeptical as to how common it is for your beliefs to influence anything outside of your head, except through your actions. If your belief X makes Y happen because of method Z, then in order to get Y you only need to know about Z, and that it works. Then you can do Z regardless of X, because what you do mostly screens off what you think.
If you can’t get yourself to do something because of a particular belief, that’s another issue.
No, in humans this is not the case, unless you have a much broader definition of “action” than is useful. For example, other humans can read your intentions and beliefs from your posture and facial expression, the body reacts autonomously to beliefs with stuff like producing drugs and shunting around blood flow, and some entire classes of problems such as mental illness or subjective well being reside entirely in your brain.
Sorry about my last sentence in the previous post sounding dismissive, that was sloppy, and not representative of my views.
I guess my real issue with this is that I don’t think that there’s a 50% placebo, and disagree that the “declarative belief” does things directly. My anticipation of success or failure has an influence on my actions, but a 50% placebo I would imagine would work in real life based on hidden, unanticipated factors to the point that someone with accurate beliefs could say that “my anticipation contributes this much, X contributes this much, Y contributes this much, Z contributes this much, and given my x,y,z I anticipate this” and be pretty much correct.
In the least convenient possible universe, there seem to be enough hacks that rationality enables that I would reject the 50% placebo and still net a win. I don’t think we live in a universe where the majority of utility is behind 50% placebos.
Why does everyone get stuck on that highly simplified example, which I just made like that so that the math would be easy to follow?
Or are you simply saying that placebos and the like are an unavoidable cost of being a rationalist and we just have to deal with it and it’s not that big a cost anyway?
More the latter, with the added caveat that I think that there are fewer things falling under the category of “and the like” than you think there are.
I used to think that my social skills were being damaged by rationality, but then through a combination of “fake it till you make it”, learning a few skills, and dissolving a few false dilemmas, they’re now better than they were pre-rationality.
If you want to go into more personal detail, feel free to PM.
It certainly SEEMS like it’s the declarative belief itself, not visualizations of outcomes, that causes effects.
Taboo “declarative”. To me, it sounds like you’re talking about a verbal statement (“declared”), in which case it’s pretty obviously false. AFAIK, priming effects work just fine without words.
Actually, you can solve this problem just by snapping your fingers, and this will give you all the same benefits as the placebo effect! Try it—it’s guaranteed to work!
Relevant and amusing (to me at least) story: A few months ago when I had a cold, I grabbed a box of zinc cough drops from my closet and started taking them to help with the throat pain. They worked as well or better than any other brand of cough drops I’ve tried, and tasted better too. Later I read the box, and it turned out they were homeopathic. I kept on taking them, and kept on enjoying the pain relief.
Probably not. Try throwing a coin in a wishing well or lighting a dollar bill on fire for more effect.
In the regular-price group, 85.4% (95% confidence interval [CI], 74.6%-96.2%) of the participants experienced a mean pain reduction after taking the pill, vs 61.0% (95% CI, 46.1%-75.9%) in the low-price (discounted) group (P = .02). Similar results occurred when analyzing only the 50% most painful shocks for each participant (80.5% [95% CI, 68.3%-92.6%] vs 56.1% [95% CI, 40.9%-71.3%], respectively; P = .03).
… Even YOU miss the point? Guess I utterly failed at explaining it, then.
IF I could solve the problem I’m stating in the first post, then this would indeed be almost true. It might be true in 99% of cases, but 0.99^infinity is still ~0. Thus that is the only probability I can consistently assign to it. I MIGHT be able to self-modify to be able to hold inconsistent beliefs, but that’s doublethink, which you have explicitly, loudly, and repeatedly warned against and condemned.
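(For concreteness on the compounding claim, a quick illustrative check:)

    p = 0.99
    print(p ** 100, p ** 1000, p ** 10000)   # ~0.37, ~4.3e-5, ~2.2e-44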
I’m baffled at how I seem unable to point at/communicate the concept. I even tried pointing at a specific instance of you using something very similar in MoR.
… Even YOU miss the point? Guess I utterly failed at explaining it, then.
Eliezer is not “the most capable of understanding (or repairing to an understandable position) commenter on LessWrong”. He is “the most capable of presenting ideas in a readable format” AND “the person with the most rational concepts” on LessWrong. Please stop assuming these qualities are good proxies for, well, EVERYTHING.
Not quite. Having the right priors about other people’s likely beliefs, patience and humility are all rather important.
There are some people whom I consider incredibly intelligent, and who clearly understand the language, whom I basically expect to be replying to a straw man whenever they make a reply, all else being equal. (Not Eliezer.)
Each one of his sequence posts represents a concept in rationality—so he has many more of these concepts than anyone else here on LW.
(I just noticed there’s some ambiguity—it’s the largest number of rational concepts, not concepts of the highest standard of rationality. [most] [rational concepts], not [most rational] [concepts].)
The probability of recovering is exactly half of what you estimate it to be due to the placebo effect/positive thinking.
It would take an artificially bad situation for this to be the case. In the real world, the placebo effect still works, even if you know it’s a placebo—although with diminished efficacy.
But that’s beside the point. More on-point is that intentional self-delusion, if possible, is at best a crapshoot. It’s not systematic; it relies on luck, and it’s prone to Martingale-type failures.
The HPMOR and placebo examples appear, to me, to share another confounding factor: The active ingredient isn’t exactly belief. It’s confidence, or affect, or some other mental condition closely associated with belief. If it weren’t, there’d be no way Harry could monitor his level of belief that the dementors would do what he wanted them to, while simultaneously trying to increase it. Anecdotally, my own attempts at inducing placebo effects feel similar.
The supposed equivalent version in HP:MOR… (I do not wish to speak for anyone else—feel free to chime in yourselves)
That scene was a clear example—to me—of TDT being successful outside of the prisoner’s dilemma scheme. In a case where apparently only ignorance would help, TDT can transcend and provide (almost) the same power.
It’s a toy case; in reality the chance of recovery might be “0.2+0.3*estimate”, but the same general reasoning applies and the end result is still regret of rationality.
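(Working that variant through, as an illustrative sketch: a consistent estimate has to satisfy E = 0.2 + 0.3*E, i.e. E = 0.2/0.7, roughly 0.286, versus the 0.5 available to an agent who could simply hold a belief of 1.0:)

    estimate = 0.8
    for _ in range(50):
        estimate = 0.2 + 0.3 * estimate   # update to the actual chance implied by the current estimate
    print(round(estimate, 6))             # 0.285714, the consistent fixed point (0.2 / 0.7)
    print(0.2 + 0.3 * 1.0)                # 0.5, the outcome for full self-delusion in this toy model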
However, I also thought of a side question. Is the person who is caught in a cycle of negative thinking, like the placebo effect that you mention, engaging in confirmation bias?
I mean, if that person thinks “I am caught in a loop of updates that will inexorably lead to my certain death,” and they are attempting to establish that that is true, they can’t simply say “I went from 80%/40% to 40%/20% to 20%/10%, and this will continue. I’m screwed!” as evidence of its truth, because that’s like offering “4,6,8”, “6,8,10”, “8,10,12” as the guesses for the rule that you know “2,4,6” follows, and then saying “The rule is even numbers, right? Look at all this evidence!”
If a person has a hypothesis that their thoughts are leading them to an inexorable and depressing conclusion, then to test the hypothesis, the rational thing to do is for that person to try proving themselves wrong. By trying “10,8,6” and then getting “No, that is not the case.” (Because the real rule is numbers in increasing order.)
I actually haven’t confirmed this idea myself yet. I just thought of it now. But casting it in this light makes me feel a lot better about all the times I perform what appear at the time to be self-delusions on my brain when I’m caught in depressive thinking cycles, so I’ll throw it out here and see if anyone can contradict it.
Thanks for restating parts of the problem in a much clearer manner!
And yeah, that article is why this problem is wreaking such havoc on me, and I was thinking of it as I wrote the OP. I’m not sure why I didn’t link it.
However, I still can’t resolve the paradox. Although I’m finally starting to see how one might start on doing so: formalizing an entire decision theory that solves the entire class of problems, and then swapping half my mindware out in a single operation. Doesn’t seem like a very good solution though, so I’d rather keep looking for third options.
I don’t think I understand the middle paragraph with all the examples. Probably because the way I actually think of it is not the way I used in the OP, but rather an equation where expectation must be equal to actual probability to call my belief consistent, and jumping straight there. Like so: P=E/2, E=P, thus E=0.
Hmm, I just got a vague intuition saying roughly “Hey, but wait a moment, probability is in the mind. The multiverse is timeless and in each Everett branch you either do recover or you don’t! ”, but I’m not sure how to proceed from there.
Updating on the evidence of yourself updating is almost as much of a problem as updating on the evidence of “I updated on the evidence of myself updating”. Tongue-in-cheek!
That is to say, the decision theory you are currently running is not equipped to handle the class of problems where your response to a problem is evidence that changes the nature of the very problem you are responding to—in the same way that arithmetic is not equipped to handle problems requiring calculus or CDT is not equipped to handle Omega’s two-box problem.
(If it helps your current situation, placebo effects are almost always static modifiers on your scientific/medical chances of recovery)
Do you have a suggestion for a better decision theory, or a suggestion on how exactly I have misinterpreted TDT to cause my current problems?
Knowing that MIGHT help, but probably not in practice. Specifically, for every given instance of the problem I’d need to know a probability to assign which, once assigned, is also the actual chance.
Can you see what an absurdly implausible scenario you must use as a ladder to demonstrate rationality as a liability? Rather than being a strike against strict adherence to reality, the fact that we have to stretch so hard to paint it this way further legitimizes the pursuit of rationality.
I mean no disrespect for your situation, whatever it may be. I gave this some additional thought. You are saying that you have an illness in which the rate of recovery is increased by fifty percent due to a positive outlook and the placebo effect this mindset produces. Or that an embrace of the facts of your condition leads to an exponential decline at the rate of fifty percent. Is it depression, or some other form of mental illness? If it is, then the cause of death would likely be suicide. I am forced to speculate because you were purposefully vague.
For the sake of argument I will go with my speculative scenario. It is very common for those with bipolar disorder and clinical depression to create a negative feedback loop which worsens their situation in the way you have highlighted. But it wouldn’t carry the exacting percentages of taper (indeed no illness would carry that exact level of decline based merely on the thoughts in the patient’s head). But given your claims that the illness exponentially declines, wouldn’t the solution be knowledge of this reality? It seems that the delusion has come in the form of accepting that an illness can be treated with positive thinking alone. The illness is made worse by an acceptance not of rationality, but of this unsupported data, which by my understanding is irrational.
I am very skeptical of your scenario, merely because I do not know of any illnesses which carry this level of health decline due to the absence of a placebo. If you have it please tell me what it is as I would like to begin research now.
It’s not depression or bipolarity, probably, but for the purposes of this discussion the difference is probably irrelevant.
I never claimed the 50% thing was ever anything other than a gross simplification to make the math easier. Obviously it’s much more complicated than that with other factors, less extreme numbers, and so on, but the end result is still isomorphic to it. Maybe it’s even polynomial rather than exponential, but it’s still a huge problem.
Can you actually describe the scenario you really are in? I can think of ways I’d address a lot of real-world analogues, but none of them are actually isomorphic to the example you gave. The solutions generally rely on the lack of a true isomorphism, too.
Now for the bad news: the parts about the solution are confusing and I can’t figure out how I would apply it to my situation. Could someone please translate it to math?
http://www.guardian.co.uk/science/2010/dec/22/placebo-effect-patients-sham-drug It is also well worth noting that the Placebo Effect works just fine even if you know it’s just a Placebo Effect. I hadn’t realized it worked for others, but I’ve been abusing this one for a lot of my life, thanks to a neurological quirk that makes placebos especially potent for me.
Yes, but you have to BELIEVE the placebos will help. In fact, the paradox ONLY appears in the case where you know it’s a placebo, because that’s when the feedback loop can happen.
I’m not aware of any research that says a placebo won’t help a “non-believer”—can you cite a study? Given the study I linked where they were deliberately handed inert pills and told that they were an inert placebo, and they still worked, I actually strongly doubt your claim.
And given the research I linked, why in the world wouldn’t you believe in them? They do rationally work.
A placebo will help if you think the pill you’re taking will help. This may be because you think it’s a non-placebo pill that’d help even if you didn’t know you were taking it, or because you know it’s a placebo but think placebos work. If you were given a placebo pill, told it was just a candy and given no indication it might help anything, it wouldn’t do anything because it’s just sugar. Likewise if you’re given a placebo, know it’s a placebo, and are convinced on all levels that there is no chance of it working.
Right. So find someone who will tell you it’s a placebo, and read up on the research that says it does work. It’d be irrational to believe that they don’t work, given the volume of research out there.
Yes, but you have to BELIEVE the placebos will help.
Quite a few of them. You’re being vague enough that I can only play with the analogies you give me. You gave me the analogy of a placebo not working if you don’t believe in it; I pointed out that disbelief in placebos is rather irrational.
A single study is not sufficient grounds to believe in something, especially a proposition as complicated as “placebos work” (it may not sound complicated expressed in this way, but if you taboo the words ‘placebo’ and ‘work’ you’ll see that there is a lot of machinery in there).
See previous discussion here and note my remarks, I recommend reading the linked articles.
Given Armok is looking for a psychological solution, this still seems relevant. There have been a number of interesting studies on placebo effects; whether it’s the actual pill or just priming, it does have a well-documented and noted beneficial effect, and it seemed relevant to Armok’s situation.
I think one way to avoid having to call this regret of rationality would be to see optimism as deceiving, not yourself, but your immune system. The fact that the human body acts differently depending on the person’s beliefs is a problem with human biology, which should be fixed. If Omega does the same thing to an AI, Omega is harming that AI, and the AI should try to make Omega stop it.
Well, deceiving something else by means of deceiving yourself still involves doublethink. It’s the same as saying humans should not try to be rational.
It’s saying that it may be worth sacrificing accuracy (after first knowing the truth so you know whether to deceive yourself!) in order to deceive another agent: your immune system. It’s still important to be rational in order to decide when to be irrational: all the truth still has to pass through your mind at some point in order to behave optimally.
On another note, you may benefit from reciting the Litany of Tarski:
If lying to myself can sometimes be useful, I want to believe that lying to myself can sometimes be useful.
If lying to myself cannot be useful, I want to believe that lying to myself cannot be useful.
Let me not become attached to beliefs I may not want.
I know my brain is a massively parallel neural network with only smooth fitness curves, and certainly isn’t running an outdated version of Microsoft Windows, but from how it’s behaving in response to this you couldn’t tell. I’m a sucky rationalist. :(
And to show this isn’t JUST a quirk of human mind design, one can envision Omega setting up an isomorphic problem for any kind of AI.
An AI can presumably self-modify. For a sufficient reward from Omega, it is worth degrading the accuracy of one’s beliefs, especially if the reward will immediately allow one to make up for the degradation by acquiring new information/engaging in additional processing.
(A hypothetical: Omega offers me 1000 doses of modafinil, if I will lie on one PredictionBook.com entry and say −10% what I truly believe. I take the deal and chuckle every few minutes the first night, when I register a few hundred predictions to make up for the falsified one.)
This entirely misses the point. Yes, you could self modify, but it’s a self modification away from rationality and that gives rise to all sorts of trouble as has been elaborated many times in the sequences. For example: http://lesswrong.com/lw/je/doublethink_choosing_to_be_biased/
Also, LYING about what you believe has nothing to do with this. Omega can read your mind.
I was trying to apply the principle of charity and interpret your post as anything but begging the question: ‘assume rational agents are penalized. How do they do better than irrational agents explicitly favored by the rules/Omega?’
Question begging is boring, and if that’s really what you were asking - ‘assume rational agents lose. How do they not lose?’ - then this thread is deserving only of downvotes.
And Eliezer was talking about humans, not the finer points of AI design in a hugely arbitrary setup. It may be a bad idea for LWers to choose to be biased, but a perfectly good idea for AIXI stuck in a particularly annoying computable universe.
Also, LYING about what you believe has nothing to do with this. Omega can read your mind.
Since I’m not an AI with direct access to my beliefs in storage on a substrate, I was using an analogy to as close as I can get.
Sorry, I was hoping that there was some kind of difference between “penalize this specific belief in this specific way” and “penalize rationality as such in general”, some kind of trick to work around the problem that I hadn’t noticed and which resolved the dilemma.
And your analogy didn’t work for me, is all I’m saying.
To fully solve this problem requires answering the question of how the placebo effect physically works, which requires answering the question of what a belief physically is, to have that physical effect.
However, no-one yet knows the answers to those questions, which renders all of these logical arguments about as useful as Zeno’s proof that arrows cannot move. The problem of how to knowingly induce a placebo response is a physical one, not a logical one. Nature has no paradoxes.
The first part is wrong, the second is obvious and I never said anything to contradict it. We don’t need to know exactly how beliefs are implemented, just approximately how they behave.
Of course this is a physical problem, and of course we don’t know every detail well enough to give an exact answer; the math can still be useful for solving the problem.
the math can still be useful for solving the problem.
The point of your post was that the mathematics you are doing is creating the problem, not solving it. I haven’t seen any other mathematics in this thread that is solving the problem either.
The other is that if belief doesn’t work for you, how about visualisation? Instead of trying to believe it will work, just imagine it working. Vividly imagine, not just imagining that it will work. This doesn’t raise decision-theoretic paradoxes, and people claim results for it, although I don’t know about proper studies. We don’t know how placebos work, and “belief” isn’t necessarily the key state of mind.
That article was probably what caused me to notice the problem in the first place and write the OP.
Visualization is probably the most promising solution, and even if it’s not as strong as the placebo effect it might be worth exploring. My main problems with it are that there’s still some kind of psychological resistance to it, and that I have no clear idea of what exact concrete image I’m supposed to visualize given some abstract goal description.
I don’t think it’s a paradox, it’s just that the perfect is sometimes the enemy of the good. Your brain has a lot of different components. With a lot of effort, you can change the way some of them think. Some of them will always be irrational no matter what either because they are impossible to change much or because there just isn’t enough time in your life to do it.
Given that some components are irretrievably irrational, you may be better off in terms of accomplishing your goals if other components—which you might be able to change—stay somewhat irrational.
Thing is, I can’t consciously choose to be irrational. I’d first have to entirely reject a huge network of ideals that are the only thing making me even attempt to be slightly rational ever.
I challenge this assumption. I have a very well functioning, blissfully optimistic mindset that I can load when my rationality suggests that this ignorance is indeed my best defense. I wish I had the skill to understand how I reconcile this with the rational compartment in my mind, but the two do seem to co-exist quite happily, and I enjoy many of the perks of a positive outlook.
About Williams syndrome, I have read in several places that language skills are not sub-normal despite having brain abnormalities in those areas, because there is much less than normal development in general spatial and math/logic-type areas. Having less raw brainpower to devote to language, they make up for it by being more subconsciously “focused”, though that isn’t quite the right word. They can be above or below average with language, depending on how it balances out; “normal” abilities are something like an average.
Also, such people are not naturally racist, unlike “normal” people. This is relevant for the aspie-leaning population here—non-neurotypial isn’t inherently normative.
I wonder what severity of Asperger’s syndrome is required to be non-racist? I strongly suspect there is a level that would be sufficient.
People with Williams tend to lack not just social fear but also social savvy. Lost on them are many meanings, machinations, ideas and intentions that most of us infer from facial expression, body language, context and stock phrasings. If you’re talking with someone with Williams syndrome and look at your watch and say: “Oh, my, look at the time! Well it’s been awfully nice talking with you . . . ,” your conversational partner may well smile brightly, agree that “this is nice” and ask if you’ve ever gone to Disney World. Because of this — and because many of us feel uneasy with people with cognitive disorders, or for that matter with anyone profoundly unlike us — people with Williams can have trouble deepening relationships. This saddens and frustrates them. They know no strangers but can claim few friends....

Like most people with Williams, Nicki loves to talk but has trouble getting past a cocktail-party-level chatter. Nicki, however, has fashioned at least a partial solution. “Ever since she was tiny,” Verna Hornbaker told me, “Nicki has always especially loved to talk to men. And in the last few years, by chance, she figured out how to do it. She reads the sports section in the paper, and she watches baseball and football on TV, and she has learned enough about this stuff that she can talk to any man about what the 49ers or the Giants are up to. My husband gets annoyed when I say this, but I don’t mean it badly: men typically have that superficial kind of conversation, you know — weather and sports. And Nicki can do it. She knows what team won last night and where the standings are. It’s only so deep. But she can do it. And she can talk a good long while with most men about it.”...

In Williams the imbalance is profound. The brains of people with Williams are on average 15 percent smaller than normal, and almost all this size reduction comes from underdeveloped dorsal regions. Ventral regions, meanwhile, are close to normal and in some areas — auditory processing, for example — are unusually rich in synaptic connections. The genetic deletion predisposes a person not just to weakness in some functions but also to relative (and possibly absolute) strengths in others. The Williams newborn thus arrives facing distinct challenges regarding space and other abstractions but primed to process emotion, sound and language....

This window is longer than that for most infants, as Williams children, oddly, start talking a year or so later than most children...

Cognitive scientists argue over whether people with Williams have theory of mind. Williams people pass some theory-of-mind tests and fail others. They get many jokes, for instance, but don’t understand irony. They make small talk but tend not to discuss the subtler dynamics of interpersonal relationships. Theory of mind is a slippery, multilayered concept, so the debate becomes arcane. But it’s clear that Williamses do not generally sniff out the sorts of hidden meanings and intentions that lie behind so much human behavior....

“And the most important abnormalities in Williams,” he says, “are circuits that have to do with basic regulation of emotions.” The most significant such finding is a dead connection between the orbitofrontal cortex, an area above the eye sockets and the amygdala, the brain’s fear center. The orbitofrontal cortex (or OFC) is associated with (among other things) prioritizing behavior in social contexts, and earlier studies found that damage to the OFC reduces inhibitions and makes it harder to detect faux pas.
The Berman team detected a new contribution to social behavior: They found that while in most people the OFC communicated with the amygdala when viewing threatening faces, the OFC in people with Williams did not. This OFC-amygdala connection worked normally, however, when people with Williams viewed nonsocial threats, like pictures of snakes, sharks or car crashes.
In re “natural racism”: Has it been determined whether it’s always about the same distinctions?
In some places—for example, Protestant vs. Catholic in Northern Ireland—the groups look very similar to outsiders. Does “natural racism” kick in as young as American white-black racism?
Why wouldn’t it be about whatever distinctions the kids can perceive cleanly dividing the group? I don’t really know. Here are some Discover articles that are relevant and have different implications:
My hypothesis is that which distinctions the kids find important are the result of adults’ involuntary reactions to people from the various groups.
It’s possible it is the result of multiple factors.
Lack of exposure leading to less ability to distinguish facial differences is a good guess. Glomming on to any difference regardless of culture is a good guess. Modeling adults is a good guess.
After the fact, many changes in the brain would be justified by various possible resultant persons. This is a weakness of CEV, at least, I do not know the solution to the problem. Were you to become the most fundamentalist Christian alive from futuristic brain implants and lobotomies, you would say something like “I am grateful for the surgery because otherwise I never would have known Jesus,” and you would be grateful.
My layman’s understanding of CEV is that the preceding brain should approve of the results of the improvement. So I would have to fervently desire to know Jesus and somehow be incapable of doing so, for CEV to allow me being turned into a fundamentalist.
The other side of the coin is that if we require such approval, where does that leave most of humanity? The most vicious 10% of humanity? How do we account for the most fundamentalist Christian alive in forming CEV? How do we account for people who think that beating their children for not believing in god is OK, and would even want their community to do the same to them if they didn’t believe?
I think the way you phrased it, “allow me being turned,” was very good. Humans see a difference between causing and allowing to happen, so it must be reflected somehow in the first stages of CEV.
If the placebo effect actually worked exactly like that, then yes, you would die while the self-deluded person would do better. However, from personal experience, I highly suspect it doesn’t (I have never had anything that I was told I’d be likely to die from, but I believe even minor illnesses give you some nonzero chance of dying). Here is how I would reason in the world you describe:
There is some probability I will get better from this illness, and some probability I will die.
The placebo effect isn’t magic, it is a real part of the way the mind interacts with the body. It will also decrease my chances of dying.
I don’t want to die.
Therefore I will activate the effect.
To activate the effect for maximum efficiency, I must believe that I will certainly recover.
I have activated the placebo effect. I will recover (Probability: 100%). Max placebo effect achieved!
The world I live in is weird.
In the real world, the above mental gymnastics are not necessary. Think about the things that would make you, personally, feel better during your illness. What makes you feel more comfortable, and less unhappy, when you are ill? For me, the answer is generally a tasty herbal tea, being warm (or cooled down if I’m overheated), and sleeping. If I am not feeling too horrible, I might be up to enjoying a good novel. What would make you feel most comfortable may differ. However, since both of us enjoy thinking rationally, I doubt spouting platitudes like “I have 100% chances of recovery! Yay!” is going to make you personally feel better. Get the benefits of pain reduction and possibly better immune response of the placebo effect by making yourself more physically and mentally comfortable. When I do these things, I don’t think they help me get better because they have some magical ability in and of themselves. I think they will help me get better because of the positive associations I have for them. Hope that helps you in some way.
Well, yeah, obviously it’s a simplified model to make the math easier, but the end result is the same. The real formula might for example look more like P=0.2+(expectation^2)/3 than P=expectation/2. In that case, the end result is both a real probability and expectation equal to roughly 0.2155 (source: http://www.wolframalpha.com/input/?i=X%3D0.2%2B%28X^2%29%2F3 )
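(A quick check of that variant, again purely illustrative: iterating the update, or solving the consistency equation x = 0.2 + x^2/3 directly, both give the same fixed point of about 0.2155, still well below the 0.2 + 1/3, about 0.533, available to an agent holding a belief of 1.0:)

    import math

    def f(x):
        return 0.2 + x ** 2 / 3   # actual chance as a function of the held estimate

    x = 0.8
    for _ in range(100):
        x = f(x)                  # repeatedly update to the actual chance
    print(round(x, 6))            # ~0.215477

    # Same value from the consistency equation x = 0.2 + x^2/3, i.e. x^2 - 3x + 0.6 = 0:
    print((3 - math.sqrt(9 - 4 * 0.6)) / 2)   # ~0.215477

    print(f(1.0))                 # ~0.533, the outcome for full self-delusion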
Also, while I used the placebo effect as a dramatic and well-known example, it crops up in myriad other places. I am uncomfortable revealing too much detail, but it has an extremely real and devastating effect on my daily life, which means I’m kind of desperate to resolve this and get pissed when people say the problem doesn’t exist without showing mathematically how.
You’re asking too general a question. I’ll attempt to guess at your real question and answer it, but that’s notoriously hard. If you want actual help you may have to ask a more concrete question so we can skip the mistaken assumptions on both sides of the conversation. If it’s real and devastating and you’re desperate and the general question goes nowhere, I suggest contacting someone personally or trying to find an impersonal but real example instead of the hypothetical, misleading placebo example (the placebo response doesn’t track calculated probabilities, and it usually only affects subjective perception).
Is the problem you’re having that you want to match your emotional anticipation of success to your calculated probability of success, but you’ve noticed that on some problems your calculated probability of success goes down as your emotional anticipation of success goes down?
If so, my guess is that you’re inaccurately treating several outcomes as necessarily having the same emotional anticipation of success.
Here’s an example: I have often seen people (who otherwise play very well) despair of winning a board game when their position becomes bad, and subsequently make moves that turn their 90% losing position into a 99% losing position. Instead of that, I will reframe my game as finding the best move in the poor circumstances I find myself. Though I have low calculated probability of overall success (10%), I can have quite high emotional anticipation of task success (>80%) and can even be right about that anticipation, retaining my 10% chance rather than throwing 9% of it away due to self-induced despair.
Sounds like we’re finally getting somewhere. Maybe.
I have no way to store calculated probabilities other than as emotional anticipations. Not even the logistical nightmare of writing them down, since they are not introspectively available as numbers and I also have trouble with expressing myself linearly.
I can see how reframing could work for the particular example of game-like tasks; however, I can’t find a similar workaround for the problems I’m facing, and even if I could, I don’t have the skill to reframe and self-modify with sufficient reliability.
One thing that seems like it’s relevant here is that I seem to mainly practice rationality indirectly, by changing the general heuristics, and usually don’t have direct access to the data I’m operating on nor the ability to practice rationality in realtime.
… that last paragraph somehow became more of an analogy because I can’t explain it well. Whatever, just don’t take it too literally.
I can see how reframing could work for the particular example of game-like tasks; however, I can’t find a similar workaround for the problems I’m facing, and even if I could, I don’t have the skill to reframe and self-modify with sufficient reliability.
I asked a girl out today shortly after having a conversation with her. She said no and I was crushed. Within five seconds I had reframed as “Woo, I made a move! In daytime in a non-pub environment! Progress on flirting!”
My apologies if the response is flip but I suggest going from “I did the right thing, woo!” to “I made the optimal action given my knowledge, that’s kinda awesome, innit?”
That’s still the same class of problem: “screwed over by circumstances beyond reasonable control”. Stretching it to full generality, “I made the optimal decision given my knowledge, intelligence, rationality, willpower, state of mind, and character flaws”, only makes the framing WORSE because you remember how many things you suck at.
I think that humans can mentally self-modify to some extent, especially if it really really matters. If you really needed to be optimistic, you might be able to modify yourself to be such by significantly participating in certain types of organized religion. (This is a rather extreme example—a couple minutes of brainstorming would probably yield ideas with (much?) lower cost and similar results, but it illustrates the possibility.)
Expected utility maximizers are not necessarily served by updating their map to accurately reflect the territory—there are cases such as the above when one might make an effort to willingly make one’s map reflect the territory less accurately. The reason why expected utility maximizers often do try to update their map to accurately reflect the territory is that it usually yields greater utility in comparison to alternative strategies—having an accurate map is (I would guess) not much of a source of terminal utility for most.
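(A crude illustration of that point using the toy model from this thread; the function names are made up for the sketch, and the utilities, 1 for recovery and 0 for death, are just illustrative:)

    def actual_recovery_chance(estimate):
        return estimate / 2          # toy placebo world: actual chance is half the held estimate

    def expected_utility(estimate):
        p = actual_recovery_chance(estimate)
        return p * 1 + (1 - p) * 0   # utility 1 for recovery, 0 for death

    # The only estimate that accurately reflects the territory is the fixed point, 0:
    print(expected_utility(0.0))     # 0.0

    # But expected utility is maximized by the least accurate estimate:
    estimates = [i / 100 for i in range(101)]
    best = max(estimates, key=expected_utility)
    print(best, expected_utility(best))   # 1.0 0.5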
I might theoretically be able to do this, but it would involve rejecting the entirety of rationality and becoming a solipsist or something, so after recovery the thing my body would have become would not undo the modification and would instead go intentionally create UFAI as an artistic statement or something.
Ok, a slight exaggeration, but far less slight than I’m comfortable with.
Since you’re likely the one who would benefit from it, hopefully you brainstormed for a few minutes before you decided that my “religion” approach was really the most effective one—I just typed the first idea that popped in my head and seemed to work.
Huh? Not only was it just an example, but solipsism is incompatible with every religion I know of.
Anyway, I didn’t brainstorm it for roughly the same reason I don’t brainstorm specific ways to build a perpetuum mobile. The way my brain is set up, I can’t reject rationality in any single situation like that without rejecting the entire concept of rationality, and without that my entire belief structure disintegrates into postmodern relativist solipsism. Similar but more temporary things have happened before and the consequences are truly catastrophic.
And yea, this obviously isn’t how it’s supposed to work but I’ve not been able to fix it, or even figure out what would be needed to do so.
The scenario you propose does seem inevitably to cause a rational agent to lose. However, it is not realistic, and I can’t think of any situations in real life that are like this—your fate is not magically entangled with your beliefs. Though real placebo effects are still not fully understood, they don’t seem to work this way: they may make you feel better, but they don’t actually make you better. Merely feeling better could actually be dangerous if, say, you think your asthma is cured and decide to hike down into the Grand Canyon.
Maybe there are situations I haven’t thought of where this is a problem, though. Can you give a detailed example of how this paradox obtrudes on your life? I think you might get more useful feedback that way.
MAYBE asthma is an exception (I doubt it), but generally, in humans the scenario actually IS realistic, exactly because outcomes are entangled with your beliefs in a great many and powerful ways that influence you every day. It’s why you can detect lies, why positive thinking and placebos work, etc.
Edit: realized this might come off as more hostile than I intended, but too lazy to come up with something better.
I was really hoping for a detailed example. As I said, the evidence, though not unequivocal, does not indicate that placebos improve outcomes in any objective way.
I keep running into problems with various versions of what I internally refer to as the “placebo paradox”, and can’t find a solution that doesn’t lead to Regret Of Rationality. Simple example follows:
You have an illness from which you’ll either get better, or die. The probability of recovering is exactly half of what you estimate it to be, due to the placebo effect/positive thinking. Before learning this you have 80% confidence in your recovery. Since you estimate 80%, your actual chance is 40%, so you update to this. Since the estimate is now 40%, the actual chance is 20%, so you update to this. Then it’s 10%, so you update to that, etc., until both your estimated and actual chance of recovery are 0. Then you die.
An irrational agent, on the other hand, upon learning this could self delude to 100% certainty of recovery, and have a 50% chance of actually recovering.
This is actually causing me real world problems, such as inability to use techniques based on positive thinking, and a lot of cognitive dissonance.
Another version of this problem features in HP:MoR, in the scene where Harry is trying to influence the behaviour of Dementors.
And to show this isn’t JUST a quirk of human mind design, one can envision Omega setting up an isomorphic problem for any kind of AI.
For actual humans, I’d look into ways of possibly activating the placebo effect without explicit degrees of belief, such as intense visualization of the desired outcome.
This is an interesting idea but I’m skeptical that this would actually work. There are studies which I don’t have the citations for (they are cited in Richard Wiseman’s “59 Seconds”) which strongly suggest that positive thinking in many forms doesn’t actually work. In particular, having people visualize extreme possibilities of success (e.g. how strong they’ll be after they’ve worked out, or how much better looking they will be when they lose weight, etc.) makes people less likely to actually succeed (possibly because they spend more time simply thinking about it rather than actually doing it). This is not strong evidence, but it is suggestive evidence that visualization is not sufficient to do that much. These studies didn’t look at medical issues where placebos are more relevant.
http://articles.latimes.com/2010/dec/22/health/la-he-placebo-effect-20101223
The human brain is a weird thing. Also, see the entire body of self-hypnosis literature.
Another method to try is affirmations.
Any data on whether this is actually possible, and if so, how to do it? Does it work for other things such as social confidence, positive thinking, etc.?
It certainly SEEMS like it’s the declarative belief itself, not visualizations of outcomes, that causes effects. And the fact that so many attempts at perfect deception have failed seems to indicate it’s not possible to disentangle [your best rational beliefs] from what your “brain thinks” you believe.
(… I really need some better notation for talking about these kind of things unambiguously.)
I’m skeptical as to how common it is for your beliefs to influence anything outside of your head, except through your actions. If your belief X makes Y happen because of method Z, then in order to get Y you only need to know about Z, and that it works. Then you can do Z regardless of X, because what you do mostly screens off what you think.
If you can’t get yourself to do something because of a particular belief, that’s another issue.
No, in humans this is not the case, unless you have a much broader definition of “action” than is useful. For example, other humans can read your intentions and beliefs from your posture and facial expression, the body reacts autonomously to beliefs with stuff like producing drugs and shunting around blood flow, and some entire classes of problems such as mental illness or subjective well being reside entirely in your brain.
Sorry about my last sentence in the previous post sounding dismissive, that was sloppy, and not representative of my views.
I guess my real issue with this is that I don’t think that there’s a 50% placebo, and disagree that the “declarative belief” does things directly. My anticipation of success or failure has an influence on my actions, but a 50% placebo I would imagine would work in real life based on hidden, unanticipated factors to the point that someone with accurate beliefs could say that “my anticipation contributes this much, X contributes this much, Y contributes this much, Z contributes this much, and given my x,y,z I anticipate this” and be pretty much correct.
In the least convenient possible universe, there seem to be enough hacks that rationality enables that I would reject the 50% placebo and still net a win. I don’t think we live in a universe where the majority of utility is behind 50% placebos.
Why does everyone get stuck on that highly simplified example, which I just made like that so that the math would be easy to follow?
Or are you simply saying that placebos and the like are an unavoidable cost of being a rationalist and we just have to deal with it and it’s not that big a cost anyway?
More the latter, with the added caveat that I think that there are fewer things falling under the category of “and the like” than you think there are.
I used to think that my social skills were being damaged by rationality, but then through a combination of “fake it till you make it”, learning a few skills, and dissolving a few false dilemmas, they’re now better than they were pre-rationality.
If you want to go into more personal detail, feel free to PM.
Taboo “declarative”. To me, it sounds like you’re talking about a verbal statement (“declared”), in which case it’s pretty obviously false. AFAIK, priming effects work just fine without words.
Yeah, bad choice of words. Maybe “explicit”, “direct” or “first order” would work better?
Actually, you can solve this problem just by snapping your fingers, and this will give you all the same benefits as the placebo effect! Try it—it’s guaranteed to work!
I’ve been doing this for years, and it really does work!
(No, really, I actually have; it actually does. The placebo effect is awesome ^_^)
Relevant and amusing (to me at least) story: A few months ago when I had a cold, I grabbed a box of zinc cough drops from my closet and started taking them to help with the throat pain. They worked as well or better than any other brand of cough drops I’ve tried, and tasted better too. Later I read the box, and it turned out they were homeopathic. I kept on taking them, and kept on enjoying the pain relief.
Probably not. Try throwing a coin in a wishing well or lighting a dollar bill on fire for more effect.
http://jama.ama-assn.org/content/299/9/1016.full
… Even YOU miss the point? Guess I utterly failed at explaining it, then.
IF I could solve the problem I’m stating in the first post, then this would indeed be almost true. It might be true in 99% of cases, but 0.99^infinity is still ~0. Thus that is the only probability I can consistently assign to it. I MIGHT be able to self-modify to be able to hold inconsistent beliefs, but that’s doublethink, which you have explicitly, loudly, and repeatedly warned against and condemned.
I’m baffled at how I seem unable to point at/communicate the concept. I even tried pointing at a specific instance of you using something very similar in MoR.
Eliezer is not “the most capable of understanding (or repairing to an understandable position) commenter on LessWrong”. He is “the most capable of presenting ideas in a readable format” AND “the person with the most rational concepts” on LessWrong. Please stop assuming these qualities are good proxies for, well, EVERYTHING.
Agree. I wouldn’t go as far as to say he was worse than average at understanding others but it certainly isn’t what he is renowned for!
I thought it was all just g factor + understanding of language.
Not quite. Having the right priors about other people’s likely beliefs, patience and humility are all rather important.
There are some people whom I consider incredibly intelligent, and who clearly understand the language, whom I basically expect to be replying to a straw man whenever they make a reply, all else being equal. (Not Eliezer.)
Eliezer has always come off as having plenty of those as well.
What does this mean?
Each one of his sequence posts represents a concept in rationality—so he has many more of these concepts than anyone else here on LW.
(I just noticed there’s some ambiguity—it’s the largest number of rational concepts, not concepts of the highest standard of rationality. [most] [rational concepts], not [most rational] [concepts].)
They aren’t?!?
It would take an artificially bad situation for this to be the case. In the real world, the placebo effect still works, even if you know it’s a placebo—although with diminished efficacy.
But that’s beside the point. More on-point is that intentional self-delusion, if possible, is at best a crapshoot. It’s not systematic; it relies on luck, and it’s prone to Martingale-type failures.
The HPMOR and placebo examples appear, to me, to share another confounding factor: The active ingredient isn’t exactly belief. It’s confidence, or affect, or some other mental condition closely associated with belief. If it weren’t, there’d be no way Harry could monitor his level of belief that the dementors would do what he wanted them to, while simultaneously trying to increase it. Anecdotally, my own attempts at inducing placebo effects feel similar.
The placebo effect works if your brain thinks that you think that it will work, if I understood things correctly.
And yes, that I can’t reliably self delude, and even if I could it would be prone to backfire, is exactly what causes this to be a problem.
I’m decently sure that my brain does not store beliefs separately from confidence, affect, etc.
I thought that was exactly the point of the Dementor sequence: that it was an impossible paradox.
The supposed equivalent version in HP:MOR… (I do not wish to speak for anyone else—feel free to chime in yourselves)
That scene was a clear example—to me—of TDT being successful outside of the prisoner’s dilemma scheme. In a case where apparently only ignorance would help, TDT can transcend and provide (almost) the same power.
Huh? Maybe we’re thinking of different scenes.
Your model assumes a constant effect in each iteration. Is this justified?
I would envisage a constant chance of recovery and an asymptotically declining estimate of recovery. It seems more realistic, but maybe it’s just me?
It’s a toy case; in reality the chance of recovery might be “0.2+0.3*estimate”, but the same general reasoning applies and the end result is still regret of rationality.
Speaking of Omega setting up an isomorphic situation, the Newcomb’s Box problems do a good job of expressing this.
http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/
However, I also thought of a side question. Is the person who is caught in a cycle of negative thinking, like the placebo effect that you mention, engaging in confirmation bias?
I mean, if that person thinks “I am caught in a loop of updates that will inexorably lead to my certain death,” and they are attempting to establish that that is true, they can’t simply say “I went from 80%/40% to 40%/20% to 20%/10%, and this will continue. I’m screwed!” as evidence of its truth, because that’s like offering “4,6,8”, “6,8,10”, “8,10,12” as the guesses for the rule that you know “2,4,6” follows, and then saying “The rule is even numbers, right? Look at all this evidence!”
If a person has a hypothesis that their thoughts are leading them to an inexorable and depressing conclusion, then to test the hypothesis, the rational thing to do is for that person to try proving themselves wrong. By trying “10,8,6” and then getting “No, that is not the case.” (Because the real rule is numbers in increasing order.)
I actually haven’t confirmed this idea myself yet. I just thought of it now. But casting it in this light makes me feel a lot better about all the times I perform what appear at the time to be self-delusions on my brain when I’m caught in depressive thinking cycles, so I’ll throw it out here and see if anyone can contradict it.
Thanks for restating parts of the problem in a much clearer manner!
And yeah, that article is why this problem is wreaking such havoc on me, and I was thinking of it as I wrote the OP. I’m not sure why I didn’t link it.
However, I still can’t resolve the paradox. Although I’m finally starting to see how one might start on doing so: formalizing an entire decision theory that solves the entire class of problems, and then swapping half my mindware out in a single operation. Doesn’t seem like a very good solution though, so I’d rather keep looking for third options.
I don’t think I understand the middle paragraph with all the examples. Probably because the way I actually think of it is not the way I used in the OP, but rather an equation where expectation must be equal to actual probability to call my belief consistent, and jumping straight there. Like so: P=E/2, E=P, thus E=0.
Hmm, I just got a vague intuition saying roughly “Hey, but wait a moment, probability is in the mind. The multiverse is timeless and in each Everett branch you either do recover or you don’t! ”, but I’m not sure how to proceed from there.
Updating on the evidence of yourself updating is almost as much of a problem as updating on the evidence of “I updated on the evidence of myself updating”. Tongue-in-cheek!
That is to say, the decision theory you are currently running is not equipped to handle the class of problems where your response to a problem is evidence that changes the nature of the very problem you are responding to—in the same way that arithmetic is not equipped to handle problems requiring calculus or CDT is not equipped to handle Omega’s two-box problem.
(If it helps your current situation, placebo effects are almost always static modifiers on your scientific/medical chances of recovery)
Do you have a suggestion for a better decision theory, or a suggestion on how exactly I have misinterpreted TDT to cause my current problems?
Knowing that MIGHT help, but probably not in practice. Specifically, for every given instance of the problem I’d need to know a probability to assign such that, if I assign it, it is also the actual chance.
Can you see what an absurdly implausible scenario you must use as a ladder to demonstrate rationality as a liability? Rather than being a strike against strict adherence to reality, the fact that we have to stretch so hard to paint it this way further legitimizes the pursuit of rationality.
Except I happen to, as far as I can tell, be in that “implausible” scenario IRL, or at least an isomorphic one.
I mean no disrespect for your situation, whatever it may be. I gave this some additional thought. You are saying that you have an illness in which the rate of recovery is increased by fifty percent due to a positive outlook and the placebo effect this mindset produces. Or that an embrace of the facts of your condition leads to an exponential decline at a rate of fifty percent. Is it depression, or some other form of mental illness? If it is, then the cause of death would likely be suicide. I am forced to speculate because you were purposefully vague.
For the sake of argument I will go with my speculative scenario. It is very common for those with bipolar disorder and clinical depression to create a negative feedback loop which worsens their situation in the way you have highlighted. But it wouldn’t carry such exacting percentages of taper (indeed, no illness would carry that exact level of decline based merely on the thoughts in the patient’s head). But given your claim that the illness declines exponentially, wouldn’t the solution be knowledge of this reality? It seems that the delusion has come in the form of accepting that an illness can be treated with positive thinking alone. The illness is made worse by an acceptance not of rationality, but of this unsupported data, which by my understanding is irrational.
I am very skeptical of your scenario, merely because I do not know of any illnesses which carry this level of health decline due to the absence of a placebo. If you have it please tell me what it is as I would like to begin research now.
It’s not depression or bipolar disorder, probably, but for the purposes of this discussion the difference is probably irrelevant.
I never claimed the 50% thing was ever anything other than a gross simplification to make the math easier. Obviously it’s much more complicated than that with other factors, less extreme numbers, and so on, but the end result is still isomorphic to it. Maybe it’s even polynomial rather than exponential, but it’s still a huge problem.
Can you actually describe the scenario you really are in? I can think of ways I’d address a lot of real-world analogues, but none of them are actually isomorphic to the example you gave. The solutions generally rely on the lack of a true isomorphism, too.
I’d rather not, due to it being extremely personal and embarrassing as well as a huge weak spot.
atucker wrote a Discussion post about this.
Thanks! Finally something relevant!
Now for the bad news: the parts about the solution are confusing and I can’t figure out how I would apply it to my situation. Could someone please translate it to math?
http://www.guardian.co.uk/science/2010/dec/22/placebo-effect-patients-sham-drug It is also well worth noting that the Placebo Effect works just fine even if you know it’s just a Placebo Effect. I hadn’t realized it worked for others, but I’ve been abusing this one for a lot of my life, thanks to a neurological quirk that makes placebos especially potent for me.
Yes, but you have to BELIEVE the placebos will help. In fact, the paradox ONLY appears in the case you know it’s a placebo because that’s when the feedback loop can happen.
I’m not aware of any research that says a placebo won’t help a “non-believer”—can you cite a study? Given the study I linked where they were deliberately handed inert pills and told that they were an inert placebo, and they still worked, I actually strongly doubt your claim.
And given the research I linked, why in the world wouldn’t you believe in them? They do rationally work.
A placebo will help if you think the pill you’re taking will help. This may be because you think it’s a non-placebo pill that’d help even if you didn’t know you were taking it, or because you know it’s a placebo but think placebos work. If you were given a placebo pill, told it was just a candy, and given no indication it might help anything, it wouldn’t do anything, because it’s just sugar. Likewise if you’re given a placebo, know it’s a placebo, and are convinced on all levels that there is no chance of it working.
Right. So find someone who will tell you it’s a placebo, and read up on the research that says it does work. It’d be irrational to believe that they don’t work, given the volume of research out there.
facepalms Did you even read any other post in this thread?
Quite a few of them. You’re being vague enough that I can only play with the analogies you give me. You gave me the analogy of a placebo not working if you don’t believe in it; I pointed out that disbelief in placebos is rather irrational.
Trying to figure out if it’s rational or not, and if so HOW it’s rational so I can convince my brain of it, is exactly what the entire discussion is about starting from the first post here: http://lesswrong.com/lw/7fo/open_thread_september_2011/4r8q
Can anyone think of a better thing to have said here?
A single study is not sufficient grounds to believe in something, especially a proposition as complicated as “placebos work” (it may not sound complicated expressed in this way, but if you taboo the words ‘placebo’ and ‘work’ you’ll see that there is a lot of machinery in there).
See previous discussion here and note my remarks, I recommend reading the linked articles.
http://scienceblogs.com/insolence/2011/07/dangerous_placebo_medicine_in_asthma.php for a second study, and one that explicitly addresses your concern of psychological vs health benefits (summary: placebos have no actual health benefits, they just manage the psychological side)
Given Armok is looking for a psychological solution, this still seems relevant. There have been a number of interesting studies on placebo effects; whether it’s the actual pill or just priming, it does have a well-documented and noted beneficial effect, and it seemed relevant to Armok’s situation.
I think one way to avoid having to call this regret of rationality would be to see optimism as deceiving, not yourself, but your immune system. The fact that the human body acts differently depending on the person’s beliefs is a problem with human biology, which should be fixed. If Omega does the same thing to an AI, Omega is harming that AI, and the AI should try to make Omega stop it.
Well, deceiving something else by means of deceiving yourself still involves doublethink. It’s the same as saying humans should not try to be rational.
It’s saying that it may be worth sacrificing accuracy (after first knowing the truth so you know whether to deceive yourself!) in order to deceive another agent: your immune system. It’s still important to be rational in order to decide when to be irrational: all the truth still has to pass through your mind at some point in order to behave optimally.
On another note, you may benefit from reciting the Litany of Tarski:
If lying to myself can sometimes be useful, I want to believe that lying to myself can sometimes be useful.
If lying to myself cannot be useful, I want to believe that lying to myself cannot be useful.
Let me not become attached to beliefs I may not want.
I know my brain is a massively parallel neural network with only smooth fitness curves, and certainly isn’t running an outdated version of Microsoft Windows, but from how it’s behaving in response to this you couldn’t tell. I’m a sucky rationalist. :(
An AI can presumably self-modify. For a sufficient reward from Omega, it is worth degrading the accuracy of one’s beliefs, especially if the reward will immediately allow one to make up for the degradation by acquiring new information/engaging in additional processing.
(A hypothetical: Omega offers me 1000 doses of modafinil, if I will lie on one PredictionBook.com entry and say −10% what I truly believe. I take the deal and chuckle every few minutes the first night, when I register a few hundred predictions to make up for the falsified one.)
This entirely misses the point. Yes, you could self modify, but it’s a self modification away from rationality and that gives rise to all sorts of trouble as has been elaborated many times in the sequences. For example: http://lesswrong.com/lw/je/doublethink_choosing_to_be_biased/
Also, LYING about what you believe has nothing to do with this. Omega can read your mind.
I was trying to apply the principle of charity and interpret your post as anything but begging the question: ‘assume rational agents are penalized. How do they do better than irrational agents explicitly favored by the rules/Omega?’
Question begging is boring, and if that’s really what you were asking - ‘assume rational agents lose. How do they not lose?’ - then this thread is deserving only of downvotes.
And Eliezer was talking about humans, not the finer points of AI design in a hugely arbitrary setup. It may be a bad idea for LWers to choose to be biased, but a perfectly good idea for AIXI stuck in a particularly annoying computable universe.
Since I’m not an AI with direct access to my beliefs in storage on a substrate, I was using an analogy to as close as I can get.
Sorry, I was hoping that there was some kind of difference between “penalize this specific belief in this specific way” and “penalize rationality as such in general”, some kind of trick to work around the problem that I hadn’t noticed and which resolved the dilemma.
And your analogy didn’t work for me, is all I’m saying.
To fully solve this problem requires answering the question of how the placebo effect physically works, which requires answering the question of what a belief physically is, to have that physical effect.
However, no-one yet knows the answers to those questions, which renders all of these logical arguments about as useful as Zeno’s proof that arrows cannot move. The problem of how to knowingly induce a placebo response is a physical one, not a logical one. Nature has no paradoxes.
The first part is wrong; the second is obvious, and I never said anything to contradict it. We don’t need to know exactly how beliefs are implemented, just approximately how they behave.
Of course this is a physical problem, and of course we don’t know every detail well enough to give an exact answer, but the math can still be useful for solving the problem.
The point of your post was that the mathematics you are doing is creating the problem, not solving it. I haven’t seen any other mathematics in this thread that is solving the problem either.
Honestly, this discussion was too long ago for me to really remember what it was about well enough to discuss it properly.
I have a couple of suggestions more constructive than my earlier comments.
One is that according to a paper recently cited here, placebos can work even if you know they’re placebos.
The other is that if belief doesn’t work for you, how about visualisation? Instead of trying to believe it will work, just imagine it working. Vividly imagine it, not just imagine that it will work. This doesn’t raise decision-theoretic paradoxes, and people claim results for it, although I don’t know about proper studies. We don’t know how placebos work, and “belief” isn’t necessarily the key state of mind.
That article was probably what caused me to notice the problem in the first place and write the OP.
Visualization is probably the most promising solution, and even if it’s not as strong as the placebo effect it might be worth exploring. My main problems with it are that there’s still some kind of psychological resistance to it, and that I have no clear idea of what exact concrete image I’m supposed to visualize given some abstract goal description.
Could you explain further what you think is wrong about Richard’s analysis of the placebo effect?
I don’t think it’s a paradox, it’s just that the perfect is sometimes the enemy of the good. Your brain has a lot of different components. With a lot of effort, you can change the way some of them think. Some of them will always be irrational no matter what either because they are impossible to change much or because there just isn’t enough time in your life to do it.
Given that some components are irretrievably irrational, you may be better off in terms of accomplishing your goals if other components—which you might be able to change—stay somewhat irrational.
The thing is, I can’t consciously choose to be irrational. I’d first have to entirely reject a huge network of ideals that are the only thing making me even attempt to be slightly rational ever.
I challenge this assumption. I have a very well functioning, blissfully optimistic mindset that I can load when my rationality suggests that this ignorance is indeed my best defense. I wish I had the skill to understand how I reconcile this with the rational compartment in my mind, but the two do seem to co-exist quite happily, and I enjoy many of the perks of a positive outlook.
Well, I don’t and I can’t. And I strongly doubt I could ever learn anything like that no matter what.
Given that a human brain can do it, you are perhaps too confident. A proof of concept would be to edit your brain with neurosurgery.
I don’t really count lobotomy as “learn”.
About Williams syndrome: I have read in several places that language skills are not sub-normal despite brain abnormalities in those areas, because there is much less than normal development in generally spatial and math/logic type areas. Having less raw brainpower to devote to language, they make up for it by being more subconsciously “focused”, though that isn’t quite the right word. They can be above or below average with language, depending on how it balances out; “normal” abilities are something like an average.
Also, such people are not naturally racist, unlike “normal” people. This is relevant for the aspie-leaning population here—non-neurotypial isn’t inherently normative.
I wonder what severity of Asperger’s syndrome is required to be non-racist? I strongly suspect there is a level that would be sufficient.
Language-wise, it’s kind of a mixed bag. How much do social things like sarcasm matter for ‘language skills’? And how Williams syndrome leads to sociability and lack of racism is very interesting; following extract dump from https://www.nytimes.com/2007/07/08/magazine/08sociability-t.html?reddit
In re “natural racism”: Has it been determined whether it’s always about the same distinctions?
In some places—for example, Protestant vs. Catholic in Northern Ireland—the groups look very similar to outsiders. Does “natural racism” kick in as young as American white-black racism?
Why wouldn’t it be about whatever distinctions the kids can perceive cleanly dividing the group? I don’t really know. Here are some Discover articles that are relevant and have different implications:
Williams syndrome children show no racial stereotypes or racial fear
They don’t all look the same
Racial bias weakens our ability to feel someone else’s pain
Probably using those one could backtrack and find the actual research and the citations from it, etc. From the first article:
Well, it was a good hypothesis. Not really sure what “signs of” means exactly.
My hypothesis is that which distinctions the kids find important are the result of adults’ involuntary reactions to people from the various groups.
It’s possible it is the result of multiple factors.
Inexposure leading to less ability to determine facial differences is a good guess. Glomming on to any difference regardless of culture is a good guess. Modeling adults is a good guess.
I strongly doubt that, no matter what, I couldn’t ever produce a lobotomy procedure anything like something you would mistake for learning.
After the fact, many changes in the brain would be justified by the various possible resultant persons. This is a weakness of CEV; at least, I do not know the solution to the problem. Were you to become the most fundamentalist Christian alive from futuristic brain implants and lobotomies, you would say something like “I am grateful for the surgery because otherwise I never would have known Jesus,” and you would be grateful.
My layman’s understanding of CEV is that the preceding brain should approve of the results of the improvement. So I would have to fervently desire to know Jesus and somehow be incapable of doing so, for CEV to allow me being turned into a fundamentalist.
The other side of the coin is that if we require such approval, where does that leave most of humanity? The most vicious 10% of humanity? How do we account for the most fundamentalist Christian alive in forming CEV? How do we account for people who think that beating their children for not believing in god is OK, and would even want their community to do the same to them if they didn’t believe?
I think the way you phrased it, “allow me being turned,” was very good. Humans see a difference between causing and allowing to happen, so it must be reflected somehow in the first stages of CEV.
Which was exactly my point.
If the placebo effect actually worked exactly like that, then yes, you would die while the self-deluded person would do better. However, from personal experience, I highly suspect it doesn’t (I have never had anything that I was told I’d be likely to die from, but I believe even minor illnesses give you some nonzero chance of dying). Here is how I would reason in the world you describe:
There is some probability I will get better from this illness, and some probability I will die.
The placebo effect isn’t magic, it is a real part of the way the mind interacts with the body. It will also decrease my chances of dying.
I don’t want to die.
Therefore I will activate the effect.
To activate the effect for maximum efficiency, I must believe that I will certainly recover.
I have activated the placebo effect. I will recover (Probability: 100%). Max placebo effect achieved!
The world I live in is weird.
In the real world, the above mental gymnastics are not necessary. Think about the things that would make you, personally, feel better during your illness. What makes you feel more comfortable, and less unhappy, when you are ill? For me, the answer is generally a tasty herbal tea, being warm (or cooled down if I’m overheated), and sleeping. If I am not feeling too horrible, I might be up to enjoying a good novel. What would make you feel most comfortable may differ. However, since both of us enjoy thinking rationally, I doubt spouting platitudes like “I have 100% chances of recovery! Yay!” is going to make you personally feel better. Get the benefits of pain reduction and possibly better immune response of the placebo effect by making yourself more physically and mentally comfortable. When I do these things, I don’t think they help me get better because they have some magical ability in and of themselves. I think they will help me get better because of the positive associations I have for them. Hope that helps you in some way.
Well, yeah, obviously it’s a simplified model to make the math easier, but the end result is the same. The real formula might, for example, look more like P = 0.2 + (expectation^2)/3 rather than P = expectation/2. In that case, the end result is a real probability and an expectation both equal to roughly 0.2155 (source: http://www.wolframalpha.com/input/?i=X%3D0.2%2B%28X^2%29%2F3 )
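A quick numeric check of that fixed point, just iterating “set the estimate to the actual probability” until it stops moving (a sketch; the 0.8 starting value is arbitrary):

```python
# Numeric check of the self-consistent belief for P = 0.2 + E**2 / 3:
# repeatedly set the estimate to the actual probability it implies.

def actual_probability(estimate):
    return 0.2 + estimate ** 2 / 3

estimate = 0.8
for _ in range(100):
    new_estimate = actual_probability(estimate)
    if abs(new_estimate - estimate) < 1e-12:
        break
    estimate = new_estimate

print(f"consistent belief: {estimate:.6f}")  # ~0.215476
```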
Also, while I used the placebo effect as a dramatic and well-known example, it crops up in a myriad of other places. I am uncomfortable revealing too much detail, but it has an extremely real and devastating effect on my daily life, which means I’m kind of desperate to resolve this and get pissed when people say the problem doesn’t exist without showing mathematically how.
You’re asking too general a question. I’ll attempt to guess at your real question and answer it, but that’s notoriously hard. If you want actual help you may have to ask a more concrete question so we can skip the mistaken assumptions on both sides of the conversation. If it’s real and devastating and you’re desperate and the general question goes nowhere, I suggest contacting someone personally or trying to find an impersonal but real example instead of the hypothetical, misleading placebo example (the placebo response doesn’t track calculated probabilities, and it usually only affects subjective perception).
Is the problem you’re having that you want to match your emotional anticipation of success to your calculated probability of success, but you’ve noticed that on some problems your calculated probability of success goes down as your emotional anticipation of success goes down?
If so, my guess is that you’re inaccurately treating several outcomes as necessarily having the same emotional anticipation of success.
Here’s an example: I have often seen people (who otherwise play very well) despair of winning a board game when their position becomes bad, and subsequently make moves that turn their 90% losing position into a 99% losing position. Instead of that, I will reframe my game as finding the best move in the poor circumstances I find myself. Though I have low calculated probability of overall success (10%), I can have quite high emotional anticipation of task success (>80%) and can even be right about that anticipation, retaining my 10% chance rather than throwing 9% of it away due to self-induced despair.
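Putting rough numbers on that (purely illustrative, not from any study): despair throws away most of the remaining winning chances, while reframing keeps the 10% largely intact.

```python
# Illustrative numbers only: despair-driven play versus reframing as
# "find the best move in this poor position".

p_win_if_playing_best = 0.10   # 90% losing position, but not hopeless
p_win_if_despairing = 0.01     # sloppy moves make it 99% losing

# Emotional anticipation of *task* success ("I found the best move")
# can stay high even when overall success is unlikely:
p_find_best_move = 0.80

expected_win_reframed = (p_find_best_move * p_win_if_playing_best
                         + (1 - p_find_best_move) * p_win_if_despairing)
expected_win_despair = p_win_if_despairing

print(f"reframed:   {expected_win_reframed:.3f}")  # ~0.082
print(f"despairing: {expected_win_despair:.3f}")   # 0.010
```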
Sounds like we’re finally getting somewhere. Maybe.
I have no way to store calculated probabilities other than as emotional anticipations. Not even the logistical nightmare of writing them down, since they are not introspectively available as numbers and I also have trouble with expressing myself linearly.
I can see how reframing could work for the particular example of game-like tasks; however, I can’t find a similar workaround for the problems I’m facing, and even if I could, I don’t have the skill to reframe and self-modify with sufficient reliability.
One thing that seems like it’s relevant here is that I seem to mainly practice rationality indirectly, by changing the general heuristics, and usually don’t have direct access to the data I’m operating on nor the ability to practice rationality in realtime.
… that last paragraph somehow became more of an analogy because I can’t explain it well. Whatever, just don’t take it too literally.
I asked a girl out today shortly after having a conversation with her. She said no and I was crushed. Within five seconds I had reframed as “Woo, I made a move! In daytime in a non-pub environment! Progress on flirting!”
My apologies if the response is flip but I suggest going from “I did the right thing, woo!” to “I made the optimal action given my knowledge, that’s kinda awesome, innit?”
That’s still the same class of problem: “screwed over by circumstances beyond reasonable control”. Stretching it to full generality, “I made the optimal decision given my knowledge, intelligence, rationality, willpower, state of mind, and character flaws”, only makes the framing WORSE because it reminds you of how many things you suck at.
I think that humans can mentally self-modify to some extent, especially if it really, really matters. If you really needed to be optimistic, you might be able to modify yourself to be such by significantly participating in certain types of organized religion. (This is a rather extreme example; a couple minutes of brainstorming would probably yield ideas with (much?) lower cost and similar results, but it illustrates the possibility.)
Expected utility maximizers are not necessarily served by updating their map to accurately reflect the territory—there are cases such as the above when one might make an effort to willingly make one’s map reflect the territory less accurately. The reason why expected utility maximizers often do try to update their map to accurately reflect the territory is that it usually yields greater utility in comparison to alternative strategies—having an accurate map is (I would guess) not much of a source of terminal utility for most.
ETA: Missing words. >.<
I might theoretically be able to do this, but it would involve rejecting the entirety of rationality and becoming a solipsist or something, so after recovery the thing my body would have become would not undo the modification, and would instead go intentionally create a UFAI as an artistic statement or something.
Ok, a slight exaggeration, but far less slight than I’m comfortable with.
Since you’re likely the one who would benefit from it, hopefully you brainstormed for a few minutes before you decided that my “religion” approach was really the most effective one—I just typed the first idea that popped in my head and seemed to work.
Huh? Not only was it just an example, but solipsism is incompatible with every religion I know of.
Anyway, I didn’t brainstorm it for roughly the same reason I don’t brainstorm specific ways to build a perpetuum mobile. The way my brain is set up, I can’t reject rationality in any single situation like that without rejecting the entire concept of rationality, and without that my entire belief structure disintegrates into postmodern relativist solipsism. Similar but more temporary things have happened before, and the consequences are truly catastrophic.
And yea, this obviously isn’t how it’s supposed to work but I’ve not been able to fix it, or even figure out what would be needed to do so.
The scenario you propose does seem inevitably to cause a rational agent to lose. However, it is not realistic, and I can’t think of any situations in real life that are like this—your fate is not magically entangled with your beliefs. Though real placebo effects are still not fully understood, they don’t seem to work this way: they may make you feel better, but they don’t actually make you better. Merely feeling better could actually be dangerous if, say, you think your asthma is cured and decide to hike down into the Grand Canyon.
Maybe there are situations I haven’t thought of where this is a problem, though. Can you give a detailed example of how this paradox obtrudes on your life? I think you might get more useful feedback that way.
MAYBE asthma is an exception (I doubt it), but generally, in humans the scenario actually IS realistic, exactly because outcomes are entangled with your beliefs in a great many powerful ways that influence you every day. It’s why you can detect lies, why positive thinking and placebos work, etc.
Edit: realized this might come off as more hostile than I intended, but too lazy to come up with something better.
I was really hoping for a detailed example. As I said, the evidence, though not unequivocal, does not indicate that placebos improve outcomes in any objective way.