Defeating Ugh Fields In Practice
Unsurprisingly related to: Ugh fields.
If I had to choose a single piece of evidence to argue that the rationality assumption of neoclassical economics is totally, irretrievably incorrect, it would be this article about financial incentives and medication compliance. In short, offering people small cash incentives vastly improves their adherence to life-saving medical regimens. That's right: for a significant number of people, a small chance at winning $10-100 can be the difference between whether or not they stick to a regimen that has a very good chance of saving their life. This technique has even shown promise in getting drug addicts and psychiatric patients to adhere to their regimens, for as little as a $20 gift certificate. In the aggregate, this problem is estimated to cost about 5% of total health care spending (roughly $100 billion), and that may not properly account for the utility lost by those who are harmed beyond repair. To claim that people are making a reasoned decision between the payoffs of taking and not taking their medication, yet can be persuaded to change their behaviour by a payoff of about $900 a year (or less), is to crush reality into a theory that cannot hold it. This is doubly true when you consider that some of these people were fairly affluent.
A likely explanation of this detrimental irrationality is something close to an Ugh field. It must be miserable having a life-threatening illness. Being reminded of it by taking a pill every single day (or more frequently) is not pleasant. Then there’s the question of whether you already took the pill. Because if you take it twice in one day, you’ll end up in the hospital. And Heaven forfend your treatment involves needles. Thus, people avoid taking their medicine because the process becomes so unpleasant, even though they know they really should be taking it.
As this experiment shows, this serious problem has a simple and elegant solution: make taking their medicine fun. As one person in the article describes it, using a low-reward lottery made taking his meds “like a game”; he couldn’t wait to check the dispenser to see if he’d won (and take his meds again). Instead of thinking about how they have some terrible condition, patients get excited thinking about how they could be winning money. The Ugh field has been demolished, with the once-feared procedure now associated with a tried-and-true intermittent reward system. It also wouldn’t surprise me in the least if people who are unlikely to adhere to a medical regimen are the kind of people who really enjoy playing the lottery.
This also explains why rewarding success may be more useful in the long run than punishing failure: if a kid does their homework because otherwise they don’t get dessert, it’s labor. If they get some reward for getting it done, it becomes a positive. The problem is that if they know what the reward is, they may anchor on already having it, turning the reward back into an implicit punishment: if you promise your kid a trip to Disneyland for getting above a 3.5 and they get a 3.3, they feel like they actually lost something. A gambling mechanism may be key here. If your reward is a chance at a real reward, you don’t anchor on already having the reward, but the reward still excites you.
I believe that the fact that such a significant problem can be overcome with such a trivial solution has tremendous implications, the enumeration of all of which would make for a very unwieldy post. A particularly noteworthy issue is the difficulty of applying such a technique to one’s own actions, a problem which I believe has a fairly large number of workable solutions. That’s what comments, and, potentially, follow-up posts are for.
I have had success working around ‘Ugh’ reactions to various activities. I took the direct approach. I (intermittently) use nicotine lozenges as a stimulant while exercising. Apart from boosting physical performance and motivation it also happens to be the most potent substance I am aware of for increasing habit formation in the brain.
Perhaps more important than the, you know, chemical sledgehammer, is the fact that the process of training myself in that way brings up “anti-Ugh” associations. I love optimisation in general and self-improvement in particular. I am also fascinated by pharmacology and instinctively ‘cheeky’. Having never even considered smoking a cigarette, and yet using the disreputable substance ‘nicotine’ in a way that can be expected to improve my health and well-being, is exactly the sort of thing I know my brain loves doing.
I like this idea, and might even adopt it myself. But I feel I should emphasize, for anyone who considers adopting this strategy, that it absolutely requires proper bookkeeping, a predetermined rate limit, and predetermined blackout periods. The rate limit protects you if a change in schedule increases the chem-reward frequency by too much. The blackout periods ensure you’ll find out if any sort of dependency forms.
Respect for the process is important, and ‘proper bookkeeping’ sounds like a good theory but I know that this suggestion would ‘absolutely’ make the process counterproductive. Trying this would utterly destroy my exercise programming rather than helping it. Ugh! The opposite of what would work.
Cycling (drugs, especially stimulating ones) is important, both to prevent withdrawal effects and to ensure continued usefulness. But I’ve learned that it is best to do things in a way that works for me.
How about if it were handled by a button in your phone’s UI, which would log the event, roll dice to determine whether you get the reinforcement that time, and enforce rate limits automatically?
That is something I would do. In fact, by preference I would spend a day coding it up instead of two hours in aggregate manually bookkeeping. “Flow” vs “Ugh”!
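As a rough illustration of the idea (every name, probability, and limit below is invented; the conversation above specifies no concrete values), the button's logic might look something like this:

```python
import random
from collections import Counter

class RewardLogger:
    """Hypothetical sketch of the phone-button idea: each press is logged,
    a die roll decides whether this press earns the intermittent reward,
    and a per-week cap plus blackout weeks are enforced automatically,
    so no manual bookkeeping is required."""

    def __init__(self, win_prob=0.25, max_wins_per_week=3, blackout_weeks=()):
        self.win_prob = win_prob                    # chance of reward per press
        self.max_wins_per_week = max_wins_per_week  # rate limit
        self.blackout_weeks = set(blackout_weeks)   # weeks with no rewards at all
        self.events = []                            # (week, rewarded) log
        self.wins_by_week = Counter()

    def press(self, week, rng=random.random):
        """Log one event in the given week; return True if it was rewarded."""
        rewarded = (
            week not in self.blackout_weeks
            and self.wins_by_week[week] < self.max_wins_per_week
            and rng() < self.win_prob
        )
        if rewarded:
            self.wins_by_week[week] += 1
        self.events.append((week, rewarded))
        return rewarded
```

The blackout weeks serve the dependency check mentioned earlier: the app keeps logging events during those weeks, it just never pays out.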
I should note that the role nicotine lozenges are taking here is not primarily as a training reward, like giving the rat electronically stimulated orgasms when it presses the lever. Nicotine isn’t particularly strong in that role compared to alternatives (such as abusing ritalin), at least when it is not administered as a massive hit straight into the brain via the lungs. No, the particular potency of nicotine is that it potentiates the formation of habits for activities undertaken while under the influence, by means more fundamental than a ‘mere’ stimulus-reward mechanism. Such habits turn out to be harder to extinguish than the impulse to take a drug. This is what makes smoking so notoriously hard to quit even with patches, and what makes fake cigarettes to suck on useful.
In a different thread I’ve been discussing nootropics that enhance learning via the acetylcholine system. Half of those acetylcholine receptors are called nAChRs (Nicotinic acetylcholine receptors). This is not a coincidence.
The other fascinating (to me) fact regarding nicotine is that it has the opposite effect on the sensitivity of the brain’s reward mechanism from other stimulant drugs of abuse. Where abusing meth, cocaine or coffee will make all the rewards you experience in life less salient when you stop medicating, the reverse occurs with nicotine. The systems get downregulated, but that mechanism is itself countered by the addition of more receptors, leaving a net boost. This means that if you stop using nicotine, food starts to taste really good (and you may gain weight!).
How long do you have to do it for before that becomes noticeable?
I have never noticed myself gaining weight, but I have noticed food tasting great! With regard to weight gain I can only refer to the experiences of those who quit smoking, although even then the process of quitting smoking provokes all sorts of other complications.
I can say that if you use ritalin every day for one month then stop you can expect everything to seem somewhat dull for a week, like the contrast has been turned down. With nicotine you can approximately expect the reverse. I cannot tell you whether that is good or bad for you in particular.
I was more interested in the food tasting really good part.
How often during that month would you have to use it to get that kind of response, in your experience?
Pardon me, that was what I was getting at with the ‘dull’ and the reverse. It applies to all your senses, and taste is perhaps the most obvious of those. You could try three days on, then one day off. That usually makes things taste good. But to be honest I’m not sure that would be the counter-regulation of dopaminergic receptors. It’s probably just being hungry after a few days of stimulant-based appetite suppression.
To expect to experience significant withdrawal effects after one month you would need to be using it most days. Within-week cycling slows that down.
I find this to be an intriguing idea, especially having had a lot of difficulty maintaining any kind of exercise regime in the past. Can you explain in more detail the kind of bookkeeping required, and also the effects you personally feel as a result of having developed the habit of exercising?
I am not the best person to ask. I’ve always been a health nut and I’ve spent years at a time training for marathons (i.e. addicted to running), doing various martial arts and soccer. What I was doing was re-forming a running habit after letting it slide in favour of being a gym junkie with some mates. Once I got used to associating exercise with socialization it was amazingly hard to get back into the solo running habit, even though I know the time alone in a state of flow, with all the hormones associated with intense cardio, is extremely important to me. It is great for stress relief and gives my brain a chance to think things through, solve problems and occasionally write code in my head.
It would be very cool to read a series of top level posts about this experience. Perhaps...
The first would give the basic idea, plus a set of warnings and provisos as to who might be seriously hurt by trying to replicate your results, and general cautions. Perhaps you could create a sub-area in the comments where other people could suggest reasons for caution, to be voted up and down?
The second post would give some background theory as to why the general approach should be expected to work, possibly with some links to some psycho-pharmacology and so on. Also useful would be to suggest a way to measure success and/or detect negative side effects—possibly with a logging system like this?
Finally, a third post would give practical instructions on how to “build a habit”: habit design, what to take and when, with an explanation of the benefits and any side effects or worries that you were harboring on the side.
I think that would be enough for a brave soul or two (who was not likely to boomerang into a bad situation, like falling back into a smoking habit) to try to replicate your success in a documentable and relatively safe way, to see if they got similar benefits.
It would be hilarious (and almost plausible) if, five years from now, one of the primary reasons people gave for not smoking was because it interfered with their use of the “wedrifid method” for nicotine assisted positive habit formation :-)
I like your thoughts! Particularly that part about the ‘wedrifid method’. A place where posts somewhat like what you mention are commonplace is imminst.org.
Before I got into anything quite so experimental I would probably want to post on some basics. There is some real low hanging fruit out there!
Please do! I would be very interested in a series on “use of chemicals to increase willpower”. I would even contribute...
I don’t know if you’ve written anything in the last ~year since (pretty sure you haven’t), so I’ve started compiling information at http://www.gwern.net/Nootropics#nicotine
I would like to second patrissimo in a way more concrete than merely upvoting you. Have you made any progress on this?
The idea of using something as powerful as nicotine both terrifies and tempts me, and I’m not sure I’d want to try it without considerable documentation.
Without condoning or condemning I’d like to point out that there seems to be some mental accounting here, which is to say that the possible harm of nicotine is easily justified by confining it to the health bucket—it’s outweighed by the health benefits supposedly obtained. Would you be willing to go outside the health bucket and apply the same technique to paying bills (which incidentally is the ultimate Ugh field for most people) or studying? I understand this is a personal choice of sorts, just curious about your thinking.
Good questions.
I’ll add that the primary reason I happen to have a supply of nicotine patches and lozenges is the other benefits it can supply if used carefully (and, in particular, with cycling). The biggest downsides of nicotine are associated with the usual delivery mechanism. There are other downsides too, mostly those that come with stimulants (vasoconstriction, increased blood pressure). These are things that can be managed and balanced. There are positive studies on the effectiveness of nicotine in treating conditions like ADHD, with drastic improvements in focus and motivation. I do not swear by it, but it is something I keep in my arsenal.
Bills are not a problem for me. I get them, I click a few buttons on my computer and they are gone. I don’t think nicotine (via this delivery mechanism) would be especially effective for bill paying unless you think of a way to form the process into a habit or ritual that nicotine can enhance the learning of. I suppose you could use the direct reinforcement mechanism if you delivered nicotine via gum or via an inhalant form.
Absolutely. And I do use nicotine for studying at times (usually patches that I have cut into the desired dose). Partly for learning mental habits, and partly for enhanced focus and motivation without the agitation that comes with methamphetamine (at least, for me). Again, I don’t swear by it but it works.
Is the dosing based on experience or did you invest in some basic pharmacology studies? Also, what are your criteria for patch vs. lozenge?
Basic pharmacology studies, case studies from like-minded individuals, and personal experience. (If the dose is too great, cut it into smaller pieces next time.)
Do you want to do something specific that lasts 3-5 hours or do you want stable benefits to motivation and focus over a day? You can approximately consider total dose to be a limited resource, based on cycling to prevent dependency and reduced effects.
I’m curious as to whether you’ve ever been an addicted cigarette smoker before? For those of us who have, I suspect the risks of a total relapse into smoking (as opposed to other delivery methods) would be too great. I can imagine it could be effective though.
I have never smoked a cigarette. Nor have I ever had a remote tendency towards addiction to any substance. That is even one of the reasons I gave when describing why this is an effective technique for me personally. I am more at risk of becoming addicted to discussing substances on the internet than the substances themselves.
Absolutely. The habit of smoking is ingrained for life, that particular power of nicotine over memory at work. And I’m not talking about the habit of getting yourself a nicotine fix. It is a habit of physically getting a cigarette, lighting it, putting it in your mouth and sucking on it. Adding a nicotine trigger back into that would be absolutely insane.
Since positive reinforcement is generally more effective than punishment, we could apply this idea across society.
Why pay police officers to sit on the side of the road all day, pulling over speeders and writing citations? How about automated cameras that can randomly reward drivers with $10-$20 for driving the speed limit? Shouldn’t we expect more safe drivers and less overall expense?
Even if it were proven effective, the reason it won’t take off with traffic or medication is that most people want to see wrong-doers punished more than they want to see less wrong-doing. Don’t take your meds? You deserve your illness. Speeding (even though I do it too)? You deserve your $250 fine. You did the right thing? Woopty-do. What, you want a cookie?
Sure this applies to punishments in society, but for self-motivation it is the opposite. I want my self-motivation to be fun not punitive.
In the OP’s examples, the reward (watching movies) was designed to reinforce behaviour which the protagonist recognised as rational (taking meds). People who speed might believe (perhaps even correctly) that speeding is rational, so the reward (a small lottery ticket) could be trying to fight against their rationally desired action. Do you think it would still work?
I seem to recall reading about this actually being tried, with the crime in question being not cleaning up after one’s dog.
There’s nothing stopping us from combining positive reinforcement with punishment. I think it would be a pretty easy sell to propose adding the random, small no-speeding rewards without removing the existing laws and fines.
Nothing except for large segments of the population that will revolt at the very idea. Politicians win by promising to be “tough on crime” regardless of the real result. People like to think most others are much, much worse humans; and they like to see them punished for it to reinforce their belief. Paying a drug addict to get clean won’t be popular, but paying people for driving “normally” won’t fare much better.
I agree, though, we would ideally keep some/most existing laws and fines while cutting back on the number of officer-hours to make the immediate costs balance.
Paying a drug addict to get clean isn’t rewarding good behavior so much as rewarding the cessation of bad behavior. This has clear problems. For one thing, it isn’t random like the “follow the speed limit for a chance at a small reward” scheme.
A true equivalent would be rewarding random people for not being on drugs, including the population of former addicts who have since gone clean. Being on drugs would be a guarantee of not getting this reward.
I’ve heard that in some places they have deliberately unrealistically low speed limits because fines are a sizeable fraction of the municipality’s revenues.
I find that a great way to self-motivate is to tie an action to intermittent, stimulating rewards. That’s how mice get addicted to pushing levers, right? That’s how people get addicted to WoW and similar games, right? But you can harness the power for good instead of evil.
Exercise. I keep an exercise log in a public forum. Every now and then, someone comes by with a comment like “Great workout!” The prospect of getting those intermittent, stimulating responses—which I only get if I post regularly—is great motivation to keep exercising.
Studying. I often find that my problem, when reading a technical book, is that I finish a chapter and don’t review and summarize it. I’m in too much of a hurry. Solution: now I post summaries on a blog. I get intermittent rewards in the form of blog hits and comments.
The general theme here is that publicizing your goals is an easy, effective way to get intermittent rewards.
It occurs to me that it might be very useful to have some sort of ‘hub’ for such blogs—something similar to the autism hub (which I don’t actually recommend; all the bloggers I’ve liked have left the hub in the last 6 months or so).
It seems to me that that would have the potential to increase the chance of getting positive feedback, and also the chance of getting feedback if you start to slip—if the blogs are sorted by the date of their most recent post, it’s fairly easy for someone to scroll down to the last few entries and post comments along the lines of “hey, are you still doing this?”. (Perhaps each participant could commit to making at least one comment of either type per month, or something.)
I took some time to play with some tools, and managed to turn out a simple version of this. (Thanks in part to evelynm, who gave me the idea to use Google Docs when I was stuck for an output method.)
The web page is here, currently using the most recent posts of the top 5 LW contributors as dummy data. It lists the most recent item from each RSS feed on its list, in chronological order.
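The core logic of such a page might look something like this sketch (fetching and RSS parsing are omitted, and the data layout is invented; only the "one latest post per feed, stale blogs easy to spot" behaviour is shown):

```python
from datetime import datetime

def latest_per_feed(feeds):
    """feeds: {blog_name: [(published_datetime, title), ...]}.

    Returns one (published, blog, title) row per feed -- its most recent
    post -- sorted newest first, so blogs that have gone quiet sink to
    the bottom where a 'hey, are you still doing this?' comment is easy
    to aim."""
    latest = [(max(posts), name) for name, posts in feeds.items() if posts]
    rows = [(when, name, title) for ((when, title), name) in latest]
    return sorted(rows, reverse=True)
```

The `max(posts)` call relies on tuples comparing by their first element, the publication date, so it picks each feed's newest entry.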
If you have a blog that you’d like to have added, email me.
Edit: Found a bug, not sure if it’s solvable. I’ll keep playing with this, but might give up without feedback.
URL?
http://numberblog.wordpress.com
I’m glad it works for you, but it 1) does not work if you don’t start, and 2) does not work if you care about quality (unless you’re a genius).
On quality: it takes me about 40 minutes to write a summary of something that I’ve already read and understood. That’s about the level of detail and quality I need for my own learning purposes. It might even be informative to other people. Summarizing is a modest goal, I think, and it shouldn’t take anything like “genius.”
Quality matters if you have a community that’s interested in your work; you’ll get more “nice job” comments if it IS a nice job.
Sometimes an Ugh field exists for good reasons. Sometimes a med has bad side effects which more than counterbalance its good effects. Sometimes a diet is ill-conceived.
Do methods which are just aimed at getting compliance need to be matched with methods of checking on whether the reinforced behavior is actually a good idea?
The overjustification effect suggests caution may be warranted when giving rewards for desired behaviour.
This reminds me of a thought I had before:
University costs thousands. Imagine that you received, along with your exam marks, $1 per % for your average grade.
It’s meaningless, really, compared to the value of the degree, but… it feels like you’re getting something real for that work. You’re directly receiving money, rather than earning the chance to earn it in the future.
Hell, you could even use this as a replacement for merit-based partial subsidies (though not for fully free education). Everybody pays $1000 at the beginning of the academic year, then over time they ‘earn’ back a percentage proportional to their grades, e.g. 60% or so for a straight-A student.
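The arithmetic of that scheme is simple enough to spell out (the deposit, the payout cap, and the linear payout curve are all made-up numbers for illustration):

```python
def grade_rebate(average_pct, deposit=1000, top_rebate=0.60):
    """Amount of the deposit earned back for a yearly average in [0, 100].

    Linear payout capped at `top_rebate` of the deposit: a perfect
    average returns 60% of the $1000, matching the 'straight-A student
    earns back 60% or so' figure above; averages are clamped to the
    0-100 range."""
    fraction = max(0.0, min(float(average_pct), 100.0)) / 100.0
    return round(deposit * top_rebate * fraction, 2)
```

So a 90% average would earn back $540, while a 50% average would earn back $300, with the remainder functioning as the actual tuition payment.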
The one truly massive drawback to this is it would strongly encourage students of little means to pursue courses of study populated by easy graders. It’s my experience that more practical courses of study, like Accounting, Engineering, and hard sciences tend to be much harder to succeed in than, say, Art History or English Literature. So, while a good idea, this may nudge students towards academic tracks with lower expected earnings attached to them.
Reward grades more and students will respond. The fact that we are so worried about small amounts of money causing large distortions in behavior is a sign of how powerful we expect this incentive to be. If maximizing your grades is not a good way to learn then that is a sign we need to be evaluating students on a different metric, presumably one that rewards difficulty.
Erk. I don’t disbelieve your claims but the very thought seems so bizarre to me. In the hard sciences you get to go do exams that are worth about 90% of the mark, mostly objective and based on some rules from nature that are fairly easy to grasp. The alternative is trying to learn an endless stream of teacher’s passwords!
I think it’s just a human trait: we find it much easier to punish wrongness than not-very-rightness. On a math test, almost every answer you could give to a question is wrong. On an English Literature test, virtually any interpretation of the text is a right answer, provided you can back it up in some way, so even if your answer is flawed, it’s easy to avoid saying anything obviously wrong. Furthermore, I think the culture of grading in the two differs greatly—the type of personality who is drawn to be a professor of creative writing is rather different than that of one who becomes a professor of electrical engineering, and I suspect the first is far less inclined to fail or treat people harshly.
Now that is an interesting consideration. You could well be right in general. But my anticipation of personal experience is of getting treated more harshly by a professor of creative writing than by one of engineering. This is because I can far more easily elicit the desired behavior from the engineering professor; that is, the desired behavior of giving me top marks and not interfering too much with my education. If all goes well I may even be able to avoid him learning my name.
With a professor in something less objective I expect harsh marking for not optimally conforming to the (possibly flawed) positions that I was supposed to have guessed in the assessments. I am also more at risk of harsh treatment for political reasons. Given that their way of thinking is less like mine I am less able to predict what sort of things will piss them off and so provoke grudges more easily. I may say something that seems obvious to me but incidentally undermines something they care about. Once that happens I am not all that talented at making bitchy people not be hostile. My instinct is to avoid situations where I am potentially vulnerable to capricious whims.
(Yes, my personal anticipation is different than that of most people!)
That has less to do with professors’ personalities than with the nature of their teaching.
An engineering professor may very well be a fanatical Nazi who would gladly fail any students he discovered harbouring pro-democracy views, but he’s not going to discover them unless you wear a political t-shirt while handing over your homework assignments. If he taught History of Contemporary Literature, however, the issue would be all but guaranteed to emerge.
Not that conflicts over personal views are limited to the humanities, of course. Imagine if Andrew Tanenbaum had been teaching at Helsinki in the early 90s...
That reminds me of the biology teacher who, when asked to write letters of recommendation, demanded that his students swear allegiance to evolution. A student sued in 2003. Some time between February and April, he added a little disclaimer. That form remains today. Of course, this was only for letters, not grades, and it was all put forward in writing ahead of time.
The nature of their teaching matters but I place specific emphasis on the professor’s personalities:
The effect of personality is real. And I am not merely talking hypothetically here. It can bite me in the arse if I’m not careful. It is all too easy to overestimate how similar people are to ourselves and doing so comes at great price.
I may be biased on this issue, based on personal experience. But it seems to me that someone teaching English Literature is unlikely to be inclined to fail you, or even give you a low mark, if you’re at least making a concerted effort. The idea that professors fail you for ideological differences is conceivable, and probably more prominent at lower-quality institutions. I’ve disagreed very strongly with professors’ views before without it harming my grade, so long as I was able to state my grounds articulately. If you do have a professor with a clear bent, it’s usually pretty easy to figure out what they want to hear and thus easy to get a good grade. I just think that because it’s so much easier to be clearly wrong in harder subject matter, it’s more tempting for professors to set a higher bar for a good grade. People who teach softer subjects are simply less concerned about right and wrong, and thus less inclined to punish people who make any kind of effort.
I can envision someone from LW writing a humanities paper that unintentionally raises red flags, leading to either a poor grade or an ugly disciplinary action, while it’s hard to think of how that would happen in a quantitative course (except for in personal interactions with other students, but that’s a danger either way.)
That sort of disaster is quite unlikely, but not entirely negligible. However, I tend to think the benefits of a humanities course in a topic of interest can vastly outweigh this sort of concern.
I did rather poorly in an ethics class because I realized moral relativism was rather obviously right and any form of realism logically indefensible (without an axiom—if you admit you have an unprovable assumption underlying your moral framework, then it’s morally real-ish, but it isn’t totally objectively correct, because of that assumption). There is also a real risk of relying on concepts that the reader is not familiar with, but that’s usually pretty easy to screen, especially if you have a friend to read a draft.
The thing is, when I say, “rather poorly,” I mean a B (possibly +), and this was at a top public university. In a math class (having not taken math in five years), I realized shortly before an exam that I had been taking derivatives when I should have been integrating. This would have had a rather more pronounced effect on my grade had I not caught it.
I suspect you underestimate how well developed these guessing skills become in high school. From my experience, students become very good at turning a sentence of content into a paragraph of gibberish.
I’m surprised by your reaction. I didn’t expect you to be ignorant of the grade percentages in different fields and of what students find difficult. The situation may be different in Australia, but your reaction seems to reflect total ignorance of the median.
I’m told that there’s an easy algorithm, maybe not for majoring in English, but for the small amount that’s required of all American students: ask the other students. (This may not be fast enough for discussion, but it works for essays.)
Don’t confuse ignorance and self expression.
It is indeed a problem, but it will be present any time you want to offer any type of incentive towards good academic performance. The proper solution is to tighten grading standards in humanities, not to take it as a given and drop the idea of merit-based incentives altogether. Or you could establish different subsidy rates for each department, but 1) it’s an inelegant hack and 2) it’s a political minefield.
(Somewhat related: a little over a year ago I was looking at applying to study at the Université du Québec. It turned out that, for the purpose of converting the grades one gets during a bachelor’s degree from an Italian university, they actually had separate tables depending on your course of studies. Had I been a philosophy or literature student, only my straight 30/30s would have been turned into As; as a math student, 27/30 and above would have sufficed.)
Declare that across the board all subjects must rate student performance on a bell curve.
The downside would be that some subjects have students who are just all-round better. This is solved in the Victorian (Australian state) high school grading system, which scales subjects based on statistical inferences that can be drawn systematically from relative student performance across their various subjects (if all the students who get medium grades in Specialist Mathematics get top grades in English, Specialist Mathematics is probably hard...).
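A toy sketch of that kind of cross-subject inference (the real Victorian scaling algorithm is considerably more sophisticated; the function name and numbers here are invented purely for illustration):

```python
from statistics import mean

def subject_adjustments(results):
    """results: {student: {subject: raw_score}}.

    A subject's adjustment is the average gap between its students'
    mean score in their *other* subjects and their score in this one:
    a subject whose students do much better everywhere else is rated
    hard and gets scaled up, and vice versa."""
    subjects = {s for grades in results.values() for s in grades}
    adj = {}
    for subject in subjects:
        gaps = []
        for grades in results.values():
            others = [v for s, v in grades.items() if s != subject]
            if subject in grades and others:
                gaps.append(mean(others) - grades[subject])
        adj[subject] = mean(gaps) if gaps else 0.0
    return adj
```

A subject taken only by strong students thus gets a positive adjustment even though its raw marks look unremarkable, which is exactly the "medium in Specialist Maths, top in English" pattern described above.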
It’s not a bad system, but it runs into two problems inversely proportional to the size of the classroom. First, it’s very easy for students to start exercising social pressure against excellent performance, which “ruins” everybody else’s grades; I’ve witnessed this happening first-hand, when one teacher at my high school decided to try this method. Secondly, statistical anomalies will happen where almost all students are diligent (or negligent), and it would be scandalous to have to fail a previously-determined percentage of them.
If I had to choose a strategy in the few minutes I’m dedicating to writing this post, I’d go for setting objectively measurable standards, which by necessity will come down to memorising a ton of works and notions. It’s by no means an efficient educational supplement (and will be due for a reform in a couple of decades or so), but my anecdotal experience with humanities students suggests that what they are most in need of is some push towards competitiveness. Not all top-quality fiction writers, literary critics, and assorted essayists need be obsessive book-devourers, but a large majority of them will be. Plus, if they find this hypothetical academic neo-sciolism stifling, autodidacticism is a much more viable career option for them than for technical, business, and to some degree science students.
This is definitely something that you should not do within one classroom. Within one course is the absolute minimum I would want to accept.
I like this idea too, but I suspect it would be quickly hijacked—it’s easier to bug your instructor until she lets you have a better grade than to study. Ask most “tough graders” how their student reviews compare to “easy graders.”
I always hated those assessments that weren’t marked anonymously!
That is a really, really good idea. And I don’t think I’m just saying that because I’m biased.
Why would you?
Systems which provide financial privileges based on merit can be expected to appeal to those who consider themselves to have an abundance of merit. (And, naturally, any who dare speak out against such systems can expected to be considered to be doing so because they lack such confidence.)
Someone I know actually started a business around this idea:
http://www.ultrinsic.com/
In the intro to Dan Ariely’s new book he describes dealing with his own medical-compliance problem: he had to take some very rough hepatitis meds that made him nauseous. He essentially bribed himself with movies, which he liked a lot, specifically arranging the details to create positive associations (he would start the movie right away after giving himself the shot, before the nausea set in). He was apparently the only one who finished the course (the treatment was experimental), so +1 for behavioral economists.
One wonders what effect his desire for his own theory to work might have played in this… Still, a good idea.
In my experience, the rational actor model is generally more like a “model” or an “approximation” or sometimes an “emergent behavior” than an “assumption,” and people who want us to criticize it as an “assumption” or “dogma” or “faith” or some such thing are seldom being objective.
(If you think this criticism is merely uninformed or based on a deep misunderstanding, then perhaps it would be rational to turn the phrase “the rationality assumption of neoclassical economics” in your opening paragraph into a hyperlink to some neoclassical authority you are engaging.)
There are various individual cases where it is quite justifiable to beat up neoclassical economists for trying to push rationality too far, either against the clear evidence in simple situations or beyond the messy evidence in complicated situations. As an example of the latter, my casual impression is that the running argument at Falkenblog against the Capital Asset Pricing Model could well be a valid and strong empirical critique. But there are also various individual cases where neoclassical economists can justifiably fire back with “[obvious rational reactions to] incentives matter [and are too often underappreciated]!” E.g., simple clean natural experiments like surprisingly large perverse consequences of a few thousand dollar tax incentive for babies born after a sharp cutoff date, or strong suggestive evidence in large messy cases like responses to price controls, high marginal tax rates, or various EU-style labor market policies.
And it seems to me that w.r.t. our rationality when we hold a discussion here about rationality in the real psychohistorical world, the elephant in the room is how commonly people’s lively intellectual interest in pursuing the perverse consequences of some shiny new behavioral phenomenon in the real world turns out to be in fact an enthusiasm for privileging their preference for governmental institutions by judging market institutions (and evidence for them, and theoretical arguments for them, and observed utility of outcome from them) by a qualitatively harsher standard. The real world is dominated by mixed economies, so the implications of individual irrationality for existing governmental institutions (like democracy and hierarchical technocratic regulatory agencies) have at least as much practical importance as the implications for some idealized model of pure free markets. And neoclassical economists have some fairly good reasons (both theoretical and empirical) to expect key actors in market institutions to display more of some relevant kinds of rationality than (e.g.) random undergrads display in psych experiments, while AFAICS political scientists seldom have comparably good reasons to expect it in institutions in general.
I commend this post for picking a telling example of behavioral anomalies which show a strong impact in the real world (as opposed to, e.g., in bored undergraduates working off a psych class requirement by being lab rats). But I see nothing essentially market-specific about this anomaly. Thus, it is obvious why it is interesting regarding self-help w.r.t. ugh fields, and it is not obvious why when considering its application to the broader world, we should focus on its importance for economics-writ-very-small as opposed to its importance for existing mixed economies. And as above, unless you link to someone significant who actually makes your “rationality assumption” so broadly that this experiment would falsify it, I don’t think you’ve actually engaged your enemy, merely a weak caricature.
Just checked my well-used copy of Mankiw. The rational-actor model applies more generally than to successful investors.
There’s a difference between the psychology of being in a lottery by taking your medication and receiving cash every time you take your medicine.
There is also evidence that bribing people reduces their inherent interest in an activity. There was a study that showed that kids paid to do homework did it enthusiastically for a while, but then quickly lost interest over time as they became habituated to the possibility of reward and began to lose inherent interest in the material.
An alternative to making things fun is to make things unconscious and/or automatic. No healthy individual complains about insulin production because their pancreas does it for them unconsciously, but diabetic patients must actively intervene with unpleasant, routine injections. One option would be to make the injections less unpleasant (make the process fun and/or less painful), but a better option would be to bring them in line with non-diabetic people and make the process unconscious and automatic again.
Brilliant formulation of the problem & solution.
(Very successful) animal trainers using reinforcement techniques make a distinction between bribe and reinforcement, which was never completely clear to me, but appears to be addressing the same problem. But one thing they do, “shaping” the expected behavior, always changing it a little bit to get closer to the “target”, might be serving the same purpose as the gambling mechanism: preventing anchoring on obtaining a reward in a specific manner.
I’m curious how you all would feel about introducing gambling, in some sense, with children. Like a large fishbowl filled with slips of paper. Whenever they do something good, you let them go get a slip and receive whatever reward is written on it. Obviously you’d have to deal with cheating.
I’m wary, though, of punishing or rewarding doing chores as opposed to good effort. I feel like chores should just be expected of the child, instead of seeming like a job. But I suppose the randomness is supposed to help with that.
Odd as it may sound, it would have to be “structured randomness” so to speak. Picking a slip out of a bowl would probably work—getting a reward only when the parent is in the mood to give one would likely not. The latter is just as random from the child’s perspective, but inconsistent parenting (or animal training, or employee rewarding schemes) is known to be bad at shaping behaviour in the desired fashion.
That’s true. Arbitrary responses can lead to learned helplessness, although that’s for negative responses. I can imagine there is are more relevant psychological concepts.
I felt a bit uneasy with that as well. Does this sort of behaviour encourage gambling-like activities? Is an adult raised with the gambling reward more likely to walk into a casino and risk his family’s nest egg? Though, in the best case, the child grows up learning how to identify positive habits and trick himself into doing them (e.g. with the fishbowl of rewards).
Also, your comment on how these jobs should be “just expected” reminds me of sports contracts. A stand-up comedian (I can’t remember who) said that contracts had cash bonuses if the player did not commit any crimes (e.g. illegal drug use). I guess sometimes federal prosecution is not a strong enough motivator.
Would that work for non-life-saving medicine? I’m thinking of dietary programs, specifically. With everybody and their brother offering “new, revolutionary fat loss techniques” I’d be surprised if nobody ever tried a micropayment system.
(Something similar I do is offering myself alternative rewards for my good behaviour—when I really crave some ice-cream, I decide I will treat myself to a new toy/book/clothing item/etc. if I manage to abstain. Doesn’t really work if it’s raining, unfortunately.)
I’ve heard of sites where you bet a good chunk of money (and inform people) to see if you can complete/maintain your diet. Micropayment back would probably work. With intermittent rewards even?
I think I know what my next startup idea is…
A method to apply the “lottery technique” to overcome an Ugh field might be to report any relevant subgoal to a partner, who decides on a set of rewards and a mode of giving them, where each subgoal is a “lottery ticket”. This has several advantages: the precise chances of winning are hidden, which may motivate you to figure out “the system”; there can be hidden prizes; and there is social accountability.
You may do likewise for the partner. If this proves successful, it could be facilitated by a website where people track each other’s goals.
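As a toy illustration of the ticket draw, here is a minimal sketch. The prize names and odds are entirely made up; the point is just that the person earning tickets never sees the probability table, only the occasional win.

```python
import random

# Hypothetical prize table chosen by the partner; the odds stay
# hidden from the person earning tickets.
PRIZES = [
    ("movie night", 0.10),
    ("$20 gift card", 0.05),
    ("mystery prize", 0.02),
]

def draw_ticket(rng=random):
    """One 'lottery ticket', earned by reporting a completed subgoal.
    Returns a prize name, or None for a losing ticket."""
    roll = rng.random()
    cumulative = 0.0
    for prize, probability in PRIZES:
        cumulative += probability
        if roll < cumulative:
            return prize
    return None

# Each completed subgoal is one draw; most tickets lose, which is
# exactly what makes the occasional win exciting.
result = draw_ticket()
```

A website mediating this would mostly just need to record subgoal reports and run a draw like this one, keeping `PRIZES` visible only to the partner.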
Since this is framed as a hypothetical, it’s not clear exactly what your thoughts are on the subject, but I always encourage people to ask whether a model aids our thinking or hinders it, rather than whether it is correct or incorrect.
The answer, in this case, is clearly both. It may be better to have an overapplied model than no model at all, but if you’ve got a model you’re clearly overapplying, improvement is a very low-hanging fruit.
You have it all wrong. Your “ugh” field should go into their utility function! Whether or not they invest the resources to overcome that “ugh” field and save their life is endogenous to their situation!
You are making the case for rationality, it seems to me. Your suggestion may be that people are emotional, but not that they are irrational! Indeed, this is what the GMU crowd calls “rationally irrational.” Which makes perfect sense—think about the perfectly rational decision to get drunk (and therefore be irrational). It has costs and benefits that you evaluate and decide that going with your emotions is preferable.
I see this comment as not understanding the definition of “rational” in economics, which would be simply maximizing utility subject to costs such as incomplete information (and endogenous amounts of information), emotional constraints and costs, etc.
I appreciate the Devil’s Advocacy. The simple issue, though, is that if you use a definition of “rational” that encompasses this behaviour, you’ve watered the word down to oblivion. If the behaviour I described is rational, then, “People who act always act rationally,” is essentially indistinguishable from, “People who act always act.” It’s generally best to avoid having a core concept with a definition so vacuous it can be neatly excised by Occam’s Razor.
You are just wrong. These are people whose utility function does not place a higher utility on “dying but not having to take my meds”.
If your preferred theory takes a human and forces the self-contradictions into a simple rational agent with a coherent utility function you must resolve the contradictions the way the agent would prefer them to be resolved if they were capable of resolving them intelligently. If your preferred theory does not do this then it is a crap theory. A map that does not describe the territory. A map that is better used as toilet paper.
“These are people whose utility function does not place a higher utility on ‘dying but not having to take my meds’.”
Why are you making claims about their utility functions that the data does not back? Either people prefer less to more, knowingly, or they are making rational decisions about ignorance, and not violating their “ugh” field, which is costly for them.
How is that any different than a smoker being uncomfortable quitting smoking? (Here I recognize that smoking is obviously a rational behavior for people who choose to smoke).
I get it. You define humans as rational agents with utility functions of whatever it is that they happen to do because it was convenient for the purposes of a model they taught you in Economics 101. You are still just wrong.
Your posts under this name have the potential for some hilarious and educational trolling, though you have some stiff competition if you want to be the best.
You should probably refine your approach a little bit. Links to the literature would give you more points for style. Also, the parenthetical aside was a bit much—it made the trolling too obvious.
It’s pretty similar, actually: just as a smoker may prefer to quit but find doing so psychologically difficult, someone with a terminal illness may prefer to take their meds but also find it difficult. It’s not clear how to assign utility in such a case, as the agent involved isn’t a unified whole. There’s the sub-agent who is addicted and the sub-agent who wants to quit.
Are you sure it was there to begin with? This lottery thing sounds like it would work as a prevention measure against ugh fields, but maybe not a cure.
If you were going to do something like this, the best thing might be to purposely stop taking your meds for a while before starting again, so the ugh field could languish and weaken in intensity for a while.
If you look at the article, the people interviewed used to have trouble taking their meds. Some of the drug/psych patients failed rehab multiple times before getting through due to tiny financial incentives. That sounds like a cure to me.
And that solution does not make sense to me in any event. Getting yourself to do something you’ve been avoiding is, to my knowledge, never helped by deliberately avoiding it for an extended period of time.
I never even thought to ask why rewarding works better than punishing, or why intermittent rewards work better than predictable ones.
It didn’t even occur to me that there would such reasonable answers, based on already-established principles.
I think I had a blind spot, something like: Because psychology is the product of blind, stupid evolution, don’t expect “meaningful” answers to why the brain reacts certain ways, the answer will always boil down to “local pressures in the evolutionary environment.”
But you can reduce myriad findings into more general, foundational principles, for the sake of efficiency and elegance, in psychology as everywhere else. Good lesson for me.
I don’t know if your particular answers are right or not, but kudos for generating the questions.
So the technique described here requires thinking that an X chance of Y is better or worse than a certainty of XY?
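In expected-value terms the two options are interchangeable; a quick simulation with made-up numbers confirms it, which is the point of the question: whatever work the lottery does, it isn't doing it through the payoff.

```python
import random

p, prize = 0.10, 100.0   # "an X chance of Y" (hypothetical numbers)
certain = p * prize      # "a certainty of XY"

# Simulate the lottery many times and compare its average payout
# to the guaranteed amount.
rng = random.Random(0)
draws = [prize if rng.random() < p else 0.0 for _ in range(100_000)]
empirical_ev = sum(draws) / len(draws)

# The two are (nearly) identical in expectation; any motivational
# difference comes from the intermittent-reward framing, not the math.
print(certain, round(empirical_ev, 2))
```

So yes: for the technique to work, people must treat an X chance of Y differently from a certainty of XY, even though the expected payoffs match.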
We could do with some alternative approaches to ugh field defeat, so here’s one.
Yesterday, when I was trying to decide to go to bed (just “to”, not “whether to”), I managed to create an effective “counter” ugh field to staying up, by considering that at the same time I’d be worrying about the results, failing at something so very basic, and turning back to the hellish failure mode of losing self-control in general (a phase I consider my worst experience of all (admittedly short) time).
I don’t know whether the last point is something everyone goes through at some time (like a second puberty) but I’m sure you can come up with your own points to mount counter-ugh-fields on.
“This also explains why rewarding success may be more useful than punishing it in the long run: if a kid does his homework because otherwise he doesn’t get dessert, it’s labor. If he gets some reward for getting it done, it becomes a positive.”
Shouldn’t we expect loss aversion to partially or completely counteract this, and doesn’t receiving a reward for something qualify as labor too?
(Side note: This reads as ‘rewarding success is better than punishing success.’)
Good read.