Open thread, Dec. 14 - Dec. 20, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Guess the correlation
This is almost as much fun as the calibration game.
Who created the game? How can they be contacted?
This guy. In the Main Menu go to About to see his email.
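For the curious, a game like this presumably just draws points from a distribution with a known correlation and asks you to guess it. A minimal sketch of the standard construction (my guess at the mechanism, not the game’s actual code):

```python
import numpy as np

def correlated_points(r, n=100, seed=None):
    """Generate n (x, y) points whose population correlation is r."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    # Mixing in independent noise with weight sqrt(1 - r^2) makes corr(x, y) = r.
    y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)
    return x, y

x, y = correlated_points(0.7, seed=1)
print(f"sample r = {np.corrcoef(x, y)[0, 1]:.2f}")  # close to 0.7
```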
I was in the programming channel of the LessWrong Slack this morning (it’s a group-chat web thing; all are welcome to ask for an invite if you’d like to chat with rationalists in a place that is not the archaic, transient mess that is IRC, though irc.freenode.net/#lesswrong is not so terrible a place to hang out either, if you’re into that), and a member expressed difficulty maintaining their interest in programming as a means to the end of earning to give. I’ve heard it said more than once that you can’t teach passion, but I’d always taken that as the empty sputtering of those who simply do not know what passion is or what inspires it. So I decided, since we two have overlapping aesthetics and aspirations, that I would try to articulate my own passion for programming. Maybe it would transfer.
Here’s what I wrote, more or less
It was well received. Maybe it was enough? I don’t know. But I think more should be written on the relationship between the act of programming and the analysis of concepts. Every time I meet a programmer who clearly has enough talent to... let’s say... put together sanely architected patches to the source code of our culture, but who instead recedes into their work and never devotes any time to analytic philosophy, it breaks my heart a little.
Could you elaborate on this? You sound very certain for someone whom I wouldn’t expect to have much background on the subject.
=/ The conveyance of passion is not an esoteric subject. Anyone who’s spent a significant portion of their life as a student will have seen it happen, on and off. We might be talking about different things, of course. I’m only talking about passion the spark, which is liable to fizzle out if it’s not immediately and actively fed, whereas I’d expect more extensive investigations into passion to focus on passion the blaze, a phenomenon with greater measurable impact: a passion well enough established to spread itself over new resources and keep feeding itself. (Although with programming there’s less of a difference between the two, since there’s an abundance of resources.)
Aside from that, my prior for the probability of a complex of human thought being impossible to transmit from one mind to another is just extremely low. IME when a person who is not a poet, or a writer, or a rationalist or an artist says that a thought can’t be communicated or explained, that’s coming from a place of ignorance. People who are not rationalists rarely ever properly explain anything, nor do they usually require proper explanations. People who are not poets, who do not read poetry, have no sense of the limits of what can be expressed. When one of these people says that something can’t be expressed, they are bullshitting. They do not know. They could not possibly know.
Why haven’t the good people at GiveWell written more about anti-aging research?
According to GiveWell, the AMF can save a life for $3.4e3. Let’s say it’s a young life with 5e1 years to live. A year is 3.1e7 seconds, so saving a life gives humanity 1.5e9 seconds, or about 5e5 sec/$.
Suppose you could invest $1e6 in medical research to buy a 50-second increase in global life expectancy. Approximating global population as 1e10, this buys humanity 5e11 seconds, or about the same value of 5e5 sec/$.
Buying a 50-second increase in life expectancy for a megabuck seems very doable. In practice, any particular medical innovation wouldn’t give 50 seconds to everyone, but instead would give a larger chunk of time (say, a week) to a smaller number of people suffering from a specific condition. But the math could work out the same.
Of course, it could turn out that the cost of extending humanity’s aggregate lifespan with medical research is much more than $5e5/sec. But it could also turn out to be much cheaper than that.
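The estimate above, spelled out as a quick script (all figures are the rough assumptions from this comment, not GiveWell’s official numbers):

```python
SECONDS_PER_YEAR = 3.1e7

# AMF: ~$3.4e3 per life saved, assuming ~5e1 years of remaining life.
amf_cost = 3.4e3
amf_seconds = 5e1 * SECONDS_PER_YEAR
print(f"AMF: {amf_seconds / amf_cost:.1e} sec/$")  # ~4.6e5 sec/$

# Research: $1e6 buys a 50-second life-expectancy gain for ~1e10 people.
research_cost = 1e6
research_seconds = 50 * 1e10
print(f"Research: {research_seconds / research_cost:.1e} sec/$")  # 5.0e5 sec/$
```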
ETA: GiveWell has in fact done a lot of research on this theme, thanks to ChristianKl for pointing this out below.
For AMF it’s a lot easier to estimate the effect than it is for anti-aging research. GiveWell purposefully started with a focus on interventions for which they can study the effect.
GiveWell writes:
You can find a bit of data gathering at http://www.givewell.org/node/1339
More recently, GiveWell Labs, which has since been renamed the Open Philanthropy Project, has been putting more emphasis in that direction.
Articles that have been written include:
http://blog.givewell.org/2013/12/26/scientific-research-funding/
http://blog.givewell.org/2014/01/07/exploring-life-sciences-funding/
http://blog.givewell.org/2014/01/15/returns-to-life-sciences-funding/
GiveWell Labs managed to get Steve Goodman and John Ioannidis matched with the Laura and John Arnold Foundation to the tune of $6 million.
Meta-research doesn’t sound as sexy as anti-aging research, but if we want good anti-aging research we need a good basis in biology as a whole.
“Anti-aging research” is a catchphrase, and it makes sense that it’s decently funded, but alone it won’t work. Biology as a whole needs to progress, and chasing after shiny anti-aging targets might not always be the most effective use of money. Do you have a reason to think it makes more sense to speak about anti-aging research than about life-science research?
Please do a Fermi estimation of how you arrive at that conclusion.
Ooh, I know! So, Holden is aware of SENS. However, by default, GiveWell doesn’t publish any info on charities it looks at and decides not to recommend, if they don’t ask GiveWell to. This is to encourage other charities to go through GiveWell’s recommendation process—it keeps GiveWell from lowering a charity’s reputation by evaluating them.
Anyways, GiveWell did some sort of surface-level look at SENS a while back, and didn’t recommend them. I think the only way to get more info about this would be to email Aubrey about his interaction with GiveWell.
When doing the calculations be sure to QA your LYs. Spending an extra week lying doped up and in pain in a hospital bed may not be worth all that much. Also, with medical research you often wind up with a patented drug which then costs $1e5 per patient treated, at least for the first decade or two of its use, and at least as used in the USA and other non-single-payer countries. Or it requires $1e5 of medical-professional intervention per patient to implement. My priors are that the low-hanging fruit is not in turning 90-year-olds into 91-year-olds, and won’t be for many decades.
I think their argument was that they don’t support Pascal’s Mugging and they don’t see any proof of medical research within reach that could end aging with a significant probability.
EDIT: …and I should have read the comment in more detail. You are talking about stuff such as donating to curing diseases. I think they just didn’t assign analysts to this yet. I guess it’s hard to measure scientific progress.
An interesting blog post which argues that in medical studies the great majority of improvement in non-intervention arms that is attributed to the placebo effect actually comes from regression to the mean.
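The mechanism is easy to reproduce in simulation: if patients enrol when their symptoms are unusually bad, the untreated arm “improves” even when the model contains no placebo effect at all. A minimal sketch (thresholds and scales made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
baseline = rng.normal(50, 10, n)                 # each person's long-run symptom level
at_enrolment = baseline + rng.normal(0, 10, n)   # noisy measurement during a flare-up
at_followup = baseline + rng.normal(0, 10, n)    # independent later measurement

enrolled = at_enrolment > 70                     # only people feeling bad join the trial
improvement = at_enrolment[enrolled] - at_followup[enrolled]
print(f"mean 'improvement' with zero treatment effect: {improvement.mean():.1f}")
```

Selecting on a high enrolment score guarantees the measurement noise was biased upward at enrolment, so the follow-up regresses toward each person’s own mean.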
To really test this, we should see if placebo is much smaller in studies where this can’t happen (certain chronic diseases for example).
The issue is distinguishing placebo (defined as a psychosomatic effect) from “natural healing”, and I suspect it will not be easy—in diseases where psychosomatic placebo “can’t happen”, can natural healing happen?
Pretty sure Ilya suggested the reverse—diseases where natural healing doesn’t happen, but the placebo effect is possible.
The question is whether those coherently exist. If the placebo works for a disease, humans in their natural environment might do something they believe will cure the disease, and thus you have “natural healing”.
Same objection: do such exist? Can you give any examples?
The problem is that the difference between (psychosomatic) placebo and natural healing is just the involvement of the mind. If no natural healing is possible, what kind of magic is the mind doing?
It’s easier to exclude placebo—e.g. if the patient is in a long-term coma, no placebo effects seem to be possible.
Physical injury, chronic disease.
I meant placebo as the baseline effect (from all sources, psychosomatic or statistical), and the falsifiable prediction is that it should drastically decrease in situations where regression to the mean should not happen.
Not clear why psychosomatic effects happen, may work in coma. Very clear why regression to the mean happens, well understood issue in sampling from a distribution. So: easier to exclude well-understood thing.
Actually, you can view this as a causal issue; the blog post is really about a type of selection bias, or “confounding by health status.”
edit: Lumifer, this is curious. I mentioned chronic disease in my original response. Do you … parse what people write before you respond?
I think the core point of that article (and one I agree with) is that if we want to attribute the ‘placebo effect’ to medical care, we need to measure not the difference between the patient before and after placebo treatment, but the difference between the after for no treatment and the after for placebo treatment. And so it seems very useful (for determining the social benefit of medicine / homeopathy / etc.) to separate out psychosomatic effects (which are worth paying for) from statistical effects (which aren’t worth paying for).
Sure, I agree. If the article is right.
I think this part is a bit too strong, which corrupts one of the main points of the whole post:
It’s not called the stay-the-same-anyway effect, it’s called the get-better-anyway effect. The patient who reports lower pain a week later actually is in less pain. Health isn’t repeated draws from an urn: if you crack a rock one day it won’t regress to the mean. It’ll stay cracked. That people heal is not a statistical artefact.
That is, I agree much more with the O’Connell quote (emphasis mine):
Regression to the mean plays a part, especially for chronic variable conditions like lower back pain or depression, but even there natural recovery plays a huge part (otherwise the condition would be a degenerative one).
I agree, but here I am (uncharacteristically :-/) inclined to the charitable reading, and to treat “it” in “it provides no benefit whatsoever” as referring to placebo.
I would also think of regression to the mean (in this context) as an observable manifestation of “natural recovery” and not oppose them.
I think the structure of the paragraph is pretty clear (differentiating sentence, name A, explain A, name B, explain B, compare A and B), and the rest of the article matches my interpretation.
Yes, one could say that natural recovery is the mechanism by which regression to the mean works.
The chief thing I’m objecting to is the idea that the regression is in some way illusory or nonexistent. In the discussion of the NSLBP, for example, DC claims “none of the treatments work” when I think the result is the opposite, that “all of the treatments work.” Now, DC and I agree on the right course of treatment (do nothing) for the same reason (why spend more to get the same effect as doing nothing?), but we disagree on the presentation. Instead of “treatment” vs “no treatment,” both of which are equally ineffective, cast it as “natural recovery plus treatment” vs. “natural recovery alone,” both of which are equally effective.
Here you might get into an object-level vs. meta-level debate. I argue that one should talk up doing nothing instead of talking down treatments that are no better than doing nothing. It will be hard to convince the man on the street, reasoning by post hoc ergo propter hoc, that his attempts did not actually lead to recovery; but if he is convinced to try doing nothing, then when doing nothing turns out to work, the same fallacy will cause him to gain trust in doing nothing. One could respond that the important point is not that he get the object-level question right, but that he avoid fallacious reasoning.
That naturally leads to the effect of treatment being zero, which is conventionally called “the treatment does not work”.
When you have some baseline process and some zero-effect interventions on top of it, I think it’s misleading to say that all these interventions work.
These, of course, are not mutually exclusive. Besides, you need to do something to counteract the proponents of the no-effect treatments—such people exist (typically they are paid for providing these treatments) and if you just ignore them they will dominate the debate.
The placebo group is called such because it receives the placebo treatment, not because medical researchers think all improvement in it is attributable to the placebo effect. Results are reported as improvement in the treatment arm vs. the placebo arm, and never have I seen these differences explicitly reported as treatment effect vs. placebo effect, and I’ve read hundreds of medical papers. The real magnitude of the placebo effect is almost never of interest in these papers. Some professionals in the medical community could have such a misconception because of the usual lack of scientific training, but I’d like to think they are a small minority.
If the placebo effect is of real importance, I think a more significant problem would be the lack of use of active placebos that mimic side effects, since most drugs have them and this is a potential source of breaking the blinding of RCTs.
Sure. But the question under discussion here is what actually is the placebo effect and how much of it can you attribute to psychosomatic factors and how much to just regression to the mean (aka natural healing).
You are correct in that most intervention studies don’t care about the magnitude of the placebo effect, they just take the placebo arm of the trial as a baseline. But that doesn’t mean that we couldn’t or shouldn’t ask questions about the placebo effect itself.
In that case your opener is slightly polemical :)
Agreed. The problem with nonintervention arms for studying the placebo effect is that there aren’t clear incentives for adding them and they cost statistical power.
My n=1 experiment is evidence against this. When my son was much younger and complained that some part of him was hurting (because, say, he bumped against a wall) I would put lotion on the part and say it was powerful medicine. It usually made him feel better. And I wasn’t even lying, because the medicine I had in mind was the placebo effect.
You were lying, because you were making a statement that you knew would be understood as an untruth and with the intention of it being understood as that untruth. The fact that it may be true using a definition that isn’t used by the target doesn’t change that.
Disagree. I believed that my statement would be interpreted as “this will reduce your pain.” Because of my belief in the placebo effect I really thought that the lotion would reduce my son’s pain.
I suspect you may be overestimating young children’s critical thinking abilities. If daddy says X is “powerful medicine”, then “powerful medicine” is defined as X.
You were not measuring actual improvement—you were measuring the amount of whining/complaining.
Which is strongly correlated with pain. A reduction in pain is an actual improvement.
No, not in the sense we are talking about here. Pain is known to be quite psychosomatic, anyway.
Right, which is why the effect in the placebo arm is not called the placebo effect.
Here they found dopamine to encode some superposed error signals about actual and counterfactual reward:
http://www.pnas.org/content/early/2015/11/18/1513619112.abstract
Could that be related to priors and likelihoods?
Interesting, thanks!
I wonder if starting a GiveWell-like organization focused on evaluating the cost-effectiveness of anti-aging research would be a more effective way to fund the most effective anti-aging research than earning-to-give. Attracting a Moskovitz-level funder would allow us to more than completely fund SENS (conditional on SENS still seeming like the best use of funds after more research was done).
The product of meta-orgs is taste. If boardgamegeek thinks that Twilight Struggle is a good game, then you, not having played it, should expect that it’s likely a ‘good’ game. If Givewell thinks that AMF is a good charity, then you, not having looked at it yourself, should expect that it’s likely a ‘good’ charity.
With games that many people play, a website can average together those ratings and then sort them to generate a solid taste measure. With charities that have done things in the past and have models of what they can do in the future, an organization can evaluate the things done and the models for how things would change and estimate impacts and then sort by those impacts.
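One standard way to make that averaging-and-sorting robust (a sketch of the general technique; I don’t know boardgamegeek’s exact formula) is a Bayesian average, which shrinks items with few ratings toward a global prior, so a game with three perfect ratings can’t leapfrog one with fifty thousand good ones:

```python
def bayes_avg(ratings, prior_mean=5.5, prior_weight=100):
    """Mean rating shrunk toward the prior; fewer ratings means more shrinkage."""
    return (prior_mean * prior_weight + sum(ratings)) / (prior_weight + len(ratings))

print(bayes_avg([10, 10, 9]))      # ~5.6: three perfect ratings barely move the score
print(bayes_avg([8.3] * 50_000))   # ~8.3: a heavily rated game keeps its own mean
```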
But with scientific projects, this seems much more difficult, because you’re extrapolating past the fringes of current knowledge. It’s not “which board games that already exist are best?” but “which board game would be best, if you made it?” This is a skill that people have to various degrees—someone is designing these things, after all—but I think it’s very difficult to communicate, and more importantly, for listeners to have a good sense of why they should or should not trust another person’s taste.
Another way of looking at this is, with medical research, all of the cost-effectiveness is driven by whether or not the technology works. If the research is to validate or invalidate a theory, the usefulness of that theory (and thus the evidence) is determined by the technologies enabled by that theory (or the attention spared to work on other technologies that do work). But this is the thing that, by definition, we don’t know yet, and is the thing that, say, SENS leadership spends its time thinking about. Do we approve this grant, or that grant?
(This comment may oversell the degree to which tech growth is about creating new knowledge / tech rather than spreading old knowledge / tech, but I think the point still gets at what you’re talking about here.)
That depends on how you define “technology”. Knowledge about which lifestyle choices result in healthier living has an effect, but I wouldn’t call it “technology” in the narrow sense. I think there’s a good chance that there’s a bias of people focusing too much on trying to use technology to solve problems.
Agreed but I think I’m more willing to call lifestyle choices, and in particular the means by which medical experts can guide the lifestyle choices of their patients, ‘cultural technology’ or something similar. One can know that some exercises will fix the patient’s back pain, but not know how to get the patient to do those exercises. (Even if the patient is you!)
Stretching the meaning of the term technology that way is motte-and-bailey. For large parts of the medical community, “technology” refers to something you can in principle patent.
Even if you see the notion more broadly, the mental model of medical experts using cultural technology to get patients to comply isn’t the only way you can think about it.
You can also practice the values of what Kant called enlightenment, where individuals engage in self-chosen actions because they can reason. With enlightenment values it becomes important to educate people about how the body works. If you think of patients as subjects who benefit from education, you have a different health system than if you think of them as objects to be forced into engaging in certain actions.
It’s easy to make the moral argument that what Kant calls enlightenment is good, but it might also in practice be the paradigm that produces better health outcomes.
If you care about radical progress in medicine, then it’s important to be open to different paradigms producing medical progress. Scientific paradigms are in flux, and it’s important to be open to the possibility that paradigms different from how we currently do medicine might have advantages. I think ideally we’d have pluralism in medicine, with many different paradigms getting explored.
How can different paradigms lead to a different science? Take the question of whether a single sperm is enough to get a woman pregnant. You will find a lot of mainstream sex advice from sources like WebMD saying that a single sperm is enough. That’s likely wrong.
If you believe that the point of sex education is to get people to always use condoms, it can be helpful to teach the wrong “fact” that a single sperm is enough. A system that focused on true education, however, would teach the truth. Knowing the truth in this instance isn’t a “technology” that does anything specific. I don’t trust biology to progress if it doesn’t care for the truth and just tries to find facts that get people to comply with what their doctor tells them.
I seriously engaged with NLP and its “change is what matters, truth of statements is secondary” ideology. NLP is actually much more honest about this, but once you accept the technology frame, you get there. I think that relationship to the truth is flawed.
One of the key characteristics of research of the unknown is that you don’t know the cost-effectiveness beforehand.
What kind of research do you think could prove that claim?
The interesting thing about that claim is the idea that effective anti-aging research is research that’s branded as anti-aging. I would guess that one of the most effective investments to further anti-aging research was the NIH decision to give out grants to DNA sequencing companies.
Investigating SENS more closely is also an interesting proposition. Doing so will show that it’s over-optimistic and driven by assumptions that are likely wrong. However, it scores high in the “clarity of vision” department that Y Combinator uses to decide which startups to fund. SENS doesn’t have to be right on its core assumption to produce useful knowledge.
Startups don’t profit from highly critical outside scrutiny into how they invest their money. Critical scrutiny might harm SENS.
Thoughts this week:
Effective Altruism
(1)
All I want for Christmas... is for someone from the effective altruism movement to take seriously the prospect of using sterile-insect techniques and more advanced gene drives against the tsetse fly. This might control African sleeping sickness, a neglected disease, and more importantly unlock what is largely suspected, according to GiveWell, to be THE keystone cause of malnutrition in Africa, through an extensive causal pathway. I feel EAs are getting too stuck into causes that were identified early in the movement and are neglecting the virtue of cause neutrality.
(2)
Isn’t it time effective altruists matured to using standardised measures of impact on an individual, such as the impact on psychological distress, and then approximated where interventions sit on a scale of magnitude of cumulative K10 scores? The K10 is a simple metric: you can teach NGO/aid orgs how to understand it quickly, and measures of psychological well-being are the ‘net result’ of individual differences in changes to health and SES.
(3)
Any thoughts on the prospective impact of a documentary about effective altruism? Looks like the best we’ve got are Vaughan’s great speeches from Effective Altruism Global, some other YouTube clips with little to no views, and Singer’s TED talk.
(4)
Kidney donation saves 14 QALYs. Deceased organ donation saves perhaps 10 people, so that’s 140 QALYs. GiveWell gets a QALY for about 80 bucks, so being an organ donor is worth about 80*140 = 11,200 dollars. Upon Googling I found a claim that cryonics has a 90 percent chance of success. That sounds wildly optimistic, so I’m going to halve it and estimate that unintended consequences will kill me (guess) 500 years into my life. So, assuming that extends an 80-year average lifespan, that’s 500 − 80 = 420 years of additional life. Maths isn’t needed to suggest I’ll have the donation capacity and propensity for donation opportunities as effective as, if not more effective than, GiveWell’s current ones in those years. So, cryonics is more altruistic for EAs than organ donation, no?
Update: the LessWrong survey says the probability is 7% at a glance. So that’s around 1/15.
420/15 = 28 years. In 28 years I still imagine I’d be able to donate that amount, assuming 10%/year income donation into a trust that actualises upon my death.
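The arithmetic, spelled out (all inputs are the guesses above, not established figures):

```python
qalys_per_deceased_donor = 10 * 14      # ~10 recipients x ~14 QALYs each
dollars_per_qaly = 80                   # the GiveWell figure assumed above
print(qalys_per_deceased_donor * dollars_per_qaly)  # 11200 dollars

p_cryonics = 1 / 15                     # ~7% from the LW survey
extra_years = 500 - 80                  # guessed gain over an average lifespan
print(extra_years * p_cryonics)         # 28.0 expected extra years
```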
Productivity
(1)
I want to continue to streamline my workflow. Screw SMS; I’m going to phase over to email alone, with Google Voice forwarding my SMSes to email.
Relationships
Anyone on LessWrong wanna get together ha … ha …? Just assume the worst traits for me, and don’t ask about them. Then just evaluate my writing here as my best trait and make a choice on that ;)
Info diet
Last read with an open mind: Zero to One by Peter Thiel
Take-aways from this one include:
the importance of thinking carefully about market share
the value of ‘value capture’ and thinking like a monopolist to secure private gains
Last listened to with an open mind: Danger and Play podcast by Mike Cernovich
Take-aways from this one include:
‘Mindset is like a conversation; I wouldn’t be as harsh on myself as on others.’ There are takeaways in every listening, and it surprises me every time. Even now, my attitude towards the show hasn’t calibrated upward to the point that I feel the need to publicly declare my interest in it in the hope of prompting a re-listen at a time when it will help. The same goes for my ‘last watched’:
Last watched with an open mind: Mark Freeman (YouTube)
A world of insights in every video. They are not new insights when I rewatch videos, but they are sufficiently abstract and complex, and result in a high enough cognitive load, that I forget them between days of watching. Too bad it’s extremely boring to listen to them, and somewhat shame-inducing, since mental illness is a taboo topic.
Medical malpractice
Australia has the highest rate of medical error in the world, according to the World Health Organisation. Counterintuitive as it may seem, in Australia there are negligible institutional incentives to fight medical malpractice. Instead, over the past couple of years, extensive lobbying has taken place by the medical profession for changes to the law on medical negligence in Australia. Medical lobby groups have sought to have governments legislate what is known as the Bolam test, where the negligence of a doctor is determined solely on the basis of other doctors’ opinions about the doctor’s conduct, regardless of what judges and the courts have to say.
From the last article, some other interesting points made:
Looks like a pretty tangly situation with no clear fix
EA is mostly about using statistics that are already out there.
The K10 has questions that are strongly culturally dependent.
1. During the last 30 days, about how often did you feel tired out for no good reason?
depends heavily on what people consider to be “good reasons”, which differs a lot from culture to culture. It might very well be interpreted by some people as: “Did you do anything that produced karma that you have to pay off by being tired?”
So K10 isn’t validated in other cultures?
The tool used to measure psychological distress by GiveDirectly isn’t validated in Africa.
As a pampered modern person, the worst part of my life is washing dishes. (Or, rinsing dishes and loading the dish washer.) How long before I can buy a robot to automate this for me?
It’s weird, of all the people I know who hate the whole process of getting clean dishes, you are the only one who hates loading the dish washer instead of unloading it.
Anyway, I think that loading the dish washer is sufficiently complicated (manipulating fragile objects, allocating weird shapes into boxes efficiently, moving around a human environment, etc.) that only a full-fledged robotic butler could do it. I’d say more than 10 years, with 0.9 confidence.
In the meantime, you should really not rinse the dishes before putting them in, otherwise the device will have no benefit: just remove the bigger food residues and chuck ’em in as they are.
Where I live you don’t often see dishwashers, so I always assumed they were as convenient as it could get.
Washing dishes is less bad if you don’t have to wash LOADS of dishes, so you could wash your plates specifically right after you’re finished with your food (and negotiate with your family that everyone ought to do the same). They’re also easier to wash when the food hasn’t dried onto the plates. If you have kids too small to even reach the sink, that’s harder.
I don’t really know how dishwashers are supposed to work; is there some reason one cannot load their plates right into it (after some initial rinsing, I presume)?
As for pots, and other stuff for which immediate cleaning is not an option… well, those are still going to suck, especially since they are the things hardest to clean, and it wouldn’t be clear who would have to clean them (if, as is customary, your missus is the one cooking, it is only proper that you help relieve her of some of the load). If you were always to cook just enough, you could at least clean those while they are easier to, but we specifically tend to make several days’ worth of food in one go.
Short papers get cited more often. Should we believe that the correlation is due to causal factors? Should aspiring researchers keep their titles as short as possible?
Papers with very long titles tend to be about a very narrowly defined problem in a very narrow subfield therefore they get cited less?
Given that all of the correlations reported in the paper are negative but smaller than 0.07 in magnitude (smaller than 0.2 when lumped by journal), I don’t think that these observations, statistically “significant” though they are, can be taken as a basis for advice on choosing a title.
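For anyone who wants to check effect sizes like these on their own field’s data, the computation is a one-liner over (title length, citation count) pairs; a sketch with made-up numbers:

```python
from scipy.stats import spearmanr

title_lengths = [45, 120, 80, 60, 150, 95, 70, 110]  # characters (fake data)
citations = [30, 5, 12, 25, 3, 10, 18, 8]            # (fake data)
rho, p = spearmanr(title_lengths, citations)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```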
The science myths that will not die
The Strangest, Most Spectacular Bridge Collapse (And How We Got It Wrong)
The point that you can drive an excitation without resonance is a worthwhile one, but I think this article is too long by about a factor of 3, and the weird over-literal use and repetition of “self-excitation” is detrimental.
Notes on the Oxford IUT workshop by Brian Conrad
What’s your credence that Mochizuki’s proof will turn out to be correct?
This is a kind of repost of something I shared on the LW Slack.
Someone mentioned that “the ability to be accurately arrogant is good”. This was my reply:
What do you think? Do others have this pattern?
…apparently they do: This post is about how dealing with this can fail.
See also this other post about another aspect of arrogance.
Yes, when you imply that you’re smarter than someone, you make them feel bad. And yes, many smart people don’t realize that. But such behavior can also be attractive to onlookers, especially on the internet. I think Eliezer’s arrogance played a big role in his popularity. Personally, I try to avoid being arrogant, but sometimes I can’t help it :-)
You might have been more arrogant when you were young because you might have actually been smarter than most people around you. As people grow up they self-select into careers that require intelligence; most of them are then no longer smarter than most of their peers, and signaling ‘I’m smarter than you’ becomes unfounded and starts to look silly.
The classic example of this is when a smart kid from a middling high school finds herself at a good university. She was so used to being the smartest one around and not having to work hard to get good grades, and then… BAM! The level of effort she’s used to is now clearly insufficient and there are smarter people all around her. The adjustment can be difficult.
To avoid arrogance signalling let’s instead poll for it:
I think I was smarter than my class-mates in school [pollid:1078]
I think I was smarter than my co-students during university [pollid:1079]
I think I was smarter than my colleagues on the job [pollid:1080]
I have appeared arrogant in school [pollid:1081]
I have looked silly in school [pollid:1082]
I have appeared arrogant during university [pollid:1083]
I have looked silly during university [pollid:1084]
I have appeared arrogant on the job [pollid:1085]
I have looked silly on the job [pollid:1086]
For reference: My IQ [pollid:1087]
Use IQ 138 if you don’t know or don’t want to say. Assume present tense where applicable. Use the middle option to see results or if it doesn’t apply.
There are probably instances where I do come across as arrogant, but I don’t think it’s an automatic effect of being competent and having high self-esteem.
Valentine from CFAR would be a counter-example. He’s competent and self-confident but he has the social skills that prevent it from coming across as arrogant.
I am, of course, an arrogant smartass :-)
I deal with this problem by being aware of it and by having the (apparently rare) ability to shut up. I also find it easy to go meta, so when I notice that the status layer of the conversation becomes tumescent and starts to dominate the subject layer, I adjust accordingly.
This doesn’t work all the time, but it works well enough that I find it acceptable.
Here’s a letter to an editor.
“The Dec. 6 Wonkblog excerpt “Millions and millions of guns” [Outlook] included a graph that showed that U.S. residents own 357 million firearms, up from about 240 million (estimated from the graph) in 1995, for an increase of about 48 percent. The article categorically stated that “[m]ore guns means more gun deaths.” How many more gun deaths were there because of this drastic increase in guns? Using data from the FBI Uniform Crime Reports, total gun murders went from 13,673 in 1995 to 8,454 in 2013 — a decrease in gun deaths of about 38 percent resulting from all those millions more guns. I’m not going to argue causation vs. correlation vs. coincidence, but I can say that “more guns, more gun deaths” is wrong, as proved by the numbers.”
Getting into lurking variables is one way of handling this, but I’m wondering why the author didn’t just “go all the way” and declare that more guns = fewer deaths, rather than just more guns <> more deaths.
Maybe making false statements or lying while sounding credible is not so easy. Maybe the statement can’t be too counterintuitive to too many people.
E.g., I complained to a chain store about customer service via their e-mail link, and the cust. service rep. said he couldn’t help me because he works the night shift and the store in question is open in the daytime.
Also see https://www.psychologytoday.com/blog/extreme-fear/201005/top-ten-secrets-effective-liars
How much should you use LW, and how? Should you consistently read the articles on Main? What about discussion? What about the comments? Or should a more case-by-case system be used?
Should is one of those sticky words that needs context. What are your goals for using LW?
Improving my rationality. Are you looking for something more specific?
Yes.
Epistemic rationality or instrumental rationality? If the former, what specific aspects of it are you looking to improve? If the latter, what specific goals are you looking to achieve?
I would like to improve my instrumental rationality and improve my epistemic rationality as a means to do so. Currently, my main goal is to obtain useful knowledge (mainly in college) in order to obtain resources (mainly money). I’m not entirely sure what I want to do after that, but whatever it is, resources will probably be useful for it.
What are the strongest arguments that you’ve seen against rationality?
Well, it depends on what you mean by “rationality”. Here’s something I posted in 2014, slightly revised:
If not rationality, then what?
LW presents epistemic and instrumental rationality as practical advice for humans, based closely on the mathematical model of Bayesian probability. This advice can be summed up in two maxims:
Obtain a better model of the world by updating on the evidence of things unpredicted by your current model.
Succeed at your given goals by using your (constantly updating) model to predict which actions will maximize success.
Or, alternately: Having correct beliefs is useful for humans achieving goals in the world, because correct beliefs enable correct predictions, and correct predictions enable goal-accomplishing actions. And the way to have correct beliefs is to update your beliefs when their predictions fail.
We can call these the rules of Bayes’ world, the world in which updating and prediction are effective at accomplishing human goals. But Bayes’ world is not the only imaginable world. What if we deny each of these premises and see what we get? Other than Bayes’ world, which other worlds might we be living in?
To be clear, I’m not talking about alternatives to Bayesian probability as a mathematical or engineering tool. I’m talking about imaginable worlds in which Bayesian probability is not a good model for human knowledge and action.
Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra’s world, the world of tragedy — in which those people who know best what the future will bring, are most incapable of doing anything about it.
In the world of heroic myth, it is not oracles (good predictors) but rather heroes and villains (strong-willed people) who create change in the world. Heroes and villains are people who possess great virtue or vice — strong-willed tendencies to face difficult challenges, or to do what would repulse others. Oracles possess the truth to arbitrary precision, but they accomplish nothing by it. Heroes and villains come to their predicted triumphs or fates not by believing and making use of prediction, but by ignoring or defying it.
Suppose that the path to success is not to update your model of the world, so much as to update your model of your self and goals. The facts of the external world are relatively close to our priors; not much updating is needed there — but our goals are not known to us initially. In fact, we may be thoroughly deceived about what our goals are, or what satisfying them would look like.
We might consider this to be Buddha’s world, the world of contemplation — in which understanding the nature of the self is substantially more important to success than understanding the external world. In this world, when we choose actions that are unsatisfactory, it isn’t so much because we are acting on faulty beliefs about the external world, but because we are pursuing goals that are illusory or empty of satisfaction.
There are other models as well, that could be extrapolated from denying other premises (explicit or implicit) of Bayes’ world. Each of these models should relate prediction, action, and goals in different ways: We might imagine Lovecraft’s world (knowledge causes suffering), Qoheleth’s world (maybe similar to Buddha’s), Job’s world, or Nietzsche’s world.
Each of these models of the world — Bayes’ world, Cassandra’s world, Buddha’s world, and the others — does predict different outcomes. If we start out thinking that we are in Bayes’ world, what evidence might suggest that we are actually in one of the others?
This is a perspective I hadn’t seen mentioned before and helps me understand why a friend of mine gives low value to the goal-oriented rationality material I’ve mentioned to him.
Thank you very much for this post!
It’s worth noting that, from what I can tell at least (having not actually taken their courses), quite a bit of CFAR “rationality” training seems to deal with issues arising not directly from Bayesian math, but from characteristics of human minds and society.
Rationality takes extra time and effort, and most people can get by without it. It is easier to go with the flow—easier on your brain, easier on your social life, and easier on your pocketbook. And worse, even if you decide you like rationality, you can’t just tune into the rationality hour on TV and do what they say—you actually have to come up with your own rationality! It’s way harder than politics, religion, or even exercise.
“It’s cold-hearted.”
This isn’t actually a strong argument, but many people find it persuasive.
It applies to certain kinds of rationality but I don’t think it applies to rationality!CFAR or the rationality I see at LW events in Germany.
People discard artificial constructs after they are beaten a few times, and return to simply powering through.
It is hard, sometimes, to follow epistemic rationality when it seems in conflict with instrumental rationality. Like, when a friend and colleague cries me a river about her ongoing problems, I try to comfort her but also to forget the details, lest I betray her confidence the next minute while speaking to our other coworkers. Surely epistemic rationality requires committing information to memory as losslessly as possible? And yet I strive to remember the voice and not the words.
(A partial case of what people might mean by ‘rationality is cold’, I guess.)
You need to forget a fact lest you accidentally mention it?
Even an oblique reference, a vague reflection can be harmful.
Superstition hasn’t worked in the past, so it’s due to be right soon.
Does anyone know of a good program for eye training? I would like to try to become a little less near-sighted by straining to make out things at the edge of my range of good vision. I know near-sightedness means my eyeball is elongated, but I am hoping my brain can fix a bit of the distortion in software. Currently I am doing random printed-out eye charts, and I have gotten a bit better over time, but printing out the charts is tedious.
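If the bottleneck is just printing fresh charts, a throwaway script can generate them: this sketch writes an HTML page of random Sloan letters in shrinking font sizes, one new chart per run (sizes chosen ad hoc, not calibrated to a real Snellen chart):

```python
import random

SLOAN = "CDHKNORSVZ"  # the ten letters used on standard Sloan eye charts
sizes_pt = [72, 60, 48, 36, 30, 24, 18, 14, 12, 10]

rows = []
for size in sizes_pt:
    letters = " ".join(random.choice(SLOAN) for _ in range(5))
    rows.append(f'<p style="font-size:{size}pt; font-family:monospace">{letters}</p>')

with open("eye_chart.html", "w") as f:
    f.write("<html><body>" + "\n".join(rows) + "</body></html>")
print("wrote eye_chart.html; print it and read from a fixed distance")
```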
An acquaintance runs http://eye-track.me/ for measuring vision
How nearsighted are you (in diopters)?
About 20/50; I don’t know if that can be unambiguously converted to diopters. I measure my performance by sitting at a constant 20 feet away, and when I am over 80% correct I shrink the font on the chart a little bit. I can currently read a slightly smaller font than what corresponds to 20/50 on an eye chart.
So that’s fairly minor myopia.
Eye training programs train eye muscles; it’s not an issue of fine-tuning “brain software”. You can train your eye muscles to compensate somewhat, but the downside is that if you’re, e.g., just tired or stressed, your vision degrades back to baseline.
Not only muscles directly at the eye but also at the back of the head.