Rationality Quotes November 2013
Another month has passed and here is a new rationality quotes thread. The usual rules are:
Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you’d like to revive an old quote from one of those sources, please do so here.
No more than 5 quotes per person per monthly thread, please.
Robert Burton, from On Being Certain: Believing You’re Right Even When You’re Not, reminding me of Epiphany Addictions
It looked like nonsense to me. I stopped reading after a few sentences.
I’m not saying I’m immune to epiphany addiction, but I want the good stuff.
I thought it was a puzzle or riddle, so I went back and looked at it again. My first guess was that it was something to do with running, then paper airplanes (which can be made from newspaper, but not a magazine). The rock as anchor made me realize there needed to be something attached, which made me realize it was a kite.
On the other hand, I don’t have any trouble seeing alternative interpretations; perhaps it’s because I already tried several and came to the conclusion myself. (Or maybe it’s just that I’m more used to looking at things with multiple interpretations; it’s a pretty core skill to changing one’s self.)
Then again, I also don’t see the paragraph as infused with irreversible knowing. I read the words literally every time, and have to add words like, “for flying a kite” to the sentences in order to make the link. I could just as easily add “in bed”, though, at which point the paragraph actually becomes pretty hilarious—much like a strung-together collage of fortune cookie quotes… in bed. ;-)
:-D
The reason I posted the link to epiphany addiction was that this quote is an example of how confusion doesn’t feel good (it prompted you to stop reading...), and that “sense of knowing” feels pleasant. The danger being that we have very little control over when we feel either, so the feeling of knowing is no substitute for rationality.
Thanks. I had no idea that was what you had in mind.
“The feeling of knowing” is probably worth examining in detail.
Sorry no cite, but I heard about a prisoner whose jailers talked nonsense to him for a week. When they finally asked him a straight question it was such a relief he blurted out the answer.
I tried to come up with a different ‘magic word’ and thought about bombs. The DIY kind (like sulfuric acid + KMnO4 + …), with newspaper being better because it is easier to tear… Does anybody have other ideas?
ETA: another possibility is a herbarium press, though the running part becomes confusing. Still, it might be better to run if you follow an expert on a survey, and to walk afterwards, trying to recall what you learned on the trip.
What makes it hard to think of alternatives is an automatic expectation of parsimony of arrangement and of the classical unities of time and space. Bias?
-- Marvin Minsky
And your experiences to date, which is also a thing about reality.
True, the availability heuristic, which the quote condemns, often does give results that correspond to reality—otherwise it wouldn’t be a very useful heuristic, now would it! But there’s a big difference between a heuristic and a rational evaluation.
Optimally, the latter should screen out the former, and you’d think things along the lines of “this happened in the past and therefore things like it might happen in the future,” or “this easily-imaginable failure mode actually seems quite possible.”
“This is an easily-imaginable failure mode therefore this idea is bad,” and its converse, are not as useful, unless you’re dealing with an intelligent opponent under time constraints.
Economist and likely future chairperson of the Federal Reserve Board Janet Yellen shows the key rationality trait of being able to admit you were wrong.
Alternatively, she thought that kind of a lie would be well received. It’s a widely used social skill to admit you were wrong even though you think you weren’t.
Why would she claim she hadn’t seen it coming, when it would have been much more to her benefit if she had claimed that she had seen the crisis coming?
That claim a) begs the question of why she didn’t say something at the time, or short the stock market, and b) is somewhat cliched anyway.
I think you nailed it.
Well, I know nothing of her role in what happened, but what you’re suggesting is much harder to sell if her past actions and statements contradict it, which I assume is the case here.
Lots of people benefitted from the crisis and the events that preceded it.
-Paul Halmos
Sean Carroll
Sometimes it’s disturbing how good Sean Carroll is at articulating my thoughts. Especially when it pertains to, as above, the philosophy of science. Here’s another:
Sean Carroll, at 24:10 in the talk God is not a Good Theory
“I don’t understand why a question is interesting, so clearly it’s meaningless.”
Math with Bad Drawings
Scott Adams, in How to Fail at Almost Everything and Still Win Big
Nate Silver
(h/t Rob Wiblin)
From an article about Obamacare.
James Franklin, The Science of Conjecture: Evidence and Probability before Pascal
Julien Smith
Corollary 1: Always try to be that person.
Disputed. Some people are naturally on the defensive even when debating true propositions. Defensiveness, though, is more often a bad sign, since somebody defending a false proposition that they know on some level to be false is more likely to try to hold territory and block the opponent’s progress. Many people advocating true propositions go on the offensive, and it is not clear to me that this is always wrong in human practice.
Nitpicking, but the quote stated that people who are on neither the offensive nor the defensive are people you can learn from—it didn’t say that people who are on the offensive or defensive are necessarily wrong to do so.
I’m not sure that’s just a nitpick. It’s a mistake so common that it should probably be listed under biases. It might be a variation on availability bias—what’s actually mentioned fills in the mental space so that the cases which aren’t mentioned get ignored.
And I’m not sure it’s a mistake. If you’re getting your information in a context where you know it’s meant completely literally and nothing else (e.g., Omega, lawyers, Spock), then yes, it would be wrong. In normal conversation, people may (sometimes but not always; it’s infuriating) use “if” to mean “if and only if.” As for this particular case, somervta is probably completely right. But I don’t think it’s conducive to communication to accuse people of bias for following Grice’s maxims.
I also dispute this: obvious cases include partial disagreement and partial agreement between parties, somebody who is simply silent or who says nothing of substance, and someone who is themself trying to learn from you/the other side.
(In particular, consider a debate between a biologist and the Pope on evolution. I would expect the Pope to be neither offensive nor defensive, though I’m not totally clear on the distinction here or how a debater can be neither, but I would expect to learn much more from the biologist than the Pope.)
One reading: “offense” as “trying to lower another’s status” and “defense” as “trying to preserve one’s own status”. The people you can learn from are the ones whose brains focus on facts rather than status.
I’m not sure if this is relevant.
In a technical sense, of course you can learn from people on offense/defense, since they are giving you information.
The set of people who hold and argue for accurate beliefs about a given issue, and the set of people from whom it is possible to learn about a given issue, may often overlap but are not at all the same.
I agree, besides which, if there is a popular rule, sign, or proxy for determining who is on the better side of a debate, you can bet that the intellectually dishonest fellow will start waving the “I’m right” flag.
That doesn’t sound like a debate I could learn anything from listening to.
Corollary 2: If they are on offense or defense, check with yourself what you expect to gain from continuing with the debate.
-- Peter Drucker
“It is far better to improve the [quality] of testing first than to improve the efficiency of poor testing. Automating chaos just gives faster chaos.”—Mark Fewster & Dorothy Graham, Software Test Automation
See also: the distinction between verification and validation, or between quality control and quality assurance.
I don’t get the distinction between verification and validation.
I really like this sentence (from wikipedia):
Thanks! http://en.wikipedia.org/wiki/Verifying
Jonathan Safran Foer, Extremely Loud and Incredibly Close (emphasis mine)
The “23 Enigma” is the Discordian belief that all events are connected to the number 23, given enough ingenuity on the part of the interpreter.
Apophenia.
Well, sort of—the protagonist is a child who tries to decipher a clue for a treasure hunt and so he realizes that a model that can predict anything is useless.
--John Ciardi
I guess the most difficult part is learning to extract signal from noise. Then all you have to do is keep the fools talking, and they will gladly oblige.
Jon Schwarz, A Tiny Revolution
Opportunity costs of time?
Source.
To what nugget of rationality does this point?
That behaviourally people treat free very differently from even $1, and that effective policymaking requires removing even trivial-seeming barriers to desired actions.
Yes. I also take it as a warning against hyperbolic discounting.
I’m a little bemused by the fact that my karma dropped, shortly after this, by exactly twice the then-current score of the great-grandparent. The precision confuses me at least as much as the magnitude.
Out of all the possible things that might have happened this month for which you would have surprisedly noticed that they had only a 1% chance of happening, how many have actually happened?
Like most human beings, I’m not very good at estimating the exact probability of low probability events, so would have no idea if some event had a probability of 1% as opposed to 10% or 0.0001% in a situation where the distinction is actually important.
1% is a very conservative lower bound for the probability of the change of your karma being twice the then-current score of its great-grandparent. The italicized words are those whose many possible modifications already yield enough possibilities that noting one of them occurring is completely uninteresting, except for this resulting free lesson in combinatorics ;)
Oh, Death was never enemy of ours!
We laughed at him, we leagued with him, old chum.
No soldier’s paid to kick against His powers.
We laughed, knowing that better men would come,
And greater wars: when each proud fighter brags
He wars on Death, for lives; not men, for flags.
-Wilfred Owen
From “The Next War” http://en.wikipedia.org/wiki/The_Next_War_%28poem%29 http://en.wikisource.org/wiki/The_Next_War
--Richard Feynman
Analects of Confucius
That’s because words are in general a grossly inadequate way to express thoughts. I wonder if telepathy would help with that.
What do you mean by telepathy? You can’t just transfer data from one neural net into another without having some sort of common data protocol that makes classifications. You still need some sort of language.
I suspect shared access to the conceptual referents of words would help with that specific problem, yes.
I suspect (with Feynman) that it would make only a small difference.
In particular, I suspect that if we did that we’d run more often into the currently-masked problem that everybody thinks about things a little bit wrong.
(People seem to differ in their interpretations of “telepathy,” so I’ve started trying to develop the habit of Tabooing the word. The irony of this in the current context does not escape me.)
That would depend partly on the specific problems with the words, and partly on the rationality of the people.
If everyone is making the same mistake, telepathy will just amplify the problem. There will be a chorus of agreement which is amplifying the mistake.
If people are using the wrong word, but with different shadings, then perhaps people will look into how well the word fits the concept it is supposed to indicate. However, the odds favor people yelling at each other about who’s right.
West Hunter
The article contains the line:
What’s wrong here? Four digits of accuracy for brain size and no error bars? That’s a sign of someone being either intentionally or unintentionally dishonest.
Quick Googling shows that there’s a published paper stating that Europeans’ average cranial capacity is 1347.
Rather than describing the facts as they are, he paints things as more certain than they are. I think that people who do that in an area where false beliefs lead to people being descrimited are in no position to complain when they receive some social scorn.
How meaningful are figures on brain size without figures on overall body size?
That’s close enough to not affect his point, or even the order. I think you’re engaging in motivated continuing to avoid having to acknowledge conclusions you find uncomfortable.
Do you also apply the same criticism to the (much larger number of) people who make (much larger) errors in the direction of no difference? Also, could you taboo what you mean by “descrimited”? Steelmanning suggests you mean “judged according to inaccurate priors”, yet you also seem to be implying that inaccurately egalitarian priors aren’t a problem.
Whatever the problem with non-factually-based equality may be, it is not a problem of discrimination, so the same criticism does not apply.
This gets back to the issue that neither you nor Christian have defined what you mean by “discrimination”. I gave one definition: “judged according to inaccurate priors”, according to which your comment is false. If you want to use some other definition, please state it.
Why would you think we are not using it in the standard sense? “Discrimination is the prejudicial and/or distinguishing treatment of an individual based on their actual or perceived membership in a certain group or category”
By that reasoning, refusing to hire someone who doesn’t have good recommendations, is discrimination, because you’re giving him distinguishing treatment (refusing to hire him) based on membership in a category (people who lack good recommendations).
I think you have some assumptions that you need to make explicit, after thinking them through first. (For instance, one obvious change is to replace “category” with “irrelevant category”, but that won’t work.)
Well, the recommendations you have are to some extent the result of your choices and actions, but whether your name sounds black hardly is. So, regret-of-rationality considerations apply more to the former than to the latter.
So you are saying that I should modify the definition of “discrimination” supplied by TAG, to include a qualifier “only as a result of your choices and actions”?
That seems to say that some forms of religious discrimination don’t count (choosing not to convert to Christianity is a result of your own choices and actions). It also ignores the fact that it is possible for someone to fall into a group where some of the group’s members got there by their own choices and actions but some don’t—not every person who can’t get good recommendations is in that situation because of his own choices and actions. In fact, there’s a continuum; what if, say, 10% of the people in a category got there by their own choices and actions but the other 90% had no choice?
Yes, it’s a continuum. That’s why I said “to some extent”, “hardly”, and “more … than”.
That would still mean that if I say “convert to Christianity or I won’t hire you” and you refuse, that wouldn’t count as discrimination. It would also mean that refusing to hire gay people would not be discrimination, as long as you only refused to hire people who participated in some activity, whether having gay sex, wearing rainbow flags, having a gay wedding, etc.
So what? I’m not particularly outraged that atheists aren’t allowed into the Swiss Guards.
For some reason, this sounds more problematic to me; not sure why.
Probably because you’ve been more exposed to rhetoric saying how horrible it is to be “anti-gay” than rhetoric saying how horrible it is to be “anti-atheist”.
Actually I think it’s the other way round, at least in the last few years. (There are many more atheists than gay people in my social circles.)
Alternate hypothesis: one has a choice and the other one does not.
You realize this is circling back around, right? The issue is that the two are both defined by choices, because the second group is not gay men but men who have consensual sex with men, and the second group’s membership is voluntary regardless of whether or not the first’s is.
Eugine_Nier responded, in effect, that the most likely reason to see those restrictions as being in different classes is the “who—whom?” of religious hiring restrictions being ‘okay’ and sexual-behavior-based hiring restrictions being ‘not okay’ because of status alignments of religion and sexual behavior.
That’s a good point.
What definition of “choice” are you using that makes that statement true?
I’m not sure there’s a coherent one here. But for issues such as this, what people, or society are comfortable with is a to a large extent a function of what intuitions they have. In this sort of context, the distinction is likely connected to intuitions of free will, whether or not that makes any sense.
I don’t see any a priori reason why those intuitions should apply to the case of religion but not homosexuality.
How a priori a reason do you want? There’s at this point a general consensus that sexual orientation, while culturally mediated, has a large genetic component. I don’t think that anyone thinks there’s, say, a gene for Christianity or the like. And while self-identified sexual orientation can change, that’s a substantially less common occurrence than changes in one’s religious self-identification. For example, in the US about half of all adults have changed their religion at least once in their lives. See here. Some of those people are people changing from one version of Protestantism to another, but even when you take them out, one is still talking about around a third of the population. And this is fairly well known: people understand that religious beliefs can change; indeed, many major religions have specific rules about how to handle conversions, and many forms of major religions (especially in Christianity and Islam) have active commandments to go and convert people.
Well, atheism is correlated with IQ and (ISTM) Openness to Experience, which are both somewhere around 50% inheritable.
Look at what I found by googling for
atheism twin studies
Yeah, well, Judaism is correlated with IQ as well :-D and I bet any non-mainstream religious beliefs will be correlated with Openness to Experience.
Well, I guess Jews and Wiccans would be as reluctant to get baptised in order to get a job as atheists are—possibly more so, insofar as I can easily imagine the latter thinking ‘if a lie is what you want, a lie is what you’ll get’ (there was a comment somewhere on LW to that effect, but I can’t find it).
Such a requirement would bother (to put it mildly) me much more as a Jew than it would as an atheist, FWIW.
Yes, on re-reading my comment it sounds like I was trying to win an award for the Understatement of the Year.
I’m not aware of any ‘consensus’ that sexual orientation has a ‘large’ genetic component. Some research suggests there is a genetic component, and it is known that there are developmental factors as well (i.e. what happens in the womb and afterwards) and also environmental factors. It is widely accepted that homosexuality is ‘not a (conscious) choice’ but that’s most definitely not the same as saying “it’s genetic”.
See other discussion in this subthread for evidence that there’s a large genetic component and that most papers in this context matter. I agree that there’s likely developmental and environmental issues at play, as well as stochastic effects.
And near as I can tell, the only things underlying this consensus are the “we can get it to show up on brain scans, therefore it must be genetic” fallacy and the accusation of being an EVIL HOMOPHOBE!!1!! hurled against anyone who questions it.
The fact that the kind of people responsible for these kinds of consensuses believe it is perfectly acceptable to promote falsehoods under the name of science for the “greater good” doesn’t exactly increase my confidence in it.
Twin studies show a strong genetic component. There’s also (weak) evidence for epigenetic effects. See here. There’s also circumstantial evidence of a link to female fertility.
The brain evidence is interesting, but it goes more to a lack of choice, rather than a genetic aspect. Of course, as with any complicated behavioral trait with some variation, it is likely that at least part of the trait is due to environmental effects at a young age, possibly including womb environment.
And you are A) Linking to a discussion where you and I already discussed exactly this and why your response didn’t make sense, and B) making a massive leap that from some political groups presenting selective evidence, that one should therefore doubt scientific studies. This does not follow.
The page you linked to doesn’t exactly present evidence of a strong genetic component (or much of a consensus for that matter).
Except it’s not evidence of lack of choice (unless you adopt a very specific kind of Cartesian dualism) either.
Your reasoning was basically “everyone lies, thus some political group lying isn’t evidence against their other claims”. Not a particularly strong argument at the best of times, certainly not here where the situation is extremely analogous to the lie at the link (just substitute “transgender” for “homosexuality”).
It wasn’t just presenting selective evidence, it was getting BS or outright lies declared part of the scientific consensus.
Could you elaborate? For example, the estimate from the largest study in the page (3,826 pairs) seems pretty nontrivial: 39% of variance! That’s not small or trivial by any means—that’s getting up there with some twin estimates of intelligence. And especially since the shared-environment estimate is 0.
Of the studies in question, the only one listed which showed a weak correlation is Bearman and Brückner, which still showed a 7.7% concordance rate for male homosexuality. The female rate is lower, which shouldn’t be that surprising, given that the other studies included also showed lower concordance rates for females, and that there’s other evidence for female sexuality being more malleable than male sexuality. Moreover, the largest study in question, the Sweden study, used literally every single pair of twins born in the country, which makes for a much larger study and handles the selection bias problems. And that study agreed with the results of the other studies excepting Bearman and Brückner.
What does this have to do with Cartesian dualism at all? For that matter, how is this at all relevant, since as I stated earlier, the matter under discussion is what is descriptively thought of as a choice or not by members of society. The entire point was discussing why someone would have a different reaction to the homosexuality discrimination issue than the atheism discrimination issue. In that context, colloquial intuitive notions of choice are relevant.
That isn’t what the argument is. Please reread that discussion.
Really? Did you see any evidence in that discussion that any aspect of this has had any influence on the scientific studies in question? Have they somehow managed to fake data and get it through? Do you think they’ve had papers get rejected? The only plausible sort of situation is selective granting, which requires specific aspects of the LGBTQE movement (not even generically but people who would agree with this lying idea, which the vast majority would not) to have somehow gotten positions in grant giving institutions. Do you have any evidence for that?
In isolation, this number tells us nothing. What is important is the gap between concordance rates for MZ, DZ, and siblings. There are several popular hypotheses that predict a large effect of prenatal environment. In fact, the particular paper gives 7.7% MZ, 4.2% DZ, and 4.5% siblings, so it detects no effect of prenatal environment and does suggest genetics.
Unless you accept some form of Cartesian dualism you should expect every aspect of the mind to show up on sufficiently advanced brain scans, whether or not it is a choice. Thus, the brain evidence isn’t particularly interesting.
And my point is that this perception isn’t grounded in anything objective and thus itself needs an explanation.
I have, twice in the last few days in fact. Why don’t you try rereading it?
I’m going to give you the benefit of the doubt and assume I misunderstood your argument there, if so could you state it more explicitly.
Well Julian practically admitted it.
Sometimes. More generally, pressure is applied to people who produce politically incorrect results: look at what happened to Mark Regnerus or Jason Richwine, or, to use a more famous example, James Watson.
Eugine, within 2 minutes of your comment above, I received a block downvote to all my recent comments regardless of subject. Given this thread, which I presume you must have already seen, my interest in whether this was your action should be clear. If you are attempting to simply ignore the issues, you may want to be aware that this is causing serious concerns and problems within the community, and at least one LW user has considered implementing a response of block downvoting all your comments and those of people with views similar to your own. This situation is creating a mindkilling political mindfield and it is in your and everyone else’s best interest for it to be headed off before the situation deteriorates. Therefore a straight answer from you as to whether you are involved in this would be both appreciated and in the best interest of the community.
If you think you can get away with ignoring ialdabaoth’s requests because of the user’s relatively low status, you should be aware that my own status on LW, while not very high, is likely not so low that simply ignoring me will be a remotely productive response.
You have made some valid points above, but I cannot given recent events discuss them or any other issue with you, until you address the community’s concerns here.
I would like to loudly add that I am very interested in the norms of polite discourse being followed on LW, and am throwing whatever status I have behind this request. Furthermore, if this turns out to be the sort of thing that is best resolved in PMs rather than comments, I am willing to facilitate conversations that way.
Attempted to reverse the effects of the block downvote. Note that any permanent solution to this problem cannot rely on the co-operation of possible defectors. If there is not a method for detecting such defections, and preferably determining the source, it continues to be a viable method of asymmetric warfare.
Also might be a decent idea for someone to take formal responsibility for moderating the website. Several people have moderator powers, but I do not know of anyone who is actually responsible for such moderation. (It seems that this would fall to Eliezer by default, but this does not seem to be Eliezer’s comparative advantage.)
ETA: A designated moderator is necessary even with a system such as I described in place, since the automated detection can screw up. A human overseer spending, say, an hour on this each week may be greatly productive if they have the appropriate tools available.
I didn’t realise that “falsehood” meant “not necessarily true”. I also remain unclear why a statement as guarded as “believed to have a genetic component” would need to be qualified with a further “maybe”.
Let me get this straight… Are you trying to evaluate the truth of a statement by its position in a signaling game?
So would you oppose discrimination against wheelchair bound construction workers?
Oh dear. Whoever wrote the WP article I was quoting didn’t steelman their definition.
Wikipedia is supposed to use what’s in the sources. They’re not allowed to steelman.
It may just mean “group or category which I like”, but I wouldn’t count that as steelmanning.
The best I can come up with is “group or category which has, in the past, often been subject to inaccurately negative judgment based on inaccurate priors”. In fact, let’s try that one.
First, sorry for the typo.
Claiming four digits of accuracy means claiming that the uncertainty about the difference is off by a factor of more than ten.
Understanding the uncertainty that exists is vital for reasoning effectively about what’s true.
Different people have different goals. If your goal is the search for truth, then it matters greatly whether what you are saying is true.
If your goal is to spread memes that produce social change, then it makes sense to use different criteria.
What does discrimination mean? If a job application with a name that’s common among black people gets rejected while an identical one with a name that’s common among white people gets accepted, that would be an example of bad discrimination.
Does it matter if having said name is in fact correlated with job performance?
Only if it’s still correlated when you control for anything else on the CV and cover letter, incl. the fact that the candidate is not currently employed by anyone else.
Being correlated isn’t very valuable in itself. Even if you do believe that blacks on average have a lower IQ, scores on standardized tests tell you a lot more about someone’s IQ.
The question would be whether the name is a better predictor of job performance than grades for distinguishing people in the population of applicants, or whether the information that comes from the name adds additional predictive value.
But even if various proxies of social status would perform well as predictors, I still value high social mobility. Policies that increase it might not be in the interest of the particular employer but are of interest to society as a whole.
Emphasis mine. I don’t think this is the question at all, because you also have the grade information; the only question is if grades screen off evidence from names, which is your second option. It seems to me that the odds that the name provides no additional information are very low.
To the best of my knowledge, no studies have been done which submit applications where the obviously black names have higher qualifications in an attempt to determine how many GPA points an obviously black name costs an applicant. (Such an experiment seems much more difficult to carry out, and doesn’t have the same media appeal.)
So, this “only question” formulation is a little awkward and I’m not really sure what it means. For my part I endorse correctly using (grades + name) as evidence, and I doubt that doing so is at all common when it comes to socially marked names… that is, I expect that most people evaluate each source of information in isolation, failing to consider to what extent they actually overlap (aka, screen one another off).
ChristianKI brought up the proposition “(name)>(grades)” where > means that the prediction accuracy is higher, but the truth or falsity of that proposition is irrelevant to whether or not it’s epistemically legitimate to include name in a decision, which is determined by “(name+grades)>(grades)”.
Doing things correctly is, in general, uncommon. But the shift implied by moving from ‘current’ to ‘correct’ is not always obvious. For example, both nonsmokers and smokers overestimate the health costs of smoking, which suggests that if their estimates became more accurate, we might see more smokers, not less. It’s possible that hiring departments are actually less biased against people with obviously black names than they should be.
...insofar as their current and future estimates of health costs are well calibrated with their actual smoking behavior, at least. Sure.
Well, it’s odd to use “bias” to describe using observations as evidence in ways that reliably allow more accurate predictions, but leaving the language aside, yes, I agree that it’s possible that hiring departments are not weighting names as much as they should be for maximum accuracy in isolation… in other words, that names are more reliable evidence than they are given credit for being.
That said, if I’m right that there is a significant overlap between the actual information provided by grades and by names, then evaluating each source of information in isolation without considering the overlap is nevertheless a significant error.
Now, it might be that the evidential weight of names is so great that the error due to not granting it enough weight overshadows the error due to double-counting, and it may be that the signs are such that double-counting leads to more accurate results than not double-counting. Here again, I agree that this is possible.
But even if that’s true, continuing to erroneously double-count in the hopes that our errors keep cancelling each other out isn’t as reliable a long-term strategy as starting to correctly use all the evidence we have.
Agreed. Any sort of decision process which uses multiple pieces of information should be calibrated on all of those pieces of information together whenever possible.
It’s even possible that if the costs of smoking are overestimated, more people should be smoking—part of the campaign against smoking is to underestimate the pleasures and social benefits of smoking.
That in no way implies that it would be a good choice for people to smoke more. People don’t make those decisions through rational analysis.
If you combine a low-noise signal with a high-noise signal, the combined signal can be of medium noise. Combining information isn’t always useful if you want to use both signals as proxies for the same thing.
For combining information in such a way you would have to believe that the average black person with an IQ of 120 will get a higher GPA score than the average white person of the same IQ.
I think there’s little reason to believe that’s true.
Without actually running a factor analysis on the outcomes of hiring decisions it will be very difficult to know in which direction it would correct the decision.
Even if you do run a factor analysis, integrating additional variables costs you degrees of freedom, so it’s not always a good choice to integrate as many variables as possible in your model. Simple models often outperform more complicated ones.
Humans are also not good at combining multiple sources of information.
Agreed that if you have P(A|B) and P(A|C), then you don’t have enough to get P(A|BC).
But if you have the right objects and they’re well-calibrated, then adding in a new measurement always improves your estimate. (You might not be sure that they’re well-calibrated, in which case it might make sense to not include them, and that can obviously include trying to estimate P(A|BC) from P(A|C) and P(A|B).)
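This is the sort of thing that is easy to check numerically. Below is a minimal sketch (with made-up numbers, not anything from the studies discussed in this thread) of why P(A|B) and P(A|C) alone do not pin down P(A|B,C): two joint distributions can agree on both conditionals and still disagree once the two pieces of evidence are combined, which is exactly the double-counting risk discussed above.

```python
from itertools import product

def make_joint(c_copies_b):
    """Joint P(a, b, c) over binary variables.
    P(A) = 0.5 and P(B = A) = 0.8; C is either an independent second signal
    of A with the same reliability, or an exact copy of B (fully redundant)."""
    table = {}
    for a, b, c in product([0, 1], repeat=3):
        p = 0.5 * (0.8 if b == a else 0.2)
        if c_copies_b:
            p *= 1.0 if c == b else 0.0
        else:
            p *= 0.8 if c == a else 0.2
        table[(a, b, c)] = p
    return table

def cond(table, event, given):
    """P(event | given), where event and given are predicates on (a, b, c)."""
    num = sum(p for w, p in table.items() if event(*w) and given(*w))
    den = sum(p for w, p in table.items() if given(*w))
    return num / den

for redundant in (False, True):
    t = make_joint(redundant)
    p_ab = cond(t, lambda a, b, c: a == 1, lambda a, b, c: b == 1)
    p_ac = cond(t, lambda a, b, c: a == 1, lambda a, b, c: c == 1)
    p_abc = cond(t, lambda a, b, c: a == 1, lambda a, b, c: b == 1 and c == 1)
    print(f"C redundant with B: {redundant}  "
          f"P(A|B)={p_ab:.2f}  P(A|C)={p_ac:.2f}  P(A|B,C)={p_abc:.2f}")
# Both cases agree on P(A|B) = P(A|C) = 0.80, but P(A|B,C) is 0.94 when the
# two signals are independent and only 0.80 when C merely repeats B.
```

In the hiring example, the hypothetical B and C stand in for two observed signals (say, grades and name); whether combining them should move your estimate depends entirely on how much they overlap, which the two conditionals by themselves cannot tell you.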
Not quite. Regression to the mean implies that you should apply shrinkage which is as specific as possible, but this shrinkage should obviously be applied to all applicants. (Regressing black scores to the mean, and not regressing white scores, for example, is obviously epistemic malfeasance, but regressing black scores to the black mean and white scores to the white mean makes sense, even if the IQ-grades relationship is the same for blacks and whites.)
It could also be that the GPA-job performance link is different for whites and blacks, even if the IQ-GPA link is the same for whites and blacks. (And, of course, race could impact job performance directly, but it seems likely the effects should be indirect for almost all jobs.)
If you’re just comparing GPAs, rather than GPAs weighted by course difficulty, there could be a systematic difference in the difficulty of classes that applicants take by race. I’ve had a hard time getting numerical data on this, for obvious reasons, but there are rumors that some institutions may have a grade bias in favor of blacks. (Obviously, you can’t fit a parameter to a rumor, but this is reason to not discount an effect that you do see in your data.)
Yes, but… motivated cognition alert. If you’re building models correctly, you take this into account by default, and so there’s no point in bringing it up for any particular input because you should already be checking it for every input.
Could you explain your reasoning here?
IQ is a strong predictor of academic performance, and a 1.5 sd gap is a fairly significant difference. The only thing I could think of to counterbalance it so that the average white would get a higher GPA would be through fairly severe racial biases in grading policies in their favor, which seems at odds with the legally-enforced racial biases in admissions / graduation operating in the opposite direction. Not to mention that black African immigrants, legal ones anyway, seem to be the prototype of high-IQ blacks who outperform average whites.
I am a little puzzled by the claim, which leads me to believe I’ve misunderstood you somehow or overlooked something fairly important.
I missed the qualification of speaking of whites with the same IQ. I added it via an edit.
Right, okay. I did misunderstand you. I’ll correct my comment as soon as I figure out the strikethrough function here.
I believe the primary way to get strikethrough is to strikethrough the entire comment, by retracting it.
You can use unicode.
I’d recommend Vaniver’s solution instead—IME Android phones don’t like yours.
Source is here. SD for Asians and Europeans is 35, SD for Africans was 85. N=20,000.
...no? Why in the world would he present error bars? The numbers are in line with other studies, without massive uncertainty, and irrelevant to his actual, stated and quoted, point.
His stated point is about telling things that everybody is supposed to know.
If you have an SD of 35 for an average of 1362, you have no idea whether the last digit should be a 2. That means you either state an error interval or you round to 1360.
Human height changed quite a bit over the last century. http://www.voxeu.org/article/reaching-new-heights-how-have-europeans-grown-so-tall . Taking data about human brain size with 4-digit accuracy and assuming that it hasn’t changed over the last 30 years is wrong.
Europeans gained a lot of body mass over the last 100 years due to better nutrition. Claiming that brain size is static to 4 digits, in a way where you could use 30-year-old data to describe today’s situation, gives the impression that human brain size is something that is relatively fixed.
The difference in brain size between Africans and Europeans in that study is roughly comparable to the difference in height between today’s Europeans and Europeans 100 years ago.
Given that background, taking a three-decade-old average from one sample population and claiming that it is, to 4 digits of accuracy, the average that exists today is wrong.
No, that was absolutely not his point. I don’t understand how you could have come away thinking that- literally the entire next paragraph directly stated the exact opposite:
More generally, that was not a tightly reasoned book/paper about brain size. That line was a throwaway point in support of a minor example (“For example, average brain size is not the same in all human populations”) on a short blog post. Arguments about the number of significant figures presented, when you don’t even disagree about the overall example or the conclusion, are about as good an example of bad disagreement as I can imagine.
I don’t think that the following classes are the same:
(1) Facts everyone should know.
(2) Facts everyone knows.
I think the author claims that this is a (1) fact but not a (2) fact.
His claim was:
(a) Everybody knew that different ethnicities had different brain sizes; (b) it was an uncomfortable fact, so nobody talked about it; (c) now nobody knows that different ethnicities have different brain sizes.
If individual datapoints have an SD of 35, and you have 20000 datapoints, then the SD of studies like this is 35/sqrt(20000)≈0.24. So giving a one’s digit for the average is perfectly reasonable.
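For anyone who wants to check the arithmetic, here is the calculation (using the SD and N quoted above; the figures are of course only as good as the source study):

```python
# Standard error of the mean = SD / sqrt(N), using the figures quoted above.
import math

sd = 35       # reported SD of individual cranial capacities
n = 20_000    # reported sample size
print(sd / math.sqrt(n))  # ~0.247, so quoting the mean to the nearest unit is defensible
```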
According to the paper the total mean brain size for males is 1,427 while for females it’s 1,272. Given around half women and half men the SD per point should be higher than 35.
(Assuming the sample is unbiased.)
Well, he did say “about”.
--Stuart Diamond, Getting More, 2010, pp. 51-52
I was once at a meetup, and there were some people there new to LessWrong. After listening to a philosophical argument between two long-time meetup group members, where they agreed on a conclusion that was somewhere between their original positions, a newcomer said “sounds like a good compromise,” to which one of the old-comers (?) said “but that has nothing to do with whether it’s true… in fact now that you point that out I’m suspicious of it.”
Later in the meetup, an argument ended with another conclusion that sounded like a compromise. I pointed it out. One of the arguers was horrified to agree with me that compromising was exactly what he was doing.
Is this actually a failure mode though, if you only “compromise” with people you respect intellectually? In retrospect, this sounds kind of like an approximation to Aumann agreement.
Each side should update on the other’s arguments and data, and on the fact that the other side believes what it does (insofar as we can’t perfectly trust our own reasoning process). This often means they update towards the other’s position. But it certainly doesn’t mean they’re going to update so much as to agree on a common position.
You don’t need to try to approximate Aumann agreement because you don’t believe that either yourself or the other party is perfectly rational, so you can’t treat your or the other’s beliefs as having that kind of weight.
Also, people who start out looking for a compromise might be led to compromise in a bad way: A’s theory predicts ball will fall down, B’s theory predicts ball will fall up, compromise theory predicts it will stay in place, even though both A and B have evidence against that.
Part of intellectual debate is that you judge arguments on their merits instead of negotiating what’s true. Compromising suggests that you are involved in a negotiation over what’s true instead of searching for the real truth.
It doesn’t matter what it sounds like you are doing or what you think you are doing. One thing matters: how good is your actual answer?
Yes, but if you followed a crappy reasoning process it’s less likely that you end up with a high quality answer than when you followed a good process.
Theoretically, if you treat your own previous position as a prior and the other guy’s arguments as some evidence, the standard updating will lead you to have a new position somewhere in between which will look like a compromise.
Obviously there are a lot of caveats—e.g. the assumption that an intermediate position makes sense (that is, the two positions are on some kind of continuous axis), etc.
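A minimal sketch of that point, with made-up numbers: a conjugate Beta-binomial update, where the posterior mean always lands between the prior mean and the frequency in the new evidence, which is what makes the result look like a "compromise" (and which, as noted, only makes sense when the positions sit on a common axis).

```python
# Made-up numbers: a Beta(8, 2) prior (mean 0.8) updated on 2 successes and
# 8 failures (observed frequency 0.2). The posterior mean lands in between.
prior_a, prior_b = 8, 2
successes, failures = 2, 8

post_a, post_b = prior_a + successes, prior_b + failures

prior_mean = prior_a / (prior_a + prior_b)      # 0.8
data_freq = successes / (successes + failures)  # 0.2
post_mean = post_a / (post_a + post_b)          # 0.5 -- between prior and data

print(prior_mean, data_freq, post_mean)
```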
Not to the extent your current position already takes those arguments into account (in which case the arguments fail to address any disagreement). More than that, by conservation of expected evidence some arguments should change your mind in the opposite direction from what they are intended to argue.
Just underlining a bit I especially like.
In my experience, bringing substance into the conversation greatly reduces your chances of convincing anyone of anything. If your goal is not to seek the truth, but to actually convince someone, it’s best to stay away from substance and to reach directly for their emotional levers.
I don’t think this is a failure of rationality: in disagreements about facts you have to trust the other person to not lie, in a negotiation you have to trust the other person to keep his end of the bargain.
On not doing the impossible:
-Article at The Atlantic
Will Burns
-- Peter Drucker
-- Ian Hacking, Images of Science: Essays on Realism and Empiricism
That is (ETA: also) from Hacking’s Representing and Intervening.
-- Reid Hastie & Robyn Dawes (Rational Choice in an Uncertain World)
The “Ulysses” reference is to the famous Ulysses pact in the Odyssey.
--Benjamin Franklin
SMBC comics on the relative proximity of excretory and reproductive outlets in humans.
Evo-devo (that is to say, actual real science) gives an even better account of that accident of evolutionary history. For simple sessile animals, reproduction often involves dumping quantities of spores or gametes into the environment. And what other system already dumps quantities of stuff into the environment...?
Okay, but why should the reproductive outlets be there too?
I agree connotationally, but the comic only answers half of the question.
I am a fan of SMBC, but the entire explanation is wrong. The events that led to the integration of reproductive and digestive systems happened long before a terrestrial existence of vertebrates, and certainly long before hands. To get a start on a real explanation you have to go back to early bilaterals:
http://www.leeds.ac.uk/chb/lectures/anatomy9.html
As near as I can tell it was about pipe reuse. But you can’t make a funny comic about that (or maybe you can?). Zach is a “bard”, not a “wizard.” He entertains.
Try carrying the fetus and giving birth from any other location. I suppose having the fun parts somewhere else than the reproductive dumping tube could be nice, but wouldn’t make any sense.
Consider the chicken, with its ingenious production line of eggs. Constant fertilization from a different orifice seems ideal, as (the source I just Googled suggests that) chickens have very short fertilization cycles. (They don’t have separate orifices. Poor cloacas.)
Since fertilization occurs at one end of a long tube, and birth occurs at the other, I wouldn’t be surprised if the optimal arrangement involved separate organs.
Natural selection also led us to breathe and eat through the same hole. Seriously???? This causes so many problems. Well, not enough problems for natural selection to change it, I guess.
Having two (three, technically) holes you can breathe through has its advantages. Ever had a nasty head cold that clogs your sinuses so bad you can’t breathe?
You still have just one pharynx, though.
Being able to smell what you’re chewing is a huge advantage. I suppose achieving that some other way could get pretty convoluted.
I’ve read horses can only breathe through their noses.
I’ve never heard this, but I have read and just re-checked, and apparently whales and dolphins have separate passages for breathing (connected to their blowholes) and eating, and thus cannot choke on food.
It’s all a matter of which fish we evolved from and what solution evolution came up with, in the development from that water-breathing creature to air breathers. It could have produced separate air and foodways, as it did for whales and dolphins. But the blind idiot has no foresight and it can’t get there from here any more.
Dolphins and whales have the same fish ancestors we do; they’re former terrestrial mammals who returned to the sea and share the same common ancestors of all mammals.
“a morally blind, fickle, and tightly shackled tinkerer” (1) who “should be in jail for child abuse and murder” (2)
(1) POWELL, Russell & BUCHANAN, Allen. “Breaking evolution’s chains: the prospect of deliberate genetic modification in humans.” In: SAVULESCU, J. & MEULEN, Rudd ter (orgs.) “Enhancing Human Capacities”. Wiley-Blackwell. 2011.
(2) BOSTROM, Nick. “In defense of posthuman dignity.” Bioethics, v. 19, n. 3, p. 202-214, 2005.
There is no escape from evolution (variation and selection).
Deliberate genetic selection is just more complicated evolution.
Sure there is. Organisms could, in theory, create perfect replicas without variation for selection to act on. Contrariwise, they could create new organisms depending on what they needed that would bear no relation to themselves and would not reproduce in kind (or at all).
If I could write an AI, the last thing I’d want is to make it reproduce with random variations. If I could genetically engineer myself or my children, I’d want to introduce deliberate changes and eliminate random ones. (Apart from some temporary exceptions like the random element in our current immune systems.)
I think you’re overusing the term “evolution”. If you let it include any kind of variation (deliberate design) and any kind of selection (deliberate intelligent selection), you can’t make any predictions that would hold for all “evolving” systems.
In which theory? I don’t think this is true if temperatures are above absolute zero, for example.
I suspect that you’re being too restrictive: it doesn’t seem like variation has to be blind, nor selection done by replication, for ‘evolution’ to be meaningful. Now, blind biological evolution and engineering design evolution will look different, but it seems reasonable to see an underlying connection between them.
True, you can’t create perfect physical copies or even keep a single object perfectly unchanged for long. But macro-scale systems designed to eliminate variance and not to let microscopic deviations affect their macro-scale behavior can, for practical purposes, be made unchanging. Especially given an intelligent self-repairing agent that fixes unavoidable damage over time.
So, what kind of statements are valid for all kinds of evolution?
The direction (and often magnitude) of expected change over time is generally predictable, for example.
Can you be more specific? What is the expected direction of change for all evolutionary processes?
In general, the entities undergoing evolution will look more like the complement of their environments as time goes on.
I’m sorry, I don’t understand. What is the “complement of the environment”?
Suppose a gazelle lives in a savannah; we should expect the gazelles to digest savannah grass, flee from cheetahs, be sexy to other gazelles, etc., and become that way if not already so. I think Dawkins has a good explanation of this somewhere, but I was unable to find it quickly, that genes are in some sense records of the ancestral environment.
Similarly, internet memes are in some sense a record of the interests of internet users, and car designs a record of the interests of car buyers and designers, and so on. Is that a clearer presentation?
It seems clear what you mean (though not why you called it the complement of the environment). But I still don’t see what’s common to all kinds of evolutions, so maybe I’m still misunderstanding.
It’s certainly true that any evolved object is a function of its environment and we can deduce features of the environment from looking at the object. But this is also true for any object that has a history of being influenced by its environment. A geologist looks at a stone and tells you how it was shaped by rain. An astronomer looks at a nebula and tells you how it was created by a supernova. “Being able to learn about a thing’s past environment from looking at its present shape” is so general that you must have meant something more than that, but what?
That’s basically what I meant, actually, with the inclusion of “looking at a thing’s present environment tells you about its likely future shapes.” I chose “complement” because it seemed like a better word than “mirror,” but I’m not sure it was the best choice, and think “record” might have been better.
What about AGI? Radical human enhancement? Computronium? Post-biological civilizations? All impossible?
Offhand, I think they’d all include variation and selection.
I’ve seen an argument that a nanotech organism with a reasonable level of error-correction could with high probability make error-free clones of itself until the heat death of the universe.
That assumes the lack of black swans. Not a very good assumption when we extrapolate things until the heat death of the universe.
True, but it would have to be an exceedingly black swan to result in evolutionary-like mutations rather than simple annihilation.
First, annihilation is good enough—a destroyed nanobot fails at making “error-free clones of itself until the heat death of the universe”.
Second, all you need to do is to screw up the error-correction mechanism, the rest will take care of itself naturally.
That seems plausible to me, but it’s still likely to be subject to selection because of competition for resources. Depending on its intelligence level and ethical structure, it might also be affected by arguments that it should limit its reproduction.
The point is there’d be no variation for evolution to select on.
Why do you think there’d only be one sort of nanotech organism to select on and/or that perfect self-replication is the best or only strategy?
Well, the organism would need to be preprogrammed to survive in whatever environment it might find itself in until then.
-Goro Shimura on Yutaka Taniyama
-- Peter Drucker The Effective Executive
So what about MWI?
-- Sam Starfall, FreeFall #1516
-- David Chapman
Saying that something is ineffable and saying that nothing we can say is meaningful without the exact same shared experience are rather different things. To use your own example, comparison is possible—so we can imperfectly describe chocolate in terms of sugar and (depending on the type) bitterness, even if our audience has never heard of chocolate.
Conveniently, this allows us to roughly fathom experiences that nobody has ever had. Playwrights, for example, set out to create an experience that does not yet exist and prompt actors to react to situations they have never lived through, and through their capability to generalize they can imperfectly communicate their ideas.
Skitter the bug girl on morality, consequentialism and metaethics in Worm, the online serial recommended by Eliezer for HPMoR withdrawal symptoms.
We’re also biased toward believing we’re in one of those circumstances when we’re not.
Yep, and the part after the quote alludes to that.
snip
Dupe.
Thanks. I just read the article, so I guess I was assuming it was new and wouldn’t have been quoted.
Why did you edit the text of the quote away rather than retracting the comment?
--Paul Graham
Dupe.
I can’t help but wondering if he’s overcompensating due to a certain incident.
So he’s one of those Fair Witnesses from Stranger in a Strange Land?
Heinlein’s Fair Witnesses show a rationality failure.
It is impossible to report things “just as they are” without imposing implicit interpretation.
All observation and all language depends on an implicit model.
As I recall, Fair Witnesses didn’t ever give probabilities, they talked in binary terms. Morris, on the other hand, sounds calibrated.
They never drew conclusions.
-- Mother Goose
Presumably, a wise implementation of this quote would consider a continuum of remedies, ranging from mild treatment of symptoms to vaccination against the possibility of ever contracting the ailment. Even if there is no cure for an ailment, there is still value in mitigating its negative effects.
--Paul Waldman Why Isn’t Everyone More Worried about Me? November 11, 2013
James A. Donald
Exact same argument. Does it sound equally persuasive to you?
I’d extend Eugene’s reply and point out that both the original and modified version of the sentence are observations. As such, it doesn’t matter that the two sentences are grammatically similar; it’s entirely possible that one is observed and the other is not. History has plenty of examples of people who are willing to do harm for a good cause and end up just doing harm; history does not have plenty of examples of people who are willing to cut people open to remove cancer and end up just cutting people open.
Also, the phrasing “to end malaria” isn’t analogous to “to remove cancer” because while the surgery only has a certain probability of working, the uncertainty in that probability is limited. We know the risks of surgery, we know how well surgery works to treat cancer, and so we can weigh those probabilities. When ending malaria (in this example), the claim that the experiment has so-and-so chance of ending malaria involves a lot more human judgment than the claim that surgery has so-and-so chance of removing cancer.
Yes, but keep in mind the danger of availability bias; when people are willing to do harm for a good cause, and end up doing more good than harm, we’re not so likely to hear about it. Knut Haukelid and his partners caused the death of eighteen civilians, and may thereby have saved several orders of magnitude more. How many people have heard of him? But failed acts of pragmatism become scandals.
Also, some people (such as Hitler and Stalin) are conventionally held up as examples of the evils of believing that ends justify means, but in fact disavowed utilitarianism just as strongly as their critics. To quote Yvain on the subject, “If we’re going to play the “pretend historical figures were utilitarian” game, it’s unfair to only apply it to the historical figures whose policies ended in disaster.”
We already have a situation where we can cause harm to innocent people for the general good. It’s called taxes.
Since I got modded down for that before, here’s a hopefully less controversial example: the penal system. If you decide that your society is going to have a penal system, you know (since the system isn’t perfect) that your system will inevitably punish innocent people. You can try to take measures to reduce that, but there’s no way you can eliminate it. Nobody would say we shouldn’t put a penal system into effect because it is wrong to harm innocent people for the greater good—even though harming innocent people for the greater good is exactly what it will do.
I don’t think anyone really objects to hurting innocent people for the greater good. The kind of scenarios that most people object to have other characteristics than just that and it may be worth figuring out what those are and why.
It seems to me that utilitarianism decides how to act based on what course of action benefits people the most; deciding who counts as people is not itself utilitarian or non-utilitarian.
And even ignoring that, Hitler and Stalin may be valuable as examples because they don’t resemble strict utilitarianism, but they do resemble utilitarianism as done by fallible humans. Actual humans who claim that the ends justify the means also try to downplay exactly how bad the end is, and their methods of downplaying that do resemble ideas of Hitler and Stalin.
Can you provide examples of this? In my experience, while utilitarianism done by fallible humans may be less desirable than utilitarianism as performed by ideal rationalists, the worst failures of judgment on an “ends justify the means” basis tend not to come from people actually proposing policies on a utilitarian basis, but from people who were not utilitarians whose policies are later held up as examples of what utilitarians would do, or from people who are not utilitarians proposing hypotheticals of their own as what policies utilitarianism would lead to.
Non-utilitarians in my experience generally point to dangers of a hypothetical “utilitarianism as implemented by someone much dumber or more discriminatory than I am,” which is why, for example, in Yvain’s Consequentialism FAQ, the objections he answered tended to be from people believing that utilitarians would engage in actions that those posing the objections could see would lead to bad consequences.
Utilitarianism as practiced by fallible humans would certainly have its failings, but there are also points of policy where it probably offers some very substantial benefits relative to our current norms, and it’s disingenuous to focus only on the negative or pretend that humans are dumber than they actually are when it comes to making utilitarian judgments.
Any example I could give you of humans fallibly being utilitarian you could equally well describe as an example of humans not being utilitarian at all. After all, that’s what “fallible” means—”doing X incorrectly” is a type of “not doing X”.
If you want an example of humans doing something close to utilitarian (which is all you’re going to get, given how the word “fallible” works), Stalin himself is an example. Just about everything he did was described as being for the greater good, because building the perfect Soviet society is for the greater good and the harm done to someone by giving him a show trial and executing him is necessary to build that society. Of course, you could explain how Stalin wasn’t really utilitarian, but if he was really utilitarian, he wouldn’t be fallibly utilitarian.
I would accept any figure as “fallibly utilitarian” if they endorsed utilitarian ethics and claimed to be attempting to follow it, but Stalin did not do so, and while his actions might be interpreted as a fallible attempt at utilitarianism, his own pronouncements don’t particularly invite such interpretation.
That doesn’t actually contain any of his own pronouncements.
I managed to Google this for Hitler: “It is thus necessary that the individual should come to realize that his own ego is of no importance in comparison with the existence of his nation; that the position of the individual ego is conditioned solely by the interests of the nation as a whole … that above all the unity of a nation’s spirit and will are worth far more than the freedom of the spirit and will of an individual. …. This state of mind, which subordinates the interests of the ego to the conservation of the community, is really the first premise for every truly human culture …. we understand only the individual’s capacity to make sacrifices for the community, for his fellow man.”
This may not be technically utilitarian, but it is an example of Hitler endorsing the idea that some people should be harmed to benefit others (in this case to benefit society), and is an example of how that doesn’t go well.
People have been proposing that some people should be harmed to benefit others long before anyone proposed the idea of utilitarianism; usually it was justified because the people being harmed are outgroup members, or simply Less Important compared to the people being helped, or to subordinate individual identity to the group identity.
Sometimes, this doesn’t go well. In some cases, such as, by your own example, in the case of taxes or a justice system, it goes much better than a refusal to harm some people to help others. Utilitarianism is a construction for formalizing under what conditions it is or is not a good idea to attempt such tradeoffs. Calling the purges of Hitler or Stalin failings of Utilitarianism is about as fair as calling every successful government intervention or institution ever, all of which after all involve sacrificing the public’s resources for a common good, successes of Utilitarianism.
Another way that a penal system is extremely likely to harm innocents is that the imprisoned person may have been supplying a net benefit to their associates in non-criminal ways, and they can’t continue to supply those benefits while in prison. This is especially likely for some of the children of prisoners, even if the prisoners were guilty.
That’s indirect harm. Non-utilitarians don’t have to care about it (and certainly not care about it as much as utilitarians).
I am unsure how to map decisions under uncertainty to evidence about values as you do here.
A still-less-controversial illustration: I am shown two envelopes, and I have very high confidence that there’s a $100 bill in exactly one of those envelopes. I am offered the chance to pay $10 for one of those envelopes, chosen at random; I estimate the EV of that chance at $50, so I buy it. I am then (before “my” envelope is chosen) offered the chance to pay another $10 for the other envelope, this chance to be revoked once the first envelope is selected. For similar reasons I buy that too.
I am now extremely confident that I’ve spent $10 for an empty envelope… and I endorse that choice even under reflection. But it seems ridiculous to conclude from this that I endorse spending $10 for an empty envelope. Something like that is true, yes, but whatever it is needs to be stated much more precisely to avoid being actively deceptive.
It seems to me that if I punish a hundred people who have been convicted of a crime, even though I’m confident that at least some of those people are innocent, I’m in a somewhat analogous situation to paying $10 for an empty envelope… and concluding that I endorse punishing innocent people seems equally ridiculous. Something like that is true, yes, but whatever it is needs to be stated much more precisely to avoid being actively deceptive.
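(For concreteness, here is the expected-value arithmetic of the envelope illustration as a tiny Python sketch; the numbers are just the ones given above, nothing else is assumed.)

P_FULL = 0.5   # chance any given envelope is the full one
PRIZE = 100    # dollars in the full envelope
COST = 10      # price of each envelope

ev_single_purchase = P_FULL * PRIZE - COST   # 0.5 * 100 - 10 = 40, so each purchase is +EV
guaranteed_net = PRIZE - 2 * COST            # owning both envelopes nets 100 - 20 = 80
print(ev_single_purchase, guaranteed_net)

Each purchase is positive in expectation, and buying both locks in a profit, even though one of the two $10 payments necessarily ends up having bought an empty envelope.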
In your example, you are presenting “I think you should spend $10 for an empty envelope” as a separate activity, and you are being misleading because you are not putting it into context and saying “I think you should spend $10 for an empty envelope, if this means you can get a full one”.
With the justice system example, I am presenting the example in context—that is, I am not just saying “I think you should harm innocent people”, I am saying “I think you should harm innocent people, if other people are helped more”. It’s the in-context version of the statement that I am presenting, not the out-of-context version.
(nods) Yes, that makes sense.
Thanks.
I (and James Donald) agree. Remember that the traditional ethical laws this is based on also have traditional exceptions, e.g., for punishment and war, and additional laws governing when and how those exceptions apply. The thing to remember is that you are not allowed to add to the list of exceptions as you see fit, nor are you allowed to play semantic games to expand them. In particular, no “war on poverty” or “war on cancer”; even “war on terror” is pushing it.
I think you’re misunderstanding me. We all know that most ethical systems think it’s okay to punish criminals. I’m not referring to the fact that criminals are punished, but the fact that when we try to punish criminals we will, since no system is perfect, inevitably end up punishing some innocent people as well. Those people did nothing wrong, yet we are hurting them, and for the greater good.
This is no different from the fact that it’s okay to fly planes even though some of them will inevitably crash.
Note that if a judge punishes someone who turns out to be innocent, we believe he should feel guilty about this rather than simply shrugging and saying “mistakes will happen”. Similarly, if an engineer makes a mistake that causes a plane to crash.
Just like not all people punished are guilty, not all innocent people punished are discovered; there’s always going to be a certain residue of innocent people who are punished, but not discovered, with no guilty judges or anything else to make up for it. Hurting such innocent people is nevertheless an accepted part of having a penal system.
Sure. So we’re sometimes willing to do some harm to innocents for the greater good. But if we were utilitarians we would always be thus willing. Civilized societies don’t torture or execute their criminals, and wouldn’t do so even if it was for the greater good.
The US has executions yet is otherwise considered civilized. So you must be claiming that the US is not civilized because it has executions, which makes it into a “no true Scotsman” argument.
Nope; there are other things about the US (poor public healthcare / higher education, high religiosity, weak labour laws...) that move it away from the empirical cluster of societies I’m thinking of. I made a lazy and politicized choice of terminology but it’s a natural category, not a funny boundary I’m drawing arbitrarily.
That seems like a pretty low bar for “not civilized”, which is a seriously bad characterization, and some of those are downright bizarre. It reads as though you took a laundry list of things you don’t like about the US and decided that that’s your definition of “not civilized”. Would it make any sense if I pointed out that Europe tends to have weaker freedom of speech than the US, and a higher tendency towards anti-Semitism, and higher taxes, therefore Europe is “not civilized”?
If “uncivilized” means anything it has to mean something other than “has policies I hate”. And defining it to mean “has policies that I hate, and which hurt people” is no good—everyone thinks that policies that they hate hurt people, so that collapses down to just “has policies I hate”.
BTW, do you consider Japan to be uncivilized? It has the death penalty.
(My own theory on how Japan manages to keep the death penalty is that it’s easy for activists in Europe to connect to activists in nearby countries where some people are bilingual, but hard to connect with activists on the other side of the world who have no languages in common.)
It was a bad choice of word. But it seems like you agree that there are distinct empirical clusters for Europe-like and America-like (and Japan resides somewhere between the two—like most empirical classifications it’s imperfect, but still useful).
I think the case can be made that Europe is a better place to live (and that US states that practice executions are worse than those that don’t). But in any case this is all beside the point; the fact that there is this kind of resistance to the death penalty, even if only in Europe, demonstrates that humans are not naturally utilitarian.
Or that the death penalty’s utilitarian merits are debatable. Or that in some societies, ‘natural’ utilitarian tendencies are subverted/modified/removed/replaced by the cultural environment.
I agree that if you try to list the differences between Europe and the US, you can come up with a list. However, many of the items on the list are related to each other mostly by historical accident. Europe lacks the death penalty because once activists get a foothold in one place, that makes it easier for them to get a foothold in other culturally and geographically close places. Not because people who like high taxes necessarily have to oppose the death penalty. The state of Europe right now is very path-dependent.
Or that Europe is run by an unrepresentative subset of humans.
Or that humans are not generally naturally anything to the exclusion of everything else. (Of course, in the limit, everything is utilitarian—people in Europe may get displeasure from using the death penalty the same way they get displeasure from bad-tasting food. Is someone who avoids bad tasting food for food that costs more a utilitarian, because pleasure from food taste is a form of utilon?)
Ok, we actually disagree then. I think that progress in the progressive sense is real, that most of today’s politics is a product of historical/technological/etc. forces and more-or-less inevitable. (If I had to guess why the US is different from Europe I’d say it’s largely an artifact of which groups of people originally settled there, and I expect the US to become more European in the future). I predict that even in legislatures far away from Europe we’d observe a correlation between support for high taxation and opposition to the death penalty, and that more generally if we did a NOMINATE-style analysis we’d find that positions on many issues were largely explained by a single axis of variation, and the list of things I mentioned would be at one end of it.
Sure. I’m not arguing that we’re naturally virtue ethicists or anything. But I don’t think utilitarianism is an adequate description of intuitive human morality (even American morality). Perhaps the fat man in the trolley problem is a better example; while there are no doubt many clever arguments that people are being utilitarian via some convoluted route, it’s not the result we would naturally predict utilitarian thinkers to come to.
I understand utilitarian to mean someone who tries to maximize some pseudo-economically consistent objective function of the external world. If someone assigns different values to actions that have the same result but get there by different paths, or evaluates a future state differently depending on the current state of the world, or believes that the same action could have a different moral value depending solely on the internal state of the person performing it, then they’re not a utilitarian.
Yep. I’ve heard similar speculations regarding surgeons before. Fortunately nowadays we can take appropriate measures to compensate (surgeons are highly paid and closely monitored; we take a lot of care that medicine be evidence-based; the rationale behind specific medical interventions is carefully documented and checked multiple times; we require medical professionals to train for longer than any other profession). But note that for most of human history, the interventions performed by almost all medical professionals were literally worse than nothing.
The second sentence is an empirical observation that is clearly false in your example.
Utilitarianism isn’t a description of human moral processing, it’s a proposal for how to improve it.
One problem is that if we, say, start admiring people for acting in “more utilitarian” ways, what we may actually be selecting for is psychopathy.
Agreed. Squicky dilemmas designed to showcase utilitarianism are not generally found in real life (as far as I know). And a human probably couldn’t be trusted to make a sound judgement call even if one were found. Running on untrusted hardware and such.
Ah- and this is the point of the quote. Oh, I like that.
Our nature is not purely utilitarian, but I wouldn’t go so far as to say that utilitarianism is not in our nature. There are things we avoid doing regardless of how they advance our goals, but most of what we do is to accomplish goals. If you can’t understand that there are things you need to do to eat, then you won’t eat.
Strawman. Does any moral system anyone’s ever proposed say we should never attempt to accomplish goals?
I agree that utilitarianism is “not in our nature,” but what has this to do with rationality?
Utilitarianism is pretty fundamental around here. Not everyone here agrees with it, but pretty much all ethical discussions here take it as a precondition for even having a discussion. The assertion that we are not, cannot be, and never will be utilitarians is therefore very relevant.
If you are suggesting by that emphasis on “nature” that we might act to change our nature and remake ourselves into better utilitarians, I would ask, if we are in fact not utilitarians, why should we make ourselves so? Infatuation with the tidiness of the VNM theorem?
We us::should try to be as utilitarian as we can because our intuitive morality is kind of consequentialist, so we care about how the world actually ends up, and utilitarianism helps us win.
If we ever pass up a chance to literally hold one child’s face to a fire and end malaria, we have screwed up. We are not getting what we care about most.
It’s not the “tidiness” of the VNM axioms in any aesthetic sense that is important, it’s the not-getting-money-pumped. Not being able to be money-pumped is important not because getting money-pumped is stupid and we can’t be stupid, but because we need to use our money on useful stuff.
In another comment James A. Donald suggests a way torturing children could actually help cure malaria:
Would you be willing to endorse this proposal? If not, why not?
If I’m not fighting the hypothetical, yes I would.
If I encountered someone claiming that in the messy real world, then I’d run the numbers VERY carefully and most likely conclude that the probability of him actually telling the truth and being sane is infinitesimal. Specifically, of those claims, the one that it’d be easier to kidnap someone than to find a volunteer (say, an adult willing to do it in exchange for giving their family a large sum of money) sounds highly implausible.
What’s your opinion of doing it Tuskegee-style, rather than kidnapping them or getting volunteers? (One could believe that there might be a systematic difference between people who volunteer and the general population, for example.)
In general, given ethical norms as they currently exist, rather than in a hypothetical universe where everyone is a strict utilitarian, I think the expected returns on such an experiment are unlikely to be worth the reputational costs.
The Tuskegee experiment may have produced some useful data, but it certainly didn’t produce returns on the scale of reducing global syphilis incidence to zero. Likewise, even extensive experimentation on abducted children is unlikely to do so for malaria. The Tuskegee experiment, though, is still seen as a black mark on the reputation of medical researchers and the government; I’ve encountered people who, having heard of it, genuinely believed that it, rather than the extremely stringent standards that currently exist for publishable studies, was a more accurate description of the behavior of present researchers. That sort of thing isn’t easy to escape.
Any effective utilitarian must account for the fact that we’re operating in a world which is extremely unforgiving of behavior such as cutting up a healthy hospital visitor to save several in need of organ transplants, and condition their behavior on that knowledge.
Here’s one with actual information gained: Imperial Japanese experimentation on frostbite
The cost of this scientific breakthrough was borne by those seized for medical experiments. They were taken outside and left with exposed arms, periodically drenched with water, until a guard decided that frostbite had set in. Testimony from a Japanese officer said this was determined after the “frozen arms, when struck with a short stick, emitted a sound resembling that which a board gives when it is struck.”
I don’t get the impression that those experiments destroyed a lot of trust—nothing compared to the rape of Nanking or Japanese treatment of American prisoners of war.
However, it might be worth noting that that sort of experimentation doesn’t seem to happen to people who are affiliated with the scientists or the government.
Logically, people could volunteer for such experiments and get the same respect that soldiers do, but I don’t know of any real-world examples.
It’s hard for experiments to destroy trust when those doing the experiments aren’t trusted anyway because they do other things that are as bad (and often on a larger scale).
I was going to say that I didn’t think that medical researchers had ever solicited volunteers for experiments which are near certain to produce such traumatic effects, but on second thought, I do recall that some of the early research on the effects of decompression (as experienced by divers) was done by a scientist who solicited volunteers to be subjected to decompression sickness. I believe that some research on the effects of dramatic deceleration was also done similarly.
I have heard of someone who was trying to determine the biomechanics of crucifixion, what part of the forearm the nail goes through and whether suffocation is actually the main cause of death and so on, who ran some initial tests with medical cadavers, and then with tied-up volunteers, some of whom were disappointed that they weren’t going to have actual nails driven through their wrists. Are extreme masochists under-represented on medical ethics boards?
Actual medical conspiracies, such as the Tuskegee syphilis experiment, probably contribute to public credence in medical conspiracy theories, such as anti-vax or HIV-AIDS denialism, which have a directly detrimental effect on public health.
Probably.
In a culture of ideal rationalists, you might be better off having a government-run lottery where people were randomly selected for participation in medical experiments, with participation upon selection being mandatory for any experiment, whatever its effects on the participants, and experiments being approved only if their expected returns were more valuable than any negative effects (including loss of time) imposed on the participants. But we’re a species which is instinctively more afraid of sharks than stairs, so for human beings this probably isn’t a good recipe for social harmony.
So would you be in favor of educating people why things like the Tuskegee experiment or human experimentation on abducted children are good things?
Not directly, because I don’t think it would be likely to work. I do think that people should be educated in practical applications of utilitarianism (for instance, the importance of efficiency in charity,) but I don’t think that this would be likely to result in widespread approval of such practices.
In the specific case of the Tuskegee experiment, the methodology was not good, and given that treatments were already available, the expected return was not that great, so it’s not a very good example from which to generalize the potential value of studies which would be considered exploitative of the test subjects.
That disease already had a treatment, so the experiment was not going to save the millions suffering; they could already be saved. Also, those scientists didn’t have good enough methodology to have gotten anything useful out of it in either case. There’s a general air of incompetence surrounding the whole thing that worries me more than the morality.
As I said; before doing anything like this you have to run your numbers VERY carefully. The probability of any given study solving a disease on its own is extremely small, and there are all sorts of other practical problems. That’s the thing; utilitarianism is correct, and not answering according to it is fighting the hypothetical. But in cases like this perhaps you should fight the hypothetical, since you’re using specific historical examples that very clearly did NOT have positive utility and did NOT run the numbers.
It’s a fact that a specific type of utilitarianism is the only thing that makes sense if you know the math. It’s also a fact that there are many ifs and buts that make human non-utilitarian moral intuition a heuristic far more reliable for actually achieving the greatest utility than trying to run the numbers yourself, in the vast majority of real-world cases. Finally, it’s a fact that most things done in the name of ANY moral system are actually bullshit excuses.
http://en.wikipedia.org/wiki/Tuskegee_syphilis_experiment
What do you think of that utilitarian calculation? I’m not sure what I think of it.
It seems like either (1) Rivers was deceived, or (2) she was in some other way unaware that there was already an effective cure for syphilis which was not going to be given to the experimental subjects, or (3) the other options available to these people were so wretched that they were worse than having syphilis left untreated.
In cases 1 and 2, it doesn’t really matter what we think of her calculations; if you’re fed sufficiently wrong information then correct algorithms can lead you to terrible decisions. In case 3, maybe Rivers really didn’t have anything better to do—but only because other circumstances left the victims of this thing in an extraordinarily terrible position to begin with. (In much the same way as sawing off your own healthy left arm can be the best thing to do—if someone is pointing a gun at your head and will definitely kill you if you don’t. That doesn’t say much about the merits of self-amputation in less ridiculous situations.)
I find #3 very implausible, for what it’s worth.
(Now, if the statement were that Rivers believed that the benefits to the community outweighed the risks, and indeed the overt harm, to the subjects of the experiment, that would be more directly to the point. But that’s not what the article says.)
Or (4), she was led to believe, either explicitly or implicitly, that her career and livelihood would be in jeopardy if she did not participate—thus motivating her to subconsciously sabotage her own utility calculations and then convince herself that the sabotaged calculations were valid.
But that might still matter. It may be that utilitarianism produces the best results given no bad information, but something else, like “never permit experimentation without informed consent” would produce better results (on the average) in a world that contains bad information. Especially since whether the latter produces better results will depend on the frequency and nature of the bad information—the more the bad information encourages excess experimentation, the worse utilitarianism comes out in the comparison.
But a good utilitarian will certainly take into account the likelihood of bad information and act appropriately. Hence the great utilitarian Mill’s advocacy of minimal interference in people’s lives in On Liberty, largely on the basis of the ways that ubiquitous bad information will make well-intentioned interference backfire often enough to make it a lower-expected-utility strategy in a very wide range of cases.
A competent utilitarian might be able to take into account the limitations of noisy information, maybe even in some way more useful than passivity. That’s not the same class of problem as information which has been deliberately and systematically corrupted by an actual conspiracy in order to lead the utilitarian decisionmaker to the conspiracy’s preferred conclusion.
The cure was discovered after the experiment had been going on for eight years, which complicates matters. At this point, I think her best strategy would have been to arrange for the men to find out about the cure in some way which can’t be traced back to her.
She may have believed that the men would have died more quickly of poverty if they hadn’t been part of the experiment.
Which one? The presumed altruistic one or the real-life one (which I think included the utility of having a job, the readiness to disobey authority, etc.)?
The altruistic one, mostly.
Endorse? You mean, publicly, not on LessWrong, where doing so will get me much more than downvotes, and still have zero chance of making it actually happen? Of course not, but that has nothing to do with whether it’s a good idea.
I meant “endorse” in the sense that, unlike the Milgram experiment, there is no authority figure to take responsibility on your behalf.
Do you think it’s a good idea?
If it will actually work, and there are no significant bad consequences we’re missing (significant meaning at least on the scale of malaria being cured faster), or there are significant bad consequences but they’re balanced out by significant good consequences we’re missing, then yes.
The question is not “would this be a net benefit” (and it probably would, as much as I cringe from it). The question is, are there no better options?
Such as? Experimenting on animals? That will probably make progress slower, and think about all the people who would die from malaria in the meantime.
Yes. How many more? Would experimenting on little girls actually help that much? Also consider that many people consider a child’s life more valuable than an adult’s; that even in a world where you would not have to kidnap girls, evade legal problems, and deal with psychological costs to the scientists, caring for little humans is significantly more expensive than caring for little mice; that said kidnapping, legal, and psychological costs do exist; and that you could instead spend that money on mosquito nets and the like and save lives that way...
The answer is not obviously biased towards “experiment on little girls.” In fact, I’d say it’s still biased towards “experiment on mice.” Morality isn’t like physics; the answer doesn’t always add up to normality, but a whole lot of the time it does.
...
So your answer is that in fact it would not work. That is a reasonable response to an outrageous hypothetical. Yet James A. Donald suggested a realistic scenario, and beside it, the arguments you come up with look rather weak.
Given the millions killed by malaria and at most thousands of experimental subjects, it takes a heavy thumb on the scales of this argument to make the utilitarian calculation come out against.
This is a get-out-of-utilitarianism-free card. A real utilitarian simply chooses the action of maximum utility. He would only pay a psychological cost for not doing that. When all are utilitarians the laws will also be utilitarian, and an evaluation of utility will be the sole criterion applied by the courts.
You are not a utilitarian. Neither is anyone else. This is why there would be psychological costs and why there are legal obstacles. You feel obliged to pretend to be a utilitarian, so you justify your non-utilitarian repugnance by putting it into the utilitarian scales.
But not any more expensive than caring for chimpanzees. Where, of course, “care for” does not mean “care for”, but means “keep sufficiently alive for experimental purposes”.
This looks like motivated reasoning. The motivation, to not torture little children, is admirable. But it is misapplied.
Can you expand on what you see as the differences?
No, seriously. I’ve read the original comment, James A. Donald does not support his claim.
This is granted. References to small mice were silly and are now being replaced by “small chimpanzees.” However...
This is not the calculation being made. Using your numbers, experimenting on little girls needs to be at least 1.001 times as effective as experimenting on chimpanzees or mice to be worthwhile (because then you save an extra thousand lives for your thousand girls sacrificed.) It’s not a flat “little girls versus millions of malaria deaths.”
This is, quite frankly, not clear to me, and I’d want to call in an actual medical researcher to clarify. Doubly so, with artificial human organs becoming more and more possible (such organs are obviously significantly cheaper than humans.)
Actually, I was interpreting the hypothetical as “utilitarian government in our world.” But fine, least convenient possible world and all that. That’s why I set the non-society costs aside from the rest.
Honestly, this is probably true—case in point, I would rather not write a similar post from the opposite side. That being said, looking through my arguments, most of them hinge on the implausibility of human experimentation really being all that more effective compared to chimpanzee and artificial organ experimentation.
The physics calculations around us have already been done perfectly. If, when we try to emulate them with our theories, we get something abnormal, it means our calculations are wrong and we need to either fix the calculation or the model. When we’ve done it all right, it should all add up to normality.
Our current morality, on the other hand, is a thing created over a few thousand years by society as a whole, that occasionally generates things like slavery. It is not guaranteed to already be perfectly calculated, and if our calculations turn out something abnormal, it could mean that either our calculations or the world is wrong.
Point taken.
Well, yes. I doubt that JAD has particular expertise in malarial research, I don’t and neither do you. To know whether a malarial research programme would benefit scientifically from a supply of humans to experiment on with no more restraint than we use with chimpanzees, one would have to ask someone with that expertise. But I think the hypothesis prima facie plausible enough to conduct the hypothetical argument, in a way which merely saying “suppose you could save millions of lives by torturing some children” is not.
After all, all medical interventions intended for humans must at some point be tested on humans, or we don’t really know what they do in humans. At present, human testing is generally the last phase undertaken. That’s partly because humans are more expensive than test-tubes or mice. (I’m not sure how they compare with chimpanzees, given the prices that poor people in some parts of the world sell their children for.) But it is also partly because of the ethical problems of involving humans earlier.
Note also that getting humans to experiment on by buying them from poor third world parents is generally frowned upon.
Well, given that more than 1 in 1000 drugs that look promising in animals fail human trials, I’d say that is a ridiculously low bar to pass.
How many drugs that look promising in one human trial fail to pass later human trials?
If it would result in a timely cure for malaria which would result in the disease’s global eradication or near-eradication, I would say that it would be worth kidnapping a few thousand children. But not only would a world where you could get away with doing so differ from our own in some very significant ways, I honestly doubt that a few thousand captive test subjects constitute a decisive and currently limiting factor in the progress of the research.
Oh wait, we’re talking about an entire society that’s utilitarian and rational. In that case I’m (coordinating with everyone else via Aumann agreement) just dedicating the entire global population to a monstrous machine for maximally efficient FAI research, where 99% of people are suffering beyond comprehension with no regard for their own well-being in order to support a few elite researchers as they dedicate literally every second of their lives to thinking at maximal efficiency while pumped full of nootropics that’ll kill them in a few years.
This particular proposal? No.
But mainly because we already have the tech to effectively cure malaria; it’s called “DDT” and the only reason we aren’t using it now is a lack of political will to challenge the environmental movement. If we lived in the Donaldverse where this proposal could be taken seriously, it wouldn’t be hard to get a widespread mosquito eradication movement started; after all, sentimental concerns are the main reason we’re handicapping ourselves here in the first place.
In general though, I think human experimentation does have merits. So much of what we know about our biology, especially the biology of the brain, comes from examining the victims of rare mutations, diseases, or accidents which impaired the functioning of a specific chemical pathway or tissue. If we could do organized knockout studies there is a good chance that we could gain a lot of knowledge which otherwise might take decades to uncover. But like a lot of other interesting ideas, the Nazis kind of messed this one up for the rest of us; there’s really no chance of this sort of thing being allowed in the current political climate, so speculating about it is idle almost by definition.
DDT is widely used in the third world right now
DDT resistance in mosquitoes is rampant due to overuse
Current WHO regulations specify not using it where resistance is observed. Hardly the sort of regulation we have against DDT in the US (where malaria is not really a problem)
That is backwards. It is not because the Nazis did it that experimenting on non-consenting human subjects is considered repugnant. It is because it is repugnant, that the Nazis are condemned for doing it.
Is that an expression of regret for lost possibilities? There is no chance of this sort of thing being allowed in any non-evil political climate.
While that may be true, the catch may lie in finding “non-evil” political climate.
Here’s what has been happening in reality in the twenty-first century: …after 9/11, health professionals working with the military and intelligence services “designed and participated in cruel, inhumane and degrading treatment and torture of detainees”. Medical professionals were in effect told that their ethical mantra “first do no harm” did not apply, because they were not treating people who were ill. (Link)
Would it were that this were so.
It’s more a matter of what evil means. If it is allowed, that’s worth a good many points in the evil column of the report card.
It has certainly happened in climates that were not the most evil we know of, but that case was enabled by the general lack of human regard for the class of people experimented on.
Yes, agreed. I’m not sure I know what “evil” means, but I’m fairly sympathetic to the view that, as the saying goes, good folk can allow evil to thrive by doing nothing.
Then risk being the later man, while taking as many precautions as possible to preserve your intent.
Vox Day
I upvoted this comment, but I want to add an important caveat. Whether, and how much, you trust your own judgment over that of an expert should depend at least in part on the degree to which you think your situation is unusual.
The IT guy wants you to shut up and go away, but (if in fact he is an expert and not a trained monkey reading a script) he’s not going to spout random nonsense at you just to get you to leave. He’s going to tell you things relevant to what is, in his experience, the usual situation.
Consider well whether you’re sure your problem is some special snowflake. The IT guy has seen a lot of issues. Sometimes he can, before you finish your first sentence, know exactly what your problem is and how to fix it, and if he sounds bored when he tells you “just reboot it”, that doesn’t mean that he’s wrong. If it costs you little, try his advice first.
The expert also is better equipped to discern whether a situation is unusual, because the expert has seen more.
To the non-expert, something really mysterious and weird must be going on to explain these puzzling symptoms. Computer A can ping computer B, but B can’t ping A? That’s so strange! After all, ping is supposed to test whether two computers can talk to each other on the network, right? How could it possibly work one way but not the other? Is something wrong with the switch? Is one of the network cards broken? Is it a virus?!
To the expert, that’s not unusual at all. One computer has the wrong subnet mask set. Almost every time. Like, that’s 20 to 100 times more likely than a hardware problem or something broken in the network infrastructure, and it can be checked in seconds. And while the machine may have a virus too, that’s not what causes these symptoms.
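(If you want to see the mask-mismatch diagnosis concretely, here is a small illustrative check using Python’s standard ipaddress module; the addresses and the /25 mask are made up purely for the example.)

import ipaddress

def mutual_subnet_view(a_cfg, b_cfg):
    """Report whether each host, under its own configured mask, considers the other local."""
    net_a = ipaddress.ip_interface(a_cfg).network
    net_b = ipaddress.ip_interface(b_cfg).network
    a_addr = ipaddress.ip_interface(a_cfg).ip
    b_addr = ipaddress.ip_interface(b_cfg).ip
    return (b_addr in net_a), (a_addr in net_b)

# Hypothetical setup: A is configured /24, B is misconfigured /25.
print(mutual_subnet_view("192.168.1.10/24", "192.168.1.200/25"))
# (True, False): A treats B as on-link, but B thinks A is off its subnet,
# so B hands its replies to a gateway instead of answering directly.

Whether that mismatch produces exactly the one-way ping in the example depends on the rest of the network, but checking the masks takes seconds, which is the expert’s point.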
Very true as well, though I will add the counter-caveat that the expert is usually biased toward concluding that your situation is not unusual. This is why many “tech support horror stories” have a bit where the narrator goes ”… and then, when they finally got it through their heads that yes, I had tried restarting it five times, and no, I didn’t have the wrong settings …”
I suspect there are a couple of things going on there.
One, it’s important to distinguish consulting an expert from consulting a tech support script. Most of the time when you call up tech support, you’re talking to a human being, but not an expert. You’re talking to a person whose job it is to execute a script in order to relieve the experts from dealing with the common cases.
(And yes, it’s in the interest of a consumer tech-support department to spend as little money on expensive experts as they can get away with — which is why when a Windows box has gotten laggy, they say “reboot it” and not “pop open the task manager and see what’s using 100% of your CPU”. They don’t want to diagnose the long-term problem (your Scrabble game that you left running in the background has a bug that makes it busy-wait if it’s back there for 26 hours); they want to make your computer work now and get you off the line. That’s a different case from, for instance, an institutional IT department (at, say, a university) that has to maintain a passable reputation with the faculty who actually care about getting their research done.)
Two, there’s narrative bias. The much-more-numerous cases where the simple fix works don’t make for good “horror stories”, so you don’t hear them retold. Especially the ones where the poor user is now embarrassed because they have to admit they were outguessed by a tech-support script after giving the support tech a hard time.
(Yeah, I like good tech support too; that’s part of why I use the local awesome option (Sonic.net) for my ISP instead of Comcast. I can call them up and talk to someone who actually knows what ARP means. But sometimes the problem does go away for months when you power-cycle the damn modem.)
Well, we don’t know that they’re actually biased in this direction until we know how their assessment of the probability that the usual thing is going on compares to the actual probability that the usual thing is going on.
Yes, there are plenty of “tech support horror stories” where the consultant has a hard time catching on to the fact that the complainant is not dealing with a usual or trivial problem, but for every one of those, there tends to be a slew of horror stories from the other end, of people getting completely wound up over something that the consultant can solve trivially, and failing to follow the simple advice needed to do so.
The consultants could be very well calibrated, and still occasionally be dramatically wrong. Beware availability bias.
Note that a case where the tech tells you that it’s usual problem X, and you deny this, asserting that your thing is a special snowflake, is NOT a case of the opposite bias. It’s just a case of correct identification.
The opposite bias would be if the usual thing was going on, but the tech thought that it was some unusual thing.
Other IT-experienced people are welcome to correct me on this, but in my experience, the latter almost never happens, and when it does, it’s mostly with newbie techies, recent hires/trainees, etc.
This makes it substantially more likely that tech people have “usual problem bias” than that they have “unusual problem bias”, and that they are well-calibrated. The usual problem bias could be small or it could be large, but available evidence is fairly clear that it exists.
The point I was making was not that tech support is likely to have an “unusual problem bias,” but that being correctly calibrated with respect to usual and unusual problems will tend to appear like a “usual problem bias” when you examine in isolation the cases where they’re wrong, because you would tend to observe cases where they need a significant amount of evidence to persuade them of the presence of an unusual problem, but not of a usual one.
If you examine cases where they’re right, you may find a large number of cases where the customer insists that the problem is not addressed by the tech support’s script, only to be proven wrong; these often appear in the horror stories posted by tech support. Thus, tech support may to some extent be rationally discounting evidence favoring unusual problems.
This brings up another related problem, namely how often supposed “experts” actually aren’t.
GK Chesterton
I don’t really like quotes like this. It’s not that it’s not true, and it’s not that no one commits the error it warns against.
It’s that no one who is blind to fallacies due to popularity is going to notice their mistake and change—it’s too easy to agree with the quote without firing up the process that would lead you to making the mistake.
Good quotes will make it easy to put yourself in either position so that you can mentally bridge the two. If you’re thinking “I can’t imagine how they might make that mistake!”, then you won’t recognize that thought process when you go through it yourself.
Harrap’s First Law
“The enemy of my enemy has their own relationship with me.”
Yup.
Maxim 29
--Paul
Found here.
Publilius Syrus
I’m not sure that’s true in general. I can think of situations where the prudent course of action is to act as fast as possible. For instance, if you accidentally set yourself on fire at the cooker, then acting prudently means you will stop, drop and roll, and do it hastily.
The more I look at this, the less sure I am what “hastily” means.
More precisely… if I understand “hastily” to mean, roughly, “more rapidly/sloppily than prudence dictates”, then this statement is trivially true. If I assume the statement is nontrivial, I’m not sure how to test whether something is being done hastily.
Trivial statements are often useful as reminders of facts, particularly when those facts are tradeoffs we would rather not have to face.
A good observation. I was going to suggest throwing a live hand grenade out of your tent as a counter-example—but you don’t want to do it so hastily that it misses the opening, bounces back, and lands in your lap.
James Argo
Would you mind unpacking that a bit? I don’t understand it, and neither DuckDuckGo nor Google turns up anything but people on Twitter and Hacker News quoting the same sentence.
puts "quantum" + " consciousness"
=> quantum consciousness
Here ignorance is acting as the “+” operator, binding the two strings into one. On reflection, it’s not as clever as I thought it was when I read it on Hacker News. Rationality quotes should be scrutable. I’ll retract.
Eric Raymond
I don’t get it.
Eric Raymond
Google Is My Friend.
Nassim Taleb
What’s the difference between “based on computation of the odds” and “based on some model”?
Taleb is doing some handwaving here.
“Some model” in this context is just the assumption of a specific probability distribution. So if, for example, you believe that the observation values are normally distributed with a mean of 0 and a standard deviation of 1, the chance of seeing a value greater than 3 (a “three-sigma value”) is 0.13%. The chance of seeing a value greater than 6 (a “six-sigma value”) is 9.87e-10. E.g. if your observations are financial daily returns, you effectively should never ever see a six-sigma value. The issue is that in practice you do see such values, pretty often, too.
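(A quick check of those tail figures, using nothing beyond Python’s math module; the standard-normal assumption is the one stated above.)

import math

def normal_upper_tail(sigma):
    """P(X > sigma) for a standard normal X, via the complementary error function."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

print(f"{normal_upper_tail(3):.2%}")   # ~0.13%
print(f"{normal_upper_tail(6):.2e}")   # ~9.87e-10

Of course, the numbers are only as good as the distributional assumption behind them, which is exactly the point at issue.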
The problem with Taleb’s statement is that estimating the probabilities of seeing certain values in the future necessarily requires some model, even if an implicit one. Without a model you cannot do the “computation of the odds”, unless you are happy with the conclusion that the probability of seeing a value you’ve never seen before is zero.
Taleb’s criticism of the default assumption of normality in much of financial analysis is well-founded. But when he starts to rail against models and assumptions in general, he’s being silly.
So, this.
Well, yeah, sure. Yvain wrote it up nicely, but the main point—that what the model says and how much do you trust the model itself are quite different things—is not complicated.
To get back to Taleb, he is correct in pointing out that estimating what the tails of an empirical distribution look like is very hard because you don’t see a lot of (or, sometimes, any) data from these tails. But if you need an estimate you need an estimate and saying “no model is good enough” isn’t very useful.
But surely Taleb isn’t saying “no model is good enough.” He explicitly advocates greater care in model-building and greater awareness of the risks of error, not people throwing up their hands and giving up. He says at the end:
Actually, yes, he is. He is not terribly consistent, but when he goes into his “philosopher” mode he rants against all models.
In fact, his trademark concept of a black swan is precisely what no model can predict.
Maybe it isn’t the clearest way of describing it, but it seems that by “computation of odds” he means using at least some observation of frequencies, and is contrasting this with computing the probability of events for which there have as yet been no occurrences, so no observation of frequency has been possible.
No two real world events are exactly identical. You always need some model to generalize and say the ones you observed are like the ones you predict in some relevant way to reuse the observed frequency in your prediction. Without a model all you can say is that if the circumstances were to repeat exactly, then so would the outcome. And that just isn’t very useful.
Hmm. But, if you multiply “once in every ten thousand years” by all the different kinds of things that could be said to happen once every ten thousand years, don’t you get something closer to “many times a day”?
The computation is not relevant, because when you make a prediction that, say, some excursion in the stock market will happen only once in ten thousand years, you are making a prediction about that specific thing, not ten thousand things. It will be a thing you have never seen, because if you had seen it happen, you could not claim it would only happen once in ten thousand years—the observation would be a refutation of that claim. Since you have not seen it, you are deriving it from a theory, and moreover a theory applied at an extreme it has never been tested at. For such a prediction to be reliable, you need to know that your theory actually grasps the basic mechanism of the phenomenon, so that the observations that you have been able to make justify placing confidence in its extremes. This is a very high bar to reach. Here are a few examples of theories where extremes turned out to differ from reality:
Newtonian gravity --> precession of Mercury
Ideal gas laws --> non-ideal gases
Daltonian atomic theory --> multiple isotopes of the same element
The computation is directly relevant, given that Taleb is talking about how often he sees “should only happen every N years” in newspapers and faculty news. Doesn’t he realise how many things newspapers report on? Astronomy faculties are pretty good for this too, since they watch ridiculous numbers of stars at once.
You can’t just ignore the multiple comparisons problem by saying you’re only making a prediction about “one specific thing”. What about all the other predictions about the stock market you made, that you didn’t notice because they turned out to be boringly correct?
Intuition pump: my theory says that the sequence of coinflips HHHTHHTHTT-THHTHHHTT-TTHTHTTTTH-HTTTHTHHHTT, which I just observed, should happen about once every 7 million years.
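(The arithmetic behind a figure of that order: any one preselected 40-flip sequence has probability 2^-40, so under an assumed observation rate, chosen here purely for illustration, the expected wait comes out in the millions of years.)

p_specific = 0.5 ** 40                        # ~9.1e-13 for any one preselected sequence
runs_per_day = 430                            # assumed rate of complete 40-flip runs (illustrative only)
expected_wait_years = 1 / (p_specific * runs_per_day * 365)
print(f"{expected_wait_years:.1e} years")     # ~7e6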
Intuition pump: if I choose an interesting sequence of coinflips in advance, I will never see it actually happen if the coinflips are honest. There aren’t enough interesting sequences of 40 coinflips to ever see one. Most of them look completely random, and in terms of Kolmogorov complexity, most of them are: they cannot be described much more compactly than by just writing them out.
Now, we have a good enough understanding of the dynamics of tossed coins to be fairly confident that only deliberate artifice would produce a sequence of, say, 40 consecutive heads. We do not have such an understanding of the sort of things that appear in the news as “should only happen every N years”.
Feynman on the same theme.
Every sequence of 40 coin flips is interesting. Proof: Make a 1-to-1 relation between sequences of 40 coin flips and a subset of the natural numbers, by making H=1 and T=0 and reading the sequence as a binary representation. Proceed by showing that every natural number is interesting.
If you made ten million predictions of things that happen once every ten thousand years, and about three times a day one of them happens, then it would be sensible to conclude that a given one happens about once every ten thousand years. Most people don’t do this, however. If someone managed to make ten million such predictions, they’d likely end up with a lot more than three of them happening a day.
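(The arithmetic, for the skeptical; just a sanity check of the rate quoted above.)

predictions = 10_000_000            # independent "once in ten thousand years" events
per_year = predictions / 10_000     # expected occurrences per year = 1,000
per_day = per_year / 365            # ~2.7, i.e. roughly "three times a day"
print(per_year, round(per_day, 1))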
-William Deresiewicz
Dupe
-- Bas van Fraassen, Laws and Symmetry
I think Hume’s ‘reason is the slave of the passions’ expresses the sentiment more clearly.
Rescue by saying
?
Guilded Age
Edit: misspelling of “write” corrected.
Write, not right.
Sorry if you feel this is nitpicky; it broke up my concentration.
Thanks, fixed.
In fact, the more you ponder it, the more inevitable it seems. Evolution gave us the cognition we needed, nothing more. To the degree we relied on metacognition and casual observation to inform our self-conception, the opportunistic nature of our cognitive capacities remained all but invisible, and we could think ourselves the very rule, stamped not just the physical image of God, but in His cognitive image as well. Like God, we had no back side, nothing to render us naturally contingent. We were the motionless centre of the universe: the earth, in a very real sense, was simply enjoying our ride. The fact of our natural, evolutionarily adventitious componency escaped us because the intuition of componency requires causal information, and metacognition offered us none.
-Scott Bakker
This looks like it might mean something. Can anyone dumb it down to the point where I can understand it?
The basic gist of it seems to be that we don’t have, and ought not expect evolved systems like ourselves to have, the cognitive capability to understand our cognitive limitations, at least not without information about the causes of those limitations that comes from something other than “metacognition”.
I’m not sure what Bakker means by “metacognition”; if he means this I’m pretty confident he’s just wrong.
-- Stefan Kendall
I bet he believes you can’t walk on the moon either.
Yet, no one has been on the moon in decades. Environmental circumstances cannot be ignored. You can’t go to the moon right now—maybe in some years, most likely not. “What can a twelfth-century peasant do to save themselves from annihilation? Nothing.”
That is true. There are some things that we cannot do. There are some things that we cannot do yet. There are some things that we can do, but have not.
The objection to the quote is that it seems to place “moving Mt. Fuji”, as an example of some larger class, in the first class not arbitrarily, but in spite of the evidence (the fact that the audience has come up with an answer, if indeed they have). While the surrounding article makes a good point, the quote in isolation smacks of irrational defeatism.
As an alternative to the quote, I propose the following:
Well, if we’re proposing alternatives, I would probably reduce this to “Before setting out to move a mountain, consider what moving it accomplishes and whether there are cheaper ways of accomplishing that.”
Why not?
-Tyrion Lannister, Game of Thrones
I’m always eager to upvote a Game of Thrones quote, but unfortunately I don’t see the rationality insight here beyond an ordinary quid pro quo.
Tyrion is frequently put into situations where he relies on his family’s reputation for paying debts.
It’s a real-life Newcomb-like problem—specifically a case of Parfit’s Hitchhiker—illustrating the practical benefits of being seen as the sort of agent who keeps promises. It’s not an ordinary quid-pro-quo because there is, in fact, no incentive for Tyrion to keep his end of the bargain once he gets what he wants other than to be seen as the sort of person who keeps his bargain.
Think it’s a stretch?
Ahem.
Er...right. Realistic, I should have said!
We often construct such ridiculous scenarios to illustrate this sort of thing… “You’re in a desert and a selfish pseudo-psychic drives by”? Really?
I enjoyed the fact that Parfit’s Hitchhiker came up as a pop-culture reference, in a situation that arose organically.
The point of these scenarios is to make the issue as “clean” as possible, to strip away all the unnecessary embellishments which usually only cause people to fight the hypothetical.
I guess what’s inside the screenwriter’s skull is organic… :-)
But really, since the invention of writing, pretty much every writer who addressed the issue has pointed out the importance of one’s reputation for keeping promises. There are outright commands (e.g. Numbers 30:2: “If a man … swears an oath to bind himself by a pledge, he shall not break his word. He shall do according to all that proceeds out of his mouth.”) and innumerable stories and fables about good things which happen to those who keep their promises and bad things which happen to those who don’t.
I don’t disagree with what you say, but I do disagree with the connotation that things which are not original or counterintuitive are not worth pointing out.
The last time this show was quoted, it basically amounted to “try hard to win, give it everything”, which is also something that people have been saying since the beginning of writing. All quote threads are filled with things that have been said again and again in slightly different ways. Even outside of quote threads, it’s worth rephrasing things. Pretty much every Less Wrong post has been conceptually written before by someone, with a few rare exceptions.
Yes, but usually it’s a punishment or reward issued directly by the other party, or by forces of nature... not about the practical value of going out of your way to establish a reputation.
Parfit’s Hitchhiker is not a “Newcomb-like problem”. In fact, it’s not even obvious that it is actually a proper decision problem: the only decision maker is the driver and you can’t control their decision. You only get to decide if you can precommit.
Anyway, Google doesn’t turn up any reference to Parfit’s Hitchhiker independent of LessWrong. Did this dilemma really originate with Parfit, or did EY make it up?
Yes, it does. Try using syntax similar to this:
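(The link originally given here isn’t reproduced. Presumably the suggestion was something along the lines of excluding the site from the query; the following is an illustrative guess, not the commenter’s exact query:)

    "Parfit's hitchhiker" -site:lesswrong.com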
Thanks!
Google points to this as the original reference. It’s paywalled, so I cannot check.
The paper also apparently mentions Kavka’s toxin puzzle, an interesting exercise in rationality and precommitment occasionally discussed on LW.
Thanks. I’ve also found this one, which is not paywalled.
It’s in Reasons and Persons.
Thanks!
Then you’re defining it differently from the way I, and others, are.
My req’s for a Newcomblike problem:
1) Individuals who make certain decisions seem to win at higher rates than individuals who do not.
2) As far as you know, the act of decision doesn’t causally affect the likelihood of a win.
What were your reqs?
These two requirements seem inconsistent.
I’d define a decision problem to be Newcomb-like if the payoff and the agent’s mental state (preferences, beliefs, decision procedures) are not independent conditional on the agent’s decision.
Some of the problems on the list you linked are Newcomb-like, others are commitment problems, and others aren’t even decision problems.
My def. isn’t inconsistent. Those who buy computers are less likely to die of malaria,
^that’s an instance of Solomon’s problem, which is considered a Newcomb-like problem.
It’s the same as the fact that those who one-box are more likely to get more money in Newcomb’s. A third factor (socioeconomic status, CGTA allele, agent’s mental state prior to decision) accounts for the variance.
Once you condition on all the available evidence, such as the socioeconomic status, these two events become independent.
Likewise, in the Solomon’s problem, a genetic test that detects the lesion would destroy the Newcomb-like structure of the dilemma.
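To make the screening-off point concrete, here is a toy simulation in Python (all numbers invented purely for illustration): computer ownership and dying of malaria are correlated overall, but become independent once you condition on the common cause, socioeconomic status.

    import random

    random.seed(0)
    people = []
    for _ in range(100_000):
        rich = random.random() < 0.5
        owns_computer = random.random() < (0.9 if rich else 0.1)     # wealth drives ownership
        dies_of_malaria = random.random() < (0.01 if rich else 0.10)  # wealth drives risk
        people.append((rich, owns_computer, dies_of_malaria))

    def death_rate(group):
        return sum(d for _, _, d in group) / max(len(group), 1)

    owners = [p for p in people if p[1]]
    non_owners = [p for p in people if not p[1]]
    # Marginally, owners die of malaria far less often than non-owners (~2% vs ~9%).
    print(death_rate(owners), death_rate(non_owners))

    # Conditioning on the common cause screens off the correlation:
    for rich in (True, False):
        stratum = [p for p in people if p[0] == rich]
        print(rich,
              death_rate([p for p in stratum if p[1]]),
              death_rate([p for p in stratum if not p[1]]))  # roughly equal within each stratum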
AFAICT Kavka’s toxin puzzle is isomorphic to it (except that in this case the billionaire’s motives are alien).
Something about the opposite of Parfit’s hitchhiker? Developing a reputation for following through on promises one could renege on.
Jim
I’ve counted 7 from you.
I think you’ve got a half-truth there.
Memes spread horizontally aren’t selected for virulence, but memes spread generationally have at least been selected for not being utterly deadly.
I don’t think there’s any way to discriminate between crazy things your mum believes and crazy things the man on the street corner believes.
I also think the virulence of a meme complex, like the virulence of a virus, is very dependent on the context i.e. the population it is introduced to and the other memes it competes with in that population.
“what do you think you know, and how do you think you know it?” is snappy enough to be “virulent” and, I think, not too harmful to the individual host.
I wish “likely to be beneficent, providing divine authority for behaviors that parents know to be beneficial, behaviors which provide long term rewards but not short term rewards” (for some much less than literal value of “divine”) was a good description of “those who successfully reproduce”. Unfortunately it seems like “too dumb to use a condom” is a much more accurate description. Ever seen 16 and Pregnant? (Or, [pointless dig at the interlocutor removed], who reproduces more in the present-day US, white people or black people?)
http://en.wikipedia.org/wiki/Fertility_and_intelligence
From the point of view of your genes, likely to reproduce and beneficial are exactly the same thing. That’s trivially true.
Also not particularly interesting even if true: crazy beliefs that get you killed or prevent you from breeding have to spread non-parentally. They don’t have to be particularly persuasive or virulent, there just has to be some other mechanism (e.g. state control of education, military discipline, enjoyable but idiotic forms of mass entertainment) to spread them.
The prevalence of these mechanisms doesn’t even have to depend on the people spreading the memes adopting them themselves: those who promote crazy beliefs in others can have an economic motive for doing so, just as drug dealers have a motive to sell heroin rather than use it.
The welfare subsidies that make this a viable strategy are a very recent phenomenon.
More educated, healthier, wealthier people reproducing less is not a phenomenon restricted to 21st-century America.
Individual (or even social) benefit and reward do not necessarily follow from reproductive fitness; they could be utterly miserable but nonetheless have children* with the same beliefs.
*I am not certain how literally to interpret this from the quote.
-- my wife, on my Facebook timeline, although it’s clearly not original to her.
This is not a quote; it is an unrelated picture with some text we would not consider worthy of inclusion on its own merits.
Who is this “we” you speak of?
If your objection is that it’s mere text and not a quote, well, allow me to add an attribution. I do think you’re taking the whole quotes thread idea a little more seriously than it was ever intended to be taken.
No, I think that even if Russell or Jaynes had said it, we (by which I mean the Less Wrong community, or whatever that might mean) wouldn’t consider it worthy of inclusion.
I don’t want to see the quotes thread turn into the ‘put Chesterton quotes on cat pictures’ thread, let alone the ‘put sub-standard quotes on non-cat pictures’ thread.
I deny that the quote is sub-standard. If you think it is, I suggest that you’re not looking hard enough for the message.