Rationality Quotes October 2014
Another month, another rationality quotes thread. The rules are:
Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you’d like to revive an old quote from one of those sources, please do so here.
No more than 5 quotes per person per monthly thread, please.
Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
A novice asked master Banzen: “What separates the monk from the master?”
Banzen replied: “Ten thousand mistakes!”
The novice, not understanding, sought to avoid all error. An abbot observed and brought the novice to Banzen for correction.
Banzen explained: “I have made ten thousand mistakes; Suku has made ten thousand mistakes; the patriarchs of Open Source have each made ten thousand mistakes.”
Asked the novice: “What of the old monk who labors in the cubicle next to mine? Surely he has made ten thousand mistakes.”
Banzen shook his head sadly. “Ten mistakes, a thousand times each.”
The Codeless Code
Nate Soares
Now also posted to Less Wrong. (It hadn’t yet been when Luke quoted it above.)
The Courage Wolf looked long and slow at the Weasley twins. At length he spoke, “I see that you possess half of courage. That is good. Few achieve that.”
“Half?” Fred asked, too awed to be truly offended.
“Yes,” said the Wolf, “You know how to heroically defy, but you do not know how to heroically submit. How to say to another, ‘You are wiser than I; tell me what to do and I will do it. I do not need to understand; I will not cost you the time to explain.’ And there are those in your lives wiser than you, to whom you could say that.”
“But what if they’re wrong?” George said.
“If they are wrong, you die,” the Wolf said plainly, “Horribly. And for nothing. That is why it is an act of courage.”
HPMoR omake by Daniel Speyer.
Nice. Where did you find that? Either Uncle Google is failing me, or I am failing Uncle Google.
It’s a comment on one of Eliezer Yudkowsky’s Facebook posts. I got permission to post it here, as I thought it was worth posting.
It was a reply to a post on Eliezer Yudkowsky’s Facebook.
I honestly cannot see how the mere existence of people wiser than myself constitutes a valid reason to turn off my brain and obey blindly. The vast majority of all historical instances of blind obedience have ended up being Bad Ideas.
I believe this lesson is designed for crisis situations where the wiser person taking the time to explain could be detrimental. For example, a soldier believes his commander is smarter than him and possesses more information than he does. The commander orders him to do something in an emergency situation that appears stupid from his perspective, but he does it anyway, because he chooses to trust his commander’s judgement over his own.
Under normal circumstances, there is of course no reason why a subordinate shouldn’t be encouraged to ask why they’re doing something.
I’m not sure that’s the real reason a soldier, or someone in a similar position, should obey their leader. In circumstances that rely on a group of individuals behaving coherently, it is often more important that they work together than that they work in the optimal way. That is, action is coordinated by assigning one person to make the decision. Even if this person is not the smartest or best informed in the situation, the results achieved by following orders are likely to be better than by each individual doing what they personally think is best.
In less pressing situations, it is of course reasonable to talk things out amongst a team and see if anyone has a better idea. However, even then it’s common for there to be more than one good way to do something. It is usually better to let the designated leader pick an acceptable solution rather than spend a lot of time arguing about the best possible solution. And unless the chosen solution is truly awful (not just worse but actively wrong), it is usually better to go along with the leader’s designated solution than to go off in a different direction.
“It can get worse, though, can’t it?” Fred said, “Isn’t that sort of following how people wound up working for Grindelwald?”
“I am talking to you, not to those people. Have you ever come close to doing evil through excess obedience?” the Wolf asked.
“We’ve hardly ever obeyed at all,” George said.
The Wolf waited for the words to sink in.
“But not every act of courage is right,” Fred said, “Just because someone is wiser than us doesn’t seem like a reason to obey them blindly.”
“If one who is wiser than you tells you to do something you think is wrong, what do you conclude?” the Wolf asked patiently.
“That they made a mistake,” George said, as if it were obvious.
“Or?” the wolf said.
There was silence. The Wolf’s eyes bore into the twins. It was clearly prepared to wait until they found the answer or the castle collapsed.
“Or it could… conceivably… mean we’ve made… some kind of mistake,” Fred muttered at last.
“And which seems more likely?”
“Wisdom isn’t everything,” George rallied, “maybe we know something they don’t, or they got careless—”
“Good things to think about,” the Wolf interrupted, “but are you capable of thinking about them?”
“What do you mean?” Fred asked.
“Can you take seriously the idea that you might be wrong? Can you even think of it without my help?”
“We’ll try,” George said.
“There’s more options, though,” Fred thought aloud, “We don’t have to decide on our own whether we’re wrong or they are—we could talk to them. Couldn’t we?”
“Sometimes you can,” the Wolf said, “and the benefits are obvious. Can you see the costs?”
“It takes time that we sometimes don’t have,” George said.
“It could give you all away—if you’re trying to sneak past somebody and you start whispering, I mean,” Fred said.
“And it makes extra work for the leader. Overwhelming work if there are many followers,” the Wolf added.
“So it’s another tradeoff,” George said.
“Now you understand. But understanding now and in this place is easy. What is hard is to continue to understand. To make the best choice you can, when all paths may run ill, and one ill fills you with fear but another is only words to you. You have the understanding to make that choice, but do you have the courage?”
Unfortunately, the Courage Wolf’s existence proof for “people wiser than you” is nonconstructive: he has failed to give evidence that any particular person is wiser, and thus should be trusted.
How to recognize someone wiser than you is indeed left as an exercise for the reader. And, yes, there will always be uncertainty, but you handle uncertainty in tradeoffs all the time.
Are you seriously claiming the Weasley twins are the wisest characters in HPMoR?
They already listen to Dumbledore and McGonagall, they’re already wary of Quirrell, and frankly my actual wisdom rating for Harry (as opposed to raw intelligence that might eventually become wisdom with good training) is quite low.
(You know that the only statements Eliezer himself actually endorses are those made about science and those made by Godric Gryffindor, right?)
How do you figure? The more famous ones were Bad Ideas, but that’s why they were famous.
Do you have evidence to back that up? Seems to me that organisations with obedient members usually outperform those whose members question every decision; the exception being possibly those organisations that depend on their (non-leader) members being creative (e.g. software development), but those are a pretty recent development.
No, they are not a pretty recent development at all. The historical common-case is leaders taking credit for the good thinking of their underlings.
And, frankly, your underestimation of the necessary intelligent thought to run most organizations is kinda… ugh.
I agree that there are (probably a lot of) cases where creative thinking from rank-and-file members helps the organization as a whole; however my claim is that obedience also helps the organisation in other ways (coordinated action, less time spent on discussion, fewer changes of direction), and cases where the first effect is stronger than the second were rare until recently.
i.e. (content warning: speculation and simplification!) you may have had medieval construction companies/guilds where low-level workers were told to Just Obey Or Else, and when they had good ideas supervisors took credit, but it’s likely that if you had switched their organization to a more “democratic” one like (some) modern organisations, the organization as a whole would have performed less well.
I don’t have any in-depth knowledge of the history of organization, I just think that “The vast majority of all historical instances of blind obedience have ended up being Bad Ideas” is a nice-sounding slogan but not historically true.
I specifically referred to non-leader members, i.e. rank-and-file. Which is, like, the opposite of what you seem to be reading into my comment.
No, I was referring to the rank-and-file as well.
Then we should ask someone who does.
Then why did we switch, and why are our organizations more efficient in correlation with being more democratic?
More education and literacy; a more complex world (required paperwork for doing anything...); more knowledge work.
Truth of claim not in evidence.
Claim at least partially in evidence. Methinks your prior doth protest too much.
Then why haven’t worker cooperatives replaced corporations as the main economic form?
Because the correct trade-off between ability to raise expansion capital via selling stock and maintaining worker control has not yet been achieved. Most current worker coops, for instance, do not have any structure for selling nonvoting stock, so they face a lot of difficulty in raising capital to expand.
How will you recognize the “correct trade-off”?
How would a worker controlled coop expand? Would the new workers be given the same voting rights as the original workers? If so you have to ensure that the new workers have the same vision for how the coop should be run. Also, what do you do if market conditions require a contraction?
These questions are all answered in the existing literature.
What about Honda?
Mike Rowe
Yeah, see this :-)
“While there are problems with what I have proposed, they should be compared to the existing alternatives, not to abstract utopias.”
Jaron Lanier, Who Owns the Future (page number not provided by e-reader)
Huh? It would be more fair to compare proposals to other proposals, and existing things to other existing things.
Yes, compare existing proposals to existing proposals, as opposed to showing a flaw in one proposal and claiming that you have proven that it’s bad when your alternative also is less than flawless.
That’s just an argument for letting the status quo impose the Anchoring Effect on us.
It’s an argument against the Nirvana fallacy. It’s not saying that we should accept the status quo. Quite the opposite. It’s saying that we should reject the status quo as soon as we have a better alternative, rather than waiting for a perfect one.
This depends on whether you are dealing with processes subject to entropic decay (they break apart and “die” without effort-input) or entropic growth (they optimize under their own power). For the former case, the Nirvana fallacy remains a fallacy; for the latter case, you are in deep trouble if you try to go with the first “good enough” alternative rather than defining a unique best solution and then trying to hit it as closely as possible.
Maybe it should. That’s what Chesterton’s Fence is.
-John D. Cook
Except that Windows 95’s actual version number is 4.0, and Windows 98’s version number is 4.1.
It seems that Microsoft has been messing with version numbers in recent years, for some unknown (and, I would suppose, probably stupid) reason: that’s why the Xbox One follows the Xbox 360, which follows the Xbox, so that the Xbox One is actually the third Xbox, the Xbox with a 3 in its name is the second one, and the Xbox without a 1 is the first one. Isn’t it clear?
Maybe I can’t understand the logic behind this because I’m not a billionaire, but I’m inclined to think this comes from the same geniuses who thought that the design of the Windows 8 UI made sense.
The programs causing the problem are reading the version name string, not the version number.
Examples: https://searchcode.com/?q=if%28version%2Cstartswith%28%22windows+9%22%29
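To make the failure mode concrete, here is a minimal sketch of the kind of check those search results turn up (the function name is mine; the real programs vary in language and detail):

```python
# Hypothetical sketch of the fragile check the searchcode results show.
# Matching on the product-name prefix "windows 9" catches Windows 95
# and Windows 98, and would also have caught a product named "Windows 9".
def is_windows_9x(version_name: str) -> bool:
    return version_name.lower().startswith("windows 9")

assert is_windows_9x("Windows 95")
assert is_windows_9x("Windows 98")
assert is_windows_9x("Windows 9")       # the false positive "Windows 10" avoids
assert not is_windows_9x("Windows 10")  # skipping "9" sidesteps the bug
```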
But then Microsoft could just have set the new version string to “Windows9” or “Windows_9“ or “Windows-9” or “Windows.9” or “Windows nine”, etc., without messing with the official product name.
I don’t buy this was the issue.
No, this is due to their own code. A shortcut in the standard developer tools for Windows (published by Microsoft) used ‘windows 9’ as a shorthand for Windows 95 and Windows 98. This is a problem of their own making.
Microsoft got where it is, in part, by relying on the exact opposite user psychology. “What the guy is supposed to do is feel uncomfortable, and when he has bugs, suspect that the problem is DR-DOS and then go out to buy MS-DOS.”
Crikey, how does the dumb software react to running on Windows 1?
I am rather doubtful that a noticeable number of programs are actually capable of running on both Windows 1 and Windows 10.
I think the core reason is marketing. “Windows 10” sounds more revolutionary than just switching from 8 to 9.
Why not “Windows Nine”? :-)
-- Interesting Times, Terry Pratchett
Zach Weinersmith (Twitter)
Related:
Erin Brodwin Business Insider
That article about the flu “forgets” to mention a rather important fact: the effectiveness of the flu vaccine is only about 60%.
In particular, with this effectiveness there will be no herd immunity even if you vaccinate 100% of the population.
So? A 60% reduction in the chances of getting the flu is still orders of magnitude better than a 100% reduction in the chances of getting ebola. Also, herd immunity isn’t all-or-nothing. I’d expect giving everyone a 60% effective flu vaccine would still reduce the probability of getting the flu by significantly more than 60%.
I hear that herd immunity only really works when the percentage of people vaccinated is in the high 90s, but IANAD.
According to the Wikipedia page on herd immunity, it generally has to be in the 80s. But my point is that it’s somewhat of a false dichotomy. Herd immunity is a sliding scale. Someone chose an arbitrary point to say that it happens or it doesn’t happen. But there is still an effect at any size. IANAD, but I would expect a 60% reduction would still be enough for a significant amount of the disease to be prevented in the non-immune population. In fact, I wouldn’t be surprised if it was higher. If you vaccinate 90% of the population, then herd immunity can’t protect more than the remaining 10%.
You can treat herd immunity as a sliding scale, but you can treat it as a hard threshold as well.
In the hard threshold sense it means that if you infect a random individual in the immune herd, the disease does not spread. It might infect a few other people, but it will not spread throughout the entire (non-immunized) herd, it will die out locally without any need for a quarantine.
Mathematically, you need a model that describes how the disease spreads in a given population. Plug in the numbers and calculate the expected number of people infected by a sick person. If it’s greater than 1, the disease will spread; if it’s less than 1, the disease will die out locally and the herd is immune.
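A minimal sketch of that calculation, using the standard threshold formula with illustrative numbers (not taken from the parent comment):

```python
# Expected number of secondary infections once part of the population
# is vaccinated: R_eff = R0 * (1 - coverage * efficacy).
# The disease spreads if R_eff > 1 and dies out locally if R_eff < 1.
def effective_r(r0: float, coverage: float, efficacy: float) -> float:
    return r0 * (1.0 - coverage * efficacy)

# Measles-like R0: a 60%-effective vaccine cannot reach the threshold
# even at 100% coverage.
print(effective_r(r0=12.0, coverage=1.0, efficacy=0.6))  # 4.8 -> spreads

# Seasonal-flu-like R0: the same vaccine crosses the threshold easily.
print(effective_r(r0=1.2, coverage=1.0, efficacy=0.6))   # 0.48 -> dies out
```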
The spread of diseases sounds like it would be modeled quite well using Percolation Theory, although on the applications page there is mention but no explanation of epidemic spread.
The interesting thing about percolation theory is that in that model both DanielLC and Lumifer would be right: there is a hard cutoff above which there is zero* chance of spreading, and below that cutoff the chance of spreading slowly increases. So if this model is accurate there is both a hard cutoff point where the general population no longer has to worry as well as global benefits from partial vaccination (the reason for this is that people can be ordered geographically, so many people will only get a chance to infect people that were already infected. Therefore treating each new person as an independent source, as in Lumifer’s expected newly infected number of people model, will give wrong answers).
*Of course the chance is only zero within the model, the actual chance of an epidemic spread (or anything, for that matter) cannot be 0.
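For what it’s worth, the threshold behaviour is easy to see in a toy simulation (my own sketch; the grid model and all parameters are illustrative, not from the comment above):

```python
# Toy percolation-style epidemic on an n x n grid: each person is
# susceptible independently with probability p (unvaccinated, or the
# vaccine failed), and infection spreads between adjacent susceptible
# people. The fraction of runs reaching the grid edge jumps sharply
# near the site-percolation threshold (~0.5927 on the square lattice).
import random
from collections import deque

def spreads_to_edge(n: int, p: float, rng: random.Random) -> bool:
    susceptible = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    start = (n // 2, n // 2)
    susceptible[n // 2][n // 2] = True  # patient zero is infected regardless
    seen = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x in (0, n - 1) or y in (0, n - 1):
            return True  # the outbreak reached the edge of the grid
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) not in seen and susceptible[nx][ny]:
                seen.add((nx, ny))
                queue.append((nx, ny))
    return False

rng = random.Random(0)
for p in (0.50, 0.55, 0.59, 0.63, 0.70):
    runs = [spreads_to_edge(41, p, rng) for _ in range(200)]
    print(p, sum(runs) / len(runs))
```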
I think percolation theory concerns itself with a different question: is there a path from starting point to the “edge” of the graph, as the size of the graph is taken to infinity. It is easy to see that it is possible to hit infinity while infecting an arbitrarily small fraction of the population.
But there are crazy universality and duality results for random graphs, so there’s probably some way to map an epidemic model to a percolation model without losing anything important?
The main question of percolation theory, whether there exists a path from a fixed origin to the “edge” of the graph, is equivalently a statement about the size of the largest connected cluster in a random graph. This can be intuitively seen as the statement: ‘If there is no path to the edge, then the origin (and any place that you can reach from the origin, traveling along paths) must be surrounded by a non-crossable boundary’. So without such a path your origin lies in an isolated island. By the randomness of the graph this statement applies to any origin, and the speed with which the probability that a path to the edge exists decreases as the size of the graph increases is a measure (not in the technical sense) of the size of the connected component around your origin.
I am under the impression that the statements ‘(almost) everybody gets infected’ and ‘the largest connected cluster of diseased people is of the size of the total population’ are good substitutes for each other.
In something like the Erdős-Rényi random graph, I agree that there is an asymptotic equivalence between the existence of a giant component and paths from a randomly selected point being able to reach the “edge”.
On something like an n x n grid with edges just to left/right neighbors, the “edge” is reachable from any starting point, but each connected component occupies just a 1/n fraction of the vertices. As n gets large, this fraction goes to 0.
Since, at least as a reductio, the details of graph structure (and not just its edge fraction) matter, and because percolation theory doesn’t capture the time dynamics that are important in understanding epidemics, it’s probably better to start from a more appropriate model.
Maybe look at Limit theorems for a random graph epidemic model (Andersson, 1998)?
The statement about percolation is true quite generally, not just for Erdős-Rényi random graphs, but also for the square grid. Above the critical threshold, the giant component is a positive proportion of the graph, and below the critical threshold, all components are finite.
The example I’m thinking about is a non-random graph on the square grid where west/east neighbors are connected and north/south neighbors aren’t. Its density is asymptotically right at the critical threshold and could be pushed over by adding additional west/east non-neighbor edges. The connected components are neither finite nor giant.
If all EW edges exist, you’re really in a 1d situation.
Models at criticality are interesting, but are they relevant to epidemiology? They are relevant to creating a magnet because we can control the temperature and we succeed or fail while passing through the phase transition, so detail may matter. But for epidemiology, we know which direction we want to push the parameter and we just want to push it as hard as possible.
Not quite: there are costs associated with pushing the parameter. We want to know at what point we hit diminishing returns.
How do you know there is no phase transition?
And indeed the table you mention does show ranges rather than points. But even the bottoms of those ranges are far above 60%.
Retracted after reading Kyre’s comment that what applies to measles doesn’t necessarily apply to flu.
I believe this is incorrect. The required proportion of the population that needs to be immune to get a herd immunity effect depends on how infectious the pathogen is. Measles is really infectious, with an R0 (number of secondary infections caused by a typical infectious case in a fully susceptible population) of over 10, so you need 90 or 95% vaccination coverage to stop it spreading—and that is why it didn’t take much of a drop in vaccination before we saw new outbreaks.
R0 estimates for seasonal influenza are around 1.1 or 1.2. Vaccinating 100% of the population with a vaccine with 60% efficacy would give a very large herd immunity effect (toy SIR model I just ran says starting with 40% immune reduces attack rate from 35% to less than 2% for R0 1.2).
(Typo edit)
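For anyone who wants to poke at the numbers, here is a toy SIR model in the same spirit (my own sketch; Kyre’s actual model and parameters may well differ):

```python
# Toy deterministic SIR model. beta and gamma are chosen so that
# R0 = beta / gamma; all parameter values here are illustrative.
def sir_attack_rate(r0: float, immune_frac: float, gamma: float = 0.2,
                    seed: float = 1e-4, dt: float = 0.1,
                    t_max: float = 2000.0) -> float:
    beta = r0 * gamma
    s = 1.0 - immune_frac - seed  # susceptible fraction
    i = seed                      # infectious fraction
    t = 0.0
    while i > 1e-9 and t < t_max:
        new_inf = beta * s * i * dt
        s -= new_inf
        i += new_inf - gamma * i * dt
        t += dt
    # Attack rate: fraction of the whole population ever infected.
    return 1.0 - immune_frac - s

print(sir_attack_rate(r0=1.2, immune_frac=0.0))  # ~0.3: a large epidemic
print(sir_attack_rate(r0=1.2, immune_frac=0.4))  # far below 2%: it fizzles
```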
I feel the Ebola article makes a false comparison. We have highly competent disease control measures that keep influenza’s death toll bounded around the 50k order of magnitude per year. With Ebola, the curve still looks exponential rather than logistic—if the trend continues we’ll have a 6-figure bodycount by January.
A fairer comparison would be Ebola to 1918 Spanish Flu.
(Oh and that isn’t even taking into account that the officials have been feeding the media absolute horseshit about the “single patient” with Ebola)
Downvoted for mindless panic.
There are no measures to speak of to control the flu. It goes through the world every year and we just live with it because it’s rarely fatal.
The Ebola curve is not exponential in the countries where appropriate measures were taken, Nigeria and Senegal: http://www.usatoday.com/story/news/nation/2014/09/30/ebola-over-in-nigeria/16473339/ Clearly the US can do at least as well.
While Ebola might mutate to become airborne and spread like flu, and there is a real risk of that, there is little indication of it having happened. Until then the comparison with the Spanish Flu is silly. It’s not nearly as contagious.
Your linked post in the underground medic is pretty bad. The patient contracted Ebola on Sep 15, most people become contagious 8-10 days later, so the flight passengers on Sep 20 are very likely OK. There is no indication that the official story is grossly misleading. There are bound to be a few more cases showing up in the next week or so, just as there were with SARS, but with the aggressive approach taken now the odds of it spreading wide are negligible, given that Nigeria managed to contain a similar incident.
My guess is that the total number of cases with the Dallas vector will be under a dozen or so, with <40% fatalities. I guess we’ll see.
Upvoted for the firm prediction. Confidence level?
I would say 90% or so.
… And it looks like I was right, if unduly pessimistic. Total new cases: 2, total new fatalities: 0. I expected at least some of patient 0’s relatives to get infected, and I did not expect the hospital’s protection measures to be so bad. It looks like the strain they got there is not particularly infectious, which saved their asses.
The numbers of Ebola cases have not been exponential since mid-September; instead they have stayed almost constant at ~900 new cases per week since Sep. 14. This should have been clear to the WHO and researchers at least since mid-October. Still, they publicly repeated the “exponential” forecasts, based on papers using old data. Ban Ki-moon (on 2014/10/09), Chan (on 2014/10/14), and Aylward said it. The WHO still puts forward its containment plan based on 5,000-10,000 new cases in the first week of December. They haven’t corrected it yet.
According to Fukuda on 2014/10/23, at the third meeting of the International Health Regulations Emergency Committee regarding the 2014 Ebola outbreak in West Africa, held on 2014/10/22, the committee stated that there continued to be an exponential increase of cases in Guinea, Liberia, and Sierra Leone.
I’m far from an expert myself but unless, as you say, the experts are feeding us via the media “absolute horseshit” the expected number of U.S. deaths from Ebola is way below 50K.
What countermeasures is that number conditional on being taken?
What we seem to be doing but with significantly more countermeasures if the number of U.S. victims increases. Obama would suffer a massive political hit if > 1000 Americans die from Ebola and I trust that this is a sufficient condition to motivate the executive branch if things start to look like they could get out of control.
Motivation may be necessary but it’s not sufficient. The Federal Government is not exactly a shining example of competency.
Will the CDC handle Ebola like FEMA handled Katrina?
“You know, esoteric, non-intuitive truths have a certain appeal – once initiated, you’re no longer one of the rubes. Of course, the simplest and most common way of producing an esoteric truth is to just make it up.”
West Hunter
If it’s so simple… mind making one up?
To stay young requires unceasing cultivation of the ability to unlearn old falsehoods
-- Robert Heinlein (http://tmaas.blogspot.co.uk/2008/10/robert-heinlein-quotes.html)
“Put simply, the truth about all those good decisions you plan to make sometime in the future, when things are easier, is that you probably won’t make them once that future rolls around and things are tough again.”
Sendhil Mullainathan and Eldar Shafir, Scarcity, p. 215
“Nobody supposes that the knowledge that belongs to a good cook is confined to what is or may be written down in a cookery book.”—Michael Oakeshott, “Rationalism in Politics”
“What we assume to be ‘normal consciousness’ is comparatively rare, it’s like the light in the refrigerator: when you look in, there you are ON but what’s happening when you don’t look in?”
Keith Johnstone, Impro—Improvisation and the Theatre
Megan McArdle
Scott Aaronson on why quantum computers don’t speed up computations by parallelism, a popular misconception.
The misconception isn’t exactly that quantum computers speed up computations by parallelism. They kinda do. The trouble is that what they do isn’t anything so simple as “try all the possibilities and report on whichever one works”—and the real difference between that and what they can actually do is in the reporting rather than the trying.
Of course that means that useful quantum algorithms don’t look like “try all the possibilities”, but they can still be viewed as working by parallelism. For instance, Grover’s search algorithm starts off with the system in a superposition that’s symmetrical between all the possibilities, and each step changes all those amplitudes in a way that favours the one we’re looking for.
For the avoidance of doubt, I’m not in any way disagreeing with Scott Aaronson here: The naive conception of quantum computation as “just like parallel processing, but the other processors are in other universes” is too naive and leads people to terribly overoptimistic expectations of what quantum computers can do. I just think “quantum computers don’t speed up computations by parallelism” is maybe too simple in the other direction.
[EDITED to remove a spurious “not”]
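To make “changes all those amplitudes” concrete, here is a tiny numeric sketch of Grover’s iteration (my own illustration in plain Python, not taken from the comment above):

```python
# Grover's algorithm on a search space of n items, simulated directly
# on the amplitude vector: oracle phase-flip on the marked item, then
# "inversion about the mean" (the diffusion step).
import math

n = 64       # size of the search space
target = 42  # index of the marked item
amps = [1 / math.sqrt(n)] * n  # uniform superposition

iterations = round(math.pi / 4 * math.sqrt(n))  # optimal ~ (pi/4) * sqrt(n)
for _ in range(iterations):
    amps[target] = -amps[target]         # oracle: flip the marked amplitude
    mean = sum(amps) / n
    amps = [2 * mean - a for a in amps]  # diffusion: invert about the mean

print(iterations, amps[target] ** 2)  # 6 iterations, ~0.997 success probability
```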
I agree that “parallelism but in other universes” is a weird phrasing.
What happens with quantum computation is cancellation due to having negative probabilities. The closest classical analogue seems to me to be dynamic programming, not parallel programming—you have a seemingly large search space that in fact can be made to reduce into a smaller search space by e.g. cleverly caching things. In other words, this is about how the math of the search space works out.
If your parallelism relies on invoking MWI, then it’s not “real” parallelism because MWI is observationally indistinguishable from other stories where there aren’t parallel worlds.
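As a toy illustration of that dynamic-programming analogue (my own example, not from the comment above): caching collapses a seemingly exponential search space into linearly many distinct subproblems.

```python
# Naive recursion explores an exponentially large call tree; memoizing
# ("cleverly caching things") reduces it to one call per subproblem.
from functools import lru_cache

calls = 0

def fib_naive(n: int) -> int:
    global calls
    calls += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n: int) -> int:
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

fib_naive(25)
print(calls)  # 242785 calls for the naive version
print(fib_cached(25), fib_cached.cache_info().misses)  # 75025, only 26 subproblems
```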
Negative (and imaginary) amplitude. The probability is the squared norm of the amplitude and is always positive.
I just don’t think <-> I just think, or is this one of those American/British differences? Also, nice recursion in the grandparent.
No, it’s one of those right/wrong differences. I changed my mind about how to structure the sentence—from “I don’t think X is quite right” to “I think X is not quite right”—and failed to remove a word I should have removed. (I seem to be having trouble with negatives at the moment: while typing the last sentence, my fingers attempted to add “n’t” to both “should” and “have”!)
Wait, American/British? I think we live within 10 miles of one another. Admittedly, I was born in the US, but I haven’t lived there since I was about 4.
Ahh, the mysterious ‘g’. Hi there. We really should have lunch sometime!
Yup, ’tis I. (No, wait, I’m two letters of the alphabet off.)
Yes, we should. At weekday lunchtimes I’m near the Science Park; how about you?
Consulting for the engineering department at the moment, but my time’s my own, and I’m intrigued enough to put myself out. You choose place and time, and I’ll try to be there.
It may even be that we have better ways of communicating than blog comments! I am lesswrong@aspden.com, 07943 155029.
Inserting a ‘not’ where it shouldn’t be is not an American/British difference.
But is it not possible that whether it should or shouldn’t be there is a matter of the dialect of the speaker?
In general, of course it is. (I think “couldn’t care less” / “could care less” is an example, though my Inner Pedant gets very twitchy at the latter.) But I think it’s unusual to have such big differences in idiom, and I suspect they generally arise from something that was originally an outright mistake (as I think “could care less” was).
And in particular, such a twisted usage does not fall neatly across the America/Britain divide.
Especially in this particular case where it was pretty clearly an editing error.
So Data can’t set his phasor to NP-hard? :)
Philip Jose Farmer’s character, “Richard Francis Burton,” The Magic Labyrinth
-Daniel Dennett, Intuition Pumps and Other Tools for Thinking
Think he’s a bit too enthusiastic about that X-D
Making more grand mistakes in addition to my usual number doesn’t look appealing to me :-/
I think he’s implicitly restricting himself to philosophy. A “grand mistake” in philosophy has little ill effect.
Um, they’ve been known to result in up to a quarter of the world’s population living under totalitarian dictatorships.
Fair enough. Good examples: Hegel --> Marx --> Soviet Union/China. Hegel --> Husserl --> Heidegger <---> Nazism.
I don’t know the context of the quote, but going just by the text quoted it doesn’t look like this.
That’s a pretty severe put-down of philosophy :-D
I didn’t read it that way—when I read “seek our opportunities to make grand mistakes”, the things I imagine are more like travel to foreign countries, try new things you’re bad at, talk to people way outside your usual circle, etc.
Not disagreeing, but “The natural human reaction to making a mistake is embarrassment and anger (we are never angrier than when we are angry at ourselves)” is weird.
Why is the natural...anger?
Also, is that even true for everyone? I make mistakes all the time and don’t feel that, so I’m thinking he means “publicly taking a strong position and then being made to look like a fool”, which I certainly do feel. But maybe not?
If it’s not true for you then it isn’t true for everyone. But FWIW it’s somewhat true for me (though “anger” is a strong word). I get cross at how unreliable my brain is.
Scott Adams musing on what that woman in the Manhattan harassment video could do.
This actually clashes with the idea of heroic responsibility, a popular local notion. I guess it depends on what your values are.
Or what your skills are. People who are poor at soliciting the cooperation of others might begin to classify all actions which intend to change others’ behavior as “blame” and thus doomed to fail, just because trying to change others’ behavior doesn’t usually succeed for them.
What could the woman in the harassment video do? Maybe she could start an entire organization dedicated to ending harassment, and then stay in NY as a way to signal she is refusing to let the harassers win. Or if the tradeoff isn’t worth it to her personally, leave as Adams suggests. She isn’t making it Scott Adams’s problem, she’s making it the problem of anybody who actually wants it to also be their problem. That’s how cooperation works, and people can be good or bad at encouraging cooperation, in completely measurable ways. Assigning irremediable blame, or refusing to encourage change at all, are both losing solutions.
I don’t exactly see how it clashes with heroic responsibility?
“When you do a fault analysis, there’s no point in assigning fault to a part of the system you can’t change afterward, it’s like stepping off a cliff and blaming gravity.”
Because it might seem to you that you cannot change it, but if you have Eliezer’s do the impossible attitude, then maybe you can.
I can’t tell if you’re misinterpreting him or if he really meant something that stupid. The problem with “doing the impossible” is that it amounts to an injunction to use all available and potentially available resources to address the problem. Of course, it’s impossible to do this for every problem.
I don’t think anyone implied “every problem”. Only the one you think is really worth the trouble. Like FAI for Eliezer (or the AI-box toy example), or the NSA spying for Snowden. The risk, of course, is that the problem might be too hard and you fail, after potentially wasting a lot of resources, including your life.
I think I buy this line of reasoning in general, but I don’t think Adams is applying it correctly in this case. If group A is doing something that makes you unhappy because group B is rewarding them for it, then it is no more “winner behavior” to go after group B than group A: in both cases you’re trying to get others to fix your problems for you, by adding a negative incentive in one case and by removing a positive incentive in another.
I can make sense of this in a few ways: maybe Adams thinks at some level that B has agency as a group but A doesn’t. (This is, clearly, wrong.) Or maybe he thinks that you’re just more likely to convince members of B than members of A, which at least isn’t obviously wrong but still requires assumptions not in evidence.
I think taking responsibility for everything, whether or not you caused it, is exactly what heroic responsibility is about.
Apart from that, Scott gets a lot in the article wrong. In particular, Scott argues:
That’s a naive view. It’s probably wrong.
To the extent that Eliezer argues “Do the impossible”, he doesn’t mean doing things that literally have a 0% chance of success. TDT discourages doing things with a 0% chance of success. Eliezer doesn’t argue for virtue ethics, where it matters that you try regardless of whether you succeed.
Not stopping with a naive view and actually working on the problem is something that Eliezer advocates, and that’s useful in cases like this. Even if it leads to questions that are even more politically incorrect than the ones Scott is asking.
--Michael I. Jordan, Pehong Chen Distinguished Professor at the University of California, Berkeley, Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts
My greatest inspiration is a low bank balance.
Ludwig Bemelmens
A similar thought from Heinlein:
Source.
I have heard both my father and my brother, professional musicians, mention the tremendous difference between professionals and amateurs. There are their differing levels of skill, of course, but the more fundamental difference is the seriousness that a professional brings to the work. There’s nothing like having to put food on the table and a roof over your head to give yourself that seriousness and get the work done, no matter what.
SMBC on the Ultimatum Game
Specifically, the human economists.
But spherical cows of uniform density are so much easier to model.
Scientific nirvana is spherical cows floating in vacuum under a streetlight :-D
Qi at The Codeless Code
Yogi Berra, on Harder choices matter less
(I will keep doing this. I have no shame.)
“… beware of false dichotomies. Though it’s fun to reduce a complex issue to a war between two slogans, two camps, or two schools of thought, it is rarely a path to understanding. Few good ideas can be insightfully captured in a single word ending with -ism, and most of our ideas are so crude that we can make more progress by analyzing and refining them than by pitting them against each other in a winner-take-all contest.”
Steven Pinker, on page 345 of The Sense of Style.
Practically everyone is wary of false dichotomies. The trick is recognizing them. This quote doesn’t help much with that.
Practically everyone can be relied upon to go from “That’s a false dichotomy” to “Therefore, I should be wary of it.”
However, being wary of false dichotomies means thinking, “That’s a dichotomy. Therefore, the probability that it is false is sufficient to justify my thinking it through carefully and analytically.” That is not something that practically everyone can be relied upon to do in general.
I don’t think the quote significantly increases the probability someone will have that thought. I think practically everyone here already has that habit of wariness. Maybe I’m wrong, typical mind fallacy, but identifying false dichotomies has always been rather automatic for me and I thought that was true for everyone (except when other biases are involved as well).
Benjamin Disraeli.
Originally said by Thomas Nagel (I got it from Hofstadter and Dennett here)
This is a quote from memory from one of my professors in grad school:
http://arxiv.org/abs/1210.1847 , Constraints on the Universe as a Numerical Simulation
The LW software thinks the comma is part of the URL. Try escaping it with a backslash.
Also, limits of Lorentz invariance violations from the ultra-high-energy cosmic ray spectrum are much weaker if you take into account the possibility that some of them are heavier nuclei rather than protons, as various lines of evidence suggest. There are very few solid conclusions we can draw from the experimental data we have.
(This is what I am working on, BTW!)
“Information always underrepresents reality.”
Jaron Lanier, Who Owns the Future? (page number not provided by e-reader)
What does this mean?
Reality is always more complex than what you know of it.
The map is smaller than the territory? I think?
I bet there are big maps of small territories somewhere.
Physically? Maybe. information-wise? I heavily doubt it.
If the map is bigger than the territory, why not go live in the map? :-/
Physically’s easy enough, but even information-wise, I had a guide to programming the Z80 that wouldn’t have fit in the addressable memory of a Z80, let alone the processor. Will that do? If not, we should probably agree definitions before debating.
Would it have fit into less space than the set of possible programs for the Z80?
That is a great point! I am grudgingly prepared to concede that sets are smaller than their power sets.
-- Steven Kaas
Holmes: “What’s the matter? You’re not looking quite yourself. This Brixton Road affair has upset you.”
Watson: “To tell the truth, it has,” I said. “I ought to be more case-hardened after my Afghan experiences. I saw my own comrades hacked to pieces in Maiwand without losing my nerve.”
Holmes: “I can understand. There is a mystery about this which stimulates the imagination; where there is no imagination there is no horror.”
From Conan Doyle’s “A Study in Scarlet” (bold added by me for emphasis)
Chesterton’s fence is the principle that reforms should not be made until the reasoning behind the existing state of affairs is understood. The quotation is from Chesterton’s 1929 book The Thing, in the chapter entitled “The Drift from Domesticity”:
Wikipedia: Chesterton’s Fence
Prompted by this comment; curiously this appears to be lacking from rationality quotes threads despite some references to the fence around here.
I’ve seen Chesterton’s quote used or misused in ways that assume that an extant fence must have some use that is both ① still existent, and ② beneficial; and that it can only be cleared away if that use is overbalanced by some greater purpose.
But some fences were created to serve interests that no longer exist: Hadrian’s Wall, for one. The fact that someone centuries ago built a fence to keep the northern barbarians out of Roman Britain does not mean that it presently serves that purpose. Someone who observed Hadrian’s Wall without knowledge of the Roman Empire, and thus the wall’s original purpose, might correctly conclude that it serves no current military purpose to England.
For that matter, some fences exist to serve invidious purposes. To say “I don’t see the use of this” is often a euphemism for “I see the harm this does, and it does not appear to achieve any counterbalancing benefit. Indeed, its purpose appears to have always been to cause harm, and so it should be cleared away expeditiously.”
One big problem with Chesterton’s Fence is that since you have to understand the reason for something before getting rid of it, if it happens not to have had a reason, you’ll never be permitted to get rid of it.
Good point. Some properties of a system are accidental.
“We don’t know why this wall is here, but we know that it is made of gray stone. We don’t know why its builders selected gray stone. Therefore, we must never allow its color to be changed. When it needs repair we must make sure to use gray stone.”
“But gray stone is now rare in our country and must be imported at great expense from Dubiously Allied Country. Can’t we use local tan stone that is cheap?”
“Maybe gray stone suppresses zombie hordes from rising from the ground around the wall. We don’t know, so we must not change it!”
“Maybe they just used gray stone because it used to be cheap, but the local supplies are now depleted. We should use cheap stone, as the builders did, not gray stone, which was an accidental property and not a deliberate design.”
“Are you calling yourself an expert on stone economics and on zombie hordes, too!?”
“No, I’d just like to keep the wall up without spending 80% of our defense budget on importing stone from Dubiously Allied Country. I’m worried they’re using all the money we send them to build scary battleships.”
“The builders cared not for scary battleships! They cared for gray stone!”
“But it’s too expensive!”
“But zombies!”
“Superstition!”
“Irresponsible radicalism!”
“Aaargh … just because we don’t have the builders here to answer every question about their design doesn’t mean that we can’t draw our own inferences and decide when to change things that don’t make sense any more.”
“Are you suggesting that the national defense can be designed by human reason alone, without the received wisdom of tradition? That sort of thinking led to the Reign of Terror!”
That, and for certain kinds of fences, if there is an obvious benefit to taking one down, it’s better to just take it down and see what breaks, then maybe replace it if it wasn’t worth it, than to try and figure out what the fence is for without the ability to experiment.
Devil’s advocating that some things are without reason, and that that is an exception to the rule, is a fairly weak straw man.
Not having a reason is a simplification that does not hold up: incompetence, apathy, out-of-date thinking, “because grey was the factory default colour palette” (credit to fubarobfusco) are all reasons. It is a mark of expertise in your field to recognize these reasonless reasons.
Seriously, this happens all the time! Why did that guy driving beside me swerve wildly? Is he nodding off, texting, or are there children playing around that blind corner? Why did this specification call for an impossible-to-source part? Because the drafter is using European software with European part libraries in North America, or because the design has a tight tolerance and the minor differences between parts matter.
What Chesterton actually said is that he wants to know something’s use, and if you read the whole quote it’s clear from context that he really does mean what one would consider as a use in the ordinary sense. Incompetence and apathy don’t count.
“Not having a reason” is a summary; summaries by necessity gloss over details.
Right, this is indeed a misuse. The intended meaning is obviously that you ought to figure out the original reason for the fence and whether it is still valid before making changes. It’s a balance between reckless slash-and-burn and lost purposes. This is basic hygiene in, say, software development, where old undocumented code is everywhere.
Yep. On the other hand, in well-tested software you can make a branch, delete a source file you think might be unused, and see if all the binaries still build and the tests still pass. If they do, you don’t need to know the original reason for that source file existing; you’ve shown that nothing in the current build depends on it.
This is a bit of a Chinese Room example, though — even though you don’t know that the deleted file no longer served any purpose, the tests know it.
Yes, if you solve the Chesterton fence of figuring out why certain tests are in the suite to begin with. Certainly an easier task than with the actual code, but still a task. I recall removing failed (and poorly documented) unit and integration tests I myself put in a couple of years earlier without quite recalling why I thought it was a valid test case.
Unfortunately, this doesn’t work outside software. And even in software most of it isn’t well tested.
Sure it does—that’s how a lot of biological research works. Take some rats, delete a gene, or introduce a nutritional deficiency, etc. and see how the rats turn out.
I agree that the quote is vague, but I think it’s pretty clear how he intended it to be parsed: Until you understand why something was put there in the past, you shouldn’t remove it, because you don’t sufficiently understand the potential consequences.
In the Hadrian’s Wall example, while it’s true that the naive wall-removing reformer reaches a correct conclusion, they don’t have sufficient information to justify confidence in that conclusion. Yes, it’s obviously useless for military purposes in the modern day, but if that’s true, why hasn’t anyone else removed it? Until you understand the answer to that question (and yes, sometimes it’s “because they are stupid”), it would be unwise to remove the wall. And indeed, here, the answer is “it’s preserved for its historical value”, and so it should be kept.
At the risk of generalizing from fictional evidence: This line of reasoning falls apart when it turns out that the true reason for the wall is to keep Ice Zombies out of your kingdom. Chesterton would surely have seen the need be damn sure that the true purpose is to keep the wildlings out, before agreeing to reduce the defense at the wall.
Um, people generally don’t build fences to gratuitously cause harm.
That’s either trivial, or false.
It’s trivial if you define “gratuitously cause harm” such that wanting someone else to be harmed always benefits oneself either directly or by satisfying a preference, and that counts as non-gratuitous.
It’s false if you go by most modern Westerners’ standard of harm.
There was no reason to limit Jews to ghettos in the Middle Ages except to cause harm (in sense 2).
Er, this looks like a great example of not looking things up. Having everyone in a market dominant minority live in a walled part of town is great when the uneducated rabble decides it’s time to kill them all and take their things, because you can just shut the gates and man the walls. Consider the Jewish ghettoes in Morocco:
When you tell people to look things up, be sure you first looked it up correctly yourself. That link says that ghettoes were used to protect Jews in the manner you describe. It does not say that that is why ghettoes were created.
Since I lost karma for that, I’d better elaborate. Your specific quoted line shows that protection was the reason for the ghetto’s placement, given that they were going to have one. It does not say that protection was the reason for having a ghetto.
Your own link says that “Jewish ghettoes in Europe existed because Jews were viewed as alien due to their non-Christian beliefs in a Christian environment”. The only mention that is anything like what you claim is halfway down the page, has no reference, does not name the location of the ghetto, and neither 1) says whether Jews could live only there nor 2) if so, gives a reason for why they were prevented from living anywhere else.
It seems to me that we should separate the claim that the actual historical motivation of creating ghettoes was to cause harm to Jews, and the claim that there was no reason to make them besides causing harm to Jews. If there is one reason that Jews benefit from living separately from Christians or Muslims, then we can’t make the second argument.
But I don’t think we can make the first argument, because we can’t generalize across all Jewish quarters. In some cities, the rulers had to establish an exclusive zone for Jews in order to attract the Jews to move in, which suggests to me that this is a thing that Jews actively wanted. It makes sense that they would: notice that a function of many Jewish religious practices is to exclude outsiders and make it more likely for Jews to marry other Jews. Given the fact that Jews were on average wealthier than the local population and wealth played a part in how many of your grandchildren would survive to reproductive age, that’s not just raw ingroup preference. (Indeed, Jews moving from a city where a Jew-hating ruler had set up a ghetto to keep them separate might ask a Jew-loving ruler to set them up a ghetto, because they noticed all the good things that a ghetto got them and thought they were worth the costs.)
As for whether or not people voluntarily choose to segregate themselves, consider, say, Chinatowns in the US. Many might have been caused by soft (or hard) restrictions on where Asians could live, but I imagine that most residents stay in them now because they prefer living around people with the same culture, having access to a Chinese-language newspaper, and so on.
Notice what I said: to limit Jews to ghettoes. Voluntary segregation and creating Jewish areas to attract Jews does not limit Jews to ghettoes. In general, creating ghettoes to benefit Jews is not a reason to limit them to ghettoes. Furthermore, since I was using ghettoes as a counterexample, even if I had not phrased it that way voluntary segregation still wouldn’t count, because in order to have a counterexample it only need be true that some ghettoes were created to harm Jews, even if others were not.
Azathoth123 said that people generally don’t build fences to gratuitously cause harm, not that they never ever do.
The word “generally” in there is another of those things which makes a statement true and trivial at the same time. For one thing, it depends on how you count the fences (When you have a fence about not being a gay male and another about not being a lesbian, does that count as one or two fences?)
A more reasonable interpretation is to take “generally” as a qualifier for how wide the support is for the fence rather than for how common such fences are among the population of all fences—that is, there aren’t fences with wide support, the majority of whose supporters wish to cause harm. “Mandatory ghettoes” are indeed a counterexample to the statement when read that way.
The medieval allegations against Jews were so persistent and so profoundly nasty that they constitute a genre of their own; we still use the phrase “blood libel”. It seems plausible that some of the people responsible for the ghetto laws believed them.
They were entirely wrong, of course, but by the same token it may well turn out that Chesterton’s fence was put there to keep out chupacabras. That still counts as knowing the reason for it.
That falls under case 1. It is always possible to answer (given sufficient knowledge) “why did X do Y”. Y can then be called a reason, so in a trivial sense, every action is done for a reason.
Normally, “did they do it for a reason” means asking if they did it for a reason that is not just based on hatred or cognitive bias. Were blacks forced to use segregated drinking fountains for a “reason” within the meaning of Chesterton’s fence?
No, I don’t think it does. We can consider that particular cases of what we now see as harm may have been inspired by bias or ignorance or mistaken premises without thereby concluding that every case must have similar inspirations. Sometimes people really are just spiteful or sadistic. This just isn’t one of those times.
It seems clear to me, though, that Chesterton doesn’t require the fence to have originally been built for a good reason. Pure malice doesn’t strike me as a likely reason unless it’s been built up as part of an ideology (and that usually takes more than just malice), but cognitive bias does; how many times have you heard someone say “it seemed like a good idea at the time”?
Has been posted before, more than once.
It strikes me that one might simply presume the worst of whoever put up the fence. It was a farmer, for example, with a malicious desire to keep hill-walkers from enjoying themselves. I would extend the principle of Chesterton’s fence, then, to Chesterton’s farm: one should take care to assess the possible uses that it might have served for the whole institution around it as well as the motives of the man.
It has appeared before, twice. Maybe it should have a Wiki article here.
Appears every two years… when the old quotes are too far down in the search results, I guess.
Done: http://wiki.lesswrong.com/wiki/Chesterton%27s_Fence
An official Army War College publication, 1923
While reverse stupidity isn’t intelligence, learning how others rationalize failure can help us recognize our own mistakes.
Edited to reflect hydkyll’s comment.
How do you know it’s a German Army War College publication? Reasons for my doubt:
“Ellis Bata” doesn’t sound at all like a German name.
There was no War College in Germany in 1923. There were some remains of the Prussian Military Academy, but the Treaty of Versailles forbade work being done there. The academy wasn’t reactivated until 1935.
The academy in Prussia isn’t usually called “Army War College”. However, there are such academies in Japan, India and the US.
The link is from Strategy Page. I have listened to a lot of their podcasts and greatly respect them.
But the link doesn’t say it was from a German Army War College publication. It just says “In an official Army War College publication”. All hydkyll’s reasons for thinking it likely to be from another country seem strong to me.
You are right, I added “German” for clarity because I assumed it was true given the context then forgot I had done this. Sorry.
This is a common failure mode, where the risk analysis is ignored completely. Falling in love with a perfect plan happens all the time in industry. Premortem analysis was not a thing back then, and is exceedingly rare still.
The context in which the sentence stands is that around that time there was a belief that the German army had counted on being supported by other German institutions, and those institutions didn’t support the army but failed it.
This is commonly known as the stab-in-the-back myth. “Myth” because the winners of WWII wrote our history books. There’s nothing inherently irrational about that sentiment, even though it might have been wrong.
It’s not about blaming the troops. If something seems so stupid that it doesn’t make sense to you, it might be that the problem is on your own end.
I read the quote to mean that it’s silly to claim that a plan is perfect when it’s actually unworkable.
This is my interpretation, similar to a teacher saying he gave a great lecture that his students were not smart enough to understand.
Given German thought at the time I find that unlikely.
The author could have written: “We lost the war because Jews, Social Democrats and Communists backstabbed us, and not because we didn’t have a good plan to fight two sides at once.” He isn’t that direct, but it’s still the most reasonable reading for someone who writes that sentence in 1923 at a military academy in Germany.
I don’t think I said what I meant, which is that the quote is a good example of irrational thinking.
ChristianKl’s point is that this quote is a good example of coded language (aka a dogwhistle): while it looks irrational on the surface, it likely means “That those plans failed was not due to any unsoundness on the part of the plans, but rather due to the fact that we were betrayed.”
Or it could be read ironically. It would be hard for anyone to disagree with it without looking bad, allowing the writer to say what he really thought (as in Atheism Conquered).
Of note, Alfred von Schlieffen, the architect of the original deployment plan for war against France, was on record as recommending a negotiated peace in the event that the German Army fail to quickly draw the French into a decisive battle. Obviously, this recommendation was not followed. Also of note, Schlieffen’s plan was explicitly for a one-front war; the bit with the Russians was hastily tacked on by Schlieffen’s successors at the General Staff.
No plans were made for a war even one year long (although highly placed individuals had their doubts and are now widely quoted about it). No German (or other) plans which existed at the start of WW1 were relevant to the way the war ended many years later. Conversely, whatever accusations were made about betrayal in the later years of the war were clearly irrelevant to the way those plans played out in 1914 when all Germans were united behind the war effort, including Socialists.
While you’re right, this all happened after Bismarck and the pre-WWI German government had put a lot of effort into avoiding a two-front war because they did not share the General Staff’s optimism about being able to handle it. So this constitutes failing to admit losing a very high stakes bet, and does seem inherently irrational.
My impression is that the German military was never optimistic concerning winning vs England, France, and Russia. Those that claimed WWI was deliberately initiated by Germany, however, had to falsely claim that the German military was optimistic.
Is it plausible that the German politicians ignored the German military?
It’s theoretically plausible, but from my understanding of WWI, once the Russians mobilized the Germans justifiably believed that they either had to fight a two-front war or allow the Russians to get into a position that would have made it extremely easy for Russia+France to conquer Germany.
Right. The ‘Blank Check’ was the major German diplomatic screwup. Once the Austro-Hungarian Empire issued its ultimatum, they were utterly stuck.
Agreed, although further German diplomatic errors contributed to England going against them. What they should have done is offer to let England take possession of the German fleet in return for England not fighting Germany and protecting Germany’s trade routes.
Ummmmm. That seems rather drastic, and would go over like something that doesn’t go over.
Indeed. A more plausible alternative strategy for Germany would have been to drop the plan to invade Belgium, fight defensively on the western front, and concentrate its efforts against Russia from the beginning. Britain didn’t enter the war until the violation of Belgian neutrality. Admittedly, over time French diplomats might have found some other way to get Britain into the war, but Britain was at least initially unenthusiastic about getting involved, so I think Miller is on the right track in thinking Germany’s best hope was to look for ways to keep Britain out indefinitely.
Eh, with perfect hindsight, maybe. The thing about Russia is, it has often been possible to inflict vast defeats on its armies in the field; but how do you knock it out of a war? Sure, in the Great War it did happen eventually—but the Germans weren’t planning on multiple years of war that would stretch societies past their breaking point. (For that matter, in 1917 Germany was itself feeling the strain; it’s called the “Turnip Winter” for a reason.) There were vast slaughters and defeats on the Eastern Front, true; but the German armies were never anywhere near Moscow—not even after the draconian peace signed at Brest-Litovsk. The German staff presumably didn’t think there was any chance of getting a reasonably quick decision in Russia.
Do note, when a different German leader made the opposite assumption, “it is only a question of kicking in the door, and the whole rotten structure will come tumbling down”… that didn’t go so well either; and he didn’t even have a Western front to speak of. It seems to me that Germany’s “problems” in 1914 just didn’t have a military solution; I put problems in scare quotes because they did have the excellent peaceful solution of keeping your mouth shut and growing the economy. It’s not as though France was going to start anything.
Not by itself, but France was very willing to support Russian aggression against the central powers.
Simone de Beauvoir, The Ethics of Ambiguity, Part I (trans. by Bernard Frechtman).
Cf. Rationality is Systematized Winning and Rationality and Winning.
Nate Silver
Nate Silver has a chapter in his book called “Less and Less and Less Wrong”… (or something very similar).
PS. I haven’t read it, but just happened to flip through the contents once long ago...
“If we take everything into account — not only what the ancients knew, but all of what we know today that they didn’t know — then I think that we must frankly admit that we do not know. But, in admitting this, we have probably found the open channel.”
Richard Feynman, “The Value of Science,” public address at the National Academy of Sciences (Autumn 1955); published in What Do You Care What Other People Think? (1988); republished in The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman (1999), edited by Jeffrey Robbins.
I found the “open channel” metaphor obscure from just the quote, so I looked up some context. The open channel is a contrast to the blind alley of clinging to a single belief that may be wrong.
I noticed that later in the passage, he says:
This doesn’t sit well with dreams of making a superintelligent FAI that will be the last invention we ever need make, after which we will have attained the perfect life for everyone always.
Indeed, but it does agree with the argument for the importance of not getting AI wrong in a way that would chain the future.
It sits well with FAI, but poorly with assuming that FAI will instantly or automatically make everything perfect. The warning is against assuming a particular theory must be true, or a particular action must be optimal. Presumably good advice for the AI as well, at least as it is “growing up” (recursively self-improving).
Thankfully, they have ways of verifying historical facts so this [getting facts wrong] doesn’t happen too much. One of them is Bayes’ Theorem, which uses mathematical formulas to determine the probability that an event actually occurred. Ironically, the method is even useful in the case of Bayes’ Theorem itself. While most people attribute it to Thomas Bayes (1701–1761), there are a significant number who claim it was discovered independently of Bayes—and some time before him—by a Nicholas Saunderson. This gives researchers the unique opportunity to use Bayes’ Theorem to determine who came up with Bayes’ Theorem. I love science.
John Cadley, Funny You Should Say That—Toastmaster magazine
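For anyone who wants to see the joke made literal, here is a minimal sketch of that attribution calculation. Every number below is invented purely for illustration; none of them come from actual historical scholarship.

```python
# Toy application of Bayes' Theorem to the question of who originated
# Bayes' Theorem. All inputs are made-up placeholders.

def posterior_bayes(prior_bayes, p_evidence_given_bayes, p_evidence_given_saunderson):
    """P(Bayes originated it | evidence), assuming Bayes and Saunderson
    are the only two candidates."""
    prior_saunderson = 1.0 - prior_bayes
    numerator = p_evidence_given_bayes * prior_bayes
    denominator = numerator + p_evidence_given_saunderson * prior_saunderson
    return numerator / denominator

# Hypothetical inputs: a 0.75 prior for Bayes, and the likelihood of some
# piece of evidence (say, a dated manuscript) under each hypothesis.
print(posterior_bayes(0.75, 0.4, 0.7))  # ~0.632: this evidence nudges us toward Saunderson
```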
From Myst Shadow’s excellent fanfiction, Forging the Sword.
I understand the sentiment and why it’s quoted. In fanboy mode though, I think Gryffindor and Ravenclaw are reversed here. I.e. a Gryffindor might sacrifice themself, but would not sacrifice a friend or loved one. They would insist that there must be a better way, and strive to find it. In fiction (as opposed to real life) they might even be right.
The Ravenclaw is the one who does the math, and sacrifices the one to save the many, even if the one is dear to them. More realistically, the Ravenclaw is the effective altruist who sees all human life as equally valuable, and will spend their money where it can do the most good, even if that’s in a far away place and their money helps only people they will never meet. A Ravenclaw says the green children being killed by our blue soldiers are just as deserving of life as our own blue children; and a Ravenclaw will say this even when he or she personally feels far more attached to blue children. The Ravenclaw is the one who does not reject the obvious implications of clear logic, just because they are unpopular at rallies to support the brave blue soldiers.
-- Joker, The Dark Knight
Richard Feynman, Appendix F: Personal Observations on the Reliability of the Shuttle
-- Introducing the Baen Free Library, Eric Flint
(Which I can no longer find at Baen, but copies are scattered across the internet, including here)
Quite a lot, in my experience. I’ve seen so many well-paid people fired for fiddling their expenses over trivial amounts. Eric Flint, as befits a fiction author, makes a rhetorically compelling case though!
Even more people take papers or pens home from their workplace and don’t get punished for it.
Quite right, too.
Being able to take paper and pens home from the workplace to work is clearly useful and beneficial to the business. It’s plainly not worth a business’s time to track such things punctiliously unless its employees are engaging in large-scale pilfering (e.g., selling packs of printer paper) because the losses are so small. It’s plainly not worth an employee’s time to track them either for the same reason. (And similarly not worth an employee’s time worrying about whether s/he has brought papers or pens into work from home and left them there.)
The optimal policy is clearly for no one to worry about these things except in cases of large-scale pilfering.
(In large businesses it may be worth having a formal rule that just says “no taking things home from the office” and then ignoring small violations, because that makes it feasible to fight back in cases of large-scale pilfering without needing a load of lawyering over what counts as large-scale. Even then, the purpose of that rule should be to prevent serious violations and no one should feel at all guilty about not keeping track of what paper and pens are whose. I suspect the actual local optimum in this vicinity is to have such a rule and announce explicitly that no one will be looking for, or caring about, small benign violations. But that might turn out to spoil things legally in the rare cases where it matters.)
Lest I be thought self-serving, I will remark that I’m pretty sure my own net flux of Stuff is very sizeably into, not out of, work.
This post is right on the money. Transaction costs are real and often wind up being deceptively higher than you anticipate.
Including legal concerns, the local optimum is probably officially stating that response will be proportional to seriousness of the ‘theft’, with a stated possible maximum. This essentially dog-whistles that small items are free to take, without giving an explicit pass.
A better optimum might be what some tech company did (I thought Twitter, but can’t find my source) when it changed its policy on expense accounts for travel/food/etc. to “use this toward the best interests of the company”, to significant positive results. But some of the incentives there (in-house travel-agent arrangements are grotesquely inefficient) are missing here.
I’m curious: why the downvote for the parent comment? It seems obviously not deserving of a downvote.
… Oh look, someone appears to be downvoting all VAuroch’s comments. Dammit, this needs to stop.
It’s not nearly as bad as it used to be (I was one of Eugine_Nier’s many targets), but yeah, it’s frustrating.
How is this a rationality quote? I can see people thinking this is a good argument, especially if they politically agreed with the author, but it doesn’t seem to be about rationality, or to demonstrate an unusual degree of rationality.
It would definitely be a rationality quote if it went on to quote the part where Eric Flint decided to test his hypothesis by putting some of his books online, for free, and watching his sales numbers.
Does he say what the results were anywhere?
Huge success. Sales jumped up in ways that are hard to explain as anything other than the free library’s effect.
It expresses two ideas:
Reduction to incentives is such a useful hammer that it’s tempting to treat the world as homo economicus nails. Like all simplified models, that can be useful, but it can also be dangerously wrong.
It isn’t very much information to say that people have a price. The real information lies in what that price is. It may be true to say “people are dishonest”, but if you want to win, you need to specify which people and how dishonest.
Thomas W. Myers in Anatomy Trains—Page 3
“What you can do, or dream you can do, begin it! / Boldness has genius, power and magic in it.”
-- John Anster in a “very free translation” of Faust from 1835. (http://www.goethesociety.org/pages/quotescom.html)
Benjamin Disraeli.
In what units?
Choice of units does not change relative magnitudes.
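To spell the point out (my own gloss, not part of the original exchange): converting units multiplies every quantity by the same positive constant, which cancels out of any ratio, so relative magnitudes survive the conversion:

$$\frac{c\,a}{c\,b} = \frac{a}{b} \quad \text{for any conversion factor } c > 0.$$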
Quite.
George Will, writing in the Washington Post.
The quote illustrates rationality with a particular example from a political subject, which we all know can be mind-killing. For the avoidance of doubt, I would therefore note that the lesson in rationality from the quote applies equally to anyone, regardless of their politics, who is keen to censor discussions.
I disagree with the object level of this quote. Censorship can achieve multiple goals, and a lack of censorship does not necessarily imply “a regime of robust discussion.”
Examples of the first would be using the censorship itself as the action (e.g. a despot censoring religious minorities doesn’t just limit discussion, it’s an active method of subjugation), or protecting people from messages with annoying content or form (e.g. regulations on advertising).
The second is nearly a human universal, but is especially clear in propaganda situations—if we’re at war, and someone is spreading slanderous enemy propaganda, and I destroy their materials and arrest them, this is censorship. But it also increases the robustness of discussion, because they were trying to inject falsehoods into the discussion. Or for another example—sometimes you have to ban trolls from your message board.
I also dislike the implications of this quote for any discussion where it shows up. Sometimes ad hominem arguments are right. But they’re almost never productive, especially when cast in such general terms.
I wouldn’t say that it’s an ad hominem quote. I disagree with the premise—that censorship is a “default position regarding so many things” within progressivism—but I think that the link between censorship as a default position and a fear of the survivability-under-discussion of one’s own ideas is a rationally visible one. Unlike a typical example of an ad hominem attack, the undesirable behavior (fiat elimination of competing ideas as a default response) is related to the conclusion (that the individual is afraid of the effects of competing ideas). It’s oversimplified, but one can say only so much in a short quip.
Would the term “genetic argument” be better, do you think? Fewer emotional associations, certainly :P
Anyhow, what I meant to indicate is arguments of the form “Person or group X’s argument is wrong because X has trait Y.” Example: “Rossi’s claims of fusion are wrong because he’s been shown to be a fraud before.” fits this category. Rather than examining any specific argument, we are taking it “to the man.”
And I agree that these arguments can absolutely be valid. But if there is any kind of emotionally-charged disagreement, then not only is making this sort of “rhetorical move” not going to help you persuade anybody (it can be fine as a way to preach to the choir of course), but if someone presents an argument like this to you, you should give it much less credence than if you were discussing a trivial matter. I think “fallacy” can also mean a knife that people often cut themselves with.
The fact that I use knives and forks to eat my meal isn’t evidence of pessimism about my ability to eat the meal without them. It’s just evidence that I consider those tools useful.
“Dammit, Roselyn, you’ve done enough. If you keep putting it off, you could end up in a desperate place one day!” Caprice nuzzled Roselyn’s leg. “You always act as if you are trying to make up for something. If you’d just take the serum, you’d find that Celestia forgives humans. She knows humans can’t help what they are.” Caprice looked deeply into Roselyn’s eyes, and Roselyn felt that somehow Caprice was speaking from personal experience. “Just do it, Ros. Run and grab a cup and then get on the boat. It’s alright. You have my permission.”
-- Jennifer Reitz, 27 Ounces, Final Chapter.
Context: Taking ponification serum increases the expected value of one’s lifespan by 300 years, though Roselyn is averse to taking the serum, because she feels that existing in human form serves as penitence for wrongs she has committed in the past. There is a tenuous connection between Roselyn’s act of procrastinating on taking the ponification serum, and the practice of cryocrastinating in real life.
Edit: I’m sorry that nopony liked the above quote; my intent in posting it was to cheer for the sentiment that living for a long time is a good thing. I, um, guess that I did a bad job, sorry. I will leave the text of my original post unedited, so that everypony can see what I originally wrote.
Heinz von Foerster (he founded the Biological Computer Lab in 1958)